Hacker News | kalkin's comments

I already suspected the first comment was by an LLM, but deleted that from my reply as it didn't feel like a productive accusation. However, with "that's a fair point" as an opener, plus the sheer typing speed implied by replies, and the way that individual sentences thread together even as the larger point is incoherent, I'm now confident enough to call it.

I actually use assistive voice transcription as I am unable to type well with a keyboard.

[Edit: update]

I use assistive voice transcription because I'm unable to type well with a keyboard. But I'd point out that "you must be an AI" has become the new way to dismiss an argument without engaging with it. It's the modern equivalent of "you're just copy-pasting talking points": it lets you discard everything someone said without addressing a single word of it.

The fact that my sentences "thread together" is not evidence of anything other than coherent thinking. And speed of response says more about the tools someone uses than whether a human is behind them. Plenty of people use dictation, accessibility tools, or just happen to type fast.

^^^ This took me 30 seconds to speak aloud.


Ok, good to have that explanation. Your larger point, though, remains incoherent. Whether Anthropic saw this coming has nothing to do with the substance of the conflict here and is very much not "the real question".

Thanks. I saw everybody responding as if there might be at least a modicum of gravitas there, and thought I was suffering a stroke or had been pulled into another dimension.

What do subpoenas have to do with anything?

Where is all the weird misinformation in these comments coming from?


Because every tech company has been conducting mass surveillance under every president since George W. Bush, and despite everybody trying to stop it, nobody has been able to.

OpenAI has already said that they’ll give up whatever info the government wants if they’re issued a subpoena; they don’t have a choice.


A subpoena isn't mass surveillance.

Well, I certainly feel surveilled when I know that OpenAI will simply give up my data if asked.

If Anthropic is saying they won't, that's good!


Companies have to comply with subpoenas (unless they can beat them in court; the alternative is going to jail). Subpoenas are supposed to be targeted at individuals and require some kind of process, usually judicial, each time one is issued. Mass surveillance - the Anthropic blog post raises the possibility of using AI to classify the political loyalties of every citizen - is a different thing.

A subpoena isn't "simply asking." Subpoena literally means "under penalty" in Latin. If the company does not comply they will be held in contempt of court and someone may well go to jail.

It's not recent news that Anthropic has (had?) DoD contracts. This is a lot of words to write while seeming ignorant of basic facts about the situation.

The argument isn't that nobody knew Anthropic had DoW contracts. The argument is that there's a difference between "publicly known if you follow defense-tech procurement" and "trending on social media where Anthropic's core audience is now actively discussing it." Both can be true simultaneously.

A fact being technically available and that fact commanding widespread public attention are very different things. Anthropic's communications team understands this distinction even if you don't find it interesting. The blog post wasn't written for people who already track federal AI contracts, it was written for the much larger audience encountering this story for the first time and forming opinions about it in real time.

If the point you're making is just "I already knew this," that's fine, but it doesn't address anything about the incentive structure behind the public response.


I think Scott Alexander (of all people) got the number of the tech-right Trump defenders on this one: https://xcancel.com/slatestarcodex/status/202741423748490451...

> petite bourgeoisie clutching their pearls

> mean girl slights


It appears that when it comes to Jesse Jackson you're entirely capable of understanding how a shakedown works: https://news.ycombinator.com/item?id=47046514

Yes, I am entirely capable of doing that. Your point?

I'm providing information for other readers to evaluate your good faith, or lack thereof.

That's a nice straw man you got there. I don't mind you characterizing the negotiation however you want. That's not the debate. Call it a "shakedown" or "mafia," as someone else mentioned, or whatnot (although it appears the company that was trying to grandstand the elected US Government by dictating its own terms was Anthropic, not the other way around, but I digress). The question is: was it a breach of contract or just a tough negotiation?

Companies have gone out of business due to a big customer pulling the contract. Imagination Technologies comes to mind. This is not a rare thing in business.


I have to admit, “accept this unilateral change to the contract or we will use the full power of the US government to destroy your company” is certainly a tough negotiation stance. You got that part right.

How did you get the "destroy your company" part? If HN sentiment is any evidence, they are even more popular than before. GPUs are a constrained resource, and I am sure they are going to have enough business to saturate what they've got. I'm certain they would have just removed (and still will remove) two paragraphs from the terms had it really "destroyed their company."

> full power of the US government

Haha, I can assure you that is not even close to the full power of the US government. Ask the crypto people how the Biden admin treated them for a taste of just a little more power (still not even close to "full").


"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

For a company of Anthropic's size, this may very well be a death sentence, even if their work has nothing to do with the military supply chain. They could have just canceled the contract, but they wanted to go full Darth Vader on them to prove a point in case anyone else thought about "negotiating" "voluntarily" with the federal government.


You don't think Anthropic is going out of business any minute now, do you? This is just rhetoric. The affirmative evidence: if they really were, they would just remove the two paragraphs.

I'm curious for your understanding of why Trump won in 2024. If I'm understanding right, you think it was because American voters were rejecting Maoism ("it was called re-education"), to which you think the previous commenter likely subscribes, and which voters associated with Harris/Walz? But I suspect I'm not getting it quite right, and it would be helpful if you would spell out what you mean, rather than just relying on allusion.

(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)


[flagged]


> Biden, and then Harris/Waltz, are the kind of the ultimate expression of this left-wing, elitist decadence. Biden appointed a man who wears stilettos and dresses to work in charge of nuclear waste as the Department of Energy... Tolerance of mass border crossings was probably a more directly fatal error...

This is just totally disconnected from policy reality. Biden did not tolerate mass border crossings. (I _wish_ he'd dismantled ICE, but he very clearly did not.) A relatively minor DoE appointment going to a member of an unpopular minority both has nothing to do with policy and is the kind of thing that must necessarily be acceptable if minorities are actually going to be "treated equally under the law". This is a ludicrous basis to infer "the subservience of the political class" to transgender people.

On the other hand, Trump is a billionaire with Epstein connections and entirely unabashed about making money for his businesses and family using his government position. If this isn't "decadence", or "elitism", what meaning could the words possibly have?

"Deprogramming" might be an unfriendly word but it's hard for me to imagine how you have a functional democracy when a plurality of voters are making decisions on the basis of straightforward falsehoods, or even inversions of reality, just because "at least that is the perception". This isn't a sustainable situation, and it will end with either re-connecting these people to reality or disenfranchising them (really, them disenfranchising themselves along with the rest of us, e.g. by re-empowering someone who tried to steal an election). The former seems vastly preferable.

Speaking of unfriendly words - I also broadly have very little sympathy for a demand that people on the left speak respectfully of Trump voters given the total lack of any reciprocation. Even if it is the right way to do politics, the asymmetry between the way Democratic politicians talk about rural areas and the way Republican politicians talk about cities is another thing that's totally unsustainable.


This is a great example of a well-put-together, level-headed analysis that I still think misses some key facts about how right-wing propaganda works.

> Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the democratic party from their ideological roots in the labor movement which was always militantly against illegal immigration

Both Biden and Obama turned away more immigrants than Trump did in his first term. And Clinton was the king of denying asylum. The idea that we just had completely open borders and nothing was being done about it is a fabrication.

> Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size

If you actually pay attention to who is talking about trans people, it is the right. Liberal media may occasionally be baited into arguing about it, but the idea that it was a major Democratic platform is a perception the right crafted. Fox was talking about it 24/7 leading up to the election [1]. Musk and Trump were tweeting about it constantly. They ran political ads saying Democrats wanted to convert your kids to trans ideology. It's gotten so bad that our current president just harasses women who look kinda manly, saying they are trans.

[1] https://www.yahoo.com/news/fox-news-covers-transgender-issue...


If the Democrat leadership weren't going all-in on this ideology despite the demonstrable harms it's causing, the Republicans would have almost nothing to say about it.

As an example, replacing sex with "gender identity" in prisons policy has inflicted considerable harm on women prisoners, who have been sexually assaulted, raped and impregnated by male prisoners who were transferred to the female prison estate on the basis of their supposed "female gender identity".

Feminist groups like WoLF spoke up on the horrors of this first, and the Republicans followed when they realized they could capitalize on this politically. But really it shouldn't have happened at all.


What percentage of voters do you think want the Pentagon to institute an AI-powered domestic mass surveillance program?

> That is the best definition I've yet to read.

If this was your takeaway, read more carefully:

> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

Consciousness is neither sufficient, nor, at least conceptually, necessary, for any given level of intelligence.


This book (from a philosophy professor AFAIK unaffiliated with any AI company) makes what I find a pretty compelling case that it's correct to be uncertain today about what if anything an AI might experience: https://faculty.ucr.edu/~eschwitz/SchwitzPapers/AIConsciousn...

From the folks who think this is obviously ridiculous, I'd like to hear where Schwitzgebel is missing something obvious.


By the second sentence of the first chapter of the book, we already have a weasel-worded claim that, if you stripped out the weaseliness and stood behind it as a plain assertion, would be pretty clearly factually incorrect.

> At a broad, functional level, AI architectures are beginning to resemble the architectures many consciousness scientists associate with conscious systems.

If you can find even a single published scientist who associates "next-token prediction", which is the full extent of what LLM architecture is programmed to do, with "consciousness", be my guest. Bonus points if they aren't already well-known as a quack or sponsored by an LLM lab.

The reality is that we can confidently assert there is no consciousness because we know exactly how LLMs are programmed, and nothing in that programming is more sophisticated than token prediction. That is literally the beginning and the end of it. There is some extremely impressive math and engineering going on to do a very good job of it, but there is absolutely zero reason to believe that consciousness is merely token prediction. I wouldn't rule out the possibility of machine consciousness categorically, but LLMs are not it and are architecturally not even in the correct direction towards achieving it.


He talks pretty specifically about what he means by "the architectures many consciousness scientists associate with conscious systems" - Global Workspace theory, Higher Order theory and Integrated Information theory. This is on the second and third pages of the intro chapter.

You seem to be confusing the training task with the architecture. Next-token prediction is a task, which many architectures can do, including human brains (although we're worse at it than LLMs).

Note that some of the theories Schwitzgebel cites would, in his reading, require sensors and/or recurrence for consciousness, which a plain transformer doesn't have. But neither is hard to add in principle, and Anthropic, like its competitors, doesn't make public what architectural changes it might have made in the last few years.
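
To make the distinction concrete, here's a toy sketch (in PyTorch; TinyLSTM is a made-up example, not anything any lab actually uses). The loss function below defines the task; the model passed into it is the architecture, and you can hand it a recurrent net, a transformer, or anything else that maps tokens to logits:

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  # The task: predict token t+1 from tokens 0..t, scored by cross-entropy.
  # Nothing in this function cares what the model's architecture is.
  def next_token_loss(model, tokens):
      logits = model(tokens[:, :-1])        # (batch, seq-1, vocab)
      targets = tokens[:, 1:]               # the actual next tokens
      return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1))

  # One architecture that can do the task: a recurrent net, structurally
  # nothing like a transformer.
  class TinyLSTM(nn.Module):
      def __init__(self, vocab=1000, dim=64):
          super().__init__()
          self.emb = nn.Embedding(vocab, dim)
          self.rnn = nn.LSTM(dim, dim, batch_first=True)
          self.out = nn.Linear(dim, vocab)

      def forward(self, x):
          h, _ = self.rnn(self.emb(x))
          return self.out(h)

  tokens = torch.randint(0, 1000, (2, 16))  # fake token ids
  print(next_token_loss(TinyLSTM(), tokens))

Swap TinyLSTM for a transformer and next_token_loss doesn't change at all, which is why "it's just next-token prediction" describes the training objective, not the architecture.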


You could execute Claude by hand with printed weight matrices, a pencil, and a lot of free time - the exact same computation, just slower. So where would the "wellbeing" be? In the pencil? Speed doesn't summon ghosts. Matrix multiplications don't create qualia just because they run on GPUs instead of paper.


This is basically Searle's Chinese Room argument. It's got a respectable history (... Searle's personal ethics aside), but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail as well.

There is a section on the Chinese Room argument in the book.

(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)


That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.


Exactly what part of your brain can you point to and say, "This is it. This understands Chinese"? Your brain is every bit as much a Chinese Room as a Large Language Model. That's the flaw.

And unless you believe in a metaphysical reality to the body, then your point about substrate independence cuts for the brain as well.


The same is true of humans, and so the argument fails to demonstrate anything interesting.


> The same is true of humans,

What is? That you can run us on paper? That seems demonstrably false


If a human is ultimately made up of nothing more than particles obeying the laws of physics, it would be in principle possible to simulate one on paper. Completely impractical, but the same is true of simulating Claude by hand (presuming Anthropic doesn't have some kind of insane secret efficiency breakthrough which allows many orders of magnitude fewer flops to run Claude than other models, which they're cleverly disguising by buying billions of dollars of compute they don't need).


The physics argument assumes consciousness is computable. We don't know that. Maybe it requires specific substrates, continuous processes, quantum effects that aren't classically simulable. We genuinely don't know. With LLMs we have certainty it's computation because we built it. With brains we have an open question.


It would be pretty arrogant, I think, though possibly classic tech-bro behavior, for Anthropic to say, "you know what, smart people who've spent their whole lives thinking and debating about this don't have any agreement on what's required for consciousness, but we're good at engineering so we can just say that some of those people are idiots and we can give their conclusions zero credence."


Why do you think you can't execute the computations of the brain?


It is ridiculous. I skimmed through it and I'm not convinced he's trying to make the point you think he is. But if he is, he's missing that we do understand at a fundamental level how today's LLMs work. There isn't a consciousness there. They're not actually complex enough. They don't actually think. It's a text input/output machine. A powerful one with a lot of resources. But it is fundamentally spicy autocomplete, no matter how magical the results seem to a philosophy professor.

The hypothetical AI you and he are talking about would need to be an order of magnitude more complex before we can even begin asking that question. Treating today's AIs like people is delusional; whether self-delusion, or outright grift, YMMV.


> But if he is, he's missing that we do understand at a fundamental level how today's LLMs work.

No we don't? We understand practically nothing of how modern frontier systems actually function (in the sense that we would not be able to recreate even the tiniest fraction of their capabilities by conventional means). Knowing how they're trained has nothing to do with understanding their internal processes.


> I'm not convinced he's trying to make the point you think he is

What point do you think he's trying to make?

(TBH, before confidently accusing people of "delusion" or "grift" I would like to have a better argument than a sequence of 4-6 word sentences which each restate my conclusion with slightly variant phrasing. But clarifying our understanding of what Schwitzgebel is arguing might be a more productive direction.)

