I think the reason AI isn't going to replace CEOs, or anyone in the C-suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand-new lawn mower. For them, replacing an executive with AI is like saying you're going to marry a broom.
They're a thin layer that will be replaced last. They're just arrogant enough to think they are the company, but ultimately the endgame is that all humans become economically insignificant compared to the automated economy.
It's interesting seeing all the ChatGPT users in this thread, knowing what we know about OpenAI. Either they don't care about what OpenAI does, don't know their reputation, or feel like their use is too insignificant to matter.
Absolutely not surprising. Just ask HN users what browser they're using and the answer will be Chrome or a Chrome clone in 99% of cases. I even got a reply once along the lines of "why do you use Firefox?". I was at a loss for words.
I also observe the exact same pattern in two different countries among experienced IT workers. They mostly don't care at all about any non-tech implications of the services they use or the employers they work for. Creepto, gambling, tax evasion, supporting monopolies, etc. - all fair game.
PS: I'm guilty of the same too, in other areas. But at least I'm self-aware about my transgressions.
GPT user here ;) And Chrome user too :-D
I can tell you straight away why I choose Chrome (or Chromium-based browsers) over Firefox. It's a very simple reason: speed.
Just run Speedometer 3.1 and observe. On my machine Chrome scores around 45 while Firefox stays stuck around 34.
On MotionMark, the difference is about 3x.
I have to use very heavy websites on a daily basis (like Salesforce) and I can literally feel the difference.
For what it’s worth, I cancelled my ChatGPT subscription, and every time I try debugging a Linux system issue, I feel sad that Claude is sooooooo confidently bad at it.
Claude is noticeably poor for my use case on this particular issue. That said, I imagine I’m not alone in refusing to continue paying OpenAI. We’re in for a wild ride.
A lot of folks here will be startup types though, and while there is the idea that you'll make it big, I think day to day people work at startups for the satisfaction.
Sam is a pathological liar. He is also trying to build a monopoly, which is never a good thing. And finally, he is pushing humanity toward a binary outcome: either there is massive unemployment because LLMs succeed too well, resulting in a crisis, or there is a crisis because LLM expectations fail to materialize.
tl;dr: If you need to pay for an LLM for work, at least don't pay the market leader.
This is sarcastically-stated but an excellent point, and an honest answer will come up with a vanishingly small list. We geeks may think we care about Important Things, but our industry cares for nothing but money and power — morality is a hindrance to the accumulation of those.
You mean absurdly high compensation for very comfortable, low-stress office work? People work to feed their families, not just because working is 'cool'.
Because of the em-dash? Unfortunately, some writing hipsters have created this "uh, actually, we were writing em-dashes first; its dramatic increase in use since LLM proliferation in the 2020s shouldn't mean we can't use it!" movement. This has led to people deliberately using em-dashes to bait others into calling them LLMs. You can tell because of the spaces around the dash, which most likely exist because they had to copy and paste it from somewhere else, as most humans (on non-Macs, at least) don't actually know how to type an em-dash otherwise.
Wow, that’s some pretty far-fetched speculation. I’ve been using the double-dash as punctuation since I started writing on a computer in the 1980s. I like that macOS joins them for me.
Consider for a moment how different your assumptions are from reality. Can you learn from that?
One example I can think of is Google + Project Maven [1], where Google was partnering with the DoD but "withdrew in 2018 after internal protests". Though they've since partnered with the DoD on other initiatives [2].
You are halfway to the correct answer. You correctly recognize that evil comes in an infinite spectrum of severity, and that many actors are evil at different levels at the same time. Now take the next step and recognize that fighting that evil also comes in many different degrees. It is not a binary choice between a clear 100% win and giving up without a fight. There are many intermediate levels of resistance between the two extremes. As a small step, for example, one can keep using and paying for LLMs while at least avoiding the worst of the vendors, which OAI objectively is today.
This is moral relativism at its finest, and just plain wrong. I'm not willing to go so far as to call Anthropic a good player, but they are surprisingly often willing to put their money where their mouth is. Obviously everything can be interpreted as a PR move as well, but we lack the context to know their true intentions. Personally, I have repeatedly pitched being a good org as a PR move; it is the easiest way to do good in a capitalist environment. The success of such a pitch relies heavily on the moral values (or lack thereof) of the other decision makers at the company.
With Project Glasswing for example, I'm impressed at how generally well thought out it is, and very much appreciate that they donate a lot of money to OSS. I would have liked them to extend the Project to smaller players as well, but power centralisation is an inherent problem of AI, not something that is unique to Anthropic.
There are varying degrees of evil though. Saying that Anthropic and OpenAI are both "evil" to the same degree is disingenuous (in my opinion) given Altman's sociopathic behavior.
The difference between Silicon Valley and Wall Street is that Wall Street knows they are lying when they justify the awful things they do in the name of enriching themselves.
If an AI company has done unethical things, do you think it is inappropriate to discuss that? Take Grok: among other things, it created sexualized images of underage women without their consent, not by accident but as a feature. Is that just something you want to ignore? In response, the people in charge merely restricted the feature to paid subscribers instead of removing it.
Do you think people who mention Grok creating CSAM have a holier-than-thou attitude? Do you not think the people who ignore that are worse?
I don't see a comment in this thread concretely discussing said unethical things.
Not sure why you felt the need to switch the topic to Grok. As for its nudification incident, it seems a bit of a stretch to say that malicious actors bypassing its safety controls was not an accident.
Initially, the image features were restricted to paying subscribers to prevent abuse by anonymous actors; this happened while they were tightening safety controls to stop that abuse.
If you're going to bring up that old topic, at least try to get the facts straight.
I switched to Grok because it's a very cut-and-dried case of an AI company having poor ethics.
To me it seems a LOT of a stretch to think that the people behind Grok believed their safety controls worked, but you can believe that if you wish. Deepfakes of non-consenting adults were trending on X all the time, and Elon even appears to have shared them himself, which is pretty bad even if they were all adults. And I'm sure you believe that they believed the AI could perfectly tell the difference between an underage person and an adult, although it seems clear they didn't test it very much.
I for one am appalled at TCP/IP because it facilitates so much unethical behavior. I of course am holier than thou because I do not ignore this and am a voice that raises awareness. I shall not be silenced!
I assume in any thread on a topic like this there is going to be inorganic activity. These companies are all fighting rather hard to gain market share in a market potentially worth trillions, with a product fully capable of producing endless, reasonably compelling content to populate an account, a website, or any other basic proof of identity one might ever want.
It's probably never been the case that a plurality of views meant anything, since online is a bubble to begin with, filtered by endless biases wherever we happen to be reading, making it an even more fringe bubble. But the advent of AI has pushed it all over the edge, to the point that perceived pluralities are completely and utterly meaningless. Somewhat depressing for one who enjoys online chat as a pastime, but it's the reality of the world now.
This issue is independent of topic or side. Astroturfing is real. For instance, you obviously don't take Amazon reviews at face value. In the past, doing such things on social media, including forums like this, was much more difficult because you needed to build an entire persona around an account to keep it from being an immediately obvious fake.
And so the cost-to-reward ratio was relatively poor, leaving it to the likes of militaries and governments to carry out influence campaigns and the like. But LLMs have completely changed the game. You can easily create an arbitrarily large number of passably believable personas and backstories, autonomously, with no real limits on scale.
This is obviously going to be abused when the stakes are sufficiently high. And in this case we're talking about a market these companies likely believe to be worth trillions of dollars. They can likely even convince themselves that what they're doing isn't immoral, in much the same way they convinced themselves that letting their software be used to kill people by the oh-so-ethical US military is perfectly fine. So why in the world wouldn't they 'inform people of the strengths of their product' at scale?
What's a good way to think about this? The billions of dollars at play do cross my mind - at the same time, I'm not a pessimist. I think my middle ground is just the usual: taking things with a grain of salt. I mean, I chose to reply to this comment in good faith that it's human to human, commenter to unpaid, unaffiliated commenter.
I hope I keep that faith. I hope our billions of neighbors on the web enable me to keep it over the coming years. I'm definitely uncertain about the future of the web, but I want to love it like I've loved it from the 1990s to today. (Guess I should volunteer w/the EFF while job hunting, and try for for-purpose jobs...)
I don’t know why you dismiss it. There is plenty of astroturfing here, bots and otherwise.
I believe the rule around here is to not assume everyone who disagrees with you or has opinions you don’t understand is a shill. Perhaps there’s a bit of that in the post you replied to, but to me seems mostly about mourning the loss of quality conversations online.
Gotta say, I agree. Not that things were ever great, but it’s really in the crapper now.
They were trying to keep the facade up until they were allowed to become a public benefit corporation. At least that's the way it seemed to me. Now they are fully mask off.
I think the problem is more with using PRIVATE repos. My letters are also private and I would be pretty pissed if the mail carrier was reading them. Why does GitHub think it has the right to do this?
I thought some of those polling numbers would be higher. Do people really think it serves a purpose for tech companies to hold ALL the wealth? It's as if people heard a bit about economics in high school and figured there was no need to think critically beyond that.