Hacker News: mrdependable's comments

I think the reason AI isn't going to replace CEOs, or anyone in the C suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand new lawn mower. For them, replacing an executive with AI is like saying you're going to marry a broom.

They're just a thin layer to be replaced last. They're just arrogant enough to think they're the company, but ultimately the endgame is -- all humans become economically insignificant compared to the automated economy.

https://www.theguardian.com/technology/2026/apr/13/meta-ai-m...


Care to provide any examples of what sort of content are in these conversations you had with AI?

More likely he will have a new contract with some private security firm.

Maybe he can build a moat, and a well fortified structure on the inside, with a little draw bridge to let people in.

Some poor security guards are going to end up getting gunned down.

I was thinking more like Blackwater, not standard security.

It's interesting seeing all the ChatGPT users in this thread, knowing what we know about OpenAI. Either they don't care about what OpenAI does, don't know their reputation, or feel like their use is too insignificant to matter.

If Sam Altman told me what time it was I'd check my watch and probably still not believe him.

Absolutely not surprising. Just ask HN users what browser they are using and the answer will be Chrome or a Chrome clone in 99% of cases. I even got a reply once along the lines of "why do you use Firefox?". I was at a loss for words.

I also observe the exact same pattern in two different countries among experienced IT workers. They mostly don't care at all about the non-tech implications of the services they use or the employers they work for. Creepto, gambling, tax evasion, supporting monopolies, etc. - all fair game.

PS: I'm guilty of the same too, in other areas. But at least I'm self-aware about my transgressions.


GPT user here ;) And Chrome user too :-D I can tell you straight away why I choose Chrome (or Chromium-based browsers) over Firefox. It is a very simple reason: speed. Just run Speedometer 3.1 and observe. On my machine Chrome scores around 45 while Firefox is stuck at 34. On MotionMark the difference is about 3x. I have to use very heavy websites on a daily basis (like Salesforce) and I can literally feel the difference.

For what it’s worth, I cancelled my ChatGPT subscription, and every time I try debugging a Linux system issue, I feel sad that Claude is sooooooo confidently bad at it.

Claude is noticeably poor for my use case on this particular issue. That said, I imagine I’m not alone in refusing to continue paying OpenAI. We’re in for a wild ride.


Do you know game theory? If you look at it through that lens, this doesn't sound like a good strategy.

Basically the classical prisoner's dilemma. The other devs with fewer morals can then outperform you.

It could be a valid strategy if you can increase your credibility with this relinquishment.
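The payoff structure being gestured at here can be sketched as a toy prisoner's dilemma (the numbers and labels below are purely illustrative assumptions, not anything from the thread):

```python
# Toy prisoner's-dilemma payoff matrix for two devs deciding whether to
# "hold" back (restrain on ethical grounds) or "ship" anyway.
# Payoffs are (my_payoff, their_payoff); values are illustrative only.
PAYOFFS = {
    ("hold", "hold"): (3, 3),  # both restrain: shared benefit
    ("hold", "ship"): (0, 5),  # I restrain, rival ships: rival outperforms me
    ("ship", "hold"): (5, 0),
    ("ship", "ship"): (1, 1),  # race to the bottom
}

def best_response(opponent_move):
    """Pick the move that maximizes my own payoff against a fixed opponent move."""
    return max(("hold", "ship"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# "ship" is the dominant strategy no matter what the other dev does,
# which is the commenter's point: unilateral restraint loses.
print(best_response("hold"))  # ship
print(best_response("ship"))  # ship
```

The grandparent's caveat maps onto repeated games: if relinquishment buys credibility (i.e., reputation changes future payoffs), the one-shot dominance argument no longer applies.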


Life is more than just empty status games and money hoarding at (almost) all cost. In fact, a good life lived well (TM) is anything but that.

But I write this on a mostly US forum full of FAANG employees and the like, so I don't expect strong agreement.


A lot of folks here will be startup types though, and while there is the idea that you'll make it big, I think day to day people work at startups for the satisfaction.

I am on this very forum as an explicit effort to counterbalance that very view. You have my strong agreement.

Someone who is never rational is just as bad as someone who claims there is nothing else to humans.

> Do you know game theory?

Never heard of it. The food there good?

> The other devs with fewer morals can then outperform you.

I long for the days where it’s only my moral compass holding me back.


Could you spell it out? I pay a $20/m OpenAI subscription and I haven't read the reasons why I might want to stop.


Paywall.


Sam is a pathological liar. He is also trying to build a monopoly which is never a good thing. And finally he is trying to get humans into a binary choice where either there is massive unemployment due to overly successful LLMs resulting in a crisis, or there is a crisis because of the failed LLM expectations.

tl;dr: If you need to pay for an LLM for work, at least don't pay the market leader.


What has the tech industry ever resisted on moral or reputational grounds?

This is sarcastically-stated but an excellent point, and an honest answer will come up with a vanishingly small list. We geeks may think we care about Important Things, but our industry cares for nothing but money and power — morality is a hindrance to the accumulation of those.

Worse, they exploit our curiosity and open-mindedness to build their empires for them. Which we willingly do because cool shiny shit.

Nerd-sniping as a weapon of oppression


A lot of it is simply that they are far more open than most people to the idea of curiosity as having value.

> cool shiny shit

You mean absurdly high compensation for very comfortable, low-stress office work? People work to feed their families, not just because working is 'cool'.


llm made this post?

Because of the em-dash? Unfortunately, some writing hipsters created this "uh, actually, we were writing em-dashes first; its dramatic increase in use since LLM proliferation in the 2020s shouldn't mean we can't use it!" movement. This has led to the purposeful use of em-dashes to bait people into calling them LLMs. You can tell by the spaces around it: most likely they had to copy and paste it from somewhere else, as they (like most humans not on Macs) don't actually know how to type an em-dash otherwise.

Wow, that’s some pretty far-fetched speculation. I’ve been using the double-dash as punctuation since I started writing on a computer in the 1980s. I like that macOS joins them for me.

Consider for a moment how different your assumptions are from reality. Can you learn from that?



One example I can think of is Google + Project Maven [1], where Google was partnering with the DoD but "withdrew in 2018 after internal protests". Though they've since partnered with the DoD on other initiatives [2].

[1] https://en.wikipedia.org/wiki/Project_Maven

[2] https://www.reuters.com/business/autos-transportation/us-dep...


That is probably the most notable example and in the end, those few still lost.

There was Lavabit, though that's an example of just one such event.

Edit: and to some extent Apple, at least in the past.


Maybe it's time to start.

There is a saying in India that roughly translates to: "Everyone is naked in this bathhouse."

All AI players are 50 shades of evil and are only concerned about their profits.

Instead of virtue signaling, it's best to use the tools that work best for your needs.


You are halfway to the correct answer. You correctly recognize that evil comes in an infinite spectrum of severity and that many actors are evil at different levels at the same time. Now take the next step and recognize that fighting said evil also comes in many different levels of severity. It is not a binary between a clear 100% win and doing nothing, immediately resigning without a fight. There are many intermediate levels of fight between the two extremes. For example, as a small step, one can continue using and paying LLM corpos, but at least avoid the worst of them, which OAI objectively is today.

This is moral relativism at its finest, and just plain wrong. I'm not willing to go so far as to call Anthropic a good player, but they are surprisingly often willing to put their money where their mouth is. Obviously everything can be interpreted as a PR move as well, but we just lack context to know true intentions. Personally I have repeatedly sold being a good org as a PR move, it is the easiest way to do good in a capitalist environment. The success of such a sales pitch significantly relies on the moral values (or lack thereof) of the other decision makers at the company.

With Project Glasswing for example, I'm impressed at how generally well thought out it is, and very much appreciate that they donate a lot of money to OSS. I would have liked them to extend the Project to smaller players as well, but power centralisation is an inherent problem of AI, not something that is unique to Anthropic.


There are varying degrees of evil though. Saying that Anthropic and OpenAI are both "evil" to the same degree is disingenuous (in my opinion) given Altman's sociopathic behavior.

You would have to practically live in a hippie commune to avoid buying products and services from companies led by sociopaths.

I don't think people paying for AI at this moment are concerned about morals or ethics.

The difference between Silicon Valley and Wall Street is that Wall Street knows they are lying when they justify the awful things they do in the name of enriching themselves.

What's with the holier-than-thou attitude? Why do you think you're better than someone using chatgpt?

People love to roleplay as activists because it gives their life some meaning and an illusion of control.

And people love to roleplay as nihilists because it means they don't have to be responsible for anything.

If an AI company has done unethical things, do you think it is inappropriate to discuss that? Take Grok: among other things it created sexualized images of underage women without their consent, not by accident but as a feature. Is that just something you want to ignore? In response, the people in charge merely restricted the feature to paid subscribers instead of removing it.

Do you think people who mention Grok creating CSAM have a holier-than-thou attitude? Do you not think the people who ignore that are worse than other people?


I don't see a comment in this thread concretely discussing said unethical things.

Not sure why you felt the need to switch the topic to Grok. As for its nudification incident, it seems a bit far-fetched to say that malicious actors bypassing its safety controls was not an accident.

Initially, the image features were restricted to paying subscribers to prevent abuse by anonymous actors; this obviously happened while they were tightening safety controls to stop abuse.

If you're going to bring up that old topic, at least try to get the facts straight.


I switched to Grok because it's a very cut-and-dried case of an AI company having poor ethics.

To me it seems like a LOT of a stretch to think that the people behind Grok believed their safety controls worked, but you can believe that if you wish. Deepfakes of non-consenting adults were trending on X all the time, and Elon even appears to have shared them himself, which is pretty bad even if they're all adults. And I'm sure you believe that they believed the AI could tell the difference between an underage person and an adult perfectly, although it seems clear they didn't test it very much.


to grok or from?

I for one am appalled at TCP/IP because it facilitates so much unethical behavior. I of course am holier than thou because I do not ignore this and am a voice that raises awareness. I shall not be silenced!

I assume in any sort of thread on a topic like this there is going to be inorganic activity. These companies are all fighting rather hard to try to gain marketshare, potentially worth $trillions, with a product fully capable of producing endless reasonably compelling content to populate an account, a website, or any other basic proof of identity one might ever want.

It's probably never been the case that plurality of views meant anything, since online is a bubble to begin with, filtered by endless biases wherever we happen to be reading, making it an even more fringe bubble. But the advent of AI has pushed it all over the edge, to the point that perceived pluralities are completely and utterly meaningless. Somewhat depressing for one who enjoys online chat as a pastime, but it's the reality of the world now.


Yeah yeah yeah, everyone's a bot except you with all the right opinions...

This issue is independent of topic or side. Astroturfing is real. For instance you obviously don't just take Amazon reviews at face value. In the past doing such things in social media, including forums like this, was much more difficult because you need to generate an entire persona around an account to make it not an immediately obvious inorganic account.

And so the cost-to-reward ratio was relatively poor, leaving it to things like militaries and governments to carry out influence campaigns and whatnot. But LLMs have now completely changed the game. You can easily create an arbitrarily large number of passably believable personas and backstories, autonomously, with no real limitations on scale.

This is obviously going to be abused when the stakes are sufficiently high. And in this case we're talking about a market that these companies likely believe to be worth trillions of dollars. And they can likely even convince themselves that what they're doing isn't immoral pretty easily, in the same way they convinced themselves that letting their software be used to kill people by the all-so-ethical US military is perfectly cool. So why in the world wouldn't they 'inform people of the strengths of their product' on a wide scale?


hehe :)

What's a good way to think about this? Because it does cross my mind about the billions of dollars at play - at the same time, I'm not a pessimist. I think my middle ground is kind of just the usual, taking things with a grain of salt. I mean, I chose to reply to this comment in good faith it's human to human, commenter to unpaid/unaffiliated commenter.

I hope I keep that faith. I hope our billions of neighbors on the web enable me to keep that faith over the coming years. Definitely uncertain about the future of the web but want to love it like I've loved it 1990s-today. (Guess I should volunteer w/the EFF while job hunting, try for for-purpose jobs...)


I don’t know why you dismiss it. There is plenty of astroturfing here, bots and otherwise.

I believe the rule around here is to not assume everyone who disagrees with you or has opinions you don’t understand is a shill. Perhaps there’s a bit of that in the post you replied to, but to me seems mostly about mourning the loss of quality conversations online.

Gotta say, I agree. Not that things were ever great, but it’s really in the crapper now.


Funny how quickly they have become like every other tech company. There is basically no hint of OpenAI the non-profit anymore.

Edit: Why did this go from their press release to a news story?


Frontier AI + tens of billions in capex was always going to end here.


Sir, I'm sorry to be the one to tell you this. But you've been in a coma since 2022 after a severe car accident.

It's now the year 2026. That dead horse has already been beaten.


They were trying to keep the facade up until they were allowed to become a public benefit corporation. At least that's the way it seemed to me. Now they are fully mask off.


I think the problem is more with using PRIVATE repos. My letters are also private and I would be pretty pissed if the mail carrier was reading them. Why does GitHub think it has the right to do this?


My guess is that we are going to see a new uber expensive video generation tool from them aimed at filmmakers in the next year.


I think we are, in fact, getting dumber.


I thought some of those polling numbers would be higher. Do people really think it serves a purpose for tech companies to hold ALL the wealth? People must have heard a bit about economics in high school and figured there was no need to think critically beyond that.


There are a lot of useful cases for OpenClaw, just like there are use cases for letting my dog drive my car. Still don’t let my dog drive though.

