sothatsit's comments

Could the proxy place further restrictions, like only replacing the placeholder with the real API key in approved HTTP headers? Then an API server would be much less likely to reflect it back.

It can, yes. (I don't know how Deno's work, but that's how ours works.)
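
Something like this, as a rough TypeScript sketch (the placeholder token, real key, and header allowlist are all made-up names for illustration, not how any particular proxy actually does it):

    // Hypothetical sketch: substitute the real key only into approved
    // headers, so it never ends up in the URL, the body, or any header
    // the upstream server might echo back.
    const PLACEHOLDER = "API_KEY_PLACEHOLDER"; // assumed placeholder token
    const REAL_KEY = "sk-real-key"; // loaded from secure storage in practice
    const APPROVED_HEADERS = new Set(["authorization", "x-api-key"]);

    function injectKey(headers: Headers): Headers {
      const out = new Headers();
      for (const [name, value] of headers) {
        if (APPROVED_HEADERS.has(name.toLowerCase())) {
          // Approved headers get the real key substituted in.
          out.set(name, value.replaceAll(PLACEHOLDER, REAL_KEY));
        } else {
          // Everywhere else the placeholder passes through unchanged.
          out.set(name, value);
        }
      }
      return out;
    }

Anything the server reflects from the body or an unapproved header then only ever contains the placeholder.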

It’s like in Anthropic’s own experiment. People who used AI to do their work for them did worse than the control group. But people who used AI to help them understand the problem, brainstorm ideas, and work on their solution did better.

The way you approach using AI matters a lot, and it is a skill that can be learned.


Could it be that ingve has submitted a lot of links, but has not made that many comments?

I believe karma comes solely from comment upvotes minus comment downvotes. Submissions don't count.

That might be true in real life ("afk"), but on HN even submissions give you karma.

Have a look at your submissions; they brought you karma. <https://news.ycombinator.com/submitted?id=exagolo>

Although nothing is crystal clear, the karma system is not 1:1 for submissions.

<https://news.ycombinator.com/item?id=29024032>

    Comment upvote: +1
    Comment downvote: -1
    Submission upvote: >0 && <1 (not documented, to prevent abuse)
    Submission downvote: not possible (only flagging is allowed)
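
As a toy sketch of those rules in TypeScript (the fractional submission weight is undocumented, so the value below is purely illustrative):

    // Illustrative only: HN does not publish the submission weight.
    const SUBMISSION_WEIGHT = 0.5; // assumed to be somewhere in (0, 1)

    function karma(
      commentUps: number,
      commentDowns: number,
      submissionUps: number,
    ): number {
      // Comments count 1:1 in both directions; submissions count at a
      // fractional rate and cannot be downvoted (only flagged).
      return commentUps - commentDowns + submissionUps * SUBMISSION_WEIGHT;
    }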

The idea of avoiding config files, and instead having your agent configure itself by modifying its own codebase, is fascinating.

My gut reaction says that I don't like it, but it is such an interesting idea to think about.


Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D: training the next, bigger models and, I assume, investing in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on subscription plans is more of an unknown.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088


We can also look at inference costs at third-party providers.

Their whole company has to be profitable, or at least not run out of money and investors. If you have no cash, you can't just point to one part of your business as profitable, given that it will quickly become hopelessly out of date once other models overtake it.

Other models will only overtake it if there is enough investor money, or enough margin from inference, for others to keep training bigger and bigger models.

We can see from the prices at third-party providers that inference is profitable enough to sustain even providers of proprietary models, who are undoubtedly paying licensing or usage fees for them, so these models won't go away.


Yeah, that’s the whole game they’re playing: compete until they can’t raise more, and then start cutting costs and introducing new revenue sources like ads.

They spend money on growth and new models. At some point that will slow, and then they’ll spend less on R&D and training. Competition means some may lose, but models will continue to be served.


> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.

Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.

As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...

From the article:

> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.

> Anyway, I’m sure these numbers are great-oh my GOD!

> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.


I believe the skills would contain the documentation. It would have been nice for them to give more information on the granularity of the skills they created, though.

This seems like an issue that will be fixed in newer model releases that are better trained to use skills.

Efficiency is easy to measure. And whatever is measured becomes the goal.

It is harder to measure craft, care, or wonder. My best proxy is emails from real people, but those are sporadic, unpredictable, and a lot harder to judge than analytics screens that update every minute.


100%. This is what I posted about on Hacker News [1] (where it got no traction) and Reddit [2] (where it led to a discussion but then got deleted by a mod).

[1] https://news.ycombinator.com/item?id=46705588

[2] https://www.reddit.com/r/ExperiencedDevs/comments/1qj03gq/wh...


People should be able to do full 3D scans of their bodies, and doctors should be able to tell them what to ignore. If they spot something abnormal, they could suggest coming back in 6 months or a year to check whether it has changed, just like with mole scans. The problems you describe only come from people overreacting to test results. We can do better.

You can do this now in the US; it's just expensive, and of little medical value.

Yep, it is working as intended, then. My point was more that “preventative MRIs cause more problems than they solve” is an annoying statement, because it does not have to be true if you get good medical advice. But saying “preventative MRIs are not worth the cost” is quite reasonable.

This is fairly uncharitable. The goal is not to trick people into reading; it is to show them why they should read. It is more about highlighting the most interesting part of your article to tell people why it is worth their time. You still have to deliver on your promises.

I feel like Gwern’s example is quite illustrative of this point. Just framing the content differently makes you more motivated to jump into it, even if you’re reading the same content as before.


I don't know. It's almost universally assumed that "making someone want to read on" is inherently good, but IMO it's not. Why is it good to be "more motivated to jump into it"? If a plain description and some context does not motivate you, it would be better to spend your time elsewhere.

I prefer reading interesting stories to textbooks. It's that simple. If all you want to read is textbook entries, then you are an outlier.
