Hacker News

I think people confuse the power of the technology with the very real bubble we’re living in.

There’s no question that we’re in a bubble that will eventually pop, probably in a “dot com” bust kind of way.

But let me tell you…last month I sent several hundred million requests to AI, as a single developer, and got exactly what I needed.

Three things are happening at once in this industry: (1) executives are overpromising a literal unicorn, AGI, which is totally unnecessary for the ongoing viability of LLMs and is pumping the bubble; (2) the technology is improving and delivery costs are changing as we figure out what works and who will pay; (3) the industry’s instincts are still developing, so it’s common for people to think “AI” can do something it absolutely cannot do today.

But again…as one guy, for a few thousand dollars, I sent hundreds of millions of requests to AI that are generating a lot of value for me and my team.

Our instincts have a long way to go before we’ve collectively internalized the fact that one person can do that.



> But let me tell you…last month I sent several hundred million requests to AI, as a single developer, and got exactly what I needed

There are 2.6 million seconds in a month. You are claiming to have sent hundreds of requests per second to AI.


That's exactly what happened – I called the OpenAI API, using custom application code running on a server, a few hundred million times.

It is trivial for a server to send/receive 150 requests per second to the API.

This is what I mean by instincts...we're used to thinking of developers-pressing-keys as a fundamental bottleneck, and it still is to a point. But as soon as the tracks are laid for the AI to "work", things go from speed-of-human-thought to speed-of-light.
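The claimed throughput is mundane engineering: a single process just keeps ~150 requests in flight at once. A minimal sketch in Python with asyncio, with a stubbed coroutine standing in for the real OpenAI call (the function name and latency are placeholders, not the actual code):

```python
import asyncio

CONCURRENCY = 150  # target number of simultaneous in-flight requests


async def call_api(i: int) -> str:
    # Stand-in for a real OpenAI API call; just simulates network latency.
    await asyncio.sleep(0.001)
    return f"result-{i}"


async def run_batch(n: int) -> list[str]:
    # Semaphore caps how many requests are in flight at any moment,
    # which is how you stay under a per-second rate limit.
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded(i: int) -> str:
        async with sem:
            return await call_api(i)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(i) for i in range(n)))


results = asyncio.run(run_batch(1000))
```

At 150 requests/second sustained, a 30-day month (~2.6 million seconds) works out to roughly 390 million calls, which is consistent with the "several hundred million" figure upthread.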


A lot of people are feeding all the email and Slack messages for entire companies through AI to classify sentiment (positive, negative, neutral, etc.), or to summarize them for natural-language search using a specific dictionary. You can process each message multiple ways for all sorts of things, or classify images. There are a lot of uses for the smaller, cheaper, faster LLMs.
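The pattern for that kind of classification is usually a constrained prompt plus defensive parsing of the model's reply. A hypothetical sketch (the prompt wording and fallback choice are assumptions, not any particular product's code); the actual model call is omitted, only the prompt/parse halves are shown:

```python
LABELS = {"positive", "negative", "neutral"}


def build_prompt(message: str) -> str:
    # Ask for exactly one label so the reply is machine-parseable.
    return (
        "Classify the sentiment of this message as exactly one of "
        "positive, negative, or neutral. Reply with the label only.\n\n"
        f"Message: {message}"
    )


def parse_label(reply: str) -> str:
    # LLM replies can be noisy (extra whitespace, capitalization, a
    # trailing period); fall back to neutral on anything unexpected.
    label = reply.strip().lower().rstrip(".")
    return label if label in LABELS else "neutral"
```

Keeping the label set closed is what makes the output cheap to store and query at the volumes being discussed.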


Yeah I'm curious now.

If you have a lot of GPUs and you're doing massive text processing like spam detection for hundreds of thousands of users, sure.

But "as a single developer", "value for me and my team"... I'm confused.


I'm NDA'ed on the specifics, sorry.

In general terms, we had to call the OpenAI API X00,000,000 times for a large-scale data processing task. We ended up with about 2,000,000 records in a database, using data created, classified, and cleaned by the AI.

There were multiple steps involved, so each individual record was the result of many round trips between the AI and the server, and not all calls are 1-to-1 with a record.
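That call-to-record ratio falls out of the structure: if each record passes through several model stages, total API calls are a multiple of the record count. A toy illustration with hypothetical stage names (the real pipeline is under NDA; these stand-ins just tag the value and count calls):

```python
STAGES = ["generate", "classify", "clean"]  # hypothetical pipeline stages


def process_record(raw: str, counter: list[int]) -> str:
    # Each stage is one model round trip in the real pipeline;
    # here each "call" just wraps the value and bumps the counter.
    value = raw
    for stage in STAGES:
        counter[0] += 1
        value = f"{stage}({value})"
    return value


counter = [0]
records = [process_record(r, counter) for r in ("a", "b", "c")]
# 3 records x 3 stages -> 9 round trips total
```

Scale that multiplier to 2,000,000 records, add retries and calls that don't map 1-to-1 to a record, and hundreds of millions of requests is unremarkable.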

None of this is rocket science, and I think any average developer could pull off a similar task given enough time...but I was the only developer involved in the process.

The end product is being sold to companies who benefit from the data we produced, hence "value for me and the team."

The real point is that generative AI can, under the right circumstances, create absurd amounts of "productivity" that wouldn't have been possible otherwise.



