As long as the hallucination problem remains, I think we are going to see a significant hype bubble crash within a year or so.
Yes, it is still useful under proper guidance, but building reliable, fully automated systems on top of it doesn't seem to be within reach at present. More innovation will be required for that.
We don't even have to go to crypto. AI has had many boom/bust cycles. The term "AI Winter" dates back to the 80s!
Of course, at every cycle we get new tools. The thing is, once tools become mainstream, people stop referring to them as "AI". Take voice assistants (Amazon Echo, Cortana, Siri, etc.). They are doing things that were active areas of "AI" research not long ago. Voice recognition and text-to-speech were very hard problems. Now people use them without remembering that they were once AI.
I predict that GPT will follow the same cycle. It's way too overhyped right now (because it's impressive, just like Dragon Naturally Speaking was). But people will try to apply it to everyday scenarios and – outside of niches – they will be disappointed. Cue the crash as investments dry up.
Hopefully this time we won't have too many high profile casualties, like what happened with Lisp Machines.
I never upgraded to pro, and I've spent like $2 on credits so far this month. I could easily see them hitting $1b/yr, which to me isn't a niche market or a hype bubble.
My understanding is that the base model is pretty good at knowing whether it knows something or not; it's the human feedback training that causes it to lose that signal.
Thanks, but I didn't find any details about performance before and after the reinforcement training. I'm looking to understand more about the assertion that hallucinations are introduced by the reinforcement training.
https://arxiv.org/abs/2303.08774
The technical report has before-and-after comparisons. It's a bit worse on some tests, and they pretty explicitly mention the issue of calibration (how well the model's stated confidence on a problem tracks its actual accuracy in solving it).
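For anyone not familiar with the term, calibration is usually quantified with something like expected calibration error: bucket the model's stated confidences and check how far each bucket's average confidence is from its actual accuracy. A rough sketch with made-up numbers (nothing here is from the report):

    def expected_calibration_error(confidences, correct, n_bins=10):
        # confidences: the model's stated probability of being right per answer
        # correct: 1 if the answer was actually right, 0 otherwise
        bins = [[] for _ in range(n_bins)]
        for conf, ok in zip(confidences, correct):
            idx = min(int(conf * n_bins), n_bins - 1)
            bins[idx].append((conf, ok))
        ece = 0.0
        for bucket in bins:
            if not bucket:
                continue
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / len(confidences)) * abs(avg_conf - accuracy)
        return ece

    # Well calibrated: says 80% and is right about 80% of the time -> ECE ~ 0
    print(expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2))
    # Overconfident: says 95% but is right only half the time -> ECE ~ 0.45
    print(expected_calibration_error([0.95] * 10, [1] * 5 + [0] * 5))

The report's calibration plots make roughly this kind of comparison between the pre-trained base model and the RLHF'd model.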