I'm not trying to pick on you or anything, but at the top of the thread you said "I mean, I can ask for obscure things with subtle nuance where I misspell words and mess up my question and it figures it out", and now you're saying "it is trivial to show that it can do things literally impossible even 5 years ago".
This leads me to believe that the issue is not that LLM skeptics refuse to see, but that you are simply unaware of what is possible without them--because that sort of fuzzy search was SOTA for information retrieval and commonplace about 15 years ago (it was one of the early accomplishments of the "big data/data science" era), long before LLMs and deepnets were the new hotness.
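To be concrete about the kind of pre-LLM fuzzy search I mean, here's a minimal sketch using nothing but Python's standard library difflib; the titles and the query are made up for illustration, and a real stack of that era would have been Lucene/Solr or Elasticsearch with n-gram and phonetic analyzers rather than this toy:

```python
# A minimal sketch of pre-LLM fuzzy matching: pure edit-distance ranking from
# Python's standard library, no neural nets involved. The document titles and
# the query below are invented for illustration.
import difflib

documents = [
    "Introduction to Information Retrieval",
    "Probabilistic Graphical Models",
    "Mining of Massive Datasets",
    "The Elements of Statistical Learning",
]

# A misspelled, garbled query still lands on the right document.
query = "intro to informaton retreival"

best = difflib.get_close_matches(
    query.lower(),
    [d.lower() for d in documents],
    n=1,         # return only the best candidate
    cutoff=0.6,  # default similarity threshold; tolerant of typos
)
print(best)  # ['introduction to information retrieval']
```

Typo-tolerant retrieval over a real corpus took more engineering than that, obviously, but none of it needed a transformer.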
This is the problem I have with the current crop of AI tools: what works isn't new and what's new isn't good.
It's also a red flag to hear "it is trivial to show that it can do things literally impossible even 5 years ago" 10 comments deep without anybody doing exactly that...