I still don't see what the big deal is. If LLMs are (or become) all they're cracked up to be, it shouldn't matter whether someone "learns to use the tools" today or tomorrow or five years from now. In fact, they should become much easier to use as they become more intelligent; you shouldn't need all these fancy prompting strategies anymore.
(Reminds me of search engines. People who really knew how to search for things honed that skill over a period of time, only for those skills to become irrelevant now that search engines are much smarter.)
I guess my point is - why must the technology insist upon itself? Evangelizing for people to use it when they don't want to just sounds cultish. If it's useful, people will eventually use it - like any other new technology. If someone doesn't find it useful yet, maybe they just don't work in a field that AI is good at yet.
I agree with what you’ve said. If you want to see what is cultish, look at the subreddit the post is talking about. It’s denialism and not healthy behavior.