My guess is the LLM was then an expert on the user guide…so as long as you asked it questions about standard, by-the-book use cases, it was perfectly well trained.
But building something from those use cases? If one of them were an iPhone podcast app, then the dev would be in possession of an iPhone podcast app.
Ideally they should have trained the LLM on a few dozen iPhone podcast apps :/
Maybe it’s due to cost? I’m curious whether Apple fine-tuned the Siri LLM to run lean in order to save money, in the same vein as OpenAI losing money on even the paid queries. It has to break even somewhere unless hydro becomes miraculously free.
I think it says more about their self-perception of their abilities in realms where they have no special expertise. So many Silicon Valley leaders weigh in on matters of civilizational impact. It seems making a few right choices suddenly turns people into experts who need to weigh in on everything else.
I don’t think I’m being hyperbolic to say this is a really dangerous trend.
Science and expertise carried these people to their current positions, and then they throw it all away for a cult of personality as if their personal whims manifested everything their engineers built.