It shows an amazing lack of understanding of what an LLM is, even from the people selling and implementing them. You're exactly right that they are terrible when correctness matters, but that should be obvious. If they were 100% correct, the models would have to be much larger, since they'd need to retain all of the original training data.
You can use LLMs for language understanding and for interpreting questions, but they would need access to databases containing authoritative answers, and they should refuse to answer anything for which they don't have an answer.
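A rough sketch of that pattern (the `FAQ` store, `interpret`, and `answer` names are invented for illustration, and the substring match stands in for the LLM's interpretation step; a real system would use actual retrieval):

```python
# Toy stand-in for an authoritative answer store.
FAQ = {
    "return policy": "Returns are accepted within 30 days with a receipt.",
    "support hours": "Support is available 9am-5pm, Monday to Friday.",
}

def interpret(question: str):
    # Stand-in for the LLM: map free-form text to a known topic key.
    for topic in FAQ:
        if topic in question.lower():
            return topic
    return None

def answer(question: str) -> str:
    topic = interpret(question)
    if topic is None:
        # No authoritative answer available: refuse instead of guessing.
        return "I don't have an authoritative answer to that."
    return FAQ[topic]

print(answer("What is your return policy?"))
print(answer("Who won the 1950 World Cup?"))
```

The key point is the refusal branch: the model's language ability routes the question, but every answer it gives traces back to the database, and anything outside it gets an explicit "I don't know" rather than a plausible-sounding guess.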