I believe it is extremely important to disclose that the 'response leaks' you obtained did not originate from the LLMs themselves, but rather from other insecure systems, i.e. in a more conventional manner.
Just to avoid yet another case of hallucinated output getting misinterpreted.