consp | 4 months ago | on: A small number of samples can poison LLMs of any s...
This is the definition of training the model on its own output. Apparently that is all OK now.
baby | 4 months ago
I mean, you're supposed to use RAG to avoid hallucinations.
MagicMoonlight | 4 months ago
Yeah, they call it “synthetic data” and wonder why their models are slop now.