
It isn't wrong. Just think about how weights are updated via (mini-)batches and how tokenization works, and you'll see that LLMs can't ignore poisoning / outliers the way humans do. This would be a classic recent example (https://arxiv.org/abs/2510.07192): IMO it happens because the standard (non-robust) loss functions let a few anchor points dominate the updates.
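
To make the "anchor point" intuition concrete, here is a minimal NumPy sketch on a toy scalar regression (not an LLM; all data and names here are illustrative assumptions, not the setup from the linked paper). With a standard squared-error loss, a single poisoned sample can contribute a disproportionate share of the mini-batch gradient; a robust loss such as Huber clips the residual and so bounds that sample's influence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar regression: 63 clean samples plus one poisoned sample.
# (Hypothetical data, only to illustrate influence on a mini-batch gradient.)
x = rng.normal(size=64)
y = 2.0 * x + rng.normal(scale=0.1, size=64)
x[-1], y[-1] = 1.0, 50.0           # the poisoned "anchor" sample

w = 0.0                            # current weight estimate

def grad_squared(w, x, y):
    """Per-sample gradient of the squared error 0.5 * (w*x - y)**2 w.r.t. w."""
    return (w * x - y) * x

def grad_huber(w, x, y, delta=1.0):
    """Per-sample gradient of the Huber loss: the residual's effect is clipped at +/- delta."""
    r = w * x - y
    return np.clip(r, -delta, delta) * x

g_sq = grad_squared(w, x, y)
g_hu = grad_huber(w, x, y)

# Fraction of the total mini-batch gradient mass coming from the one poisoned sample.
print("squared loss:", abs(g_sq[-1]) / np.abs(g_sq).sum())  # a large share from a single sample
print("huber loss:  ", abs(g_hu[-1]) / np.abs(g_hu).sum())  # influence is bounded by the clipping
```

The point of the comparison is only that an unbounded per-sample gradient lets one sample steer the whole batch update, which is the property a poisoner exploits; a bounded-influence loss removes that lever at the cost of some statistical efficiency.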


