Hacker News

I'd say the lack of a proper fundamental understanding of trained neural networks is the main cause. People throw NNs at any problem they can think of, get good results, and when they want to publish, they come up with an explanation that is more esoteric than grounded in solid theory, because the monster they generated is so inscrutable.


The stuff is what the stuff is, brother. https://youtu.be/ajGX7odA87k?t=931


Thanks! This is a great talk.


Re: inscrutable

We'll have to see where this all leads over the next few years/decades. Maybe someone will manage to combine "a proper fundamental understanding of trained neural networks" with good results. That'll (perhaps) lead to good theories that explain the good results.

If "good results" continue to outpace our understanding of what the useful NN is up to... it'll have to be studied experimentally, the way we study biology.

I.e., we might see CS theory shift from "mathematical" to "scientific" in its methods and theories.

The current trajectory seems to be heading there. There are tremendous interest and resources in NNs, and as they become more commercially important, the interest and resources dedicated to developing them increase. Industry only needs the NNs to work, not to be scrutable.

Scientists are not just going to give up, though. They'll study NNs experimentally as black boxes if that's all they have.
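To make the "black box" idea concrete, here's a minimal sketch of what experimental study without internals might look like: perturb one input at a time and measure how the output moves. Note that `opaque_model` and `sensitivity` are hypothetical names invented for this illustration; in practice the opaque function would be a trained network whose weights we never inspect.

```python
def opaque_model(x):
    # Hypothetical stand-in for a trained NN we treat as a black box.
    # We only get to call it, never to look inside.
    return 3.0 * x[0] + 0.1 * x[1] ** 2

def sensitivity(model, x, i, eps=1e-4):
    """Finite-difference estimate of how much the output changes
    when input feature i is nudged by eps -- a purely behavioral probe."""
    x_up = list(x)
    x_up[i] += eps
    return (model(x_up) - model(x)) / eps

x = [1.0, 2.0]
for i in range(len(x)):
    print(f"feature {i}: sensitivity ~ {sensitivity(opaque_model, x, i):.3f}")
```

This is the same move a biologist makes with a knockout experiment: vary one factor, hold the rest fixed, and infer structure from responses alone.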


What you're saying is a bit tautological in a way you may not intend.

What the paper describes is those research papers which aim at exactly that: giving a fundamental understanding of a trained neural network. That the papers are satisfied with "it works" stands in the way of anyone gaining this fundamental understanding.



