However, it's still quite hard to get useful results from it in practice.
---------------
I personally believe that one component of intelligence is the ability to apply cognitive patterns created for a particular input to other inputs. (Very simplified example: a block of "neurons" that has learned to recognize the pattern "is hurt by" when given a subject (a group of pixels in an image) and an object (another group of pixels in the image) could be applied to another subject/object pair, for example one coming from processed audio. But if the audio processing takes 10 layers and the image processing only 5, the connection has to run backwards.)
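To make the depth mismatch concrete, here is a minimal sketch (PyTorch; names like RelationHead are hypothetical, and the shared relation module is hand-wired here, which is the easy case, whereas the text is about patterns that emerge at a fixed depth during training):

    import torch
    import torch.nn as nn

    # Hypothetical shared module that scores a relation such as
    # "subject is hurt by object" between two representations.
    class RelationHead(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

        def forward(self, subject, obj):
            return self.score(torch.cat([subject, obj], dim=-1))

    dim = 64
    image_encoder = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(5)])   # 5 layers
    audio_encoder = nn.Sequential(*[nn.Linear(dim, dim) for _ in range(10)])  # 10 layers
    relation = RelationHead(dim)  # pattern learned once, sitting at depth ~6

    img_s, img_o = torch.randn(1, dim), torch.randn(1, dim)
    aud_s, aud_o = torch.randn(1, dim), torch.randn(1, dim)

    # As an explicit DAG with weight sharing this is fine; but in a strictly
    # layer-ordered net, the edge from the audio encoder's layer 10 into a
    # module sitting at depth 6 points "backwards" through the layer order.
    s_img = relation(image_encoder(img_s), image_encoder(img_o))
    s_aud = relation(audio_encoder(aud_s), audio_encoder(aud_o))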
To do this in a state-of-the-art deep network, you need the ability to create backward connections. Backward connections imply loops, and loops break backprop (unlike loops in RNNs, which can easily be unrolled over time, AFAIK). So with the current backprop-trained feedforward model, you have to create the same pattern multiple times instead of reusing it.
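The distinction can be shown in a few lines (a sketch, assuming a toy linear cell): an RNN's loop runs over time, so it unrolls into an ordinary acyclic graph, while a backward edge within a single forward pass creates a genuine cycle with no time axis to unroll along.

    import torch
    import torch.nn as nn

    cell = nn.Linear(8 + 8, 8)  # toy recurrent cell: (input, state) -> state

    def unrolled_rnn(inputs, h0):
        # The loop is over *time*: h1 = f(x1, h0), h2 = f(x2, h1), ...
        # Unrolling it yields a plain feedforward graph, so backprop works.
        h = h0
        for x in inputs:
            h = torch.tanh(cell(torch.cat([x, h], dim=-1)))
        return h

    xs = [torch.randn(1, 8) for _ in range(3)]
    h = unrolled_rnn(xs, torch.zeros(1, 8))
    h.sum().backward()  # backprop through time: no problem

    # A backward edge *within one forward pass* (say, layer 10 feeding back
    # into layer 5 on the same input) is a cycle in the computation graph
    # itself; there is no bounded sequence along which to unroll it.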
This is why I will pay attention to backprop alternatives that allow loops, despite their (currently many) disadvantages. This and modular training are the two aspects of learning I would personally focus on.
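One family of such alternatives is the contrastive, energy-based approach of equilibrium propagation (Scellier & Bengio, 2017), which settles a loopy network to a fixed point instead of propagating through an acyclic graph. A loose toy sketch of the two-phase idea, not a faithful implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    W = rng.normal(scale=0.1, size=(n, n))
    W = (W + W.T) / 2          # symmetric weights: every connection runs both ways
    np.fill_diagonal(W, 0.0)

    def relax(s, x, y=None, beta=0.0, steps=100, lr=0.1):
        # Let the cyclic network settle toward a fixed point of its dynamics.
        for _ in range(steps):
            s = s + lr * (np.tanh(W @ s) - s)
            s[0] = x                              # input unit stays clamped
            if y is not None:
                s[-1] += lr * beta * (y - s[-1])  # weak nudge toward the target
        return s

    x, y, beta, eta = 0.8, -0.5, 0.5, 0.05
    s_free = relax(np.zeros(n), x)                     # free phase
    s_nudged = relax(s_free.copy(), x, y, beta=beta)   # weakly clamped phase
    # Contrastive update: move correlations at the nudged fixed point
    # apart from those at the free fixed point; no backward pass needed.
    W += (eta / beta) * (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free))
    np.fill_diagonal(W, 0.0)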