Scaling is certainly one of the next big challenges; the current network sizes severely limit our inference performance.
Just to clear things up: Our circuits were actually trained via backprop.
This is what allowed us to reach performance levels very close to equivalently sized but simulated SNNs (and even rather close to the accuracy of ANNs of the same size).
Fully self-learning systems are certainly one of the overarching goals of our field. Unsurprisingly, there are many challenges to be solved along the way.
> why not make an ASIC for the prettrained model
Our paper does not really touch the topic of deployment (except, perhaps, for the study on post-deployment degradation of the circuits). Model-specific ASICs, however, would likely not be an economically viable solution.
> We can do robust training on GPU already.
We certainly can! Deploying those trained models on novel, "imperfect" hardware is the challenge.
I hope I did not cause offense; for neuromorphics this is a wonderful paper, and it's important to do basic research like this! I'm just a bit jaded after 5 years of following the literature and seeing most papers sidestep what I see as the big roadblock.
We can now train sparse, quantized, robust neural networks that are already specified in terms of primitives for which ASIC macros can easily be designed. If we are going to make a new chip anyway, IP like this is the benchmark I compare against in my mind.
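To make the "quantized training" point concrete, here is a minimal sketch of quantization-aware training with a straight-through estimator on a toy linear-regression problem. This is an illustrative recipe only, not code from any of the papers discussed; the bit width, data, and `quantize` helper are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, n_bits=4):
    """Uniform symmetric quantization of weights to n_bits signed levels."""
    scale = np.max(np.abs(w)) + 1e-12
    levels = 2 ** (n_bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

# Toy regression target y = X @ w_true, fitted with a quantized forward
# pass; gradients flow to the full-precision shadow weights as if the
# quantizer were the identity (straight-through estimator).
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)
lr = 0.05
for _ in range(500):
    w_q = quantize(w)                  # forward pass uses quantized weights
    grad = X.T @ (X @ w_q - y) / len(X)  # STE: treat d(w_q)/d(w) as 1
    w -= lr * grad

# The quantized model fits the data to within quantization error.
mse = np.mean((X @ quantize(w) - y) ** 2)
print(round(mse, 3))
```

The point of the shadow full-precision weights is that small gradient updates accumulate even when they are too small to flip a quantization level on their own.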
If we want flexibility, FPGAs are being integrated with modern CPUs and will let you program precise weights if you want them, making it more feasible to do complex tasks.
So this is regarding the point about ASICs. I don't want to bash your paper, but to me this is the competition to beat, and it's why I reacted to the title given by Quanta with context that I think is important for people not familiar with the literature.
I fully believe neuromorphic or neuromorphic inspired inference engines will (continue to) have their place.
As for the deployment of robust weights to imperfect hardware, an ex-colleague of mine started this line of research when I did my internship at IBM: https://www.nature.com/articles/s41467-020-16108-9
So I meant robust in this sense: robust to deployment on real devices.
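The idea of training for robustness to device imperfections can be sketched with noise injection: perturb the weights during training so the learned solution tolerates the perturbations it will see at deployment. This is a generic toy illustration, not the method from the linked IBM paper; the noise model, data, and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classifier trained two ways: vanilla, and with Gaussian
# multiplicative weight noise injected during training (a stand-in for
# device conductance variations on analog hardware).
X = rng.normal(size=(512, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(noise_std, steps=800, lr=0.5):
    w = np.zeros(16)
    for _ in range(steps):
        # Sample a noisy copy of the weights for each training step.
        w_n = w * (1.0 + noise_std * rng.normal(size=w.shape))
        grad = X.T @ (sigmoid(X @ w_n) - y) / len(X)
        w -= lr * grad
    return w

def accuracy_under_noise(w, noise_std, trials=200):
    """Average accuracy over many simulated noisy 'deployments' of w."""
    acc = 0.0
    for _ in range(trials):
        w_n = w * (1.0 + noise_std * rng.normal(size=w.shape))
        acc += np.mean((X @ w_n > 0) == (y > 0.5))
    return acc / trials

w_clean = train(noise_std=0.0)
w_robust = train(noise_std=0.3)

# Evaluate both under 30% simulated weight noise at "deployment".
acc_clean = accuracy_under_noise(w_clean, 0.3)
acc_robust = accuracy_under_noise(w_robust, 0.3)
print(round(acc_clean, 3), round(acc_robust, 3))
```

The evaluation deliberately averages over many noise draws, since any single noisy instantiation of the weights is only one sample of the hardware you might get.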