I agree, but to be fair there’s an open question of just how much more we can get from scaling and tricks. I’d assume there’s broad agreement that, e.g., continual learning just won’t be solved without a radical departure from the current stack. But even with all the baggage we have right now, if you believe the extrapolations, we have roughly two more GPT-4→GPT-5-sized leaps before everyone has to get out of the pool.