Inference is already wasteful (compared to humans), and training is absurdly so. There's strong reason to believe we can do better, even before we've figured out how.
That's a potential outcome of any increase in training efficiency.
And we should expect such increases, based on prior experience with other AI breakthroughs: first we learn to do the thing at all, then we learn to do it efficiently.
E.g. Deep Blue in 1997 was IBM showing off a supercomputer more than it was any kind of efficient algorithm; the efficient chess engines came over the next 20-30 years.