It's probably both. If 100x is the goal, they'll have to double efficiency seven times (2^7 = 128), which seems basically plausible given how early-days it still is. I mean, they've been training on GPUs this whole time, not ASICs… Bitcoin mining hardware is more mature, and that's a dumb scam machine. Probably some of the doublings will come from software, some from hardware.
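Quick sanity check on that arithmetic (a minimal sketch; the 100x target is just the figure from upthread):

```python
import math

# Each doubling multiplies efficiency by 2, so n doublings give 2**n.
# To hit a 100x target we need 2**n >= 100, i.e. n >= log2(100) ~= 6.64.
target = 100
doublings = math.ceil(math.log2(target))  # ceil(6.64) = 7

print(f"{doublings} doublings -> {2**doublings}x")  # 7 doublings -> 128x
```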
I'm pretty skeptical of the scaling hypothesis, but I also think there's a huge amount of efficiency runway left.
I think it's more likely that the returns to further scaling will become net negative at some point, and then efficiency gains will shift from doing more with more to doing the same amount with less.
But from my perspective it's definitely an open question at this point; I may be very wrong about that.
I don't know whether the hardware can be scaled up. That's why I wrote "if we're able to scale them" at the root of this thread.