From a modern perspective it seems obvious that simply upping the bandwidth lets you stream high-quality video, but it was never strictly about "more bigger cable". Huge leaps in several technologies were needed before you could watch video in 4K:
- 4k consumer-grade cameras
- SSDs
- video codecs
- hardware-accelerated video encoding
- large-scale internet infrastructure
- OLED displays
What I'm trying to say is that I clearly remember reading an old article about sharing mp3s on P2P networks, and the author was confident that video sharing, let alone video streaming, let alone high-quality video streaming, wouldn't happen in the foreseeable future because there were simply too many problems in the way.
If you went back in time just 10 years and told people about ChatGPT, they simply wouldn't believe you. They imagined that an AI capable of doing what current LLMs do must be insanely complex, but once technology made that step, we realized "it's actually not that complicated". Sure, AGI won't surface from simply adding more GPUs to LLMs, just as LLMs didn't emerge from adding more GPUs to "cat vs. dog" classifiers. But if technology took us from "AI can tell a cat from a dog 80% of the time" to "AI is wiping out entire industry sectors like translation and creative work while turning people into dopamine addicts en masse" within ten years, then I assume I'll see AGI within my lifetime.
There's nothing about 4K video that requires an SSD, an OLED display, or any particular video codec, and "large-scale internet infrastructure" is just another way of saying "lots of high-bandwidth links". Hardware graphics acceleration was also around long before any form of 4K video, and a dedicated video-decoding accelerator is such an obvious solution that such chips were used for early full-motion video, back before CPUs could reasonably decode it.
Your anecdote about P2P file sharing is ridiculous, and you've almost certainly misunderstood what the author was saying (or the author themselves was an idiot). The fact that there wasn't enough bandwidth or computing power to stream 4K video at consumer price points during the heyday of mp3 file sharing didn't mean that nobody knew how to do it. It would be as ridiculous as me saying today that 16K stereoscopic streaming video can't happen. Just because something is infeasible today doesn't mean it's impossible.
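To make that concrete, here's a rough back-of-the-envelope sketch (not from either post; the bitrates are my own assumptions: roughly 20 Mbps for a compressed 4K stream, 56 kbps for a dial-up modem of the mp3-sharing era, and about 1 Mbps for early consumer DSL). It only illustrates that the bottleneck was raw bandwidth, not missing know-how:

```python
# Back-of-the-envelope: why 4K streaming was infeasible (not unknown) circa 2000.
# All figures below are rough assumptions, not measurements.

STREAM_4K_MBPS = 20.0   # assumed bitrate of a modern compressed 4K stream
DIALUP_MBPS = 0.056     # 56 kbps dial-up modem
EARLY_DSL_MBPS = 1.0    # assumed early consumer DSL downstream

def realtime_factor(stream_mbps: float, link_mbps: float) -> float:
    """Seconds of download time needed per second of video."""
    return stream_mbps / link_mbps

print(f"dial-up:   {realtime_factor(STREAM_4K_MBPS, DIALUP_MBPS):.0f}x slower than real time")
print(f"early DSL: {realtime_factor(STREAM_4K_MBPS, EARLY_DSL_MBPS):.0f}x slower than real time")
# ~357x and ~20x respectively: the problem was the pipes, not the knowledge.
```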
Regarding ChatGPT, setting aside the fact that the transformer architecture ChatGPT is built on was already under active research 10 years ago, sure, breakthroughs happen. That doesn't mean you can linearly extrapolate future breakthroughs from them. That would be like claiming that if we develop faster and more powerful rockets, we will eventually be able to travel faster than light.