A modern NVMe SSD on PCIe 4.0 can deliver up to 5 GB/s, which is only about 4 times slower. You can go faster with RAID, and I believe some enterprise-class gear gets a bit faster still at the expense of disk space. A PCIe 4.0 x4 link tops out at ~8 GB/s, so for anything faster you'll need PCIe 5.0 (soon).
RAM bandwidth scales with the number of memory channels populated, e.g. a current AMD EPYC machine can do ~220 GB/s with 16 DIMMs per the spec sheet.
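As a sanity check on that figure, peak DRAM bandwidth is just channels × transfer rate × bus width. A rough sketch, assuming an 8-channel EPYC socket populated with DDR4-3200 (both assumptions, not from the spec sheet above):

```python
# Peak theoretical DRAM bandwidth = channels * transfer rate * bytes per transfer.
# Assumed configuration: 8 memory channels, DDR4-3200, 64-bit (8-byte) channels.
channels = 8
transfers_per_s = 3200e6   # DDR4-3200 = 3200 MT/s
bytes_per_transfer = 8     # 64-bit channel width

peak_gb_s = channels * transfers_per_s * bytes_per_transfer / 1e9
print(f"~{peak_gb_s:.0f} GB/s peak per socket")  # ~205 GB/s
```

Faster DIMMs or a second socket push that higher, which is presumably where quoted numbers in the 220 GB/s range come from.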
How well does NVMe scale to multiple devices, that is, how many GB/s can you practically get today out of a server packed with NVMe until you hit a bottleneck (e.g. running out of PCIe lanes)?
An AMD Epyc can have 128 PCIe 4.0 lanes at ~2 GB/s each, meaning it tops out around 256 GB/s of total I/O bandwidth (each x4 NVMe drive gets ~8 GB/s). And you can in fact get close to saturating that with the bigger Epycs. However, you will probably lose 4 lanes to your chipset and local disk setup, maybe some more depending on server setup, but it'll remain close to that figure.
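The lane arithmetic above can be sketched as a quick back-of-envelope calculation. The reserved-lane count and per-lane throughput are assumptions (PCIe 4.0 is 16 GT/s per lane, ~2 GB/s per direction after 128b/130b encoding):

```python
# Aggregate NVMe bandwidth for a 128-lane PCIe 4.0 host (rough estimate).
GB_PER_S_PER_LANE = 16 * (128 / 130) / 8  # 16 GT/s, 128b/130b encoding -> ~1.97 GB/s
TOTAL_LANES = 128
RESERVED_LANES = 4        # chipset / boot disk (assumption)
LANES_PER_NVME = 4        # typical x4 NVMe drive

usable_lanes = TOTAL_LANES - RESERVED_LANES
drives = usable_lanes // LANES_PER_NVME
aggregate_gb_s = usable_lanes * GB_PER_S_PER_LANE

print(f"{drives} x4 drives, ~{aggregate_gb_s:.0f} GB/s theoretical aggregate")
# -> 31 x4 drives, ~244 GB/s theoretical aggregate
```

That theoretical ceiling conveniently lands in the same ballpark as the RAM bandwidth numbers quoted above, which is why packing a server with NVMe gets interesting.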