NASA's Laser Link Boasts Record-Breaking 200 Gbps Speed (ieee.org)
220 points by mfiguiere on May 30, 2023 | 113 comments



As a point of comparison, apparently people have been able to do one-direction data transmission in the petabit per second range over a single strand of fiber optic cable over a relatively short distance.

https://newatlas.com/telecommunications/optical-chip-fastest...

Space-based communication advances are good news, and I suppose a lot of the same fiber optic techniques are also applicable to free-space optics and vice versa.

It's really weird how effective fiber optic cable is at doing what it does. It seems like a rare thing when some technology is developed that's so many orders of magnitude better than it needed to be when it was invented that the limiting factor many decades later is still the equipment at each end rather than the fiber itself. And it's super cheap too -- the vast majority of the cost of fiber is the cladding and installation.

I expect the vast majority of terrestrial traffic to be over fiber for the foreseeable future (save for "last mile" wireless devices of all kinds, including cell phones and Starlink) except for latency-sensitive applications for which the speed of light in glass and non-straight-line routing is deemed too slow. Lasers are great for where fiber isn't available though, and for extreme distances through a vacuum. Aside from the obvious advantages of not requiring a cable, inverse-square falloff is less bad than exponential decay in glass due to impurities -- even if we could string a fiber optic cable from Earth to Mars, we'd still need repeaters at regular intervals.


As someone who designs networking chips for a living, it's always entertaining to me when people get very excited over their new fiber optic cables, despite the fact that their PHY/MAC doesn't support much over 1GbE. You can use copper all the way up to 10GbE without too many issues. Fiber has a theoretical maximum of about 44Tb/s [1]. It's going to be a long time before we need to worry about advancing the link technology itself.

A lot of the big telecom companies are churning out 400GbE designs now with custom silicon/FPGAs. 800GbE and 1.6TbE specs are in the works at IEEE too [2]. It's amazing how far we've come since 1000BASE-T back in 1999.

[1] https://www.nature.com/articles/s41467-020-16265-x [2] https://www.computer.org/publications/tech-news/insider-memb...


Not sure I follow; isn't the reason you're excited about fibre that it implies a good connection to the "backbone"? Compared to DSL, cable or wireless, which are really limiting even in the 50 Mbit range (in practice).

And for the distances we're talking about you kind of get fibre anyway, since it makes the most sense. But many do have a copper connection to their fibre, just for the last stretch.


I was just making a commentary on the traps some of our end-customers fall into, when they upgrade the connection to fiber thinking it will improve their network speeds when the protocol is in fact the limiting factor.


What protocol? I can get 10 Gbps internet service right now.


Of course; telecoms pushing the leading edge of data transfer need fiber for 10/25/40/50/100/400GbE. Some of our customers are still making designs using Triple-Speed 10/100/1000 Mbps though. For embedded use cases that's good enough, but they still often spend more on fiber cables regardless of necessity.


Fiber is very cheap. Easily cheaper than good copper cable if you're aiming for 10 Gbps or higher.

The hardware less so, but even then very much affordable.

If you're planning for the long term and think that running new cabling everywhere would be a huge pain, or there's some sort of concern with interference, I would definitely go with fiber.


I'd love to agree, but there's a whole list of companies I can name and shame where upgrading was, and is, simply a waste of time and resources for everyone involved in the design. It'll do the same speed regardless, with identical error rates, because the protocol hasn't changed. It's simply marketing guff that they can slap on the same product. The "gold-plated HDMI" scam with a fake mustache.


I think you've just got some specific situation that's not quite obvious from your comments. It's not at all clear what's being upgraded, what kind of equipment is involved, and why you can predict the effort will be useless.


End users traverse the same core equipment no matter what the last mile cable media is. Newer fiber builds may have more efficient core gear or have more capacity but this is all limited by the ratio of users served by a particular backhaul link, and also the topology and configurations made by the operator. Large networks are a challenge for some.

Fiber is a lot less susceptible to water intrusion but still has the same potential for outages due to cable damage along roadsides and on aerial poles with power lines.


Of course, but that doesn't change that fiber is typically a massive improvement over anything else.

Because that means proper last mile connectivity where DSL suffers and cable has poor latencies. There is also less contention in the last mile.


It's a bit funny, but it's really a question of distance more than medium.

For in-rack, DAC/Twinax is just copper and good for 400G, and cheaper than separate optics and fiber.


And, DAC has lower latency and uses less power than 10GBASE-T even at the lower 10Gb speed (e.g., due to latency you don't want storage over 10GBASE-T). And, DACs contain fewer parts that can fail than optical transceivers (esp. short passive DAC cables) so more reliable (as long as not physically manipulated) than fiber + optical transceivers.

DAC cables are a bit harder to route than fiber though.


That should read:

And, DAC has lower latency and uses less power than 10GBASE-T (e.g., even at the lower 10Gb speed due to latency you don't want storage over 10GBASE-T).

(Not that anyone is still reading this old article or would likely care if they were)


As someone who has rigged up his house with MoCA everywhere, I just want to say that coaxial cable is an impressive piece of ancient tech. It's just a piece of copper carrying signals at specific frequencies, but the sheer level of multiplexing that goes into a single wire is impressive, simple, and reliable.


Crossing clock domains is a very interesting topic. For these ultra high speed connections, does it simply boil down to a really big FIFO? One could easily imagine a tree of FIFOs that are interleaved by some scheme.


A large DCFIFO array/synchronizer chain is part of the solution, yes, but it's not quite a one-size-fits-all solution. It depends on whether your signals are synchronous or async, and whether it's a pulse or not in a single-bit solution. For multi-bit it depends on whether it's static data, whether a transfer is required every cycle, whether the data needs buffering, whether the data is a counter, and whether all the bits must arrive in the same cycle.


Somewhat ironically, the best optical fibres (ZBLAN) are made in space.

https://www.nasa.gov/directorates/spacetech/flightopportunit...

I guess they have better optical clarity and so need fewer repeaters to span the oceans.


The real question is whether these fibers are sufficiently better to be worth building an industrial-scale fiber factory in space...


Apparently some test plants have been on station since 2022 [0]. I interpret the report as saying they are still working on ways to get it to operate meaningfully with sufficiently little babysitting.

[0] https://www.factoriesinspace.com/zblan-and-exotic-fibers


Thing is, if they want to manufacture tens of thousands of kilometers of fiber optic cable, they're probably gonna need something at a bigger scale than can be done on the ISS.

Which in turn means they will need to launch their own manufacturing spacecraft which is fully autonomous.


> It's really weird how effective fiber optic cable is at doing what it does.

When you can isolate the external noise away from the communication medium, it's astounding how much data you can cram in it.


Fibre optic is kind of cheating as a comparison though. You don't have to precisely aim the laser at the receiver while moving at an impressive speed.


Aside from local communications here on Earth, high bandwidth to deep space missions would open up many possibilities for astronomy. Multiple telescopes can be linked and a view synthesized in software. Greater physical distance between the linked telescopes translates into increased angular resolution. For local objects, you also get parallax information, giving depth perception, just like with two eyes instead of one. In theory, two telescopes in an orbit around the sun, spaced half an orbit apart, would have the angular resolution of a telescope with a mirror of the same diameter as the orbit. Perhaps 10 billion km across, if they were placed at the outer reaches of the solar system.

It has been done here on Earth with multiple telescopes, and with Earth-orbiting satellites. Often these are very short observations recorded to bulk storage media, then compared slower than real time afterwards. To do such observations at microwave or far infrared wavelengths would require many gigabits of bandwidth per second of observation. Infrared or optical is terabits or petabits per second. These numbers are no longer as incomprehensible as they used to be.
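
As a rough illustration of the scaling: the diffraction limit goes as wavelength over baseline, and the numbers below are just the figures mentioned above, assumed for the sketch.

    # Diffraction-limited angular resolution ~ wavelength / baseline.
    wavelength_m = 550e-9       # visible light, ~550 nm (assumed)
    baseline_m = 10e12          # ~10 billion km baseline from the comment above

    print(wavelength_m / baseline_m)   # ~5.5e-20 rad for the solar-orbit pair
    print(wavelength_m / 10)           # ~5.5e-8 rad for a single 10 m mirror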


High bandwidth communication to deep space missions can indeed revolutionize astronomy. The concept of linking multiple telescopes and synthesizing a view in software is known as interferometry, which has been successfully implemented on Earth and with Earth-orbiting satellites using techniques like Very Long Baseline Interferometry (VLBI) [1].

Placing telescopes in orbit around the sun, spaced half an orbit apart, could provide unprecedented angular resolution. However, the challenge lies in establishing high-speed communication links to transmit the massive amounts of data generated by these telescopes. Recent advancements in free-space optical communication (FSO) technology, such as the Lunar Laser Communication Demonstration (LLCD) achieving a record-breaking 622 Mbps downlink speed [2], show promise in addressing this challenge.

As technology continues to advance, it is not inconceivable that we will achieve the necessary bandwidth to support ambitious deep space interferometry missions, as I've learned from sources like MirrorThink.ai and various research papers [3, 4].

References:

[1] Space VLBI: from first ideas to operational missions - 2019

[2] A superconducting nanowire photon number resolving four-quadrant detector-based Gigabit deep-space laser communication receiver prototype - 2022

[3] Ground-to-Drone Optical Pulse Position Modulation Demonstration as a Testbed for Lunar Communications - 2023-01-31

[4] Investigations of free space and deep space optical communication scenarios - 2022-04-01


My understanding is that the technical blocker for space interferometry is precision metrology and formation flying, not bandwidth? I.e., you have to know the exact separation between the two telescopes as a function of time (something like nm over a multi-km baseline).


should be able to measure that with the same lasers, I would think.


At those levels of accuracy, I would expect the refractive index of the not-quite-empty space in our solar system to prove problematic.


I wonder if orbiting outside of the ecliptic results in less stuff (even down to hydrogen) in a meaningful way.


There is no atmosphere for a space interferometer, which is one of the main technical considerations for optical comms from orbit.


But how do you know where to point them?


Believe it or not: more lasers


> It has been done here on Earth with multiple telescopes, and with Earth-orbiting satellites.

And non-Earth orbiting ones as well. I always thought STEREO was pretty cool.

STEREO: https://en.wikipedia.org/wiki/STEREO


> A group of researchers from NASA, MIT, and other institutions have achieved the fastest space-to-ground laser communication link yet, doubling the record they set last year.

I'm guessing a returning astronaut with his or her pockets stuffed with SD cards is still faster and cheaper?


Except you don't have astronauts going to the vast majority of the 1000s of satellites orbiting the planet.

So apart from the 1-2 active space stations, where people go without having to do a full $100 million launch, I'd say laser transmission is a pretty good option for transferring data


Hate to be dark, but the astronaut and SD card can also be lost on the trip back

Not to mention 200Gbps is a lot of data, no way SD cards can match. Particularly as the stream can be constant


The term "Packet loss" is a bit grim in this scenario.


Maybe he just bumps into something and damages the SD cards. "Pocket loss", if you will.


Humans are also physically restrained from going certain places, such as crossing the barrier of a striking union.


That's why they're encapsulated


That and the risk of data corruption. Data transfer mitigates that by allowing you to simply retransfer the data in case it gets corrupted. You're also gonna have to copy the data from the drives/SD cards, and the best transfer speeds they can do to your local computing machine would be on the order of 1 Gbps max for SD Express and 7 Gbps for an external NVMe M.2 SSD. Now compare that to 200 Gbps...


I've heard that radiation is hard on solid-state storage.

Lot of that, out there, past our ionosphere.


I doubt it. The largest capacity SD card I can find is 1 TB, which weighs around 250 mg. With a data rate of 200GBps, 25 1 TB SD cards are transmitted every second, 6.25 grams per second. From a quick google it takes astronauts anywhere from 4 hours to 3 days to get to the ISS, so let's take the low number, 4 hours.

4 hours * 6.25 grams per second is 90 kilograms. I doubt anyone can just carry around 90 kilograms in their pockets. Let alone when going to space.


I think the correct calculation is that 1 TB SD card can hold 40s of 200Gbps traffic (1TB = 8Tb = 8000Gb; slightly more if these were actually TiB/Gib). So, 4h of 200Gbps traffic would be recorded on 4*60*60 / 40 = 360 cards. That would weigh 360 * 0.25g = 90g.

Even with 3 days, that's 18*90g = 1620g = 1.62kg, well within rocket carrying capacity.

However, arranging 360 SD cards by hand in the right order is much harder than relying on TCP SEQ numbers.
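
For anyone who wants to re-run the arithmetic, a quick sketch using the same assumed figures (1 TB and 250 mg per card):

    # Sanity check of the SD-card-versus-laser numbers above.
    link_gbps = 200                                  # downlink, gigabits per second
    seconds_per_card = 1 * 8000 / link_gbps          # 1 TB = 8000 Gb -> 40 s per card
    card_g = 0.25                                    # assumed mass per card, grams

    cards_4h = 4 * 3600 / seconds_per_card           # 360 cards
    cards_3d = 72 * 3600 / seconds_per_card          # 6480 cards
    print(cards_4h * card_g, "g")                    # 90.0 g
    print(cards_3d * card_g / 1000, "kg")            # 1.62 kg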


Also, the satellites are not likely to be in geostationary orbit - so those 200Gbps are only being achieved for the 5 minutes when they are in view of a ground station (possibly once or twice a day):

> With data rates of 200 gigabits per second, a satellite could transmit more than 2 terabytes of data—roughly as much as 1,000 high-definition movies—in a single 5-minute pass over a ground station.


You are right :). I fumbled a translation from TB to GB there.


> 25 1 TB SD cards are transmitted every second, 6.25 grams per second.

No, 1 Micro SD would be transmitted every 5s.

Plus it's 200Gb/s = 25GB/s, so that's 250mg/40s.

4h × 6.25mg/s = 90g

3days × 6.25mg/s = 1.62kg

Now with the max theoretical spec'd 128TB 2g SDUC cards, that's 101.25g for 3 days.

Not too bad eh?


I think your calculations are off by a factor of 8 - network speeds are quoted in bits, rather than bytes. Apologies if I've misunderstood your comment though.


Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway. -- Andrew S. Tanenbaum


I have a theory that alien civilizations don't even bother with long distance radio comms. It's far more efficient (and stealthy) to load up probes with hyper-dense storage and launch them at .99c than to be firing petawatt radio lasers all the time and hoping your recipient was able to listen in when it arrived.


At 200Gb/s I think not.

You have to factor in the round trip time for an astronaut and multiply that many seconds by 200 billion to get the total to compare against.

At 25GB/s you’d fill up a 500GB SD card in 20 seconds.


More like 160 seconds. Check your b's.


I think you read something wrong.

200 gigabits equals 25 gigabytes, and is five percent of a 500GB SD card.


sure, my bad!


Move over sneakernet.. we've got rocketnet.


And "semi"-realtime communication?


I get that this is aimed at major backhauls, I'm just frustrated that for the next 10 years we'll be talking about greater and greater bandwidth and not lesser latency. Consumer internet is going to suck for a long time.


Theoretically, space-based networking can be lower latency than ground-based (above a minimum distance) because of the difference in the speed of light in the two media.

https://people.eecs.berkeley.edu/~sylvia/cs268-2019/papers/s...


Also something I haven't seen discussed yet is space servers. This has to be a great speed opportunity: orbit some server system a few hundred km above the internet sats and laser link to those, rather than retrieving data back from Earth.


You have to have enough long distance transmissions to make it worth it. I'm not sure that happens in practice. In other words, even if it were better, the existing status quo is good enough and has a fine price point. Earth-based servers are still relatively expensive and need enough physical maintenance on a regular basis. Doing that in space becomes nearly impossible, even ignoring the fact that you also have a massive cooling problem, since space is one big insulator.


Given how much trouble thermals already are for Earth-based datacenters, I can't see that being economically viable for a LONG time


The edgiest hackers use Spaceflare (TM) workers at the edge of space, not just the edge of the network.


We could have neutrino based internet where the signal really takes the shortest path, i.e. through Earth's mantle/core.



Sounds amazing. Imagine small, inexpensive, reliable neutrino detectors. No more bad signals on your cell phone. Reduced latency for long distance communication. No cables needed.


Forget the consumer tech, high resolution neutrino astronomy would be dope.

Not that I expect anything like that in our lifetimes.


Sure, you'd get cheap detectors, but then you'd be dealing with a flood of artificial neutrinos that would blind our ability to do good research with them.


If you can detect them, sure


It's perfect for my /dev/zero as a service!


I am curious what your use case is where latency is currently an impediment. Sure, nobody likes lag, but outside of pro gaming and HFT, where is the need?

New York to LA is ~60msec

New York to Hong Kong is ~250msec


60ms @ 1 Gbit/s (~120 MiB/s) -> 7.2 MiB TCP buffer size required for bandwidth saturation. That's more than the max buffer size a run-of-the-mill modern OS gives you by default, meaning your download speeds will be capped unless you fiddle with sysctl or equivalent. And that's within the same continent, assuming minimal jitter and packet loss.

Another big one is optimizing applications for number of round trips, which most people don’t do, and it can be surprisingly hard to do so.

I am a throughput freak, but you’d be surprised at the importance of latency, even (especially?) for throughput, in practice. It’s absolutely not just a “real-time” thing.

If you're on Linux, you can use 'netem' to emulate packet loss, latency, etc. Chrome dev tools have something similar too; it's an eye-opening experience if you're used to fast, reliable internet.
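
For the curious, the buffer figure above is just the bandwidth-delay product; a minimal sketch with the numbers from this thread:

    # Bandwidth-delay product: the data "in flight" on the path, i.e. the
    # minimum TCP window needed to keep a 1 Gbit/s, 60 ms RTT link saturated.
    link_bps = 1_000_000_000
    rtt_s = 0.060

    bdp_bytes = link_bps * rtt_s / 8
    print(bdp_bytes / (1024 * 1024))   # ~7.2 MiB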


> 60ms @ 1 Gbit/s (~120 MiB/s) -> 7.2 MiB TCP buffer size required for bandwidth saturation. That's more than the max buffer size a run-of-the-mill modern OS gives you by default

Windows was quite happy to give me a 40,960 * 512 = 20,971,520 byte TCP window for a single stream speed test from mid US to London, run via Chrome. Linux is the only one I've noticed with stingy max buffers. I never really understood why user-focused distros like to keep the max so limited when there are plenty of resources.


That’s great to hear! Thanks for the data point, hope this is more representative. I’ve encountered Windows (10) instances in the wild with auto-tuning turned off (64k max).

> I never really understood why user-focused distros like to keep the max so limited when there are plenty of resources.

Yeah, agreed. That is way too conservative, especially for an issue which most people can’t easily triage.


You can definitely turn it off in Windows, should you want to for some reason, but scaling windows up to 1G has definitely been the default since Windows 2000: https://web.archive.org/web/20130613080612/http://www.micros...


Communicating with other humans.

You have around 150ms of one-way latency before perceptual quality starts to nose dive (see https://www.itu.int/rec/T-REC-G.114-200305-I). This includes the capture, processing, transmission, decoding, and reproduction of signals. Throw in some acoustic propagation delay at either end in room scale systems and milliseconds count.


> but outside of pro gaming and HFT

But that only allows for current technologies/consumer habits and not the future.

Gaming explodes into online VR, where latency is incredibly noticeable. I still think virtual shopping will become a thing; everyone buys crap online now, but it's nice to be able to "see" a product before you buy it.

As we start getting autonomous vehicles/delivery drones etc they can all make use of the lowest latency link possible.

Lower latency also usually means (but not always) lower power usage aka the "race to sleep".

It would also enable better co-ordination between swarms of x, whether it's aforementioned delivery drones, missile defence systems (which launch site is better to launch from, stationary or mobile) etc.

But also just human ingenuity and invention, we should always try to make it faster just because we can, at the end of the day.


I'm sure plenty of money will go towards trying, but I hope that virtual shopping does not become a thing. Especially with respect to clothing, there is simply too much information that can not be communicated visually. This includes the texture of the fabric, its density, degree of stretch, how the item fits on your body, how it looks in different lighting, etc. Online shopping also makes it easy to scam people, where some cheap, mass produced thing is misrepresented as a higher quality item.


Oh, no, I definitely will still always buy clothing IRL. The biggest part being that no single company has an agreed way to do sizing, even for simple things like shoes: it can nominally be a UK size X, but the interior shape can still vary.

But for much else I don't mind at all; people buy tonnes of stuff off Amazon based on 2D pics already.


60ms RTT is noticeable even for amateur gamers


Maybe GP does not live near NY, HK or any supernode.


I was just in Hawaii and it was only 72ms.


Doesn't Hawaii's infrastructure benefit from it being a highly touristic place and a very rich state? In fact Hawaii has multiple supernodes:

https://www.submarinecablemap.com/


Seeing the submarine cable maps for Hawaii can be a bit misleading. If I'm not mistaken, a lot of the fibre capacity simply bounces through Hawaii and isn't trivial for the islands to hook into. I don't believe Hawaii itself is any better connected to the internet than say, New Zealand


You are historically accurate. For a long time there was a lot of fiber that physically hopped through Hawaii, but didn’t actually talk to any gear locally (switches/routers). It was just an optical landing/regen station.

These days I believe there is less “optical only regen” and more IP-connected links.


Thanks for the clarification. My point was: your latency will probably be impacted more by the number of nodes between you and the destination server than by the distance to that server.


Wasn't 5G supposed to solve latency issues? (n.b. not a 5G user)


Yeah, marketers love to say that because there is a latency gain on the wireless leg of the request's journey. The closer the information you request is to the antenna, the more you'll benefit from this improvement. However, in actual conditions a request travels hundreds or thousands of kilometers through many nodes, each one adding a bit of latency. So in the end you may feel a difference if you use VR in SF or do HST in central London. You won't notice the gain if you're trying to load a page hosted far away from you.

In fact latency is impacted far more by net neutrality, but that's another subject.


If you live in a major city close to the target datacenter, maybe. I have an excellent last mile (500 Mbps symmetrical FTTH) and pings to sites like google.com still go above 300 ms. 5G wouldn't help me at all, probably the opposite. The vast majority of that time is spent in data bouncing around the world through very inefficient routes.


Well, you can't go faster than the speed of light?


The distance from my city to New York is roughly 5600 km which, given the slightly slower speed of light in an optical fiber, would give a minimum ping of ~27ms (one way). Given the same distance but in a vacuum, it would be around 18ms.

But if I ping a server in NY right now, my actual ping would be around 100ms (can easily go up to 200ms). Most of the latency is created because of the routing. The farther you are from important peering hub, the worse your ping is usually going to be.

When I was living in French Guyana, it was basically impossible to have a ping less than 120ms from anywhere in the world.
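
Where those numbers come from, as a quick sketch (1.47 is a typical refractive index assumed for silica fiber):

    # Minimum one-way delay over 5600 km, ignoring routing detours.
    c_km_s = 299_792.458       # speed of light in vacuum, km/s
    n_fiber = 1.47             # assumed refractive index of the fiber
    dist_km = 5600

    print(dist_km / c_km_s * 1000)               # ~18.7 ms in vacuum
    print(dist_km / (c_km_s / n_fiber) * 1000)   # ~27.5 ms in fiber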


To note for others who might be caught off guard: minimum ping over an ideal direct fiber path would therefore be ~54 ms since ping reports round-trip time, not the one-way value.


You can’t go faster than the speed of light in a vacuum.


Spherical light in a vacuum?


You can find a shorter route though.


Let's wait for quantum entangled networks. Just 8 pairs of particles separated at the ISP level, and we have instant hops of data limited in speed only by the oscillator.


That's theoretically impossible, and not at all how entanglement works.


Very cool, but very dependent on weather too. Imagine your satellite can't send data because there is cloud cover... In some parts of the world that may last for 6 months.


Good point to be sure, but a little yes and no - no one is planning on building a space comms station in a PNG valley obscured by clouds. Typically they've been built in good-weather locations, e.g.:

> Optical Ground Station 1 in Table Mountain, California. The location was chosen for its clear weather conditions and remote, high altitude. [1]

And the application for ground-based comms will largely be mountain top to mountain top, for long legs across rough, cable-laying-defying territory.

Moreover, recent developments have seen much better cloud-piercing abilities - both in the frequencies used and from better real-time "de-turbulence" filters that can reconstruct what cloud-borne particles tear asunder [2].

[1] https://www.nasa.gov/sites/default/files/atoms/files/tbird_f...

[2] https://www.raytheonintelligenceandspace.com/news/2020/07/15...


The solution the satellite world has developed for dealing with precipitation absorption of Ka-band and higher frequencies might apply to lasercomm and clouds. In the Ka-band case, having multiple potential downlink sites provides good probability that at least one of them has weather conditions suitable to support the downlink.
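
As a rough sketch of why site diversity works (the 20% outage probability and the independence between sites are assumptions for illustration only):

    # Chance that at least one of N ground stations has usable weather,
    # assuming each site is clouded out 20% of the time, independently.
    p_blocked = 0.2
    for n in (1, 2, 3, 5):
        print(n, "sites:", 1 - p_blocked ** n)   # 0.8, 0.96, 0.992, 0.99968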


The solution is driving down the cost of base stations and having many of them, similar to how Starlink has a large network of downlink stations colocated at fiber meetups.

https://eyes.nasa.gov/dsn/dsn.html


This is the part of Starlink that gets very little fanfare but is one of the more impressive parts of the system. Early skeptics pointed out that not only had no satellite company ever launched anywhere close to that many satellites, but that ground stations tended to be extremely expensive to maintain (tens of millions of dollars per year per station) and were few in number, but Starlink was proposing to build thousands of them. A lot of the old guard thought that if the satellite launch costs didn't kill them the ground station maintenance would. But Starlink proved that it was mostly an "old space" problem and there was no fundamental reason the ground stations had to be so outrageously expensive.


Can someone help me understand "intuitively" why light is able to transmit so much more than electrons over a wire (is that even correct)? I guess this is a request for an ELI5.

Is it because with light, you can basically cram infinite photons along a pipe and they can all travel without any effect on each other and the "medium" itself (of course, no medium)? Whereas with electrons in a wire, they start to interfere with each other, the harmonics / capacitance of the wire, and more? Or is there a natural minimum pulse length that electrons cannot go beyond in a wire?

And to get out all the bandwidth at the end (when using light), you just split out the wavelengths with as many narrow band filters as you need, then send that to a decoder that then turns it into the usual bottlenecked ethernet at your favorite router?


I can't speak to whether or not you can get the same bandwidth with wire/electricity as you can with lasers/fiber but there's also a couple practical things here:

Electrical signal propagation takes a lot more energy than light propagation (physics folks can explain why way better than I can). That's why you can send 10GbE about 100 m over twisted pair, but many km over fiber from the same SFP+ port (aka using the same power draw).

Long wires make good antennas, so you have a lot more sources of interference on the long wires meaning the signal gets lost in the noise easier if there happens to be lightning, power wires, big motors, etc near-by. (see also energy needs). Dealing with this uses some combination of complex switching schemes in multiple wires, shielding and signal processing. More complex than "on and off" from your laser. (Other things like capacitance and inductance in wires adds to complexity here too).

Point of all that being: I think part of the reason you get the impression that photons carry more data than electrons is that more effort is put into photons carrying lots of data fast - purely from a practical "it's easier to engineer" standpoint. Why worry about all the hard practical electrical engineering when we can just get good at turning lasers on and off really fast, and have good optical sensors where the laser is pointing?


By the way, if you know, what are the kinds of detectors at the other end of the laser pulse that receive the signal? Are they like the pixels of a CCD but without the need for periodic / time-bound readout? Are they just silicon photocells that can react at femto(?)-second speeds and turn the light into pulses of electrons?


Most simple answer is you can cram N number of light streams via WDM (wave division multiplexing). You transmit N number of parallel streams on different wavelengths.


Thanks for that -- And is something analogous not possible for electrons in a wire?


The highest frequency you can practically send long distances over coax is around 1-2 GHz. When I had a cable modem the highest I saw was 900 MHz. Therefore, regardless of what encoding scheme you use, you will never get more than a couple GHz of bandwidth.

Visible light starts at 400 THz, so right off the bat you get hundreds of thousands of times more headroom before physics becomes the limiting factor.


I was thinking somehow that the response / recovery time of the silicon detectors to react to the laser pulses would be a limiting factor, is that at all valid?

Like, you cannot blink faster than <x> nanoseconds and get a CCD(?) to see it properly. (I'm sure it's not like a CCD with readout, etc. but whatever mechanism is the correct one, is there some natural minimum read time?)


It's analogous to frequency multiplexing in radio (and radio carried over a wire -- which high speed codings like modern Ethernet or PCIe are, in practice). For example, sending one channel on a carrier at 100 MHz, and another at 102 MHz. These days potentially thousands of carriers may be used in parallel. Practical bandwidths in radio are limited to a few gigahertz at most, though. By comparison the visible spectrum is several hundred thousand GHz wide. There's much more bandwidth to work with at optical frequencies.
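
A toy sketch of that frequency-division idea (illustrative only; two made-up bitstreams share one "wire" on carriers 20 kHz apart, and one is recovered by mixing back down and averaging):

    import numpy as np

    fs = 1_000_000                                     # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)
    data_a = np.sign(np.sin(2 * np.pi * 1_000 * t))    # slow bitstream A
    data_b = np.sign(np.sin(2 * np.pi * 1_500 * t))    # slow bitstream B

    carrier_a = np.sin(2 * np.pi * 100_000 * t)
    carrier_b = np.sin(2 * np.pi * 120_000 * t)
    line = data_a * carrier_a + data_b * carrier_b     # shared medium

    # Coherent detection of channel A: mix down, then crude low-pass (averaging).
    mixed = line * carrier_a
    recovered = np.convolve(mixed, np.ones(500) / 500, "same")
    print(np.mean(np.sign(recovered) == data_a))       # close to 1.0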


Here's what looks like their writeup on the connection protocol:

> "We consider the use of ARQ methods on the downlink channel of deep-space missions. Some rudimentary forms of ARQ are already used in deep-space missions, where blocks of corrupted or lost data are isolated and requested again from the spacecraft. Retransmission methods have been considered, particularly as a playback strategy to overcome weather outages. We propose to use retransmission methods in a more systematic and automatic fashion... We consider an interplanetary spacecraft transmitting frames of data to Earth."

https://tmo.jpl.nasa.gov/progress_report/42-122/122L.pdf
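
A minimal sketch of the idea (an illustration of selective retransmission in general, not the specific scheme from the JPL report):

    # The ground station tracks which downlinked frames arrived intact and
    # asks the spacecraft to resend only the corrupted or missing ones.
    def frames_to_request(expected_count, received_ok):
        return sorted(set(range(expected_count)) - set(received_ok))

    # e.g. frames 2 and 5 were lost to a weather outage during the pass
    print(frames_to_request(8, [0, 1, 3, 4, 6, 7]))    # [2, 5]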


If you're trying to optimize the latency of a software system, it's helpful to make the speed of light your dominant bottleneck. :-)


I guess this won't work for Netflix who's pushing 800Gbps from a single server.

https://people.freebsd.org/~gallatin/talks/euro2022.pdf

Joking aside, this is super interesting.




