Hacker News

They just launched the next batch of Starlink satellites minutes ago: https://www.youtube.com/watch?v=y4xBFHjkUvw. The 60-satellite train should be visible in the sky over the SF bay area around 9:35 tomorrow night: https://james.darpinian.com/satellites/?special=starlink

Random FAQ about Starlink below:

The combination of low latency (20-40ms) and high bandwidth (100+Mbps) has never been available in satellite internet before.

A public beta may start later this year for some users in the northern US, around the 14th launch. Today is the 7th launch of v1 satellites. SpaceX is hoping to do more than two launches per month but hasn't reached that pace yet.

The ground stations look like this: https://www.reddit.com/r/SpaceXLounge/comments/gkkm9c/starli... The user antennas can be seen in that picture; they are the smaller circular things on black sticks. They are flat phased array antennas, and don't need to be precisely pointed like satellite dishes do. They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.

The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.

Starlink can't support a high density of users, so it will not be an alternative to ISPs in cities. Rural and mobile use are the important applications. The US military is doing trials with it. Cell tower backhaul may also be a possibility.

Starlink V1 doesn't have cross-satellite links, so the satellites can only provide service while over a ground station. There will be hundreds of ground stations in North America; no information about other regions yet. Starlink V2 is planned to have laser links between satellites, which will enable 100% global coverage 24/7, though local regulations are likely to prevent SpaceX from providing service in many places.

Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
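
As a rough sketch of the fiber-vs-vacuum gap (the 17,000 km London-Sydney distance and the 1.47 fiber refractive index are illustrative assumptions, and this counts propagation delay only, not routing or queueing):

```python
# speed of light in vacuum vs. typical single-mode fiber (index ~1.47, assumed)
C_VACUUM_KM_S = 299_792
C_FIBER_KM_S = C_VACUUM_KM_S / 1.47

def one_way_ms(distance_km, speed_km_s):
    """Propagation delay only; ignores routing, queueing and serialization."""
    return distance_km / speed_km_s * 1000

# Rough great-circle London -> Sydney distance (illustrative figure)
d_km = 17_000
print(round(one_way_ms(d_km, C_FIBER_KM_S), 1))   # fiber: ~83 ms one way
print(round(one_way_ms(d_km, C_VACUUM_KM_S), 1))  # vacuum: ~57 ms one way
```

Real routes add path inflation and per-hop delays on top of this, but the ~30% propagation gap alone is tens of milliseconds on intercontinental distances.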

Each launch of 60 Starlink satellites has nearly as much solar panel area as the International Space Station. Once SpaceX's Starship is operational they should be able to launch several hundred satellites at once instead of just 60.

Starlink's only current competitor, OneWeb, just filed for bankruptcy after only launching a handful of satellites, and is fishing for acquisition offers. Amazon is also planning something called Project Kuiper but not much is known about it.

Starlink V2 will have 30,000 satellites, requiring hundreds of launches. Even once the initial fleet is launched, SpaceX will still need to maintain the constellation with many launches per year indefinitely.

SpaceX's FCC application has many interesting details: https://licensing.fcc.gov/myibfs/download.do?attachment_key=...




> Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.

Not to mention the dramatic reduction of hops. My route from the US east coast to the University of Melbourne in Australia [1] is 30 hops as reported by traceroute, with at least as many switches in the way. You could make the same link with only a few satellites.

[1] www.ie.unimelb.edu.au

EDIT: 30 is actually just the default max hops in traceroute; it's really 32 hops from me to Melbourne.


I'm in Switzerland and have 25 hops, which can be broken into:

- 1-7: Hops within my ISP's in-country network (~4ms total latency)

- 8-10: Hops within my ISP's in-Europe network (~28ms total latency)

- 11: London -> New York (~93ms total latency)

- 12: New York -> Los Angeles (~160ms total latency)

- 13: Transfer in LA from my ISP to AARNet (about the same latency)

- 14: LA to somewhere in NSW (guessing Sydney, 305ms total latency)

- 15-25: Routing within AARNet and Unimelb (319ms total latency)

So most of the latency looks to be attributed to large hops across oceans rather than internal switching. Even if you could narrow it down to London -> NY -> LA -> NSW you'd have 277ms.


From my university network in Germany, I seem to get a direct London → Perth link and a total latency of 278ms. I couldn't find any information about a direct fiber between London and Australia, though there is one from London to Singapore, and AARNet seems to have presence in Singapore. My guess would be that there is some switching below the IP layer going on in Singapore as described by dicknuckle in a sibling comment.

     9  cr-fra2-be11.x-win.dfn.de (188.1.144.222)  15.360 ms  15.385 ms dfn.mx1.fra.de.geant.net (62.40.124.217)  15.021 ms
    10  ae7.mx1.ams.nl.geant.net (62.40.98.186)  21.806 ms dfn.mx1.fra.de.geant.net (62.40.124.217)  15.244 ms ae7.mx1.ams.nl.geant.net (62.40.98.186)  21.707 ms
    11  ae9.mx1.lon.uk.geant.net (62.40.98.129)  29.059 ms ae7.mx1.ams.nl.geant.net (62.40.98.186)  21.805 ms ae9.mx1.lon.uk.geant.net (62.40.98.129)  28.933 ms
    12  138.44.226.6 (138.44.226.6)  196.613 ms ae9.mx1.lon.uk.geant.net (62.40.98.129)  29.074 ms  29.156 ms
    13  138.44.226.6 (138.44.226.6)  196.807 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28)  277.463 ms 138.44.226.6 (138.44.226.6)  196.778 ms
    14  138.44.64.73 (138.44.64.73)  277.544 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28)  277.629 ms *


I don't think it's Perth, the address has ".vic." in the name, which implies that the router in question is located in Victoria.

My guess is that there's a private network involved.

If you use AARNet's looking glass [0] it shows that its route to GEANT's London router goes through Singapore.

[0]: https://lg.aarnet.edu.au/cgi-bin/lg


Yeah you're right, "pe1" shows up in many of their hostnames with different states in the name. And their looking glass shows a bunch of hops within Australia, none of which show up when I do a traceroute.


PE means provider edge. CE - customer edge.


AARNet own subsea capacity to Singapore: https://www.aarnet.edu.au/network-and-services/the-network/i...

They also have a fairly extensive network of dark fiber and leased fiber in Australia.

I suspect they might have an onward wavelength service to London.


Starlink has no inter-satellite links, so there will still be a large number of hops. The original promise didn't pan out.


Init7 gets there in 15; they reach AARNet directly from their presence in LAX. However, it still takes ~310ms to reach NSW.


Huh, my ISP is Init7 and as I said, takes 25 steps to get there, though steps 18-24 inclusive show as "waiting for reply" in MTR.

Init7's traceroute [0] shows 5 fewer steps to r1lon2.core.init7.net than my traceroute though and appears to route through r1bsl1 (assuming Basel) instead of Frankfurt.

Perhaps you're in/near Basel and so skip straight to London, circumventing my 8 hops around Zurich?

[0]: https://www.as13030.net/traceroute.php


Those numbers seem awfully high. I just ran a speedtest between Germany and California and got 170ms total latency. Back when rabb.it [0] was alive I could even get reaction scores [1] of 450ms compared to 230ms when running the test locally on my computer. This is quite impressive when you consider that it means video encoding and decoding must happen in less than 50ms. Conventional video encoding is usually slower than real time.

[0] https://en.wikipedia.org/wiki/Rabb.it (Californian startup)

[1] https://humanbenchmark.com/tests/reactiontime


Germany to California for you: 170ms. Switzerland to California for the other commenter: 160ms.

So… awfully high how?


My latency numbers were cumulative, which is what I meant by "total". The total round trip time to Melbourne is 320ms.


> Because the speed of light in a vacuum is 30% faster than in optical fiber

Non-Australians, don't get too excited. Those satellites are at 340 miles, so a bounce adds 680 miles of path (3.66 ms) plus two to three Starlink hops, which cancels out some of that "speed in a vacuum" advantage.

The way to get almost c on earth is via direct microwave links.


That 3.66 ms is a worst case; depending on the angle to the satellites involved, the bounce may add significantly less than 680 miles to the journey. I.e., if it's 340 miles east and 340 miles up, that's 480 miles to the satellite, adding just 140, not 340, miles.

Outside of HFT, most networks are far from the shortest great circle routes between you and the other end, which further complicates the issue.


The round-trip is not 140 miles to a satellite at 340 miles.

Regarding cable placement, in some cases it's circuitous. But sometimes not, especially in 2020.


You misunderstood. (Ignoring the earth being a sphere.)

A satellite directly overhead means you need to travel to that altitude. However, if it's not overhead, light travels the hypotenuse of a right triangle, where X is the distance to the point underneath the satellite and Y is the altitude of the satellite. That distance is sqrt(X^2 + Y^2). From there you need to travel to a different base station.

Assuming an ideal path where the satellite is directly between two locations that are 680 miles apart, that adds up to 2 * sqrt(340^2 + 340^2) ~= 961.7 miles vs 340 + 340 miles, or an added 281.7 miles, not 680. In other words, 1.414x the distance rather than simply adding 680.

Clearly the earth is not flat and you're very unlikely to be in exactly that situation, but assuming you can reach several satellites at the same time, it is likely one of them will be roughly in the direction you want to go.
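
The flat-earth triangle argument above can be sketched in a few lines (same illustrative numbers: 340 mi altitude, 680 mi ground separation):

```python
import math

def added_path_miles(altitude_mi, ground_separation_mi):
    # Flat-earth sketch: one satellite midway between two ground stations.
    half = ground_separation_mi / 2
    via_satellite = 2 * math.hypot(half, altitude_mi)  # up one hypotenuse, down the other
    return via_satellite - ground_separation_mi

# Satellite at 340 mi altitude, endpoints 680 mi apart:
print(round(added_path_miles(340, 680), 1))  # ~281.7 extra miles, not 680
```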


> The way to get almost c on earth is via direct microwave links.

But line of sight. Short-haul.


You'd be surprised how far HFT companies run their microwave link networks to get better latencies :)

https://sniperinmahwah.wordpress.com/2014/09/22/hft-in-my-ba...


From one space elevator to the next. Yeah?


FYI: both Canada and the US have had transcontinental RF links. The repeaters slow down the transmission by about 30%.

However those were fairly low-bandwidth.


There are probably double or triple that many switches in between. They're not really switches, though; more like super fast packet routers. The latency is much less than any normal or carrier-grade switch, and they don't touch the IP layer at all. (I manage a bunch of them.)


one of the things I keep telling non-network-eng people is that looking at a traceroute gives you absolutely no way of knowing what the OSI layer 2 topology is for transport networks, MPLS, switches, DWDM circuits carried as lit services "glassed through" a location, etc. Just the IP network.

Discovering the underlying layer 2 topology of a carrier's network requires inside information and cannot be easily discerned sitting at a computer elsewhere on the internet. You might see two routers that appear to be directly adjacent to each other but it's actually carried as a 10Gbps VLAN across a several-state sized region between two cities many hundreds of km apart, with a lot of intermediate equipment in between.


Do you have any recommendations for where to learn more about network eng for us non-network-engs? My experience is in DevOps (on k8s, mostly) and systems/application programming, if that helps pinpoint any advice you could give. I’m trying to get the bigger picture like network engineers understand, as well as start to understand the layer 2 topology that you described.


The traditional and hard way is to start as a first-tier NOC person for an ISP and work one's way up. Or to start as a field tech for an ISP and show skills worthy of getting promoted (perhaps into a NOC job or a manager-of-field-techs job). That method takes quite a while.

Medium to large sized ISPs have a very large amount of BSD/GPL/Apache/misc licensed software running to support back end monitoring and provisioning systems. They do occasionally hire software developers to customize things for their environments, so it certainly wouldn't hurt to reach out to the noteworthy ones in your region and try giving them your CV.


Read the Cisco CCNA1 certification book. It goes deep on the theory and levels of the OSI model like switching. From there it’s a lot easier to grok.


Yep, start there and then throw GNS3 on a fairly hefty VM and start labbing scenarios that relate to environments that are of interest to you.


You could build a playground internet with some VPN-connected boxes which talk BGP and route 10.0.0.0/8 between them. It's a lot of work, but a friend of mine started such a thing and I have learned _so much_ about networking on layers 2 and up from that.

Start by installing a BGP deamon on two boxes that share a network and see what you can do :)


Nanog.org tries to do some outreach. Might be worth checking out some of their intro sessions on the basics of routing, traceroute, etc.

Plenty of other resources like irc channels, too.


Ditto. Would love to figure out if there's a good way of mapping out layer 1/2 connectivity from a single computer as well. Would be really, really fascinating.


If you want layer 1 (physical location of dark fiber) you have to get the GIS dataset from the carriers. The data sets exist, but they're often protected as proprietary information or under NDA. I have a QGIS setup with a ton of stuff in it.

There are other ways of acquiring layer 1 data which are labor intensive and involve the equivalent of filing FOIAs for construction permits with local city and county agencies, etc.

Big facilities based ISPs that have a lot of fiber out there underground and aerial make extensive use of GIS software. Their construction groups will have their own full time GIS staff positions.


You'll have to follow the cable. Since this forwarding is entirely passive there's no way to know.


You could build a lab with older, yet relevant hardware from eBay, etc.


Actually, you don't want to do that. It costs money and you really should have a good door between you and the equipment or you will go nuts from the noise.

Most of the interesting stuff can be set up using Linux, Open vSwitch, FRR/BIRD, network namespaces/VRFs and more. It is all FOSS, so more or less zero cost. Of course, you will not learn how to configure a Catalyst switch that way. For some stuff there are virtual appliances you can spin up with KVM/QEMU, but most of the enterprise stuff has to be bought. Again, by that time you will have a solid understanding of what should be happening and will know what to look for in the documentation. The rest is field experience with firmware bugs, methods for approaching problems, and the syntactic sugar of the particular equipment. At least that is my view.


I've seen ISP routers massively increasing latency in conjunction with packet loss -- had a problem with one provider recently where packet loss would increase every evening, and so would the rtt. Suggests large buffers on the congested part. The reverse DNS of the given IP suggested it was a gig-e connection (my side of the hop, which was also into LINX, is 100G)

Of course you can hide a router by not decrementing the TTL as it passes through your network at layer 3, and you can hide the IP by not responding with ICMP time-exceeded messages.

An ICMP reply could well return on a different path than the direction it was sent, with a path like this:

Host -> R1 -> R2 -> R3 // R3 -> R4 -> R5 -> R1 -> Host

Traceroute will only show the outbound route, so you should traceroute from both ends

The latency will also be affected by ICMP generation on the router, which could be delayed, rate limited, dropping, etc.

Doing a quick traceroute to a host of mine in Sydney, from London, shows

  5ms to i-91.ulco-core02.telstraglobal.net 202.40.148.33
  82ms to i-10104.unse-core01.telstraglobal.net 202.84.141.145
  132ms to i-10601.1wlt-core02.telstraglobal.net 202.40.148.106
  277ms to i-10406.sydo-core04.telstraglobal.net 202.84.141.226
sydo will be sydney

To get the exact map I could talk to Telstra (in this case I peer with telstra directly in London)

Or I could look at telstra's map, which ddg helpfully tells me is at

https://www.telstraglobal.com/network-infrastructure-map/

Sadly the map doesn't work very well, more form over function

ddging 1wlt-core02.telstraglobal.net returns this though

https://crowdsupport.telstra.com.au/t5/home-broadband/bad-ro...

Which tells me 1wlt is LA. unse will thus be east coast U.S. I'd have expected routing via Singapore.

There's no way to know which way the traffic is actually going without asking Telstra.

I have 2 ethernet circuits from the UK to Washington DC; to me it looks like two layer 2 1500 MTU circuits. Only by talking to the provider can I work out which circuits it actually travels on transatlantically. It's supposed to be separate, but latency changed by 2ms a few days ago. Asked them about it, and there was a failure in their network; they rerouted in a few hundred milliseconds (which isn't good, as now both diverse circuits run via the same equipment, so any issue like another 100ms outage will cause an impact)


>I have 2 ethernet circuits from the UK to Washington DC; to me it looks like two layer 2 1500 MTU circuits. Only by talking to the provider can I work out which circuits it actually travels on transatlantically. It's supposed to be separate, but latency changed by 2ms a few days ago. Asked them about it, and there was a failure in their network; they rerouted in a few hundred milliseconds (which isn't good, as now both diverse circuits run via the same equipment, so any issue like another 100ms outage will cause an impact)

Buying two MPLS circuits (assuming they are that) from the same provider is a single point of failure; your only real redundancy is your handover at each site, if they are separate.

You're better off buying an optical link; then you are also not sharing bandwidth with anyone else. The cost isn't that different in my experience, but it might be for a trans-Atlantic circuit. IPsec over normal internet connections with multiple ISPs is a better choice.


Don't get me started on that. However, the two circuits are sold as a fully resilient pair (different tails from different DCs arrive into the building in Washington from different directions, with guaranteed different paths to the DCs; then in the contract it's guaranteed neither circuit will run via the same equipment or on the same submarine circuit. In the UK they arrive in two separate cities.)

However, I know reality and contracts never meet; unfortunately in a large dysfunctional organization there are other considerations in circuit procurement than technical requirements.

As for internet circuits, I had two ISPs in NY on two separate paths, which is great. Something went wrong about a month ago, and the routing from one of the ISPs changed, meaning that we were back to a single point of failure who we have no business relationship with


We had a partner that bought a redundant MPLS service from their DC to another DC (not their own) and to AWS Direct Connect via that DC. Their MPLS provider was a SPOF, the provider for the second DC was a SPOF, and the Direct Connect circuits terminated in the same AWS router, another SPOF. All three SPOFs had each caused significant downtime within a few months.

This was designed by the partner, contract signed by us, the whole solution paid for by us, without consulting any network engineer on our side until 1 week before it was supposed to be implemented. We proposed two different solutions that would cost less than 1/20 of what their solution was costing us ($20k/month), while providing real redundancy (yet to be implemented, though some SPOFs are solved by now). Never enjoyed outages as much as these, as we got to explain why their solution was so bad each time.


Choir here, although in our case the connectivity was designed by a third party consulting firm, against the advice of long standing employees with experience.

The third party consulting firm are long gone, as are the people high up in the company who brought them in.

Now there's new people high up who come up with the same problems, but in a slightly different way, arguing how their basket is far better than the previous basket.

Sigh.


Massive latency and increase in packet loss, in correlation with a hop between providers, is usually an overly congested peering or transit session. Particularly if noticed most often during evening peak hours. Some people aren't paying attention to their traffic charts and monitoring systems, and aren't upgrading circuits as needed.


100km = 3 light-ms - which should be noticeable in the relative pings, no?


Yes, I mean it can be obvious when ISPs have proper consistent reverse DNS with hostnames and POP designations. You might see a router in Sacramento that is a neighbor of something in Portland. Obviously there's a lot of stuff in between there but it's all abstracted away at the purely layer-3 IP network level.


You've missed one zero, 1 light-ms = 300km.


In vacuum, yes, in medium it's more like 200 km.
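
Both figures can be sanity-checked in a couple of lines (assuming a typical fiber refractive index of ~1.47):

```python
C_VACUUM_M_S = 299_792_458
FIBER_INDEX = 1.47  # assumed typical refractive index of single-mode fiber

km_per_lightms_vacuum = C_VACUUM_M_S / 1e6               # metres per ms -> ~300 km/ms
km_per_lightms_fiber = km_per_lightms_vacuum / FIBER_INDEX  # ~204 km/ms in fiber
print(round(km_per_lightms_vacuum), round(km_per_lightms_fiber))
```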


Do these have googleable model numbers, if I wanted to see pictures of them ("oh that's what my packets go through")?


At the small end, a juniper MX204 is a good example of a current-gen router that has enough RAM to take multiple full BGP tables, and has a sufficient number of 10 and 100GbE interfaces.


Juniper MX series. Juniper PTX. Cisco ASR 9000.

That's basically a good portion of the internet.


Look at Junipers PTX line, they are fairly common in carrier grade networks AFAICT.


ISP core networks typically have two components, routers and dwdm/opto equipment. The routers are not really special, just your regular ISP core/edge router, adding roughly the same delay for each hop (invisible mpls hops or not) on the order of microseconds.

Opto equipment adds practically no delay as they just forward the light without looking at it or processing it. There might be redundant paths that it can switch between if there's a fiber cut.


I wonder how many hops it will actually remove. I just tested it, and from my apartment in Mountain View, CA to my VPS in a data center ~20 miles away in Fremont, CA I get 14(!) hops.


> I wonder how many hops it will actually remove.

Hopefully a lot! Fun fact: IPv4 and IPv6 have a maximum network size. Because of the TTL field in v4 (renamed Hop Limit in v6), a path through an IP network can never be more than 255 hops long.
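
A quick way to see the one-byte cap, sketched with Python's standard socket module (no packets are actually sent; this just sets the TTL on an unconnected UDP socket):

```python
import socket

# The IPv4 TTL (IPv6 "Hop Limit") field is a single byte in the packet header,
# so 255 is the largest value a sender can ever set.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)  # 256 would be rejected by the kernel
ttl = s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL)
s.close()
print(ttl)  # 255
```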


Hops are artificial; as long as you are 100% sure you're not introducing loops, you can just tell routers not to decrement the TTL and they'll be invisible.

It’s just that if you do that and you do introduce a loop the packets will keep looping and the network will very quickly overflow.


You can tunnel, though, so the actual number of hops could be unbounded. Routers can also rewrite the field.


And just like NAT with IPv4, we will use hacks to get around implementing a proper solution for decades (and counting).

With satellite internet taking off, and actual interest in extraterrestrial colonies from e.g. NASA and SpaceX, I think we need a proper space communication protocol. Maybe it can all be solved in L2, but I think we will soon be looking at 255 hops the way we look at the limited address space of v4.


It seems people are working on this: https://en.wikipedia.org/wiki/Interplanetary_Internet


Interesting (but probably not too surprising) to see that DTN research is alive and kicking!

I wrote a short seminar paper on DTN protocols as an undergrad in '09, so thanks for the trip down memory lane!


Thanks for posting this, reading about all the progress made with delay tolerant networking is exciting!


It's all about peering. Once they're in more exchanges, and it becomes cost effective to set up peering with them, more traffic will reach them directly as different providers set up peering to both reduce costs across other links and provide better access.


It all depends on peering; I get 13 hops to some networks in Europe, which is some 12,000 km away from here (7,500 freedom units).


Mentioned above, but it won't. The hops are up and down to boats instead of routers.


Routers and other equipment do not add very much delay unless packets sit around in the buffer, and for that you need congestion.


This brings up thoughts of The Hummingbird Project (a movie)

https://youtu.be/vnXLYr6U3bk


It's a bad movie.

The true story is that a company (Spread Networks) spent many millions building a low-latency fiber path between Chicago and NYC. Two guys built a lower-latency radio network that connected a number of cell towers together and were first to market, rendering that cable obsolete. Within months other people had radio networks up to try to compete (which gives an idea of how competitive that space is).


Excellent summary, thanks for that.

One thing to add is that Starship should enable 400 satellites per launch:

https://www.teslarati.com/spacex-president-teases-starship-s...


I am wondering about military applications here - a single missile with 400 warheads?


For military purposes, my idea for Starship would be the following. Starship aims to put a 150-tonne payload into orbit for a few million dollars. I would look at non-nuclear kinetic penetrators.

Put a 100-tonne tungsten rod into space (cost ~$10m), or more likely some cheaper metal. Maybe 10m long with a 25cm radius. Stick on a cold gas reaction control system, maybe some grid fins and a guidance package. Put it in orbit and put some ablative on the leading edge. Once in orbit, the cold gas system slows the weapon to put it on target. The grid fins and cold gas thrusters control the guidance. The grid fins wouldn't work for long before they burnt off, but recessed cold gas thrusters could continue to work. Ionisation and intense heat would stop any optical sensors or outside guidance, so aiming it would be problematic.

A kinetic warhead with a mass of 100 tonnes going at sub-orbital speeds (say 7000m/s) would have 2.45x10^12 joules, which is around 0.59kt of TNT. If I remember correctly, once it is going past a critical speed the penetrator and target act like fluids, and the rod would penetrate to a depth based on the relative density of the two materials times the length. Tungsten is about 6 to 12 times as dense as rock, so it might go through 60 to 120 meters of rock.

Non-nuclear, pretty hard to detect or stop. Competitive in price with cruise missiles. Accuracy could be a problem.
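
The arithmetic above checks out (the 100-tonne mass and 7000 m/s impact speed are the parent comment's assumptions):

```python
mass_kg = 100_000        # 100-tonne rod (assumed)
velocity_m_s = 7_000     # assumed sub-orbital impact speed
TNT_J_PER_KT = 4.184e12  # standard kiloton-of-TNT energy conversion

ke_joules = 0.5 * mass_kg * velocity_m_s ** 2
print(ke_joules)                           # 2.45e12 J
print(round(ke_joules / TNT_J_PER_KT, 2))  # ~0.59 kt of TNT
```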


Storing weapons in space is really scary. It's practically impossible to reliably detect when they are "fired", which makes the people with nuclear bombs waiting to fire back antsy, which increases the risk of a nuclear war accidentally starting. I hope no one carries out this idea of yours.

Using starship to launch rods that immediately come down on people instead of loitering in space sounds like a much safer use.


Those rods are designed for precision targets on the order of a few meters in diameter. I would much rather both sides of the conflict had those in play, than a nuclear alternative.


Unfortunately it's not a question of one or the other, but of nuclear or both.


Then both would be preferable. :)


> practically impossible to reliably detect

I'd say the opposite: it is extremely easy to detect and track their orbits. A single personal computer could probably track them all and calculate the potential strike points along their orbits.

Any attempt to deviate from an existing low earth orbit is extremely energy intensive. It also can't strike anywhere at will: it can be more than 24 hours before it's in position over the intended location.


It doesn't take much delta-v (energy) to move from a low Earth orbit onto a suborbital collision course. You don't kill all of your velocity, just enough of it that your orbit intersects Earth.

The orbital period for low Earth orbit is closer to 1 hour than 24, and you can further reduce the wait to only a dozen or so minutes by spacing rods out along the orbit... much like Starlink satellites.
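
A rough sanity check of that period, using the standard circular-orbit formula T = 2*pi*sqrt(a^3/mu); the 550 km altitude is an assumed Starlink-like orbit:

```python
import math

MU_EARTH = 3.986e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000   # mean Earth radius, m

def orbital_period_minutes(altitude_m):
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

print(round(orbital_period_minutes(550_000), 1))  # ~95 minutes for a 550 km orbit
```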


Yeah it's 1 hour but what if your orbit doesn't intersect with where you want to strike? What if the city is 5000 km _sideways_? Then you wait for the rotation of Earth to bring that location into your orbit path.

Keep in mind an orbit does not cover all of the Earth's surface. And unless the target is on the equator, there is no low earth orbit that can maintain its path over a target consistently.


I understand your point on tracking orbits.

But is detection actually easy? I don't know much about it; would it be done optically, or is there a better way? Are there concealment strategies? How about, say, painting the satellite black?


Otherwise known as the "Rods from God". https://en.wikipedia.org/wiki/Kinetic_bombardment


Nice link. I am not surprised that the concept is that well developed. I'm surprised they slow down to terminal velocity before impact.

Fun to think about but I also hope they don't build these things.


Terminal velocity for a streamlined rod of metal is pretty high. You're talking about tactical nuke levels of energy, delivered to a foot-wide spot on the ground.


De-orbiting them would consume a lot of delta-V.. plus if they're 'stored' in orbit they have a very narrow path where they can go in a certain period of time.


Could de-orbiting be partially managed by shooting them backward from the platform, using atmospheric resistance to bleed off the gained velocity of the platform?


Deorbiting them so their periapsis intersects with the earth is not hard from LEO.


If you don't mind aero capturing over multiple orbits that is.


They're already in orbit, though?


Every space rocket is automatically an intercontinental ballistic missile. But why launch one rocket with 400 warheads from a publicly known launchpad when you can launch 400 ICBMs with several warheads each from 400 undisclosed locations?


Not every rocket - liquid fuel ones only to a very limited, logistically awkward and costly degree. As the US and Soviets learned, keeping fleets of liquid fueled ICBMs ready to go is dangerous and expensive. Storable fully solid fuel ICBMs such as current-generation US/Russian stuff are quite different.


> each from 400 undisclosed locations

A point of clarification, submarines are the only nuclear platform with "undisclosed locations." It is trivial for a nation-state to observe the construction and fitment of a fixed ICBM placement, or track the movements of a truck/rail mounted launcher.


It is certainly not trivial, and definitely out of reach of most nation-states.

I'm sure the U.S. at least tries to keep track of Russian mobile launchers, but to what degree it succeeds we wouldn't know for sure. And I'm pretty certain Russia would have problems live-tracking launch vehicles (if the USA had any).


What prevents the usage of decoy trucks?


In a hot war with a country that has ASAT tech, being able to launch 400 low earth orbit sats -- low enough that they need to keep boosting, but also low enough that space debris is non-existent -- could make all the difference.


Is that better than existing MIRVs? And I sure hope there's less military escalation in the future, not more.


And as of 18:46 Pacific time, we have another fully successful launch. Second stage in nominal low orbit. The first stage has been recovered for the 5th time. All sixty satellites released.


>Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.

50% faster than light in fiber, fiber is 30% slower.


Mark Handley made a fantastic video about the ground relay system:

https://youtu.be/m05abdGSOxY


Although, I could see executive travel (limousine/private jets) and/or airliners and trains using them, if it works well logistically.


Maybe cruise ships or other ships as well.


But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps. In terms of normal everyday consumers, what are the implications of Starlink outside of rural areas?

PS: Thanks for the summary and NO Starlink phone. I have gotten sick and tired of people jumping to conclusions, suggesting Starlink will take over ISPs and mobile networks. And that is even on HN.


> But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps

Are you saying 700Mbps is slow? Or is that all the bandwidth that a single link can provide (meaning 700 divided by the number of subscribers)? Living in a flyover state, I was lucky at first to get much more than 10Mbps. Spectrum has now provided service to my area and I can get 400Mbps and am super happy.


Sorry, I was wrong. Total maximum capacity per satellite is 17Gbps as of v1.0. V2.0 will add even higher capacity plus inter-satellite communication.
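A back-of-envelope sketch of what 17Gbps per satellite means for subscriber counts. The plan speed and oversubscription ratio below are pure assumptions for illustration, not SpaceX numbers:

```python
# How many subscribers might one v1.0 satellite serve?
SAT_CAPACITY_GBPS = 17.0       # per-satellite figure from the comment above
PLAN_SPEED_MBPS = 100          # assumed advertised per-user speed
OVERSUBSCRIPTION = 20          # assumed ISP-style contention ratio

def users_per_satellite(capacity_gbps, plan_mbps, oversub):
    # Average concurrent demand per user = plan speed / contention ratio
    avg_demand_mbps = plan_mbps / oversub
    return int(capacity_gbps * 1000 / avg_demand_mbps)

print(users_per_satellite(SAT_CAPACITY_GBPS, PLAN_SPEED_MBPS, OVERSUBSCRIPTION))
# 17,000 Mbps / 5 Mbps average demand -> 3400 users per satellite
```

A few thousand users per satellite is plenty for rural coverage but nowhere near urban density, which matches the summary's point.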


"go for launch..." she says the decision was made by Falcon 9 autonomously. Not sure what that means, but sounds badass.

https://www.youtube.com/watch?v=y4xBFHjkUvw&feature=youtu.be...


From 1 minute before liftoff, the computers in the Falcon 9 take control of the whole process. From then on those computers monitor all relevant sensors to decide go vs no go, and control the ignition etc.


> The 60-satellite train should be visible in the sky over the SF bay area around 9:30 tomorrow night if the launch goes off as scheduled:

I thought they were redesigning the satellites to be nearly invisible from the ground so as not to disrupt astronomy? Or is that later launches?


They added sunshades which should make the satellites nearly invisible from the ground, but they are only effective once the satellites have assumed their final orientation and orbit. They start in a much lower orbit and will likely be very visible in the first few days after launch. Then they enter a phase of orbit raising during which they deliberately rotate to be less visible, though they can still sometimes be seen until they reach their destination several weeks later.


> Today is the 7th launch of v1 satellites. SpaceX is hoping to do more than two launches per month but haven't reached that pace yet

Isn't this the 8th? Jesse the Engineer said it was, too. The stage 1 is also on its 5th mission, so rad!


The "first" launch was v0.9 satellites. There was also an earlier launch of prototype satellites. Numbering of Starlink launches is a little confused as a result.


Yeah, Tintin 1/2, so is it 7th of 14 or 8th?

Edit: stage 1 recovery success, and that was perhaps the clearest image of it landing I have ever seen, and I've been watching since 2015! Also, I miss the shots of LOX in zero G they used to do back then.


No, Tintin 1 and 2 were even earlier prototypes that piggybacked on the Paz launch.

There was also a full 60 satellite stack of v0.9 satellites that aren't counted towards the "production" constellation.


Ah, so that's what makes it 7th launch of 14 then?

Now I remember this launch last year around this time, and some of them were not able to maintain their orbit, but weren't the majority of them operational?


Genuinely curious: any idea what the carbon footprint of this number of launches would be, as a ballpark?


The first and second stages of a Falcon 9 use a combined ~163 000 litres (43 000 gallons) of RP-1 fuel. This is less than the fuel capacity of a Boeing 777-300ER.

So if the Starlink launches happen twice a month, the carbon footprint from the fuel would be at least an order of magnitude smaller than that of a single long-haul airline route.

Building the satellites and the rocket (including the expendable second stage) is obviously pretty carbon intensive too, but fuel is probably what you were thinking about when posing this question.
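Putting rough numbers on that, using the ~163,000 L figure from above. The density and emission factor are textbook approximations for kerosene, not SpaceX figures:

```python
# Ballpark CO2 from one Falcon 9 launch's RP-1 load.
RP1_LITRES = 163_000            # figure quoted in the comment above
RP1_DENSITY_KG_PER_L = 0.81     # RP-1 is roughly 0.81 kg/L
CO2_PER_KG_FUEL = 3.15          # kerosene combustion, ~3.15 kg CO2 per kg

fuel_kg = RP1_LITRES * RP1_DENSITY_KG_PER_L     # ~132,000 kg of fuel
co2_tonnes = fuel_kg * CO2_PER_KG_FUEL / 1000   # ~416 t CO2 per launch
print(f"{co2_tonnes:.0f} tonnes CO2 per launch")
# At two launches a month, that's roughly 10,000 t CO2 per year.
```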


Everyday Astronaut on YouTube has a great video on the environmental impact of rockets. 55 minutes long but it's fascinating. The TL;DW is they're not as bad as you'd expect. But not zero-emissions, obviously.

https://www.youtube.com/watch?v=C4VHfmiwuv4


Starship gonna run on methane so most likely going to be carbon neutral.


Methane is a fossil fuel, so no way burning it is carbon neutral.


It'll be pretty big news if the methane they use to refuel Starship on Mars originates from a fossil source :D


Elon has discussed using the sabatier process on earth to create methane from CO2 to fuel it. Presumably the motivation would be a mix of PR and practicing for mars.
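The Sabatier reaction is CO2 + 4 H2 -> CH4 + 2 H2O, so the carbon books balance by construction. A minimal stoichiometry sketch (standard molar masses, nothing Starship-specific):

```python
# Mass of CO2 that must be captured per kg of methane produced
# via the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
M_CO2 = 44.01   # g/mol
M_CH4 = 16.04   # g/mol

def co2_per_kg_methane():
    # 1 mol of CO2 yields 1 mol of CH4, so the mass ratio is M_CO2/M_CH4
    return M_CO2 / M_CH4

print(f"{co2_per_kg_methane():.2f} kg CO2 captured per kg CH4")  # ~2.75
```

Burning that methane releases the same ~2.75 kg of CO2 back, which is what makes the cycle carbon neutral (ignoring the energy source powering the process).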



ISRU methane fuel for starship is going to be eventually done on earth (and mars) for carbon neutral starship / superheavy launches.

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/201200...

TL;DR: If you take methane out of the atmosphere and you put it back in the atmosphere, the carbon footprint is neutral.


> The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.

Considering the pay-over-time model that SolarCity implemented, should we not expect the same here? Assuming the user antenna can be removed, it can be rented, just like a modem or router.

Relying on the antennas having a long ‘useful life’, it may not be too prohibitive if rented out.


The hardware for existing geostationary consumer-grade (cheap, sub $150/month service) Ku and Ka-band VSAT terminals, for use in really remote locations in the USA, is about a $800 to $1200 cost. It's absorbed into 24/36 month contract terms.

People who live in a really remote place and sign up for a 24-month term for some barely-usable VSAT service are usually disappointed to find out how firmly they're locked into the contract when somebody builds a WISP in their area.

I would not be surprised if there's a terminal rental charge or 12/24/36 month contract terms offered.


>Considering the pay-over-time model that SolarCity implemented, should we not expect the same here?

Considering the pay-over-time model essentially bankrupted SolarCity, forcing Tesla to buy them out, for which Tesla are being sued by shareholders...I'm not sure that's the model to resurrect.


Random thought: are the satellite trajectories "dense enough" that, in the future, ships/people trying to leave Earth will have to get a delay/launch window, or the Starlink satellite(s) may get diverted around the area you're flying through?


No, think of the sky in terms of the surface of the earth. The Starlink satellites' paths are like a few highways stretched across the surface. Not only is each one a tiny sliver cut across the globe, it's also at a very specific LEO altitude, so a spacecraft wouldn't even stop there in the vast majority of cases.
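A rough sketch of how empty the shell actually is. The per-satellite cross-section here is a guess (~25 m^2 including the solar array), and the constellation size is the planned first-phase figure:

```python
# What fraction of the 550 km orbital shell do the satellites occupy?
import math

EARTH_RADIUS_KM = 6371
ALTITUDE_KM = 550
NUM_SATS = 12_000              # planned first-phase constellation
SAT_AREA_M2 = 25               # assumed cross-section per satellite

shell_radius_m = (EARTH_RADIUS_KM + ALTITUDE_KM) * 1000
shell_area_m2 = 4 * math.pi * shell_radius_m ** 2
fraction = NUM_SATS * SAT_AREA_M2 / shell_area_m2
print(f"fraction of shell occupied: {fraction:.2e}")  # ~5e-10
# A ship could cross the shell on the order of a billion times before
# expecting a single hit, even ignoring collision-avoidance maneuvers.
```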


I see yeah I meant just flying through it. It would help to see a scale model of the completed orbits as it looks "worse" than it is. I'm not against it, though it would be nice if it was free ha.

side note: not related but the Eccentric Orbits book about Iridium was really good imo.


Well, a scale model would be just the Earth; you'd basically never even see a satellite because they are so small.


Right, maybe they're emphasized to show the orbital pattern, like this [1].

Or, if it were to scale, you couldn't see them at all.

Which I suppose even so, that ship would have to be unrealistically massive to have problems flying through one of the gaps.

[1] https://cdn.geekwire.com/wp-content/uploads/2019/02/190208-s...


This [1] is a good youtube explanation of how a hybrid between Starlink and ground stations could still improve connectivity:

[1] https://youtu.be/m05abdGSOxY


>The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.

One of Starlink's competitors, OneWeb, was able to acquire a user antenna which is apparently a breakthrough in cost reduction at $15 [0]. I would assume the manufacturer has a contractual agreement with OneWeb, but I can imagine that Starlink could develop a similarly priced antenna with its greater resources.

[0] https://spacenews.com/wyler-claims-breakthrough-in-low-cost-...


> OneWeb was able to aquire a user antenna which is apparently a breakthrough in cost reduction at $15

Greg Wyler, the founder of Oneweb, has a history of making bold claims that are not really supported by facts.


To be fair, so does SpaceX's.


The difference being that Elon tends to actually follow through on at least some of his bold claims.


The OneWeb antenna was never produced or seen: it was a sound bite from the CEO, and they're now bankrupt.


Just curious: have you been using zettelkasten, org-roam, or a similar tool to take notes whenever you came across this topic, and are you now publishing the compiled version?


What would be likely frequency range for the laser transmission? What would the send/receive stations look like?


Very interesting set of facts


> They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.

I'll be surprised if this doesn't shrink. I worked for a company in the mid-2000s that had some pretty cool IP which effectively shrank a briefcase-sized BGAN terminal down to a Pocket PC (pre iPhone days!) with Pocket PC-sized antenna strapped on the back.


People have worked on phased arrays for decades. It's not going to shrink much, and I don't think it'll be any less expensive.
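For context on what a phased array actually does: it steers the beam electronically by applying a progressive phase shift across its elements. A minimal sketch for a uniform linear array; the frequency and spacing are illustrative Ku-band-ish values, not actual Starlink terminal parameters:

```python
# Per-element phase shift needed to steer a linear phased array.
import math

C = 299_792_458.0              # speed of light, m/s
FREQ_HZ = 12e9                 # ~Ku-band downlink (assumed)
WAVELENGTH = C / FREQ_HZ       # ~25 mm
SPACING = WAVELENGTH / 2       # half-wavelength spacing avoids grating lobes

def element_phase_deg(element_index, steer_angle_deg):
    """Phase shift (degrees) applied to element n to steer the beam."""
    delta = (2 * math.pi * SPACING
             * math.sin(math.radians(steer_angle_deg)) / WAVELENGTH)
    return math.degrees(element_index * delta) % 360

# Steering 30 degrees off boresight: each element offset by 90 degrees
print([round(element_phase_deg(n, 30), 1) for n in range(4)])
# [0.0, 90.0, 180.0, 270.0]
```

The physics is decades old; the open problem is making thousands of these phase-shifting elements cheap, which is why the terminal cost is the hurdle.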


People have worked on vehicles to get us to space for decades. It's not going to shrink much, and I don't think it'll be any less expensive. ... enter SpaceX


Except Elon just admitted this:

https://www.businessinsider.com/elon-musk-spacex-starlink-sa...

The satellites were never the challenging part; cross-links and the user terminal were always the most difficult pieces.


so... it'll be raining satellites?


Sure, just like it rains meteors now. But they are specifically designed to completely burn up during reentry.


Sort of, but none of the debris will reach the ground. The current version of Starlink is designed to burn up completely in the atmosphere upon re-entry.


Do they need to re-enter? Could they not be directed out into the void?


That would require lots of energy to escape the planet's pull, vs. just waiting until gravity does its thing.
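The gap is easy to quantify with the vis-viva equation. A minimal sketch from a ~550 km orbit; it ignores atmospheric drag, which actually does most of the deorbit work for free:

```python
# Delta-v to escape Earth vs. delta-v to deorbit, from a 550 km orbit.
import math

MU = 398_600.4418      # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0       # km
r = R_EARTH + 550      # orbit radius, km

v_circ = math.sqrt(MU / r)            # circular orbital speed, ~7.6 km/s
v_escape = math.sqrt(2 * MU / r)      # escape speed at that radius
dv_escape = v_escape - v_circ         # ~3.1 km/s to leave Earth entirely

# Deorbit: burn to drop perigee to ~50 km, where drag finishes the job
r_perigee = R_EARTH + 50
a = (r + r_perigee) / 2               # transfer-orbit semi-major axis
v_after_burn = math.sqrt(MU * (2 / r - 1 / a))   # vis-viva at apogee
dv_deorbit = v_circ - v_after_burn    # ~0.14 km/s

print(f"escape: {dv_escape:.2f} km/s, deorbit: {dv_deorbit:.2f} km/s")
```

Roughly 20x more propellant-hungry to escape than to deorbit, before even counting that drag alone will deorbit a dead Starlink satellite within a few years.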


Thanks for the explanation!


What sort of components are the satellites made of? Are we all happy with the idea that "burning up" won't have long-term issues, with not-completely-burnt-up parts floating around the sky and eventually being breathed/drunk/eaten in the biosphere?

I'm trying to ask this in a non-hyperbolic way. I don't want to sound alarmist and I'm no "chemtrail" loon, but "burning up" doesn't just make it magically go away.


Any parts that didn't completely burn up (of which, again, there shouldn't be any) would not float, they would fall. But it's a good question whether the burned-up mass would have any bad effects. My intuition is that the amount of mass is so minuscule in comparison to the whole atmosphere that it really doesn't make a difference what the satellites are made of, as long as it's not radioactive. But I haven't done any math on it.


In a short-term view, yes, I agree, but it's more than just Starlink (we have all sorts of satellites made of highly refined materials falling from the sky), and it's over years... decades even.

I haven't seen articles on this, so either it's being ignored or it's not an issue. :-/

I hope it's the latter.


I strongly suspect it's the latter - since "Every day, Earth is bombarded with more than 100 tons of dust and sand-sized particles." [1] and the satellites burning up aren't going to approach that mass simply due to launch costs.

[1] https://www.nasa.gov/mission_pages/asteroids/overview/fastfa...


The difference for me is that the stuff we put up there is highly refined material, not just a load of unrefined rock dust.

I wonder how the amount of uranium/plutonium we let "burn up" in the biosphere compares to what we pick up while cruising through space...


There is no uranium or plutonium in these satellites, nor anything else radioactive. On the other hand, according to [1] the abundances on this page multiplied by [2] 40000 tons per year, the stuff naturally hitting the atmosphere each year includes 0.3 kg uranium, 1.6 kg thorium, 3 kg radioactive isotopes of potassium, 10 kg mercury, 18 kg cadmium, 50 kg lead, 72 kg arsenic, and 120 tons of chromium, among other things. This stuff burns up in the atmosphere all day long, and has for billions of years.

[1] https://periodictable.com/Properties/A/MeteoriteAbundance.v.... [2]https://link.springer.com/chapter/10.1007%2F978-1-4419-8694-...
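The arithmetic behind those figures is just abundance times total influx. A sketch reproducing a few of them; the abundance values are rough ppb/ppm numbers chosen to match the figures cited above, so treat them as assumptions rather than authoritative data:

```python
# Mass of each element reaching the atmosphere per year, from
# meteorite abundance (mass fraction) x ~40,000 t/yr total influx.
INFLUX_KG_PER_YEAR = 40_000 * 1000   # 40,000 tonnes in kg

abundances = {                 # assumed mass fractions
    "uranium":  7.5e-9,        # ~7.5 ppb  -> ~0.3 kg/yr
    "mercury":  2.5e-7,        # ~0.25 ppm -> ~10 kg/yr
    "arsenic":  1.8e-6,        # ~1.8 ppm  -> ~72 kg/yr
    "chromium": 3.0e-3,        # ~0.3%     -> ~120 t/yr
}

for element, frac in abundances.items():
    print(f"{element}: {INFLUX_KG_PER_YEAR * frac:.3g} kg/yr")
```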


Well Starlink will be the majority of all satellites pretty soon, so it really only matters for Starlink for the foreseeable future, if it matters at all.


Only the v0.9 satellites launched on 24 May 2019 did not have interlink. All the satellites after that did have interlink.


None of the launched v1 satellites have laser crosslinks.


You are right.



