The combination of low latency (20-40ms) and high bandwidth (100+Mbps) has never been available in satellite internet before.
A public beta may start later this year for some users in the northern US, around the 14th launch. Today is the 7th launch of v1 satellites. SpaceX is hoping to do more than two launches per month but hasn't reached that pace yet.
The ground stations look like this: https://www.reddit.com/r/SpaceXLounge/comments/gkkm9c/starli...
The user antennas can be seen in that picture; they are the smaller circular things on black sticks. They are flat phased array antennas, and don't need to be precisely pointed like satellite dishes do. They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.
The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
Starlink can't support a high density of users, so it will not be an alternative to ISPs in cities. Rural and mobile use are the important applications. The US military is doing trials with it. Cell tower backhaul may also be a possibility.
Starlink V1 doesn't have cross-satellite links, so the satellites can only provide service while over a ground station. There will be hundreds of ground stations in North America; no information about other regions yet. Starlink V2 is planned to have laser links between satellites, which will enable 100% global coverage 24/7, though local regulations are likely to prevent SpaceX from providing service in many places.
Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
Each launch of 60 Starlink satellites has nearly as much solar panel area as the International Space Station. Once SpaceX's Starship is operational they should be able to launch several hundred satellites at once instead of just 60.
Starlink's only current competitor, OneWeb, just filed for bankruptcy after only launching a handful of satellites, and is fishing for acquisition offers. Amazon is also planning something called Project Kuiper but not much is known about it.
Starlink V2 will have 30,000 satellites, requiring hundreds of launches. Even once the initial fleet is launched, SpaceX will still need to maintain the constellation with many launches per year indefinitely.
> Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
Not to mention the dramatic reduction of hops. My route from the USA east coast to the university of Melbourne in Australia[1] is 30 hops reported by traceroute, with at least as many switches in the way. You could make the same link with only a few satellites.
[1]www.ie.unimelb.edu.au
EDIT: 30 is actually just the default max hops in traceroute; it's really 32 hops from me to Melbourne.
I'm in Switzerland and have 25 hops, which can be broken into:
- 1-7: Hops within my ISP's in-country network (~4ms total latency)
- 8-10: Hops within my ISP's in-Europe network (~28ms total latency)
- 11: London -> New York (~93ms total latency)
- 12: New York -> Los Angeles (~160ms total latency)
- 13: Transfer in LA from my ISP to AARNet (about the same latency)
- 14: LA to somewhere in NSW (guessing Sydney, 305ms total latency)
- 15-25: Routing within AARNet and Unimelb (319ms total latency)
So most of the latency looks to be attributed to large hops across oceans rather than internal switching. Even if you could narrow it down to London -> NY -> LA -> NSW you'd have 277ms.
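For reference, here's a quick back-of-the-envelope sketch comparing ideal great-circle latency over that London -> NY -> LA -> Sydney path in vacuum versus fiber. The city coordinates are rough approximations I picked for illustration, not taken from the traceroute:

```python
from math import radians, sin, cos, asin, sqrt

C_KM_PER_MS = 299_792.458 / 1000  # speed of light in vacuum, km per millisecond
FIBER_INDEX = 1.47                # typical refractive index of optical fiber

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two points on Earth, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Approximate city coordinates (my rough picks, for illustration only)
legs = [
    ((51.51, -0.13), (40.71, -74.01)),     # London -> New York
    ((40.71, -74.01), (34.05, -118.24)),   # New York -> Los Angeles
    ((34.05, -118.24), (-33.87, 151.21)),  # Los Angeles -> Sydney
]

total_km = sum(haversine_km(*a, *b) for a, b in legs)
vacuum_ms = total_km / C_KM_PER_MS   # one-way, straight line in vacuum
fiber_ms = vacuum_ms * FIBER_INDEX   # one-way, ideal great-circle fiber

print(f"{total_km:.0f} km; one-way: {vacuum_ms:.0f} ms in vacuum, {fiber_ms:.0f} ms in fiber")
```

Even a perfect great-circle fiber run over those legs works out to roughly 105 ms one-way (~210 ms RTT), so the observed 277 ms isn't far from the physical floor; a vacuum path would shave off roughly another third.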
From my university network in Germany, I seem to get a direct London → Perth link and a total latency of 278ms. I couldn't find any information about a direct fiber between London and Australia, though there is one from London to Singapore, and AARNet seems to have presence in Singapore. My guess would be that there is some switching below the IP layer going on in Singapore as described by dicknuckle in a sibling comment.
9 cr-fra2-be11.x-win.dfn.de (188.1.144.222) 15.360 ms 15.385 ms dfn.mx1.fra.de.geant.net (62.40.124.217) 15.021 ms
10 ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.806 ms dfn.mx1.fra.de.geant.net (62.40.124.217) 15.244 ms ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.707 ms
11 ae9.mx1.lon.uk.geant.net (62.40.98.129) 29.059 ms ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.805 ms ae9.mx1.lon.uk.geant.net (62.40.98.129) 28.933 ms
12 138.44.226.6 (138.44.226.6) 196.613 ms ae9.mx1.lon.uk.geant.net (62.40.98.129) 29.074 ms 29.156 ms
13 138.44.226.6 (138.44.226.6) 196.807 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28) 277.463 ms 138.44.226.6 (138.44.226.6) 196.778 ms
14 138.44.64.73 (138.44.64.73) 277.544 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28) 277.629 ms *
Yeah you're right, "pe1" shows up in many of their hostnames with different states in the name. And their looking glass shows a bunch of hops within Australia, none of which show up when I do a traceroute.
Huh, my ISP is Init7 and, as I said, it takes 25 steps to get there, though steps 18-24 inclusive show as "waiting for reply" in MTR.
Init7's traceroute [0] shows 5 fewer steps to r1lon2.core.init7.net than my traceroute though and appears to route through r1bsl1 (assuming Basel) instead of Frankfurt.
Perhaps you're in/near Basel and so skip straight to London, circumventing my 8 hops around Zurich?
Those numbers seem awfully high. I've just run a speed test between Germany and California and got 170ms total latency. Back when rabb.it [0] was alive I could even get reaction scores of 450ms, compared to 230ms when running the test on my computer. This is quite impressive when you consider that it means video encoding and decoding must happen in less than 50ms. Conventional video encoding is usually slower than real time.
> Because the speed of light in a vacuum is 30% faster than in optical fiber
Non-Australians, don't get too excited. Those satellites are at 340 miles, so a hop up and back down adds up to 680 miles of path (3.66 ms), plus two to three Starlink hops, which cancels out some of that "speed in a vacuum."
The way to get almost c on earth is via direct microwave links.
The minimum latency is 3.66 ms, but depending on the angle to the satellites involved it may be adding significantly less than 680 miles to the journey. I.e., if it's 340 miles east and 340 miles up, that's 480 miles to the satellite, adding just 140, not 340, miles.
Outside of HFT, most networks are far from the shortest great-circle routes between you and the other end, which further compounds the issue.
You misunderstood. (Ignoring the earth being a sphere.)
A satellite directly overhead means you need to travel to that altitude. However, if it's not overhead, light is traveling the hypotenuse of a right triangle where x is the distance to the point underneath the satellite and y is the altitude of the satellite. That distance is sqrt(x^2 + y^2). From there you need to travel to a different base station.
Assuming an ideal path where the satellite is directly between two locations that are 680 miles apart, that adds up to 2 * ( sqrt(340^2 + 340^2) ) ~= 961.7 miles vs 340 + 340 miles, or an added 281.7 miles not 680. In other words 1.414x the distance rather than simply adding 680.
Clearly the earth is not flat and you're very unlikely to be in that exact situation, but assuming you can reach several satellites at the same time, it is likely one of them will be roughly in the direction you want to go.
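The arithmetic above is easy to check numerically (same flat-earth sketch: 340-mile altitude, satellite midway between two endpoints 680 miles apart):

```python
from math import hypot

altitude = 340.0  # satellite altitude in miles (figure from the thread)
ground = 680.0    # ground distance between the two endpoints, miles

# Flat-earth sketch: up to a satellite midway between the endpoints, then
# back down, i.e. two hypotenuses of a 340 x 340 right triangle.
via_sat = 2 * hypot(ground / 2, altitude)
added = via_sat - ground
print(f"{via_sat:.1f} miles via satellite, only {added:.1f} extra miles")
```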
There's probably double or triple the switches in between. They're not really switches though; more like super fast packet routers. The latency is much less than any normal or carrier-grade switch. They don't touch the IP layer at all. (I manage a bunch of them.)
one of the things I keep telling non-network-eng people is that looking at a traceroute gives you absolutely no way of knowing what the OSI layer 2 topology is for transport networks, MPLS, switches, DWDM circuits carried as lit services "glassed through" a location, etc. Just the IP network.
Discovering the underlying layer 2 topology of a carrier's network requires inside information and cannot be easily discerned sitting at a computer elsewhere on the internet. You might see two routers that appear to be directly adjacent to each other but it's actually carried as a 10Gbps VLAN across a several-state sized region between two cities many hundreds of km apart, with a lot of intermediate equipment in between.
Do you have any recommendations for where to learn more about network eng for us non-network-engs? My experience is in DevOps (on k8s, mostly) and systems/application programming, if that helps pinpoint any advice you could give. I’m trying to get the bigger picture like network engineers understand, as well as start to understand the layer 2 topology that you described.
The traditional and hard way is to start as a first-tier NOC person for an ISP and work one's way up. Or to start as a field tech for an ISP and show skills worthy of getting promoted (perhaps into a NOC job or a manager-of-field-techs job). That method takes quite a while.
Medium to large sized ISPs have a very large amount of BSD/GPL/Apache/misc licensed software running to support back end monitoring and provisioning systems. They do occasionally hire software developers to customize things for their environments, so it certainly wouldn't hurt to reach out to the noteworthy ones in your region and try giving them your CV.
You could build a playground internet with some VPN-connected boxes which talk BGP and route 10.0.0.0/8 between them.
It's a lot of work, but a friend of mine started such a thing and I have learned _so much_ about networking on layers 2 and up from that.
Start by installing a BGP deamon on two boxes that share a network and see what you can do :)
Ditto. Would love to figure out if there's a good way of mapping out layer 1/2 connectivity from a single computer as well. Would be really, really fascinating.
If you want layer 1 (physical location of dark fiber) you have to get the GIS dataset from the carriers. The data sets exist, but they're often protected as proprietary information or under NDA. I have a QGIS setup with a ton of stuff in it.
There are other ways of acquiring layer 1 data which are labor intensive and involve the equivalent of filing FOIAs for construction permits with local city and county agencies, etc.
Big facilities based ISPs that have a lot of fiber out there underground and aerial make extensive use of GIS software. Their construction groups will have their own full time GIS staff positions.
Actually, you don't want to do that. It costs money and you really should have a good door between you and the equipment or you will go nuts from the noise.
Most of the interesting stuff can be setup using Linux, OpenvSwitch, FRR/ BIRD, network namespaces/ VRF and more. It is all in FOS Software - so more or less zero cost. Of course, you will not learn how to configure a Catalyst Switch that way. For some stuff, there are virtual appliances that you can spin up with KVM/ QEMU but most of the enterprise stuff has to be bought. Again, at that time, you will have a solid understanding of what should be happening and will know what to look for in the documentation. The rest is field experience with firmware bugs, methods how to approach some problems and syntactic sugar of the particular equipment. At least that is my view.
I've seen ISP routers massively increasing latency in conjunction with packet loss -- had a problem with one provider recently where packet loss would increase every evening, and so would the rtt. Suggests large buffers on the congested part. The reverse DNS of the given IP suggested it was a gig-e connection (my side of the hop, which was also into LINX, is 100G)
Of course you can hide a router by not decrementing the TTL as it passes through your network at layer 3, you can hide the IP by not responding with ICMP expired messages
An ICMP could well return on a different path to the direction it was sent, with a path like this
Traceroute will only show the outbound route, so you should traceroute from both ends
The latency will also be affected by ICMP generation on the router, which could be delayed, rate limited, dropping, etc.
Doing a quick traceroute to a host of mine in Sydney, from London, shows
5ms to i-91.ulco-core02.telstraglobal.net 202.40.148.33
82ms to i-10104.unse-core01.telstraglobal.net 202.84.141.145
132ms to i-10601.1wlt-core02.telstraglobal.net 202.40.148.106
277ms to i-10406.sydo-core04.telstraglobal.net 202.84.141.226
sydo will be sydney
To get the exact map I could talk to Telstra (in this case I peer with telstra directly in London)
Or I could look at telstra's map, which ddg helpfully tells me is at
Which tells me 1wlt is LA. unse will thus be east coast U.S. I'd have expected routing via Singapore.
There's no way to know which way the traffic is actually going without asking Telstra.
I have 2 ethernet circuits from the UK to Washington DC; to me it looks like two layer 2 1500-MTU circuits. Only by talking to the provider can I work out which circuits it actually travels on transatlantically. It's supposed to be separate, but latency changed by 2ms a few days ago. Asked them about it, and there was a failure in their network; they rerouted in a few hundred milliseconds (which isn't good, as now both diverse circuits run via the same equipment, thus any issue, like another 100ms outage, will cause an impact).
>I have 2 ethernet circuits from the UK to Washington DC, to me it looks like two layer 2 1500MTU circuit. Only by talking to the provier can I work out which circuits it actually travels on trans atlanticly. It's supposed to be separate, but latency changed by 2ms a few days ago. Asked them about it, and there was a failure in their network, they rerouted in a few hundered milliseconds (which isn't good as now both diverse circuits run via the same equipment, thus any issues like another 100ms outage will cause an impact)
Buying two MPLS circuits (assuming they are that) from the same provider is a single point of failure; your only real redundancy is your handover at each site, if they are separate.
You're better off buying an optical link; then you are also not sharing bandwidth with anyone else. The cost isn't that different in my experience, but might be for a trans-Atlantic circuit. IPsec over normal internet connections with multiple ISPs is a better choice.
Don't get me started on that. However the two circuits are sold as a fully resilient pair (different tails from different DCs arrive into the building in Washington different directions with guaranteed different paths to the DCs, then in contract it's guaranteed neither circuit will run via the same equipment or on the same submarine circuit. In the UK they arrive in two separate cities)
However I know reality and contracts never meet; unfortunately in a large dysfunctional organization there are other considerations in circuit procurement than technical requirements.
As for internet circuits, I had two ISPs in NY on two separate paths, which is great. Something went wrong about a month ago, and the routing from one of the ISPs changed, meaning that we were back to a single point of failure who we have no business relationship with
We had a partner that bought a redundant MPLS service from their DC to another DC (not their own) and to AWS Direct Connect via that DC. Their MPLS provider was a SPOF, the provider for the second DC was a SPOF, and the Direct Connect circuits terminated in the same AWS router, another SPOF. All 3 SPOFs had caused their own significant downtime within a few months.
This was designed by the partner, contract signed by us, the whole solution paid by us, without consulting any network engineer on our side until 1 week before it was supposed to be implemented. We proposed two different solutions that would cost less than 1/20 of the cost their solution was costing us ($20k/month), while providing real redundancy (yet to be implemented, though some SPOFs solved by now). Never enjoyed outages as much as this as we got to explain why their solution was so bad each time.
Choir here, although in our case the connectivity was designed by a third party consulting firm, against the advice of long standing employees with experience.
The third party consulting firm are long gone, as are the people high up in the company who brought them in.
Now there's new people high up who come up with the same problems, but in a slightly different way, arguing how their basket is far better than the previous basket.
Massive latency and increase in packet loss, in correlation with a hop between providers, is usually an overly congested peering or transit session. Particularly if noticed most often during evening peak hours. Some people aren't paying attention to their traffic charts and monitoring systems, and aren't upgrading circuits as needed.
Yes, I mean it can be obvious when ISPs have proper consistent reverse DNS with hostnames and POP designations. You might see a router in Sacramento that is a neighbor of something in Portland. Obviously there's a lot of stuff in between there but it's all abstracted away at the purely layer-3 IP network level.
At the small end, a juniper MX204 is a good example of a current-gen router that has enough RAM to take multiple full BGP tables, and has a sufficient number of 10 and 100GbE interfaces.
ISP core networks typically have two components, routers and dwdm/opto equipment. The routers are not really special, just your regular ISP core/edge router, adding roughly the same delay for each hop (invisible mpls hops or not) on the order of microseconds.
Opto equipment adds practically no delay as they just forward the light without looking at it or processing it. There might be redundant paths that it can switch between if there's a fiber cut.
I wonder how many hops it will actually remove. I just tested it, and from my apartment in Mountain View, CA to my VPS in a data center ~20 miles away in Fremont, CA I get 14(!) hops.
Hopefully a lot! Fun fact, IPv4 and IPv6 have a maximum network size. Because of the TTL field in v4 (renamed max hops in v6), a path through an IP network can never be more than 255 hops long.
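A toy sketch of why the TTL byte caps path length (a hypothetical simulation, not real packet handling):

```python
def forward(initial_ttl, hops_needed):
    """Toy model: each router decrements TTL; the packet is dropped at zero."""
    ttl = initial_ttl
    for _ in range(hops_needed):
        ttl -= 1
        if ttl == 0:
            return False  # router would send ICMP Time Exceeded and drop it
    return True

assert forward(64, 30)        # a typical default TTL covers 30 hops easily
assert not forward(255, 300)  # no TTL value can survive a 300-hop path
```

Since the field is 8 bits, 255 is the hard ceiling no matter what the sender sets.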
Hops are artificial, as long as you are 100% sure you’re not introducing loops you can just tell routers not to decrease the TTL and they’ll be invisible.
It’s just that if you do that and you do introduce a loop the packets will keep looping and the network will very quickly overflow.
And just like NAT with IPv4, we will use hacks to get around implementing a proper solution for decades (and counting).
With satellite internet taking off, and actual interest in extraterrestrial colonies from e.g. NASA and SpaceX, I think we need a proper space communication protocol. Maybe it can all be solved in L2, but I think we will soon be looking at 255 hops the way we look at the limited address space of v4.
It's all about peering. Once they're in more exchanges, and it becomes cost effective to set up peering with them, more traffic will reach them directly as different providers set up peering to both reduce costs across other links and provide better access.
The true story is that a company (Spread Networks) spent many millions building a low-latency fiber path between Chicago and NYC. Two guys built a lower-latency radio network that connected a number of cell towers together and were first to market, rendering that cable obsolete. Within months other people had radio networks up to try to compete (to give an idea of how competitive that space is).
For military purposes, my idea for Starship would be the following. Starship aims to put a 150-tonne payload into orbit for a few million dollars. I would look at non-nuclear kinetic penetrators.
Put a 100-tonne tungsten rod into space (cost ~$10m), or more likely some cheaper metal. Maybe 10m long with a 25cm radius. Stick on a cold gas reaction control system, maybe some grid fins, and a guidance package, and put some ablative on the leading edge. Once in orbit, the cold gas system slows the weapon to put it on target. The grid fins and cold gas thrusters control the guidance. The grid fins wouldn't work for long before they burned off, but recessed cold gas thrusters could continue to work. Ionisation and intense heat would stop any optical sensors or outside guidance, so aiming it would be problematic.
A kinetic warhead with a mass of 100 tonnes going at sub-orbital speeds (say 7000m/s) would have 2.45x10^12 joules, which is around 0.59 kt of TNT. If I remember correctly, once it is going past a critical speed the penetrator and target act like fluids, and the rod penetrates to a depth based on the relative density of the two materials times its length. Tungsten is about 6 to 12 times as dense as rock, so it might go through 60 to 120 meters of rock.
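The energy figure checks out (using the standard convention of 1 kt TNT = 4.184e12 J):

```python
m = 100_000.0      # kg (100 tonnes)
v = 7_000.0        # m/s, the sub-orbital speed assumed above
KT_TNT = 4.184e12  # joules per kiloton of TNT

ke = 0.5 * m * v**2  # classical kinetic energy
print(f"{ke:.3g} J, about {ke / KT_TNT:.2f} kt of TNT")
```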
Non-nuclear, pretty hard to detect or stop. Competitive in price with cruise missiles. Accuracy could be a problem.
Storing weapons in space is really scary. It's practically impossible to reliably detect when they are "fired", which makes the people with nuclear bombs waiting to fire back antsy, which increases the risk of a nuclear war accidentally starting. I hope no one carries out this idea of yours.
Using starship to launch rods that immediately come down on people instead of loitering in space sounds like a much safer use.
Those rods are designed for precision targets on the order of a few meters in diameter. I would much rather both sides of the conflict had those in play, than a nuclear alternative.
I'd call the opposite: it is extremely easy to detect and track their orbits; a single personal computer could probably track them all and calculate the potential strike points along their orbits.
Any attempt to deviate from an existing low earth orbit is extremely energy intensive. It also can't strike anywhere: it can be more than 24 hours before it's in position over the location it wants to be.
It doesn't take much delta-v to move a low Earth orbit onto a suborbital collision course. You don't kill all of your velocity, just enough of it that your orbit intersects Earth.
The orbital period for low Earth orbit is closer to 1 hour than 24, and you can further reduce that to only a dozen or so minutes by spacing rods out along the orbit, much like Starlink satellites.
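Kepler's third law bears this out; a quick sketch for a Starlink-like 550 km circular orbit (the altitude is my assumption, picked for illustration):

```python
from math import pi, sqrt

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def orbital_period_min(altitude_m):
    """Circular-orbit period T = 2*pi*sqrt(a^3/mu), returned in minutes."""
    a = R_EARTH + altitude_m
    return 2 * pi * sqrt(a**3 / MU) / 60

print(f"{orbital_period_min(550_000):.1f} minutes")
```

That comes out to roughly an hour and a half, so "closer to 1 hour than 24" holds.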
Yeah it's 1 hour but what if your orbit doesn't intersect with where you want to strike? What if the city is 5000 km _sideways_? Then you wait for the rotation of Earth to bring that location into your orbit path.
Keep in mind an orbit does not cover all of the Earth's surface. And unless the target is on the equator, there is no low earth orbit that can maintain its path over a target consistently.
But for detection, is that actually easy? I don't know much about it; would it be done optically, or is there a better way? Are there concealment strategies? How about, say, painting the satellite black?
Terminal velocity for a streamlined rod of metal is pretty high. You're talking about tactical nuke levels of energy, delivered to a foot-wide spot on the ground.
De-orbiting them would consume a lot of delta-V.. plus if they're 'stored' in orbit they have a very narrow path where they can go in a certain period of time.
Could de-orbiting be partially managed by shooting them backward from the platform, using atmospheric resistance to bleed off the gained velocity of the platform?
Every space rocket is automatically an intercontinental ballistic missile. But why launch one rocket with 400 warheads from a publicly known launchpad when you can launch 400 ICBMs with several warheads each from 400 undisclosed locations?
Not every rocket - liquid fuel ones only to a very limited, logistically awkward and costly degree. As the US and Soviets learned, keeping fleets of liquid fueled ICBMs ready to go is dangerous and expensive. Storable fully solid fuel ICBMs such as current-generation US/Russian stuff are quite different.
A point of clarification, submarines are the only nuclear platform with "undisclosed locations." It is trivial for a nation-state to observe the construction and fitment of a fixed ICBM placement, or track the movements of a truck/rail mounted launcher.
It is certainly not trivial, and definitely out of reach of most nation states out there.
I'm sure U.S. at least tries to keep track of Russian mobile launchers, but to which degree it is successful we wouldn't know for sure. And pretty certain Russia would have problems live tracking launch vehicles (if the USA had any).
In a hot war with a country that has ASAT tech, being able to launch 400 low earth orbit sats -- low enough that they need to keep boosting, but also low enough that space debris is non-existent -- could make all the difference.
And as of 18:46 Pacific time, we have another fully successful launch. Second stage nominal low orbit. The first stage has been recovered for the 5th time. All sixty satellites released.
>Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
50% faster than light in fiber; fiber is 30% slower than vacuum.
But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps. In terms of normal everyday consumers, what are the implications of Starlink outside of rural areas?
PS: Thanks for the summary, and for the NO on a Starlink phone. I have gotten sick and tired of people jumping to conclusions, suggesting Starlink will take over ISPs and mobile networks. And that is even on HN.
> But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps
Are you saying 700Mbps is slow? Or is that all the bandwidth that the single link can provide (meaning 700 divided by the number of subscribers)? Living in a flyover state, I was lucky at first to get much more than 10Mbps. Spectrum has now provided service to my area and I can get 400Mbps and am super happy.
From 1 minute before liftoff, the computers in the Falcon 9 take control of the whole process. From then on those computers monitor all relevant sensors to decide go vs no go, and control the ignition etc.
They added sunshades which should make the satellites nearly invisible from the ground, but they are only effective once the satellites have assumed their final orientation and orbit. They start in a much lower orbit and will likely be very visible in the first few days after launch. Then they enter a phase of orbit raising during which they deliberately rotate to be less visible, though they can still sometimes be seen until they reach their destination several weeks later.
The "first" launch was v0.9 satellites. There was also an earlier launch of prototype satellites. Numbering of Starlink launches is a little confused as a result.
Edit: stage 1 recovery success, and that was perhaps the clearest image of it landing I have ever seen, and I've been watching since 2015! Also, I miss the shots of LOX in zero G they used to do back then.
Ah, so that's what makes it 7th launch of 14 then?
Now I remember this launch from last year around this time; some of the satellites were not able to maintain their orbit, but weren't the majority of them operational?
The first and second stages of a Falcon 9 use a combined ~163 000 litres (43 000 gallons) of RP-1 fuel. This is less than the fuel capacity of a Boeing 777-300ER.
So if the Starlink launches happen twice a month, the carbon footprint from the fuel would be at least an order of magnitude smaller than that of a single long haul route of an airline.
Building the satellites and the rocket (including the expendable second stage) is obviously pretty carbon intensive too, but fuel is probably what you were thinking about when posing this question.
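A rough order-of-magnitude sketch of the fuel emissions (the density and CO2 factor are generic kerosene figures I'm assuming, not SpaceX numbers):

```python
RP1_LITRES = 163_000    # litres of RP-1 per Falcon 9 launch (figure from above)
KG_PER_LITRE = 0.81     # approximate RP-1 density (my assumption)
CO2_PER_KG = 3.15       # kg CO2 per kg of kerosene burned (my assumption)
LAUNCHES_PER_YEAR = 24  # "twice a month"

co2_per_launch_t = RP1_LITRES * KG_PER_LITRE * CO2_PER_KG / 1000
print(f"~{co2_per_launch_t:.0f} t CO2 per launch, "
      f"~{co2_per_launch_t * LAUNCHES_PER_YEAR:.0f} t per year")
```

A busy daily long-haul airline route can emit on the order of a couple of hundred thousand tonnes of CO2 per year, so roughly 10,000 t/year from launches is indeed more than an order of magnitude smaller.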
Everyday Astronaut on YouTube has a great video on the environmental impact of rockets. 55 minutes long but it's fascinating. The TL;DW is they're not as bad as you'd expect. But not zero-emissions, obviously.
Elon has discussed using the sabatier process on earth to create methane from CO2 to fuel it. Presumably the motivation would be a mix of PR and practicing for mars.
> The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
Considering the pay-over-time model that SolarCity implemented, should we not expect the same here? Assuming the user antenna can be removed, it can be rented, just like a modem or router.
Relying on the antennas having a long ‘useful life’, it may not be too prohibitive if rented out.
The hardware for existing geostationary consumer-grade (cheap, sub $150/month service) Ku and Ka-band VSAT terminals, for use in really remote locations in the USA, is about a $800 to $1200 cost. It's absorbed into 24/36 month contract terms.
People who live in a really remote place and sign up for a 24 month term for some barely-usable VSAT service are usually disappointed to find out how firmly they're locked into the contract, when somebody builds a WISP in their area.
I would not be surprised if there's a terminal rental charge or 12/24/36 month contract terms offered.
>Considering the pay-over-time model that SolarCity implemented, should we not expect the same here?
Considering the pay-over-time model essentially bankrupted SolarCity, forcing Tesla to buy them out, for which Tesla are being sued by shareholders...I'm not sure that's the model to resurrect.
Random thought: are the satellite trajectories "dense enough" that in the future, when ships/people are trying to leave the Earth, they'll have to get a delay/launch window, and the Starlink satellite(s) may get diverted around the area you're going through?
No, think of the sky in terms of the surface of the earth. The Starlink satellites' paths are like a few highways stretched across the surface. Not only is it a tiny sliver cut across the globe, it's also at a very specific LEO, so the spacecraft wouldn't even stop there in the vast majority of cases.
I see, yeah, I meant just flying through it. It would help to see a scale model of the completed orbits, as it looks "worse" than it is. I'm not against it, though it would be nice if it was free, ha.
side note: not related but the Eccentric Orbits book about Iridium was really good imo.
>The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
One of Starlink's competitors, OneWeb, was able to acquire a user antenna which is apparently a breakthrough in cost reduction at $15 [0]. I would assume the manufacturer has a contractual agreement with OneWeb, but I can imagine that Starlink could develop a similarly priced antenna with its greater resources.
Just curious: have you been using Zettelkasten, org-roam, or a similar tool to take notes whenever you came across this topic, and are you publishing the compiled version now?
> They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.
I'll be surprised if this doesn't shrink. I worked for a company in the mid-2000s that had some pretty cool IP which effectively shrank a briefcase-sized BGAN terminal down to a Pocket PC (pre iPhone days!) with Pocket PC-sized antenna strapped on the back.
People have worked on vehicles to get us to space for decades. It's not going to shrink much, and I don't think it'll be any less expensive. ... enter SpaceX
Sort of, but none of the debris will reach the ground. The current version of Starlink is designed to burn up completely in the atmosphere upon re-entry.
What sort of components are the satellites made of?
Are we all happy with the idea of "burning up"? Won't there be long-term issues with not-completely-"burnt-up" parts floating around the sky and eventually being breathed/drunk/eaten in the biosphere?
Trying to ask this in a non-hyperbolic way; I don't want to sound alarmist and I'm no "chem trail" loon, but "burning up" doesn't just make it magically go away.
Any parts that didn't completely burn up (of which, again, there shouldn't be any) would not float, they would fall. But it's a good question whether the burned-up mass would have any bad effects. My intuition is that the amount of mass is so minuscule in comparison to the whole atmosphere that it really doesn't make a difference what the satellites are made of, as long as it's not radioactive. But I haven't done any math about it.
In a short-term view yes, I agree, but it's more than just Starlink (we have all sorts of satellites made of highly refined materials falling from the sky), and it's over years... decades even.
I haven't seen articles on this, so either it's being ignored or it's not an issue. :-/
I strongly suspect it's the latter - since "Every day, Earth is bombarded with more than 100 tons of dust and sand-sized particles." [1] and the satellites burning up aren't going to approach that mass simply due to launch costs.
There is no uranium or plutonium in these satellites, nor anything else radioactive. On the other hand, according to [1] the abundances on this page multiplied by [2] 40000 tons per year, the stuff naturally hitting the atmosphere each year includes 0.3 kg uranium, 1.6 kg thorium, 3 kg radioactive isotopes of potassium, 10 kg mercury, 18 kg cadmium, 50 kg lead, 72 kg arsenic, and 120 tons of chromium, among other things. This stuff burns up in the atmosphere all day long, and has for billions of years.
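The same kind of back-of-the-envelope arithmetic works for the satellites themselves. A quick sketch comparing re-entry mass against the natural influx cited above; the satellite mass, constellation size, and replacement cycle are all assumed round figures, not official numbers:

```python
SAT_MASS_KG = 260               # approx. mass of a Starlink v1 satellite (assumed)
CONSTELLATION = 12_000          # planned constellation size (assumed)
LIFETIME_YEARS = 5              # assumed replacement cycle
NATURAL_TONS_PER_YEAR = 40_000  # natural meteoric influx figure cited above

# If the whole constellation re-enters once per lifetime, the steady-state
# re-entry rate in tons per year is:
reentry_tons_per_year = SAT_MASS_KG * CONSTELLATION / LIFETIME_YEARS / 1000
fraction = reentry_tons_per_year / NATURAL_TONS_PER_YEAR
print(f"{reentry_tons_per_year:.0f} t/yr, {fraction:.1%} of natural influx")
```

Under those assumptions the satellites add on the order of a percent or two to what already burns up naturally.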
Well Starlink will be the majority of all satellites pretty soon, so it really only matters for Starlink for the foreseeable future, if it matters at all.
Could someone explain the technical notability beyond what can be inferred from the Wikipedia page?
> An autonomous system (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of one or more network operators on behalf of a single administrative entity or domain that presents a common, clearly defined routing policy to the internet.
> Originally the definition required control by a single entity, typically an Internet service provider (ISP) or a very large organization with independent connections to multiple networks.... The newer definition ...came into use because multiple organizations can run Border Gateway Protocol (BGP) using private AS numbers to an ISP that connects all those organizations to the internet. Even though there may be multiple autonomous systems supported by the ISP, the internet only sees the routing policy of the ISP. That ISP must have an officially registered autonomous system number (ASN).
> A unique ASN is allocated to each AS for use in BGP routing. ASNs are important because the ASN uniquely identifies each network on the Internet.
It’s the ISP equivalent of setting up stall at a wholesale market.
To exchange traffic, providers advertise the IP ranges they can route.
The starter pack for this consists of an AS number (which is fundamentally just a nominal integer identifier for your organisation), an interconnect (either to an internet exchange or a transit provider), and some address space to call your own.
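To make that starter pack concrete, here is a hypothetical sketch of what the routing side looks like in a BIRD 2 config. The ASN (64512, from the private range), the prefix, and the neighbor addresses are purely illustrative placeholders, not anything Starlink actually uses:

```
# Static route for the address space you call your own
protocol static announced {
    ipv4;
    route 203.0.113.0/24 blackhole;
}

# BGP session to a transit provider or IX route server
protocol bgp transit {
    local as 64512;                    # your AS number
    neighbor 192.0.2.1 as 64496;       # the upstream's router and ASN
    ipv4 {
        import all;                         # learn the world's routes
        export where source = RTS_STATIC;   # advertise only your own prefix
    };
}
```

The `export` filter is the part that matters: you announce only the space you are responsible for, and everything else on the Internet learns "to reach 203.0.113.0/24, go via AS 64512".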
Can you delve into the possible business models (not military) and commercial applications if this remains?
Starlink is seriously the biggest mystery to me about SpaceX's overall trajectory. I find it fascinating, but my mind doesn't conjure up many more thoughts than that they'll either be an ISP or work with ISPs to reach a currently unavailable demographic; the nuts and bolts of how are entirely lost on me, as I realistically don't understand the industry.
There's a tongue-in-cheek view that Elon Musk is a lonely alien just trying to get home. His Sub-Etha Sens-O-Matic is on the blink, so creating a privately-operated interplanetary transportation programme from first principles was the next best plan.
Alternatively, view these interests (grid-scale batteries, solar panels, underground excavation, autonomous vehicles (cars, rockets, spacecraft), hyperloops, benevolent AGI, low-latency satellite internet &c &c) as essential infrastructure technologies for derisking survival on Mars and even beyond, thereby increasing the likelihood that the humans will survive even the gross mismanagement of their current planet, or a passing Vogon Constructor Fleet, by metastasising across Sector ZZ9 Plural Z Alpha.
Which is to say, that:
> SpaceX's overall trajectory
is a Hohmann transfer orbit, or a nice hot cup of tea.
Ehh, all I see is just a guy who grew up on the Internet and behaves as such on both Social Media and in real life; except his story is one where, unlike most of the population, his often seemingly reckless risk taking paid off and now he wants to put his resources to its best use that creates a Legacy that will outlive him according to Dr. Zubrin. Pretty Human if you ask me.
> Alternatively, view these interests...
I'm well aware of that; I was perhaps latently aware of it around 2007 and understood it well by ~2013, and I have been part of the Mars Society since 2017.
I got an offer to go to Tesla in Supply Chain during the Model 3 ramp-up, but declined so I could follow up on opportunities that led to me working for Kimbal's businesses for the past year in order to go to SpaceX. I interviewed at the Boca Chica launch facility in February because I understood all of that and dedicated the last ~4 years of my life to it, as I understand the imperative of being multi-planetary. It went well, way better than I expected in fact, but in a post-COVID world it's just no longer tenable.
I still think by re-focusing on Supply Chain I can try to join Kimbal's Square Roots program when it starts to send container garden farms to Mars for future colonization missions, as I have a background in Ag as well as Culinary and I'm still in good with Corporate, my old Team and Management.
Hell, I'll go even further and tell you a big reason I'm a Bitcoiner to this day is because I think it's the only currency/monetary system that can actually facilitate a multi-planetary economy. We've had actual initiatives by members of our community to petition NASA to put Bitcoin wallets on the Mars rover(s) so we could pilot test this and iterate [1], but they were ultimately rejected. Still undeterred by that letdown by NASA, Blockstream has a satellite system to validate the blockchain and send transactions without the need for a traditional ISP-based Internet, which allowed us to gain insight as to how it would work with Earth and hopefully apply that data eventually to Mars when the time comes.
But, back to my initial question:
What I meant to ask specifically was: from an ISP POV how do they build a system that can theoretically overcome China's Great Firewall, while simultaneously having a Tesla Factory there that is critical to the overall 'Master Plan,' without invoking the ire of the CCP and having them just take it like they have with Hong Kong?
Do they just deny citizens of the PRC these services and wash their hands of the situation? Can Starlink actually circumvent the disgusting Panopticon that's been created with the current Internet, or does the topology of how these systems are built and routed prevent us from having the ACTUAL Internet we were supposed to have all along despite that? Could the two be so different they simply cannot interact, or does the information suggest it will simply be the same protocol?
I think it's a pretty safe bet that Starlink will act just as an ISP. The business model is obvious: "existing ISPs target people living in permanent ground-based dwellings in densely populated areas well; we'll target everyone else". Everyone else includes some obvious lucrative targets: passenger jets (especially on intercontinental flights), oceangoing ships, rich people living in places that simply aren't hooked up to other ISPs, such as many cottages.
The military isn't particularly interesting, for the most part they're just a special case of the ship and plane market segment.
As far as business models this enables: it's digital land that belongs to SpaceX/Starlink, which doesn't really restrict what sort of business can be run with it. ISPs will have an AS (sometimes more than one), but non-ISP Internet companies sometimes also have an AS. Google, for example, has AS15169 [0], and Google Search and Maps and YouTube and everything is served from there. (Ignore Google Fiber as it predates it.)
Wikipedia[1] goes into peering far more elaborately than I can, but it would be pure speculation on my part to say anything more elaborate than "Starlink will be an ISP". Elon Musk could give the Starlink receiver hardware away for free and start his own TV channel; having this AS enables that, but doesn't give any indication other than that Starlink intends to be on the Internet in some way. (Not that you'd necessarily need an AS to have a TV channel.)
The business model is interesting. They may be able to build a network where the marginal cost of adding a new customer is very low (just the cost of the base station). And the amount of demand and ability to pay could vary hugely across the globe. What do you charge a school in Botswana, or an arctic research station, or an oil rig, or a traffic monitoring camera in Iowa? It makes sense to have as many customers as possible within reason, regardless of what they can pay. But only if the same bandwidth can't be sold to someone else for more.
Network engineers are tribal but also very cooperative (in the right geographic areas). You can get about 240,000 IPv4 routes out of the entire 840,000-entry IPv4 routing table by peering with the SIX route servers. And for any entity that is big enough to be seen as a real serious player, places like where the SIX is physically located are also optimal to establish direct sessions (PNIs) with other large/serious ISPs. All settlement-free. Major content sources, CDNs and hosting providers have a strong interest in getting outbound traffic off their networks as efficiently as possible. And downstream eyeball ISPs (Starlink) have the same interest in reverse.
ISP senior neteng here: It means they're starting to get serious about becoming their own facilities-based ISP at the most fundamental "core" level of the Internet. In this case the facilities will be their earth stations for trunk links to the starlink satellites, the satellites themselves, and their routers at major carrier-neutral interconnections points such as where the SIX is physically located.
A number of the earth station filing geographic coordinates, in public FCC documents, correlate with the known location of regen huts for north america's largest dark fiber/DWDM/transport carriers on intercity fiber paths. Such as Centurylink and Zayo. The typical thing to do in this case is that they'll be buying 'lit' circuits from carrier-of-carrier ISPs to reach those IX points.
It means that they're a first-class part of the internet in their own right, rather than part of someone else's network. Without an AS number, you're a client of whatever ISPs are providing you with connectivity. If you have your own AS number you are at least nominally a "peer" of other ISPs and responsible for routing a (possibly very small) part of the internet.
I've reached some geostationary and orbiting weather satellites with Software-Defined Radio (SDR) in the past. I'd love to try and snag some signals from these... Anyone know of a more formal scientific write-up on these to find the frequency/modulation?
The FCC filings are probably your best bet, 95% of the technical details we know are from them (92% of statistics are made up). I'm not sure if they contain enough information for you though.
From what sparse info I can find, it appears ground > sat > ground comms will be encrypted in some fashion so listening with an SDR and doing anything meaningful with the data might be hard, but I'm curious if there will be opportunities for it to be abused for anonymous downlink connections like the Turla spyware group used to do (https://arstechnica.com/information-technology/2015/09/how-h...)
A RTL-SDR with whatever antenna you whipped up to listen to NOAA sats is not going to have enough link margin to tune in a SpaceX transmission. IIRC the example user terminal SpaceX demoed ages ago is a phased array hooked up to a fairly beefy modem.
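For a sense of why link margin matters so much here: free-space path loss alone is enormous at these distances and frequencies. A quick sketch using the standard FSPL formula; the ~550 km altitude and ~12 GHz Ku-band downlink are assumed round numbers, not confirmed Starlink parameters:

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (standard formula for km/GHz inputs)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Assumed: satellite overhead at ~550 km, downlink near 12 GHz
print(round(fspl_db(550, 12), 1))  # ~168.8 dB
```

Nearly 169 dB of path loss is why you need a high-gain phased array rather than a whip antenna on an RTL-SDR, even before accounting for the wide-bandwidth modem needed to demodulate the signal.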
I have an SDR module and have played around listening to things (and having a blast doing it) but nothing like that! That sounds really cool. Do you have any resources you could point?
/24 has 7 addresses that return pings from nmap -sn; all have about the same ping as the previous hop on the traceroute, so my ping probably isn't reaching space.
Nothing too fancy for now. Two tiny, I assume test, prefixes (one v4, one v6) that aren't even verified by IRR/RPKI. Happy to see, however, that they peer openly (at least for now).
Probably with the same process as currently some regimes are handling satellite phones - the receiver hardware becomes a regulated item where it's a crime to import, operate or possess one without an appropriate permit.
Wouldn't it be possible for someone to fake their location and sign up anyways? Or does starlink require the knowledge of your exact location to operate?
They'll turn the antennas in the satellites off when they're not above a region they're servicing. That's both for legal reasons (you are not allowed to use spectrum you haven't licensed) and physical reasons (less energy usage).
This isn't the first route leading straight to satellites. I forget the name of the provider, but there is at least one big allocation where whatever you're sending to it is basically coming straight back down as radio.
All of the major geostationary telecom satellite owners (intelsat, eutelsat, SES, etc) operate their own ASes and IP networks. Nothing quite leads "straight" to a satellite, but you can definitely look at the prefixes announced by a company like Intelsat and try to make intelligent guesses as to which earth station it correlates with.
Aren't most cellular networks CGNAT? I know my T-Mobile phone is (for IPv4), with native IPv6. I'd like to see more consumer internet be IPv6 first, with a weaker IPv4 to help speed along adoption of v6.
A number of them are. Especially when you get out of first world countries. The problem is that folks who deploy them really hate scaling them up. They believed in IPv6 and the transition isn't going fast enough (sup hosters and cloud providers - stop treating IPv6 as a "customers aren't asking for it" problem and realize this is infrastructure you have to do on their behalf). So now they are stuck paying for v6 CGNAT upgrades or in some cases: running them congested.
Sprint isn't, yet. Who knows what's going to happen with the merger. AT&T is, though. I'm using OpenMPTCProuter to my VPS to both get around CGNAT and aggregate the speed of both connections.
Usually the incumbent/older ISPs still have IPs from the age when they were being handed out more freely, so they have more than enough to give every customer their own public IP, while the challengers have fewer IPs and have to resort to carrier-grade NAT. Smartphone plans usually get carrier-grade NAT even at the older ISPs.
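One quick heuristic for spotting CGNAT from the customer side is checking whether your router's WAN address falls in the RFC 6598 shared address space (100.64.0.0/10), which is reserved specifically for carrier-grade NAT deployments. A minimal sketch:

```python
import ipaddress

# RFC 6598 shared address space, reserved for carrier-grade NAT
CGNAT_BLOCK = ipaddress.ip_network("100.64.0.0/10")

def behind_cgnat(wan_addr: str) -> bool:
    """Heuristic: a WAN-side address inside 100.64.0.0/10 suggests CGNAT."""
    return ipaddress.ip_address(wan_addr) in CGNAT_BLOCK

print(behind_cgnat("100.72.1.5"))   # True  (inside the shared space)
print(behind_cgnat("203.0.113.9"))  # False (ordinary public address)
```

It's only a heuristic; some carriers NAT from ordinary public or even squatted ranges, in which case comparing your WAN address against what an external "what is my IP" service reports is the more reliable check.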
Kudos to SpaceX for potential lower latency internet service.
But I'm concerned about the light pollution caused by Starlink satellites at night: https://www.unilad.co.uk/science/spacex-to-test-starlink-sun.... If they don't fix it, it would look awful for people who prefer a clean sky. And an average person wouldn't have a means to mitigate that - it's not like a drone flying on top of your roof that you can shoot down; it's a set of satellites thousands of miles away.
And if parts of those satellites broke down, I wonder how much SpaceX itself could do to fix them. If the broken satellites end up as space garbage, that reminds me of the film scene in Gravity. This is a problem we already have today, contributed to by all aerospace institutions. I guess people are treating it just like any other garbage around us: if it doesn't show in my eye, it's not my problem.
But it's a harder problem than things like metal garbage on the ground; the latter you could recycle, but I don't know if people really recycle satellites.
Regarding light pollution:
It's not as bad as it seems, since the satellites really only show up while they are raising their orbits, and usually only during dusk and dawn. Once they're in their final orbit you can't really see them. Also, SpaceX are working on the problem. They are testing a sun screen that will block light from reflecting towards Earth while being far enough away from the satellite to prevent overheating.
Regarding space junk:
These satellites are in low orbits, so if the satellite breaks it will just re-enter the atmosphere and get burned up within a few months or so. I believe they also have a deployable drag device to intentionally de-orbit the satellites at their end of life, or if there's a problem with the satellite.
IIRC they are offering service only to northern parts of North America to begin. The network is sparse right now and I think the plan is to start with Canada.
It's increasingly feeling like we won't make it... cross-satellite links are out, so we're basically back to Iridium (read: DSL) speeds; meanwhile the company burns cash like crazy and needs to fundraise every 8 months. Hearing it needs to be spun off to get funded separately.
But then again, I've been hearing that for a while. The push-come-to-shove moment seemed to be when the FCC said to stop BS'ing about theoretical network speeds in applications for rural broadband grants. I still don't understand how that got submitted... lawyers...
- To serve as an internet backbone they need to bounce off of ground stations
- In particularly busy areas where they might otherwise have routed traffic up, laterally, down, they now have to go up, down, up, down, resulting in 0.5x the local available bandwidth.
- The latency is a few ms higher on long routes when using starlink as a backbone
None of this particularly affects their ability to serve as a consumer ISP. The advantages are still there. Also, cross-satellite laser links are still the plan as far as the public has been told, just somewhat delayed.
A day or two after that statement Elon said that the amount they were thinking of spinning it off right now is "0".
PS. Your use of "us" makes me think you might be an insider, I'm definitely not, but as an outsider it looks like SpaceX has a pretty good chance of pulling this off.
I would have thought that their backhaul ground station links would be many times faster than their user station bandwidth, since they can use large directional antennas.
Yes, the 0.5x multiplier is based on an over simplified model where everything is consuming the same resources... e.g. that might be onboard processing power. In reality it is probably much better than that, but considering what I was replying to I wanted to be conservative.
> FCC smacking us (article is more than a little generous towards us, thank Ajit Pai's reputation I guess)
This is the FCC protecting incumbent service providers on the basis that Starlink isn't already widely deployed. They are not claiming that they have any kind of technical analysis that Starlink will not in fact be low-latency (the threshold is a full 100ms, so not incredibly strict). They are claiming that simply because they do not already offer the service they will get lumped in with the high latency geo-stationary sat link providers;
"In the absence of a real world example of a non-geostationary orbit satellite network offering mass market fixed service to residential consumers that is able to meet our 100ms round trip latency requirements, Commission staff could not conclude that such an applicant is reasonably capable of meeting the Commission's low latency requirements, and so we foreclose such applications."
Frankly it's very sad, because the $16 billion in rural broadband grants are the perfect application for Starlink.
The funds are distributed by a "multi-round, descending clock auction where bidders will indicate in each round whether they will bid to provide service to an area at a given performance tier and latency. The auction will end after the aggregate support amount of all bids is less than or equal to the total budget and there is no longer competition for support in any area." [1]
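As a toy illustration of that mechanism (all numbers hypothetical): the support price ticks down each round, bidders stay in while the offered support still covers their cost of serving an area, and the auction clears once the aggregate demanded support fits within the budget:

```python
def clearing_price(costs, budget, start=100, step=5):
    """Descend the clock until aggregate support fits within the budget.

    costs: minimum support each bidder needs for its one area (toy model:
    one bidder per area, each paid the uniform clock price if still active).
    """
    price = start
    while price > 0 and price * len([c for c in costs if c <= price]) > budget:
        price -= step
    return price

# Four areas whose cheapest willing bidders need at least this much support:
print(clearing_price(costs=[20, 40, 60, 90], budget=150))  # 55
```

The real RDOF auction is far more elaborate (performance tiers, latency weights, per-area competition), but the sketch shows the core dynamic: low-cost bidders survive more rounds, so a provider with genuinely cheap coverage could sweep many areas at once.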
I think if Starlink was allowed to bid in both the "low-latency" and "high performance" tiers, then at the pricing level they would likely have come in under, they would have wiped the floor with the competition and won the lion's share of that funding. So the FCC appears to be trying to keep them out of the most lucrative tiers to protect the incumbent approach of extremely expensive rural fiber deployments.
IMO a much smarter approach would have been to require some sort of performance bond from every applicant that they will successfully achieve the service levels they are bidding on. It's not like Comcast or Verizon haven't won massive subsidies in the past and then failed to deliver. The approach the FCC is taking (not surprisingly) just limits innovation and wastes taxpayer dollars. Actually in this case the dollars come from customer service fees, but they are effectively taxes.
I agree a performance bond would be best, with enough teeth that the worst case forces SpaceX to give every cent back or go bankrupt. Same with Comcast et al.
But you're unnecessarily ascribing malice. SpaceX has made some big promises, and if it delivers it deserves to hoover up basically all of that money.
But until the network is real, customers can sign up, and it's just a matter of scaling, a rational actor attempting to genuinely forward the interests of the country may well decide it poses too much risk.
I was under the impression that cross-satellite laser links would be coming with second-generation version of the satellites, as opposed to the first operational batches that've been launching. Has that changed?
That seems backwards. They should have much more ground station bandwidth to those giant tracking dishes than to little personal phased array antennas.
The problem with no sat-to-sat traffic is they'll need a crapton of downlink stations, which increases cost substantially. Also, they'll be geographically limited--no service out in the middle of the ocean.
> Also, they'll be geographically limited--no service out in the middle of the ocean.
There's a workaround for this, which is to stick a relay station out in the middle of the ocean on a ship that bounces the signal to a satellite closer to land. Expensive, but workable for the right price.
If there's a high enough density of commercial ships, you might be able to piggyback on the user terminals for this purpose. At the same time the military is likely one of the most interested customers and probably wouldn't be thrilled at relying on the typical distribution of commercial ships.
Last I heard, the current generation of sats do not yet have inter-sat comms hardware but it's planned for the next generation which should come in 2021
Looking at their prefix right now I see them behind Hurricane Electric, prepended 3 times. They ought to filter/sever that out, given HE's "we treat peers as customers" offensive routing policy. Unless that's what they want :/
That they don't offer peering for free but charge for it. The idea of peering is that both sides gain from it, as they reduce the traffic routed through middlemen like Level3. So you both go to internet exchanges like DE-CIX and pay a basic fee to participate in the exchange, but you get all the connectivity to other ISPs from that exchange. But some ISPs don't do that and instead require you to peer in one of their own datacenters, requiring you to pay them money. You are now their customer instead of a partner. A big German, partially government-owned ISP did this for a while until they finally gave up a few years ago.
HE doesn't charge for peering though. The complaint was that HE was leaking peer routes to other peers per default, which is normally only done if you are a customer (which can be nice if you want this, but can lead to sub-optimal routing if you don't want this).
So this is something they're not very open about, but you'll learn the hard way. Years ago, Hurricane Electric would set up settlement-free peers as downstream customers in order to inflate their IPv6 footprint. HE was trying really hard to be the largest IPv6 transit provider in terms of prefixes and ASNs behind them, in order to get some of the larger networks to peer with them ("we have X percent of the IPv6 internet"). To their credit, it actually worked with a few.
As for the goofy routing: if you were an IPv6 peer they'd announce you to a large number of their other peers (normally you don't announce peers to transit or peers - just to customers). This led to lots of sub-optimal routing scenarios and you'd have to ask them to knock it off and treat you like a real peer. So yeah, cool, you get free transit kinda, but the goofy routing isn't worth it for a lot of folks. This, plus their long history of not filtering their transit customers and enabling lots of route hijacks, really gave them a bad reputation. Their recent news about deploying RPKI is a bit of fake news: it's not real RPKI and doesn't address the issues with their customers.
That's what I thought, but I don't play in that playpen
Wouldn't you just have an inbound filter to only allow HE ASes? Or do you also want to reach other ASes that pay HE for upstream service, but are multi-homed so exist in their own AS?
There's two ways to think about this problem: routes advertised and routes received.
Received:
So for the most part people implement zero or just minimal filters on routes received from peers. They might drop a set of ASNs they consider large that would be indicative of a leak. Some may go the extra distance and even do IRR filtering. But for the most part people are fairly permissive in what they accept from peers.
Advertised:
Here's the catch. You announce routes to HE and they propagate them to networks you don't anticipate. This pulls in traffic from other HE peers you weren't expecting. You don't really have much control here. You can try to prepend, but remember it's the other leaked peers of HE that will generally set a better local-pref towards HE (by virtue of being a peer), so your prepends won't do anything.
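A minimal sketch of why prepending loses there (ASNs and numbers are made up): in BGP best-path selection, local-preference is compared before AS-path length, so a route with higher local-pref wins no matter how short the competing path is:

```python
def best_path(routes):
    """First two steps of BGP best-path selection: highest local-pref wins,
    then the shortest AS path breaks ties."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# From a third network's point of view: the leaked route via HE got peer-level
# local-pref, while your prepended direct announcement got a lower one.
via_he = {"via": "HE peer", "local_pref": 200, "as_path": [6939, 64500]}
direct = {"via": "direct",  "local_pref": 100,
          "as_path": [64500, 64500, 64500]}   # prepending changes nothing here
print(best_path([via_he, direct])["via"])  # "HE peer"
```

AS-path prepending only influences networks that haven't already made the decision at the local-pref step, which is exactly why the leaked-peer scenario described above is so hard to steer around.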
Does anyone know the next most prolific/frequent commercial launcher after SpaceX and what their annual average is? The frequency of their launches seem completely nuts.
It's one of those things where you could probably game the metrics to justify half a dozen different answers. For example, ISRO doesn't have many launches, but they do a lot of rideshares, so I wouldn't be surprised if they're #1 or #2 in terms of individual satellites launched last year.
It also depends a lot on what you consider a commercial launcher. In recent history, the Chinese Long March rockets have been dominating the numbers game (around 25 launches last year, vs around 20 Soyuz and 15 Falcon 9s). But the Long March family is a lot more diverse than most rocket families, so they're probably not #1 if you're comparing specific configurations. You also probably don't want to count rockets that missed space and ended up in the middle of a village, which hurts China's totals.
Ok I have a naive question I think someone here can answer: how long do these satellites stay in orbit? Don't they need a boost every now and then to maintain the momentum? Is that boost built in?
Yes. They have built-in thrusters to maintain their orbit. If they stop working or get retired their orbit decays and they burn up in the atmosphere. See the 'ion propulsion systems' graphics at starlink.com.
These particular ones won't stay in orbit long. If left to decay naturally, maybe 5-10 years (tending towards the lower side of that range, but there's unpredictable factors that can have an impact e.g. space weather). However orbital decay times increase exponentially(ish) with your height. At the higher orbits they've gotten permission to use, it can take more like 200-1000 years to decay naturally
Excellent report. I'm wondering whether the tech used in Google's Loon balloon internet service and Starlink is different. Do both use similar tech?
I think the technical aspects of building the necessary communication infrastructure for a human trip to Mars would be absolutely fascinating. Due to the large RTTs (varying from 4 to 24 minutes), use of TCP/IP on Earth-Mars links is infeasible. Instead, you'd have two TCP/IP Internets (Earth's Internet and Mars's Internet), with store-and-forward protocols being used between them [1].
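Those delays follow directly from the Earth-Mars distance; here's a quick one-way light-time check (the closest-approach and maximum-separation distances are approximate figures):

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_minutes(distance_km):
    """One-way light time; round-trip time is double this."""
    return distance_km / C_KM_S / 60

# Approximate closest approach and maximum Earth-Mars separation:
print(round(one_way_minutes(54.6e6), 1))   # ~3.0 min at closest
print(round(one_way_minutes(401.4e6), 1))  # ~22.3 min at farthest
```

With even the one-way delay measured in minutes, TCP's handshakes and acknowledgment-driven flow control become useless, hence the store-and-forward (delay-tolerant networking) approach between the two planetary internets.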
Earth is only a small point from Mars and vice versa. There is an upper limit of what you can send via direct electromagnetic radiation. However, you can set up relay stations in the solar system. It will increase the RTTs for those data but also increase the available bandwidth. You'll likely have a layered internet where really fast communication is most expensive, and it gets cheaper the slower you allow it to be, down to the slowest (but highest bandwidth) method of your data being printed into DNA (or some other high density storage medium) and put onto the next scheduled transport to Earth.
SpaceX moved the satellite orbits much lower to mitigate this concern. In their current low orbit, any broken satellites or debris will fall out of the sky within a few years due to atmospheric drag. The real concern is in higher orbits where debris would persist for millennia.
The satellites are designed to completely burn up as they reenter the atmosphere. So it won't rain anything on the ground. And the mass of the satellites is minuscule compared to the mass of dust and meteors that fall into the Earth's atmosphere from space every day.
If even 1% of Starlink satellites crap out before their planned EOL, there could be hundreds or thousands of them left in orbit for hundreds of years after the constellation is inactive
No way. Orbital decay times increase more or less exponentially, and once you get above the thermopause (which moves around from 500-1000km based upon terrestrial and space weather conditions), atmospheric drag becomes practically 0.
SpaceX is upfront about this in their FCC filings:
"The natural orbital decay of a satellite at 1,150 km requires hundreds of years to enter the Earth’s atmosphere"
All current satellites are in 550 km orbits. They had plans to also launch in 1150 km orbits but they made some FCC filings requesting to reduce the altitude.
In April, SpaceX modified the architecture and submitted an application to the FCC proposing to operate more satellites in lower orbits. Previously they had approval to operate almost 3000 satellites in orbits between 1110-1300 km. The modified plan now foresees 1500 satellites at altitudes between 540-570 km and another 7500 satellites in orbits around 345 km at later stages.
I dunno, space isn't a delicate ecosystem like the oceans, so long-term I think industry and infrastructure in space is a much better idea than the same on Earth.
Doesn't matter if you're emitting greenhouse gases if you're somewhere where there is no atmosphere.
Starlink will probably make its bucks on number of subscribers, but the targeting of rural areas as the primary first consumer base will probably mean there will be enough bandwidth without throttling/perGeeBee charges.
Last I read, they are targeting Canada and the rural northern US.
> you won't be sharing with many people at any point in time.
That depends on the density of subscribers. Starlink is only feasible (at least with stable, high performance) for low-density areas. The stated market for Starlink is "the 3% to 4% of the hardest to reach customers for telcos." Almost no one reading this is part of that market.
SpaceX has good reasons to be compliant with the policies of sovereign nations; Starlink relies on shared spectrum and orbits a shared planet. I expect SpaceX will diligently observe the disparate policies of all nations. Note that the purpose of Starlink is to fund major SpaceX initiatives, particularly the Mars mission. It isn't an egalitarian exercise and squabbles with China or its client states are not a source of revenue.
There is no opportunity for clandestine use either; the ground station is a transceiver. Transceivers are relatively easy to detect when they transmit.
Random FAQ about Starlink below:
The combination of low latency (20-40ms) and high bandwidth (100+Mbps) has never been available in satellite internet before.
A public beta may start later this year for some users in the northern US, around the 14th launch. Today is the 7th launch of v1 satellites. SpaceX is hoping to do more than two launches per month but hasn't reached that pace yet.
The ground stations look like this: https://www.reddit.com/r/SpaceXLounge/comments/gkkm9c/starli... The user antennas can be seen in that picture; they are the smaller circular things on black sticks. They are flat phased array antennas, and don't need to be precisely pointed like satellite dishes do. They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.
The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
Starlink can't support a high density of users, so it will not be an alternative to ISPs in cities. Rural and mobile use are the important applications. The US military is doing trials with it. Cell tower backhaul may also be a possibility.
Starlink V1 doesn't have cross-satellite links, so the satellites can only provide service while over a ground station. There will be hundreds of ground stations in North America; no information about other regions yet. Starlink V2 is planned to have laser links between satellites, which will enable 100% global coverage 24/7, though local regulations are likely to prevent SpaceX from providing service in many places.
Because light travels about 30% slower in optical fiber than in a vacuum, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
Each launch of 60 Starlink satellites has nearly as much solar panel area as the International Space Station. Once SpaceX's Starship is operational they should be able to launch several hundred satellites at once instead of just 60.
Starlink's only current competitor, OneWeb, just filed for bankruptcy after only launching a handful of satellites, and is fishing for acquisition offers. Amazon is also planning something called Project Kuiper but not much is known about it.
Starlink V2 will have 30,000 satellites, requiring hundreds of launches. Even once the initial fleet is launched, SpaceX will still need to maintain the constellation with many launches per year indefinitely.
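The launch-cadence arithmetic behind that claim can be sketched as follows. The satellites-per-Starship figure and the design life are assumptions (SpaceX has only said "several hundred" per Starship, and roughly five years is a commonly cited design life):

```python
TOTAL_SATS = 30_000
PER_FALCON9_LAUNCH = 60
PER_STARSHIP_LAUNCH = 400    # assumption for "several hundred"
DESIGN_LIFE_YEARS = 5        # assumed satellite design life

initial_launches = TOTAL_SATS // PER_FALCON9_LAUNCH       # Falcon 9 launches to deploy
replaced_per_year = TOTAL_SATS // DESIGN_LIFE_YEARS       # satellites retiring per year
maint_falcon9 = replaced_per_year // PER_FALCON9_LAUNCH   # maintenance launches/yr, Falcon 9
maint_starship = replaced_per_year // PER_STARSHIP_LAUNCH # maintenance launches/yr, Starship

print(initial_launches, replaced_per_year, maint_falcon9, maint_starship)
```

Under these assumptions, just keeping the full constellation alive takes on the order of a hundred Falcon 9 launches per year, versus a dozen or so Starship launches, which is one reason Starship matters so much to the economics.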
SpaceX's FCC application has many interesting details: https://licensing.fcc.gov/myibfs/download.do?attachment_key=...