The combination of low latency (20-40ms) and high bandwidth (100+Mbps) has never been available in satellite internet before.
A public beta may start later this year for some users in the northern US, around the 14th launch. Today is the 7th launch of v1 satellites. SpaceX is hoping to do more than two launches per month but hasn't reached that pace yet.
The ground stations look like this: https://www.reddit.com/r/SpaceXLounge/comments/gkkm9c/starli...
The user antennas can be seen in that picture; they are the smaller circular things on black sticks. They are flat phased array antennas, and don't need to be precisely pointed like satellite dishes do. They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.
The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
Starlink can't support a high density of users, so it will not be an alternative to ISPs in cities. Rural and mobile use are the important applications. The US military is doing trials with it. Cell tower backhaul may also be a possibility.
Starlink V1 doesn't have cross-satellite links, so the satellites can only provide service while over a ground station. There will be hundreds of ground stations in North America; no information about other regions yet. Starlink V2 is planned to have laser links between satellites, which will enable 100% global coverage 24/7, though local regulations are likely to prevent SpaceX from providing service in many places.
Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
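As a rough illustration of the difference (assuming light in fiber travels at ~68% of c and a ~17,000 km great-circle path, which is roughly London to Sydney; real routes are longer and add processing delays):

```python
# One-way propagation latency over a long route: vacuum vs. optical fiber.
# Assumed figures: c = 299,792 km/s; light in fiber at ~68% of c.
C_VACUUM_KM_S = 299_792
FIBER_FACTOR = 0.68          # light in fiber travels at roughly 68% of c

distance_km = 17_000         # rough London-Sydney great-circle distance

t_vacuum_ms = distance_km / C_VACUUM_KM_S * 1000
t_fiber_ms = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

print(f"vacuum: {t_vacuum_ms:.1f} ms, fiber: {t_fiber_ms:.1f} ms")
```

That is roughly 57 ms vs 83 ms one-way before any routing detours or router delays are counted.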
Each launch of 60 Starlink satellites has nearly as much solar panel area as the International Space Station. Once SpaceX's Starship is operational they should be able to launch several hundred satellites at once instead of just 60.
Starlink's only current competitor, OneWeb, just filed for bankruptcy after only launching a handful of satellites, and is fishing for acquisition offers. Amazon is also planning something called Project Kuiper but not much is known about it.
Starlink V2 will have 30,000 satellites, requiring hundreds of launches. Even once the initial fleet is launched, SpaceX will still need to maintain the constellation with many launches per year indefinitely.
> Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
Not to mention the dramatic reduction of hops. My route from the US east coast to the University of Melbourne in Australia[1] is 30 hops reported by traceroute, with at least as many switches in the way. You could make the same link with only a few satellites.
[1]www.ie.unimelb.edu.au
EDIT: 30 is actually just the default max hops in traceroute; it's really 32 hops from me to Melbourne.
I'm in Switzerland and have 25 hops, which can be broken into:
- 1-7: Hops within my ISP's in-country network (~4ms total latency)
- 8-10: Hops within my ISP's in-Europe network (~28ms total latency)
- 11: London -> New York (~93ms total latency)
- 12: New York -> Los Angeles (~160ms total latency)
- 13: Transfer in LA from my ISP to AARNet (about the same latency)
- 14: LA to somewhere in NSW (guessing Sydney, 305ms total latency)
- 15-25: Routing within AARNet and Unimelb (319ms total latency)
So most of the latency looks to be attributed to large hops across oceans rather than internal switching. Even if you could narrow it down to London -> NY -> LA -> NSW you'd have 277ms.
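A back-of-the-envelope check on that floor, using great-circle distances between approximate city coordinates (assumed for illustration) and the ~0.68c speed of light in fiber:

```python
from math import radians, sin, cos, asin, sqrt

# Approximate coordinates (lat, lon) -- assumed, for illustration only.
CITIES = {
    "London": (51.5, -0.13),
    "New York": (40.7, -74.0),
    "Los Angeles": (34.05, -118.2),
    "Sydney": (-33.87, 151.2),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

route = ["London", "New York", "Los Angeles", "Sydney"]
total_km = sum(haversine_km(CITIES[a], CITIES[b]) for a, b in zip(route, route[1:]))

# Round-trip time at ~0.68c (speed of light in fiber), ignoring all router delays.
rtt_ms = 2 * total_km / (299_792 * 0.68) * 1000
print(f"{total_km:.0f} km, ideal fiber RTT ~{rtt_ms:.0f} ms")
```

That gives an ideal-fiber RTT of roughly 210 ms over ~21,500 km, so the measured 277 ms is not far above the physical floor for that path.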
From my university network in Germany, I seem to get a direct London → Perth link and a total latency of 278ms. I couldn't find any information about a direct fiber between London and Australia, though there is one from London to Singapore, and AARNet seems to have presence in Singapore. My guess would be that there is some switching below the IP layer going on in Singapore as described by dicknuckle in a sibling comment.
9 cr-fra2-be11.x-win.dfn.de (188.1.144.222) 15.360 ms 15.385 ms dfn.mx1.fra.de.geant.net (62.40.124.217) 15.021 ms
10 ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.806 ms dfn.mx1.fra.de.geant.net (62.40.124.217) 15.244 ms ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.707 ms
11 ae9.mx1.lon.uk.geant.net (62.40.98.129) 29.059 ms ae7.mx1.ams.nl.geant.net (62.40.98.186) 21.805 ms ae9.mx1.lon.uk.geant.net (62.40.98.129) 28.933 ms
12 138.44.226.6 (138.44.226.6) 196.613 ms ae9.mx1.lon.uk.geant.net (62.40.98.129) 29.074 ms 29.156 ms
13 138.44.226.6 (138.44.226.6) 196.807 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28) 277.463 ms 138.44.226.6 (138.44.226.6) 196.778 ms
14 138.44.64.73 (138.44.64.73) 277.544 ms et-7-3-0.pe1.wmlb.vic.aarnet.net.au (113.197.15.28) 277.629 ms *
Yeah you're right, "pe1" shows up in many of their hostnames with different states in the name. And their looking glass shows a bunch of hops within Australia, none of which show up when I do a traceroute.
Huh, my ISP is Init7 and as I said, takes 25 steps to get there, though steps 18-24 inclusive show as "waiting for reply" in MTR.
Init7's traceroute [0] shows 5 fewer steps to r1lon2.core.init7.net than my traceroute though and appears to route through r1bsl1 (assuming Basel) instead of Frankfurt.
Perhaps you're in/near Basel and so skip straight to London, circumventing my 8 hops around Zurich?
Those numbers seem awfully high. I've just run a speedtest between Germany and California and got 170ms total latency. Back when rabb.it [0] was alive I could even get reaction scores of 450ms, compared to 230ms when running the test locally on my computer. This is quite impressive when you consider that it means video encoding and decoding must happen in less than 50ms. Conventional video encoding is usually slower than real time.
> Because the speed of light in a vacuum is 30% faster than in optical fiber
Non-Australians, don't get too excited. Those satellites are at 340 miles, so that adds 680 miles to the path (3.66 ms) plus two to three Starlink hops, which cancels out some of that "speed in a vacuum."
The way to get almost c on earth is via direct microwave links.
The minimum latency is 3.66 ms, but depending on the angle to the satellites involved it may be adding significantly less than 680 miles to the journey. Aka if it’s 340 miles east and 340 miles up that’s 480 miles to the satellite, adding just 140 not 340 miles.
Outside of HFT, most networks are far from the shortest great circle routes between you and the other end, which further complicates the issue.
You misunderstood. (Ignoring the earth being a sphere.)
A satellite directly overhead means you need to travel to that altitude. However, if it's not overhead, light is traveling the hypotenuse of a right triangle where x is the distance to a point underneath the satellite and y is the altitude of the satellite. That distance is sqrt(x^2 + y^2). From there you need to travel to a different base station.
Assuming an ideal path where the satellite is directly between two locations that are 680 miles apart, that adds up to 2 * ( sqrt(340^2 + 340^2) ) ~= 961.7 miles vs 340 + 340 miles, or an added 281.7 miles not 680. In other words 1.414x the distance rather than simply adding 680.
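The flat-earth geometry above works out as:

```python
from math import hypot  # hypot(x, y) is sqrt(x**2 + y**2)

ALT = 340          # satellite altitude in miles
ground = 680       # distance between the two ground stations in miles

# Ideal case: one satellite midway between the stations (flat-earth approximation).
via_sat = 2 * hypot(ground / 2, ALT)   # up one hypotenuse, down the other
print(f"via satellite: {via_sat:.1f} mi, added: {via_sat - ground:.1f} mi")
```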
Clearly the earth is not flat and you're very unlikely to be in that situation, but assuming you can reach several satellites at the same time, it is likely one of them will be roughly in the direction you want to go.
there's probably double or triple the switches in between. they're not really switches though. more like super fast packet routers. the latency is much less than any normal or carrier grade switch. they don't touch the IP layer at all. (I manage a bunch of them)
one of the things I keep telling non-network-eng people is that looking at a traceroute gives you absolutely no way of knowing what the OSI layer 2 topology is for transport networks, MPLS, switches, DWDM circuits carried as lit services "glassed through" a location, etc. Just the IP network.
Discovering the underlying layer 2 topology of a carrier's network requires inside information and cannot be easily discerned sitting at a computer elsewhere on the internet. You might see two routers that appear to be directly adjacent to each other but it's actually carried as a 10Gbps VLAN across a several-state sized region between two cities many hundreds of km apart, with a lot of intermediate equipment in between.
Do you have any recommendations for where to learn more about network eng for us non-network-engs? My experience is in DevOps (on k8s, mostly) and systems/application programming, if that helps pinpoint any advice you could give. I’m trying to get the bigger picture like network engineers understand, as well as start to understand the layer 2 topology that you described.
The traditional and hard way is to start as a first-tier NOC person for an ISP and work one's way up. Or to start as a field tech for an ISP and show skills worthy of getting promoted (perhaps into a NOC job or a manager-of-field-techs job). That method takes quite a while.
Medium to large sized ISPs have a very large amount of BSD/GPL/Apache/misc licensed software running to support back end monitoring and provisioning systems. They do occasionally hire software developers to customize things for their environments, so it certainly wouldn't hurt to reach out to the noteworthy ones in your region and try giving them your CV.
You could build a playground internet with some VPN-connected boxes which talk BGP and route 10.0.0.0/8 between them.
It's a lot of work, but a friend of mine started such a thing and I have learned _so much_ about networking on layers 2 and up from that.
Start by installing a BGP deamon on two boxes that share a network and see what you can do :)
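For two boxes on a shared network, a minimal FRR configuration might look like this (hypothetical private ASNs and 10.x addresses; a sketch, not a tested deployment):

```
! Box A (10.0.0.1), peering with box B (10.0.0.2) over a LAN or VPN link.
router bgp 64512
 bgp router-id 10.0.0.1
 neighbor 10.0.0.2 remote-as 64513
 !
 address-family ipv4 unicast
  network 10.1.0.0/16
 exit-address-family
```

Box B gets the mirror image (ASN 64513, announcing its own /16), and `show bgp ipv4 unicast` on either side lets you watch routes propagate.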
Ditto. Would love to figure out if there's a good way of mapping out layer 1/2 connectivity from a single computer as well. Would be really, really fascinating.
If you want layer 1 (physical location of dark fiber) you have to get the GIS dataset from the carriers. The data sets exist, but they're often protected as proprietary information or under NDA. I have a QGIS setup with a ton of stuff in it.
There are other ways of acquiring layer 1 data which are labor intensive and involve the equivalent of filing FOIAs for construction permits with local city and county agencies, etc.
Big facilities based ISPs that have a lot of fiber out there underground and aerial make extensive use of GIS software. Their construction groups will have their own full time GIS staff positions.
Actually, you don't want to do that. It costs money and you really should have a good door between you and the equipment or you will go nuts from the noise.
Most of the interesting stuff can be setup using Linux, OpenvSwitch, FRR/ BIRD, network namespaces/ VRF and more. It is all in FOS Software - so more or less zero cost. Of course, you will not learn how to configure a Catalyst Switch that way. For some stuff, there are virtual appliances that you can spin up with KVM/ QEMU but most of the enterprise stuff has to be bought. Again, at that time, you will have a solid understanding of what should be happening and will know what to look for in the documentation. The rest is field experience with firmware bugs, methods how to approach some problems and syntactic sugar of the particular equipment. At least that is my view.
I've seen ISP routers massively increasing latency in conjunction with packet loss -- had a problem with one provider recently where packet loss would increase every evening, and so would the rtt. Suggests large buffers on the congested part. The reverse DNS of the given IP suggested it was a gig-e connection (my side of the hop, which was also into LINX, is 100G)
Of course you can hide a router by not decrementing the TTL as it passes through your network at layer 3, you can hide the IP by not responding with ICMP expired messages
An ICMP could well return on a different path to the direction it was sent, with a path like this
Traceroute will only show the outbound route, so you should traceroute from both ends
The latency will also be affected by ICMP generation on the router, which could be delayed, rate limited, dropping, etc.
Doing a quick traceroute to a host of mine in Sydney, from London, shows
5ms to i-91.ulco-core02.telstraglobal.net 202.40.148.33
82ms to i-10104.unse-core01.telstraglobal.net 202.84.141.145
132ms to i-10601.1wlt-core02.telstraglobal.net 202.40.148.106
277ms to i-10406.sydo-core04.telstraglobal.net 202.84.141.226
sydo will be sydney
To get the exact map I could talk to Telstra (in this case I peer with telstra directly in London)
Or I could look at telstra's map, which ddg helpfully tells me is at
Which tells me 1wlt is LA. unse will thus be east coast U.S. I'd have expected routing via Singapore.
There's no way to know which way the traffic is actually going without asking Telstra.
I have 2 ethernet circuits from the UK to Washington DC; to me it looks like two layer 2 1500-MTU circuits. Only by talking to the provider can I work out which circuits it actually travels on transatlantically. It's supposed to be separate, but latency changed by 2ms a few days ago. Asked them about it, and there was a failure in their network; they rerouted in a few hundred milliseconds (which isn't good, as now both diverse circuits run via the same equipment, thus any issues like another 100ms outage will cause an impact)
>I have 2 ethernet circuits from the UK to Washington DC [...] It's supposed to be separate, but latency changed by 2ms a few days ago.
Buying two mpls circuits (assuming they are that) from the same provider is a single point of faliure, your only real redundancy is your handover at each site, if they are separate.
You're better off buying an optical link; then you are also not sharing bandwidth with anyone else. The cost isn't that different in my experience, but might be for a trans-Atlantic circuit. IPsec over normal internet connections with multiple ISPs is a better choice.
Don't get me started on that. However the two circuits are sold as a fully resilient pair (different tails from different DCs arrive into the building in Washington different directions with guaranteed different paths to the DCs, then in contract it's guaranteed neither circuit will run via the same equipment or on the same submarine circuit. In the UK they arrive in two separate cities)
However, I know reality and contracts never meet; unfortunately, in a large dysfunctional organization there are other considerations in circuit procurement than technical requirements.
As for internet circuits, I had two ISPs in NY on two separate paths, which is great. Something went wrong about a month ago, and the routing from one of the ISPs changed, meaning that we were back to a single point of failure who we have no business relationship with
We had a partner that bought a redundant MPLS service from their DC to another DC (not their own) and to AWS Direct Connect via that DC. Their MPLS provider was a SPOF, the provider for the second DC was a SPOF, and the Direct Connect circuits terminated in the same AWS router, another SPOF. All 3 SPOFs had caused their own significant downtime within a few months.
This was designed by the partner, contract signed by us, the whole solution paid for by us, without consulting any network engineer on our side until 1 week before it was supposed to be implemented. We proposed two different solutions that would cost less than 1/20 of what their solution was costing us ($20k/month), while providing real redundancy (yet to be implemented, though some SPOFs are solved by now). Never enjoyed outages as much as these, as we got to explain why their solution was so bad each time.
Choir here, although in our case the connectivity was designed by a third party consulting firm, against the advice of long standing employees with experience.
The third party consulting firm are long gone, as are the people high up in the company who brought them in.
Now there's new people high up who come up with the same problems, but in a slightly different way, arguing how their basket is far better than the previous basket.
Massive latency and increase in packet loss, in correlation with a hop between providers, is usually an overly congested peering or transit session. Particularly if noticed most often during evening peak hours. Some people aren't paying attention to their traffic charts and monitoring systems, and aren't upgrading circuits as needed.
Yes, I mean it can be obvious when ISPs have proper consistent reverse DNS with hostnames and POP designations. You might see a router in Sacramento that is a neighbor of something in Portland. Obviously there's a lot of stuff in between there but it's all abstracted away at the purely layer-3 IP network level.
At the small end, a juniper MX204 is a good example of a current-gen router that has enough RAM to take multiple full BGP tables, and has a sufficient number of 10 and 100GbE interfaces.
ISP core networks typically have two components, routers and dwdm/opto equipment. The routers are not really special, just your regular ISP core/edge router, adding roughly the same delay for each hop (invisible mpls hops or not) on the order of microseconds.
Opto equipment adds practically no delay as they just forward the light without looking at it or processing it. There might be redundant paths that it can switch between if there's a fiber cut.
I wonder how many hops it will actually remove. I just tested it, and from my apartment in Mountain View, CA to my VPS in a data center ~20 miles away in Fremont, CA I get 14(!) hops.
Hopefully a lot! Fun fact, IPv4 and IPv6 have a maximum network size. Because of the TTL field in v4 (renamed max hops in v6), a path through an IP network can never be more than 255 hops long.
Hops are artificial, as long as you are 100% sure you’re not introducing loops you can just tell routers not to decrease the TTL and they’ll be invisible.
It’s just that if you do that and you do introduce a loop the packets will keep looping and the network will very quickly overflow.
And just like NAT with IPv4, we will use hacks to get around implementing a proper solution for decades (and counting).
With satellite internet taking off, and actual interest in extraterrestrial colonies from e.g. NASA and SpaceX, I think we need a proper space communication protocol. Maybe it can all be solved in L2, but I think we will soon be looking at 255 hops the way we look at the limited address space of v4.
It's all about peering. Once they're in more exchanges, and it becomes cost effective to set up peering with them, more traffic will reach them directly as different providers set up peering to both reduce costs across other links and provide better access.
The true story is that a company (spread networks) spent many millions building a low latency fiber path between chicago and nyc. Two guys built a lower latency radio network that connected a number of cell towers together and were the first to market rendering that cable obsolete. Within months other people had radio networks up to try to compete (to give an idea about how competitive that space is).
For military purposes, my idea for Starship would be the following. Starship aims to put a 150-tonne payload to orbit for a few million dollars. I would look at non-nuclear kinetic penetrators.
Put a 100 tonne tungsten rod into space (cost ~$10m), or more likely some cheaper metal. Maybe 10m long with a 25cm radius. Stick on a cold gas reaction control system, maybe some grid fins and a guidance package, and put some ablative on the leading edge. Once in orbit, the cold gas system slows the weapon to put it on target, and the grid fins and cold gas thrusters control the guidance. The grid fins wouldn't work for long before they burnt off, but recessed cold gas thrusters could continue to work. Ionisation and intense heat would stop any optical sensors or outside guidance, so aiming it would be problematic.
A kinetic warhead with a mass of 100 tonnes going at suborbital speed (say 7000m/s) would have 2.45x10^12 joules, which is around 0.58kt of TNT. If I remember correctly, once it is going past a critical speed the penetrator and target act like fluids, and the rod penetrates a depth based on the relative density of the two materials times its length. Tungsten is about 6 to 12 times as dense as rock, so it might go through 60 to 120 meters of rock.
Non-nuclear, pretty hard to detect or stop. Competitive in price with cruise missiles. Accuracy could be a problem.
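The energy arithmetic above checks out; as a quick sketch:

```python
# Kinetic energy of the proposed 100 tonne penetrator at 7 km/s,
# expressed in kilotons of TNT (1 kt TNT = 4.184e12 J).
mass_kg = 100_000
v = 7_000                        # m/s, roughly suborbital re-entry speed

ke_joules = 0.5 * mass_kg * v ** 2
kt_tnt = ke_joules / 4.184e12
print(f"{ke_joules:.2e} J ~= {kt_tnt:.2f} kt TNT")
```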
Storing weapons in space is really scary. It's practically impossible to reliably detect when they are "fired", which makes the people with nuclear bombs waiting to fire back antsy, which increases the risk of a nuclear war accidentally starting. I hope no one carries out this idea of yours.
Using starship to launch rods that immediately come down on people instead of loitering in space sounds like a much safer use.
Those rods are designed for precision targets on the order of a few meters in diameter. I would much rather both sides of the conflict had those in play, than a nuclear alternative.
I'd call it the opposite: it is extremely easy to detect and track their orbits; a single personal computer could probably track them all and calculate the potential strike points along their orbits.
Any attempt to deviate from an existing low earth orbit is extremely energy intensive. It also can't strike anywhere - it can be more than 24 hours before it's in position over the location it wants to hit.
It doesn't take much Delta v (energy) to move a low Earth orbit into a suborbital collision course. You don't kill all of your velocity, just enough of it that your orbit intersects Earth.
The orbital period for low Earth orbit is closer to 1 hour than 24, and you further reduce that to only a dozen or so minutes by spacing out rods over the orbit... Much like starlink satellites.
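Kepler's third law gives the period directly (standard values for Earth's gravitational parameter and radius; ~550 km is roughly Starlink's operational altitude):

```python
from math import pi, sqrt

MU_EARTH = 3.986e5       # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6371           # km, mean Earth radius

alt_km = 550             # roughly Starlink's operational altitude
a = R_EARTH + alt_km     # semi-major axis of a circular orbit

period_min = 2 * pi * sqrt(a ** 3 / MU_EARTH) / 60
print(f"orbital period at {alt_km} km: {period_min:.0f} min")
```

That comes out to roughly an hour and a half per orbit.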
Yeah it's 1 hour but what if your orbit doesn't intersect with where you want to strike? What if the city is 5000 km _sideways_? Then you wait for the rotation of Earth to bring that location into your orbit path.
Keep in mind an orbit does not cover all of the Earth's surface. And unless the target is on the equator, there is no low earth orbit that can maintain its path over a target consistently.
But for detection, Is that actually easy? I don't know much about it, would it be done optically, or is there a better way? Are there concealment strategies? How about say painting the satellite black?
Terminal velocity for a streamlined rod of metal is pretty high. You're talking about tactical nuke levels of energy, delivered to a foot-wide spot on the ground.
De-orbiting them would consume a lot of delta-V.. plus if they're 'stored' in orbit they have a very narrow path where they can go in a certain period of time.
Could de-orbiting be partially managed by shooting them backward from the platform, using atmospheric resistance to bleed off the gained velocity of the platform?
Every space rocket is automatically an intercontinental ballistic missile. But why launch one rocket with 400 warheads from a publicly known launchpad when you can launch 400 ICBMs with several warheads each from 400 undisclosed locations?
Not every rocket - liquid fuel ones only to a very limited, logistically awkward and costly degree. As the US and Soviets learned, keeping fleets of liquid fueled ICBMs ready to go is dangerous and expensive. Storable fully solid fuel ICBMs such as current-generation US/Russian stuff are quite different.
A point of clarification, submarines are the only nuclear platform with "undisclosed locations." It is trivial for a nation-state to observe the construction and fitment of a fixed ICBM placement, or track the movements of a truck/rail mounted launcher.
It is certainly not trivial, and definitely out of reach of most nation states out there.
I'm sure U.S. at least tries to keep track of Russian mobile launchers, but to which degree it is successful we wouldn't know for sure. And pretty certain Russia would have problems live tracking launch vehicles (if the USA had any).
In a hot war with a country that has ASAT tech, being able to launch 400 low earth orbit sats -- low enough that they need to keep boosting, but also low enough that space debris is non-existent -- could make all the difference.
And as of 1846 pacific time, we have another fully successful launch. Second stage nominal low orbit. The first stage has been recovered for the 5th time. All sixty satellites released.
>Because the speed of light in a vacuum is 30% faster than in optical fiber, the latency of Starlink over long distances has the potential to be lower than any other option once laser links are available.
50% faster than light in fiber, fiber is 30% slower.
But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps. In terms of normal everyday consumers, what are the implications of Starlink outside of rural areas?
PS: Thanks for the summary and NO Starlink phone. I have gotten sick and tired of people jumping to conclusions suggesting Starlink will take over ISPs and mobile networks. And that is even on HN.
> But isn't it still relatively very low bandwidth? I think they tested a v2 with ~700Mbps
Are you saying 700Mbps is slow? Or is that all the bandwidth a single link can provide (meaning 700 divided by the number of subscribers)? Living in a flyover state, I was lucky at first to get much more than 10Mbps. Spectrum has now provided service to my area and I can get 400Mbps and am super happy.
From 1 minute before liftoff, the computers in the Falcon 9 take control of the whole process. From then on those computers monitor all relevant sensors to decide go vs no go, and control the ignition etc.
They added sunshades which should make the satellites nearly invisible from the ground, but they are only effective once the satellites have assumed their final orientation and orbit. They start in a much lower orbit and will likely be very visible in the first few days after launch. Then they enter a phase of orbit raising during which they deliberately rotate to be less visible, though they can still sometimes be seen until they reach their destination several weeks later.
The "first" launch was v0.9 satellites. There was also an earlier launch of prototype satellites. Numbering of Starlink launches is a little confused as a result.
Edit: stage 1 recovery success, and that was perhaps the clearest image of it landing I have ever seen, and I've been watching since 2015! Also, I miss the shots of LOX in zero G they used to do back then.
Ah, so that's what makes it 7th launch of 14 then?
Now I remember a launch around this time last year; some of the satellites were not able to maintain their orbit, but weren't the majority of them operational?
The first and second stages of a Falcon 9 use a combined ~163 000 litres (43 000 gallons) of RP-1 fuel. This is less than the fuel capacity of a Boeing 777-300ER.
So if the Starlink launches happen twice a month, the carbon footprint from the fuel would be at least an order of magnitude smaller than that of a single long haul route of an airline.
Building the satellites and the rocket (including the expendable second stage) is obviously pretty carbon intensive too, but fuel is probably what you were thinking about when posing this question.
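A very rough sketch of the CO2 per launch; the density and combustion figures below are ballpark assumptions for kerosene-like fuel, not SpaceX numbers:

```python
# Rough CO2 estimate for one Falcon 9 launch's RP-1 burn.
RP1_LITRES = 163_000     # combined first + second stage RP-1 load
RP1_DENSITY = 0.82       # kg/L, typical for RP-1 (assumed)
CO2_PER_KG = 3.1         # kg CO2 per kg of kerosene-like fuel burned (assumed)

fuel_kg = RP1_LITRES * RP1_DENSITY
co2_tonnes = fuel_kg * CO2_PER_KG / 1000
print(f"~{fuel_kg / 1000:.0f} t RP-1 -> ~{co2_tonnes:.0f} t CO2 per launch")
```

That works out to roughly 400 t of CO2 per launch, on the same order as a single long-haul widebody flight.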
Everyday Astronaut on YouTube has a great video on the environmental impact of rockets. 55 minutes long but it's fascinating. The TL;DW is they're not as bad as you'd expect. But not zero-emissions, obviously.
Elon has discussed using the sabatier process on earth to create methane from CO2 to fuel it. Presumably the motivation would be a mix of PR and practicing for mars.
> The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
Considering the pay-over-time model that SolarCity implemented, should we not expect the same here? Assuming the user antenna can be removed, it can be rented, just like a modem or router.
Relying on the antennas having a long ‘useful life’, it may not be too prohibitive if rented out.
The hardware for existing geostationary consumer-grade (cheap, sub $150/month service) Ku and Ka-band VSAT terminals, for use in really remote locations in the USA, is about a $800 to $1200 cost. It's absorbed into 24/36 month contract terms.
People who live in a really remote place and sign up for a 24 month term for some barely-usable VSAT service are usually disappointed to find out how firmly they're locked into the contract, when somebody builds a WISP in their area.
I would not be surprised if there's a terminal rental charge or 12/24/36 month contract terms offered.
>Considering the pay-over-time model that SolarCity implemented, should we not expect the same here?
Considering the pay-over-time model essentially bankrupted SolarCity, forcing Tesla to buy them out, for which Tesla are being sued by shareholders...I'm not sure that's the model to resurrect.
Random thought: are the satellite trajectories "dense enough" that in the future, ships/people trying to leave Earth will have to get a delay/launch window, with the Starlink satellite(s) possibly diverted around the area they're passing through?
No, think of the sky in terms of the surface of the earth. The Starlink satellites' paths are like a few highways stretched across the surface. Not only is each a tiny sliver cut across the globe, it's also at a very specific LEO, so a spacecraft wouldn't even stop there in the vast majority of cases.
I see yeah I meant just flying through it. It would help to see a scale model of the completed orbits as it looks "worse" than it is. I'm not against it, though it would be nice if it was free ha.
side note: not related but the Eccentric Orbits book about Iridium was really good imo.
>The user antennas are likely to be quite expensive at first (several thousand dollars). Cost reduction of the user antennas is the biggest hurdle Starlink currently faces. Nobody knows yet how much SpaceX will charge for the antenna or service.
One of Starlink's competitors, OneWeb, was able to acquire a user antenna which is apparently a breakthrough in cost reduction at $15 [0]. I would assume the manufacturer has a contractual agreement with OneWeb, but I can imagine that Starlink could develop a similarly priced antenna with its greater resources.
Just curious: have you been using Zettelkasten, org-roam, or a similar tool to take notes whenever you came across this topic, and are now publishing the compiled version?
> They are about the size of an extra-large pizza, so you won't be able to get a Starlink phone.
I'll be surprised if this doesn't shrink. I worked for a company in the mid-2000s that had some pretty cool IP which effectively shrank a briefcase-sized BGAN terminal down to a Pocket PC (pre iPhone days!) with Pocket PC-sized antenna strapped on the back.
People have worked on vehicles to get us to space for decades. It's not going to shrink much, and I don't think it'll be any less expensive. ... enter SpaceX
Sort of, but none of the debris will reach the ground. The current version of Starlink is designed to burn up completely in the atmosphere upon re-entry.
What sort of components are the satellites made up of?
Are we all happy with the idea of "burning up"? Won't there be long-term issues with not-completely-burnt-up parts floating around the sky and eventually being breathed/drunk/eaten in the biosphere?
I'm trying to ask this in a non-hyperbolic way; I don't want to sound alarmist, and I'm no "chem trail" loon, but "burning up" doesn't just make the material magically go away.
Any parts that didn't completely burn up (of which, again, there shouldn't be any) would not float; they would fall. But it's a good question whether the burned-up mass would have any bad effects. My intuition is that the amount of mass is so minuscule in comparison to the whole atmosphere that it really doesn't matter what the satellites are made of, as long as it's not radioactive. But I haven't done any math about it.
In a short-term view, yes, I agree, but it's more than just Starlink (we have all sorts of satellites made of highly refined materials falling from the sky), and it adds up over years... decades even.
I haven't seen articles on this, so either it's being ignored or it's not an issue. :-/
I strongly suspect it's the latter - since "Every day, Earth is bombarded with more than 100 tons of dust and sand-sized particles." [1] and the satellites burning up aren't going to approach that mass simply due to launch costs.
There is no uranium or plutonium in these satellites, nor anything else radioactive. On the other hand, multiplying the elemental abundances on this page [1] by the roughly 40,000 tons of material that hits the atmosphere per year [2], the stuff naturally burning up each year includes 0.3 kg of uranium, 1.6 kg of thorium, 3 kg of radioactive potassium isotopes, 10 kg of mercury, 18 kg of cadmium, 50 kg of lead, 72 kg of arsenic, and 120 tons of chromium, among other things. This stuff has burned up in the atmosphere all day long for billions of years.
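To put those quantities in perspective, here is a back-of-envelope comparison of Starlink re-entry mass against the natural infall figure quoted above. The per-satellite mass and replacement cycle are my assumptions for illustration, not official figures:

```python
# Rough scale check: how does constellation re-entry mass compare with
# natural meteoric infall?
# Assumptions (not official figures): ~260 kg per Starlink v1 satellite,
# a 12,000-satellite fleet fully replaced every ~5 years, and the
# ~100 tons/day of natural infall cited above.
SAT_MASS_KG = 260.0
CONSTELLATION_SIZE = 12_000
REPLACEMENT_CYCLE_YEARS = 5.0
NATURAL_INFALL_TONS_PER_DAY = 100.0

starlink_tons_per_year = (SAT_MASS_KG * CONSTELLATION_SIZE
                          / REPLACEMENT_CYCLE_YEARS / 1000.0)
natural_tons_per_year = NATURAL_INFALL_TONS_PER_DAY * 365.0

print(f"Starlink re-entry mass: ~{starlink_tons_per_year:.0f} t/year")
print(f"Natural infall:         ~{natural_tons_per_year:.0f} t/year")
print(f"Ratio: {starlink_tons_per_year / natural_tons_per_year:.1%}")
```

Under these assumptions the constellation adds on the order of a few percent to what already burns up naturally each year.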
Well Starlink will be the majority of all satellites pretty soon, so it really only matters for Starlink for the foreseeable future, if it matters at all.
Random FAQ about Starlink below:
Starlink's only current competitor, OneWeb, just filed for bankruptcy after launching only a handful of satellites, and is fishing for acquisition offers. Amazon is also planning something called Project Kuiper, but not much is known about it.
Starlink V2 will have 30,000 satellites, requiring hundreds of launches. Even once the initial fleet is launched, SpaceX will still need to maintain the constellation with many launches per year indefinitely.
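A quick sketch of the arithmetic behind those launch numbers (the 60-per-launch figure is from above; the ~5-year satellite lifetime driving replacement is my assumption, not an official figure):

```python
# Illustrative launch-cadence arithmetic for a 30,000-satellite fleet.
# Assumptions: 60 satellites per Falcon 9 launch, ~5-year satellite
# lifetime, so the entire fleet turns over every 5 years.
TOTAL_SATS = 30_000
SATS_PER_LAUNCH = 60
LIFETIME_YEARS = 5

initial_launches = TOTAL_SATS // SATS_PER_LAUNCH          # to build the fleet
replacements_per_year = TOTAL_SATS / LIFETIME_YEARS       # satellites/year
launches_per_year = replacements_per_year / SATS_PER_LAUNCH

print(f"Initial buildout: {initial_launches} launches")
print(f"Steady-state maintenance: {launches_per_year:.0f} launches/year")
```

Even with these optimistic assumptions, maintenance alone implies a launch roughly every few days indefinitely, which is part of why Starship's larger per-launch capacity matters so much.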
SpaceX's FCC application has many interesting details: https://licensing.fcc.gov/myibfs/download.do?attachment_key=...