Hacker News

I know the control algorithms are the mind-blowing part here but,

does anyone have any literature on how the rocket localized itself with respect to the chopstick arms? It must've been some combination of GPS and radar pings to the arms?

And then the onboard IMU to make sure it hits it straight.




Great question! Could just be Real-Time Kinematic (RTK) GPS like someone mentioned. Essentially the landing arms know their own position very precisely, so they can measure the tiny errors in the GPS data and send that correction to the rocket in real time as it's landing. Once the rocket gets very close it could also just be using vision systems to zero in on exactly where the chopsticks are.
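To make the idea concrete, here's a toy sketch of the differential-correction principle (real RTK additionally works on the carrier phase, so this is a simplification): a base station at a surveyed location computes the error in its own GPS fix, and a nearby rover subtracts that same error from its fix, since most of the error sources are shared locally.

```python
# Toy differential-GPS correction sketch (my speculation, not SpaceX's
# actual system; real RTK uses carrier-phase measurements for cm accuracy).
def correct(rover_fix, base_fix, base_truth):
    # Error observed at the base station with a precisely surveyed position.
    err = (base_fix[0] - base_truth[0], base_fix[1] - base_truth[1])
    # A nearby rover sees roughly the same error, so subtract it out.
    return (rover_fix[0] - err[0], rover_fix[1] - err[1])

base_truth = (0.0, 0.0)            # surveyed position of the base station
base_fix = (1.5, -0.75)            # its GPS reading, with shared local error
rover_fix = (51.5, 99.25)          # rover reading, same error baked in
print(correct(rover_fix, base_fix, base_truth))  # → (50.0, 100.0)
```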

To speculate more, they could also be using something like ultra-wideband (UWB) positioning. This relies on the same time-of-flight principle as GPS, but instead of satellites in orbit providing the precise timing information, you rely on various nearby ground stations. It would only be useful right at the final approach, the last couple hundred meters, but it's another way they could get very precise position information. (Fun fact: UWB positioning is also how iPhones can locate AirTags with centimeter accuracy.)
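The time-of-flight idea boils down to trilateration: each range measurement constrains you to a circle (sphere in 3-D) around a ground station, and intersecting them pins down your position. A minimal 2-D sketch with made-up anchor positions (subtracting the circle equations pairwise turns the problem into a linear system):

```python
import math

# Toy 2-D trilateration from ranges to three ground anchors at known
# positions (illustrative only; positions and geometry are invented).
def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Subtracting circle equations pairwise gives a linear 2x2 system
    # a*x + b*y = c in the unknown position (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1  # nonzero when anchors aren't collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = (30.0, 40.0)
ranges = [math.dist(truth, a) for a in anchors]   # simulated measurements
print(trilaterate(anchors, ranges))  # → recovers (30.0, 40.0)
```

In practice the ranges are noisy, so you'd use more than three anchors and a least-squares or Kalman-filter solution rather than an exact intersection.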


ooooo Yea forgot about RTK GPS. I've always found it to be so brittle, but that's because I'm in a city.

In the wide open sky, I’m guessing it’s pretty reliable.

Vision systems would be pretty useless with the low visibility of the smoke and fire. So I thought maybe it was some kind of radar configuration.

Anyways, I’d pay a lot of money to pick the brain of the GNC team here.


Why bother with GPS or other "absolute" coordinate systems? Once the rocket's in close, all that matters is relative position and orientation of the rocket with respect to the landing apparatus. Eg, if you had many sensors in known locations on the rocket and many sensors in known locations on the landing apparatus, and you could measure relative positions between all pairs of these sensors, you could get extremely precise relative position/orientation information without beaming information to satellites or whatever.
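One way to sketch that relative-measurement idea (purely hypothetical, not a known SpaceX design): if the tower could measure where each rocket-mounted marker sits in tower coordinates, the rocket's relative pose is just a rigid-transform fit (rotation plus translation) between the known body-frame marker layout and the measured tower-frame points. In 2-D that fit has a closed form:

```python
import math

# Hypothetical sketch: recover the rocket's 2-D pose relative to the tower
# from matched marker positions (body frame -> measured tower frame).
def fit_pose(body_pts, tower_pts):
    n = len(body_pts)
    bcx = sum(x for x, _ in body_pts) / n   # body-frame centroid
    bcy = sum(y for _, y in body_pts) / n
    tcx = sum(x for x, _ in tower_pts) / n  # tower-frame centroid
    tcy = sum(y for _, y in tower_pts) / n
    # Cross-covariance terms of the centered point sets; the optimal
    # rotation angle falls out of atan2 (2-D case of the Kabsch algorithm).
    s_cos = s_sin = 0.0
    for (bx, by), (tx, ty) in zip(body_pts, tower_pts):
        bx, by, tx, ty = bx - bcx, by - bcy, tx - tcx, ty - tcy
        s_cos += bx * tx + by * ty
        s_sin += bx * ty - by * tx
    theta = math.atan2(s_sin, s_cos)
    # Translation maps the rotated body centroid onto the tower centroid.
    c, s = math.cos(theta), math.sin(theta)
    return theta, (tcx - (c * bcx - s * bcy), tcy - (s * bcx + c * bcy))
```

With three or more non-collinear markers this gives the full relative pose without any absolute coordinate system; noisy measurements just make it a least-squares fit, which this closed form already handles.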


From the control point of view, isn't this exactly the same as F9 landing on a pad, except the pad is virtual, floating between the chopsticks and the ground? Of course one difference is that the approach needs to be from the correct direction.


I seem to remember some article mentioning the Falcon 9 using radar (plus presumably other sensors) and even having a landing-site map uploaded (mainly for the return-to-launch-site scenarios) with prioritized exclusion zones in case of a landing failure.


Hans Koenigsmann alluded to mapping during the CRS-16 post-flight press conference.

https://www.youtube.com/watch?v=tVSPCPoc8hs&t=373


A major difference is that F9 (landing on a wide flat pad) had quite a wide acceptable horizontal error, 10 meters or more, whereas I think this (landing between two chopsticks) needs something like ~1 meter accuracy in the radial direction.


Could just be differential/RTK GPS. You can get incredible precision with that.


super good question, especially with all the tilting involved, which would make visual servoing difficult. Maybe some form of beacons on the ground?





