Opportunity's and Spirit's original missions were planned to last 90 Martian days each. They ended up lasting over 8,000 combined. That is almost as impressive as getting the rovers to Mars in the first place.
I remember talking to a higher-up on the program at a FIRST robotics event who complained about them running so long. The rovers weren't making any major new discoveries, and were sucking up resources and mindshare, stopping him from kicking off and focusing on new projects. He said nobody would take the PR hit of deciding to just end a mission like this, so he was stuck.
Sounds like that person doesn't really understand that NASA is a political entity, and that the public-relations value of an ongoing successful mission is worth more long term than more scientifically pragmatic experiments in the immediate term.
Sounds like s/he perfectly understood that, but still didn't like it. Often (always?) politics gets in the way of actually getting things done in a smart, fast, efficient manner.
The thing is, you can substitute "people" for "politics" and get the same statement, which is why NASA's political work is at least as important to human space exploration (and, I would argue, long-term species survival) as the raw scientific work.
People have to be continually convinced that exploration of the solar system is a worthwhile expenditure of a lot of money that could be spent on other things. Seems obvious to those who are already thinking in the direction of where humanity is going, but most people do not.
It's still possible to simultaneously acknowledge the fact that we need to repeatedly convince people of space exploration being important while also disliking that this is necessary.
A small handful of people can still easily cost millions of dollars per year. And DSN time to talk to the rovers is expensive, probably in the neighborhood of $2,000 per hour.
How much DSN time do you need? Given the delay, I assume the only use is batch transfers, so cost per bit seems like the more useful metric (along with how much data is involved).
$5M a year seems like it would be very reasonable. The original 90-day price tag was $820M, and they've spent an additional ~$200M over the past decade and a half. Maybe it's really not worth it to spend $5M/year to extend the life further, but the scientific returns would have to be quite severely diminishing.
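As a sanity check on those figures (all rough numbers quoted in this thread, not official NASA accounting), the arithmetic works out to roughly:

```python
# Back-of-envelope check of the figures quoted above. All inputs are
# rough numbers from this thread, not official NASA accounting.
prime_mission_cost = 820e6   # ~$820M for build, launch, and the 90-day prime mission
extended_ops_cost = 200e6    # ~$200M of extended operations
extended_years = 15          # roughly 2004-2019

cost_per_year = extended_ops_cost / extended_years
print(f"Extended ops: ~${cost_per_year / 1e6:.0f}M per year")

# Marginal cost of an extra year of ops, as a fraction of the sunk cost:
print(f"Each extra year: ~{cost_per_year / prime_mission_cost:.1%} of the prime-mission budget")
```

So the ~$5M/year hypothetical above is actually below what was historically spent; the real figure was closer to $13M/year, still under 2% of the sunk cost per year.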
Looks like according to an old article linked on wikipedia they spent another $100+ million over 5-6 years to keep it going. So not wildly expensive, but not cheap either.
> I remember talking to a high up on the program at a FIRST robotics event who complained about them running so long.
Not surprising at all. I coached a FIRST team for several years. At this point they sell licensed robotics kits with canned instructions for kids to put together, and those kids win because of time limitations. The teams that build from scratch and actually contribute, discover, or learn anything at all are intentionally blocked from winning. The entire program is a disservice to the kids in it. Meanwhile smart kids are off doing their own projects that are 1000x as complicated as the most challenging FIRST challenges. It's just turned into a money burn.
> The teams that build from scratch and actually contribute, discover, or learn anything at all are intentionally blocked from winning
This is ridiculous, and I'm saying it is from the perspective of having been a student whose team finished fourth in FRC in a recent year (and won multiple events per year), and having volunteered and assisted multiple teams.
While it is true that a lot of lower-resource teams may be better off going with commercial off-the-shelf (COTS) components, such as the Greyt Elevator and Greyt Intake, teams that perform at a higher level will build everything from scratch, whether that means utilizing gussets or welding, building your own drivetrain or using the AndyMark prebuilt chassis.
While it is true that some kids may be better off doing research projects, there is nothing wrong with participating in a team. In my three years as a student, I was exposed to: developing solutions to split messages up and verify data integrity when communicating over serial with external microprocessors, utilizing version control, setting up continuous integration on our repositories, moving from deploying apps with Cordova to developing Progressive Web Apps (which forced me to learn to use a VPS on DigitalOcean), running a Docker swarm on hyper.sh, and storing files on S3 and using RDS.
FIRST is what you make of it, and I'm personally glad that someone holding your opinions towards the program is no longer actively interacting with kids in the program.
It's well known that the best teams are the ones that eschew using the kit of parts and have their NASA/SpaceX/Autocorp sponsors laser cut their robot chassis for them.
That story is very much oversold. They definitely designed the things with a longer lifespan in mind. NASA simply had more reasonable minimum specs, so if they died on day 220 nobody would call the program a failure.

They designed them to survive a Martian winter, which means at least 687+ Earth days was considered achievable. Dust was expected to be a larger issue and in many ways they lucked out, but more as a calculated risk than blind luck.

Distance makes little difference past the first day. Doing the same thing on the Moon would not be easier because it's closer. It's local conditions that make this hard, more so than simply getting to that point.
I can assure you that distance took its toll on the rovers, at least in my understanding: their wheel surfaces eroded, oil pumps failed, bearings degraded, and they ended up dragging dead wheels in the end.
Are you talking about the distance the rovers traveled while on Mars? Because that's not what Retric is talking about. Retric is talking about the distance from the Earth to Mars (hence the comparison to the Moon).
I think the book "Roving Mars" covers this the best. They were worried about a dust storm like the one that killed Opportunity, so they designed the rover to last 90 days even in the pitch darkness of a dust storm. To make sure they'd succeed even in this worst case, they added two additional petals worth of solar panels (IIRC, about a 50% increase in area) to the MERs.
Reading a contemporary description, they seemed to be very scared they wouldn't make the 90 days - hence this major and risky redesign. (The new panels had to open, and if one failed it would block off both the new solar panel, and one of the original ones.)
It worked, and so they had enough extra power to last through that first Martian year, where the winds blew the panels clean. And the rest is history.
I disagree. Even without accounting for expectations, making a machine that works for years without regular scheduled maintenance in extreme conditions is a monumental feat. Ask people who work in the oilfield, extreme heats, extreme humidity, etc.
The 90 days was because they didn't think a rover would last much past 90 days due to dust storms. They figured that within 90 days they'd encounter a bad dust storm that would cover the solar panels and leave them unable to collect power, and thus the battery would be drained and unable to recharge.

They didn't account for the winds on Mars cleaning the dust off the solar cells so well; it turned out dust storms weren't a huge issue until the latest one.

And from my understanding it wasn't the intensity of the latest storm that killed it; rather, the part of the rover's computer that kept track of time was shot, so it wasn't able to optimize when to go to sleep and when to wake up, causing it to run out of charge while it was dark.
I wasn't just commenting on the physical durability of the rovers themselves although that is a huge part of it. It is also impressive from both a political and an organizational standpoint to keep a program like this running successfully for so much longer than initial plans dictated.
NASA has numerous programs running for this length of time. One of the people working on Voyager showed up at my grad school to get a degree between the Uranus and Neptune encounters.
NSF, too. The VLA opened in 1978 and the first data recorded by it can be reduced by the current radio astronomy analysis software. In fact the software data format developed for the VLA (FITS) is now popular with high-end camera people... 41 years later.
JPL wrote a fascinating reflection on what engineering practices allowed this to happen. For any of you who have not experienced the NASA Knowledge Gateway, I would recommend. https://llis.nasa.gov/lesson/1743
Nice concise read. I think many lessons are well known but not acknowledged. For a high-quality product... test a lot, hire experienced people, do things in-house.

Meanwhile the software industry is... ship to prod ASAP, hire junior devs to reach head count, outsource to overseas firms and contractors.

I realize the risks are different... but management gets pissed when software causes financial losses and wonders why.
One systemic problem is that the ownership and management of most companies shifts significantly over a 15 year time span, whereas the people designing a new NASA program know that they (or people that they care about) would be picking up the pieces if they are sloppy.
I wonder whether family owned companies are closer to NASA in their testing/hiring practices?
If I remember correctly, the reason they did not expect the mission to last longer than that was that they expected the solar panels to get covered with dust and thus be rendered useless. I don't think they had banked on Martian winds regularly cleaning them.
Your recollection is correct. Consider too the dust accumulation predictions for the moon landing. It would appear that we (as humans) are not very good at dust estimates (dustimates?). This is reinforced by the continuing struggle/inability to manufacture dirt. As an avid composter with a ton (lol sorry) of experience, I can make you hummus for days. I can turn 90% of your organic household waste into growth medium, bokashi tea, or pre-dirt in anywhere from 6 months to 2 years. But if you want me to make dirt? I'll need a decade to get you a production level process that requires raw stocks, pre-processing, vermiculture, aquaculture, and many rounds of late stage funding. Dirt production stymies every modern industrial process because as it stands, you cannot reduce the time necessary for conversion from hummus to dirt. I find it beautifully tragic that terraforming Mars will require mass dirt transfer from earth (consider this tonnage conundrum and revel in the irony).
Don't take my word for it. Go out and try to make earth yourself. If you can reinvent this fundamental, elementary process, you will join Jonas Salk and Fritz Haber, and very few others, in changing The Game in a fundamental way.
Kim Stanley Robinson's Mars trilogy talks a lot about the "can't make dirt fast" problem, and I was surprised and impressed at how fascinating such a mundane-appearing problem could be. I had never thought about it before.
Note: I think you mean humus. Hummus is "a Levantine dip or spread made from cooked, mashed chickpeas... blended with tahini, olive oil, lemon juice, salt and garlic."
You are correct. Incidentally, humus is a delightful complement to a sound horticultural spread, not unlike its etymological cousin. It can make things better, but it cannot be the main course.
Jumping on the opportunity to ask somebody who knows about these things.

I used to have a huge garden, and made compost by simply dumping anything organic on a big pile. It worked great: low maintenance, never one problem, plenty of compost produced.

I now live in a flat, and have a composter using worms.

I can't make it practical for the life of me.

If I put it inside, I get flies, no matter how much carbon-rich material I put in it.

So I put it outside, but it dies from either heat or cold.

Even when I manage to avoid a worm genocide, the process is so slow. It does produce a fantastic liquid fertilizer in quantity, but the volume of organic matter consumed is nowhere close to what I need. I eat a lot of veggies and fruits, and within 2 weeks I have to put most of the scraps in the trash can while the worms are still working through the legacy pile.
Definitely! First off, I am biased in my compost preferences and it is always good to hear different opinions. Second, what works for one fails for another, so no sense in throwing more effort into a process that isn't working.
On to your situation. In-home composting is always tricky. My wife and I do indoor/outdoor because we have a yard. In your situation, I'd advise against vermicomposting for the same reasons you listed. I was the technical manager for an enterprise vermicompost startup. Our facility was a rowhouse basement and the owner of the property was a founder. Flies are unavoidable, as is the occasional mass die-off. They are fragile guys and minor mistakes have major consequences. The exact ratios escape me, but 1 lb of healthy, mature red wigglers can consume half a pound of green (fresh) cellulose in a day or two. We were engineering a soil additive, so we supplemented heavily with gray (dead) cellulose (cardboard, paper, etc.) in an attempt to hit profitable output. I will spare you the further details of our failure. Suffice to say that vermiculture is tricky in the best of circumstances.
My suggestion to you is twofold: use the Bokashi Method (https://www.planetnatural.com/composting-101/indoor-composti...) and find a friend with a garden to offload your Bokashi Tea. That link is one of the first hits on Google (read: not vetted) because if you go this route you are going to be reading a lot and there are many roads to Rome.
This is very short due to medium, but if you'd like to discuss in depth and at length I'd be more than happy to. Both composting and waste neutralization are passions of mine. Just let me know how to contact you should you desire.
In terms of exoplanetary interchange, time recedes and tonnage expands as the concern. Without a space elevator or MAC transfer solution, you'd be effectively FedExing teacups of dirt to a greenhouse in a Sahara contained on Antarctica... on Mars.
It's also a matter of federal budgeting; when the rovers landed, NASA definitely did not have enough funds budgeted to keep tasking the rovers for 8,000 days. But as the rovers continued to operate, NASA requested more funds to continue, and Congress provided them.
Once a robotic mission accomplishes its primary objective, NASA has a great story to tell about marginal cost vs return of additional funds. It's not that uncommon; heck they still get funding to keep the Voyager program operating.
The 90 days was, as mentioned, a lower bound, not an upper bound. They were planned to last at least 90 days.
Framed this way it's not entirely surprising they lasted longer, especially given the fortunate turn of events with the martian wind cleaning the solar panels of dust.
This of course does not in any way diminish the engineering and ingenuity involved with designing and operating the rovers.
I’ve never found it impressive, it’s a horrible padded estimate meant to cover NASA and make them look good. A real estimate needs to be useful and somewhat close to the actual outcome. There’s no way they were this off without it being on purpose.
Imagine if your retirement planner or accountant was off by 44X in their number crunches for you. Would you say they were good at estimating?
I celebrate the achievement just fine, it’s the horrible estimate that is shoved in my face constantly I have issues with. It represents a dumbing down of science for the masses that is just getting worse year after year that I resent. The rover isn’t good enough as is, we are told to be impressed by a nearly 50-fold wrong estimate that is all anyone knows about the rover. I knew it would be one of the top comments on this post. As a scientist I appreciate precision and good science, not hero worship and bureaucratic PR estimates.
Is it not acceptable that they were conservative in their predictions about the effects of Martian dust? I imagine they didn't have a whole lot of data points to inform their estimate.
I don't disagree with your overall point, but if you care about accuracy, you shouldn't conflate an engineering goal with an estimate. I couldn't find the original estimate, but considering the unknowns we're talking about and the sunk costs, surely a 4-8x safety margin (a 1-2 year estimate) is not unreasonable. So we're at most talking about a 15-fold miss, not 50.
Another comment explained this. They set a goal of 90 days to get the data they needed. When they presented the proposal for budgetary approval, 90 days of data collection was considered worthwhile enough to greenlight the project. So they accomplished it. However once you already have a working rover on the surface you might as well operate it as long as it lasts because the marginal costs are minimal compared to the initial costs. So they kept requesting additional funding to keep the program running.
No one predicted that the rover would break on day 91. They designed it to last at least 90 days. Between standard rocket-science safety factors and preparing for the pessimistic end of all the uncertain risks, there is much room to go beyond your original mission.
Imagine you design a bridge to support 100 tons. No one would accuse you of mispredicting if it holds up at 200 tons. They will accuse you of negligence if it collapses at 105 tons.
Steven Squyres, the principal investigator of the program, came to Microsoft to give a talk. One of the best talks I've been to.

So many interesting tidbits, like the parts about the rovers' expected short lifetime due to the dust, and how (if I remember correctly) they addressed this by shaking the solar panels like wings.

He talked about the rover drivers, and how they all had to live in special light-cycle-controlled buildings to get used to working on Martian days vs. Earth days (the extra ~40 minutes per day adds up over time).
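The drift those drivers lived with is easy to quantify: a mean Martian sol is about 24 h 39 m 35 s, so a Mars-time schedule slips relative to Earth clocks by roughly 40 minutes per sol. A quick sketch:

```python
# Why rover operators drift relative to Earth time: a Martian sol is
# about 39.6 minutes longer than an Earth day, so a "Mars time" work
# schedule shifts later every day relative to local Earth clocks.
SOL_SECONDS = 88775.244   # mean Martian solar day, in SI seconds
DAY_SECONDS = 86400.0     # Earth day, in SI seconds

drift_per_sol_min = (SOL_SECONDS - DAY_SECONDS) / 60
print(f"Drift per sol: {drift_per_sol_min:.1f} minutes")

# After roughly 36 sols the schedule has wrapped all the way around the clock:
sols_to_wrap = DAY_SECONDS / (SOL_SECONDS - DAY_SECONDS)
print(f"Full 24 h wrap every {sols_to_wrap:.0f} sols")
```

So in a bit over a month of Mars-time shifts, "start of sol" cycles through every hour of the Earth day, which is why the controlled-light-cycle buildings were needed.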
He wrote a book, a worthy read. The printing I got had some amazing pictures in it.
I was really fortunate to have him as an astronomy professor right around the time of the launch. He's great at making complex things easy to understand, and his enthusiasm is contagious. Highly recommend his book, but if you've only got an hour here's a good video conversation.
"Reported to have a unit cost somewhere between US$200,000 and US$300,000, RAD6000 computers were released for sale in the general commercial market in 1996"
Anybody know why the per-unit cost is so high? Low yields, or is it just that much more expensive to make?
1) The yield rates for spaceflight-qualified chips are very, very low - like 1%-5% or so. The chips are inspected when they come out of the fab, and only the most perfect ones are given a spaceflight certification. The rest of the chips are used for other, less stringent applications (test boards, or military/embedded applications).
2) Spaceflight parts have significant paper trails. For metal parts, they are traced from the moment a lot of material comes out of the mill, and every time it is touched or changes hands thereafter that fact is recorded. Same thing with chips. Every chip has a "traveler" associated with it that records when it was manufactured, how it was stored, etc. Keeping those records costs a surprising amount of money. Handling the parts so the paper trail can be kept costs even more. You have to organize your logistics train such that every part is individually trackable. That reduces efficiency and adds cost.
Commercial aircraft also track provenance very carefully. I wonder how much the two systems have informed each other over the years.
You would never want a part involved in a stress test to be reused, and you certainly don't want test parts anywhere near a production craft. This isn't like IT where you can cannibalize parts from a QA box to put into a production server.
Having worked on mission-critical commercial aircraft electronics, I can assure you every part is stress tested before it goes on a plane - both after it's first made and every time it's in for service. Supposedly there has been research showing that stress testing does not adversely affect reliability, at least at this quality level.

Also, the paper trail is quite literally paper. It's kind of amazing how slow the aviation industry is to adopt new things.
It's possible I'm mucking up the jargon, but isn't stress testing electronics what most of us might call "burn-in"? I definitely want the flight control computer burnt in before I sit on the airplane.
I was thinking more of physical components, like an aluminum bracket, and you don't want a part with metal fatigue being installed as new.
I saw a video once saying that SpaceX uses general-purpose computers on its rockets instead of special-purpose hardware. If I'm not mistaken, there are six of them, with a checking system to ensure their outputs are the same.
Yes and a SpaceX rocket computer is not in space very long and is mostly in the low radiation environment of LEO. These Mars rovers spent years in a high radiation environment and received a large total ionizing dose of radiation. Having multiple computers doesn't do anything for total ionizing dose, they are all going to fail around the same time.
I'm not sure about the RAD6000 being discussed here, but its successor, the RAD750, is fabbed with silicon-on-sapphire to help with total ionizing dose. For single event upsets, there is triple modular redundancy for all logic in the CPU.
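For the curious, the bitwise majority vote at the heart of triple modular redundancy is simple to sketch. This is an illustrative toy in Python, not the RAD750's actual logic (which is done in hardware, gate by gate):

```python
# Toy sketch of triple modular redundancy (TMR): three copies of a value
# are computed independently, and a bitwise majority vote masks a
# single-event upset (bit flip) in any one copy.
def tmr_vote(a: int, b: int, c: int) -> int:
    # For each bit position, take the majority of (a, b, c):
    # the bit is set iff at least two of the three inputs have it set.
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0010
flipped = good ^ 0b0000_1000   # one copy takes a radiation-induced bit flip
print(bin(tmr_vote(good, good, flipped)))  # the flip is voted away
```

The key property is that the vote happens per bit, so even multiple upsets are masked as long as no two copies flip the *same* bit.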
Those "have N copies and compare them" systems then have the issue that whatever does the comparison is a single point of failure. You could use multiple comparison units and have a second level comparison check them, but then that is your single point of failure, and so on.
I've been told, but never actually looked it up, that there is a theorem that proves you always have to have at least one single point of failure.
I don't know if the following is actually true, or just a rumor, but I've heard that at least one aircraft whose mission called for very high reliability didn't have a comparison unit: the redundancy extended all the way to having the 3 independent flight control computers each control a separate actuator on each flight control surface. If one of the systems went bad and tried to move the surface incorrectly, the other two would physically overpower it.
That still has a single point of failure, but now that point is the control surface itself. If your control surface itself has failed it no longer matters if the 3 computers controlling it agree.
> I've been told, but never actually looked it up, that there is a theorem that proves you always have to have at least one single point of failure.
In what context? There's a theorem that arbitrarily-reliable computation can be done with noisy components, as long as the noise is below some threshold (e.g. picture less than 1 error per 10 operations). [1]
1: von Neumann, J. (1956). "Probabilistic Logics and Synthesis of Reliable Organisms from Unreliable Components", in Automata Studies, eds. C. Shannon and J. McCarthy, Princeton University Press, pp. 43–98 http://www.cyclify.com/wiki/images/a/af/Von_Neumann_Probabil...
So long as you can trade off robustness for performance, the fact that there must be a single point of failure is less important. You make the single point of failure more robust. In the example of the control surface, perhaps some ionizing radiation could flip bits in the flight control computers, but it takes a physical collision to damage the control surface.
To argue otherwise is to imply that all designs and all systems are equally robust, which is clearly not true.
This is something I'd always wondered about and never gotten a straight answer to. You're the first person I've seen who explicitly confirmed my suspicion that every system has a single point of failure. Would love to find a reference for this...
I thought the way these things worked is that each of the N computers can see the state of the other N-1 computers; if your local state differs from the majority opinion you overwrite it, thereby joining the majority.
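A toy sketch of that majority-overwrite idea (my guess at the general scheme, not any specific flight computer's design) would look something like:

```python
# Sketch of decentralized majority sync: each of N computers reads all N
# states and overwrites its own with the majority value, so a diverged
# node rejoins the majority without any central comparison unit.
from collections import Counter

def sync_round(states):
    # Every node observes all states and adopts the most common one.
    majority, _count = Counter(states).most_common(1)[0]
    return [majority] * len(states)

states = ["A", "A", "B"]      # one node has diverged
print(sync_round(states))     # all nodes converge back to "A"
```

Note this still assumes a node that has diverged is honest enough to run the vote correctly; handling arbitrarily faulty nodes is the (much harder) Byzantine fault tolerance problem.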
I'm curious if this SPoF theory stops at computer/real world interfaces (such as the control surface <-> computers). Or if this generalizes into purely computational models. In my mind I can't find any purely computational model using distributed techniques that succumbs to the SPoF "theory." But the computer/real world interface is trivially obvious in just about every context.
I don't think it's strictly true that you always have to have a single point of failure. Trivial counterexamples: 6 people carrying a sofa, an ant colony, BitTorrent.
The closest thing I can remember encountering to what you describe is the "Contracrostipunctus" chapter in Douglas Hofstadter's Gödel, Escher, Bach, where he writes a dialog featuring record players as an analogy to Gödel's Incompleteness Theorem (which only applies to "formal systems" - descriptive mathematical languages). He does go on to explore a real-world example of the principle in the form of viruses - a cell cannot fully defend against DNA modification using only instructions found in its DNA. The same principle applies to cracking copy protection in games - no matter how elaborate the validity checks, there's always a single point of failure in the form of the final decision - "if(checks_pass){run_game()}" - which can be trivially short-circuited with a debugger.

I'm not a good enough mathematician to fully understand the limits of Gödel's Theorem. But it seems to me that all of the above applications are examples of some sort of well-defined formal computational system, and you can't generalize to "everything has a single point of failure" without some carefully defined rules as to what constitutes the boundaries of the system.
Low sales, meaning the R&D cost (which will be higher than for a non-rad-hard chip anyway) can't be as easily amortized. And probably half of it or so is just the .gov markup.
The costs of employing a team to do the radiation hardening design work, running a non-mainstream fab (I believe these chips were silicon-on-sapphire) and small production runs. There's a lot of overheads amortised on to a very small number of chips.
My understanding is that most of that is setup costs for retooling a fab, which requires moving around and recalibrating some very specialized equipment. The numbers I heard were dozens of engineers working a total of one to two thousand man hours to set up the process, make and verify the chips, then switch the fab back to the processes their other customers usually order. Since these semiconductor fabs have high upfront and fixed operating costs, they need to have a high utilization to be profitable - preventing anyone from having a specialized fab that makes only RAD hardened chips.
AFAIK the lower cost VORAGO designs require far less retooling so they're a lot cheaper with existing processes.
I would assume the kinds of use cases that people might need radiation hardened equipment for just have people willing to spend much more money. They're charging what people are willing to pay, not how much it costs to make.
Excellent question. I assumed that a Mars minute differs from an Earth minute, but I just learned that it doesn't. Seconds, minutes and hours are universal; the length of a day is what varies by planet, moon, etc. Funny that I never thought of that before :)
You're likely joking, but just in case: on Earth the second is not defined as a fraction of the day; it's an SI unit defined as a constant count of energy-level transitions of a caesium-133 atom.
Thanks for that clarification. After this, I was able to find the International System of Units Wikipedia page, which clearly defines a second per your summary. It's a relief to see that it's a constant definition like that of the kilogram.
I was surprised to learn (from the wiki page) that the kelvin, mole, and ampere do not have exact numerical definitions yet, though I guess that's expected to change in May of this year.
And then there's the candela, still basically defined by how luminous whale blubber is when it is burning:
> Current (1979): The luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 5.4×10^14 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
> Note: both old and new definitions are approximately the luminous intensity of a whale blubber candle burning modestly bright, in the late 19th century called a "candlepower" or a "candle".
Counting minutes obviously came first, as devices like hourglasses and water clocks are old (circa 2000 BCE).

Regarding their precision, I got interested, and per this paper [1] and the Wikipedia page about traditional Chinese timekeeping [2], water clocks from two millennia ago might have had around 15-minute precision.
For second level precision, it seems modern mechanical clocks were required, and they only precede the discovery of caesium by a few centuries.
A minute is still 60 seconds. The only difference is that some clocks will occasionally go an entire minute without incrementing their minute counter. This is a property of the map, not the territory.
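Leap seconds are the concrete example of this relabeling: they change the labels on the UTC timeline without stretching any SI second. The minute beginning 2016-12-31 23:59 UTC, for instance, carried 61 second labels:

```python
# A leap-second minute relabels time; it doesn't stretch the SI second.
# The UTC minute starting 2016-12-31 23:59 ran ..., :58, :59, :60, then
# rolled to 00:00:00, so 61 SI seconds elapsed under one "minute" label.
labels = [f"23:59:{s:02d}" for s in range(60)] + ["23:59:60"]
print(labels[-3:])      # the last labels before midnight
print(len(labels))      # 61 elapsed SI seconds
```

(Note that Python's `datetime` itself refuses `second=60`, which is exactly the map-vs-territory mismatch being described.)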
More interesting would be if someone from another species finds them a couple of million years in the future and wonders what their purpose was.
Cassini was deliberately pushed into Saturn's atmosphere and destroyed (so as not to potentially contaminate habitable worlds) so we probably don't need to worry about that.
Also, there's a great documentary TV series called "7 Days Out" on Netflix which covers the last week of the Cassini mission.
Wheels and axles are probably universal. Don't believe the sci-fi movies about floating transports.

A copper wire around an iron core makes a good electromagnet. Deducing from there that it is a motor is not difficult. (It's even connected to the wheels.)

I'm not sure about the battery. I guess it's guessable because it has some unusual metals.

The lenses (clear, thin pieces of glass with rounded surfaces) are also probably universal. Finding a few of them in line, connected by moving parts with gears, will confirm that they form the zoom of the camera.

Probably at the end of the camera is the sensor; I don't know how long it will keep its photosensitivity. It is connected by wires to the big chunk of wires and weird electric parts, so that must be the main board. The main board is also connected to the motors of the wheels.

The high-gain antenna is not parabolic (IIUC); it would have been an easy task to recognize if it were parabolic :( . The low-gain antenna is a stick. By this time they already know the technology level of the motors, and they will deduce that the communication uses electromagnetic waves. So a big metal stick connected to wires is a good low-gain antenna candidate. The other weird thing has similar connections, and it is orientable (they can see the inner gears of the support arm); perhaps opening it up, the inner structure will also help. They will deduce that it is another antenna.

[I personally think that for long-distance transmission (especially in space) electromagnetic waves are irreplaceable. They may have better encoding and filtering methods, but I guess they will mainly use electromagnetic waves.]
You know what the craziest thing about all of the Mars exploration programs is to me?

The first full-up test of the whole system happens live, AT MARS. There isn't a good way to test the entire entry, descent, and landing sequence because Earth's atmosphere is so different from Mars's. I know NASA works hard to test parts of it in the vicinity of Earth, but I can't imagine designing something so complicated (especially the system for Curiosity) and never being able to test it completely before the real thing.
I've actually read that the upper part of the atmosphere, where a returning Falcon 9 booster does retro burns to slow down, is similar to Mars (in terms of pressure, I guess).
I had heard about them doing some study of using an inflatable heat shield and testing it in Earth's upper atmosphere. I guess it could be used for Earth or Mars. Neat!
Opportunity is going to be the standard for longevity that all future rovers are measured against until one surpasses it.
If you had said in 2004 that you expected a rover to last 5 years, you'd have been called crazy. Here we are in 2019, and after ~15 years of Opportunity driving around up there, the idea of a rover lasting 5 years seems perfectly normal. Opportunity has raised the bar for all future missions.
Curiosity is doing well at about 6.5 years into what was originally a 2-year mission, but it probably won't stay fully operational as long as Opportunity did. Its RTG could last that long, but it will eventually stop producing enough power for the rover to move.
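To make the RTG point concrete, here's a back-of-the-envelope sketch. The Pu-238 fuel decays with an 87.7-year half-life, but thermocouple degradation makes the *electrical* output fall faster, so a single effective annual decline rate is used here. Both the ~110 W initial output and the ~4.8%/year rate are assumed round numbers, not official MMRTG figures.

```python
# Rough sketch of RTG electrical output over time.
# P0 and the decline rate are assumed round numbers; the real MMRTG
# behavior depends on fuel decay plus thermocouple degradation.
P0_WATTS = 110.0        # assumed initial electrical output
ANNUAL_DECLINE = 0.048  # assumed effective decline, ~4.8%/year

def rtg_power(years):
    """Electrical output after `years`, folding decay and
    thermocouple degradation into one effective rate."""
    return P0_WATTS * (1 - ANNUAL_DECLINE) ** years

for t in (0, 2, 6.5, 14):
    print(f"year {t:5.1f}: ~{rtg_power(t):5.1f} W")
```

Even after 14 years the output only roughly halves, which is why the power supply itself isn't the near-term limit; the limit is when the shrinking power budget can no longer cover energy-hungry activities like driving.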
Is it possible for someone other than NASA to send a message to the rover? I'd be happy throwing a little money at a project to ping it monthly to see if there is a response.
Yes and unlikely :-) The protocols are all documented but the ability to receive signals from the rover requires a pretty sensitive receiver and antenna combination. If you have the resources to build a 10m or 15m steerable radio antenna parabolic dish then you could probably manage it.
That can't be that expensive unless you want it storm-proofed. But it wouldn't be super cheap either; the receiver is probably going to be a bigger problem.
Would be a good starting point. But keeping it stable under wind load is going to be the major challenge. That would be one heck of a project, it would likely take a few years of your time to pull it off.
This quora answer (https://www.quora.com/What-bands-signals-and-protocols-are-u...) claims that even the 70m DSN antennas have a hard time hearing the rover-to-Earth transmissions. That is nearly 15x the diameter, and therefore over 200x the collecting area, of a 4.8m dish like the one the guy built.
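To put rough numbers on the diameter-vs-area point, here's a quick sketch of parabolic dish gain using the standard aperture formula G = η(πD/λ)². The 8.4 GHz X-band frequency and 0.6 aperture efficiency are assumed typical values, not the actual DSN figures.

```python
import math

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    """Approximate boresight gain of a parabolic dish in dBi,
    using G = efficiency * (pi * D / wavelength)^2."""
    wavelength = 3e8 / freq_hz
    gain_linear = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain_linear)

X_BAND = 8.4e9  # Hz; assumed, roughly the DSN X-band downlink

g_dsn = dish_gain_dbi(70.0, X_BAND)  # 70 m DSN dish
g_diy = dish_gain_dbi(4.8, X_BAND)   # 4.8 m homebuilt dish

area_ratio = (70.0 / 4.8) ** 2       # collecting area scales with D^2
print(f"70 m dish:  {g_dsn:.1f} dBi")
print(f"4.8 m dish: {g_diy:.1f} dBi")
print(f"Area ratio: {area_ratio:.0f}x, gain gap: {g_dsn - g_diy:.1f} dB")
```

The ~213x area ratio corresponds to a ~23 dB gain gap, so a signal that is already marginal on a 70m dish would be buried in noise on the small one regardless of receiver quality.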
That said, there are no doubt interesting hacks you can do to help this but steerability is always going to be a concern to maximize the SNR of the very weak signal coming from the rover.
Yes, the precision required would be insane. That is also what makes this an interesting project, even if you fail you will still learn a ton about all kinds of engineering principles. Expensive lesson though!
Another big factor would likely be how far Mars is from Earth; at the close extreme it would probably be substantially easier to pull this off.
What an interesting article by the way, thank you for that link.
You could send a message. The source code is available[1], but you would need to build or have access to a large antenna, which would be extremely difficult and/or expensive.
It's physically possible but I'd have to imagine there's a security key required for the Rover to accept a message and there's no way in hell NASA would give you that key.
I get the argument for why nuclear weapons aren't physically locked -- there's always people guarding them -- but not having security on a remotely operated billion-dollar device seems crazy to me, even if the technical barriers to establishing a link are high.
The person answering doesn't really comment on deep space missions, but I think the conclusion you draw (no encrypted link) is almost certainly right for Spirit and Opportunity, which were launched in 2003.
The thinking used to be that a 70m radio dish (and all the accompanying deep knowledge about pointing, relative velocity, channel codes, etc.) would be enough of an obstacle.
This thinking has definitively changed in the meantime.
I built some warehousing apps in VFP 20 years ago and they're still running fine. Every time I go ship something the owner asks me to prune and reindex the DB and that's it, another year running smooth.
An IT industry magazine website (InformationWeek? not sure) had an article about an IT engineer at JPL who was picked and trained to be one of the drivers of the rovers. He was just a normal IT guy, but got the chance to be a driver for the rover.
He recalled that the first night after he spent a day driving the rover on Mars, he couldn't sleep at home. He had just driven a vehicle on Mars. Certainly one of the first in human history.
As one of the PIs said at the NASA briefing: if you can bring Opportunity home, I'd prefer you bring back 180kg of Mars rocks instead. We already know what the rover is made of. :)
It wasn’t supposed to last only 90 days, the initial mission specifications were for a 90 day mission.
While it's an engineering marvel, no one thought that it would last only 90 days if it successfully landed and deployed. The 90 days was a minimum figure for the design and also the initial operating budget for the mission.
It wasn't actually expected to last much longer than 90 days, that it did was essentially a miracle, not just good engineering.
At the time the MER vehicles were built we knew enough about Martian dust to know that it would be a severe limiting factor on solar powered vehicles, but we didn't have enough experience with long lived solar powered vehicles to know all the details. We didn't know about "cleaning events" which were too irregular to fully plan for anyway. Ultimately, we got lucky, and we were able to take advantage of that luck on the fly. For example, we found that even with heavy dust accumulation the rovers could survive with careful power management during Summer, and during Winter we could conveniently park them on a South facing incline to maximize power.
Nevertheless, it is telling that for both rovers the thing that did them in was the thing that was always expected to limit their longevity: solar power generation. Spirit had a wheel get stuck and then couldn't park at a good angle during winter, and the absence of cleaning events during that period caused power generation to fall below a critical threshold. Opportunity got done in by an epic planet-wide dust storm, which blocked sunlight long enough for the batteries to run out and for the vehicle to get dangerously cold (likely resulting in critical equipment failures).
Before we had that experience we had no idea that these things were possible, and nobody in their right mind would have bet money that the rovers would have been able to survive for years on Mars.
Yep, one of the reasons it lasted so long was that it was too small for an RTG, so as long as the solar panels were still somewhat intact it could operate even if the battery was essentially completely dead.
The 90-day figure is simply the time period for which NASA asked for money to operate the thing. No one is going to ask for 2, 5, let alone 15 years' worth of operating budget; you usually get a few months at a time and extend it based on your needs.
I get annoyed seeing this fact too, but if it makes the general public feel goodwill towards NASA and improves public perception of our space program, I guess it's okay in the scheme of things.
Semantics. The "supposed to" part means it was designed for 90 days, not that it was expected to fail after 90 days. I was pointing out the marvellous over-engineering.
I recall having a chat with someone with a poor understanding of chemistry, and they asked why we couldn't "just have tiny robots take the carbon dioxide and turn it back into gasoline again"
Steve Squyres from Cornell answered something kinda related on the jpl channel today[0].
"If you had the opportunity to bring a hundred and eighty kilograms of stuff back from the surface of Mars, the last thing I wanna bring is something I know exactly what is made of."
Opportunity did her mission amazingly and is resting in peace in the place where she was designed to be.
We learned a lot from her and her sister mission; now the effort is better spent building on their shoulders.
The discussion around humans vs robots in space is a historically deep conversation with scientific, technical, and philosophical aspects. Do not simply brush it off without understanding the context.
AI doesn't work nearly as well as you seem to think it does. For something as excruciatingly difficult as a space mission, humans have proven to be well up to the task. The headline of this article here is one illustrative data point.
Furthermore, many methods that are currently classified as "AI" act because of very complex and often opaque emergent behavior, and we often have a hard time (or don't know at all) why a CNN for example behaves as it does, or even how exactly it behaves. Do you want something that you neither understand nor can predict to perform a crucial job in your space mission?
If space exploration were a robot-first, robot-only affair, we would probably have cheaper, better, far more capable robots today... and also more humans in space than the 3 that currently occupy the ISS.
actually, since the end of the cold war, space exploration has been a robot-first endeavor (and, apart from the ISS, robot-only -- think of the countless exploratory missions, hubble, the rovers etc). the ISS barely counts as space exploration
you do realize the mars rover was supposed to last 90 days? and we're still receiving data from the voyagers.
any robot missions to mars will still be remotely orchestrated; however, AI can be used to manage the time lag between earth and mars
We absolutely should make extensive use of robotics on their own and as preparation for manned missions. Stuff like sending ships and fuel factories in advance of people, to place fully-fueled ships for return missions, habitations, etc. for them on arrival.
Manned missions still make sense for two reasons: 1) humans are (presently) more versatile than computers/robots, and this will probably continue to be the case for 20+ years (although one human can do the work of many through technological augmentation), and 2) the emotional/sociological/etc. value of humans actually being there (plus in the longer term, actual settlement).
I'd like to be approximately the 1000th person to move to Mars, sometime in the next 15-30 years.
We've found water on Mars, but we've also found ancient river and lake beds, deltas, and minerals that can only be found in water on areas of Mars where no water currently exists.
We wouldn't know any of that without having gone there to explore it.
By learning about the mechanisms behind why Mars went from a water-rich planet to its current state, we learn more about what could happen to Earth, its closest neighbor.
There also remains the possibility Mars could currently harbor life, or once did and there's evidence of that to uncover. Mars represents our best chance of finding direct evidence of extraterrestrial life.