
Marc actually discusses how safety is extremely challenging, e.g. people assume you can just have the robot freeze, but often freezing can cause more harm.

So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

See: T=37:50




That depends.

If you say "at least as good as a human", then it's challenging (by which I mean it's an ethics problem: Uber has ... ethical issues, but Tesla is much better already, and so far I'd say Google seems to have it covered).

If you say "perfectly safe, I don't care about humans" then it's impossible.

> So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

This is an example of the "absolute safety" standard. By that standard, if you were fair, you wouldn't let humans near each other. You certainly wouldn't let 2 humans lift a stretcher and run with it, and yet we do that all the time, live on public television.

https://www.youtube.com/watch?v=CPbdnafO93c

That's reality, not an abstract standard. That's behavior humans find acceptable to do to other humans who likely have a fractured bone: throwing them onto a field from half a meter up (and on occasion, onto concrete or the metal of an ambulance, broken bone first if you're truly out of luck. Ouch). As long as it's not done on purpose ... it's fine.

Over 1000 humans are killed yearly in the US just because other humans can't wait to sober up before driving. That's how much humans really care about safety. How much humans imagine/pretend they care about safety ...


> people assume you can just have the robot freeze, but often freezing can cause more harm.

Is this true of humans as well, or is it something unique to robots with their hard shells and hydraulics?

In other words, would freezing be a safe fallback strategy for a soft robot?


It's true of everything. If you freeze when you are not in equilibrium, you will... fall over. Maintaining equilibrium is an active process.


That is why you build robots with the size and strength of a child first: to limit the amount of damage they can do, and to allow a human adult to overpower them if necessary.


Unfortunately that is not financially rewarding enough and will be skipped for higher risk applications. But I think you are spot on.


This is a concept in robotics. You have "static stable" robots, where you can just slam on the emergency brakes, cut power, and expect that nothing will break or die or kill after that point. Gantries are good examples, as are 3D printers, most factory robots, elevators, ...

You also have "dynamic stable" systems. They're systems that only remain stable if they're under control, which means that the software can never abort, as that can cause a disaster. A good example is a plane autopilot, or a car autopilot. If the software encounters a critical error, shutting down probably results in more damage than just giving (hopefully slightly) invalid output, in some cases while switching to an extended emergency stop process (like in a car) or attempting to proceed normally despite failures in the control system (planes). Systems are expected to simply do the best they can when they fail.

There's a special kind of dynamic stable system: ones that, once stopped, can't be restarted at all, ever, so there simply is no "good state", and you can't even go through a lengthy shutdown process and restart. Essentially, systems that self-destruct when the software decides to abort. Rockets and some types of reactors fall under this category, as do quite a few chemical robots, as does of course any living being.

Of course, all interesting systems are dynamic stable: anything that balances, anything that moves faster than a certain function of its weight, anything that has active components independent of the control (like any reactor), anything that builds up momentum (like a crane, or most moving platforms), factory lines, ...

As a rule dynamic stable systems are much harder to control, as at every moment you must have a plan ready to get to a safe state, instead of just taking into account what you want to achieve.
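
As a rough sketch of what that means in practice (a toy example with invented names and numbers, not any real control stack): the controller below refreshes a safe-stop plan on every tick, and when the normal control path faults it executes that plan instead of just cutting power.

    # Toy sketch, all names and numbers invented: a dynamically stable controller
    # that never simply halts. Every tick it refreshes a plan to reach a safe
    # state, and on failure it executes that plan instead of cutting power.

    class DynamicStableController:
        def __init__(self, velocity=2.0):
            self.velocity = velocity      # toy state: current speed in m/s
            self.safe_stop_plan = []      # decelerating commands, refreshed each tick

        def plan_safe_stop(self):
            # Fallback plan: ramp speed down to zero over a few ticks rather than
            # freezing instantly (the "just freeze" move that can cause more harm).
            return [self.velocity * f for f in (0.75, 0.5, 0.25, 0.0)]

        def tick(self, target_velocity, fault=False):
            if not fault:
                command = target_velocity                   # normal control output
                self.safe_stop_plan = self.plan_safe_stop()
            else:
                # Control failure: keep producing output, follow the prepared plan.
                command = self.safe_stop_plan.pop(0) if self.safe_stop_plan else 0.0
            self.velocity = command
            return command

    ctrl = DynamicStableController()
    print(ctrl.tick(2.0))              # normal tick -> 2.0
    print(ctrl.tick(2.0, fault=True))  # fault -> 1.5, first step of the safe-stop ramp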

Humans are dynamic stable systems. Inside a human there are many layers of control, none of which can be safely turned off, some of which are so critical that if they fail for just a few seconds the human dies. Also, humans tend to stand up; looking through PubMed you can easily find just how badly a human body can be damaged by simply collapsing on the spot. A human body, however, cannot deal with extended loss of control even when safely lying down.

Most of the control functions in the human body don't actually happen in the brain; they happen within the body proper. In some cases it's a neural circuit directly attached to muscle tissue (famously the heart has a big one, but almost every muscle and many glands have something); sometimes it's portions of the nervous system that can work independently of the rest, where the control sits in circuits at the points where nerves meet (the lungs and the womb are examples: a human body can give birth successfully while decapitated, probably even without a spinal cord, and a human with a severed vagus nerve will keep breathing for quite a while, although not indefinitely); some of it is neural circuits connected to glands that work through chemical messages (the pituitary, the adrenal glands). Beyond that there are many more layers: the spinal cord, the cortex, and finally the neocortex (which itself is subdivided into nearly 60 more-or-less separate parts).


I bet it’s pretty doable to program a robot to detect a yelp and then pull back 20%.

Many human reflexes are pretty simple heuristics.
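
As a toy sketch of that kind of reflex (the threshold, names, and numbers are made up for illustration, not a claim about any real system):

    # Toy sketch of a "yelp -> pull back 20%" reflex. The threshold and names
    # are invented for illustration only.

    YELP_THRESHOLD_DB = 70.0   # hypothetical loudness above which we treat sound as a yelp

    def reflex_grip_force(current_force_newtons, mic_level_db):
        """Return the adjusted grip force after a simple acoustic reflex check."""
        if mic_level_db >= YELP_THRESHOLD_DB:
            return current_force_newtons * 0.8   # pull back 20%
        return current_force_newtons

    print(reflex_grip_force(10.0, 65.0))  # quiet -> keep 10.0 N
    print(reflex_grip_force(10.0, 80.0))  # yelp detected -> back off to 8.0 N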

Let them play with other baby robots and human trainers and they will learn. Same as animal babies.

Dogs are quite dangerous, but they learn how to not hurt other creatures.



