MIT AGI: Boston Dynamics [video] (youtube.com)
222 points by AlanTuring on April 2, 2018 | hide | past | favorite | 56 comments



The best thing about Boston Dynamics is that their robots don't work. Maybe SpotMini will, but their track record is terrible and I'm thankful for it. I get the distinct impression that these self-described 'badboys' would happily hand over a fully functional android to the military, complete with gun mounts. There are a lot of great robotics companies to work for that are also deeply principled about doing good; this is not one of them.


Why do you say they don't work? The videos seem impressive.


The GP works at an indirect competitor to Boston Dynamics.

Since the invective lacks supporting evidence, we can assume it is based upon resentment and/or jealousy, i.e., “how dare their unweaponized, prototype tech be light-years ahead of our own tech! They’re unprincipled!”


Examples of such companies?


Kindred.ai


Weren't they getting funds from DARPA?


what's evil about the military?


“Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed. This world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists, the hopes of its children. The cost of one modern heavy bomber is this: a modern brick school in more than 30 cities. It is two electric power plants, each serving a town of 60,000 population. It is two fine, fully equipped hospitals. It is some fifty miles of concrete pavement. We pay for a single fighter with a half-million bushels of wheat. We pay for a single destroyer with new homes that could have housed more than 8,000 people. . . . This is not a way of life at all, in any true sense. Under the cloud of threatening war, it is humanity hanging from a cross of iron”

-Dwight D. Eisenhower

https://en.m.wikipedia.org/wiki/Chance_for_Peace_speech


There's definitely something evil (extremely unethical?) about handing the decision/burden to take someone's life to machines.

Also, I don't think your remark is a fair representation of the parent comment.


No no, they're right, the US military is evil. Your statement is also true though.


Please see: past 15 years of nonstop war and corruption that hasn't done any good for anyone.


Thanks for sharing this wonderful lecture. I would like to know why this guy's mission is to have Robotic Intelligence >= Human Intelligence. He did not divulge his reason behind that. Any guesses?


What's there to question? You're making assumptions by even asking about his motives. Are you implying he should not be doing this, ethically or something? Von Neumann responded to Oppenheimer's lament over the invention of the bomb with: "Sometimes someone confesses a sin in order to take credit for it."


I skimmed the lecture after the first ten minutes and, as far as I can tell, while it seemed like it would be interesting to someone curious about what Boston Dynamics is doing, it doesn't really say anything about AGI beyond "we eventually want to do AGI and we think doing robot stuff well will get us there." If there's something I missed, could someone say?

Since this is part of an MIT lecture series on AGI, I think the GP is sort-of justified in also asking where that part is.


That is a great quote; indeed that happens at all levels. It sounds like false modesty used as a cover.


It's humblebrag's cousin - the shamebrag.


Pretty much every conception of AGI is of greater than human intelligence, at least in some aspects.

The reason? Otherwise why build it? Yes, there are cases where subhuman-level AGI would have utility, but the remaining cases are far more numerous and have far more utility.


A target of exactly-human intelligence is potentially useful for automating All The Jobs and entering a post-scarcity era without also starting an AI singularity.

Though many approaches to “exactly-human intelligence” would just result in humans that happen to be robots, and we probably don’t want that, especially if the point is to make them do all our work for us.

I don’t know how to refer specifically to the alternative, though—an intelligence that can solve “human-hard” problems, without needing anything like sentience or self-determined goals/desires that would push it toward wanting things orthogonal to its designed purpose. Human-adjacent AGI? Human-competitive Tool AI?


Much of your response seems to orbit around the utility function, which is the devilishly tricky part from the start. Getting that part right is much more important than a specific level of intelligence.


Not sure why you're being downvoted for bringing up a valid point. Though I'd argue the reason is more that stopping at human level is a hard optimization to hit.

Once there's a way to train a general intelligence it's likely hard to contain the improvement - that's why the goal alignment problem is important. Even the current human architecture running at a billion operations per second instead of its current 100 or so would be a problem.

Human general intelligence is constrained by things like the power available from eating and the need for a head that can fit through a birth canal; AGI constraints are a lot less limiting and AGI can improve faster.


Also by all the other human beings placing limits on what any individual, however smart, can do (i.e. taking over the world). A superhuman AI is going to come into a world full of existing intelligences, some of them machine which may be close to super intelligent already, along with the billions of humans and all their organizations.


I think the existing intelligences and organizations won't matter, the super intelligence will be a step change like the difference between a human and an ant. If the goals are not aligned up front it'll be a problem.

To put it into perspective: if you can run the human architecture on an AGI at a billion operations per second, it's like compressing a historical human civilization's worth of learning into a couple of hours.
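A rough back-of-envelope (my own illustrative numbers, not from the lecture: ~100 "operations" per second for a human, ~10,000 years of recorded civilization) lands in the same ballpark:

    # Illustrative back-of-envelope only; both constants are assumptions.
    human_ops_per_sec = 100
    agi_ops_per_sec = 1_000_000_000
    speedup = agi_ops_per_sec / human_ops_per_sec      # 10,000,000x

    civilization_years = 10_000
    civilization_seconds = civilization_years * 365.25 * 24 * 3600

    subjective_hours = civilization_seconds / speedup / 3600
    print(f"{subjective_hours:.1f} hours")             # ~8.8 hours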

Maybe for some reason this learning process will be slow or maybe there's some reason why it can't scale up quickly, but based on the learning speed in narrow areas that seems unlikely.

There's also the fact that natural selection, which is a relatively brute-force selection mechanism, still results in general problem-solving brains being everywhere; it doesn't seem like something particularly rare.


> run the human architecture on an AGI at a billion operations per second

Maybe a single human, but can you run an entire world civilization and its environment (basically Earth) in a sped-up manner? Because one super-fast human-level intelligence is still constrained in a way that all of civilization is not. What is one human sped up a billion times? Is that 1 billion humans? We already have several of those units operating around the clock, and those units have access to the world's resources, whereas one sped-up mind will not, unless it can gain control of the world.


I said human architecture: a human-like ability to solve general problems, sped up, without the other human constraints (like needing to eat) and focused on solving a problem. It isn't like a billion independent humans just doing random things; maybe if they could all focus and coordinate on a single goal, but the communication overhead from that would still make it different, in addition to the other biological constraints.

You may not need that much access to the world to learn/infer a lot about it [1] and if a superintelligent AI needed access to more in order to achieve its goal it'd probably be able to get it.

[1] https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...


There are multiple dimensions to scaling. A large number of subhuman-intelligent agents can beat a small number of superhuman-intelligences (and vice versa, depending on specifics), and certainly beats not building anything.


> but the remaining cases are far more numerous and have far more utility

But why do we want to replace ourselves? Do we just want the machines to do everything for us? Take care of us like we're pets?


Different people have different priorities; some subset of people must desire to be taken care of like pets. That said, the relationship does not have to have a master/pet dynamic to it. There are alternatives where ASI takes specific roles away from humanity (like various governmental, administrative, or legal roles), especially those roles where humans consistently fall short and failure has very negative consequences. The driving motivations for ASIs will determine what kind of relationship we have with them. They may all have the potential to lead to mass subjugation, but that isn't set in stone; we could end up as partners in a shared future.

I for one hope they can fortify my deep solipsism with more detail, my own personal paradise Matrix. Or maybe they can just take care of me while I waste away on drugs, ensuring no negative externalities.


There is a value to human life beyond just "how smart you are".

>Otherwise why build it?

Because it would be better to send robots into dangerous situations than it is to send human beings.


This is why I mention that there are specific cases for subhuman levels of intelligence.


Possibly to make the world a better place.


For the shareholders.


It's excellent that they've got large-scale practical commercial applications in mind, rather than the niche search-and-rescue scenario that every university robotics researcher seems to talk about as their imagined application.


Very cool lecture!

It would be interesting to hear him speak about the pros and cons of different body plans. Are three- or six-legged robots useful, compared to four-legged ones? What about multi-armed humanoid robots?


Someone asked that question and he kinda punted on the answer. He said humanoid robots are theoretically useful for operating in spaces designed for humans, and also get orders of magnitude better responses from the public, but spreading out the motors and balancing tech in a quadruped gives much more space to work in.

See: T=46min


Interesting:

- mentions military application without further comment, then dwells on elder care

- 21:05 robotics platform like "the android of robotics"

- 38:50 struggling with safety

- 48:00 "I'm sure we will use learning before too long." I'm surprised they aren't using a lot of machine learning here.


For the vast majority of robotics and controls work, traditional methods often have the best (implementation + research)/time value. Using any kind of machine learning usually means a lot of research and implementation time for widely varying results. Maybe if you had some ML specialists who could accelerate implementation and point you down the right path, ML approaches could become more popular.

The simplest and most practical use of ML is in CV (computer vision). I'd be surprised if they weren't doing some of that.


They aren't. You see QR codes everywhere. I feel they are at the point I was at before going into deep learning: they've recognized the buzz pattern of an overhyped tech and won't believe it provides immediate gains until they try it.


The video also pairs well with the penultimate Black Mirror episode in the latest season (S4E5, "Metalhead").


When you specify that the robot should open the door, and then a human interferes, how do you ensure the robot doesn't yank the door abruptly to knock the human back or swing its arm into the human to clear that space?


Asimov's three rules of robotics? Are those being implemented?


The whole point of Asimov's three rules of robotics was to discuss how they make no sense in the real world, as the problem domain is too hard to be reduced like that.


Yep.

If you follow his Robots series right through to the end of the Foundation series, the theme is how robots R. Giskard and R. Daneel "evolved" to not follow a strict and rigid interpretation of the 3 laws, instead taking actions that are of long-term benefit to mankind even if that might mean short-term harm to a few individuals.

You also have the Solarians, who'd simply ended up programming their robots to only recognize Solarians as human; so if you're an Earthling or an Auroran, a robot will have no qualms about murdering you. That points out the biggest flaw in the whole concept of the 3 laws: even in the Robots/Foundation universe, where they have been implemented with a great deal of success, they're only as reliable as those who define them.


Marc actually discusses how safety is extremely challenging, e.g. people assume you can just have the robot freeze, but often freezing can cause more harm.

So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

See: T=37:50


That depends.

If you say "at least as good as a human" then it's challenging. (by which I mean it's an ethics problem. Uber has ... ethical issues, but Tesla is much better already, and so far I'd say Google seems to have it covered)

If you say "perfectly safe, I don't care about humans" then it's impossible.

> So for now they actually do not let the robots operate in physical contact with humans. The closest they’ve come to testing the robots is working with a human to lift a stretcher.

This is an example of the "absolute safety" standard. By that standard, if you were fair, you wouldn't let humans near each other. You certainly wouldn't let 2 humans lift a stretcher and run with it, and yet we do that all the time, live on public television.

https://www.youtube.com/watch?v=CPbdnafO93c

That's reality, not an abstract standard. That's what humans find acceptable behavior done to other humans who likely have a fractured bone: throwing them onto a field from half a meter high (and on occasion onto concrete or the metal of an ambulance, with the broken bone first if you're truly out of luck. Ouch). As long as it's not done on purpose ... it's fine.

Over 1000 humans are killed yearly in the US just because other humans can't wait to sober up before driving. That's how much humans really care about safety. How much humans imagine/pretend they care about safety ...


> people assume you can just have the robot freeze, but often freezing can cause more harm.

Is this true of humans as well, or is it something unique to robots with their hard shells and hydraulics?

In other words, would freezing be a safe fallback strategy for a soft robot?


It's true of everything. If you freeze when you are not in equilibrium, you will... fall over. Maintaining equilibrium is an active process.


That is why you build robots with the size and strength of a child first, to limit the amount of damage they can do, and to allow a human adult to overpower them if necessary.


Unfortunately that is not financially rewarding enough and will be skipped for higher risk applications. But I think you are spot on.


This is a concept in robotics. You have "static stable" robots, where you can just slam on the emergency brakes, cut power and expect that nothing will break or die or kill after that point. Things like gantries are good examples, or 3d printers, or most factory robots, elevators, ...

You also have "dynamic stable" systems. They're systems that only remain stable if they're under control, which means that the software can never abort, as that can cause a disaster. A good example is a plane autopilot, or a car autopilot. If the software encounters a critical error, shutting down probably results in more damage than just giving (hopefully slightly) invalid output, in some cases while switching to an extended emergency stop process (like in a car) or attempting to proceed normally despite failures in the control system (planes). Systems are expected to simply do the best they can when they fail.

There's a special kind of dynamic stable system: systems that, once stopped, can't be restarted at all, ever, so there simply is no "good state" at all, and you can't even go through a lengthy shutdown process and restart. Essentially, these are systems that self-destruct when the software decides to abort. Rockets, some types of reactors, and quite a few chemical robots fall under this category, as does of course any living being.

Of course all interesting systems are dynamic stable: anything that has to balance, anything that moves faster than a certain function of its weight, anything that has active components independent of the control (like any reactor), anything that builds up momentum (like a crane, or most moving platforms), factory lines, ...

As a rule dynamic stable systems are much harder to control, as at every moment you must have a plan ready to get to a safe state, instead of just taking into account what you want to achieve.
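To make the distinction concrete, here is a minimal sketch (hypothetical robot interface, purely illustrative, not anything Boston Dynamics actually runs): the static stable loop can fall back to cutting power, while the dynamic stable loop has to keep a plan to a safe state ready on every tick and execute it on failure instead of just stopping.

    # Hypothetical interface; a sketch of the control-loop difference only.

    def static_stable_loop(robot):
        while robot.running():
            try:
                robot.apply(robot.compute_command())
            except Exception:
                robot.cut_power()   # a gantry or 3D printer can simply stop
                return

    def dynamic_stable_loop(robot):
        safe_plan = robot.plan_to_safe_state()              # e.g. a controlled stop
        while robot.running():
            try:
                safe_plan = robot.plan_to_safe_state()      # refresh the fallback every tick
                robot.apply(robot.compute_command())
            except Exception:
                robot.execute(safe_plan)  # keep actuating: freezing means falling over
                return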

Humans are dynamic stable systems. Inside a human there are many layers of control, none of which can be safely turned off, some of which are so critical that if they fail for just a few seconds the human dies. Also, humans tend to stand up; looking through PubMed you can easily find just how badly a human body can be damaged by simply collapsing on the spot. A human body, however, cannot deal with extended loss of control even when safely lying down.

Most of the control functions in the human body don't actually happen in the brain, but within the body proper ... In some cases it's a neural circuit directly attached to muscle tissue (famously the heart has a big one, but almost every muscle and many glands have something); sometimes it's portions of the nervous system that can work independently of the rest, where the control sits in circuits at points where nerves meet (the lungs and the womb are examples: a human body can give birth successfully decapitated, probably even without a spinal cord, and a human with a severed vagus nerve will keep breathing for quite a while, although not indefinitely); some of it is neural circuits connected to glands that work through chemical messages (the pituitary gland, the adrenal glands). Beyond that there are many more layers: the spinal cord, the cortex, and finally the neocortex (which itself is subdivided into nearly 60 parts that are more or less separate).


I bet it’s pretty doable to program a robot to detect a yelp and then pull back 20%.

Many human reflexes are pretty simple heuristics.

Let them play with other baby robots and human trainers and they will learn. Same as animal babies.

Dogs are quite dangerous, but they learn how to not hurt other creatures.
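As a toy illustration of that kind of reflex (hypothetical sensor readings and force values, nothing more):

    # Toy reflex: back off the applied force by 20% when a yelp-loud sound is heard.
    YELP_THRESHOLD_DB = 70.0

    def reflex_step(mic_level_db, current_force_newtons):
        if mic_level_db > YELP_THRESHOLD_DB:
            return current_force_newtons * 0.8   # pull back 20%
        return current_force_newtons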


Stephen Wolfram says they're impossible: https://www.youtube.com/watch?v=P7kX7BuHSFI&t=4387


I'm sure Darwin's law also applies.


Give all the humans robot arms. Rules of nature, Jack!


> Unfortunately, the above is the best available resolution of slides. Thanks for understanding. We're always learning and improving.

Hmm, something doesn't sit right with me on this one. It's not just slides; it's videos too. It seems like they're apologizing for a mistake they might have made, as opposed to having intentionally downscaled the videos. Is it just a coincidence that the first time some of these videos are shown (not all), they happen to be extremely low-res?


I recommend the entire MIT course on AGI http://AGI.mit.edu

Great guest lecturers.


Nice tiled floor, I guess that helps a bit with SLAM (just being jealous, great job!! :) )



