Toyota Research claims breakthrough in teaching robots new behaviors (tri.global)
403 points by geox on Sept 20, 2023 | 243 comments



Having been a member of the robot learning community both in grad school and now in industry, I'd actually like to attribute something properly here, since it seems that TRI is (deservedly so, I will agree wholeheartedly) receiving most of the praise:

The core of these advancements is Diffusion Policy [1], which Prof. Shuran Song's lab at Columbia (before her recent move to Stanford) developed and pioneered. I'd suggest everyone view the original project website [2]; it has a ton of amazing, challenging real-world experiments.

It was a community favorite for the Best Paper Award at this year's R:SS conference [3]. I remember our lab (and all the other learning labs in our robotics department) absolutely dissecting this paper. I know of people who've pivoted entirely away from their projects involving behavior cloning/imitation learning to this approach, which deals with multi-modal action spaces much more naturally.

Prof. Song is an absolute rockstar in robotics right now, with several wonderful approaches that scale elegantly to the real world, including IRP [4] (which won Best Paper at R:SS 2022), FlingBot [5], Scaling Up and Distilling Down [6], and much more. I recommend checking out her lab website too.

[1] - https://arxiv.org/abs/2303.04137

[2] - https://diffusion-policy.cs.columbia.edu/

[3] - https://roboticsconference.org/program/awards/

[4] - https://irp.cs.columbia.edu/

[5] - https://flingbot.cs.columbia.edu/

[6] - https://www.cs.columbia.edu/~huy/scalingup/


To be fair, they do credit Professor Song and the paper you linked. TRI is also listed as a collaborator on the paper.

> Diffusion Policy: TRI and our collaborators in Professor Song’s group at Columbia University developed a new, powerful generative-AI approach to behavior learning. This approach, called Diffusion Policy, enables easy and rapid behavior teaching from demonstration.


Interesting that we have a genius robotics Dr. Song for real vs. Star Trek's Dr. Soong :)


My dad, who worked on military IFF before he retired, met someone in the UK intelligence community whose actual name was James Bond.

Or so he said…


There are actually 1,000+ people named James Bond, so it's likely. Although fathers are the biggest liars; mine told me he swam across the ocean.


I'm glad that she skipped Dr. Soong's ambition that preceded his work on robotics!


It should be noted that Diffusion Policy (not to mention IRP) was also apparently joint work with TRI.


Can anyone ELI5 (well, or, "explain like I'm someone who understands how autoencoders, transformers & convolutional networks work") diffusion?

What makes it work so much better than alternatives mentioned above?


I haven't read the Diffusion Policy paper yet, so I don't know what they do differently. But I can ELI5 image diffusion models, like Stable Diffusion. Essentially, you add random noise to an image and then ask the model to predict that noise, such that if you remove the noise the model detected, you recover the original image. Once the model has been trained enough on the noise-removal task, you can pass in pure random noise, ask the model to remove noise from that noise-only image, then subtract a little bit of the noise the model suggested, and do it again. And again, for multiple steps; eventually all the noise is removed and you end up with an image "dreamed" by the model from random noise. You can also condition the noise removal with things like text or other images, to guide the process toward a certain target image.
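
For the curious, here's roughly what that sampling loop looks like as code. A toy sketch with a made-up step size and a stub in place of the trained network, not any particular paper's schedule:

    import numpy as np

    def sample(predict_noise, steps=50, shape=(64, 64)):
        x = np.random.randn(*shape)                  # start from pure noise
        for t in reversed(range(1, steps + 1)):
            eps_hat = predict_noise(x, t)            # model's noise estimate
            x = x - 0.1 * eps_hat                    # remove a little of it
            if t > 1:
                x += 0.05 * np.random.randn(*shape)  # re-inject fresh noise
        return x                                     # the "dreamed" image

    # stub standing in for a trained network, just so the sketch runs
    img = sample(lambda x, t: 0.5 * x)

In the robotics case, the robot's observations play the same conditioning role that the text prompt plays here.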


It seems some researchers in her lab were also involved with Toyota.


> our lab

Which lab are you referring to?


I meant the academic lab that I was a part of while in grad school (would like to keep that anonymous for now, it's a smallish community).

Btw, I was at R:SS 2022 and meeting the Skydio autonomy team was one of the highlights of my career as a robotics engineer!


You may as well credit the information theorists, mathematicians, and physicists who laid out the fundamentals that brought us here.

They died before hardware achieved their decades-old visions. Not much of this work is a net-new description; it's more a matter of reconciling old descriptions with observation, now that we can actually build the old ideas.


Prompt: ChatGPT, write a generic anti-academia rant against robotics research


For anyone interested, here[0] is the YouTube channel of Russ Tedrake, which has:

    - 6.4210 (2023) Robotic Manipulation
    - 6.8210 (2023) Underactuated Robotics
[0] https://www.youtube.com/@underactuated5171


Cool to see Russ Tedrake's recent work! His online course Underactuated Robotics is a very good way to get a grasp on the complexities faced in robotics.

It's exciting to see someone featured here with deeper knowledge than "flex-tape-slap an LLM onto robotics", which describes the majority of robot learning work upvoted on HN.

There's more than just language to be solved before we can have proper embodied agents in the chaotic real world.


Website isn't loading for me, but I found the video on the Toyota Research YouTube channel:

https://www.youtube.com/watch?v=w-CGSQAO5-Q


Thanks. In the video around 2:40, he describes it as a "kindergarten for robots"; that's an interesting way to think about it. I wonder if it would be possible to crowdsource the training of new tasks with a standardized training protocol? That way you bid on the task you want, whoever solves it gets a bounty, and everyone benefits. The point is there's a long tail of tasks and a centralized lab probably can't do them all.


Google was doing something similar, and it was on HN about a month ago.[1]

I wonder how much force feedback they have. Is that big round squishy thing in the videos sort of like a big finger, with lots of pressure sensors? People have built area pressure sensors before, as far back as the 1980s, but nobody knew what to do with all that data back then. Today, too much sensor data is far less of a problem.

I once took a crack at this problem by equipping a robot arm with an end wrench. The idea was that it would feel around for a bolt head, get the wrench onto it, and turn. A 6-DOF force sensor is enough for that. But this was pre-deep-learning, and I didn't get very far, although I did build the wrench robot setup.
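
For reference, the classic pre-learning recipe for that kind of contact search is a force-guided spiral: sweep outward until the force reading spikes. A rough sketch; the move/read hookups are hypothetical stubs standing in for the arm and the force sensor:

    import math

    def spiral_search(move_to, read_fz, pitch=0.5, step=0.2, max_r=10.0):
        theta, r = 0.0, 0.0
        while r < max_r:
            move_to(r * math.cos(theta), r * math.sin(theta))  # mm offsets
            if abs(read_fz()) > 5.0:     # newtons; tune per setup
                return True              # wrench bumped the bolt head
            theta += step
            r = pitch * theta / (2 * math.pi)
        return False                     # swept the area, found nothing

    # no-op stubs so the sketch runs
    spiral_search(move_to=lambda x, y: None, read_fz=lambda: 0.0)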

[1] https://news.ycombinator.com/item?id=37167698


The squishy thing is essentially an inflated balloon with a camera inside, which monitors the balloon's deformations: https://punyo.tech


Nice. Cameras are so small and cheap now you can do that. Computing optical flow is cheap now, too. This is something you can make from cell phone parts. Early designs were sheets of resistive plastic between crossing grids of flexible printed circuits.
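
With OpenCV, the sensing side of a sensor like that is a few lines of dense optical flow. Random frames here just so the sketch runs; in practice they'd come from the camera watching the printed pattern:

    import cv2
    import numpy as np

    prev = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
    curr = np.random.randint(0, 255, (240, 320), dtype=np.uint8)

    # one (dx, dy) displacement per pixel; applied to the dot pattern,
    # this is effectively a contact/shear image of the bubble surface
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    deformation = np.linalg.norm(flow, axis=2)
    print("peak deformation (px):", deformation.max())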


This looks impressive. Even more so than the Boston Dynamics demonstrations.

Flipping a pancake is extremely difficult because each pancake is different. I know these videos must be cherry-picked, but being able to train a robot to do this just by demonstrating feels like a massive leap.


Flipping a pancake was done in 2010: https://youtu.be/W_gxLKSsSIE?si=HDyNXe1Ys_eFXiVU What looks impressive to humans is easy for robots, and vice versa. Another case in point: robot juggling was done in the 1990s, and to date we do not have a robot that can open any door reliably like a human. Kind of like Moravec's paradox.


To be fair it is far more complex for a robot to grip a spatula and use that spatula on a griddle than to use dynamic motion to flip a pancake in a pan.


Ehhh.

Solving any one problem with robotic manipulation isn’t all that hard. It takes a lot of trial and error, but in general if the task is constrained you can solve it reliably. The trick is to solve *new* tasks without resorting to all that fine tuning every time. Which is what Russ is claiming here. He’s training an LLM with a corpus of one-off policies for solving specific manipulation tasks, and claiming to get robust ad hoc policies from it for previously unsolved tasks.

If this actually works, it’s pretty important. But that’s the core claim: that he can solve ad hoc tasks without training or hand tuning.


  > He’s training an LLM with a corpus of one-off policies for solving specific manipulation tasks, and claiming to get robust ad hoc policies from it for previously unsolved tasks.

It seems clear that many people do not understand that this is the key breakthrough: solving arbitrary tasks after learning previous, unrelated tasks.

In my opinion that really is a good definition of intelligence, and puts this technique at the forefront of machine intelligence.


Is the pancake and spatula problem actually that constrained though?

I know it isn’t as open ended as plenty of more important problems in robotics, but this doesn’t strike me as easy at all.

I've only dabbled in robotics as an entry-level hobbyist, so I really don't know the answer.


It’s constrained enough to be tractable.


Fair enough. When would you say it stops being tractable? What single, practical thing could we add to this problem to make it intractable?


Flipping a pancake in a "random kitchen" would be much more difficult and have many of the same issues as the door problem.

It's hard to point to a single thing that would make "flipping pancakes" intractable, it's sort of the other way around, to usefully flip pancakes in the same way as a person takes a lot of skills chained together.

The "door problem" is a sort of compendium of many real-world skills, identifying the door, understanding its affordances and how to grip / manipulate them, whether to push or pull the door, predicting the trajectory of the door when opened, estimating the mass of the door and applying the right amount of force, understanding if there any springs or pulls on the door and how it must be held to traverse through it. Etc. There are also a ton of things I'm missing that are so fundamental one tends to take them for granted, like knowing your own size and that you can't fit through a tiny doorway.

I think you can ramp towards the "door problem" in difficulty by slowly relaxing constraints. A video linked above (not article) shows "can flip a pancake successfully with a particular pan (you are already holding) and pancake with a fixed camera and visual markers". Ok, now do it in varying lighting conditions. With no visual markers. With different camera views. Different pancakes. Real pancakes (which are not rigid, and sometimes stick to the pan). Different pans. Now you have to pick up the pan. Use a stove. Different stoves. Identify griddle vs pan and use the right flipping technique. Find everything and do it all in a messy kitchen... eventually you're getting to same ballpark as the "door problem".


Physicist here (so very naive on these topics). I'm wondering how to compare the steps you mention regarding the door problem (especially the predictive ones, e.g. about the trajectory of the door as it opens) with how humans open doors. Surely people don't stop in front of a door and begin planning things out; rather, they seem to go for it and adjust on the fly. Is this an approach that won't work in robotics? Why not?


So classical robotics yeah, people used to write code for each step of opening a door. Practically speaking you would probably not do motion planning on the door, you would just code it up with a bunch of heuristics like, try to be over here in relation to the doorframe because that's a good opening spot and will probably work. Ok you're in the right place? Now, move gripper towards the door handle... etc. Bunch of hacks. Put enough hacks together and you can kinda sorta open (some) doors. Oh this is a SLIDING door? Damn we forgot to code for that...
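
To make the caricature concrete, the hand-coded pipeline really does boil down to something like this. Hypothetical robot API, with no-op stubs so the sketch runs:

    from types import SimpleNamespace

    def open_door(robot, door):
        robot.move_to(door.approach_pose)  # heuristic "good opening spot"
        handle = robot.detect("handle")    # classical perception step
        robot.grasp(handle)
        robot.rotate_gripper(-30)          # assumes a lever handle...
        robot.push(0.8)                    # ...and a push-to-open door
        # a sliding door violates every line above; that's the fragility

    robot = SimpleNamespace(move_to=print, detect=lambda name: name,
                            grasp=print, rotate_gripper=print, push=print)
    open_door(robot, SimpleNamespace(approach_pose="front of doorframe"))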

The way things are going is sensors (cameras, force, etc) and neural networks. You let the robot try a bunch of ways of opening doors, sometimes it doors itself in the face, eventually it'll figure out good places to stand based on what the door looks like. The more doors you make it try to open hopefully the better it gets at generalising over the task of opening doors. The hacks/heuristics are really still there but the robot is supposed to learn them.

> Surely people don’t stop in front of a door and begin planning things out, rather they seem to go for it and adjust on the fly, is this an approach that won’t work in robotics? Why not?

Yeah, figuring out how to do this is basically "the problem". Most people don't have a sense or feeling of "planning things out" as they open a door because we reached "unconscious competence" at that task. We definitely have predictions of what is going to happen as we start opening the door based on prior experience and our observations so far. If reality diverges from our expectations we will experience surprise, make some new predictions, take actions to resolve the surprise, etc.

Not sure that anyone has ever studied how people open doors in detail, it'd be interesting. I bet there are a ton of subtle micro behaviours. One that I know is, if you hear kids running in the house it is a good idea to keep a foot planted in front of you as you approach the door, because those guys will absolutely fling or crash doors open right into your face.


Thank you, great answer. As soon as I had asked my question I realized that we must have a lot of unconscious behaviours. Very interesting points about surprise/expectations. And top marks for the advice about kids


What makes you think a kitchen would have to be random? We regularly design physical spaces to accommodate robots.


I was responding to address why the "door problem" is more difficult than "pancake flipping under controlled conditions".

(I also ignored that door opening is generally done by mobile robots of a certain weight class which tend to be more expensive than a stationary arm with enough strength to pick up a spatula or hold a pan).

There is a steep difficulty gradient from "works in the lab" to "works under semi-controlled real world conditions" to "works in uncontrolled real-world situations".


Yes! In layman's terms: is the most efficient way to train these robots by showing them billions of videos of how it's done?


Almost certainly not. Because the sense of touch is an important part of the problem and that data isn’t present in videos.


Not just touch but proprioception. Robots in human environments will have to be better at proprioception than 98% of humans. If I bump into you, it's typically anything from annoying to a meet-cute. I'm a pretty big guy, but if you had to choose between me and somebody else to step on your foot, it's probably me you want, because I will shift my weight off your foot before you even know what happened (tai chi); you will barely notice.

If instead your choice is your high school bully or a robot, well for now pick the bully. Because that robot isn’t even being vicious and will hurt more.


> Because that robot isn’t even being vicious and will hurt more.

Rodney Brooks at the MIT AI Lab was a big advocate of something called "series elastic actuators." The idea was that you didn't allow motors to directly turn robot joints. Instead, all motors acted through some kind of elastic, and the robots could also measure how much resistance they encountered and back off.

MIT had a number of demos of robots that played nicely around fragile humans. I remember video of a grad student stopping a robot arm with their head.

Now, using series elastic actuators will sacrifice some amount of speed or precision. You wouldn't want to do it for industrial robots. And of course, robots also tend to be heavy and made of metal, so even if they move gently, they still pose real risks.

But real progress has been made on these problems.
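
The control idea behind a series elastic actuator is simple enough to sketch: the motor winds up a spring, the joint only ever sees the spring's torque, and the controller yields once that torque hits a limit. Toy gains, not any real robot's numbers:

    def sea_step(theta_motor, theta_joint, theta_cmd, k=50.0, tau_max=2.0):
        tau = k * (theta_motor - theta_joint)  # spring torque from deflection
        if abs(tau) >= tau_max:                # pressing on something (a head?)
            # hold where the spring transmits exactly tau_max: force is capped
            return theta_joint + (tau_max / k) * (1 if tau > 0 else -1)
        return theta_motor + 0.1 * (theta_cmd - theta_motor)  # keep moving

    m = 0.0
    for _ in range(100):  # joint blocked at 0.0 while we command 1.0 rad
        m = sea_step(m, theta_joint=0.0, theta_cmd=1.0)
    print(m)              # settles at 0.04 rad of wind-up, never more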


I think you're probably right, and those non-linear systems are going to make me have to increase my estimate for how long it takes for a robot to go from 5 year old child to ninja physicality. The more complex the feedback mechanisms, the more complexity there is in, for instance, screwing in a screw as fast as possible.


The robot won't take any enjoyment out of it, and won't laugh at your pain. Won't post about it on social media. Isn't going to try and fuck your ex or sister or mom.

I'll take the robot, thanks.


Your friends will though.


"friends"


I'm pretty sure that if I had never opened a door before and I saw somebody opening a door in a video, I would immediately know how to open doors just by watching the video. And that would be any door, with any kind of door handle. Not because I got superpowers, but because I'm average-human.

So, the moment your system needs this kind of data and that kind of data, oh and btw it needs a few hundred thousand examples of all those kinds of data, it's very clear to me that it's far away from being capable of any kind of generalisation, any kind of learning of general behaviour.

So that's "60 difficult, dexterous skills" today, "1,000 by the end of 2024", and when the aliens land to explore the ruins of our civilisation your robot will still be training on the 100,000'th "skill". And still will fall over and die when the aliens greet it.


Can you train a robot to imagine touch by showing it what touch would feel like in many video scenarios?


I think their robot has a way of converting touch to a video input. The white bubble manipulator has a pattern printed on the inside that a camera watches for movement. (see 1:58 of the video).


And here I thought manual labor jobs were safe for a very long time. I really hope people at the policy level are thinking about what it looks like to have a world of people that don’t have any work to do.


This sounds similar to some of the work that google has done, such as PaLM-E: https://blog.research.google/2023/03/palm-e-embodied-multimo...

Very exciting times in robotics!


This looks way better than PaLM-E because the robots they're using are more capable and the tasks much more complex. And they're doing the behaviors at the same speed a human does them while puppeteering the robot. The PaLM-E demonstrations were all shown in sped-up videos because they are agonizingly slow in reality.


This is getting pretty close to how I think we get to the general purpose humanoid robot. This is how I see it playing out:

- You have your Boston Dynamics style humanoid robot at the job site; let's say it's a bricklayer for the purposes of this example.

- You have a human somewhere offsite in an open room with an omnidirectional treadmill floor, and cameras and depth sensors positioned all around the room. They're wearing a Hollywood style motion capture suit and have a VR headset on so they can see what the humanoid robot sees through its cameras.

- The human then acts as they would on site, walking up to the pile of bricks, picking them up, placing them etc. The robot moves in real time on the job site, mimicking whatever action the human performs. I don't know if you'll need props to do this properly or if the muscle memory from years on the job will be enough for the humans to get the motions right.

- You log all the data (see the sketch after this list). You then have someone watch the video stream, labelling each action that is being performed.

- You run it all through a machine learning algorithm, until you get to the point where you can just send the architectural plan to the robot and essentially say "Build this wall for me".
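
A sketch of what the logging step might look like, one JSON line per control tick. The field names here are made up for illustration:

    import json, time

    def log_frame(f, robot_pose, operator_action, label=None):
        f.write(json.dumps({
            "t": time.time(),
            "pose": robot_pose,         # e.g. joint angles read off the robot
            "action": operator_action,  # what the mocap-suited human did
            "label": label,             # filled in later by the annotators
        }) + "\n")

    with open("demo.jsonl", "w") as f:
        log_frame(f, robot_pose=[0.0] * 6, operator_action=[0.1] * 6)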


The first 3/4 points are almost exactly out of the 2008 movie Sleep Dealer, to the point where I thought you were referencing it.


Never heard of it before, any good?


It's worth watching because it is so plausible.


Commenting for myself later :D.


there's a favorites feature that might interest you.


And yet, as far as I know, we don't even have mature implementations of this for machines with much coarser motion, controlled by a loose physical mapping to the human operator's movements. Things like excavators with dual joystick controls.


This is precisely how Comma and Tesla do self driving.


comma does raw input-to-output behaviour cloning like you propose. interestingly, tesla does not: per their last public ML update, their self-driving architecture still involves a path generator and evaluator combo, and a lot of separately designed sensor fusion steps. parking is done via PPO reinforcement learning, i think.
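
the generator/evaluator split is easy to picture: propose candidate trajectories, score each against a stack of costs, take the cheapest. toy sketch, not tesla's actual code:

    def pick_path(candidates, evaluators):
        cost = lambda path: sum(e(path) for e in evaluators)
        return min(candidates, key=cost)

    # toy: a path is a list of lateral offsets; prefer centered and smooth
    paths = [[0.0, 0.1, 0.3], [0.0, 0.5, 1.0], [0.0, 0.0, 0.0]]
    centered = lambda p: sum(abs(x) for x in p)
    smooth = lambda p: sum(abs(b - a) for a, b in zip(p, p[1:]))
    print(pick_path(paths, [centered, smooth]))  # -> [0.0, 0.0, 0.0]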


In the short term perhaps, but not long-term.

I believe they will send out a team to digitize the job site.

The team will then create a digital twin.

The architect will map everything into this twin.

The computer system will simulate the build steps.

Robots will be brought onto the job site and get a finetuned model (if necessary) and will build it automatically.


No, long term we somehow get something magical like LLMs that swoop in and magically pull off these tasks.


None of that. We get an LLM on the jobsite and it tells the worker how it's done.

There's often nothing as cheap as humans in terms of energy consumption; I doubt LLMs will beat that.


Humans are .. complicated. You cannot just compensate a human’s caloric expense (although many companies would love that).

Anywhere a human gets involved shit gets expensive. Interestingly robots are pretty human-intensive, from design to programming and maintenance. Only at scale does the “human factor” go down and things become affordable.


Also, humans are squishy, you need safety requirements to keep them intact (seeing as human repair is incredibly costly and takes ages to complete, often with no guarantee that repair completes and performance levels go back to nominal), humans need sleep and can't work around the clock, and they have a mind of their own, which can make them difficult to deal with.


Indeed. Most of them are also bad planners and lose track of priorities very, very easily. Besides that, there is non-stop internal strife and quarreling, as they are deeply social creatures that need to be in constant competition with one another in order to be happy. You need another class of humans to "manage" these issues.

Once those two layers interact, you get a whole new dimension of problems you get to deal with. The creation of yet another layer of "management" is inevitable. It's basically management all the way up.

But man, once you get a bunch of them aligned and motivated, the sky is the limit.


I once read a novel or short story with this concept. The robots were piloted (on-site) for training and then let loose on their own. I forget the plot beyond that, though, and who may have written it.


Yes, it's called reinforcement learning.


We use the term "large language models" because the entire World Wide Web, Library of Congress, etc. have produced a truly vast amount of written content, such that LLMs have massively large datasets available for learning. That's what I understand to be the "large" part. We have an unbelievable amount of written content available from a huge number of datasets (both public domain and more questionably public domain).

When this video refers to a "large behavioral model", where's the "large" part? Where are they getting a similarly "large" amount of behavioral input data? It looks like they have a big lab with a few dozen people modeling behaviors. That's great, but it's not as if this number of people could produce as much content as all of digital written content.


This seems pretty cool. But I'm not clear how someone can be a (full-time) professor at MIT and also be a (full-time) vice president at TRI. I've seen this kind of two-job situation before but never understood how it's practical, unless the person works 70+ hours a week.


You probably still work 40 hours a week or less, but you're so expert in those fields that your 10 hours of work cannot be replaced by somebody else working full time.

In software engineering terms: you would gladly pay a full, good salary and give a good role to John Carmack to work on your projects 6-7 days per month anyway, because he's John Carmack.


I guess, but that feels more applicable to an individual contributor role, albeit a very senior one. VPs I've known in my 30+ years in software are fully booked all day, every day. A professor maybe gets to count his outside work as part of his research efforts, but what about, say, class load, mentoring grad students, serving on committees, and the like?


I can't help but point out the reality of modern college education: professors nowadays are mostly fundraisers rather than teachers or lab directors.

This is especially true in the United States or Switzerland, from my experience, but it applies to virtually any country where the budget that goes into public research is not really enough.

Their first job is really fundraising, and their second job is actually academic.

I've worked for some time with scientists like Michael Graetzel [1]; he was very important in the lab, don't get me wrong, he had a brilliant mind and he would still make decisions and give terrific feedback when he was in the labs, but his primary role was fundraising.

You can only have the best lab if you have the most money and can hire more and the best people, that's more important than teaching or directing a lab, sadly.

[1] https://en.wikipedia.org/wiki/Michael_Gr%C3%A4tzel


> class load, mentoring grad students, serving on committees, and the like

that's the neat part. you don't


Woah!


It's on

Really: start thinking carefully about what you're working on. Until now the new AIs have been language-only, not spatial. That's over.


Been thinking about this a lot over the last year. It's becoming very difficult to plan for the long term future, career-wise.


why, are a lot of Hacker Newsers pursuing short order cook careers?


Essentially, yes

How can we be sure?


This seems like world changing technology, I can't believe these robots can learn complex motions just by watching a teacher.


They don't learn by watching. That would be another breakthrough entirely. They learn only by physically completing the task under the complete control of a puppeteer. Still very cool though.


My understanding is that the puppeteer records the same movement over and over again and this dataset of trajectories (pose, speed, acceleration) is then "diffused".
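
Roughly, each training step then corrupts one demonstrated trajectory with noise and scores the network on recovering that noise, conditioned on what the robot observed. A toy sketch with a stub network, not TRI's actual recipe:

    import numpy as np

    def train_step(predict_noise, obs, traj, rng):
        eps = rng.standard_normal(traj.shape)   # noise to inject
        t = rng.uniform(0.1, 1.0)               # random corruption level
        noisy = traj + t * eps                  # corrupted demo trajectory
        eps_hat = predict_noise(obs, noisy, t)  # network's guess at the noise
        return np.mean((eps_hat - eps) ** 2)    # denoising loss to minimize

    rng = np.random.default_rng(0)
    demo = rng.standard_normal((16, 7))         # 16 steps x 7-DOF actions
    stub = lambda obs, x, t: np.zeros_like(x)   # stand-in for the network
    print(train_step(stub, obs=None, traj=demo, rng=rng))

At execution time you run the denoising loop in reverse, starting from noise and conditioning on the current camera images, to sample an action trajectory.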


learning through a shared experience!


If I had to guess, we’re less than 5 years away from seeing real-life C-3PO.


I would disagree. All of what we are seeing from this latest surge in AI is essentially jumped-up predictive text. To get to C-3PO there is a whole additional layer of intelligence needed. C-3PO can make plans and execute those plans. This latest wave cannot reason about the world; it does not know or understand the world, it just assembles words (and here, motions) in a way that we value. It is not planning anything.


That's the easy part. Making high level plans is trivial compared to the fine motor control and dexterity and sensing necessary to do things like turn a T-shirt inside out or install a fitted sheet or crack an egg or whatever. If you give me a robot with all the fine motor skills necessary for all the steps to cook a meal but no planning capability whatsoever, I'll have that robot cooking your dinner within a year.


C3PO is a translator droid that is basically stupid at everything else (other than math or facts listing, as he's a robot).

So yes, I think he seems a reasonable target.


I think you're giving C-3PO too much credit for the bumbling idiot he usually is on screen. Well, aside from calculating the odds of successfully navigating an asteroid field, but I'm sure GPT-4 will let you know what that is just as easily, as well as translating any language into any other language, which is supposedly 3PO's whole schtick.

Also:

> can make plans and execute those plans

https://github.com/antony0596/auto-gpt

> cannot reason about the world, it does not know or understand the world it just assembles words

They can reason a surprising amount given that they only work with text. With vision/actuation encoding there's potential for far more. Remember, it doesn't have to be smart or conscious as long as it gets the job done with cold hard statistics while just appearing as such. A submarine does not swim but crosses the ocean just the same.


> as well as translating any language into any other language which is supposedly 3PO's whole schtick.

To be pedantic, language is only half the job; as a protocol droid it's C-3PO's job to understand social protocols, i.e. etiquette, knowing what one culture might misunderstand about another and smoothing over any faux pas, a task that requires considerable empathy and attention to subtle emotional cues.

I'm very curious what it would take to take a language model designed to respond to prompts and create something that can proactively interrupt a situation: to realize when it has something to contribute, and keep its virtual mouth shut otherwise.


Well, on one hand that's something even humans can't do that well; on the other, there are already a load of Reddit bots that search for relevant strings in posted comments and reply when relevant. I suppose it would be a more advanced version of that, interjecting when the probability that it knows what info follows is large enough, with a speech-to-text engine replacing the stream of new comments.
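
In its crudest form that's just a trigger table behind a confidence gate. Made-up example:

    def maybe_interject(utterance, triggers, threshold=0.8):
        for phrase, (confidence, reply) in triggers.items():
            if phrase in utterance.lower() and confidence >= threshold:
                return reply
        return None  # keep the virtual mouth shut

    triggers = {"odds of": (0.9, "Approximately 3,720 to 1.")}
    print(maybe_interject("Never tell me the odds of surviving!", triggers))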


Considering that C-3PO was verbally very competent, but somewhat clumsy, I agree. Even the "old" Boston Dynamics robots are more agile than 3PO.


Hmm, perhaps that clumsiness was a programmed affectation designed to disarm those he interacted with. After all, as a translator and master of protocol, outshining his master in any way would be quite against his programming.


I think you're onto something, given that IG assassin droids exist in-universe and are anything but clumsy.


As long as you don't mind the extension cord.


I'll step over an extension cord all day long if it's powering a robot that does the laundry and the dishes and the house cleaning for me.


If I had to guess, we're over 50 years away from seeing a real-life C-3PO -- hopefully before I die. I think self-driving cars are even further away than that, however.


"Visuomotor Policy Learning via Action Diffusion" - https://news.ycombinator.com/item?id=37581866


OK, so those nice safe real world jobs? AI is after them too now…


We'll borrow work from the future to keep people busy. There will always be plenty of work. In addition to organic vs. other food, there will be stalls of human-grown vs. machine-grown food, etc.

When the first tractor arrived in my village, my grandfather joked that all the landless labourers would die of hunger now that there wouldn't be work for them. Manual ploughing declined, but a number of other kinds of work became routine. These days it's hard to find labour in my village (western UP).


They all seem to be laborers in Karnataka now. Almost all laborers seem to be from UP and Bihar.


> those nice safe real world jobs?

"American manufacturers use far fewer robots than their competitors, in particular in Taiwan, South Korea and China" [1]. And specialized manufacturing is in a permanent skills shortage. More automation may boost employment and wages for blue-collar workers. Particularly if such kit enables new entrants to challenge incumbents.

[1] https://www.wsj.com/economy/american-labors-real-problem-it-...


This will affect jobs beyond manufacturing: anything that requires fine motor control in non-uniform environments.


You described manufacturing jobs right there too


Yes? The user above me already pointed out that that sector will be hit.


There’s no real dignity in work that can easily be done by a robot. A lot of these jobs make people miserable anyway, maybe we shouldn’t be fighting so hard to keep them.


"There’s no real dignity in working to make a living." FTFY

/s But seriously, while I think humans will always find some meaning in work, there will come a day when that work is no longer required. Or at the very least, work will look so different that it will be unrecognizable to us currently. I think "work" will look more and more like "art" in the year 4000 for example.

The idea of a person sacrificing some of their time to enrich the person at the top of a company already sounds bad, but we all accept it must happen so we can afford to live. But when robots and AI take over labor in a significant way, what will be left for the humans that remain? And if we converge on something like a universal basic income to fix this, where will humans find meaning?


I was a huge supporter of UBI for a while, but now I think money is going to become kind of meaningless at some point (the more we print, the less it's worth, and UBI policies would struggle to keep up with the inflation they drive).

I'm leaning more towards socialized housing, food, health care, and public transportation now, at least for the basics.

Everyone should be able to at least meet their minimal basic needs without depending on their economic relevance (which isn't a given for many people over the next 100 years). People who want more than the essentials (a house with larger bedrooms, equipment for their hobbies, travel accommodations) can find paid work in positions that still require a human touch.

And if there aren't enough jobs available for everyone who wants one, then start reducing the standard work week from 40 hours until everyone who wants work can find it.


> I'm leaning more towards socialized housing, food, health care, and public transporation now, at least for the basics.

What's the difference between that and UBI? The state giving the people a magical ticket to redeem for food and shelter or simply giving them food and shelter is the same thing at the end of the day.

Both are equally prone to be inflationary as they require workers to provide the food and shelter without receiving anything real in return. In both cases you need to seize real value from the wealthy population if you want to avoid the inflationary effects.

It's the same thing, just different accounting.


In practice I think giving a stipend to buy shelter is less useful than providing shelter directly in our current political system where large amounts of local regulatory control have been captured by existing landowners — so more money in doesn't mean more production of housing. For example, it's illegal to build significant new housing in most of San Francisco, and NYC zoning and planning restrictions mean that the city has built less housing in the past decade than it built during the Great Depression, despite vastly more money being spent.

There's also the tough tradeoff that money is useful for more than just food and shelter, and so there's high incentive for unscrupulous people to try to get you to transfer it to them instead of covering your basic needs: for example, by spending millions on advertising campaigns to get you to stretch yourself to buy things you don't need; or just regular, transparently illegal things like extortion, selling addictive drugs, etc. It's not that people are dumb, it's that other people have incentive to try to get you to do things that are bad for you but good for them, if you have money. If you have shelter that can't be transferred, there's just a lot less incentive to try to abuse you.

Not that money is bad, just if we're trying to run an efficient program that covers basic needs (but not much else), directly providing the things we want to provide is IMO likely to be more efficient than cash transfers and unaccountably hoping that the cash is spent on what the package is aiming to provide.


In theory, the government is only bearing the cost for people who want to depend on socialized meals, which is different from giving everyone $500/month. With UBI, people are also putting that money back into the private sector. With socialized meal programs, the government leans on the wealthy to ensure everyone has access to food, and uses that to employ people in the public sector.

> Both are equally prone to be inflationary as they require workers to provide the food and shelter without receiving anything real in return.

I'm not sure I follow why this would be inflationary, but it's also not what I was suggesting.

Even if this does lead to more inflation though, it ensures the government is responsible for securing food and housing for everyone, rather than something like UBI which might result in people who can't find work not actually being able to afford the essentials.


I have the real unpopular opinion that some psychological pressure is necessary for human societies to operate somewhat optimally.

Straight up giving people money is different from living "on stamps". I know it's fashionable these days to call this "shaming" and I guess it is. My question is, is that a bad thing?

If I wasn't shamed/bullied/pestered into being a proper citizen I don't think I'd become one. Maybe I'm just an inferior breed or something, but my guess is this is true for a lot of folks.


Children need to be socialized, and it's fairly unpleasant for them. This would still be needed regardless of how we distribute money. You're not being "a proper citizen" to receive your paycheck, you're being one because you've been brainwashed from an early age to think that you should be.


If I'm a landlord and everyone gets an extra 2k a month guess how much i'm going to increase my rent next month?


That system will just make people in charge of distributing socialized housing, food and health care very rich.


It's almost like you're describing the current system.


>I'm leaning more towards socialized housing, food, health care, and public transporation now, at least for the basics.

The problem with this is that these are all limited resources. That's the whole reason we invented money: to control access to limited resources. If you have free housing for everyone, who decides who gets a luxury penthouse apartment, and who lives in a 20 m^2 apartment on the 1st floor? Not everyone can live in the penthouse, and there's not enough room for everyone to have a 300 m^2 apartment without making the city sprawl. Same with food: some food costs a lot more to make than others. Same with healthcare: while arguably everyone should get basic care and emergency care, should we give free plastic surgery to everyone who wants it? Money lets people decide what's more important to them.


I suspect you didn't read the rest of what I wrote.

Subsistence housing and food is by definition not luxury. No one who receives social housing is getting the penthouse suite. Luxury is still pay to play. The government would just provide modest housing and food. Maybe the food is Soylent. Maybe the housing is a studio apartment or a tiny home.

I agree that these are limited resources, but in theory there should be enough housing and food for everyone to survive, and if there isn't, making it a social responsibility can hopefully catalyze action to change that. The government has a lot more sway in getting more housing built (and in fact, are often responsible for policies that prevent enough housing being built) than someone who can't afford market rent for an unglamorous studio.


Thinking about the year 4000 isn't interesting. Thinking about the year 2100 is interesting.


Thinking about the year GPT-5 is released is interesting (likely 2024).


Indeed. People are way underestimating how quickly this change is coming


I think both ideas have their merits. I think we can all agree that progress isn’t necessarily linear. So maybe in 2100 we won’t quite get to what I’m talking about, but by 4000 I don’t think anyone can claim that we’ll be working 40 hours a week and paying bills with credit cards still.


>But when robots and AI takes over labor in a significant way, what will be left for those humans that remain?

If AI gets advanced enough that it can make art better than humans (not just technically superior, but artistically superior, more inspired/innovative etc.), then humans will be truly obsolete and doomed, because AIs will be just as sentient as humans, but more capable.

If AIs don't quite reach this point, but still do all the other drudge-work, humans are going to have a hard time still. Some humans will do great, because they'll be doing things that AIs can't, and working by telling AIs what to do, but not-so-smart humans will find themselves without any purpose in life, and only UBI to keep them from rioting.


The only way AIs won’t reach the point of being better than humans is if we intentionally stop, I fear. Once an AI is trained, you can copy it a million times over. A human takes many years to learn a skill to the point of mastery.


> A lot of these jobs make people miserable anyway

A lot of these jobs also make people happy, though. That is why the loss of manufacturing in particular was such a blow to Americans. People love manufacturing – to the point that having small-scale manufacturing facilities in one's garage so that one can keep on manufacturing things on the weekend is a dream of many.


This is a foundation of the liberal tradition and one on which e.g. Smith and Marx would agree: human-scale industry is among our most fundamental (and prolific) predilections

Moreover that it tends to produce better outcomes than the frantic, often brutal thrust of the greater industrial revolution

This is not to condemn labor-saving ingenuity but to advise deep consideration with regard to social and material technologies


We’re back to the recurring question: what economics model will work without workers?


> what economics model will work without workers?

Without labour? None. Without human workers? All of them. That said, everything we label the humanities has plenty of runway apart from automation.


Capitalism without human workers? How?


> Capitalism without human workers?

Humans only own and consume while robots (functionally, the capital of this economy) provide the labour. Everyone is a founder, but instead of co-founders and employees, you just command a team of AIs and robots. You're still trying to innovate and provide a product, as are others to you. But nobody is selling labour per se.


Everyone is a founder

Surely you mean the top 1%, who have the capital to invest into robots?


> Surely you mean the top 1%, who have the capital to invest into robots?

Whomever we empower. The people left out of the loop would die, suffer in quiet subsistence, or be folded into society. The first two are the "people are pests" solution. You see it in resource-rich countries where the population isn't part of the economic machinery.

The last is precedented; see how ancient Sparta dealt its public allotments of land and slaves. Which way various societies go will depend, in part, on decisions we take today. (Should such a future come to pass, which, again, is predicated on massive leaps in robotics and AI.)


We'll see. Don't forget that AI + robots is a dangerous combination on its own, in more than one way.


> We'll see. Don't forget that AI + robots is a dangerous combination

Also, a totally fictional one.


All possible futures are fictional until they happen.

That said, as limited as AI and robotics are today, they are nevertheless already sufficient to be extremely dangerous.


Leaving aside the convenient assumption that AI will take every job except CEO…this sounds insanely competitive and most people don’t have it in them to do this.


> the convenient assumption that AI will take every job except CEO

One, I don't assume this will happen. I was responding to the hypothetical of an AGI and economic model without workers. Two, I outright assume AI will take the job of CEO. Otherwise, we're still selling labour. What we can provide AI, novelly, is our preferences. In that hypothetical world, most people would presumably let their AI(s) manage their capital. The same way many aristocrats couldn't tell you anything about how their estates generated income.


And also the value judgement shifts more and more from physicality to how much you upset communities, similar to how BS modern art works. We'd be paying not cash for a banana but will be paying in contexts for contexts.


I think most people will just stop working.

Everything will become dirt cheap.

People will play with robots for things like space travel, habitat restoration etc, but it will be more like a passion job.

There will be no more self important rich founders.


AGI will claim the novelty of ‘innovation’ from humans too


> AGI will claim the novelty of ‘innovation’ from humans too

One, sure, we can expand the hypothetical envelope to infinity. That wasn't the question.

Two, I'm not sure. Human preferences need not be rational. Given the choice, many would choose the flawed work of a human versus the synthesized product of an AGI.

Three, if we have an AGI that can do everything humans can do we've rendered the question irrelevant. There is no "economy" anymore because everything can be centrally measured, produced and dispatched. By the AGI. (Or it can destroy us.) Either way, we return to production and consumption of non-AI work products being purely voluntary.


None. There will be an economy of Robot consumers and Robot producers.


Humans squeezed out of the existing economy will engage in a parallel economy. It will be interesting how these things will interplay and what it will lead to in the end.


The model where the workers team up and destroy the robots and take anything of value from the robots' owners.


All Labor has Dignity


Does it? If I pay you to move rocks back and forth in a field does that work have dignity?

Making someone work a job that could be done cheaper and faster by a robot doesn’t benefit anyone. You’re destroying economic value and wasting the worker’s time.


Well over 50% of workers report being satisfied with their jobs. Automation eliminating jobs people are satisfied or happy with is almost certainly a loss for those workers, even if it is an improvement overall.

I say this as someone who knows he has been directly responsible for eliminating dozens of jobs through automation. Not all the people affected had their lives improved by the job elimination, even if I truly believe our solution made far more people's lives better and was a win overall.

Edit: typos


This seems like a perception thing. Like the person moving rocks might be extremely grateful just to have a job. The mentally disabled, ex-cons, and other people who have been overlooked for work for many years all likely experience a sense of dignity in being paid to work. Perhaps they even delude themselves into thinking moving rocks is somehow useful or necessary.

I always found it weird that outsiders need to dictate what dignity is, since it is an internal state/feeling about one's own actions.

I’m not against automating high toil (the definition from the Google SRE book) jobs. But people will find dignity in their hobbies if they can’t find it at work. If they can’t find dignity there, they have been failed by society.


Yes, in the same way as a work-out in the gym has dignity and value. In the gym you're just moving some metal chunks up and down, but that work has tremendous physical and mental effects on the person doing it.


Yes. For the latest example: there are many people who enjoy writing code that could be written by GPT.


Unless you're a professional athlete no one is going to pay you to work out. It has no value for society.


What are you talking about? Society? The benefit is to the person working or working out.


It has dignity if it gives purpose. I can tell you right now there’s a substantial portion of a generation of people who see working _for_ anyone as purposeless.

There’s already value in the human made and hand crafted. Maybe our society just becomes one where we’re left to the retirement of a civilization.

It makes me think of this series: https://www.sbnation.com/a/17776-football

Post-scarcity, post capitalism, post everything.


I don't think that's the issue.

The issue is training humans fast enough to stay one step ahead of the robots and the LLMs.


Might just take a while for it to be economical for lots of jobs. The number of humans is increasing; the amount of natural resources is a different story.


Populations of middle- and high-income countries are aging, so there aren't that many takers for these jobs anyway.


People were hoping it would replace labor-intensive jobs; instead, AI seems to be replacing the white-collar ones first.


People underestimated how much brainpower is involved in simple mechanical tasks, and they probably overestimated how much is required for things like object recognition. Even simple tasks like lifting weights involve motor learning.


Pretty fumble-fingered robots they were using. All you needed for industry up till now.

Now they can learn quickly; perhaps a more dexterous robot with flexible digits etc. will become the norm.


Most of you guys have deep, practical experience with robotics and robots; for me, anything a robot demonstrates by doing is a magical and scary thing... Now I have mild paranoia due to all this progress in LLMs and now these activities done by robots... What future might roll out in front of us? What are your opinions on that?


This will become increasingly normal, but it’ll take a while before massive impact (ie taking your and your friends’ jobs). I’m not sure of the timeline. Humans are slow to adapt, but in 20-30 years I think the pace will pick up.

I’m not too worried about the current generation, but my kids. Don’t know what to tell them TBH.

I guess it’ll all be fine though. We techies tend to have a paranoid streak which isn’t becoming.


Star Wars ?


Star Wars is set in the distant past, not the future.


That assumes "A long time ago" is relative to Earth 1977, vs to the narrator who may live in a far-away galaxy


"“Our research in robotics is aimed at amplifying people rather than replacing them,” said Gill Pratt, CEO of TRI and Chief Scientist for Toyota Motor Corporation."

Why do CEOs make public statements such as this when the goal is humanoid robots to replace human labor, particularly in countries with declining birthrates?


Softening the blow. No one wants to hear a CEO say the robots are coming for their jobs.


I wonder if giving cars a sense of touch is what will ultimately be the key to enabling full autonomous driving.

Are there any cars out there that have something like a sense of touch, and with it can sense the road or things they crash into?


I think cars already have a sense of touch with the road via traction control sensors.

For everything else, isn't touch too late, especially at high speeds? The point is to avoid the crash.


there is a lot of work to be done with gauging tire traction.

it's (generally) done via comparison of RPM, but one can imagine some future tire that could realistically give feedback about how the weight is actually sitting on the tire, which portions are heating disproportionately, and which parts are experiencing friction anomalies versus the rest of the tire.
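
the rpm comparison amounts to a slip-ratio estimate, something like:

    def slip_ratio(wheel_rpm, vehicle_speed_ms, tire_radius=0.31):
        # tire surface speed vs. speed over the ground:
        # ~0 means rolling grip, large positive means wheelspin
        wheel_ms = wheel_rpm * 2 * 3.14159 / 60 * tire_radius
        return (wheel_ms - vehicle_speed_ms) / max(vehicle_speed_ms, 0.1)

    print(slip_ratio(wheel_rpm=650, vehicle_speed_ms=20.0))  # ~0.05 slip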

that's the direction it would have to go to reach the dimensionality and resolution of human proprioception.

that kind of stuff is approached on F1 test rigs and cars. multiple IR laser therms and lidar to compare temperature across the tire at speed as well as judging deformation.


Most cars are able to feel when they crash into things. Usually that means it's too late to react.

https://wellsve.com/products/electrical-lighting-body-system....


In production, yes, but it's like kids: you don't want them to fall and hurt themselves; however, that's how they learn.

Likewise, maybe self-driving cars need to get a few bumps and scratches to learn not to crash


Cars with electric power steering have steering force sensors, as well as accelerometers, though cars generally don't have central computers; they're a networked collection of feature computers in a topology somewhat like a sugar molecule.


I'm pretty ignorant of state of the art robotics and had assumed for years that approaches like this were used, e.g. by Boston Dynamics. Surprising to see that it's a new thing.


Has Boston Dynamics ever done anything other than produce a viral video approximately every 5 years? Mr Beast produces a viral video twice a month.


Have you not observed the content of those viral videos? Who else is doing robotics at that level? Extremely complex, natural bio-mimicking movements I've not seen anywhere else.


What has a Boston Dynamics robot ever done, other than star in a Boston Dynamics video?



Thanks for making my point -- in the "real world" they produce mundane robotic arms for warehouse work.


You ignored the quadruped.


I don't see any breakthrough here. I think it's simple marketing: "We're among the first using a variant of LLMs to enhance robotic behaviors."

The first pancake sequence starts with the pancake half off the baking surface and in danger of falling to the floor. So the device pushes the pancake back onto the baking surface. Good enuf if the pancake is "stiff" (already cooked and rigid). So hope you like your pancakes well-done!

And the pancake-flipper doesn't flip the pancake - it slides the spatula under the pancake and then, just like a robot, rotates the spatula until the pancake drops. In any case, fuggeddabout "eggs over easy".


I don't see any innovation here. I think it's simple hype: "We're among the first to use a variant of these 'horseless carriages' to enhance mobility."

The first jaunt starts with the vehicle half off the pathway and in danger of colliding with a fence. So the operator manipulates the wheel to guide the contraption back onto the path. Fine enough if the path is "level" (already flattened and firm). So hope you enjoy your rides on well-paved roads!

And the operator doesn't truly control the vehicle - they merely grasp the steering device and then, just like a carriage driver, rotate the wheel until the vehicle alters its direction. In any case, fuggeddabout "tranquil and fragrant travels".


The prospect of robots able to manipulate liquids, cloth and other tricky materials shows how dexterous they're becoming. Major milestone. Exciting stuff!


If this is using gen-AI, I wonder how it "hallucinates" - would that result in jerk movements that could break something or hurt someone?


As far as I understand, getting robots to do reasonable things when faced with out of distribution experiences is super hard. How do they manage this?


I wouldn't call it a breakthrough; a step forward, yes, but not a breakthrough, since these movements are recorded and then replicated. It will be a breakthrough if, for example, you only teach the robot how to hold a spoon, and it can then figure out by itself how to use it.


This sounds like LoRA but for robotics, am I reading this right?


When it comes to giving robots a "sense of touch", you may want to check out Bota Systems (botasys.com) – already deployed in hundreds of research and industrial robots globally.


Wake me up when a robot is the one teaching the other robots. The bottleneck of requiring human instructors makes this basically a no-go for me.


Once a robot is trained, its training dataset can be copied to others pretty quickly. So that's 'one robot teaching another'.


In the linked TRI video they hint they are working on building generic dexterity models. If that proves possible, maybe one-shot/few-shot learning of dexterous behaviors isn't far-fetched.


Google kuffner


OK, here it is: an LLM with physical senses. They mentioned a sense of touch, and I think that is a big deal. You can teach me all you want about a new color with all the text descriptions available, and it would be worse than just showing me that new color.

In the same way, letting an AI actually touch and interact with the world would do wonders in grounding it and making sure it understands concepts beyond just the words or bytes it was fed during training. GPT-4 can already understand images and text; it should not be long until it handles video and we can say AI has vision. This robot from Toyota would have touch. We need hearing and smelling, and then maybe we will get something resembling a true AGI.


> Ok here it is. An LLM with physical senses.

See: Pieter Abbeel & Jitendra Malik

https://www.therobotbrains.ai/copy-of-jitendra-malik


There's no reason to expect that such advances will get us closer to a true AGI. I mean it's not impossible, but there's no coherent theory or technical roadmap. Just a lot of hope and hand waving.

I do think that this is an impressive accomplishment and will lead to valuable commercial products regardless of the AGI issues.


> true AGI

What is that? Most humans have general intelligence, but do other apes? Do dogs? A quick google search suggests that the answer is yes.

If that’s the case, then this approach may indeed yield an AGI but maybe it’s not going to be much better than a golden retriever. Truly excellent at some things, intensely loyal (I hope), but goofy at most other things.

Or maybe just as dogs will never understand calculus, maybe our future AGI will understand things that we will not be able to. It seems to me that there’s a good chance that AGI (and intelligence in general) is a spectrum.


Yep, and that’s rather terrifying, is it not? Is there any good reason to assume that future AGI will share our sense of morality, once it is smart enough to surpass human thought?


> Is there any good reason to assume that future AGI will share our sense of morality

I think it would be surprising if it did. Just as our morality is shaped by our understanding of the world and our capabilities, a future AGI's morality would be shaped by its understanding of the world and capabilities. It might do something that we think is terrible but isn't because we lack the capacity to understand why it's doing what it's doing. I'm thinking about how a dog might think going to the vet is punishment but we are actually doing it out of love.


There’s a wonderful novella by Ted Chiang called “The Lifecycle of Software Objects” that addresses your thoughts exactly. Highly recommended.


Thanks. I’ll check it out.


Intelligence in general seems to be a spectrum for animals. Future AGI may be on an entirely different spectrum which isn't directly comparable. We won't know until someone actually builds one and we have a chance to evaluate it.


"True AGI" is often used in a way that means "human-like intelligence, but faster, more consistent and of greater depth". In that case, knowing that embodied agents are the way forward is quite trivial. We've known for a long time that the development of a human brain is a function of its sensory inputs - why would this be any different for an artificial intelligence, especially if designed to mimic/interface with a human one?


That's not the right question to ask. You can construct all sorts of hypotheticals or alternative answers but all of that is meaningless until someone actually builds it.


Why imitate human senses? AI should be able to reach out and touch radio waves and interpret their meanings, much as we interpret the meaning behind gusts of wind.


We can do both, but clearly the senses that mammals have are well adapted for existence on Earth.


Existence is how you interpret it. An AI’s existence might mean it has a very different interpretation of Earth.


Sure but when I ask you to pass the potato chips I want you to understand what that means and be physically able to do so. The five senses we know well are very well suited for that.


> reach out and touch radio waves

At the point we're describing "touching" massless particles, we might as well say that's what our retinas do. In terms of novel senses, some kind of gravitometric sense would be neat. LIGO-on-a-chip and all.


> Our research in robotics is aimed at amplifying people rather than replacing them

Noble, but the reality is that once the genie is out of the bottle it will be used by many MBAs to replace people.


Controversial, but we can look at this as a good thing. If your entire job is simple enough for a robot to do, that job shouldn’t be filled by a human. However, the human should receive a decent standard of living, regardless of their employment, which may be politically impossible.


> However, the human should receive a decent standard of living, regardless of their employment, which may be politically impossible.

Not gonna happen.


And not even for political reasons.


> However, the human should receive a decent standard of living, regardless of their employment, which may be politically impossible

They'll get pie in the sky when they die


> which may be politically impossible

Try economically impossible for starters.

Do some napkin math on how much an individual needs to live each month, and then multiply that by how many people you need to support nation-wide. The dollar amounts are staggering, and make our current annual budget (all of it!) look like mere child's play.

We're talking hundreds of billions of dollars every month. It's simply not possible.


1.3% of the US workforce is employed on farms. That 1.3% grows the food for the other 98.7% of the population. Add a couple of percent for transport and distribution and you probably have around 5% needed for food (maybe less if you exclude meat and processing).

It's simply very possible.

https://www.ers.usda.gov/data-products/ag-and-food-statistic...
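
Spelled out (the 1.3% is the USDA figure linked above; the transport/distribution share is a rough guess):

    farm_share = 1.3        # % of the US workforce on farms (USDA link above)
    logistics_share = 2.0   # rough guess for transport and distribution

    food_share = farm_share + logistics_share
    print(f"~{food_share:.1f}%, call it ~5% once processing is included")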


You're only adding up half of the equation. On the other half, your nation's productivity has skyrocketed due to plummeting labor costs.

Let's say that robots cut US 'labor' costs in half -- from about 10 trillion to 5 trillion. Add in a 5 trillion dollar tax on robot labor and you've got about $14,700 per capita to spend. And costs for businesses don't change.
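
The napkin math behind that per-capita figure (the $10T labor-cost number and ~340M population are rough assumptions, not official statistics):

    labor_costs = 10e12            # rough total US labor costs, per the figure above
    robot_tax = labor_costs / 2    # tax equal to the savings if robots halve labor costs
    population = 340e6             # approximate US population

    print(f"${robot_tax / population:,.0f} per person per year")  # -> $14,706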


So we replace full-time employment with $14.5k annually? And people are supposed to survive on that, and celebrate the ushering in of tech?

Someone working an entry-level fast food job earns vastly more than that per year.

Let's redo your math using realistic numbers and then we can see what's possible or not.


The per capita number was just for scale. I didn't mean that you'd distribute it on a per capita basis. Obviously you'd want to use a mechanism to distribute it to people who need it.


> I didn't mean that you'd distribute it on a per capita basis

You have to, otherwise you create a perverse incentive to not be productive or work. We want less of that as it is, not more.

Working doesn't just mean "working for the man", it can be anything productive that results in income. Such as making and selling paintings, music, whatever.

However, there cannot be a reality where choosing not to work rewards you with as much or more than those who actually work. We also cannot have a system where people choose to pursue fruitless endeavors simply because they enjoy them, and then still get government payments. Yet the system you propose will be just that - "I need it more because I'm poor - I'm poor because I choose not to work".

Where the money to pay these people comes from is complex, but it is not "free" and is largely supported by the working class. We cannot build incentives for the working class to stop working and subsist entirely off government payments (which come from the rest of the working class, creating a downward spiral in the program's cost to the nation).

So realistically, the numbers for some sort of UBI are far, far greater than most people admit in these debates (as all-things government tend to be).

Additionally, if the tax equals the original labor cost, then there is now a negative incentive for businesses to adopt technology and replace employees at all. Employees are more flexible than a robot, for instance, so if costs are equal the human is the better value from the perspective of most businesses (some excluded, such as maybe manufacturing).


No, the labor participation rate in the US is less than 2/3 already, because people like children, retirees, and the disabled exist. Distributing money to people who would lose jobs to automation would never be a per-capita exercise, anywhere.


You miss the point. The system you propose builds an incentive for more people to not work, not less.

Your system falls apart if more and more people decide that doing nothing at all is a worthy trade-off for a reduction in annual salary.


Yes, in today's labor market it would.

But that's not the scenario that is being entertained here.

We're talking about a hypothetical future world in which robots with AGI are capable of performing basic labor. Incentivize a human all you want, they will never be able to compete in a labor pool where their competition has no rights and will work 24/7.

That being said, a handout is not the best way to use that money anyway. What it should be used for are stronger safety nets and public services, along the lines of what already exists in western nations.


Let's assume a poverty line of $20,000 a year for a household of two people. For approximately 300,000,000 Americans, that would make $3,000,000,000,000/year (about $3 trillion). That is less than half of our current annual budget (all of it!). It's a staggering amount, but I don't see why you would exaggerate like this.

I'm actually pretty sure if you raised the income tax rate for the top 0.1% and closed corporate tax loopholes, you could get that level of money.
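
The arithmetic, for the record (the $20k poverty line and 300M population are the round numbers used above):

    poverty_line = 20_000      # $/year for a two-person household
    people = 300_000_000
    households = people // 2   # 150,000,000 two-person households

    total = households * poverty_line
    print(f"${total:,}/year")  # -> $3,000,000,000,000/year (about $3 trillion)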


The poverty line?



So you are saying we're going to doom an entire population to poverty, provided courtesy of the government?

I know you're trying to make a point, but let's be realistic. That's never going to fly... and it's wrong to even think people are going to be ok submitting to technology and the government overlords for a life filled with poverty.


Not possible with a capitalistic system in place, but definitely possible otherwise. We would still have the same resources being produced, possibly more if robots are more efficient at the jobs. But of course robots will initially be used to rake in more cash for their owners. Later we may get a Star Trek way of life, where things hold less value because we can easily produce more than we need and stop trying to sell everything.


> robots will initially be used to rake in more cash for their owners.

And as long as regulators maintain a competitive marketplace, the plummeting cost of operating a business will result in lower prices. Labor is the largest cost for most businesses, and businesses that take advantage of new automation typically use it to undercut their competitors' prices.


This rather sounds like an inward-facing expectations adjustment implying "don't fire assembly workers yet". Given Toyota's eagerness, I'd imagine you'd either be fired on the spot for blatantly lying or be forced to make it happen by the end of the quarter if you even suggested that.


This is FUD. Replacing people with machines is the story of the entire Industrial Revolution, and yet we haven’t ever had widespread unemployment as a result. Quite the opposite.


To be a bit flippant, the last thing our species needs is amplification - we are plenty loud, thanks.


It won't replace people, it will displace them.


[flagged]


Business administrators are needed, even in a worker-owned socialist business, even if just to enforce the worker decisions.


Toyota hinting at AGI in 2 years' time? Yes, I'm skeptical.

"Siri, make me a sandwich"

"You're out of bread, would you like me to bake some bread and then make your sandwich?"

isn't going to happen by 2025.


I can honestly see this by 2030. Just not reliable or safe enough for a consumer product.

There will be a gap between “can be done reliably in testing” (probably doable by 2030 even in a randomly chosen house that’s not part of the test set) and “safe enough for consumers, children, and pets to not get injured by the movements of a robot who is strong enough to lift and carry a 40lb bag of flour or 40lb laundry basket”.

That safety gap for “co-working” robots will be very difficult to close enough for the CPSC to be satisfied and also avoid expensive class-action or individual lawsuits.


We already have robots that make sandwiches. And robots that make bread. And robots that order sandwiches and bread. And robots that deliver sandwiches and bread.

So maybe it’s not one robot, but 3-4 robots that interface with each other.


RemindMe! 2 years

Ah wait, this isn't Reddit. Well, anyhow, we'll see I suppose. My money would be more on unfreezing and reheating bread instead; dough rising takes too long for practical sandwich creation.


You forgot the sudo: https://xkcd.com/149


That shot of it spreading goop on bread was amusing. This video feels like an Onion production to me for some reason.


Yeah lol I had the same thought. I almost expected it to zoom out and be a guy with his hand up the robotic arm.


[flagged]


Standard of living increases are made one job loss at a time. It sucks, and we should support people's transitions, but I don't think we should hold back progress.


"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html


Twitter is still the public town square. Nobody does announcements or breaking news on Threads or IG.


Dr. Ruth might have some ideas what other skills robots should learn.


Sex robots!

Real-world adoption will be catalyzed by whichever company is brave enough to train these behaviors in.


Jokes aside, IMO the next great "appliance" is the personal robot. It basically follows you everywhere, witnesses/records everything you do (for only your benefit/access, hopefully), carries/lifts/moves things for you, fetches things for you, holds things for you, and can call emergency services. It can walk a mountain with you.

There was once a cartoon where a robot is instructed to protect its "primary" from rain and proceeds to swat away all the individual raindrops. A ridiculous parody, but... think about it.

But yeah... sex robots will absolutely be a thing. For the human race to continue, the ones for the female clientele might HAVE to come with insemination abilities.


Marriage rates are already falling; when sex robots get good enough, they'll collapse entirely. I'm not even sure if this is a bad thing.



