In his TED talk, Ken Goldberg addresses the question many of us have wondered about: why don’t we have better robots yet? As a robotics researcher at UC Berkeley for the past 30 years, Goldberg sheds light on the challenges that have prevented us from having the futuristic robots we see in movies and TV shows.
Goldberg explains that robots face a fundamental issue called Moravec’s paradox: what is easy for humans, like picking up objects and stacking blocks, is incredibly difficult for robots, while some things that are hard for humans are easy for robots. He delves into the hardware and software limitations that make tasks like grasping arbitrary objects a grand challenge in the field of robotics.
Despite the obstacles, Goldberg and his team have made significant progress in developing robots that can perform specific tasks, such as sorting packages in e-commerce warehouses. With advancements in AI and deep learning, they have created robots that can train themselves to grasp objects effectively.
While we may still be waiting for the perfect home robot, Goldberg assures us that progress is being made and encourages us to be patient. As robots continue to evolve, they will eventually be able to perform tasks that we can’t or don’t want to do.
Video Transcript
I have a feeling most people in this room would like to have a robot at home. It’d be nice to have one do the chores and take care of things. Where are these robots? What’s taking so long? I mean, we have our tricorders, and we have satellites. We have laser beams.
But where are the robots? I mean, OK, wait, we do have some robots in our home, but they’re not really doing anything that exciting, OK? Now I’ve been doing research at UC Berkeley for 30 years with my students on robots, and in the next 10 minutes, I’m going to try to explain the gap between fiction and reality. Now we’ve seen images like this, right? These are real robots. They’re pretty amazing.
But those of us who work in the field, well, the reality is more like this. 99 out of 100 times, that’s what happens. And in the field, there’s something that explains this that we call Moravec’s paradox. And that is: what’s easy for robots, like being able to pick up a large, heavy object, is hard for humans. But what’s easy for humans, like being able to pick up some blocks and stack them, well, it turns out that is very hard for robots. And this is a persistent problem. So the ability to grasp arbitrary objects is a grand challenge for my field. Now by the way, I was a very klutzy kid. I would drop things. Any time someone would throw me a ball, I would drop it. I was the last kid to get picked for a basketball team. I’m still pretty klutzy, actually, but I have spent my entire career studying how to make robots less clumsy. Now let’s start with the hardware.
So the hands. Now this is a robot hand, a particular type of hand. It’s a lot like our hand. And it has a lot of motors, a lot of tendons and cables as you can see. So it’s unfortunately not very reliable. It’s also very heavy and very expensive.
So I’m in favor of very simple hands, like this. So this has just two fingers. It’s known as a parallel jaw gripper. So it’s very simple. It’s lightweight and reliable and it’s very inexpensive. And if you’re doubting that simple hands can be effective,
look at this video, where you can see two very simple grippers. These are being operated, by the way, by humans who are controlling the grippers like puppets. But very simple grippers are capable of doing very complex things. Now actually, in industry, there’s an even simpler robot gripper, and that’s the suction cup. And that only makes a single point of contact. So again, simplicity is very helpful in our field. Now let’s talk about the software. This is where it gets really, really difficult, because of a fundamental issue, which is uncertainty.
There’s uncertainty in the control. There’s uncertainty in the perception. And there’s uncertainty in the physics. Now what do I mean by the control? Well if you look at a robot’s gripper trying to do something, there’s a lot of uncertainty in the cables and the mechanisms that cause very small errors.
And these can accumulate and make it very difficult to manipulate things. Now in terms of the sensors, yes, robots have very high-resolution cameras just like we do, and that allows them to take images of scenes in traffic or in a retirement center, or in a warehouse or in an operating room.
But these don’t give you the three-dimensional structure of what’s going on. So recently, there was a new development called LIDAR, and this is a new class of cameras that use light beams to build up a three-dimensional model of the environment. And these are fairly effective.
They really were a breakthrough in our field, but they’re not perfect. If an object has anything that’s shiny or transparent, the light acts in unpredictable ways, and you end up with noise and holes in the images. So these aren’t really a silver bullet.
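To make the “noise and holes” concrete, here is a minimal numpy sketch (illustrative only, not from the talk) of a depth frame where a glossy region returns dropouts, followed by the masking step a perception pipeline typically applies before trusting the data:

```python
import numpy as np

# Illustrative only: simulate a depth camera frame where shiny or
# transparent surfaces return no usable depth (encoded as 0),
# leaving "holes" like the ones Goldberg describes.
rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 2.0, size=(480, 640))  # depths in meters

# Pretend a glossy bottle occupies this region: the light scatters
# unpredictably, so the sensor reports dropouts and noisy spikes.
depth[200:300, 250:350] = rng.choice(
    [0.0, 5.0], size=(100, 100), p=[0.7, 0.3]
)

# A typical first step: mask out invalid returns before using the data.
valid = (depth > 0.1) & (depth < 3.0)
hole_fraction = 1.0 - valid.mean()
print(f"unusable pixels: {hole_fraction:.1%}")
```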
And there’s one other form of sensor out there now called a “tactile sensor.” And these are very interesting. They use cameras to actually image the surfaces as a robot would make contact, but these are still in their infancy. Now the last issue is the physics.
And let me illustrate by showing you: we take a bottle on a table and we just push it, and the robot pushes it in exactly the same way each time. But you can see that the bottle ends up in a very different place each time. And why is that? Well, it’s because it depends on the microscopic surface topography underneath the bottle as it slides. For example, if you put a grain of sand under there, it would react very differently than if there weren’t a grain of sand. And we can’t see if there’s a grain of sand, because it’s under the bottle.
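The grain-of-sand point can be illustrated with a toy Monte Carlo model (my own sketch, not the speaker’s): identical pushes, slightly different hidden friction, scattered outcomes.

```python
import numpy as np

# A toy Monte Carlo sketch: push a bottle with an identical impulse each
# time, but let the effective friction coefficient vary slightly, as if
# a grain of sand were (or weren't) under the base.
rng = np.random.default_rng(1)
push_impulse = 1.0          # same push every trial (arbitrary units)
mass = 0.5                  # kg

final_positions = []
for _ in range(10):
    mu = rng.uniform(0.2, 0.4)       # unobservable microscopic friction
    v0 = push_impulse / mass         # velocity right after the push
    # Sliding distance under Coulomb friction: d = v0^2 / (2 * mu * g)
    d = v0**2 / (2 * mu * 9.81)
    final_positions.append(d)

print([f"{d:.3f} m" for d in final_positions])
# Identical pushes, different stopping points: the physics is
# deterministic, but the parameters are hidden from the robot.
```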
It turns out that we can predict the motion of an asteroid a million miles away, far better than we can predict the motion of an object as it’s being grasped by a robot. Now let me give you an example. Put yourself here into the position of being a robot.
You’re trying to clear the table and your sensors are noisy and imprecise. Your actuators, your cables and motors are uncertain, so you can’t fully control your own gripper. And there’s uncertainty in the physics, so you really don’t know what’s going to happen. So it’s not surprising that robots are still very clumsy.
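A small sketch (again illustrative, with invented link lengths and noise levels) shows how the control-side uncertainty compounds: a planar two-link arm commanded to the same joint angles lands its fingertip in a slightly different place every time.

```python
import numpy as np

# Command the same joint angles repeatedly, but let each joint be off
# by a fraction of a degree (cable slop, backlash), and watch the
# fingertip position scatter.
rng = np.random.default_rng(2)
link1, link2 = 0.4, 0.3                 # link lengths in meters
target = np.deg2rad([45.0, 30.0])       # commanded joint angles

tips = []
for _ in range(1000):
    q = target + rng.normal(0, np.deg2rad(0.5), size=2)  # joint noise
    x = link1 * np.cos(q[0]) + link2 * np.cos(q[0] + q[1])
    y = link1 * np.sin(q[0]) + link2 * np.sin(q[0] + q[1])
    tips.append((x, y))

spread = np.ptp(np.array(tips), axis=0) * 1000  # millimeters
print(f"fingertip scatter: {spread[0]:.1f} x {spread[1]:.1f} mm")
# Half a degree of joint noise already moves the fingertip by
# millimeters -- enough to miss a grasp on a small object.
```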
Now there’s one sweet spot for robots, and that has to do with e-commerce. And this has been growing, it’s a huge trend. And during the pandemic, it really jumped up. I think most of us can relate to that. We started ordering things like never before, and this trend is continuing.
And to meet the demand, we have to be able to get all these packages delivered in a timely manner. The challenge is that every package is different, every order is different. So you might order some nail polish and an electric screwdriver.
And those two objects are going to be somewhere inside one of these giant warehouses. And what needs to be done is someone has to go in, find the nail polish and then go and find the screwdriver, bring them together, put them into a box and deliver them to you.
So this is extremely difficult, and it requires grasping. So today, this is almost entirely done with humans. And the humans don’t like doing this work, there’s a huge amount of turnover. So it’s a challenge. And people have tried to put robots into warehouses to do this work. It hasn’t turned out all that well. But my students and I, about five years ago, we came up with a method, using advances in AI and deep learning, to have a robot essentially train itself to be able to grasp objects. And the idea was that the robot would do this in simulation.
It was almost as if the robot were dreaming about how to grasp things and learning how to grasp them reliably.
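Dex-Net’s real pipeline is built on large datasets of synthetic depth images and analytic grasp-robustness metrics; the following is only a minimal sketch of the underlying idea of training on simulated grasps and generalizing to grasps never seen before.

```python
import numpy as np

# Minimal sketch, NOT the real Dex-Net pipeline: sample grasps in a
# stand-in "simulator", label each by whether it holds, then fit a
# simple model to predict success for unseen grasps.
rng = np.random.default_rng(3)

def simulate_grasp(features):
    """Stand-in for a physics simulator: a grasp 'succeeds' if it is
    near the object's center and well aligned, minus random slip."""
    offset, alignment = features
    score = 1.0 - 2.0 * abs(offset) - abs(alignment) + rng.normal(0, 0.1)
    return score > 0.0

# "Dream" phase: sample thousands of candidate grasps in simulation.
X = rng.uniform(-1, 1, size=(5000, 2))   # (center offset, angle error)
y = np.array([simulate_grasp(f) for f in X])

# Fit a logistic-regression-style model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

new_grasp = np.array([0.05, -0.1])       # a grasp never simulated
p_success = 1 / (1 + np.exp(-(new_grasp @ w + b)))
print(f"predicted grasp success: {p_success:.2f}")
```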
And here’s the result: a system called Dex-Net that is able to reliably pick up objects that we put into these bins in front of the robot. These are objects it’s never been trained on, and it’s able to pick them up and reliably clear these bins over and over again. So we were very excited about this result, and the students and I went out and formed a company, which we now call Ambi Robotics.
And what we do is make machines that use the algorithms, the software we developed at Berkeley, to pick up packages. And this is for e-commerce. The packages arrive in large bins, all different shapes and sizes, and they have to be picked up,
scanned and then put into smaller bins depending on their zip code. We now have 80 of these machines operating across the United States, sorting over a million packages a week.
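The routing logic itself is simple; a toy sketch of the pick-scan-sort flow (with invented bin names and zip codes) might look like this:

```python
from collections import defaultdict

def destination_bin(zip_code: str) -> str:
    # e.g., route by the first digit of the US zip code (illustrative)
    return f"bin-{zip_code[0]}"

packages = [
    {"id": "pkg-001", "zip": "94720"},
    {"id": "pkg-002", "zip": "10001"},
    {"id": "pkg-003", "zip": "94110"},
]

bins = defaultdict(list)
for pkg in packages:
    # In the real cell the robot picks the package, a scanner reads the
    # label, and the software decides which smaller bin it goes to.
    bins[destination_bin(pkg["zip"])].append(pkg["id"])

print(dict(bins))  # {'bin-9': ['pkg-001', 'pkg-003'], 'bin-1': ['pkg-002']}
```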
Now that’s some progress, but it’s not exactly the home robot that we’ve all been waiting for. So I want to give you a little bit of an idea of some of the new research that we’re doing to try to make robots more capable in homes. One particular challenge is being able to manipulate deformable objects: strings in one dimension, sheets in two dimensions, and three-dimensional objects like fruits and vegetables. So we’ve been working on a project to untangle knots. What we do is take a cable and put it in front of the robot. It has to use a camera to look down, analyze the cable, figure out where to grasp it
and how to pull it apart to be able to untangle it. And this is a very hard problem, because the cable is much longer than the reach of the robot, so it has to manage the slack as it’s working. And I would say this is doing pretty well: it’s gotten up to about 80 percent success at untangling the tangled cables we give it.
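A runnable skeleton of that perceive-plan-act loop, with stand-in stubs for the learned perception and planning (hypothetical code, not the lab’s), might look like this:

```python
import random

random.seed(4)

def detect_crossings(cable_state):
    """Stub perception: the real system analyzes an overhead image."""
    return list(cable_state)

def choose_grasp_point(crossings):
    """Stub planning: attack the most accessible crossing."""
    return crossings[-1]

def untangle(cable_state, max_steps=50):
    for step in range(1, max_steps + 1):
        crossings = detect_crossings(cable_state)
        if not crossings:
            return step - 1                 # untangled
        grasp = choose_grasp_point(crossings)
        # Act: because the cable is longer than the arm's reach, each
        # pull must manage slack; here a pull occasionally fails.
        if random.random() < 0.8:
            cable_state.remove(grasp)
    return None

result = untangle(list(range(5)))
print(f"untangled after {result} pulls" if result is not None else "gave up")
```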
The other thing I think we’re all waiting for is a robot to fold the laundry. Now roboticists have actually been looking at this for a long time,
and there was some research that was done on this. But the problem is that it’s very, very slow, about three to six folds per hour. So we decided to revisit this problem and try to have a robot work very fast. One of the things we did was use a two-armed robot that could fling the fabric the way we do when we’re folding,
and then we also used friction, in this case to drag the fabric and smooth out some wrinkles. And then we borrowed a trick known as the two-second fold. You might have heard of this. It’s amazing, because the robot is doing exactly the same thing. It takes a little bit longer, but that’s real time; it’s not sped up. So we’re making some progress there. And the last example is bagging. You all encounter this all the time: you go to a corner store, and you have to put something in a bag. Now it’s easy, again, for humans,
but it’s actually very, very tricky for robots, because humans know how to take the bag and how to manipulate it. For a robot, the bag can arrive in many different configurations, and it’s very hard to tell what’s going on and to figure out how to open up that bag.
So what we did was have the robot train itself. We painted one of these bags with fluorescent paint, we had fluorescent lights that would turn on and off, and the robot would essentially teach itself how to manipulate these bags. And we’ve now gotten it up to the point where we’re able to solve this problem about half the time. So it works, but we’re still not quite there yet.
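A rough reconstruction of that self-supervision trick (my sketch, with invented image sizes and thresholds): each UV-on frame gives a free segmentation label for the matching UV-off frame, so no hand labeling is needed.

```python
import numpy as np

# Sketch of the idea, not the lab's code: with UV light ON, the painted
# bag fluoresces and is trivial to segment, giving a free label; with
# UV OFF, the same scene becomes a training input for a model that must
# find the bag unaided.
rng = np.random.default_rng(5)

def capture_pair():
    """Stand-in for the camera: one normal frame, one UV frame."""
    normal = rng.uniform(0, 1, size=(64, 64, 3))   # ordinary image
    uv = normal.copy()
    uv[20:50, 10:40] += 2.0                        # painted bag glows
    return normal, uv

dataset = []
for _ in range(100):
    normal, uv = capture_pair()
    label = uv.max(axis=-1) > 1.5    # thresholding the glow = free mask
    dataset.append((normal, label))  # (input image, bag segmentation)

# `dataset` can now train a segmentation network with zero hand labeling.
print(f"collected {len(dataset)} self-labeled examples")
```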
So I want to come back to Moravec’s paradox. What’s easy for robots is hard for humans. And what’s easy for us is still hard for robots. We have incredible capabilities.
We’re very good at manipulation. But robots still are not. I want to say, I understand. It’s been 60 years, and we’re still waiting for the robots that the Jetsons had. Why is this difficult? We need robots because we want them to be able to do tasks that we can’t do or we don’t really want to do.
But I want you to keep in mind that these robots, they’re coming. Just be patient. Because we want the robots, but robots also need us to do the many things that robots still can’t do. Thank you.
The video “Why Don’t We Have Better Robots Yet? | Ken Goldberg | TED” was uploaded on 03/28/2024 to the TED YouTube channel.