Exploring the Newest Advancements in AI Robotics with Mike Wooldridge – Video

AI robotics is a fascinating and challenging field, as Professor Michael Wooldridge of the University of Oxford explains. The public perception of AI is closely tied to robots, but progress in robotic AI has been slower than in other areas of AI. Professor Wooldridge emphasizes that AI in the physical world is incredibly challenging, but that progress is nonetheless being made.

In a visit to the Oxford Robotics Institute, Professor Wooldridge and his team explore the state-of-the-art developments in AI robotics. They meet with Associate Professor Perla Maiolino, who explains that AI is embedded in most of the robots they work on. This includes perception, processing of text and voice commands, and recognition of the environment. However, robotic AI is much harder than text-based AI due to the embodiment of robots, which adds complexity and potential sources of noise.

Despite the challenges, Professor Wooldridge is optimistic about the future of robotic AI. He acknowledges the significant progress that has been made and believes that there is still work to be done. With continued research and development, the future of robotic AI looks promising.

Overall, the video provides a fascinating insight into the complexities of robotic AI and the ongoing research efforts to overcome these challenges. It offers a glimpse into the cutting-edge work being done at the Oxford Robotics Institute and the potential impact of AI robotics on our future.

Watch the video by The Royal Institution

Video Transcript

– My name’s Michael Wooldridge. I am a Professor of Computer Science at the University of Oxford and I study artificial intelligence. I’m this year’s Royal Institution Christmas lecturer, and this year we’re gonna be looking at AI. So artificial intelligence is about building machines that can do things which currently only human beings can do. And for the most part, it’s about getting them to do very, very, very specific things, like being able to translate from one language to another, or to drive a car

Or to play a game of chess. And what we focus on in AI is here’s a particular thing which currently only a human being can do. How can we make a machine do that? So the public perception of AI is very closely tied to robots. When we imagine artificial intelligence, we imagine robots

And we imagine robots that can do things for us, like just come in our house and tidy it up or clean the streets and so on. But the truth is, progress in robotic AI has been a lot slower than progress in text-based AI, the kind of ChatGPT-based AI,

And the simple truth is, AI in the real world, in the physical world, is just very, very hard. But nevertheless, we’ve seen progress, and we’re here at the Oxford Robotics Institute. We’re gonna go and see some colleagues of mine who are working on the state of the art of AI robotics. – I am Perla Maiolino and I’m an Associate Professor in the Engineering Science Department of the University of Oxford and the PI of the Soft Robotics Lab at the Oxford Robotics Institute. Here we design and control robots for very different applications, from industry to inspection to medical robots

Or robots that can interact with us in a safe manner, from robot dogs to manipulators, but even soft robots that have soft and compliant bodies, designed to, let’s say, mimic biological organisms and safely interact with very dynamic and different kinds of environments.

The HSR is actually a robot that we are using, and it’s supposed to be used for domestic tasks, so the robot can be programmed to recognise objects in our environment and also maybe to interact with us. So we can even ask it to grab a bottle of water for us

And hand it over to us. Part of what we do is try to make robots learn from their actions, and so progressively acquire more skills, really as humans do. The sense of touch can also be used for learning because, for example, we can teach the robot to perform a task by touching and moving it, as we would do with a person to demonstrate a certain task. So we can exploit that kind of perception to teach the robots.

And so the robots can learn through this. The idea is to develop algorithms that allow them to use their sensor information, and also to learn from their actions, to be able to accomplish all the tasks that are required. – So thanks for showing us around this afternoon and showing us some amazing demos. – You’re very welcome. – Where are we seeing AI in all of this? Is it embedded in everything you do? Computer vision, machine learning, neural networks? – Nowadays, with most of the robots

That we’re working on here in ORI, AI is embedded. A lot of it comes into perception. So for example, with the HSR, a lot of it is how the robot perceives the world, how it processes the text that it’s hearing, how it responds to voice commands, and of course how it recognises things

Around its environment, or how it plans its path around humans or obstacles in a domestic environment. – Okay, so I’ve worked a little bit with some of your colleagues here in the lab, and it’s always seemed to me that robotic AI is just much harder than, you know,

The AI of text and symbols. – Absolutely, yeah. – So why is that? Why is robotic AI so hard? – It’s because of the embodiment. You have to deal with so many potential sources of noise. You have to deal with the sensors, you have to deal with interaction with the environment.

You have things that are really hard to model, for example contact, and of course, yeah, the mechanics, if you wish, of having a body adds a lot more complexity to the system. – And do things go wrong an awful lot more often? Do they break down? Things break. (both laughing)

Things tend to break. I mean, maybe initially with hardware for example, a lot of the hardware that we’re using now is mature. But for example, two or five years ago it was a lot harder. We would see a lot more sort of robot downtime. Nowadays we have progressed a lot.

So most of our robots are really robust and you know, a lot of the approaches that we have are very, very sort of repeatable. – Okay well thank you to you and to the team for showing us around and showing us some amazing work. And we look forward to showcasing some

Of it in the Christmas Lectures. – You’re welcome. – Thank you. We’ve had some amazing demos from some brilliant colleagues who are working on robotic AI, and what we’ve seen is that things which seem so trivial to us that we don’t even think about them, like picking up an apple and putting it in a bag, are phenomenally difficult when it comes to robotic AI.

So there’s a huge amount of work to be done on robotic AI before we have, for example, robots that can just come into our house and tidy up for us. And there’s a huge range of challenges, scientific challenges, deep challenges, that need to be addressed. But we’re seeing progress, the area’s moving forward,

And some of the demos today point a way to what that progress is going to look like. So I’m optimistic about the future of robotic AI, but I think we have to remember robotic AI is just much harder than the kind of textual AI of ChatGPT and the like.

Video “What are the latest developments in AI Robotics? – with Mike Wooldridge” was uploaded on 02/03/2024 to the YouTube channel The Royal Institution