When Daniela Rus and her collaborators looked at how a deep neural network made decisions in the vision system of their laboratory's self-driving car, they noticed that its attention was spread across the entire image, even the bushes and trees at the side of the road. "But that's not how people drive," said Rus in her office at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL), which she directs. "We usually look at the road horizon and the sides of the road."
Traditionally, AI and robotics have largely been two separate fields, Rus explained. "AI has been amazing us with its decision-making and reasoning, but it is confined in the digital space. Robots have physical presence but are generally pre-programmed and not intelligent. We are aiming to bridge the separation between AI and robots by developing what I call 'physical AI'. Physical AI uses AI's power to understand text, images, and video to make a real-world machine smarter. And those machines can be any physical platform: a sensor, a robot, or a power grid."