Here is good news for robotics enthusiasts, and for those who would like robots to take on more daily household tasks. Researchers at MIT envision robots as home helpers that can follow high-level commands such as "Go to the kitchen and get me a cup of coffee."
The idea is simple: ask the robot to go to the kitchen and make a coffee, and it does so with little error. But the process is complex. To execute this kind of high-level task, robots need to be able to perceive their physical environment, just as we humans do.
Luca Carlone, assistant professor of aeronautics and astronautics at MIT, puts it clearly: "In order to make any decision in the world, you need to have a mental model of the environment around you."
For humans, this is effortless; for robots, it is a painfully hard problem, according to the MIT engineers.
To build an understanding of the world, robots need to transform the pixel values they see through a camera into a meaningful model of their surroundings.
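To give a concrete sense of that pixel-to-world step, here is a minimal sketch of one standard building block: back-projecting a depth-camera pixel into a 3D point using the textbook pinhole camera model. This is not MIT's code; the intrinsic parameter values below are made up for illustration.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) with a measured depth into a 3D point (x, y, z)
    in the camera frame, using pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a pixel at the image center maps to a point straight ahead
# of the camera, at the measured depth.
print(backproject(320, 240, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0))
# → (0.0, 0.0, 2.0)
```

Repeating this over every pixel of a depth image yields a 3D point cloud, the raw material from which a geometric map of the environment can be built.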
Carlone and his team have now developed a new model, called 3D Dynamic Scene Graphs, to solve this problem. It is modeled after the way humans perceive and navigate the world.
“The new model enables robots to quickly produce a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment,” explains a press statement issued by MIT.
From this, the robot can extract relevant information from the 3D map, querying the location of objects and rooms, or the movement of people in its path.
According to Carlone, this compressed representation of the environment is useful because it allows the robot to make quick decisions and plan its path. The flow of decision-making is similar to that of humans.
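The layered structure described above can be sketched as a simple tree of labeled nodes: rooms contain objects and people, and each node carries a semantic label and a 3D position that the robot can query. This is only an illustrative sketch of the idea, not MIT's implementation; all class and function names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str        # semantic label, e.g. "chair" or "kitchen"
    position: tuple   # (x, y, z) in the robot's map frame
    children: list = field(default_factory=list)

def find(node, label):
    """Recursively query the scene graph for all nodes with a given label."""
    matches = [node] if node.label == label else []
    for child in node.children:
        matches.extend(find(child, label))
    return matches

# A tiny example graph: a building with two rooms and their contents.
kitchen = Node("kitchen", (0, 0, 0),
               [Node("chair", (1, 2, 0)), Node("table", (2, 2, 0))])
hallway = Node("hallway", (5, 0, 0), [Node("person", (5, 1, 0))])
building = Node("building", (0, 0, 0), [kitchen, hallway])

# The robot can now answer "where are the chairs?" directly from the
# compressed representation, without going back to raw pixels.
print([n.position for n in find(building, "chair")])  # → [(1, 2, 0)]
```

Querying such a graph is cheap compared with re-processing camera images, which is what makes the compressed representation useful for fast decision-making.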
Such robots could be deployed for high-level tasks such as working side by side with human workers on a manufacturing plant floor or exploring a disaster site for survivors, the MIT researchers said.
The key component of this 3D model is Kimera, an open-source library that simultaneously constructs a 3D geometric model of an environment while encoding the likelihood that an object belongs to a given semantic class (a chair versus a desk, for instance).
Carlone, along with his students, including lead author and MIT graduate student Antoni Rosinol, will present their findings on the new model this week at the Robotics: Science and Systems virtual conference.
Carlone and team’s new model is the first to generate a three-dimensional (3D) map of the environment in real time while also labeling objects, people, and structures within that map.
(With inputs from MIT)