Researchers from MIT have developed a learning-based particle simulation system that teaches robots how to handle delicate objects.
In a paper, the team described a model that learns and remembers how different materials (or particles) react when poked and prodded. Robots can use the model to predict how the underlying physics of an object, whether a liquid, a rigid body, or a deformable material, will respond to the force of their touch.
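As a loose illustration of the idea (a minimal sketch; the function below and its hand-coded repulsion term are placeholders, not the MIT team's learned network), such a model maps the current positions and velocities of a set of particles to a prediction of their next state, with nearby particles influencing one another:

```python
import numpy as np

def step_particles(pos, vel, radius=0.1, dt=0.01):
    """One hypothetical prediction step: nearby particles exchange
    simple pairwise influences that nudge each particle's velocity."""
    # Pairwise displacement and distance between all particles.
    disp = pos[:, None, :] - pos[None, :, :]          # (n, n, 3)
    dist = np.linalg.norm(disp, axis=-1)              # (n, n)
    # Particles within `radius` of each other interact.
    neighbors = (dist < radius) & (dist > 0)
    # Stand-in for a learned interaction: a soft repulsion term.
    # A real model would replace this with a trained neural network.
    force = np.where(neighbors[..., None],
                     disp / (dist[..., None] ** 2 + 1e-6), 0.0).sum(axis=1)
    vel_next = vel + dt * force
    pos_next = pos + dt * vel_next
    return pos_next, vel_next

# Example: roll out 100 predicted steps for 50 random particles.
pos = np.random.rand(50, 3)
vel = np.zeros((50, 3))
for _ in range(100):
    pos, vel = step_particles(pos, vel)
```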
Researchers believe that the new approach can give industrial robots a more refined touch and enable potentially fun applications in personal robotics. They will present the new system next month at the International Conference on Learning Representations.
Giving Robots A 'Human' Touch
There have been several previous attempts to teach robots how to handle delicate objects, but they often rely on approximations that immediately fall apart when tested in the real world. The new system instead mimics how humans learn the physics of objects through experience.
"Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it," explained Yunzhu Li, a graduate student from MIT and one of the authors of the study. "Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots."
To demonstrate, the researchers employed a two-fingered robotic hand called RiceGrip to manipulate a piece of deformable foam into desired shapes. The robot first used a depth-sensing camera and object recognition techniques to identify the foam. The new model then reconstructed the foam as a dynamic graph of particles suited to deformable materials.
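To give a flavor of that reconstruction step (a hedged sketch, assuming the camera yields a 3D point cloud; the function name, particle count, and connection radius are assumptions for illustration, not details from the paper), the point cloud can be downsampled into particles and nearby particles linked as graph edges:

```python
import numpy as np

def point_cloud_to_graph(points, n_particles=200, radius=0.05):
    """Hypothetical reconstruction: subsample a depth-camera point
    cloud into particles, then link particles that sit close together.
    Rebuilding the edge list every frame keeps the graph 'dynamic'
    as the foam deforms."""
    # Randomly subsample the cloud down to a fixed particle count.
    idx = np.random.choice(len(points), size=n_particles, replace=False)
    particles = points[idx]
    # Connect every pair of particles closer than `radius`.
    dist = np.linalg.norm(particles[:, None] - particles[None, :], axis=-1)
    src, dst = np.where((dist < radius) & (dist > 0))
    edges = np.stack([src, dst], axis=1)
    return particles, edges

# Example with a synthetic stand-in for a depth-camera point cloud.
cloud = np.random.rand(5000, 3) * 0.3
particles, edges = point_cloud_to_graph(cloud)
```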
To create the shapes, the robot manipulated the foam based on the model's predictions. The model already gives the robot an idea of how the particles will respond to its touch; when the observed positions of the real-world particles do not align with the model's predictions, the model is adjusted to match the material's real-world physics.
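A rough sketch of that correction loop follows (the tolerance, the blending rule, and the `observe` callback are all assumptions for illustration, not the authors' procedure): after each action, the predicted particle positions are compared with what the camera actually sees, and the model's state is pulled back toward reality when the two drift apart:

```python
import numpy as np

def control_step(model_pos, model_vel, observe, step_fn, tol=0.02, blend=0.5):
    """One predict-then-correct cycle. `step_fn` is the learned dynamics
    model; `observe` returns the particle positions the depth camera
    perceives after the robot acts. Both are supplied by the caller."""
    # 1. Predict how the particles should move under the planned touch.
    pred_pos, pred_vel = step_fn(model_pos, model_vel)
    # 2. Observe where the real-world particles actually ended up.
    real_pos = observe()
    # 3. If prediction and reality disagree too much, pull the model's
    #    state toward the observation before planning the next action.
    error = np.linalg.norm(pred_pos - real_pos, axis=-1).mean()
    if error > tol:
        pred_pos = blend * pred_pos + (1 - blend) * real_pos
    return pred_pos, pred_vel

# Example wiring: a trivial stand-in dynamics model and a camera that
# reports slightly noisy positions.
pos = np.random.rand(50, 3)
vel = np.zeros((50, 3))
fake_step = lambda p, v: (p + 0.01 * v, v)
fake_observe = lambda: pos + np.random.normal(0, 0.01, pos.shape)
pos, vel = control_step(pos, vel, fake_observe, fake_step)
```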
A Long Way To Go
The next goal of the researchers is to teach robots to predict interactions with objects that are only partially visible. For example, a robot would be able to predict that a stack of boxes will topple over when pushed, even if only a portion of the stack can be seen while the rest is hidden from view.
Watch RiceGrip create three-dimensional shapes below.