Robots can now think ahead, thanks to Visual Foresight technology


A team of researchers at the University of California, Berkeley has developed a new robotic learning technology that makes robots capable of thinking ahead in order to “figure out how to manipulate objects they have never encountered before.”

The technology used in these new robots is called “visual foresight,” but it does not give robots the ability to predict the future.

The Berkeley researchers applied visual foresight to a robot called Vestri, enabling it to predict what its cameras will see several seconds into the future. Vestri demonstrated the ability to move small objects around on a table without touching or knocking over nearby obstacles. The technology allowed the robot to perform this task without human input, supervision, or any prior knowledge of physics.

Sergey Levine, assistant professor at Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology, says,


“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” adding that this technology can enable intelligent planning of highly flexible skills in complex and real-world situations.

Visual foresight is based on “convolutional recurrent video prediction,” also known as dynamic neural advection (DNA). The team says DNA-based models can predict how the pixels in an image will move from one frame to the next based on the robot’s actions. As Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model, explains, robots like Vestri can now learn a range of visual object manipulation skills entirely on their own.
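The core DNA idea can be sketched in a few lines: each pixel of the predicted next frame is a weighted combination of nearby pixels in the current frame. In the real model, a recurrent convolutional network predicts those per-pixel weights from the image and the robot’s action; the NumPy sketch below is an illustrative assumption that supplies the weights directly, not the authors’ implementation.

```python
import numpy as np

def dna_step(frame, kernels):
    """One step of dynamic neural advection (DNA), sketched.

    Each output pixel is a convex combination of pixels in a small
    neighborhood of the current frame. In the actual model the
    per-pixel kernels are predicted by a recurrent conv-net from the
    frame and the action; here they are passed in for illustration.

    frame:   (H, W) grayscale image
    kernels: (H, W, k, k) per-pixel weights, each kernel summing to 1
    """
    H, W = frame.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(frame)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]      # k x k neighborhood of (i, j)
            out[i, j] = np.sum(patch * kernels[i, j])
    return out

# Example: kernels that put all weight on each pixel's right-hand
# neighbor, so the whole image appears to shift one pixel to the left.
H, W, k = 4, 4, 3
kernels = np.zeros((H, W, k, k))
kernels[:, :, 1, 2] = 1.0  # center row, rightmost column of each kernel
frame = np.arange(H * W, dtype=float).reshape(H, W)
next_frame = dna_step(frame, kernels)
```

Because the output is built by redistributing existing pixels rather than generating new ones, the model tends to keep object appearance intact while predicting motion, which is what makes it useful for planning pushes.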

Frederik Ebert, a graduate student in Levine’s lab who worked on the project, compared the work with robots to the way humans learn to interact with objects in their environment:

Humans learn object manipulation skills without any teacher through millions of interactions with a range of objects over their lifetime, and the team has shown that it is possible to build a robotic system that likewise leverages large amounts of autonomously collected data to learn broadly applicable manipulation skills, especially object pushing.

Levine says that Vestri’s capabilities are still somewhat limited, though additional work to improve visual foresight is under way. One day, the technology could be used to help self-driving cars on the road, better equipping them to handle unfamiliar objects and new situations.

The technology requires numerous improvements before that becomes possible, though, such as more refined video prediction and methods for gathering more specific video data. Following these advancements, robots may be able to perform more complex tasks, like lifting and placing objects or handling deformable objects such as cloth or rope.

