MIT Robot Uses Sight and Touch To Identify Objects

Image credit: Pixabay

If we see a cat, we know what it is and can tell the difference between our adorable furball and, say, a pillow. But asking the same of a machine has generally been considered asking too much.

However, a new robot developed by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) might just change the way we perceive robots, and the way they perceive our world.

The MIT team taught an AI to identify objects by touch. To do so, they paired a KUKA robot arm with a tactile sensor called GelSight.

GelSight is essentially a slab of synthetic rubber that produces a detailed, computerized image of whatever surface it presses against, letting the machine perceive the world much as we do.

The team behind the project recorded 12,000 videos of 200 objects, ranging from fabrics to household items, then broke the footage down into still frames and fed them to the AI alongside the corresponding GelSight readings, in a dataset called “VisGel”.

All that data eventually amounted to over three million visual/tactile images. 
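The article doesn’t describe the dataset’s exact format, but the pairing idea behind VisGel is simple enough to sketch. Below is a minimal, hypothetical PyTorch `Dataset` that yields (video frame, tactile image) pairs matched by file name; the `frames/` and `touch/` directory layout is an assumption for illustration, not the actual release format.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class VisualTactilePairs(Dataset):
    """Pairs each video frame with the tactile image captured at the
    same timestep. Directory layout is hypothetical:
        root/frames/0000001.png  (camera view of the arm and object)
        root/touch/0000001.png   (GelSight reading at the same moment)
    """
    def __init__(self, root, transform=None):
        self.frame_dir = os.path.join(root, "frames")
        self.touch_dir = os.path.join(root, "touch")
        # Matching file names define the visual/tactile correspondence.
        self.names = sorted(os.listdir(self.frame_dir))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        frame = Image.open(os.path.join(self.frame_dir, name)).convert("RGB")
        touch = Image.open(os.path.join(self.touch_dir, name)).convert("RGB")
        if self.transform:
            frame = self.transform(frame)
            touch = self.transform(touch)
        return frame, touch
```

With millions of such pairs, a model can be trained to predict either modality from the other, which is the core of the CSAIL approach.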

Yunzhu Li, CSAIL PhD student and lead author of the paper about the system, said that just by looking at the images, the AI can “imagine the feeling of touching a flat surface or a sharp edge. By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”
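To make Li’s “imagine the feeling” remark concrete: cross-modal prediction can be framed as learning a mapping from an image in one modality to an image in the other. The sketch below is a deliberately simplified vision-to-touch encoder-decoder in PyTorch; the layer sizes and the plain L1 reconstruction loss are illustrative assumptions, and the actual CSAIL system uses a more sophisticated generative model.

```python
import torch
import torch.nn as nn

class VisionToTouch(nn.Module):
    """Toy encoder-decoder: visual frame in, predicted tactile image out.
    Architecture and sizes are illustrative, not the published model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(               # 3x256x256 -> 64x64x64
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(               # 64x64x64 -> 3x256x256
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

# One training step: regress predicted tactile images onto real ones.
model = VisionToTouch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

frame = torch.rand(1, 3, 256, 256)   # stand-in for a video frame
touch = torch.rand(1, 3, 256, 256)   # stand-in for the paired GelSight image

optimizer.zero_grad()
loss = loss_fn(model(frame), touch)
loss.backward()
optimizer.step()
```

The reverse direction (touch to vision) follows the same pattern with the inputs swapped, which is what lets the robot “blindly touch around” and still reason about what it is handling.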

Robots usually manipulate only one type of object, as is the case with factory robots. If they grasp a different kind of object, they might break it if it’s too fragile, or grip it too weakly if it’s sturdier.

And generally, robots are unable to tell that a china cup and a phone need to be handled differently. But with this sort of technology backing up their decisions, that might soon change.

For now, the robot built by the CSAIL team is only capable of identifying objects in a controlled environment, but the researchers plan to create an even larger dataset that will allow the machine to function in a variety of different environments.

“Methods like this have the potential to be very useful for robotics, where you need to answer questions like ‘is this object hard or soft?’ or ‘if I lift this mug by its handle, how good will my grip be?’,” said Andrew Owens, a postdoctoral researcher at the University of California, Berkeley. “This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability.”

Li’s paper will be presented next week at the Conference on Computer Vision and Pattern Recognition in Long Beach, California.
