New research helps robots feel layers of cloth rather than relying on computer vision to merely see them.
The work could allow robots to assist people with household tasks like folding laundry.
Humans use their senses of sight and touch to grab a glass or pick up a piece of cloth. It is so routine that little thought goes into it. For robots, however, these tasks are extremely difficult.
The wealth of information gathered through touch is hard to quantify, and the sense was difficult to simulate in robotics until recently.
“Humans look at something, we reach for it, then we use touch to make sure that we’re in the right position to grab it,” says David Held, an assistant professor in the School of Computer Science and head of the Robots Perceiving and Doing (R-PAD) Lab at Carnegie Mellon University.
“A lot of the tactile sensing humans do is natural to us. We don’t think that much about it, so we don’t realize how valuable it is,” Held says.
For example, to fold laundry, a robot needs a sensor that mimics the way a human's fingertips can feel the top layer of a towel or shirt while also sensing the layers beneath it. Researchers could teach a robot to feel the top layer of cloth and grasp it, but without sensing the remaining layers, the robot would only ever grab the top layer and never successfully fold the cloth.
“How do we fix this?” Held asks. “Well, maybe what we need is tactile sensing.”
ReSkin, developed by researchers at Carnegie Mellon and Meta AI, was the ideal solution. The open-source touch-sensing “skin” is made of a thin, elastic polymer embedded with magnetic particles to measure three-axis tactile signals. In a recent paper, the researchers used ReSkin to help the robot feel layers of cloth rather than relying on its vision sensors to see them.
“By reading the changes in the magnetic fields from depressions or movement of the skin, we can achieve tactile sensing,” says Thomas Weng, a PhD student in the R-PAD Lab, who worked on the project with postdoctoral fellow Daniel Seita and graduate student Sashank Tirumala. “We can use this tactile sensing to determine how many layers of cloth we’ve picked up by pinching with the sensor.”
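The idea Weng describes, mapping deflections of the magnetic skin to a count of grasped layers, can be sketched as a simple classifier over magnetometer readings. The sensor layout, feature choice, and nearest-centroid model below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def featurize(readings: np.ndarray) -> np.ndarray:
    """Flatten a (num_magnetometers, 3) snapshot of three-axis magnetic-field
    deltas (reading minus a no-contact baseline) into one feature vector."""
    return readings.reshape(-1)

class NearestCentroidLayerClassifier:
    """Toy model: predict the number of grasped layers (e.g. 0, 1, or 2)
    by finding the closest class centroid in feature space."""

    def fit(self, X: np.ndarray, y: np.ndarray):
        # One centroid per layer-count label seen in training data.
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Distance from every sample to every centroid, then argmin.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In practice the paper's system would be trained on real pinch data from ReSkin; the point here is only that the magnetic signal becomes an ordinary classification input.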
Other research has used tactile sensing to grasp rigid objects, but cloth is deformable: it changes shape when touched, which makes the task even more difficult. Adjusting the robot's grasp on the cloth changes both the cloth's pose and the sensor readings.
The researchers didn't teach the robot how or where to grasp the fabric. Instead, the robot learned to estimate how many layers it was holding from the ReSkin sensor readings, then adjusted its grip and tried again. The team evaluated the robot picking up one and two layers of cloth, and used cloths of different textures and colors to demonstrate generalization beyond the training data.
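That estimate-then-adjust behavior amounts to a small feedback loop. A minimal sketch, assuming hypothetical `estimate_layers` and `adjust` callbacks supplied by the robot's control stack (names are invented for illustration):

```python
def grasp_n_layers(target_layers: int, estimate_layers, adjust,
                   max_attempts: int = 10) -> bool:
    """Pinch, estimate how many layers are grasped via tactile sensing,
    and move the gripper deeper or shallower until the count matches."""
    for _ in range(max_attempts):
        n = estimate_layers()          # e.g. a classifier over ReSkin readings
        if n == target_layers:
            return True                # grasp succeeded
        # Too few layers: push the fingertip deeper between layers;
        # too many: back it off.
        adjust(lower=(n < target_layers))
    return False                       # gave up after max_attempts
```

The design choice worth noting is that the policy never needs to know *where* the cloth is, only whether its current pinch holds the right number of layers.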
The thinness and flexibility of the ReSkin sensor made it possible to teach the robots how to handle something as delicate as layers of cloth.
“The profile of this sensor is so small, we were able to do this very fine task, inserting it between cloth layers, which we can’t do with other sensors, particularly optical-based sensors,” Weng says. “We were able to use it for tasks that were not achievable before.”
There is plenty of research to be done before handing the laundry basket over to a robot, though. It all starts with steps like smoothing a crumpled cloth, choosing the right number of layers to fold, then folding in the right direction.
“It really is an exploration of what we can do with this new sensor,” Weng says. “We’re exploring how to get robots to feel with this magnetic skin for things that are soft, and exploring simple strategies to manipulate cloth that we’ll need for robots to eventually be able to do our laundry.”
The team presented their research paper at the 2022 International Conference on Intelligent Robots and Systems in Kyoto, Japan.
Source: Stacey Federoff for Carnegie Mellon University