A new system that merges artificial intelligence, robotics, and a brain-machine interface takes a step toward restoring function and autonomy for people without full use of their limbs.
For more than 30 years—following an accident in his teens—Robert “Buz” Chmielewski has been a quadriplegic with minimal movement and feeling in his hands and fingers. But in November he manipulated two prosthetic arms with his brain to feed himself dessert.
“It’s pretty cool,” says Chmielewski, whose sense of accomplishment was unmistakable after using his thoughts to command the robotic limbs to cut and feed him a piece of golden sponge cake. “I wanted to be able to do more of it.”
Nearly two years ago, Chmielewski underwent a 10-hour brain surgery at Johns Hopkins Hospital in Baltimore as part of a clinical trial originally spearheaded by the Defense Advanced Research Projects Agency and leveraging advanced prosthetic limbs developed by the Johns Hopkins Applied Physics Laboratory (APL).
Its goal was to allow participants to control assistive devices and to perceive physical stimuli, such as touches to the prosthetic limbs, using neurosignals from the brain.
Surgeons implanted six electrode arrays into both sides of his brain, and within months he was able to demonstrate, for the first time, simultaneous control of two of the prosthetic limbs through a brain-machine interface.
Researchers were impressed with his progress during the first year of testing and wanted to further push the bounds of what could be accomplished. The team launched a parallel line of inquiry—termed “Smart Prosthetics”—to develop strategies for providing advanced robot control and sensory feedback from both hands at the same time using neural stimulation.
The researchers set out to develop a closed-loop system that merges artificial intelligence, robotics, and a brain-machine interface. In the instance of Chmielewski serving himself dessert, the system enabled him to control the movements necessary to cut food with a fork and knife and feed himself.
“Our ultimate goal is to make activities such as eating easy to accomplish, having the robot do one part of the work and leaving the user, in this case Buz, in charge of the details: which food to eat, where to cut, how big the cut piece should be,” says David Handelman, a senior roboticist specializing in human-machine teaming. “By combining brain-computer interface signals with robotics and artificial intelligence, we allow the human to focus on the parts of the task that matter most.”
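The division of labor Handelman describes, high-level choices from the user and low-level motion from the robot, is the basic pattern of shared control. The sketch below is a hypothetical, heavily simplified illustration of that pattern, not a description of APL's actual system; every name in it (decode_user_intent, plan_cut_trajectory, the toy decoder rule, the waypoint format) is invented for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class UserIntent:
    """High-level choices decoded from brain-machine interface signals."""
    food_item: str                      # which food to eat
    cut_location: Tuple[float, float]   # where on the plate to cut (x, y in cm)
    piece_size: float                   # how big the cut piece should be (cm)


def decode_user_intent(neural_features: List[float]) -> UserIntent:
    """Hypothetical decoder: maps neural features to a discrete, high-level choice.
    A real system would use a classifier trained on recorded cortical activity."""
    # Toy rule: pick an option based on the strongest feature, just to keep the sketch runnable.
    choice = max(range(len(neural_features)), key=lambda i: neural_features[i])
    items = ["sponge cake", "fruit", "vegetables"]
    return UserIntent(food_item=items[choice % len(items)],
                      cut_location=(10.0 + choice, 5.0),
                      piece_size=2.0)


def plan_cut_trajectory(intent: UserIntent, steps: int = 5) -> List[Tuple[float, float, float]]:
    """Robot side of the partnership: turn the user's choice into low-level knife
    waypoints (x, y, z), sparing the user millimeter-level motion control."""
    x, y = intent.cut_location
    # Straight downward cut from 3 cm above the plate to the plate surface.
    return [(x, y, 3.0 - 3.0 * t / (steps - 1)) for t in range(steps)]


def shared_control_step(neural_features: List[float]) -> None:
    """One pass of the closed loop: decode intent, plan, and 'execute' (print) the motion."""
    intent = decode_user_intent(neural_features)
    trajectory = plan_cut_trajectory(intent)
    print(f"User chose: {intent.food_item}, piece ~{intent.piece_size} cm")
    for waypoint in trajectory:
        print(f"  robot moves knife to {waypoint}")


if __name__ == "__main__":
    # Stand-in for decoded neural activity on a single trial.
    shared_control_step([0.2, 0.9, 0.1])
```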
Francesco Tenore, a neuroscientist and principal investigator for the Smart Prosthetics study, says the next steps include not only expanding the number and types of activities of daily living that Buz can demonstrate with this form of human-machine collaboration, but also providing him with additional sensory feedback as he completes tasks, so that he won't have to rely on vision to know whether he's succeeding.
“The idea is that he’d experience this the same way that uninjured people can ‘feel’ how they’re tying their shoelaces, for example, without having to look at what they’re doing,” Tenore says.
In an interview just before Thanksgiving—the traditional launch of a food-heavy holiday season—Buz reflected on the significance of this research for individuals with limited mobility. Disabilities like his take away a person’s independence, he said, particularly their ability to eat by themselves.
“A lot of people take that for granted. To be able to do this independently and still be able to interact with family is a game-changer,” he said.
The Smart Prosthetics study was funded by an internal APL research grant.
Source: Johns Hopkins University