Tuesday, November 12, 2024

Research – a thought-driven robot to help quadriplegics


EPFL has developed software that uses brain waves to control a machine, with the aim of restoring independence to people who have lost it.


The tests were performed using an articulated arm that receives signals sent by electrodes placed on a person’s head.

© Alan Herzog 2021 EPFL

If the arm passes too close to the glass, or too far from it, the brain sends an error message.


© Alan Herzog 2021 EPFL

For many years, two groups of EPFL researchers have been working on ways to restore some independence to people with quadriplegia. The idea is to use robots that can perform tasks on behalf of people who are unable to make even the slightest movement. The scientists have succeeded in developing a computer program that controls a robot using electrical signals emitted by the brain. No voice command or hand control is needed; thought alone is enough.

For their work, the teams of Aude Billard, professor and head of the Learning Algorithms and Systems Laboratory at EPFL, and José del R. Millán, director of the Brain-Machine Interface Laboratory at EPFL, used an articulated arm developed several years ago. It can travel from left to right and back again, moving or avoiding objects in its field of vision. “Our robot was programmed to avoid an obstacle, but we could just as well have chosen a completely different task, such as filling a glass with water, or pushing or pulling an object, for example,” explains Aude Billard.
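The article does not describe the controller itself, but one simple way to picture an arm that “moves or avoids objects in its field of vision” is a trajectory that is pulled toward its goal and pushed away when it gets too close to an obstacle. The sketch below is a generic potential-field-style illustration, not EPFL’s actual controller; the positions, gains and clearance value are invented for the example.

```python
import numpy as np

def step_towards_goal(pos, goal, obstacle, clearance=0.15, gain=1.0, dt=0.05):
    """One integration step of a toy planar controller: attraction to the
    goal plus repulsion when the arm is closer than `clearance` to the
    obstacle. All numbers are illustrative, not taken from the EPFL robot."""
    attract = gain * (goal - pos)

    to_obst = pos - obstacle
    dist = np.linalg.norm(to_obst)
    repel = np.zeros(2)
    if 1e-6 < dist < clearance:
        # Repulsion grows as the arm gets closer than the desired clearance.
        repel = 5.0 * (clearance - dist) / (dist * clearance) * to_obst

    return pos + dt * (attract + repel)

# Illustrative run: the end-effector travels left to right past a "glass".
pos = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
glass = np.array([0.5, 0.02])   # obstacle sitting almost on the straight path
for _ in range(200):
    pos = step_towards_goal(pos, goal, glass)
print("final position:", pos)
```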

The idea was to fine-tune the arm’s trajectory, which sometimes passed too far from the object to be avoided and sometimes too close. “Because the goal of this robot is to help paralyzed people, we needed a way to communicate with it that is nonverbal and doesn’t involve any movement,” explains Carolina Gaspar Pinto Ramos Correia, a doctoral student.

The brain says no

The scientists therefore developed an algorithm that modifies the robot’s behavior through thought alone. A cap fitted with electrodes records an electroencephalogram (EEG) of the user’s brain activity. The user does not need to do anything other than watch the robot. If they are not satisfied with its behavior, their brain produces a distinct, measurable error signal, as if to say, “No, this doesn’t suit me.” The robot then understands that what it has just done is wrong, but without knowing exactly what was wrong: did it pass too close to the obstacle or, on the contrary, too far from it?
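The article does not give the team’s decoding pipeline, but single-trial detection of such error-related EEG responses is conventionally done by training a classifier on labelled EEG epochs. The sketch below shows that idea with simulated data and a linear discriminant classifier; the channel count, window length and injected waveform are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Simulated EEG epochs: 200 trials x 8 channels x 64 samples (~0.5 s window).
# In half of the trials we inject a small deflection mimicking an
# error-related potential; real data would come from the electrode cap.
n_trials, n_channels, n_samples = 200, 8, 64
epochs = rng.normal(0.0, 1.0, size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)            # 1 = "the brain said no"
erp = 0.8 * np.sin(np.linspace(0, np.pi, n_samples))  # toy error waveform
epochs[labels == 1, :3, :] += erp                     # add it on a few channels

# Flatten each epoch into a feature vector and train a linear classifier,
# a common baseline for single-trial detection of event-related potentials.
X = epochs.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis()
clf.fit(X[:150], labels[:150])

accuracy = clf.score(X[150:], labels[150:])
print(f"held-out detection accuracy: {accuracy:.2f}")
```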

This error signal is then fed to an inverse reinforcement learning algorithm, which works out what the user wants and how to modify the robot’s behavior to satisfy them. To validate its deductions, the robot runs a series of trials, and it converges quickly: three to five attempts are enough for it to understand what the paralyzed person wants and adapt accordingly.
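The published method uses inverse reinforcement learning driven by the detected error potentials; its details are not given in this article. As a rough intuition for how a handful of attempts can suffice, the sketch below shows a robot converging on an acceptable obstacle clearance from nothing more than a binary “error / no error” signal: it keeps a set of candidate clearances, tries one, and discards the values the user’s brain has implicitly rejected. The candidate grid, the tolerance and the simulated user are all invented for illustration and are not the paper’s algorithm.

```python
import numpy as np

def simulated_user(clearance, preferred=0.12, tolerance=0.02):
    """Stand-in for the EEG error detector: returns True (error signal)
    when the attempted clearance feels too close or too far."""
    return abs(clearance - preferred) > tolerance

# Candidate clearances (in metres) the robot is willing to try.
candidates = list(np.linspace(0.02, 0.30, 15))

attempt = 0
while candidates:
    attempt += 1
    # Try the median of the clearances that are still plausible.
    trial = sorted(candidates)[len(candidates) // 2]
    error = simulated_user(trial)
    print(f"attempt {attempt}: clearance {trial:.3f} m -> "
          f"{'error signal' if error else 'accepted'}")
    if not error:
        break
    # The error signal says "wrong" but not "too close" or "too far",
    # so only the rejected value and its close neighbours are removed.
    candidates = [c for c in candidates if abs(c - trial) > 0.03]
```

Run as written, this toy loop settles on an accepted clearance in three attempts, which is in the same spirit as the three to five attempts reported above.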

“The robot, which is equipped with artificial intelligence, can learn quickly, but it must be able to detect when it has made a mistake in order to correct it. Detecting the error signal was a real technical challenge,” says José del R. Millán. “The difficulty was linking brain activity to the control of the robot, in other words, translating signals from the brain into actions for the robot. To do this, we use machine learning to identify the brain activity related to a particular task. We then combine this information with the robot’s actions to adapt its movements according to the user’s preferences,” says Iason Batzianoulis, lead author of the study, published in Communications Biology.
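The quote describes the pipeline only at a high level: machine learning maps brain activity to a task-related label, and that decoded label is then combined with what the robot is doing to adjust its movement. The snippet below is a generic illustration of that translation step, not the published method: a standard classifier is trained on placeholder feature vectors standing in for EEG windows, and its output is mapped to a hypothetical robot-level decision. The feature sizes, labels and action names are all assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Map each decoded label to a change in the robot's behaviour.  The action
# names are invented; the article only says that decoded brain activity is
# combined with the robot's actions to adapt its movement.
ACTIONS = {0: "keep current trajectory", 1: "replan around the obstacle"}

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 96))    # placeholder features from past EEG windows
y_train = rng.integers(0, 2, size=300)  # their labels (no error / error)

# A standard decoding pipeline: feature scaling followed by a linear model.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_train, y_train)

new_window = rng.normal(size=(1, 96))   # features from a freshly recorded window
label = int(decoder.predict(new_window)[0])
print("decoded label:", label, "->", ACTIONS[label])
```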

Brain-driven wheelchair

In the long term, the researchers want to apply this algorithm to a wheelchair. “Today, there are still unresolved engineering challenges. With a wheelchair, the difficulty is even greater, because both the body and the robot are in motion,” explains Aude Billard. The scientists also want to combine their algorithm with a robot that can perceive several different kinds of signal, in order to coordinate the data coming from the brain with visual and motor information.
