
A brain-computer interface for turning thoughts into movements

A new brain-computer interface system could improve the quality of life of people with motor dysfunction or paralysis by helping them execute movements.

A system developed by an international team of researchers led by the Georgia Institute of Technology allows users to imagine an action and wirelessly control a wheelchair or robotic arm.

Activating movements from thoughts

Brain-computer interfaces are rehabilitation technologies that analyze a person’s brain signals and translate that neural activity into commands, turning intentions into actions.

The portable system uses electroencephalography (EEG), a non-invasive method, to translate brain activity into commands. It integrates imperceptible microfiber electrodes with soft wireless circuits, which improves signal acquisition and does away with the cables and cumbersome connections of other devices used in studies or clinical treatments.
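The article does not detail the team’s signal chain, but a common first step in EEG-based interfaces is band-pass filtering to the mu/beta range (roughly 8 to 30 Hz), where movement-related rhythms are strongest. Below is a minimal Python sketch of that step only; the sampling rate and channel count are assumptions for illustration, not specifications of the Georgia Tech hardware.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # Hz; assumed sampling rate, not taken from the study

def bandpass(eeg, low=8.0, high=30.0, fs=FS, order=4):
    """Band-pass EEG to the mu/beta band (8-30 Hz), where
    motor-imagery-related rhythms are typically strongest."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt runs the filter forward and backward for zero phase lag
    return filtfilt(b, a, eeg, axis=-1)

# eeg: (n_channels, n_samples) array; random placeholder data here
eeg = np.random.randn(4, 2 * FS)
filtered = bandpass(eeg)
```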

Accurately measuring those brain signals is essential to determining what action a user intends to take. To address that challenge, the researchers integrated a powerful machine learning algorithm and a virtual reality component.
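The study’s actual algorithm is not described in this article, so as a generic illustration of the decoding step, here is a sketch of a classic motor-imagery pipeline: log-variance features from band-passed EEG epochs fed to a linear discriminant classifier, with each predicted class mapped to a device command. Every name in it (log_var, COMMANDS, the label encoding) is hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

def log_var(epochs):
    """Log-variance per channel: a classic, compact feature for motor
    imagery, since imagining a movement changes mu/beta power over
    the motor cortex."""
    return np.log(np.var(epochs, axis=-1))

# X: (n_trials, n_channels, n_samples) band-passed EEG epochs
# y: labels, e.g. 0 = imagined left-hand grasp, 1 = imagined right-hand grasp
# (random placeholder data; a real system would use recorded calibration trials)
X = np.random.randn(40, 4, 500)
y = np.random.randint(0, 2, size=40)

clf = make_pipeline(FunctionTransformer(log_var), LinearDiscriminantAnalysis())
clf.fit(X, y)

# Map each prediction to a device command, e.g. steering a wheelchair
COMMANDS = {0: "turn_left", 1: "turn_right"}
print(COMMANDS[int(clf.predict(X[:1])[0])])
```

A linear classifier is a deliberately simple stand-in here; the study’s more powerful machine learning algorithm would replace that stage.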

The system, still experimental, has been tested with four human subjects but not yet with disabled people.

“This is only a first demonstration, but we are delighted with what we have seen,” said Woon-Hong Yeo, director of Georgia Tech’s Center for Human-Centered Engineering and Interfaces at the Institute for Electronics and Nanotechnology, and a member of the Petit Institute for Bioengineering and Bioscience.

Yeo’s team has experience with these kinds of projects. He previously presented a brain-computer interface in a 2019 study published in Nature Machine Intelligence. The lead author of that work, Musa Mahmood, was also the lead author of the team’s new research paper.

“This new brain-machine interface uses a completely different paradigm, involving imagined motor actions, such as grasping with either hand, which frees the subject from having to look at too many stimuli,” said Mahmood, a doctoral student in Yeo’s lab.

In this year’s study, featured on the Georgia Tech website, users demonstrated precise control of virtual reality exercises using their thoughts, their motor imagery. Visual cues improved the process for both the users and the researchers collecting data. “Virtual prompts have proven to be very helpful,” Yeo said. “They accelerate and improve user engagement and accuracy. And we were able to record continuous, high-quality motor imagery activity.”

Banner photo: Woon-Hong Yeo, Georgia Institute of Technology


