Researchers from the Massachusetts Institute of Technology (MIT) have designed a skin-like device that can measure small facial movements in patients who have lost the ability to speak. Their study appears in Nature Biomedical Engineering.

The stretchable, skin-like device attaches to a patient’s face and measures small movements such as a twitch or a smile. Using this approach, patients could communicate a variety of sentiments through small movements that the device measures and interprets.

The researchers hope that their new device will allow patients to communicate in a more natural way, without having to deal with bulky equipment. The wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it unobtrusive, they explain in a media release from MIT.

“Not only are our devices malleable, soft, disposable, and light, they’re also visually invisible. You can camouflage it and nobody would think that you have something on your skin.”

— Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences at MIT and the leader of the research team

The researchers tested the initial version of their device in two ALS patients (one female and one male) and showed that it could accurately distinguish three facial expressions: a smile, an open mouth, and pursed lips.

A Skin-Like Sensor

The MIT team set out to design a wearable interface that patients could use to communicate in a more natural way, without the bulky equipment required by current technologies.

The device they created consists of four piezoelectric sensors embedded in a thin silicone film. The sensors, which are made of aluminum nitride, can detect mechanical deformation of the skin and convert it into an electric voltage that can be easily measured. All of these components are easy to mass-produce, so the researchers estimate that each device would cost around $10.
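
For readers curious about what the measurement step looks like in practice, the sketch below shows one way raw readings from four piezoelectric channels might be converted into strain estimates. The ADC resolution, reference voltage, and sensitivity values are illustrative placeholders, not figures from the study.

```python
import numpy as np

# Minimal sketch: converting raw readings from four piezoelectric channels
# into strain estimates. All constants here are hypothetical placeholders,
# not values reported by the MIT team.

ADC_BITS = 12        # assumed ADC resolution
V_REF = 3.3          # assumed reference voltage (volts)
SENSITIVITY = 0.02   # assumed volts per unit strain, per channel

def counts_to_strain(raw_counts: np.ndarray) -> np.ndarray:
    """Convert raw ADC counts (shape: [n_samples, 4]) to strain estimates."""
    volts = raw_counts / (2**ADC_BITS - 1) * V_REF  # counts -> volts
    return volts / SENSITIVITY                       # volts -> strain

# Example: one window of samples from the four sensors
raw = np.random.randint(0, 2**ADC_BITS, size=(256, 4))
strain = counts_to_strain(raw)
print(strain.shape)  # (256, 4)
```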

The researchers used a process called digital image correlation on healthy volunteers to help them select the most useful locations to place the sensor. They painted a random black-and-white speckle pattern on each volunteer’s face, then captured many images of the area with multiple cameras as the subjects performed facial motions such as smiling, twitching the cheek, or mouthing the shape of certain letters. Software then analyzed how the small dots moved in relation to one another to determine the amount of strain experienced in a given area.
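
A toy example can illustrate the core idea behind digital image correlation: find the displacement of a small speckle patch between two frames by maximizing normalized cross-correlation. A real DIC pipeline, with multiple cameras, subpixel resolution, and full strain fields, is far more involved; this sketch shows only the matching step.

```python
import numpy as np

# Toy DIC: track an integer-pixel shift of a speckle patch between frames
# by maximizing normalized cross-correlation within a small search window.

def best_shift(ref_patch, frame, top, left, search=5):
    """Find the (dy, dx) in +/-search that best matches ref_patch in frame."""
    h, w = ref_patch.shape
    ref = (ref_patch - ref_patch.mean()) / (ref_patch.std() + 1e-9)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame[top + dy: top + dy + h, left + dx: left + dx + w]
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = (ref * cand).mean()  # normalized cross-correlation
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic data: a speckle image and a copy shifted down 2 and right 1 pixel
rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, shift=(2, 1), axis=(0, 1))
patch = frame0[20:30, 20:30]
print(best_shift(patch, frame1, 20, 20))  # -> (2, 1)
```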

The researchers also used the measurements of skin deformation to train a machine-learning algorithm to distinguish between a smile, an open mouth, and pursed lips. Using this algorithm, they tested the devices with two ALS patients and achieved about 75% accuracy in distinguishing between these movements. The accuracy rate in healthy subjects was 87%, the release notes.
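
The release does not name the specific model the team trained, so the sketch below uses a support vector machine on fabricated strain features purely to illustrate the training-and-testing step; the data, feature layout, and accuracy it prints have no connection to the study’s results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch of the classification step on synthetic data. The actual model
# used by the MIT team is not specified in the release; an SVM stands in.

rng = np.random.default_rng(42)
LABELS = ["smile", "open mouth", "pursed lips"]

# Fabricated feature vectors: mean and peak strain for each of 4 channels
# (8 features), with a different cluster center per expression.
X, y = [], []
for label_idx in range(3):
    center = rng.normal(size=8)
    X.append(center + 0.3 * rng.normal(size=(50, 8)))
    y.extend([label_idx] * 50)
X = np.vstack(X)
y = np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```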

Enhanced Communication

Based on these detectable facial movements, a library of phrases or words could be created to correspond to different combinations of movements, the researchers say.
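
As a rough illustration of what such a library might look like, the snippet below maps movement sequences to phrases. The release gives no concrete vocabulary, so every entry here is invented.

```python
# Hypothetical phrase library: combinations of detected movements map to
# words or phrases. All entries are illustrative, not from the study.

PHRASE_LIBRARY = {
    ("smile",): "yes",
    ("pursed lips",): "no",
    ("open mouth",): "I need help",
    ("smile", "smile"): "I'm hungry",
    ("pursed lips", "open mouth"): "I'm in pain",
}

def decode(movements: tuple[str, ...]) -> str:
    """Look up a detected movement sequence; fall back to spelling it out."""
    return PHRASE_LIBRARY.get(movements, " + ".join(movements))

print(decode(("smile",)))                     # -> "yes"
print(decode(("pursed lips", "open mouth")))  # -> "I'm in pain"
```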

The information from the sensor is sent to a handheld processing unit, which analyzes it using the algorithm that the researchers trained to distinguish between facial movements. In the current prototype, this unit is wired to the sensor, but the connection could also be made wireless for easier use, the researchers say.
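
Under that architecture, the handheld unit’s job reduces to a simple loop: read a window of sensor samples, extract features, classify, and emit the result. The sketch below assumes that flow and uses placeholder functions for the sensor link and the trained model.

```python
import numpy as np

# Assumed processing loop for the handheld unit. read_window() and
# classify() are hypothetical stand-ins for the sensor I/O and the
# trained model; the window size is an arbitrary choice.

WINDOW = 256  # assumed samples per analysis window

def read_window() -> np.ndarray:
    """Placeholder for the sensor link; returns [WINDOW, 4] strain samples."""
    return np.random.default_rng().random((WINDOW, 4))

def extract_features(window: np.ndarray) -> np.ndarray:
    """Mean and peak strain per channel -> 8-dimensional feature vector."""
    return np.concatenate([window.mean(axis=0), window.max(axis=0)])

def classify(features: np.ndarray) -> str:
    """Placeholder for the trained model from the earlier sketch."""
    return ["smile", "open mouth", "pursed lips"][int(features[0] * 3) % 3]

for _ in range(3):  # a few loop iterations, for illustration
    movement = classify(extract_features(read_window()))
    print("detected:", movement)
```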

The researchers have filed for a patent on this technology and now plan to test it with additional patients. In addition to helping patients communicate, the device could also be used to track the progression of a patient’s disease, or to measure whether treatments they are receiving are having any effect, the researchers say.

[Source(s): Massachusetts Institute of Technology, EurekAlert]

