It has been six years since the Ice Bucket Challenge soaked the world. The fundraising campaign promoted awareness of amyotrophic lateral sclerosis (ALS), a neurodegenerative disorder with no known cure. Patients with ALS gradually lose control of their muscles, and losing the ability to speak is among the most painful symptoms. Aristotle's famous line, "Man is by nature a social animal," illustrates the importance of communication; speech is something most of us take for granted. With the loss of this vital function, ALS patients are often overwhelmed by a sense of disconnection and isolation. Researchers at MIT have recently developed a medical technology that features a skin-like sensing system. Designed to be robust, mechanically adaptive, predictable, and visually imperceptible, it makes communication possible and far less frustrating for these ALS patients.
Limitation of current communication devices for ALS patients
Professor Stephen Hawking's speech-generating device used an infrared sensor to detect twitches in his cheek, allowing him to select letters to form words and sentences. This technology worked but was not optimal, according to Canan Dagdeviren, the lead researcher of this project, who visited him in 2016. Bulkiness and reliability are among the major limitations of current communication tools for ALS patients.
How this non-verbal communication technology works
Intended to tackle Professor Stephen Hawking's struggle to type via his computer interface, Dagdeviren came up with the idea for this conformable Facial Code Extrapolation Sensor (cFaCES). The stretchable, near-invisible device is a thin, sticker-like sensor that attaches to the patient's cheek or temple. Embedded in a silicone (PDMS) sheet, it combines aluminum nitride (AlN) piezoelectric sensing elements with molybdenum (Mo) metal electrodes. The research paper was published in the journal Nature Biomedical Engineering.
The sensor tracks the patient's facial micromotions and generates voltage waveforms. These outputs then undergo signal processing and motion classification against the patient's motion library via a kNN-DTW algorithm: Dynamic Time Warping (DTW) is a time-series alignment algorithm, whereas k-nearest neighbours (kNN) is a supervised classification algorithm. A large vocabulary of phrases can thus be inferred from this motion library. The size of the library is customizable based on the patient's preferences and comfort, and the final number of motions chosen for decoding depends on how many phrases need to be communicated and on the chosen mapping strategy.
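To make the kNN-DTW idea concrete, here is a minimal sketch in Python. The waveforms, motion labels, and library structure below are invented for illustration and do not come from the published paper; the sketch only shows the general technique of comparing a query waveform to stored templates with DTW and voting among the k nearest matches.

```python
from collections import Counter

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of the three neighbouring
            # alignments (stretch a, stretch b, or advance both).
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return cost[n][m]

def knn_dtw_classify(query, library, k=3):
    """Label a voltage waveform by majority vote of its k nearest
    library templates under the DTW distance."""
    dists = sorted((dtw_distance(query, wave), label)
                   for label, waves in library.items()
                   for wave in waves)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy motion library: label -> example voltage waveforms (made up).
library = {
    "smile_medium": [[0, 1, 2, 1, 0], [0, 1, 2, 2, 1, 0]],
    "open_mouth":   [[0, 3, 5, 3, 0], [0, 2, 5, 4, 0]],
    "pursed_lips":  [[0, -1, -2, -1, 0], [0, -1, -2, -2, -1]],
}

print(knn_dtw_classify([0, 1, 2, 2, 0], library))  # → smile_medium
```

Because DTW allows sequences of different lengths to be warped onto each other, the classifier tolerates motions performed slightly faster or slower than the recorded templates, which is exactly why it suits noisy, variable-speed facial micromotions.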
The pros and cons
The sensor's conformable structure lets it integrate seamlessly with facial skin; it is so skin-like that you soon forget it is there once you put it on. This allows rapid, repeatable, and reliable measurement of skin strain as facial muscles move. Moreover, the device needs only one-time calibration: there is no need to recalibrate it every time it is removed and reapplied. Nevertheless, the technology still has a few limitations, such as the low density of sensing elements, small area coverage, wired connections, and an external adhesion mechanism.
The researchers carried out trials on healthy individuals and ALS patients using a small subset of three motions: smile medium (SM), open mouth (OM), and pursed lips (PL). Using four sensing elements, they demonstrated the potential of this medical technology with accuracy rates of 75% for ALS patients and 87% for healthy subjects. According to the researchers, the device's functionality might be expanded beyond communication in the future; for instance, it could serve as a clinical monitoring tool or an indicator of treatment effectiveness.
For More Information: