Meet Erin’s Avatar: Groundbreaking Tech Gives Voice & Mobility to Woman With ALS

At 23, Erin Taylor was diagnosed with ALS, a disease that leads to near-total paralysis. But a team of innovative companies has given Erin a virtual voice through a lifelike avatar that captures 96% of her visual and audible characteristics. The avatar, which will debut at Showstoppers CES 2024, demonstrates possibilities for using AI and other advanced technologies to empower people with even the most severe disabilities. Erin is helping test the different technologies to preserve her personality, independence, and mobility as the disease progresses.

Access our explainer video HERE.
Access Erin’s avatar HERE.

By combining these technologies, the Scott-Morgan Foundation aims to enable more autonomous communication and mobility for people with disabilities. During the CES demo, LUCI’s self-driving wheelchair will display Erin’s hyper-realistic AI avatar on a vertical screen. Created by DeepBrain AI and Lenovo, the avatar captures Erin’s personality and mannerisms with 96% accuracy and articulates her text in real time using IRISBOND’s multimodal eye gaze-powered AAC platform. The avatar serves as a virtual stand-in for Erin, who will be available for select interviews to discuss her vital role in designing these assistive technologies that preserve independence for people with disabilities.

“We are building an ecosystem of complementary solutions to change what it means to live with a disability,” said Andrew Morgan, CEO of the Scott-Morgan Foundation. “This growing collaboration showcases the power of mission-driven companies coming together. Individually, their innovations change lives. Together, they disrupt the entire landscape.”

Currently, 2.5 billion people worldwide need assistive technology, a number projected to reach 3.5 billion by 2050. By addressing the intensive needs of people with severe disabilities, especially ALS (a neurodegenerative disease also known in the UK as MND), the Scott-Morgan Foundation sets out to solve some of assistive technology's greatest challenges. Focused on designing and democratizing assistive tech, the Foundation does visionary work that raises the bar on what is possible in empowering those with limited mobility and speech.

“By driving public awareness of the possible, we hope to spark innovation that makes such assistive technologies accessible to all who need them,” said LaVonne Roberts, Executive Director of The Scott-Morgan Foundation. “This collaboration isn’t merely a technological advancement—it’s a powerful affirmation of human rights and inclusion.”

AI Avatar to Preserve Personality

The Scott-Morgan Foundation debuted the first hyper-realistic AI avatar deployed as assistive technology. Erin's avatar preserves her unique personality, voice, mannerisms, and full-body likeness, and she can use it to transform text into dynamic video. DeepBrain AI's avatars replicate the look and sound of the real individual with staggering accuracy: over 96% visual and audible similarity to the human counterpart.

The avatar marks a major leap beyond traditional voice banking or other text-to-speech engines. The DeepBrain avatar can also be integrated with an LLM-based generative AI for live interaction, as showcased at a recent Lenovo Formula 1 event—an exciting frontier for the next generation of assistive avatars. Lenovo conceived and sponsored the avatar with ongoing support and processing generously donated by DeepBrain.

Personal, Predictive LLM

Lenovo also unveiled a personal, on-device LLM dedicated to reliably providing the power of generative AI to people with disabilities. By compressing a larger public LLM, the team at Lenovo’s AI Innovation Centre created a powerful predictive text tool optimized for people who cannot use a traditional keyboard.

The new LLM offers solutions to two major challenges faced by Erin and others using AAC:

  • Smarter, faster text generation with multiple output options. After each user input (a character or word), the system runs inference on the LLM and presents a set of the most likely next words for the user to select from.
  • Offline reliability. Fast and accurate communication should not rely on Internet connectivity; the LLM runs entirely offline on a portable device.
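The prediction loop described above can be illustrated with a minimal sketch. This toy bigram model stands in for Lenovo's compressed on-device LLM (whose architecture is not public); the `NextWordPredictor` class, its training corpus, and the candidate count `k` are all illustrative assumptions, but the interaction pattern is the same: after each word the user enters, offer the top-k most likely next words so a full word can be selected with a single action.

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Toy stand-in for an on-device predictive LLM (illustrative only).

    After each word the user enters, it offers the k most likely
    next words, so the user selects whole words instead of typing
    letter by letter."""

    def __init__(self, corpus: str):
        # Count bigram frequencies from a small offline corpus.
        self.model = defaultdict(Counter)
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.model[prev][nxt] += 1

    def suggest(self, last_word: str, k: int = 3) -> list[str]:
        # Return up to k candidate next words, most frequent first.
        return [w for w, _ in self.model[last_word.lower()].most_common(k)]

# Hypothetical corpus of phrases the user communicates often.
corpus = (
    "i would like some water please "
    "i would like to rest now "
    "i would love to go outside"
)
predictor = NextWordPredictor(corpus)
print(predictor.suggest("would"))  # → ['like', 'love']
print(predictor.suggest("to"))
```

Because the model and its data live entirely on the device, predictions keep working with no Internet connection, which is the offline-reliability property described above. A production system would replace the bigram counts with a compressed neural LM and personalize it on the user's own messages.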

The LLM currently runs on Erin's Lenovo Yoga laptop and will later be ported to other devices. Future iterations may integrate more seamlessly into the IRISBOND platform and be customized to learn from Erin's personal data.

Multimodal Input Interface

IRISBOND, a pioneer in eye-gaze technology, is building the first AAC platform of its kind that converges multiple assistive technologies—including eye tracking, tongue, ear, wheelchair, and avatar controls—into one seamless user experience.

Globally, the World Health Organization (WHO) estimates that more than 405 million people require assistive technologies like AAC, yet in lower-income countries only 10% of those in need have access to them. In the US alone, 85-90% of the 2.5 million people who need AAC technologies cannot obtain them due to high costs and limited coverage. The Scott-Morgan Foundation is changing that.

This new AI-powered AAC platform aims to be the most accessible, affordable, and empowering communication solution for those with severe disabilities by converging multimodal inputs, leveraging NVIDIA AI for fluid speech, lessening the need for dedicated external hardware, and pioneering autonomous communication even for those with profound speech and mobility limitations.

People with severe disabilities who cannot use their hands or voices often face gaps in conveying messages. The painstaking letter-by-letter process of eye-gaze typing can be life-changing but is also time-consuming. IRISBOND's human-centric platform will leverage NVIDIA's AI to learn from the user and facilitate more natural conversations by predicting intent and optimizing for subtle movements.

Circular Keyboard and New Input Technologies

The Scott-Morgan Foundation also developed an innovative circular keyboard that optimizes letter placement to minimize eye movements, decreasing typing time and allowing easy, extended communication. Based on research with real users, the team moved away from the rectangular keyboard, a layout designed for fingers, and arrived at the eye-centric design. Concurrently, predictive text and AI analyze conversations, suggesting relevant responses to reduce latency dramatically.
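The core idea behind the circular keyboard can be sketched in a few lines. The Foundation's actual layout algorithm is not public, so the frequency ordering, ring spacing, and slot counts below are illustrative assumptions; the principle shown is simply that more frequent letters are placed on inner rings, closer to the resting gaze point, so the most common selections require the shortest eye movements.

```python
import math

# English letters ordered by approximate frequency (most common first);
# the exact ordering and ring geometry here are illustrative assumptions.
FREQ_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def circular_layout(radius_step: float = 1.0, per_ring: int = 8) -> dict:
    """Assign frequent letters to inner rings so common selections
    need the shortest gaze travel from the center."""
    layout = {}
    for i, letter in enumerate(FREQ_ORDER):
        ring = i // per_ring             # 0 = innermost ring
        slot = i % per_ring              # position around that ring
        angle = 2 * math.pi * slot / per_ring
        r = radius_step * (ring + 1)     # inner rings have smaller radius
        layout[letter] = (r * math.cos(angle), r * math.sin(angle))
    return layout

layout = circular_layout()
# 'E' (most frequent) sits on the innermost ring; 'Z' on the outermost,
# so the gaze distance to 'E' is much shorter than to 'Z'.
print(layout["E"], layout["Z"])
```

A real design would also weight letter pairs (so letters that often follow each other sit near one another) and account for eye-tracker accuracy, but the distance-versus-frequency trade-off above is the heart of the optimization.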

Augmental has created the world’s first hands-free tongue-operated touchpad called MouthPad^. This intraoral device converts subtle mouth gestures into input commands, serving as an invisible, always-available controller for personal electronics. Made of dental-grade materials, it empowers those with limited mobility through natural and expressive tip-of-the-tongue interaction.

EarSwitch earbuds, meanwhile, aim to make eye tracking faster, more accurate, and less tiring: users rapidly confirm letter selections and wheelchair controls with the "click of an ear," performed by tensing a small muscle inside the ear.

Here, the Scott-Morgan Foundation presents a vision for integrating these technologies to help people with different disabilities, or at different stages of disease progression, thrive.

Clinical Trials and Scaling Up

The Scott-Morgan Foundation is taking a patient-first approach by collaborating with pioneering researcher Dr. David Putrino and Dr. Abbey Sawyer of The Abilities Research Center at the Icahn School of Medicine at Mount Sinai. With more than a decade of experience, Dr. Putrino and his team assess emerging assistive technologies through clinical trials and research. Their crucial guidance and expertise in rehabilitation innovation will optimize this platform’s effectiveness and validate its real-world impact for people with disabilities.

Democratization of technology remains a core mission of the Scott-Morgan Foundation, and the new proof of concept is already inspiring the development of more accessible, scalable options in the near future.

“Digital technologies rarely allow for effective communication with the outside world for people with severe disabilities, and certainly not with the same ease and efficiency as the technology available for non-disabled communities,” said Dr. Putrino of The Abilities Research Center at the Icahn School of Medicine at Mount Sinai. “This failure drives social isolation and loneliness, not to mention the potential safety impacts of a person with severe disability being unable to convey their immediate needs.”

Additional Quotes can be found HERE.
PRESS KIT HERE.

About The Scott-Morgan Foundation:

The UK-based Scott-Morgan Foundation focuses on pioneering technology-driven solutions to empower people with severe disabilities. The Foundation develops both bold proofs of concept to inspire a brighter future and more immediate, scalable solutions.

Collaborators/innovations:

  • Hyper-realistic, AI-powered avatar from DeepBrain AI and Lenovo.
  • On-device, personal LLM for predictive, generative AI from Lenovo.
  • Eye-gaze tracking and AAC communication platform converging multimodal inputs, including eye tracking, ear, tongue, and wheelchair controls from IRISBOND.
  • Smart robotic mobility platform from LUCI Mobility.
  • Circular keyboard interface from The Scott-Morgan Foundation.
  • In-ear biometric control technology from EarSwitch.
  • Tongue-operated, hands-free touchpad from Augmental.