Revolutionizing Accessibility: The Power of Facial Navigation Technology


Introduction

Imagine that one of your colleagues has lost arm mobility in an accident and is now wondering how they will return to work. What if I told you that they could still use Teams and much more, just as they did before? UCL MotionInput v3.3: Facial Navigation Assistant v1.0 seeks to address the challenges faced by people with significant disabilities when interacting with digital devices, particularly devices that rely on a keyboard and mouse. It aims to enhance their ability to use computers, even with limited mobility.

This project is a guide application that works in conjunction with the UCL MI3 Facial Navigation application, which enables users to interact with computers without using traditional input devices. This facial navigation technology was developed in collaboration with prominent organizations during the COVID-19 pandemic and offers several interaction methods, including nose tracking, facial expressions, and speech commands.




The project's primary goals include guiding users on how to effectively use UCL MI3 Facial Navigation, providing 'Gesture Guide,' 'Joystick Guide,' and 'Speech Guide' for understanding various interaction methods, offering tutorials on specific tasks like sending emails or setting up meetings, and allowing users to personalize their interaction setup.


Development followed an agile methodology and was divided into sprints. The UI/UX design phase was challenging because the application could not be tested with users during development, but research into mobility impairments and a visit to a school specialising in education for individuals with physical disabilities provided valuable insights.


The technical details involve the use of C#, Windows App SDK, and WinUI 3 to create a user-friendly and robust application.

Future development plans include adding functionality for testing interactions, more tutorials, diverse representation in tutorial recordings, and compatibility with other devices such as tablets.

 

Project Overview

An estimated 1.3 billion people experience significant disabilities, and many of them face challenges when interacting with digital devices that rely on a keyboard and mouse. Existing assistive technologies already offer various alternative interaction methods. However, many of these technologies either lack a guide on how to use the application effectively, or the guidance is not personalised to the user and their mobility, which means the user's needs may not be fully addressed.

In response to these challenges, my project developed a guide application named UCL MotionInput v3.3: Facial Navigation Assistant v1.0. It works alongside the UCL MI3 Facial Navigation application and, with the help of a simple, clear UI, GIFs, and various tutorials, guides users on how to use their computers despite their limitations.

What is the UCL MI3 Facial Navigation application?

It is an application that allows users to interact with computers mouse- and keyboard-free. It was developed by over 150 UCL academics and staff during the COVID-19 pandemic as a way to reduce the spread of germs, in collaboration with Intel Corporation, IBM, Microsoft, the UK's NHS, and Great Ormond Street Hospital for Children.


Currently, UCL MI3 Facial Navigation allows users to control their computers through:

  • Nose Tracking and Facial Expressions
  • Nose Tracking and Speech Commands
  • Eye Gaze and Facial Expressions (Experimental)
  • Eye Gaze and Speech Commands (Experimental)

 

Project Goals

The project aims to deliver an application that:

  • Guides users on how to use UCL MI3 Facial Navigation effectively.
  • Provides users with a ‘Gesture Guide’, ‘Joystick Guide’, and ‘Speech Guide’ to understand how certain facial expressions or speech commands can be used to interact with a computer.
  • Provides various tutorials that show how UCL MI3 Facial Navigation can be used to send an email or set up a meeting on Teams and more.
  • Allows users to personalise their interaction setup.
  • Improves users’ independence by adding UCL MI3 Facial Navigation to Startup so the application can launch automatically when the user logs in (a sketch of this idea follows this list).
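To illustrate the last point, here is a minimal sketch of how a Windows application written in C# could register another executable to launch automatically at login. This shows only one possible approach (the current user's Run registry key); the value name and path are hypothetical placeholders, and the actual UCL MotionInput implementation may differ.

    using Microsoft.Win32;

    public static class StartupRegistration
    {
        // Registers an executable under HKCU\...\Run so Windows launches it
        // automatically when the current user logs in (no elevation required).
        public static void AddToStartup(string appName, string exePath)
        {
            using RegistryKey? key = Registry.CurrentUser.OpenSubKey(
                @"Software\Microsoft\Windows\CurrentVersion\Run", writable: true);
            key?.SetValue(appName, $"\"{exePath}\"");
        }
    }

    // Hypothetical usage; the value name and path below are placeholders.
    // StartupRegistration.AddToStartup(
    //     "MI3FacialNavigation",
    //     @"C:\Program Files\UCL MotionInput\FacialNavigation.exe");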

 

Project Journey

Due to the project's nature, the author adopted an agile methodology during development. The app's creation was divided into sprints, typically lasting from one to four weeks. Aside from the coding challenges, one sprint proved particularly difficult for the author: the phase involving planning and UI/UX design. The difficulty came from the inability to test the application with individuals during its development, leaving the author uncertain about whether UCL MotionInput v3.3: Facial Navigation Assistant v1.0 would effectively meet users' needs. Consequently, the author researched how various mobility impairments and movement disorders might affect a person's interactions with computers. Additionally, the author visited one of London's schools specialising in education for individuals with physical disabilities and sensory needs. This visit provided valuable insights into existing assistive technologies and areas requiring improvement.

 

Technical Details

UCL MotionInput v3.3: Facial Navigation Assistant v1.0 was built with C#, the Windows App SDK, and WinUI 3. These tools allowed the author to create a robust, user-friendly application that works towards improving users’ independence.
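As a rough idea of what this stack looks like in practice, the snippet below shows a minimal WinUI 3 application entry point in C#. The window title, the GesturePage type, and the overall structure are hypothetical placeholders used for illustration, not the project's actual code.

    using Microsoft.UI.Xaml;
    using Microsoft.UI.Xaml.Controls;

    public partial class App : Application
    {
        private Window? _window;

        protected override void OnLaunched(LaunchActivatedEventArgs args)
        {
            // Create the main window and host the guide pages inside a Frame.
            _window = new Window { Title = "Facial Navigation Assistant" };

            var rootFrame = new Frame();
            rootFrame.Navigate(typeof(GesturePage)); // hypothetical guide page
            _window.Content = rootFrame;
            _window.Activate();
        }
    }

In a real WinUI 3 project this class would be generated from App.xaml and registered by the Windows App SDK bootstrapper; the sketch only highlights where the main window and navigation frame are created.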

 

Future Development

  • Add functionality that allows users to test the interactions before choosing them.
  • Add more tutorials.
  • Enlist individuals from diverse ethnic backgrounds to create tutorial recordings, fostering inclusivity and representation.
  • Make the application fully compatible with other devices such as tablets.

 

Conclusion

In conclusion, the UCL MotionInput v3.3: Facial Navigation Assistant v1.0 application represents a step forward in addressing the challenges faced by users with disabilities in their interactions with digital devices. This project has effectively bridged the gap between existing assistive technologies and personalised guidance, enhancing users' abilities to navigate computers with limited mobility.

The integration of a modern user interface, GIFs, and comprehensive tutorials reflects a dedication to ensuring that users can fully embrace their interactions with devices, from sending emails to participating in online meetings through platforms like Teams.

Additionally, the ability to personalise interactions and to add UCL MI3 Facial Navigation to Startup further strengthens users' independence.


The project’s journey encompasses agile methodologies, research into mobility impairments, and meetings with educational institutions, resulting in the creation of a holistic solution.


Overall, this IXN project stands as a symbol of empowerment, accessibility, dedication, and the drive for a better life through technology.

 

Project Demo

[Animated demo of UCL MotionInput v3.3: Facial Navigation Assistant v1.0]

 

Learning Resources

If you are interested in learning about building applications with the Windows App SDK, please visit Windows App SDK.


Build the future

If you are interested in exploring your ideas and building a startup, Microsoft for Startups Founders Hub helps startups radically accelerate innovation by providing access to industry-leading AI services, expert guidance, and the essential technology needed to build a future-proofed startup.


Open to all. Sign up in minutes with no funding required.


Author

Zuzanna Sosnowska, UCL MSc Computer Science Student | LinkedIn

 
