Real-time sign language interpretation using Augmented Reality in museums

Summary

Because people with hearing disabilities in the Arabic-speaking world rely heavily on sign language to communicate, there is a critical need for technology that provides sign language interpretation where human interpreters cannot always be present.

As such, a solution is required to give people with hearing disabilities access to static and rich multimedia content (e.g. video, audio announcements, text, graphics, signage) through Augmented Reality.

The solution will enable people with hearing disabilities to access key information and partake in different activities in a more independent and equitable fashion.

This is particularly relevant in museums, where both the user journey and the exhibits rely heavily on signage and multimedia displays to convey key information.

Target Users

Museum visitors with hearing disabilities
Sign language users

User Journey

Maryam

Maryam is a 24-year-old woman. She is deaf and uses sign language to communicate. Maryam is looking to use her smartphone to access information on static and digital signage in her everyday life.

1. Maryam is looking at a painting whose accompanying description is written in Arabic. As a sign language user, she cannot read the text.

2. Maryam uses her smartphone camera to activate an Augmented Reality sign language interpretation app and access the information on the sign that describes the painting.

3. Once open, the app identifies and interprets the content of the painting description and converts it into sign language.

4. The interpreted content is presented to Maryam through a sign language avatar displayed on her smartphone or wearable device.
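
The four journey steps above amount to a recognize-translate-render pipeline. The sketch below illustrates that flow; every name in it (`recognize_text`, `GLOSS_DICT`, `play_on_avatar`) is a hypothetical placeholder, since a production app would rely on a real OCR engine, a full sign language translation model, and a 3D avatar renderer.

```python
# Hypothetical sketch of the journey above: camera text -> sign glosses -> avatar playback.
# All names and data here are illustrative placeholders, not a real OCR or AR API.

def recognize_text(camera_frame: str) -> str:
    """Stand-in for step 3's OCR stage that reads the painting description."""
    # A real implementation would run OCR on the live camera frame.
    return camera_frame.strip()

# Tiny illustrative text-to-sign-gloss dictionary (step 3 continued).
GLOSS_DICT = {
    "painting": "PAINTING",
    "oil": "OIL",
    "1890": "YEAR-1890",
}

def text_to_gloss(text: str) -> list[str]:
    """Map recognized words to sign glosses; unknown words fall back to fingerspelling."""
    return [GLOSS_DICT.get(w, f"FINGERSPELL:{w.upper()}") for w in text.lower().split()]

def play_on_avatar(glosses: list[str]) -> list[str]:
    """Stand-in for step 4: queue each gloss as an avatar animation clip."""
    return [f"play({g})" for g in glosses]
```

For instance, `text_to_gloss(recognize_text("Oil painting 1890"))` yields `["OIL", "PAINTING", "YEAR-1890"]`, which `play_on_avatar` would then queue as animation clips.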

Potential Service Features

  • NFC Enabled Identification
  • QR Code Scanner
  • Biometric Identification
  • Saved Favorite Points Of Interest
  • Automated Day Planner
  • Personalized Notifications
  • Predictive Insights
  • Issue Resolution
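
As an illustration of the QR Code Scanner feature, the sketch below maps an already-decoded QR payload to exhibit content for interpretation. The `museum://exhibit/<id>` URI scheme and the content table are assumptions made for this example; the QR decoding itself would come from the device's camera SDK or a scanning library.

```python
# Illustrative handling of a decoded QR payload for the QR Code Scanner feature.
# The "museum://exhibit/<id>" scheme and content table are assumptions for this sketch.
from typing import Optional
from urllib.parse import urlparse

EXHIBIT_CONTENT = {
    "42": "Oil painting description, ready for sign language rendering.",
}

def resolve_qr_payload(payload: str) -> Optional[str]:
    """Map a scanned QR payload to the exhibit content to be interpreted."""
    uri = urlparse(payload)
    if uri.scheme != "museum" or uri.netloc != "exhibit":
        return None  # Not one of the museum's own codes.
    return EXHIBIT_CONTENT.get(uri.path.lstrip("/"))
```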

Issue Statement

As a sign language user, Maryam is unable to read information delivered through static text and rich multimedia mediums (e.g. audio, video, graphics) during her visit to museums.

The inability to access crucial museum-related information, such as written display descriptions, puts people with hearing disabilities at a tremendous disadvantage compared to others.

Expected Key Benefits

Text / Graphics to Sign Language

Translate key information that is being delivered to mass audiences through text and video into sign language (both Arabic and English).
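
As a rough illustration of the bilingual requirement above, the sketch below detects whether recognized text is Arabic or English via a crude Unicode-block check and dispatches to a per-language gloss table. The gloss entries are invented placeholders, not real Arabic Sign Language or ASL data.

```python
# Sketch of bilingual text-to-sign dispatch. Gloss entries are placeholders,
# not real sign language data; script detection is a crude Unicode-range check.
GLOSSARIES = {
    "ar": {"مرحبا": "ArSL:HELLO"},     # Arabic text -> Arabic Sign Language gloss
    "en": {"welcome": "ASL:WELCOME"},  # English text -> American Sign Language gloss
}

def detect_language(text: str) -> str:
    """Return 'ar' if any character falls in the basic Arabic block, else 'en'."""
    return "ar" if any("\u0600" <= ch <= "\u06FF" for ch in text) else "en"

def translate_to_gloss(text: str) -> list[str]:
    """Look up each word in the detected language's glossary; fall back to fingerspelling."""
    glossary = GLOSSARIES[detect_language(text)]
    return [glossary.get(word, f"FINGERSPELL:{word}") for word in text.lower().split()]
```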

Contextual Information Delivery

Based on the user’s live location, relevant contextual information (e.g. emergency notifications, alternative routes, etc.) can be pushed to the user in sign language.
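
A location trigger of this kind can be sketched as a simple geofence check: compute the visitor's distance to each notification zone and deliver the messages whose radius she is inside. The zone coordinates, radius, and message below are invented for the example.

```python
# Sketch of location-triggered delivery: push a message when the visitor is
# within a zone's radius. Zone data below is an illustrative assumption.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two WGS-84 points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

ZONES = [
    # (name, lat, lon, radius_m, message to render in sign language)
    ("east_exit", 25.2950, 51.5310, 30.0, "Alternative route via the east exit"),
]

def notifications_for(lat, lon):
    """Return the messages whose geofence contains the user's live location."""
    return [msg for _, zlat, zlon, r, msg in ZONES if distance_m(lat, lon, zlat, zlon) <= r]
```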

Audience Expansion

Businesses and public sector organizations can use the solution to broadcast contextualized information in sign language, thereby reaching a niche customer base that is not traditionally served by conventional advertising mediums.

Implementation Analysis

Implementation Timeline

Medium (scale: Short / Medium / Long)

Technology Commercial Viability

Viable in Short Term (scale: Available Now / Viable in Short Term / Viable in Long Term)

Investment Requirements

Medium (scale: Low / Medium / High)

Key Implementation Considerations

1. User identification and sign-on

2. Collection and synchronization of real-time data from multiple sources (e.g. transportation schedules, emergency response routes)

3. Identification and instant / real-time sign language translation of relevant data in alternative formats (e.g. text, audio)

4. Data collection policies

5. Intuitive graphical user interface
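
Consideration 2, collecting and synchronizing real-time data from multiple sources, can be sketched as a last-write-wins merge over timestamped records. The feed shapes and field names below are assumptions made purely for illustration.

```python
# Sketch for consideration 2: merging real-time updates from several feeds
# (e.g. transport schedules, emergency alerts). Record shape is an assumption.

def merge_feeds(*feeds):
    """Combine timestamped updates, keeping only the newest record per id."""
    latest = {}
    for feed in feeds:
        for record in feed:
            key = record["id"]
            if key not in latest or record["ts"] > latest[key]["ts"]:
                latest[key] = record
    # Deliver in chronological order for translation and display.
    return sorted(latest.values(), key=lambda r: r["ts"])
```

For example, if a transport feed reports a delay at `ts=100` and an alert feed supersedes it at `ts=120`, only the newer record survives the merge.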