ASL Learning Experience

Design System Design, Virtual Reality, Accessibility Design

2024.08

INTRO

ASL

American Sign Language (ASL) is a complete, natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. It uses hand shapes, facial expressions, and body movements to convey meaning.

Brief

Users perform tasks in a VR environment using ASL, which makes learning sign language more engaging and fun.

Why VR

Learning a language is hard. Pairing words with imagery deepens memory and helps learners pick up vocabulary faster. Learning in VR not only reinforces memory but also makes the process more interesting through hands-on interaction.

Output

The system generates objects that correspond to the gestures the user makes. Following the guidance, the user spells out a complete sentence, and the words in that sentence appear as objects that together form a scene.

While the user learns, the VR cameras detect whether each gesture is correct and where it falls short. Whenever the user signs correctly, the system responds with visual feedback and interaction.

Stage 1

The user needs to spell out a single word, such as “grocery store”, to make a grocery store appear.

This helps users memorize individual words independently and connect them to objects.

Stage 2

The user needs to spell out a group of words in the same category, such as “bread”, to have bread generated (bread belongs to the food category).

Stage 3

The user needs to spell out a full sentence, such as “Go to the grocery store to buy apples, bread and eggs”, and interact with the game objects.
During this process, the object corresponding to each word is revealed in the scene once the correct gesture is made.


This helps the user independently assemble the learned words into a complete sentence, while the scene deepens the memory.

THE PROCESS

As a first step, I used the Meta Quest Interaction SDK to detect a single hand gesture in the scene and connected that gesture to an object: whenever the gesture is successfully recognized, the item appears.
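A minimal sketch of this wiring, assuming the Interaction SDK's `ActiveStateSelector` pose events; the component name, fields, and spawn logic here are hypothetical illustrations, not the project's actual script:

```csharp
using UnityEngine;
using Oculus.Interaction;

// Hypothetical helper: spawns an item when the Interaction SDK
// reports that the configured hand pose is being held.
public class GestureSpawner : MonoBehaviour
{
    [SerializeField] private ActiveStateSelector _pose;  // pose recognizer for one gesture
    [SerializeField] private GameObject _itemPrefab;     // e.g. the "apple" model
    [SerializeField] private Transform _spawnPoint;

    private void OnEnable()
    {
        // WhenSelected fires once when the pose starts matching.
        _pose.WhenSelected += SpawnItem;
    }

    private void OnDisable()
    {
        _pose.WhenSelected -= SpawnItem;
    }

    private void SpawnItem()
    {
        Instantiate(_itemPrefab, _spawnPoint.position, _spawnPoint.rotation);
    }
}
```

The pose itself is authored in the SDK's hand-pose components and assigned in the Inspector; this script only reacts to the recognition event.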

Next, I recorded the movements of both hands at the same time to form a complete gesture. I placed two cubes in the scene; whenever a hand made the correct gesture, its cube changed color to show that the hand was doing the right thing, and the scene updated accordingly.
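The two-hand check can be sketched like this, assuming one `ActiveStateSelector` per hand; everything beyond those SDK events (class name, cube feedback, completion log) is a hypothetical illustration:

```csharp
using UnityEngine;
using Oculus.Interaction;

// Hypothetical helper: tracks one pose per hand, colors a feedback
// cube for each, and reports when both halves of the sign are held.
public class TwoHandGesture : MonoBehaviour
{
    [SerializeField] private ActiveStateSelector _leftPose;
    [SerializeField] private ActiveStateSelector _rightPose;
    [SerializeField] private Renderer _leftCube;
    [SerializeField] private Renderer _rightCube;

    private bool _leftHeld, _rightHeld;

    private void OnEnable()
    {
        _leftPose.WhenSelected += () => UpdateHand(isLeft: true, held: true);
        _leftPose.WhenUnselected += () => UpdateHand(isLeft: true, held: false);
        _rightPose.WhenSelected += () => UpdateHand(isLeft: false, held: true);
        _rightPose.WhenUnselected += () => UpdateHand(isLeft: false, held: false);
    }

    private void UpdateHand(bool isLeft, bool held)
    {
        if (isLeft) _leftHeld = held; else _rightHeld = held;

        // Color the matching cube as per-hand feedback.
        var cube = isLeft ? _leftCube : _rightCube;
        cube.material.color = held ? Color.green : Color.gray;

        // Both hands match: the full sign is complete, so update the scene.
        if (_leftHeld && _rightHeld)
            Debug.Log("Full sign detected");
    }
}
```

Keeping per-hand state separate is what allows each cube to react independently while the scene change waits for the complete two-hand sign.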

To let the user better observe and learn each motion, I placed a visual hand in the scene that the user could watch repeatedly.

But during user testing, I found that neither the color-changing blocks nor the large visual hand worked well.

At this point I studied ASL carefully and chose a simple gesture as the first word to learn, "apple", which I built in Unity along with a recorded video of the hand motion. I also tried displaying the left and right visual hands in different colors, hoping users would read them more easily. In user testing the colored hands made little difference, but the recorded hand video received unanimous praise.

Finally, I built the hover interface, which shows the gesture to perform, an explanation, and a continuous loop of the hand movement. This helps the user understand the corresponding action more quickly.

© 2024 Oscar Wang
