Authors: Crescentia Jung*, Jazmin Collins*, Yeonju Jang, Jonathan Segal (* indicates co-first authorship and equal contribution).

This project explores how to make nonverbal cues in VR accessible to blind and low vision people. Working with a blind co-designer, we began by making gaze accessible, co-creating a highly customizable prototype that conveys gaze through audio and haptic feedback. Building on these initial findings, we have begun investigating additional nonverbal cues such as nodding, head shaking, smiling, and frowning.
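
To make the gaze-to-feedback idea concrete, the sketch below shows one way gaze detection could be mapped to user-configurable audio and haptic cues. This is a minimal, hypothetical TypeScript sketch, not the project's actual prototype or engine API: all names and parameters (`GazeCueSettings`, `isGazingAt`, `onGazeCue`, `coneAngleDeg`, `repeatIntervalMs`) are illustrative assumptions.

```typescript
type Vec3 = { x: number; y: number; z: number };

// Hypothetical customization knobs a blind or low vision user might tune.
interface GazeCueSettings {
  audioEnabled: boolean;    // play a tone when another avatar gazes at the user
  hapticEnabled: boolean;   // pulse the controller when gazed at
  coneAngleDeg: number;     // how directly the gaze must point at the user
  repeatIntervalMs: number; // throttle so sustained gaze doesn't spam cues
}

function sub(a: Vec3, b: Vec3): Vec3 {
  return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if a gaze ray from `gazerPos` along `gazeDir` points at `targetPos`
// within the user's configured cone.
function isGazingAt(
  gazerPos: Vec3,
  gazeDir: Vec3,
  targetPos: Vec3,
  coneAngleDeg: number,
): boolean {
  const toTarget = normalize(sub(targetPos, gazerPos));
  return dot(normalize(gazeDir), toTarget) >=
    Math.cos((coneAngleDeg * Math.PI) / 180);
}

// Fires the user's chosen feedback channels, throttled by repeatIntervalMs.
let lastCueTime = 0;
function onGazeCue(
  settings: GazeCueSettings,
  now: number,
  playAudio: () => void,
  pulseHaptics: () => void,
): void {
  if (now - lastCueTime < settings.repeatIntervalMs) return;
  lastCueTime = now;
  if (settings.audioEnabled) playAudio();
  if (settings.hapticEnabled) pulseHaptics();
}
```

In a real VR client, `isGazingAt` would presumably run each frame against every remote avatar's head pose, and the `playAudio`/`pulseHaptics` callbacks would hook into the platform's spatial audio and controller haptics, which is where the customizability described above would live.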

We plan to develop prototypes for these additional cues and run a formative design study with blind and low vision participants, followed by an evaluation study to assess our designs for accessible nonverbal cues.