XR Access Research Network
The XR Access Research Network fosters a diverse community of researchers across fields and disciplines who share an interest in making XR technologies accessible, enabling innovative research, new collaborations, resource sharing, and translational impact.
We’re also looking for research participants and testers – sign up here!
The Research Network holds regular seminars and conversations, supports emerging researchers at the undergraduate and graduate levels, and creates networking and collaboration opportunities among research groups and between academia and industry.
Research Network Seminars take place on the third Tuesday of each month at 12pm ET.
If you’d like to participate as a partner in the Research Network, we’d love to hear from you! You can get in touch with us via email at email@example.com.
All are welcome to register for the XR Access Research Network’s Seminar Series. Stay tuned for an announcement of the next seminar.
Research Network Projects
Authors: Jazmin Collins*, Crescentia Jung*, Yeonju Jang (* indicates co-first authorship and equal contribution). Our first project explored using the sighted guide technique in VR to support accessibility for blind and low vision people. We created a prototype with a guidance system and conducted a study with 16 blind and low vision participants. Participants completed various tasks and explored the virtual environments with their guide. From this project, we uncovered participants’ preferences for a guide in VR and a broad design space for virtual guides. We have begun a second project that explores using an AI-powered guide. We [...]
Authors: Ria J. Gualano*, Lucy Jiang*, Kexin Zhang*, Andrea Stevenson Won, and Shiri Azenkot (* indicates co-first authorship and equal contributions). Through this project, we aim to fill a gap in current accessibility research by focusing on the experiences of people with invisible disabilities (e.g., chronic health conditions, mental health conditions, neurodivergence, etc.) and their preferences regarding disability disclosure practices in virtual embodied spaces. We conducted semi-structured interviews with 9 participants, some of whom identified as neurodivergent and others who identified as having chronic conditions such as rheumatoid arthritis. Of the 9 participants, 4 were Caucasian, 2 were Mixed-Race, 1 was [...]
Authors: Lucy Jiang, Mahika Phutane, Shiri Azenkot. While traditional videos are typically made accessible with audio description (AD), we lack an understanding of how to make 360° videos accessible while preserving their immersive nature. Through individual interviews and collaborative design workshops, we explore ways to improve 360° video accessibility beyond applying traditional AD methods. We conducted design workshops with BLV and sighted AD creators, who collaboratively developed accessible 360° video prototypes. We found that AD experts value accurate vocabulary and different points of view for immersive descriptions, prefer a variety of presentation locations for spatialized AD, appreciate sound effects [...]
Authors: Crescentia Jung*, Jazmin Collins*, Yeonju Jang, Jonathan Segal (* indicates co-first authorship and equal contribution). This project explores how to make nonverbal cues accessible in VR for blind and low vision people. We first explored how to make gaze accessible in VR with a blind co-designer by co-creating a highly customizable prototype that uses audio and haptic feedback. From these initial findings, we have started to explore additional nonverbal cues such as nodding, shaking one’s head, smiling, and frowning. We plan to develop prototypes with these additional cues and conduct a formative design study with blind and [...]
Authors: Jonathan Segal, Samuel Rodriquez, Akshaya Raghavan, Heysil Baez, Shiri Azenkot, Andrea Stevenson Won. Tools for improving the accessibility of VR are currently distributed in a fragmented way: companies typically share their frameworks on their own websites, and academics who publish such tools do so inconsistently. This makes the tools difficult to integrate into other projects. When timelines are tight, especially if a project's main focus is not accessibility, it is easy to skip adding accommodations. This project advocates for a standard method of sharing such tools, focusing on the Unity [...]
VR games are becoming common, but people with visual impairments are frequently left out of the fun. Some prior work has explored including audio cues as an accompaniment to visual VR experiences, while developers of accessible games have created audio-only versions of popular games for people with visual impairments. However, there’s very little academic research on accessible VR games, pointing to an opportunity to explore how to convey information about moving objects, and how to teach players who may never have played a non-virtual version of a game how to interact with virtual objects. Dr. Kyle Rector and her [...]
Navigating stairs can be a dangerous mobility challenge for people with low vision. Inadequate handrails, poorly marked steps, and other obstacles can reduce mobility and lead to accidents. While past research has proposed audio stair-navigation aids for blind people, no research on people with low vision has yet addressed this challenge. Cornell Tech PhD student Yuhang Zhao, along with Cx faculty member Shiri Azenkot and other researchers from Cornell Tech and Columbia University, built on this research to design several AR navigation systems for people with low vision. They tested both projection-based and optical see-through smart glasses systems [...]
Cornell Tech PhD student Lei Shi, faculty member Shiri Azenkot, and their collaborators are studying how to design educational 3D models for students with visual impairments. The researchers interviewed teachers of the visually impaired about the needs of their students, and demonstrated previously designed 3D-printed interactive models and computer vision-based educational tools. Based on interview results, they designed an augmented reality (AR) app that pairs with a 3D printed model. The model’s tactile features help students learn the physical features of a concept, while visual and audio cues from the AR app provide additional information. The study’s results [...]