Nov 12th, 2025

Ever wondered how blind gamers wave hello, give a thumbs up, or show they’re excited in online games? Join us as entrepreneur, researcher, and designer Brandon Biggs shares his research on how techniques from top audio game developers can be applied to make VR more accessible for blind and low-vision users and better for everyone.

Event Details

Date: Wed Nov 12th

Time: 10am PT | 1pm ET

Location: Zoom

This talk will present a framework, described in the paper “Creating Non-Visual Non-Verbal Social Interactions in Virtual Reality,” for making virtual reality (VR) fully accessible to blind and low-vision individuals. Using a Delphi method with top audio game developers, the research translates commercially tested, non-visual conventions for 29 non-verbal social interactions, covering categories such as movement, emotes, and appearance, into practical design patterns for VR. The core focus is on techniques such as spatial audio (HRTF) for proximity and location, dedicated auditory cues for movement and collisions, and screen reader integration to convey rich emotional and appearance information. This approach offers developers an immediate, tested baseline for accessibility, moving past the visual-centric limitations of current VR to create a genuinely inclusive social VR experience.

If you require accessibility accommodations such as American Sign Language interpretation, please email info@xraccess.org no fewer than 72 hours before the event.

Video

Summary

This seminar explored non-visual social interaction design in VR by examining the audio game community, which creates immersive multiplayer games using spatial audio for blind and visually impaired users. Presenter Brandon Biggs conducted a Delphi study with the top five active audio game developers to create a comprehensive inventory of nonverbal interaction patterns for non-visual VR experiences. The research revealed that while academia has explored basic haptic VR tools (haptic gloves, canes, styluses, 360 treadmills) and auditory elements (binaural audio, head-tracking, sonar sticks, auditory beacons), these efforts remain minimal compared to visual VR development. Meanwhile, the thriving audio game community on audiogames.net has developed sophisticated conventions across strategy games, FPS, RPGs, and MUDs like Swamp, Survive the Wild, Shades of Doom, A Hero’s Call, and Alter Aeon, demonstrating that spatial audio creates experiences as immersive for blind users as visual VR does for sighted users.

The study identified key design patterns across nine categories of social interaction: movement (using joystick controls with footstep and collision sounds rather than physical gestures that lack feedback), camera position (always maintaining first-person audio perspective), facial expressions and gestures (combining short earcons with text descriptions like “[earcon] Brandon winked at you”), multi-avatar interactions (exaggerated sounds plus speech for physical contact), avatar mannerisms (subtle, optional periodic sounds), avatar appearance (speech descriptions with customization options), avatar-environment interactions, full-body animations (preset emotional expressions with spatial audio), and conventional communication (speech messages with menu navigation). The research emphasized that blind players strongly dislike VR’s typical reaching-and-gesturing interactions because they provide no tactile feedback, preferring gamepad or keyboard controls with clear auditory indicators of movement start/stop and collision.
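To make the movement pattern above concrete, here is a minimal sketch (not from the paper; the playFootstep helper and its coordinates are assumptions for illustration) that spatializes another avatar’s footstep cue with the Web Audio API’s HRTF panner, so the local player hears direction and distance from a first-person audio perspective without any gesture-based input.

```typescript
// Hypothetical sketch: a footstep cue for a remote avatar's movement,
// spatialized with the Web Audio API's HRTF panner so the local player
// hears direction and distance from a first-person perspective.

const ctx = new AudioContext(); // in browsers, must be resumed after a user gesture

function playFootstep(x: number, y: number, z: number): void {
  // Position the sound relative to the listener (the local player at the origin).
  const panner = new PannerNode(ctx, {
    panningModel: "HRTF",
    distanceModel: "inverse",
    positionX: x,
    positionY: y,
    positionZ: z,
  });

  // A short burst of decaying noise stands in for a recorded footstep sample.
  const length = Math.floor(ctx.sampleRate * 0.08);
  const buffer = ctx.createBuffer(1, length, ctx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < length; i++) {
    data[i] = (Math.random() * 2 - 1) * (1 - i / length);
  }

  const source = new AudioBufferSourceNode(ctx, { buffer });
  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Example: another avatar steps two meters to the listener's right.
playFootstep(2, 0, 0);
```

A fuller implementation would trigger similar cues on movement start, stop, and collision, matching the clear auditory indicators the study’s developers recommend.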

Critical accessibility recommendations included allowing users to access their own familiar screen readers and synthesizers rather than forcing proprietary systems, ensuring all content uses semantic HTML that screen readers can parse (avoiding WebGL/Canvas “black boxes”), implementing keyboard-first navigation, making sounds and messages optional and customizable, and providing ARIA live regions or tools like TOLK for screen reader integration. The presenter noted that while 95% of blind gamers use Windows (where most audio games run), mainstream VR platforms like Mozilla Hubs have resisted adding basic navigation sounds despite years of advocacy. Future work includes analyzing interface conventions in top audio games and building platforms that incorporate these standards, though major engines like Unreal and Unity have been largely unresponsive to accessibility needs, while Godot shows promise with recent semantic tooling additions.
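As a rough illustration of the screen reader integration recommendation, the sketch below (hypothetical; the announce helper, emote list, and key bindings are not from the talk) exposes messages through a semantic ARIA live region and keyboard-first menu navigation instead of drawing everything into a canvas the screen reader cannot parse.

```typescript
// Hypothetical sketch: expose messages to the user's own screen reader via
// a semantic ARIA live region, with keyboard-first navigation of an emote
// menu, rather than a WebGL/Canvas "black box".

const emotes = ["wave", "thumbs up", "laugh", "shrug"];
let selected = 0;

const liveRegion = document.createElement("div");
liveRegion.setAttribute("role", "status"); // implies polite live-region announcements
document.body.appendChild(liveRegion);

function announce(message: string): void {
  liveRegion.textContent = "";             // clear so repeated text is re-announced
  setTimeout(() => { liveRegion.textContent = message; }, 50);
}

// Keyboard-first controls: arrow keys move through the menu, Enter sends the emote.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  switch (event.key) {
    case "ArrowDown":
      selected = (selected + 1) % emotes.length;
      announce(emotes[selected]);
      break;
    case "ArrowUp":
      selected = (selected - 1 + emotes.length) % emotes.length;
      announce(emotes[selected]);
      break;
    case "Enter":
      announce(`Sent emote: ${emotes[selected]}`);
      break;
  }
});
```

On desktop, a native engine could route the same messages through a screen reader bridge such as TOLK rather than a browser live region.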

Generated from notes via Anthropic’s Sonnet 4.5 LLM

About the Speaker

Headshot of Brandon Biggs, a white, brown-haired, blue-eyed male wearing a button-down shirt.

Brandon Biggs

Georgia Institute of Technology PhD Candidate and XR Navigation CEO

Brandon Biggs is an entrepreneur, researcher, and inclusive designer. He is CEO of XR Navigation, an engineer at Smith-Kettlewell Eye Research Institute, and a PhD student at Georgia Tech. Nearly blind from Leber’s congenital amaurosis, he develops human-centered tools tackling challenges in the blindness field.