The Sixth Annual XR Access Symposium

June 6-7, 2024, New York City

XR Access 2024:
Insights From XR Assistive Technology

The XR Access Symposium is our annual conference for leaders in industry, academia, and advocacy to come together and solve the most pressing problems in extended reality accessibility. This year our focus is on Insights from XR Assistive Technology: how can we learn from technologies aimed at assisting disabled people in order to make mainstream XR more accessible?

This two-day conference will take place on June 6-7, 2024, at the Verizon Executive Education Center on the Cornell Tech campus in New York City, with sessions broadcast online. It is hosted by Cornell University. Previous years have included presentations from industry titans such as Accenture, Meta, Microsoft, and Adobe, as well as cutting-edge research from NYU, Carnegie Mellon, Columbia University, and many others.

To keep up to date about the Symposium, join our newsletter or Slack community. We’ll make any future announcements to both.

Hosted by
Cornell Tech, home of the Jacobs Technion-Cornell Institute


In-person registration closes May 31st! You can register for in-person attendance via Eventbrite for a small fee, or attend the online portion for free via Zoom.


Our scholarship applications have closed. If you applied for a scholarship, please make sure to submit your refund request by July 2nd, 2024.


Volunteer applications have closed. Thank you to everyone who volunteered to help with the Symposium!


Verizon Executive Education Center at Cornell Tech

2 West Loop Road

New York, NY 10044

The Verizon Executive Education Center at Cornell Tech redefines the landscape of executive event and conference space in New York City. This airy, modern, full-service venue offers convenience and ample space—with breathtaking views—for conferences, executive programs, receptions, seminars, meet-ups and more. Designed by international architecture firm Snøhetta, the Verizon Executive Education Center blends high design and human-centered technology to bring you an advanced meeting space suited for visionary thinkers.

Verizon executive education center exterior: a pointed building of dark glass, like a whale's baleen.


June 6th, 2024

Start Time (EDT) | Type | Title | People
9:00 AM | Plenary | Welcome & Opening Remarks | Shiri Azenkot, Dylan Fox
9:10 AM | Plenary | XR Access Stories: Why Access Matters | Dylan Fox, Meryl Evans, Sunny Ammerman, Jesse Anderson, Ann Burke
10:00 AM | Plenary | Unseen Sound | Andy Slater, Sammie Veeler
10:20 AM | Break | Snack Break
10:40 AM | Plenary | Designing Interactive AI Visual Guides for Blind and Low Vision People | Ricardo Gonzalez, Jazmin Collins
11:00 AM | Plenary | Using Surface Electromyography (sEMG) to Create a New XR Human-Computer UI | Kati London
11:20 AM | Plenary | ASL Champ!: Learning ASL in VR with AI-powered Feedback | Lorna Quandt, Shahinur Alam
11:40 AM | Plenary | XR for Individuals with Hearing Impairment | Stefania Serafin
12:00 PM | Break | Lunch
1:00 PM | Exhibit | Posters & Demos
3:00 PM | Breakouts | Breakouts Set A
4:00 PM | Break | Snack Break
4:20 PM | Plenary | Breakout Summaries, Extra Q&A, and Day 2 Preview | Shiri Azenkot, Dylan Fox

June 7th, 2024

Start Time (EDT) | Type | Title | People
9:00 AM | Plenary | The Latest from XR Access | Shiri Azenkot, Dylan Fox
9:20 AM | Plenary | Wheelchair Based Navigation in VR | Justin Berry
9:40 AM | Plenary | StoryTrails: The People’s Metaverse | Angela Chan
10:00 AM | Plenary | Do You See What I’m Saying? – The Design of Owlchemy Labs’ Subtitle System | Peter Galbraith
10:20 AM | Break | Snack Break
10:40 AM | Plenary | Virtual Steps: The Experience of Walking for a Lifelong Wheelchair User in Virtual Reality | Atieh Taheri
11:00 AM | Plenary | Accessibility as a Guide for Equity and Sustainability in Virtual Conferences | Andrea Stevenson Won
11:20 AM | Plenary | Learnings from Co-Designing Products with People with Disabilities | Nicol Perez, Erin Leary
11:40 AM | Plenary | A Machine-Learning-Powered Visual Tour Guide for Attentional Disorders | Hong Nguyen
12:00 PM | Break | Lunch
1:00 PM | Exhibit | Posters & Demos
3:00 PM | Breakouts | Breakouts Set B
4:00 PM | Break | Snack Break
4:20 PM | Plenary | Breakout Summaries, Awards, & Closing | Shiri Azenkot, Dylan Fox

Plenary Sessions

XR Access Stories: Why Access Matters

Headshot of Dylan Fox

Dylan Fox (Moderator)

Director of Operations, XR Access

Meryl wears an off-white dress with black speckles and has her arms resting on her crossed leg with a dark grey background behind her

Meryl Evans

Professional Speaker, Trainer, and Accessibility Marketing Consultant

Headshot of Jesse Anderson

Jesse Anderson

MN State Services for the Blind | @IllegallySighted YouTube Channel

Headshot of Sunny Ammerman

Sunny Ammerman

Founder, SpoonieVR

Headshot of Ann Burke, a white woman with long brown hair wearing a black t-shirt

Ann Burke

Assistant Professor, Michigan State University

Accessibility is often thought of in terms of standards to meet or items to check off a list, but this framing obscures the real human beings affected by their ability to access technology. XR Access interviewed several disabled members of our community to show why accessibility matters; these are their stories.

Unseen Sound

Andy, a middle-aged white man, smiles as wide as he can, shutting his eyes as if someone off camera is trying very hard to make him laugh. He has a full beard that is mostly gray with streaks and spots of red. His hair is short and reddish brown, with a forehead a mile wide. He's wearing a teal cardigan over a comprehensive patterned shirt. According to AI, the background is a soft, solid green color, enhancing the overall cheerful and pleasant vibe of the photo.

Andy Slater

Virtual Access Lab | Artist and researcher

Sammie Veeler, a white tattooed trans woman, sits at a kitchen table backlit by a green passionflower vine in the window. She has blond center-parted short hair, with long twin braids draped over the front of her open charcoal button up shirt. She looks into the camera while eating a tangerine.

Sammie Veeler

Virtual Access Lab | Founder

Unseen Sound is a spatial audio-based XR experience developed by blind artist Andy Slater, with Virtual Access Lab and New Art City virtual art space. The piece spotlights creative access and disability solidarity while producing new technical infrastructure. Created during the Leonardo “Crip Tech” fellowship, this project challenges conventional accessibility in technology by integrating sonic way-finding, poetic captions, and custom controllers designed for universal use. It addresses the oversight of blind people in tech design processes, providing an immersive experience accessible to a wide audience, including those who are blind, deaf, hard of hearing and neurodivergent.

Designing Interactive AI Visual Guides for Blind and Low Vision People

Headshot of Ricardo Gonzalez

Ricardo Gonzalez

Cornell University | PhD Candidate

A headshot of Jazmin Collins, a young woman with brown eyes and dark red hair, reaching past her shoulders. She is smiling brightly, wearing a black shirt with a pink scarf and earrings.

Jazmin Collins

Cornell University | PhD Candidate

BeMyAI, SeeingAI, and other AI-powered applications provide visual information to Blind and Low Vision (BLV) people. Users can now simply hold up their phone to hear a description of their environment, or ask a question to hear text read aloud. However, these applications still struggle to provide accurate and salient information. While models like GPT-4 demonstrate human-like performance at describing images, they still lack the adaptability and sensemaking abilities of humans. In this talk we will present our plan to design an interactive smartglasses system that acts as a personalized AI-powered visual guide for BLV people. We will discuss the findings of a study we conducted to understand how BLV people use technologies like SeeingAI. Then, we will discuss our plan to collect data on BLV people's interactions with an image description system powered by GPT-4. Finally, we will present a prototype to gather feedback from the audience.

Using Surface Electromyography (sEMG) to Create a New XR Human-Computer UI

Headshot of Kati London, light skinned woman

Kati London

Meta Reality Labs | Product Leader

Kati has been leading productization of first-generation hardware and software consumer technologies throughout her 20+ year career. At Meta Reality Labs, she focuses on Trusted AR & AI Interfaces, including surface-EMG input at the wrist for more-human interaction. Previously, Kati designed real-world game systems for people + plants, DNA, traffic, and sharks, tackling wicked challenges like disaster preparedness and socio-economic segregation. At Microsoft, she introduced human agents into early Cortana, co-chaired the Listening Machines Summit, led early GenAI efforts, and oversaw trusted search and news. Kati is obsessed with the gnarly ethical challenges found when productizing bleeding-edge technologies.

For the past decade, our team at Meta Reality Labs has been dedicated to developing an advanced neuromotor interface as an alternative to touch-screens, hand-held controllers and keyboards. The goal is to address the Human Computer Interaction challenge of providing effortless, intuitive, and efficient input for XR experiences. This presentation will describe the development of a noninvasive neuromotor interface that allows for computer input using surface electromyography (sEMG), and its applications for accessibility.

ASL Champ!: Learning ASL in VR with AI-powered Feedback

Headshot of Lorna, a white woman with blonde/brown hair and glasses.

Lorna Quandt

Gallaudet University | Associate Professor

Shahinur Alam headshot

Shahinur Alam

Gallaudet University | Postdoctoral Associate

We developed ASL Champ!, a VR platform for learning ASL with immersive interaction and real-time feedback. Our innovative approach includes an interactive game featuring a fluent signing avatar and the first implementation of ASL sign recognition using deep learning in VR. Using advanced motion-capture technology, our expressive ASL teaching avatar operates within a three-dimensional environment. Users learn by mimicking the avatar’s signs, and a third-party plugin executes sign recognition through a deep learning model, adjusting based on user accuracy. The functional prototype effectively teaches sign language vocabulary, making it a promising interactive ASL learning platform in VR.

XR for Individuals with Hearing Impairment

Stefania Serafin headshot

Stefania Serafin

Aalborg University in Copenhagen | Professor of Sonic Interaction Design, Leader of the Multisensory Experience Lab

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and leads the Multisensory Experience Lab together with Rolf Nordahl. She is President of the Sound and Music Computing Association, project leader of the Nordic Sound and Music Computing network, and lead of the Sound and Music Computing master's program at Aalborg University. Stefania received her PhD, entitled “The Sound of Friction: Computer Models, Playability and Musical Applications,” from Stanford University in 2004, supervised by Professor Julius Smith III.

In this talk I will present an overview of the technologies we have developed in our lab to support individuals with different hearing impairments. The applications range from VR training of spatial awareness for children with hearing impairment to augmented reality-based solutions for regaining musical skills.

Wheelchair Based Navigation in VR

Picture of Justin Berry, a white male with a beard wearing a black cap

Justin Berry

Yale Center For Immersive Technologies in Pediatrics | Creative Producer / Project Director

Justin Berry is an artist, educator, creative producer, researcher, and game designer whose interdisciplinary work has been presented internationally in magazines, conferences, and museums. His current role is Creative Producer and Project Director for the Yale Center For Immersive Technologies in Pediatrics.

This talk will challenge the conventional wisdom surrounding virtual reality (VR) locomotion, as we explore groundbreaking research comparing 1:1 walking to 1:1 wheeling. Delve into the findings revealing no significant differences in user experience, and discover how embracing seated experiences can expand accessibility and enhance comfort in VR. Join us in advocating for inclusive design practices that cater to the diverse needs of all VR users.

StoryTrails: The People’s Metaverse

Headshot of Professor Angela Chan, a mixed ethnicity Chinese white British woman with long dark hair wearing a black dress and smiling

Prof. Angela Chan

CoSTAR National Lab/ Royal Holloway, University of London | Head of Inclusive Futures

Professor Angela Chan is Head of Inclusive Futures for the new CoSTAR National Lab for creative technology in the UK, overseeing Democratisation and Standards research and managing EDI and sustainability for the lab. Her background is in TV as an executive producer and senior manager for the BBC & Head of Creative Diversity for Channel 4.

In this talk we will describe the inclusion and accessibility planning that went on behind the scenes for StoryTrails, the UK’s most ambitious mixed reality live experience to date. Made by StoryFutures, Royal Holloway, University of London, and partners, it was one of ten national projects in Unboxed, a major festival of creativity. StoryTrails toured 15 cities and towns, bringing stories from underrepresented groups to audiences through their local libraries. The stories, produced by local creatives, were told through 15 guided augmented reality trails, 8 virtual reality experiences, and immersive cinema experiences using archive material, LiDAR scanning, and 3D animation. The project adopted an inclusive design approach to every aspect, considering accessibility from recruitment through to production and delivery.

Do You See What I’m Saying? – The Design of Owlchemy Labs’ Subtitle System

Headshot of Peter Galbraith, a white man with short curly blonde hair and a wide smile

Peter Galbraith

Owlchemy Labs | Senior Accessibility Engineer

Peter Galbraith is a programmer, engineer, and designer with a focus on developing new and unique gameplay and interactions for Virtual Reality projects and passionately advocating for accessibility in games. Currently the Senior Accessibility Engineer at Owlchemy Labs, he has worked on several multi-platform VR titles including the award-winning “Job Simulator” and “Vacation Simulator”, Emmy Award-nominated “Rick and Morty: Virtual Rick-ality”, and Owlchemy Labs’ most recent title, “Cosmonious High”. He has also played a key role in developing Owlchemy Labs’ commitment to accessibility, ensuring that its games are playable by everyone, regardless of their abilities.

Subtitle systems in traditional games have decades-old design paradigms to reference, but those paradigms quickly encounter problems once you begin considering the unique challenges of VR. In this talk, Owlchemy Labs’ Senior Accessibility Engineer Peter Galbraith will highlight the challenges of designing subtitles for VR, show some of their early prototypes, and explain the current solutions that have been implemented in the subtitles system used in Owlchemy’s VR games. Attendees will learn about different design considerations for captioning audio in VR and effective methods of implementing subtitles in their own VR projects.

Virtual Steps: The Experience of Walking for a Lifelong Wheelchair User in Virtual Reality

Headshot of Atieh Taheri, a woman with streaked blonde hair and wearing a teal dress seated in a wheelchair.

Atieh Taheri

University of California, Santa Barbara | PhD Candidate

I’m Atieh, a PhD Candidate in Electrical and Computer Engineering, working under the supervision of Prof. Misha Sra in the Human-AI Integration Lab at UCSB. My research intersects Human-Computer Interaction (HCI) and Accessibility, aiming to create meaningful technological solutions that improve the lives of individuals with disabilities. With a focus on participatory design, I’m dedicated to developing solutions that not only fulfill functional needs but also enhance the quality of life and user experience for those with disabilities, an area that has historically received less attention in Assistive Technology design research. 

In this talk, we will share takeaways from our participatory design study exploring the experience of virtual walking for individuals with congenital mobility disabilities. Despite having not experienced walking first-hand, they have a mental model of how it feels by having observed others walk. Matching the virtual experience to their idea of walking posed a challenge which we overcame with an iterative design approach. In collaboration with a lifelong wheelchair user, we designed a VR walking system. Over a 9-day diary study, they documented their emotional journey and feelings of embodiment, agency, and presence. Our analysis revealed key themes, including the importance of aligning the VR experience with the user’s mental model of walking, providing customization options, and managing emotional complexities. Based on our findings, which emphasize the need for inclusive design practices, we will discuss how VR experiences can be designed to be emotionally engaging and accessible.

Accessibility as a Guide for Equity and Sustainability in Virtual Conferences

Headshot of Andrea Stevenson Won, a white woman with red hair in a white blouse

Andrea Stevenson Won

Cornell University | Assistant Professor

Andrea Stevenson Won directs the Virtual Embodiment Lab. The lab’s research focuses on how mediated experiences change people’s perceptions, especially in immersive media. Research areas include the clinical applications of virtual reality, and how nonverbal behavior as rendered in virtual environments affects collaboration and teamwork. She completed her PhD in the Virtual Human Interaction Lab in the Department of Communication at Stanford University and holds an MS in Biomedical Visualization from the University of Illinois at Chicago.

During the pandemic, in-person conferences were suspended, and some programs turned to immersive virtual environments to provide richer social interaction. This highlighted both the shortcomings and the advantages of virtual reality. As in-person interactions again become the norm, we should not lose sight of the potential for virtual experiences to be more equitable, more accessible, and more sustainable than their physical-world alternatives, and we can look to existing work on accessibility in social virtual reality as a guide.

Learnings from Co-Designing Products with People with Disabilities

Nicol Perez headshot

Nicol Perez

Meta Reality Labs | Product Equity & Accessibility Programs Lead

Headshot of Erin Leary, a light-skinned woman with black hair wearing a blue sweater.

Erin Leary

Meta Reality Labs | Product Accessibility Program Manager

Join us for a discussion on Meta’s approach to co-designing products with people with disabilities. You’ll learn how Meta approaches co-design, hear what participants’ experiences of co-designing with Meta have been, and explore challenges and lessons for co-designing with communities.

A Machine-Learning-Powered Visual Tour Guide for Attentional Disorders

Headshot of Hong Nguyen

Hong Nguyen

The New School NYC | Graduate Student

Hong is a 5th-year graduate student at The New School in New York. Her research focuses on the contributions of core mechanisms of visual processing to (1) naive physical understanding, and (2) visual preferences. One line of research focuses on how information about forces is integrated into visual processes such as visual motion processing, and attention. Another line of research explores how spatial attention contributes to aesthetic preferences for scenes.

Attentional impairment is a feature of many clinical disorders, and designers and behavioral researchers routinely use attentional guides to help diverse users navigate computer interfaces. A downside of existing approaches is that they rely on the designer’s theory about how attention *should* be directed. Here we introduce an alternative, ‘bottom-up’ approach to guiding attention in XR. We identify a desired outcome (e.g., enjoying a virtual environment), train an algorithm on the attention patterns of users who achieve that outcome, and use it to guide new users’ attention similarly, producing the desired result. In two validation studies, observers’ enjoyment of a display greatly increased when they were guided toward regions attended by past participants who themselves liked (vs. disliked) the display. The beauty of this approach is that it gets results without requiring the designer to know exactly why a reproduced pattern of attention caused the desired outcome.
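The bottom-up pipeline described in this abstract can be illustrated with a minimal sketch. This is not the presenter's actual method: the coarse grid representation, the liked/disliked labels, and the simple mean-difference scoring are all assumptions made for illustration; a real system would likely train a richer model on gaze data.

```python
import numpy as np

def guidance_map(heatmaps, liked, top_k=3):
    """Derive attention-guidance targets from past users' gaze data.

    heatmaps: array of shape (n_users, rows, cols); each entry is a
      normalized dwell-time map over a coarse grid of the scene.
    liked: boolean array of shape (n_users,), the measured outcome
      (here, whether the user reported enjoying the display).
    Returns the top_k grid cells that users who liked the display
    attended more than users who did not.
    """
    heatmaps = np.asarray(heatmaps, dtype=float)
    liked = np.asarray(liked, dtype=bool)
    # Mean dwell pattern for each outcome group.
    pos = heatmaps[liked].mean(axis=0)
    neg = heatmaps[~liked].mean(axis=0)
    # Score cells where "liked" users dwelt disproportionately.
    diff = pos - neg
    flat = np.argsort(diff, axis=None)[::-1][:top_k]
    return [tuple(int(x) for x in np.unravel_index(i, diff.shape))
            for i in flat]
```

A guidance system would then render cues (highlights, spatial audio, etc.) at the returned grid cells for new users, reproducing the attention pattern associated with the desired outcome without the designer specifying it by hand.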

Breakout Sessions

Our Breakout Sessions offer an opportunity for all of the experts and experienced members of our community to come together and brainstorm solutions to the greatest problems in XR Accessibility. There will be one group of breakout sessions on each day, lasting one hour apiece; attendees can choose one session from each group to attend.

Breakout Sessions A | June 6th

Cognitive Overload

Yuning in pink sweater standing in front of a black art piece

Yuning Gao

New York University | Research Assistant

We will create VR simulations to demonstrate a type of performance art that combines audio, visual, interactive, and textual elements without leading the audience into cognitive overload.

Location: Classroom 215

Sign Language & XR

Headshot of Lorna, a white woman with blonde/brown hair and glasses.

Lorna Quandt

Gallaudet University | Associate Professor

Headshot of Abraham Glasser, a light-skinned white male with short curly brown hair wearing a black dress shirt and black polka dot tie. The background is blurred.

Abraham Glasser

Gallaudet University | Assistant Professor

We will discuss challenges, opportunities, and the newest research related to signed languages in augmented and virtual reality.

Location: Classroom 225

Immersive Healthcare

Headshot of Peirce Clark in a navy blue shirt

Peirce Clark

XR Association | Senior Manager of Research & Best Practices

Headshot of Jeanne Li in a black sweater

Jeanne Li

Unity Technologies | Product Manager

Join this discussion to explore the importance of enabling accessibility in immersive healthcare solutions to ensure equitable access to healthcare innovations in patient care and therapeutic interventions in hospitals around the country.


Productizing: Creating Scalable Solutions

Headshot of Tomer Joshua in a dark jacket

Tomer Joshua

Cornell Tech | Assistant Director of the Runway and Spinouts Program

David Begun

Beshi Tech | Founder

From Challenge to Product – How to identify a challenge and build scalable solutions.


Breakout Sessions B | June 7th

Harnessing Community

Headshot of Dylan Fox

Dylan Fox

XR Access | Director of Operations

Headshot of Thomas Logan, a white male

Thomas Logan

Equal Entry | Owner

Working from the outside in – how can community effect change? For example, gathering reports, creating proofs of concept, getting work adopted, etc.

Location: Classroom 215

Blind AR

Headshot of Sean Dougherty

Sean Dougherty

LightHouse for the Blind and Visually Impaired | Director of Accessible UX Services

Learn from the first-hand perspectives of blind/low vision AR users, experience AR demos, and gain an understanding of the current gaps in AR accessibility, along with AT tools & accessibility features that can improve the UX.

Location: Classroom 225

Universal User Data (UUD)

Headshot of Professor Greg Welch, a light-skinned man with short dark hair, wearing a short-sleeved blue shirt. He is smiling with his arms folded.

Gregory Welch

University of Central Florida | Professor

Come help us think about how to reduce the likelihood that user study data from a participant with a disability is discarded as an outlier, for example by transforming the data to be statistically comparable with other participants’ while still preserving the participant’s overall behavior and choices.



Nicol Perez headshot

Nicol Perez

Meta Reality Labs | Product Equity & Accessibility Programs Lead

Headshot of Erin Leary, a light-skinned woman with black hair wearing a blue sweater.

Erin Leary

Meta Reality Labs | Product Accessibility Program Manager

Dive into Meta’s approach to co-designing cutting-edge technology alongside people with disabilities and engage in thought-provoking discussions on how we can elevate industry standards for building alongside diverse communities.



How do I get to the venue?

Public transportation by subway, ferry, or tram is recommended. See this transportation guide for details.

Will the main stage presentations be recorded?

Yes, they will be streamed live via Zoom and added to the XR Access YouTube channel after the conference.

Will the breakout sessions be recorded?

No, unfortunately our recording equipment is not suited to capturing multiple small groups. However, the takeaways will be included in the Symposium Report.

Why does the Symposium cost money to attend in person?

Unfortunately, events in the physical world are expensive; the Symposium will cost tens of thousands of dollars. However, if the expense is a hardship for you, please apply for a scholarship using the link in the description. Note that registering for Zoom and watching online is still free.

What accommodations will be provided?

We will provide American Sign Language interpreters for the event and human-created captions for the main stage presentations. If you need additional accommodations, please note it on your registration or contact us.

What food will be provided?

We will provide lunch, snacks and drinks during the symposium. Please let us know if you have specific dietary restrictions during registration.

Where should I stay in New York?
We recommend the Graduate New York, located mere steps from the venue. You can use the code NEGCOR to get a discount on your room.


XR Access Symposia can’t happen without the support of our generous sponsors. Visit our sponsorship page to learn about the many benefits of sponsoring the Symposium.

Platinum Sponsors


Gold Sponsors


Captioning Provided By

3Play Media logo

Organizing Committee

Dylan Fox | General Chair
Director of Operations, XR Access
Shiri Azenkot | General Chair
XR Access Consortium Director | Associate Professor of Information Science, Cornell Tech
Lucy Jiang | Demo & Poster Chair
Jazmin Collins | Demo & Poster Chair
Ria Gualano | Breakouts Chair
Jonathan Segal | IT Chair
Crescentia Jung | Volunteer Chair
Woojin Ko | Volunteer Chair

Past Symposia