The Sixth Annual XR Access Symposium

June 6-7, 2024, New York City

XR Access 2024:
Insights From XR Assistive Technology

The XR Access Symposium is our annual conference for leaders in industry, academia, and advocacy to come together and solve the most pressing problems in extended reality accessibility. This year our focus is on Insights from XR Assistive Technology: how can we learn from technologies aimed at assisting disabled people in order to make mainstream XR more accessible?

This two-day conference will take place on June 6-7, 2024, at the Verizon Executive Education Center on the Cornell Tech campus in New York City, with sessions broadcast online. It is hosted by Cornell University. Previous years have included presentations from industry titans such as Accenture, Meta, Microsoft, and Adobe, as well as cutting-edge research from NYU, Carnegie Mellon, Columbia University, and many others.

To keep up to date about the Symposium, join our newsletter or Slack community. We’ll make any future announcements to both.

Hosted by
Cornell Tech, home of the Jacobs Technion-Cornell Institute

Registration

In-person registration closes May 31st! You can register to attend in person via Eventbrite for a small fee, or register for the online portion only for free via Zoom.

Scholarships

Our scholarship applications have closed. If you applied for a scholarship, please make sure to submit your refund request by July 2nd, 2024.

Volunteering

Volunteer applications have closed. Thank you to everyone who volunteered to help with the Symposium!

Venue

Verizon Executive Education Center at Cornell Tech

2 West Loop Road

New York, NY 10044

The Verizon Executive Education Center at Cornell Tech redefines the landscape of executive event and conference space in New York City. This airy, modern, full-service venue offers convenience and ample space—with breathtaking views—for conferences, executive programs, receptions, seminars, meet-ups and more. Designed by international architecture firm Snøhetta, the Verizon Executive Education Center blends high design and human-centered technology to bring you an advanced meeting space suited for visionary thinkers.

Verizon Executive Education Center exterior: a pointed building of dark glass, like a whale's baleen.

Schedule

Below is an approximate schedule for both June 6th and June 7th. A final schedule will be posted closer to the event.

Time       Event
9am-12pm   Plenary Talks
12pm-1pm   Lunch
1pm-3pm    Posters & Demos
3pm-4pm    Breakout Sessions
4pm-5pm    Final Talks & Closing

Plenary Sessions

StoryTrails: The People’s Metaverse

Headshot of Professor Angela Chan, a mixed ethnicity Chinese white British woman with long dark hair wearing a black dress and smiling

Prof. Angela Chan

CoSTAR National Lab / Royal Holloway, University of London | Head of Inclusive Futures

Professor Angela Chan is Head of Inclusive Futures for the new CoSTAR National Lab for creative technology in the UK, overseeing Democratisation and Standards research and managing EDI and sustainability for the lab. Her background is in TV, as an executive producer and senior manager for the BBC and as Head of Creative Diversity for Channel 4.

Embark on a journey through time with StoryTrails, where immersive storytelling meets the wonders of augmented and virtual reality. Experience the magic of untold local histories coming to life in public spaces across towns and cities in the UK. Join us for a free, entertaining, and family-friendly adventure as we transform ordinary places into extraordinary portals to the past, inviting audiences to time-travel and discover the rich tapestry of historical change in 15 captivating locations throughout 2022.

Wheelchair-Based Navigation in VR

Picture of Justin Berry, a white male with a beard wearing a black cap

Justin Berry

Yale Center For Immersive Technologies in Pediatrics | Creative Producer / Project Director

Justin Berry is an artist, educator, creative producer, researcher, and game designer whose interdisciplinary work has been presented internationally in magazines, conferences, and museums. His current role is Creative Producer and Project Director for the Yale Center For Immersive Technologies in Pediatrics.

This talk will challenge the conventional wisdom surrounding virtual reality (VR) locomotion, as we explore groundbreaking research comparing 1:1 walking to 1:1 wheeling. Delve into the findings revealing no significant differences in user experience, and discover how embracing seated experiences can expand accessibility and enhance comfort in VR. Join us in advocating for inclusive design practices that cater to the diverse needs of all VR users.

Using Surface Electromyography (sEMG) to Create a New XR Human-Computer UI

Headshot of Kati London, a light-skinned woman

Kati London

Meta Reality Labs | Product Leader

Kati has been leading the productization of first-generation consumer hardware and software technologies throughout her 20+ year career. At Meta Reality Labs, she focuses on Trusted AR & AI Interfaces, including surface-EMG input at the wrist for more human interaction. Previously, Kati designed real-world game systems for people, plants, DNA, traffic, and sharks, tackling wicked challenges like disaster preparedness and socio-economic segregation. At Microsoft, she introduced human agents into early Cortana, co-chaired the Listening Machines Summit, led early generative-AI efforts, and oversaw trusted search and news. Kati is obsessed with the gnarly ethical challenges found when productizing bleeding-edge technologies.

For the past decade, our team at Meta Reality Labs has been dedicated to developing an advanced neuromotor interface as an alternative to touch-screens, hand-held controllers and keyboards. The goal is to address the Human Computer Interaction challenge of providing effortless, intuitive, and efficient input for XR experiences. This presentation will describe the development of a noninvasive neuromotor interface that allows for computer input using surface electromyography (sEMG), and its applications for accessibility.

Do You See What I’m Saying? – The Design of Owlchemy Labs’ Subtitle System

Headshot of Peter Galbraith, a white man with short curly blonde hair and a wide smile

Peter Galbraith

Owlchemy Labs | Senior Accessibility Engineer

Peter Galbraith is a programmer, engineer, and designer with a focus on developing new and unique gameplay and interactions for Virtual Reality projects and passionately advocating for accessibility in games. Currently the Senior Accessibility Engineer at Owlchemy Labs, he has worked on several multi-platform VR titles including the award-winning “Job Simulator” and “Vacation Simulator”, Emmy Award-nominated “Rick and Morty: Virtual Rick-ality”, and Owlchemy Labs’ most recent title, “Cosmonious High”. He has also played a key role in developing Owlchemy Labs’ commitment to accessibility, ensuring that its games are playable by everyone, regardless of their abilities.

Subtitle systems in traditional games have decades-old design paradigms to reference, but those paradigms quickly encounter problems once you begin considering the unique challenges of VR. In this talk, Owlchemy Labs' Senior Accessibility Engineer Peter Galbraith will highlight the challenges of designing subtitles for VR, show some early prototypes, and explain the current solutions implemented in the subtitle system used in Owlchemy's VR games. Attendees will learn about different design considerations for captioning audio in VR and effective methods for implementing subtitles in their own VR projects.

XR for Individuals with Hearing Impairment

Stefania Serafin headshot

Stefania Serafin

Aalborg University in Copenhagen | Professor of Sonic Interaction Design, Leader of the Multisensory Experience Lab

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and leads the Multisensory Experience Lab together with Rolf Nordahl. She is President of the Sound and Music Computing Association, project leader of the Nordic Sound and Music Computing network, and lead of the Sound and Music Computing master's program at Aalborg University. Stefania received her PhD, entitled "The sound of friction: computer models, playability and musical applications", from Stanford University in 2004, supervised by Professor Julius Smith III.

In this talk, I will present an overview of the technologies we have developed in our lab to support individuals with different hearing impairments. The applications range from VR training of spatial awareness for children with hearing impairments to augmented reality-based solutions for regaining musical skills.

Virtual Steps: The Experience of Walking for a Lifelong Wheelchair User in Virtual Reality

Headshot of Atieh Taheri, a woman with streaked blonde hair and wearing a teal dress seated in a wheelchair.

Atieh Taheri

University of California, Santa Barbara | PhD Candidate

I’m Atieh, a PhD Candidate in Electrical and Computer Engineering, working under the supervision of Prof. Misha Sra in the Human-AI Integration Lab at UCSB. My research intersects Human-Computer Interaction (HCI) and Accessibility, aiming to create meaningful technological solutions that improve the lives of individuals with disabilities. With a focus on participatory design, I’m dedicated to developing solutions that not only fulfill functional needs but also enhance the quality of life and user experience for those with disabilities, an area that has historically received less attention in Assistive Technology design research. 

In this talk, we will share takeaways from our participatory design study exploring the experience of virtual walking for individuals with congenital mobility disabilities. Despite not having experienced walking first-hand, these individuals have a mental model of how it feels from having observed others walk. Matching the virtual experience to their idea of walking posed a challenge, which we overcame with an iterative design approach. In collaboration with a lifelong wheelchair user, we designed a VR walking system. Over a 9-day diary study, they documented their emotional journey and feelings of embodiment, agency, and presence. Our analysis revealed key themes, including the importance of aligning the VR experience with the user's mental model of walking, providing customization options, and managing emotional complexities. Based on our findings, which emphasize the need for inclusive design practices, we will discuss how VR experiences can be designed to be emotionally engaging and accessible.

Learnings from Co-Designing Products with People with Disabilities

Nicol Perez headshot

Nicol Perez

Meta Reality Labs | Product Equity & Accessibility Programs Lead

Headshot of Erin Leary, a light-skinned woman with black hair wearing a blue sweater.

Erin Leary

Meta Reality Labs | Product Accessibility Program Manager

Join us for a discussion on Meta's approach to co-designing products with people with disabilities. You'll learn how Meta approaches co-design, what participants' experiences of co-designing with Meta have been, and what challenges and lessons have emerged from co-designing with communities.

ASL Champ!: Learning ASL in VR with AI-powered Feedback

Headshot of Lorna, a white woman with blonde/brown hair and glasses.

Lorna Quandt

Gallaudet University | Associate Professor

Shahinur Alam headshot

Shahinur Alam

Gallaudet University | Postdoctoral Associate

We developed ASL Champ!, a VR platform for learning ASL with immersive interaction and real-time feedback. Our innovative approach includes an interactive game featuring a fluent signing avatar and the first implementation of ASL sign recognition using deep learning in VR. Using advanced motion-capture technology, our expressive ASL teaching avatar operates within a three-dimensional environment. Users learn by mimicking the avatar's signs, and a third-party plugin performs sign recognition through a deep learning model, adjusting based on user accuracy. The functional prototype effectively teaches sign language vocabulary, making it a promising interactive ASL learning platform in VR.

Unseen Sound

Andy, a middle-aged white man, smiles as wide as he can, shutting his eyes as if someone off camera is trying very hard to make him laugh. He has a full beard that is mostly gray with streaks and spots of red. His hair is short and reddish brown, with a forehead a mile wide. He's wearing a teal cardigan over a comprehensive patterned shirt. According to AI, the background is a soft, solid green color, enhancing the overall cheerful and pleasant vibe of the photo.

Andy Slater

Virtual Access Lab | Artist and researcher

Sammie Veeler, a white tattooed trans woman, sits at a kitchen table backlit by a green passionflower vine in the window. She has blond center-parted short hair, with long twin braids draped over the front of her open charcoal button up shirt. She looks into the camera while eating a tangerine.

Sammie Veeler

Virtual Access Lab | Founder

Unseen Sound is a spatial audio-based XR experience developed by blind artist Andy Slater, with Virtual Access Lab and New Art City virtual art space. The piece spotlights creative access and disability solidarity while producing new technical infrastructure. Created during the Leonardo “Crip Tech” fellowship, this project challenges conventional accessibility in technology by integrating sonic way-finding, poetic captions, and custom controllers designed for universal use. It addresses the oversight of blind people in tech design processes, providing an immersive experience accessible to a wide audience, including those who are blind, deaf, hard of hearing and neurodivergent.

Designing Interactive AI Visual Guides for Blind and Low Vision People

Headshot of Ricardo Gonzalez

Ricardo Gonzalez

Cornell University | PhD Candidate

A headshot of Jazmin Collins, a young woman with brown eyes and dark red hair, reaching past her shoulders. She is smiling brightly, wearing a black shirt with a pink scarf and earrings.

Jazmin Collins

Cornell University | PhD Candidate

BeMyAI, SeeingAI, and other AI-powered applications provide visual information to Blind and Low Vision (BLV) people. Users can now simply hold up their phone to hear descriptions of their environment, or ask a question to hear text read aloud. However, these applications still struggle to provide accurate and salient information. While models like GPT-4 demonstrate human-like performance in describing images, they still lack the adaptability and sensemaking abilities of humans. In this talk, we will present our plan to design an interactive system for smartglasses that acts as a personalized AI-powered visual guide for BLV people. We will discuss the findings of a study we conducted to understand how BLV people are using technologies like SeeingAI. Then, we will discuss our plan to collect data about BLV people's interactions with an image description system powered by GPT-4. Finally, we will present a prototype to receive feedback from the audience.

FAQs

How do I get to the venue?

Public transportation by subway, ferry, or tram is recommended. See this transportation guide for details.

Will the main stage presentations be recorded?

Yes, they will be streamed live via Zoom and added to the XR Access YouTube channel after the conference.

Will the breakout sessions be recorded?

No, unfortunately our recording equipment is not suited to capturing multiple small groups. However, the takeaways will be included in the Symposium Report.

Why does the Symposium cost money to attend in person?

Unfortunately, events in the physical world are expensive; the Symposium will cost tens of thousands of dollars. However, if the expense is a hardship for you, please apply for a scholarship using the link in the description. Note that registering for Zoom and watching online is still free.

What accommodations will be provided?

We will provide American Sign Language interpreters for the event and human-created captions for the main stage presentations. If you need additional accommodations, please note it on your registration or contact info@xraccess.org.

What food will be provided?

We will provide lunch, snacks and drinks during the symposium. Please let us know if you have specific dietary restrictions during registration.

Where should I stay in New York?

We recommend the Graduate New York, located mere steps from the venue. You can use the code NEGCOR to get a discount on your room.

Sponsors

XR Access Symposia can’t happen without the support of our generous sponsors. Visit our sponsorship page to learn about the many benefits of sponsoring the Symposium.

Platinum Sponsors

Yahoo!

Gold Sponsors

Cornell Tech

Past Symposia