The Sixth Annual XR Access Symposium
June 6-7, 2024, New York City
Venue
Verizon Executive Education Center at Cornell Tech
2 West Loop Road
New York, NY 10044
The Verizon Executive Education Center at Cornell Tech redefines the landscape of executive event and conference space in New York City. This airy, modern, full-service venue offers convenience and ample space—with breathtaking views—for conferences, executive programs, receptions, seminars, meet-ups and more. Designed by international architecture firm Snøhetta, the Verizon Executive Education Center blends high design and human-centered technology to bring you an advanced meeting space suited for visionary thinkers.
Schedule
June 6th, 2024
Start Time (EDT) | Type | Title | People |
---|---|---|---|
9:00 AM | Plenary | Welcome & Opening Remarks | Shiri Azenkot, Dylan Fox |
9:10 AM | Plenary | XR Access Stories: Why Access Matters | Dylan Fox, Meryl Evans, Sunny Ammerman, Jesse Anderson, Anne Burke |
10:00 AM | Plenary | Unseen Sound | Andy Slater, Sammie Veeler |
10:20 AM | Break | Snack Break | |
10:40 AM | Plenary | Designing Interactive AI Visual Guides for Blind and Low Vision People | Ricardo Gonzalez, Jazmin Collins |
11:00 AM | Plenary | Using Surface Electromyography (sEMG) to Create a New XR Human-Computer UI | Kati London |
11:20 AM | Plenary | ASL Champ!: Learning ASL in VR with AI-powered Feedback | Lorna Quandt, Shahinur Alam |
11:40 AM | Plenary | XR for Individuals with Hearing Impairment | Stefania Serafin |
12:00 PM | Break | Lunch | |
1:00 PM | Exhibit | Posters & Demos | |
3:00 PM | Breakouts | Breakouts Set A | |
4:00 PM | Break | Snack Break | |
4:20 PM | Plenary | Breakouts Summaries, Extra Q&A, and Day 2 Preview | Shiri Azenkot, Dylan Fox |
June 7th, 2024
Start Time (EDT) | Type | Title | People |
---|---|---|---|
9:00 AM | Plenary | The Latest from XR Access | Shiri Azenkot, Dylan Fox |
9:20 AM | Plenary | Wheelchair Based Navigation in VR | Justin Berry |
9:40 AM | Plenary | StoryTrails: The People’s Metaverse | Angela Chan |
10:00 AM | Plenary | Do You See What I’m Saying? – The Design of Owlchemy Labs’ Subtitle System | Peter Galbraith |
10:20 AM | Break | Snack Break | |
10:40 AM | Plenary | Virtual Steps: The Experience of Walking for a Lifelong Wheelchair User in Virtual Reality | Atieh Taheri |
11:00 AM | Plenary | Accessibility as a Guide for Equity and Sustainability in Virtual Conferences | Andrea Stevenson Won |
11:20 AM | Plenary | Learnings from Co-Designing Products with People with Disabilities | Nicol Perez, Erin Leary |
11:40 AM | Plenary | A Machine-Learning-Powered Visual Tour Guide for Attentional Disorders | Hong Nguyen |
12:00 PM | Break | Lunch | |
1:00 PM | Exhibit | Posters & Demos | |
3:00 PM | Breakouts | Breakouts Set B | |
4:00 PM | Break | Snack Break | |
4:20 PM | Plenary | Breakout Summaries, Awards, & Closing | Shiri Azenkot, Dylan Fox |
Plenary Sessions
XR Access Stories: Why Access Matters
Accessibility is often thought of in terms of standards to meet or items to check off a list, but that framing obscures the real human beings whose lives are affected by whether they can access technology. XR Access interviewed several disabled members of our community to show why accessibility matters, and these are their stories.
Unseen Sound
Andy Slater
Virtual Access Lab | Artist and researcher
Sammie Veeler
Virtual Access Lab | Founder
Unseen Sound is a spatial audio-based XR experience developed by blind artist Andy Slater, with Virtual Access Lab and New Art City virtual art space. The piece spotlights creative access and disability solidarity while producing new technical infrastructure. Created during the Leonardo “Crip Tech” fellowship, this project challenges conventional accessibility in technology by integrating sonic way-finding, poetic captions, and custom controllers designed for universal use. It addresses the oversight of blind people in tech design processes, providing an immersive experience accessible to a wide audience, including those who are blind, deaf, hard of hearing and neurodivergent.
Designing Interactive AI Visual Guides for Blind and Low Vision People
Ricardo Gonzalez
Cornell University | PhD Candidate
Jazmin Collins
Cornell University | PhD Candidate
BeMyAI, SeeingAI and other AI-powered applications provide visual information to Blind and Low Vision People. Users can now simply hold up their phone and ask a question to hear a description of their environment or have text read aloud. However, these applications still struggle to provide accurate and salient information. While models like GPT-4 demonstrate human-like performance at describing images, they still lack the adaptability and sensemaking abilities of humans. In this talk we will present our plan to design an interactive system for smartglasses that acts as a personalized AI-powered visual guide for BLV people. We will discuss the findings of a study we conducted to understand how BLV people use technologies like SeeingAI. Then, we will discuss our plan to collect data about BLV people’s interactions with an image description system powered by GPT-4. Finally, we will present a prototype to gather feedback from the audience.
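The abstract above does not describe an implementation, but as a rough illustration of the interaction pattern it refers to (take a photo, ask a question, hear a short answer), here is a minimal sketch that sends an image and a question to a vision-capable chat model. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and file name are illustrative stand-ins, not the presenters’ system.

```python
# Minimal sketch: ask a vision-language model to describe a photo and answer a
# question, roughly mirroring the "hold up your phone and ask" flow above.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set.
import base64

from openai import OpenAI

client = OpenAI()

def describe_scene(image_path: str, question: str) -> str:
    """Send a photo plus a user question; return a short, spoken-style answer."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model would do
        messages=[
            {
                "role": "system",
                "content": "You are a visual guide for a blind user. "
                           "Answer briefly and mention only what is visible.",
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(describe_scene("street_corner.jpg", "Is there an empty seat at the bus stop?"))
```

In a smartglasses setting, the same loop would run on a captured camera frame and return the answer through text-to-speech.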
Using Surface Electromyography (sEMG) to Create a New XR Human-Computer UI
Kati London
Meta Reality Labs | Product Leader
Kati has been leading productization of first-generation consumer hardware and software technologies throughout her 20+ year career. At Meta Reality Labs, she focuses on Trusted AR & AI Interfaces, including surface-EMG input at the wrist for more-human interaction. Previously, Kati designed real-world game systems involving people, plants, DNA, traffic, and sharks, tackling wicked challenges like disaster preparedness and socio-economic segregation. At Microsoft, she introduced human agents into early Cortana, co-chaired the Listening Machines Summit, led early GenAI efforts, and oversaw trusted search and news. Kati is obsessed with the gnarly ethical challenges found when productizing bleeding-edge technologies.
For the past decade, our team at Meta Reality Labs has been dedicated to developing an advanced neuromotor interface as an alternative to touch-screens, hand-held controllers and keyboards. The goal is to address the Human Computer Interaction challenge of providing effortless, intuitive, and efficient input for XR experiences. This presentation will describe the development of a noninvasive neuromotor interface that allows for computer input using surface electromyography (sEMG), and its applications for accessibility.
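The abstract does not disclose Reality Labs’ decoding methods; purely to illustrate the general shape of an sEMG input pipeline (window the multichannel wrist signal, extract simple amplitude features, classify each window as a gesture), here is a toy sketch trained on synthetic data. It assumes numpy and scikit-learn; the channel count, window length, features, and classifier are arbitrary choices for illustration only.

```python
# Toy sketch of a generic sEMG decoding pipeline (not Meta's implementation):
# window the raw wrist-signal channels, extract simple amplitude features,
# and train a classifier that maps each window to an input event.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

RNG = np.random.default_rng(0)
N_CHANNELS, WINDOW = 16, 400  # e.g., 16 electrodes, 200 ms windows at 2 kHz

def features(window: np.ndarray) -> np.ndarray:
    """Classic time-domain sEMG features per channel: RMS and mean absolute value."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    mav = np.mean(np.abs(window), axis=1)
    return np.concatenate([rms, mav])

def synth_window(label: int) -> np.ndarray:
    """Synthetic stand-in: the 'pinch' class adds energy on a few channels."""
    signal = RNG.normal(0.0, 1.0, size=(N_CHANNELS, WINDOW))
    if label == 1:
        signal[:4] += RNG.normal(0.0, 2.0, size=(4, WINDOW))
    return signal

labels = RNG.integers(0, 2, size=600)
X = np.stack([features(synth_window(y)) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```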
ASL Champ!: Learning ASL in VR with AI-powered Feedback
Lorna Quandt
Gallaudet University | Associate Professor
Shahinur Alam
Gallaudet University | Postdoctoral Associate
We developed ASL Champ!, a VR platform for learning ASL with immersive interaction and real-time feedback. Our innovative approach includes an interactive game featuring a fluent signing avatar and the first implementation of ASL sign recognition using deep learning in VR. Using advanced motion-capture technology, our expressive ASL teaching avatar operates within a three-dimensional environment. Users learn by mimicking the avatar’s signs, and a third-party plugin performs sign recognition through a deep learning model, giving feedback based on user accuracy. The functional prototype effectively teaches sign language vocabulary, making it a promising interactive ASL learning platform in VR.
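The abstract does not specify the recognition model, but the learn-by-mimicking loop it describes can be sketched as follows: the avatar demonstrates a sign, the learner’s attempt is passed to a recognizer, and the lesson advances or repeats depending on the result. The recognizer below is a random stub standing in for the deep learning model; the function names, threshold, and vocabulary are illustrative, not the ASL Champ! implementation.

```python
# Toy sketch of the "watch the avatar, mimic the sign, get feedback" loop.
# The recognizer is stubbed out; a real system would classify the learner's
# captured hand and body motion with a trained deep learning model.
import random
from dataclasses import dataclass

LESSON = ["HELLO", "THANK-YOU", "LEARN", "CHAMP"]

@dataclass
class RecognitionResult:
    sign: str
    confidence: float

def recognize_attempt(target_sign: str) -> RecognitionResult:
    """Stand-in for the deep learning recognizer; randomly succeeds for demo purposes."""
    if random.random() < 0.7:
        return RecognitionResult(target_sign, random.uniform(0.8, 1.0))
    return RecognitionResult("UNKNOWN", random.uniform(0.1, 0.5))

def run_lesson(confidence_threshold: float = 0.75, max_attempts: int = 3) -> None:
    for sign in LESSON:
        print(f"Avatar demonstrates: {sign}")
        for attempt in range(1, max_attempts + 1):
            result = recognize_attempt(sign)
            if result.sign == sign and result.confidence >= confidence_threshold:
                print(f"  attempt {attempt}: recognized ({result.confidence:.2f}), advancing")
                break
            print(f"  attempt {attempt}: not recognized, avatar repeats the sign")
        else:
            print("  moving on; this sign is queued for review")

if __name__ == "__main__":
    run_lesson()
```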
XR for Individuals with Hearing Impairment
Stefania Serafin
Aalborg University in Copenhagen | Professor of Sonic Interaction Design, Leader of the Multisensory Experience Lab
Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and leads the Multisensory Experience Lab together with Rolf Nordahl. She is President of the Sound and Music Computing Association, project leader of the Nordic Sound and Music Computing network, and leads the Sound and Music Computing master's programme at Aalborg University. Stefania received her PhD, entitled "The sound of friction: computer models, playability and musical applications," from Stanford University in 2004, supervised by Professor Julius Smith III.
In this talk I will present an overview of the technologies we have developed in our lab to support individuals with different hearing impairments. The applications range from VR training of spatial awareness for children with hearing impairments to augmented-reality-based solutions for regaining musical skills.
Wheelchair Based Navigation in VR
Justin Berry
Yale Center For Immersive Technologies in Pediatrics | Creative Producer / Project Director
Justin Berry is an artist, educator, creative producer, researcher, and game designer whose interdisciplinary work has been presented internationally in magazines, conferences, and museums. His current role is Creative Producer and Project Director for the Yale Center For Immersive Technologies in Pediatrics.
This talk will challenge the conventional wisdom surrounding virtual reality (VR) locomotion, as we explore groundbreaking research comparing 1:1 walking to 1:1 wheeling. Delve into the findings revealing no significant differences in user experience, and discover how embracing seated experiences can expand accessibility and enhance comfort in VR. Join us in advocating for inclusive design practices that cater to the diverse needs of all VR users.
StoryTrails: The People’s Metaverse
Prof. Angela Chan
CoSTAR National Lab/ Royal Holloway, University of London | Head of Inclusive Futures
Professor Angela Chan is Head of Inclusive Futures for the new CoSTAR National Lab for creative technology in the UK, overseeing Democratisation and Standards research and managing EDI and sustainability for the lab. Her background is in TV as an executive producer and senior manager for the BBC & Head of Creative Diversity for Channel 4.
In this talk we will describe the inclusion and accessibility planning that went on behind the scenes for StoryTrails, the UK’s most ambitious mixed reality live experience to date. Made by StoryFutures, Royal Holloway University of London, and partners, it was one of ten national projects that formed part of Unboxed, a major festival of creativity. StoryTrails toured 15 cities and towns, bringing stories from underrepresented groups to audiences through their local libraries. The stories, produced by local creatives, were told through 15 guided augmented reality trails, 8 virtual reality experiences, and immersive cinema experiences using archive material, LiDAR scanning, and 3D animation. The project adopted an inclusive design approach to every aspect, considering accessibility from recruitment through to production and delivery.
Do You See What I’m Saying? – The Design of Owlchemy Labs’ Subtitle System
Peter Galbraith
Owlchemy Labs | Senior Accessibility Engineer
Peter Galbraith is a programmer, engineer, and designer with a focus on developing new and unique gameplay and interactions for Virtual Reality projects and passionately advocating for accessibility in games. Currently the Senior Accessibility Engineer at Owlchemy Labs, he has worked on several multi-platform VR titles including the award-winning “Job Simulator” and “Vacation Simulator”, Emmy Award-nominated “Rick and Morty: Virtual Rick-ality”, and Owlchemy Labs’ most recent title, “Cosmonious High”. He has also played a key role in developing Owlchemy Labs’ commitment to accessibility, ensuring that its games are playable by everyone, regardless of their abilities.
Subtitle systems in traditional games have decades-old design paradigms to reference, but those paradigms quickly run into problems once you consider the unique challenges of VR. In this talk, Owlchemy Labs’ Senior Accessibility Engineer Peter Galbraith will highlight the challenges of designing subtitles for VR, show some of the team’s early prototypes, and explain the solutions currently implemented in the subtitle system used in Owlchemy’s VR games. Attendees will learn about design considerations for captioning audio in VR and effective methods for implementing subtitles in their own VR projects.
Virtual Steps: The Experience of Walking for a Lifelong Wheelchair User in Virtual Reality
Atieh Taheri
University of California, Santa Barbara | PhD Candidate
I’m Atieh, a PhD Candidate in Electrical and Computer Engineering, working under the supervision of Prof. Misha Sra in the Human-AI Integration Lab at UCSB. My research intersects Human-Computer Interaction (HCI) and Accessibility, aiming to create meaningful technological solutions that improve the lives of individuals with disabilities. With a focus on participatory design, I’m dedicated to developing solutions that not only fulfill functional needs but also enhance the quality of life and user experience for those with disabilities, an area that has historically received less attention in Assistive Technology design research.
In this talk, we will share takeaways from our participatory design study exploring the experience of virtual walking for individuals with congenital mobility disabilities. Despite not having experienced walking first-hand, they have a mental model of how it feels from observing others walk. Matching the virtual experience to that mental model posed a challenge, which we overcame with an iterative design approach. In collaboration with a lifelong wheelchair user, we designed a VR walking system. Over a 9-day diary study, they documented their emotional journey and feelings of embodiment, agency, and presence. Our analysis revealed key themes, including the importance of aligning the VR experience with the user’s mental model of walking, providing customization options, and managing emotional complexities. Based on our findings, which emphasize the need for inclusive design practices, we will discuss how VR experiences can be designed to be emotionally engaging and accessible.
Accessibility as a Guide for Equity and Sustainability in Virtual Conferences
Andrea Stevenson Won
Cornell University | Assistant Professor
Andrea Stevenson Won directs the Virtual Embodiment Lab. The lab’s research focuses on how mediated experiences change people’s perceptions, especially in immersive media. Research areas include the clinical applications of virtual reality, and how nonverbal behavior as rendered in virtual environments affects collaboration and teamwork. She completed her PhD in the Virtual Human Interaction Lab in the Department of Communication at Stanford University and holds an MS in Biomedical Visualization from the University of Illinois at Chicago.
During the pandemic, when in-person conferences were suspended, some programs turned to immersive virtual environments to provide richer social interaction. This highlighted both the drawbacks and the advantages of virtual reality. As in-person interactions again become the norm, we should not lose sight of the potential for virtual experiences to be more equitable, more accessible, and more sustainable than their physical-world alternatives, and we can look to existing work on accessibility in social virtual reality as a guide.
Learnings from Co-Designing Products with People with Disabilities
Nicol Perez
Meta Reality Labs | Product Equity & Accessibility Programs Lead
Erin Leary
Meta Reality Labs | Product Accessibility Program Manager
Join us for a discussion of Meta’s approach to co-designing products with people with disabilities. You’ll learn how Meta approaches co-design, hear what participants’ experiences of co-designing with Meta have been like, and explore challenges and lessons for co-designing with communities.
A Machine-Learning-Powered Visual Tour Guide for Attentional Disorders
Hong Nguyen
The New School NYC | Graduate Student
Hong is a 5th-year graduate student at The New School in New York. Her research focuses on the contributions of core mechanisms of visual processing to (1) naive physical understanding, and (2) visual preferences. One line of research focuses on how information about forces is integrated into visual processes such as visual motion processing, and attention. Another line of research explores how spatial attention contributes to aesthetic preferences for scenes.
Attentional impairment is a feature of many clinical disorders, and designers and behavioral researchers routinely use attentional guides to help diverse users navigate computer interfaces. A downside of existing approaches is that they rely on the designer’s theory about how attention *should* be directed. Here we introduce an alternative, ‘bottom-up’ approach to guiding attention in XR. We identify a desired outcome (e.g., enjoying a virtual environment), train an algorithm on the attention patterns of users who achieve that outcome, and use it to guide new users’ attention similarly, producing the desired result. In two validation studies, observers’ enjoyment of a display greatly increased when they were guided toward regions that were attended by past participants who liked (vs. disliked) the display. The beauty of this approach is that it gets results without requiring the designer to know exactly why a reproduced pattern of attention caused the desired outcome.
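As a rough, purely illustrative sketch of the bottom-up approach described above (not the presenters’ code or data), the example below scores scene regions by how much more they were attended by past users who enjoyed the display than by those who did not, then selects the top regions as cue targets for new users. The gaze heatmaps are synthetic; a real system would use recorded eye tracking from the headset.

```python
# Toy sketch: learn where past users who enjoyed a scene looked, then cue new
# users toward those regions. All gaze data below is synthetic stand-in noise.
import numpy as np

RNG = np.random.default_rng(1)
GRID = (8, 8)  # coarse grid of scene regions

def synth_heatmap(enjoyed: bool) -> np.ndarray:
    """Fake per-user dwell-time heatmap; 'enjoyed' users dwell on one quadrant."""
    h = RNG.random(GRID)
    if enjoyed:
        h[:4, :4] += 2.0
    return h / h.sum()

heatmaps = np.stack([synth_heatmap(enjoyed=i < 50) for i in range(100)])
labels = np.array([i < 50 for i in range(100)])

# Score each region by how much more the "enjoyed" group attended to it.
score = heatmaps[labels].mean(axis=0) - heatmaps[~labels].mean(axis=0)

# Cue the new user toward the top-scoring regions (e.g., with subtle highlights).
k = 3
rows, cols = np.unravel_index(np.argsort(score, axis=None)[::-1][:k], GRID)
print("regions to highlight (row, col):", list(zip(rows.tolist(), cols.tolist())))
```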
Breakout Sessions
Our Breakout Sessions offer an opportunity for all of the experts and experienced members of our community to come together and brainstorm solutions to the greatest problems in XR Accessibility. There will be one group of breakout sessions on each day, lasting one hour apiece; attendees can choose one session from each group to attend.
Breakout Sessions A | June 6th
Cognitive Overload
Yuning Gao
New York University | Research Assistant
Description:
We will create VR simulations to demonstrate a type of performance art that combines audio, visual, interactive, and textual elements without leading the audience into cognitive overload.
Location:
Classroom 215
Sign Language & XR
Lorna Quandt
Gallaudet University | Associate Professor
Abraham Glasser
Gallaudet University | Assistant Professor
Description:
We will discuss challenges, opportunities, and the newest research related to signed languages in augmented and virtual reality.
Location:
Classroom 225
Immersive Healthcare
Peirce Clark
XR Association | Senior Manager of Research & Best Practices
Jeanne Li
Unity Technologies | Product Manager
Description:
Join this discussion to explore the importance of enabling accessibility in immersive healthcare solutions to ensure equitable access to healthcare innovations in patient care and therapeutic interventions in hospitals around the country.
Location:
Auditorium
Productizing: Creating Scalable Solutions
Tomer Joshua
Cornell Tech | Assistant Director of the Runway and Spinouts Program
David Begun
Beshi Tech | Founder
Description:
From Challenge to Product – How to identify a challenge and build scalable solutions.
Location:
Auditorium
Breakout Sessions B | June 7th
Harnessing Community
Dylan Fox
XR Access | Director of Operations
Thomas Logan
Equal Entry | Owner
Description:
Working from the outside in – how can the community effect change? For example, by gathering reports, creating proofs of concept, getting work adopted, etc.
Location:
Classroom 215
Blind AR
Sean Dougherty
LightHouse for the Blind and Visually Impaired | Director of Accessible UX Services
Description:
Learn from the first-hand perspectives of blind/low vision AR users, experience AR demos, and gain an understanding of the current gaps in AR accessibility, along with AT tools & accessibility features that can improve the UX.
Location:
Classroom 225
Universal User Data (UUD)
Gregory Welch
University of Central Florida | Professor
Description:
Come help us think about how to reduce the likelihood that user study data from a participant with a disability is discarded as an outlier, for example by transforming the data to be statistically comparable with others’ while still preserving the participant’s overall behavior and choices (see the sketch below for one simple example of such a transformation).
Location:
Auditorium
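As a minimal, purely illustrative example of the kind of transformation this session description mentions (not the organizers’ proposal), within-participant z-scoring keeps a participant whose raw scores sit far from the group mean statistically comparable while preserving their pattern across conditions:

```python
# Illustrative only: within-participant z-scoring on synthetic completion times.
import numpy as np

# Rows = participants, columns = task conditions (seconds, made up for the example).
times = np.array([
    [12.0, 15.0, 11.0],  # typical participant
    [11.0, 16.0, 12.0],
    [55.0, 70.0, 50.0],  # much slower overall, but the same condition-to-condition pattern
])

# Normalize each participant against their own mean and spread.
z = (times - times.mean(axis=1, keepdims=True)) / times.std(axis=1, keepdims=True)
print(np.round(z, 2))  # after normalization, all three rows show a similar shape
```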
Co-Design
Nicol Perez
Meta Reality Labs | Product Equity & Accessibility Programs Lead
Erin Leary
Meta Reality Labs | Product Accessibility Program Manager
Description:
Dive into Meta’s approach to co-designing cutting-edge technology alongside people with disabilities and engage in thought-provoking discussions on how we can elevate industry standards for building alongside diverse communities.
Location:
Auditorium
Posters
We received permission from the following authors to share their posters. You can find all the posters in our Google Drive.
Speakeasy: Using Voice-Driven AI for Tailored Immersive VR and MR Experiences
Michael Chaves
VR, Embodiment, and Cyberspace: A Historical Debate with Contemporary Resonance
Emmie Hine
Recovery Reimagined: Making VR Accessible to People in Recovery from Substance Use Disorder and Mental Health Challenges
Kaitlin Yeomans, Diana Aleman, Demetra Adams-Kane
Collaborative VR-Environments for People with and without Visual Impairments
Julia Anken, Michael Schneider, Thorsten Schwarz, Karin Müller
Making Avatar Gaze Accessible for Blind and Low Vision People in Virtual Reality: Preliminary Insights
Jazmin Collins, Crescentia Jung, Shiri Azenkot
Ergonomic Hand Motion Assistance and AR Rehabilitation
Jeanne Xinjun Li
Automatic Captioning in Live Virtual Reality Presentations
Dawson Franz, Pranav Pidathala, Abraham Glasser, Raja Kushalnagar, Christian Vogler
Towards the Guidelines for More Accessible and Inclusive XR and the Metaverse
Vanja Garaj, John Dudley, Rosella Galindo, Per Ola Kristensson
Towards an Eye-Brain-Computer Interface: Combining Gaze with the Stimulus-Preceding Negativity for XR Target Selections
Rajshekar Reddy, Michael Proulx, Leanne Hirshfield, Anthony Ries
Design Opportunities for Accessible VR Musical Performances for Blind and Low-Vision People
Khang Dang, Hamdi Korreshi, Yasir Iqbal, Sooyeon Lee
An Illustrative Case Study of Crowd+AI: Toward Innovating in Assistive AR for Blind and Low Vision People
Harish Lingam
Dobble Debate
Lynne Heller
VERA Locomotion Accessibility Toolkit
Parsa Baghaie, Corey Clements, Josh Federman, Adam Lei, Cristian Merino, Oliver Wacker
FAQs
How do I get to the venue?
Public transportation by subway, ferry, or tram is recommended. See this transportation guide for details.
Will the main stage presentations be recorded?
Yes, they will be streamed live via Zoom and added to the XR Access YouTube channel after the conference.
Will the breakout sessions be recorded?
No, unfortunately our recording equipment is not suited to capturing multiple small groups. However, the takeaways will be included in the Symposium Report.
Why does in-person attendance cost money?
Unfortunately, events in the physical world are expensive; the Symposium will cost tens of thousands of dollars to run. However, if the expense is a hardship for you, please apply for a scholarship using the link in the description. Note that registering for Zoom and watching online is still free.
Sponsors
XR Access Symposia can’t happen without the support of our generous sponsors. Visit our sponsorship page to learn about the many benefits of sponsoring the Symposium.