09:07:06 From 3Play Media Captioner : If anyone would rather view captions in a separate window, here's the link: https://www.streamtext.net/player?event=3Play-Sponsorship
09:24:34 From matt myrick : Hi. My name is Matt. I have a couple of questions regarding CC. 1. What is the next generation of CC; will CC become more fluid when watching TV? What I mean by fluid is the placement of CC: when a person speaks, the CC would appear where that person is on the screen, which would enhance the UX. Where are we with that? 2. There was some speculation that subtitles will eventually go away and be replaced by CC. The current CC is more transparent in that it captures background noise, intonation, languages, etc. Is that accurate?
09:31:40 From Kinga Skorupska : Hi, I am Kinga and I was wondering about the aspects of accuracy and customization, and perhaps their tradeoff. What about the users' preferred reading speed? At least for movies there are some guidelines to keep it at 14 characters/second for a maximum of 21 to enable people to absorb content more easily, but it may vary greatly from person to person.
09:34:00 From Louell Sala : Hi I'm Louell from Benvision. Just wanted to know: 1. What are some of the accessibility challenges you see in terms of designing applications and platforms? 2. What accessibility standards do you recommend?
09:34:07 From Paul Meyer : My question: how many deaf/hoh people are actually involved in this type of project? I would hope to include more, because deaf/hoh people are pretty much experts on captioning; the "printed" words are basically life or death for us in relating to the world at hand. And we want that to carry over into our leisure time (gaming, etc.) to foster communication and involvement within the VR setting, so will deaf representation increase or matter?
09:35:46 From Brenden Gilbert : Reacted to "My question: how many..." with ❤️
09:36:33 From Dylan Fox : Some questions from Louell Sala via email: 1. What are some of the accessibility challenges you see in terms of designing applications and platforms? 2. What accessibility standards do you recommend? And 3. How do we ensure compliance?
09:36:40 From Brenden Gilbert : Reacted to "Hi I'm Louell from B..." with ❤️
09:37:17 From Jesse Anderson : I have a specific question for Meta. Are there any plans to natively add built-in accessibility features to the Quest platform in the near future, like captions, a magnifier, screen narration/screen reader, etc.? As excited as I am for some of the new features of the Quest 3, I've so far decided not to buy it because of the lack of dashboard accessibility on the platform. I own the original Quest and Rift, but don't want to spend more money on devices that are difficult, if not largely inaccessible, to me. I was admittedly quite disappointed that no accessibility was announced during the Quest 3 launch. Thanks.
09:38:25 From Kenneth Moodie : Hello, I'm Ken from Schell Games, and I was curious if there is an example of an existing XR title that exemplifies best practices for closed captioning.
09:38:31 From Brenden Gilbert : Reacted to "Hi. My name is Matt...." with 👍🏼
09:42:07 From Joel Ward : Reacted to "I have a specific qu..." with 👍
09:43:34 From Louell Sala : Reacted to "Some questions from …" with ❤️
09:44:02 From Deogratias Shidende : Hi, my name is Deogratias and I work at the AR learning accessibility lab (AuReLiA) at DHBW Heidenheim, Germany. I want to highlight an issue concerning caption generation and flexibility. It can be a challenge to maintain the UI and UX when users are allowed to adjust the font size of captions for accessibility purposes in XR; this can lead to distortion and breakage of the UI. Therefore, I am seeking recommendations for maintaining the UI, especially in the context of real-time caption generation in the XR environment.
09:46:27 From Marie Vans : VRChat.
09:46:57 From Meryl Evans : Marie, please put the world names in here.
09:47:44 From Bartosz Muczynski : I think it's this one: https://en.vrcw.net/world/detail/wrld_5e778b1b-88c7-4f44-8ff3-a3f289816fd0
09:47:47 From Meryl Evans : Are there instructions for putting the module in VRChat?
09:48:22 From Marie Vans : The worlds in VRChat are: Experimental Sign Language World and MrDummy_NL's world.
09:48:28 From Meryl Evans : Thanks, Bartosz. How does a deaf person not fluent in ASL communicate in that world?
09:49:06 From Bartosz Muczynski : Oh, I have no idea. Haven't tried this yet ^^"
09:49:26 From Meryl Evans : That was a question for @Marie Vans.
09:49:30 From Marie Vans : My email is marie.vans@colostate.edu. If you are interested, I can put you in touch with someone who is "immersed" in the deaf community there.
09:49:54 From Meryl Evans : Thank you, @Marie Vans. What about deaf people not fluent in ASL? Not all people born deaf sign.
09:51:31 From Lily Bond (she/her) : https://tdiforaccess.org/ has started doing some advocacy for XR @Lucas Soto
09:52:02 From Marie Vans : @Meryl Evans The Experimental Sign Language World teaches VSL... it is ASL, with the exception of a couple of letters that are not possible to generate with controllers.
09:52:32 From Marie Vans : VSL = Virtual Sign Language
09:53:04 From Meryl Evans : If a deaf person doesn't even know any sign language ... how are they able to "listen" to the others?
09:53:06 From Marie Vans : There are lessons in the Sign Language World that teach VSL.
09:53:35 From Lucas Soto : Reacted to "@Meryl Evans The Exp..." with 👍🏽
09:53:40 From matt myrick : This is a fabulous discussion…. I don't like the fact that I have to drop for another call. It starts with the law…. The law is out there, and organizations and companies need to adhere to best practices. Companies are establishing their own COE and DEIA programs; they come hand in hand, and it is a large effort to get everyone on board, but that is one way to advocate for ensuring CC is not left out. Keep up the great work…
09:53:41 From Meryl Evans : Owlchemy Labs captions can cause nausea for me. They float and are transparent.
09:53:42 From Lucas Soto : Reacted to "https://tdiforaccess..." with ❤️
09:54:53 From Paul Meyer : The only way to make this hugely successful is to have more deaf/hoh representation, involve deaf orgs (NAD, etc.), and reach out to the deaf people working at the FCC to get more accessibility and awareness into the XR world for captioning for all users who rely on captioning.
09:54:53 From Brenden Gilbert : Reacted to "This is a fabulous d..." with ❤️
09:55:08 From Brenden Gilbert : I agree, @Paul Meyer!
09:55:18 From Savio Menifer : Reacted to "The only way to make..." with 👍
09:55:30 From Michelle DeYoung : Reacted to "The only way to make..." with 👍
09:55:32 From Meryl Evans : There is a whole Immersive Captioning Working Group filled with a diversity of deaf and HoH people who would make great contributors.
09:55:59 From Brenden Gilbert : More people like me and others need to be involved or consulted. We aren't, for the most part, currently, and actually have to force ourselves in. Good point, Paul.
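Kinga's reading-speed numbers above map directly onto caption timing: at a given characters-per-second rate, each caption needs a minimum time on screen before it can be replaced. The sketch below shows only that arithmetic; it assumes her 14 characters/second figure, and the function name and clamp values are illustrative, not taken from any published standard.

```typescript
// Minimum display time for one caption at a given reading speed.
// 14 cps echoes the guideline mentioned above; min/max clamps are
// illustrative defaults, not a standard.
function minDisplaySeconds(
  caption: string,
  charsPerSecond = 14,
  minSeconds = 1.0, // never flash a caption more briefly than this
  maxSeconds = 7.0  // don't park a caption on screen indefinitely either
): number {
  const visibleChars = caption.replace(/\s+/g, " ").trim().length;
  return Math.min(maxSeconds, Math.max(minSeconds, visibleChars / charsPerSecond));
}

// Example: a 44-character caption needs roughly 3 seconds at 14 cps.
console.log(minDisplaySeconds("Data shows deaf people have a high incidence"));
```

A real implementation would also have to account for the per-user variation Kinga mentions, e.g. by exposing charsPerSecond as a user setting rather than a constant.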
09:56:19 From Peirce Clark : Reacted to "There is a whole Imm..." with ❤️
09:56:52 From Meryl Evans : I've been writing articles about my experiences in VR and captions at EqualEntry.com. Motion can be a problem for many deaf folks.
09:57:03 From Peirce Clark : Reacted to "I've been writing ar..." with ❤️
09:57:35 From Brenden Gilbert : Key is to offer all options, not one set method. Agreed, Meryl 🙂
09:57:40 From Lucas Soto : I have another question: Currently, are there "plug-ins" or embedded tools within game creation software (Unity or Unreal Engine) that allow for easy creation and manipulation of closed captions? What tools are the studios who are providing captions using to create them?
09:57:40 From Peter Galbraith : Thanks for the feedback @Meryl Evans. Though Owlchemy Labs does believe our captions work well for a majority of players, the team and I are looking into ways to make our captions better for an even wider group of people, including providing more options for customization to allow players to create an experience that is most comfortable for them.
09:58:03 From Brenden Gilbert : i may like Owlchemy's and advocate for it, but they need to also offer alternate options for different people - their solution fits one set of people.
09:58:05 From Meryl Evans : Reacted to "i may like Owlchemy'..." with 🙏
09:58:08 From Jazmin Cano : Reacted to "Thanks for the feedb..." with ❤️
09:58:09 From Meryl Evans : Removed a 🙏 reaction from "i may like Owlchemy'..."
09:58:15 From Meryl Evans : Reacted to "Thanks for the feedb..." with 🙏
09:58:17 From paul Jackson : Reacted to "This is a fabulous d..." with ❤️
09:58:33 From Brenden Gilbert : Reacted to "Thanks for the feedb..." with ❤️
09:58:46 From Brenden Gilbert : Good to see you @Peter Galbraith! :)
09:58:51 From Meryl Evans : Owlchemy's captions move in two directions ... one floats and one follows the person's head ... two movements = vestibular problems.
09:59:01 From Peter Galbraith : Reacted to "Good to see you @Pet..." with 👋
09:59:14 From Meryl Evans : Data shows deaf people have a high incidence of vestibular disorders.
10:00:02 From Meryl Evans : Here's Savio's article and presentation: https://equalentry.com/accessibility-virtual-reality-captions-open-source/
10:00:16 From Brenden Gilbert : Reacted to "I have another quest..." with ❤️
10:00:19 From Savio Menifer : https://github.com/XR-Access-Initiative/chirp-captions
10:00:19 From Lucas Soto : ☺️ Thank you!!! =) Will check that out!
10:00:24 From Dylan Fox : https://xraccess.org/slack
10:00:49 From Lucas Soto : Reacted to "https://github.com/X..." with ❤️
10:00:50 From Marie Vans : Reacted to "https://github.com/X..." with ❤️
10:00:53 From Lucas Soto : Reacted to "Here's Savio's artic..." with ❤️
10:03:58 From Paul Meyer : How many of you on the panel (other than Brenden) work with deaf people on this subject?
10:04:14 From Meryl Evans : Paul, I work with some of the companies on this. You know I'm deaf.
10:04:16 From Paul Meyer : Or have deaf/hoh co-workers?
10:05:11 From Meryl Evans : @Paul Meyer, @Savio Menifer and I worked together on his project. He listened to a lot of my feedback.
10:05:29 From Savio Menifer : Sorry I can't speak today, but one of the challenges I faced when working on XR captions is: how can you display captions at an appropriate depth? Usually headlocked captions are rendered at a fixed depth, which leads to the viewer having to switch focus back and forth between the captions and the speaker.
10:05:38 From Lily Bond (she/her) : Replying to "Hi. My name is Matt...." Hey, Matt! Re: #1: most of these decisions are made by the production companies - it is based on their preferences for how captions appear from a creative perspective, as well as the streaming platforms' standards. We are seeing a bit more creativity here, and certainly advanced caption formats can support more fluid placement in 2D. Re: #2 - subtitles are intended for viewers who don't understand the source language, whereas captions are intended for viewers who can't hear the audio. Therefore, captions include other critical auditory information. I don't expect either to go away, but we are seeing more subtitles with captioning elements (these are called "subtitles for the deaf and hard of hearing"). Hope this is helpful!
10:06:14 From Jazmin Cano : I'm curious to hear thoughts on hand tracking in social XR. Are there examples where people can communicate in sign language without accidentally interacting with the environment or platform controls?
10:06:21 From Yao Ding : I'd love to hear from the community about privacy, e.g., speech being captured and processed.
10:06:36 From Brenden Gilbert : Reacted to "I'm curious to hear ..." with ❤️
10:06:45 From Lily Bond (she/her) : Reacted to "I'd love to hear fro..." with 💯
10:07:44 From Michael Swift : Other issues we find include getting hold of 'expert users' to test whether our accessibility provision really matches their needs.
10:08:24 From Brenden Gilbert : Replying to "Other issues we find..." I would love to be one of your expert users 🙂 @Michael Swift
10:08:32 From Michael Swift : Reacted to "I would love to be o..." with ❤️
10:08:45 From Paul Meyer : There should not be an issue finding expert users... they are literally in front of you...
10:08:53 From Michael Swift : Reacted to "There should not be ..." with ❤️
10:08:57 From Meryl Evans : Reacted to "There should not be ..." with ❤️
10:09:02 From Brenden Gilbert : Reacted to "There should not be ..." with ❤️
10:09:30 From Michael Swift : For the party - I find moving captions (in 2D TV) really problematic… so I agree that I would want captions to be somewhere consistent... perhaps with haptics to alert to things coming through.
10:11:33 From Savio Menifer : I agree; for a crowded room, it would be cool to have more control over what captions you see. Maybe an overview with the ability to filter or focus on specific speakers would be helpful.
10:11:35 From Paul Meyer : I hope someday in the future, an eavesdropping tool for us deaf people! HA.
10:11:40 From Brenden Gilbert : Reacted to "I hope someday in th..." with 😂
10:11:50 From Yao Ding : Reacted to "I hope someday in th..." with 😂
10:11:52 From Deogratias Shidende : Replying to "https://xraccess.org..." Hi @Dylan Fox, which Slack channel will be used for caption community discussion?
10:12:08 From Lauren Stanger : Reacted to "I would love to be o…" with ❤️
10:12:09 From Meryl Evans : Can't tell you how many times I thought someone said my name and they didn't. Or they said it, but I missed it.
10:12:24 From Lauren Stanger : Reacted to "I hope someday in th…" with 😎
10:12:34 From Dylan Fox : Replying to "https://xraccess.org..." #Captions
10:13:42 From Peirce Clark : That's a great idea, Lucas!
10:14:37 From Deogratias Shidende : Reacted to "#Captions" with 👍
10:16:16 From Lucas Soto : @Brenden Gilbert Love the haptics idea.
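Savio's depth question above comes down to where the caption anchor sits each frame: rather than a fixed distance in front of the head, it can sit along the line of sight to the speaker, at roughly the speaker's distance, so the eyes do not have to refocus between the two. Below is a minimal, engine-agnostic sketch of that placement; the vector helpers, the function name, and the 4 m depth cap are illustrative assumptions rather than anything a particular runtime provides.

```typescript
// Place a caption at (approximately) the speaker's depth so the viewer
// doesn't have to refocus between caption and speaker. Pure vector math;
// fetching head/speaker poses and rendering the label are assumed to
// happen elsewhere.
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const norm = (a: Vec3): number => Math.hypot(a.x, a.y, a.z);

function captionAnchor(
  head: Vec3,
  speakerHead: Vec3,
  maxDepthMeters = 4.0, // cap so captions from distant speakers stay readable
  dropMeters = 0.25     // hang the caption just below the speaker's head
): Vec3 {
  const toSpeaker = sub(speakerHead, head);
  const distance = norm(toSpeaker);
  const depth = Math.min(distance, maxDepthMeters);
  const dir = scale(toSpeaker, 1 / Math.max(distance, 1e-6));
  const anchor = add(head, scale(dir, depth));
  return { x: anchor.x, y: anchor.y - dropMeters, z: anchor.z };
}
```

Called once per frame, this keeps the caption converged at roughly the speaker's depth (up to the cap) instead of head-locked at a fixed distance, which is the focus-switching problem Savio describes.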
10:16:17 From Savio Menifer : Maybe it's even possible to pick out keywords, or show quick summaries of what people are saying, so that conversations don't cover up the whole screen in a crowded screen
10:16:35 From Bartosz Muczynski : Adding to this idea with haptics: I'm in love with the PSVR2 including vibration directly in the headset. I was imagining how cool it would be if there were not one but a full array of haptics around the head to direct attention.
10:16:43 From Savio Menifer : Replying to "Maybe it's even poss..." crowded room*
10:16:51 From Peirce Clark : Reacted to "Maybe it's even poss..." with ❤️
10:20:15 From Paul Meyer : Think about this.... most deaf/hoh people have no idea where an item went if we drop it... good luck in a dark room. That gives you an idea of the sense we have of where "sound" comes from.
10:20:38 From Michael Swift : Reacted to "Think about this......." with 👍
10:21:30 From Savio Menifer : Reacted to "Think about this......." with 👍
10:21:57 From Lucas Soto : Reacted to "Think about this......." with 👍🏽
10:21:59 From Meryl Evans : Focus on motion: https://equalentry.com/virtual-reality-accessibility-comfort-ratings-and-reduced-motion/
10:22:41 From Yao Ding : Reacted to "Focus on motion: htt..." with ❤️
10:22:49 From Peirce Clark : Reacted to "Focus on motion: htt..." with ❤️
10:23:52 From Meryl Evans : I've seen an international film use different color captions to represent the language, like blue for Korean and yellow for Japanese. It just made the experience more confusing and focused attention on the captions.
10:24:15 From Meryl Evans : Have to be very careful with colors for a lot of reasons ... including colorblindness.
10:24:20 From Michael Swift : Reacted to "I've seen an internat..." with 👍
10:24:24 From Lily Bond (she/her) : Reacted to "Have to be very care..." with 💯
10:24:27 From Yao Ding : Reacted to "Have to be very care..." with 👍
10:24:38 From Brenden Gilbert : Again, options should be provided, @Meryl Evans.
10:24:54 From Dylan Fox to Brenden Gilbert (Direct Message) : Sorry, turned off video instead of removing spotlight.
10:25:03 From Brenden Gilbert : It's not a one-size-fits-all solution - colors can be modified to fit.
10:25:17 From Michael Swift : I agree that consistency in captioning is really important… getting the balance between innovation/creative solutions and UI is vital. 🙁
10:25:17 From Meryl Evans : Captions are not meant to make people think. They should be easy and not require figuring anything out. Absolutely, @Brenden Gilbert. Options are the No. 1 thing to do. But as a starting point ... you want captions that have the most reach.
10:25:41 From Michael Swift : Reacted to "Captions are not mea..." with 👍
10:25:54 From Meryl Evans : I'm working with an XR company on their captions. We explored many options. They will provide options, but as a starting point, they're following CC best practices to get it implemented.
10:26:44 From Dylan Fox : https://xraccess.org/slack
10:26:48 From Brenden Gilbert : www.audioradar.com
10:27:00 From Yao Ding : Replying to "I'm working with an ..." If there are such best practices for captions in VR you can share, I'd really appreciate it!
10:27:05 From Dylan Fox : https://xraccess.org/github
10:27:07 From Brenden Gilbert : https://www.audioradar.com
10:27:09 From Peter Galbraith : Reacted to "Captions are not mea..." with 👍
10:27:28 From Paul Meyer : Replying to "I'm working with an ..." We need flexibility to fit what we need for our eyes!
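On Bartosz's ring-of-haptics idea and Paul's point about sound direction: the core of such a feature is turning the bearing from the head to a sound source into an index on a ring of actuators. The sketch below shows only that arithmetic; no current headset exposes an API like this, so the eight-motor ring, the function, and its parameters are purely hypothetical.

```typescript
// Map a sound source to one actuator in a hypothetical ring of haptic motors
// around the headset. Convention: yaw 0 means facing +z, index 0 is the motor
// straight ahead, and indices increase clockwise when viewed from above.
function actuatorForSound(
  headYawRad: number,
  headPos: { x: number; z: number },
  soundPos: { x: number; z: number },
  actuatorCount = 8 // assumed ring of 8 motors
): number {
  // World-space bearing from head to sound, then made relative to head yaw.
  const bearing = Math.atan2(soundPos.x - headPos.x, soundPos.z - headPos.z);
  const twoPi = 2 * Math.PI;
  // Wrap into [0, 2*pi) so the division below indexes the ring cleanly.
  const relative = (((bearing - headYawRad) % twoPi) + twoPi) % twoPi;
  return Math.round(relative / (twoPi / actuatorCount)) % actuatorCount;
}

// Example: a sound directly to the viewer's right lands a quarter of the way
// around the ring (index 2 of 8).
console.log(actuatorForSound(0, { x: 0, z: 0 }, { x: 3, z: 0 }));
```

The same bearing math could just as well drive a visual cue (an arrow or a caption placement hint) for sounds that fall outside the field of view.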
10:27:32 From Titi Lucas : Thank you everyone, this has been fantastic and really insightful.
10:27:41 From Lily Bond (she/her) : Thanks for the great discussion, everyone!
10:27:42 From Dan Greenwood : Thank you
10:27:46 From Michael Swift : Absolutely amazing - thanks everyone. 🙂
10:27:47 From Meryl Evans : Thank you!
10:27:47 From Jazmin Cano : Thank you!!
10:27:48 From Peter Galbraith : Thanks all!