Shiri Azenkot: Hi everyone, I'm Shiri Azenkot again, and welcome to our panel on research in accessible XR. I have three wonderful researchers joining us today, so I'm going to let them introduce themselves. Amy, let's go ahead and start with you.

Amy Pavel: Thanks Shiri. Hi, I'm Amy Pavel, a postdoctoral researcher in Carnegie Mellon University's Human-Computer Interaction Institute. In my research I design and develop new interactive techniques that use artificial intelligence, and most recently I've focused on creating AI-assisted tools to make media like augmented reality and videos more non-visually accessible.

Shiri Azenkot: Kyle, how about you?

Kyle Rector: Yeah, hi everyone, I'm Kyle Rector. I'm an Assistant Professor of Computer Science at the University of Iowa, and my research is in designing, developing, and evaluating technologies to enhance quality of life for people with visual impairments. I've focused on domains such as exercise and art, but most recently I've been looking into how to make virtual reality experiences more accessible.

Shiri Azenkot: And Martez?

Martez Mott: Yeah, hi everyone, my name is Martez Mott. I'm a senior researcher in the Ability group at Microsoft Research. In my research I try to create new ways for people with physical disabilities to have accessible interactions with the different types of computing devices they may have, and recently I've been trying to better understand how we can make augmented and virtual reality systems more accessible to people with limited mobility.

Shiri Azenkot: Great, thank you. So I wanted to get started by asking each of you to tell us about the work you've been doing in this area of XR accessibility. What are some of the research questions you've approached, and what are some of your key findings? Amy, why don't we start with you again.

Amy Pavel: Sure. My most recent project was working with a team on how to make augmented reality applications more accessible to people who use screen readers. A couple of the research questions here were: what interactions on AR applications today might be inaccessible? Here we covered domains like games, retail, and education. And then, how could we make those interactions screen reader accessible? To answer these two questions, we surveyed some of the most common interactions available on augmented reality applications today, and then we developed application prototypes that enabled common interactions like searching for digital content in AR scenes and gaining more information about content in a scene. For instance, imagine you're using an augmented reality museum app: you might click on an artifact to get more information about its history and where it came from. The second project I've worked on recently that's related to this is trying to make 360 videos accessible without the viewer having to move around so much to see all of the content. One key question here was: what are the important points in a 360 video, and can we reorient those points to be in front of the viewer, rather than the viewer having to manually search around to find them?

Shiri Azenkot: Okay, great, thanks Amy. Kyle, how about you?

Kyle Rector: Sure, yeah. A recent research project I've worked on is specifically in VR gaming.
I actually stumbled into VR during this. I was at a sports camp for people with visual impairments, and I noticed this game people were playing called Showdown, which is like air hockey except that you're wearing blinders. You're still playing against an opponent, but the key is you're listening to the ball. And I thought, what a cool game, and it's not that mainstream, so what if we could implement something virtually to make the game come to life? We thought virtual reality was a good avenue to start, and as we investigated we realized there's not a whole lot out there in terms of making virtual reality accessible to people with visual impairments, specifically if you're trying to track and interact with a moving object that you have to actually hit or touch. So one of our key questions in the project was how to convey the sound of a moving object such that somebody can follow along with it in a game and interact with it. As we were working on it, we realized another key question was how to scaffold, or give hints to, folks who are new to this kind of experience. And a secondary goal we had while implementing and evaluating this game, a virtual reality version of Showdown where you're hitting a virtual ball, was how people use their bodies to interact with the game.

Shiri Azenkot: So you developed a VR version of the real-life game Showdown, right?

Kyle Rector: Correct, yes, a modified version.

Shiri Azenkot: Can you say a little bit about what the actual Showdown game involves?

Kyle Rector: Absolutely, yeah. The actual Showdown game, again, we can parallel to air hockey, but it has some differences. The table is a bit longer and a little thinner than an air hockey table. It also has a wooden wall surrounding it, so if the ball runs along the edges you can hear it pretty clearly, and the ball is less likely to fly off the table. You're holding a wooden bat, about the size of a cell phone, with a handle on it. You're trying to hit this ball, which sounds like a maraca when you shake it, back and forth against your opponent, trying to get it into a semicircle-shaped goal next to your opponent. Both of you are wearing blinders and listening for this ball, and you're trying to make sure it doesn't go into the goal next to you. One cool thing about the game that's different from air hockey is that if the ball becomes silent on the opponent's side, you actually get a point, because you made it such that they can't hit it back to you. We ended up implementing a drill version of this game, where balls were coming to the player and they were trying to hit the balls back to the opponent, to see how well that would work.

Shiri Azenkot: Okay, makes sense. And then I cut you off, what were you going to say? Was there another part to the project?

Kyle Rector: Oh yeah, just some key findings. We actually tried two different ways of scaffolding folks, either verbal hints or adding some aspects of vibration. We found that verbal was ideal for those who had played the real Showdown game, because in the real world the only vibrations you'd be feeling are in the gameplay itself, like feeling the vibrations of the ball, and not in hint form. But I can talk more about that later as well.
Shiri Azenkot: Okay, so it sounds like both of you, Amy and Kyle, your focus in one way or another was on making AR or VR accessible to people with visual impairments, is that right? Okay, Martez, how about your projects?

Martez Mott: Yeah, so one of the first projects we did was to get a better understanding of what challenges people with physical disabilities have encountered, or might encounter, when they're using different types of VR systems. We conducted a mixed in-person and remote study where we actually had people try out different types of VR applications, and then we elicited feedback on what worked well, what didn't work, what could be improved, and if they could change anything, what they would like to change. What we were able to find from that is essentially a set of different barriers that might prevent people from being able to engage with these different types of VR systems, and from that study we've been able to set our research agenda based on those empirical results, based on the challenges that people actually have encountered and might encounter when using different types of VR systems.

Shiri Azenkot: That's really interesting. So your work was focusing on people with physical disabilities. Can you give some examples of what kinds of disabilities that was?

Martez Mott: Yeah, so we had people with cerebral palsy, people with muscular dystrophy, people with multiple sclerosis. And even though two people may have the same condition, cerebral palsy for example, they can have a wide range of different abilities. So you might interview 10 people with cerebral palsy, but everyone is still going to have different abilities that come to bear when you're thinking about how to make these systems more accessible. So I think it's really important, and I'm pretty sure that's preaching to the choir with the people attending this, to elicit as many different types of feedback from people as possible, because you get such a wide range of different experiences.

Shiri Azenkot: Yeah, yeah, definitely. That sounds a lot more complicated in some ways than just considering people who can't see, you know, who don't have any vision. So what were some of the barriers to using VR that you found in the study?

Martez Mott: Yeah, so most of the barriers we identified had to do with the physical aspects of VR hardware. Just setting up a VR system, for example, can be a big challenge, especially if somebody's thinking about doing this independently. If you've ever set up a VR system before, one of the first things you have to do is set up the boundary, the interactive area that you'll be playing inside, and that can pose challenges for people. For example, one of the first things you might have to do is set the floor level, but to do this the system might require you to take your controller and actually tap the floor with it. That could be a challenge for people who have difficulty bending down or bending over.
If you're a person in a power chair, for example, you might have difficulty reaching your arm to the ground to touch the floor, so that could pose a challenge. Just putting on a VR head-mounted display, the HMD, can be a challenge for people who may have difficulty lifting their arms above their chest; the systems themselves require you to tighten them on top of your head in a certain way, so that can be challenging. And then there are different challenges people might encounter when using the VR controllers themselves. I think there's a push now towards more hand tracking, but that can be challenging for people who may not be able to articulate their individual fingers. Other people may have challenges holding the controllers, pushing buttons on the controllers, or using two controllers at the same time. So all of these different problems can pop up, and right now we don't have great mechanisms in place to come up with alternative solutions if and when somebody does encounter one of these challenges.

Shiri Azenkot: Yeah, it seems like there are so many more ways to physically manipulate these systems now than, you know, the standard mouse, or even a mouse and a touchpad. So each of you, it seems, is working with very different VR or AR setups in terms of the hardware that's required to interact with the environments. We all have the same goal here ultimately, and that's to make VR and AR platforms in general accessible, but I'm wondering if you can describe your specific setup in a little more detail, just so we have a better sense of what exactly you were dealing with. Amy, maybe we can go back and start with you again?

Amy Pavel: Sure. The specific setup I was dealing with was mobile augmented reality. We focused on that setup because we already have a lot of mobile augmented reality applications. A classic example of this is the IKEA application, or the Measure application. And it's difficult to make use of those right now if you use a screen reader. So that's the setup we picked first, mobile AR, but I think some of the ideas might eventually generalize to AR with things like headsets as well.

Shiri Azenkot: Yeah, so by mobile AR you mean AR applications that are available today on smartphone devices, like the iPhone?

Amy Pavel: Exactly, smartphones or iPads would work.

Shiri Azenkot: Right, right, okay, cool. And that's very practical, and kind of very necessary for the more immediate future, like the coming few years.

Amy Pavel: Yes.

Shiri Azenkot: Kyle, what about you? It sounded like you had a very different setup.

Kyle Rector: Yes.

Shiri Azenkot: And you were looking at VR, right? Not AR, correct?

Kyle Rector: Correct, and non-visual VR at that. That relates to my setup: in fact, the first thing that crossed my mind was that this game is intentionally non-visual, and we ought to see how to make non-visual experiences offer access just as equitable as visual ones. So why require a headset at all? In my setup I used Unity to tie all the software together, and I had a Microsoft Kinect on a table about six feet away from the person. They were standing at the other end, playing at a physical table, and we also had over-the-ear headphones to deliver the audio.
I also wanted to give a haptic experience, so their playing hand, their dominant hand, held a Nintendo Switch Joy-Con, which has HD rumble, so they could feel vibrations. In this way we're able to sense the body and give them appropriate feedback based on their head, their hands, and how they're facing, all without needing a headset.

Shiri Azenkot: So Kyle, do you think that what you're finding, and your design of the game itself, could be easily applicable to kind of a standard off-the-shelf headset VR system? Or do you think this is more of a future scenario where we might be using VR in different contexts?

Kyle Rector: I can actually imagine answering yes to both things you said. First of all, Showdown specifically is not visual, so sure, you can put it on a headset, because headsets have headphones and everything, so you can certainly play the game that way, with the challenge being that you would not be able to see the play space. But I also feel like it'd be really cool if virtual reality products could be more modular. So maybe you have the visual module, but maybe you don't need it and you can remove it, and you can still have the audio, rather than having to take all the different sensory outputs at one time, depending on your preferences or abilities. So I'm also thinking further into the future as well.

Shiri Azenkot: Yeah, and I think it's really important at this time to be thinking about different VR setups. There's no reason to feel locked into any of the commercial devices that are available today; this technology is evolving very quickly. Martez, what about you, in your work?

Martez Mott: Yeah, so the work we've done so far has primarily looked at commercial VR hardware, off-the-shelf VR systems that you might get from HTC or Oculus. And although our investigations so far have focused primarily on virtual reality, we think our findings can generalize to any type of XR system that requires handheld controllers with six degrees of freedom, or anything that requires an HMD, especially if the person has to equip and unequip it for different types of experiences. That's been our primary focus so far, but we hope in the future to expand these investigations to understand what different new types of hardware we might be able to create. Because the current generation of VR hardware poses some significant accessibility challenges, and I think it's reasonable to expect that we're going to have to do a lot of work to make the physical hardware itself more accessible to people.

Shiri Azenkot: Yeah, and that goes along with what Kyle was saying about making the VR setup more modular. So maybe in a few years we'll be able to substitute one controller for another with different headsets, so that people can use whatever hardware is more suitable to their needs or their abilities. Going back to the actual substance of the projects, could you talk a little bit about some of the key challenges that you each faced? Amy, maybe we can go back and start with you again.

Amy Pavel: Sure. I think the main thing that separates AR applications from VR applications is that you have both physical and digital content that you might need to represent.
For instance, in a VR application, imagine you're placing furniture in a VR world: the developer can define a lot of characteristics about the world and how it should be represented with a screen reader. On the other hand, in AR, the environment might be messy, and you need computer vision to assess what's in the physical world and represent it using something like a screen reader. For instance, one thing we faced is that when you're placing furniture in a room, say I go to place a couch in my living room, it's really difficult to determine what in the physical space we should represent to the user. If they go to place a couch, we could tell them, oh, you're placing the couch next to the desk, but should we also represent, you know, clutter that's around, or artwork on the wall? So figuring out how to recognize what's in the physical environment, and then how to represent that in a summarized, condensed way that's actually useful to the user, is a major challenge in building screen reader accessible AR applications.

Shiri Azenkot: Did you actually go ahead and make this type of application accessible to people who don't have vision?

Amy Pavel: Yes, in quite a simplified scenario. We developed a couple of different prototypes for how you could place or search for content in the scene. For instance, you could place content in the scene by selecting among a few different options for where you would like to place it. If you had a lamp to place, you could select "on the table." But this was manually coded for now, and I think the future work is to figure out how we could recognize and create these applications more automatically. So just to summarize, the two prototypes we created were one for furniture placement and a second for viewing educational content; we had a solar system application as an example, where you could view the different objects in the scene.

Shiri Azenkot: Okay, say a little bit more about this application where you can place furniture, just because I think it's an interesting example, and I want to make sure people have a concrete understanding of why it was so difficult to make it accessible and how you actually did it. This is the kind of application people can use when they're considering buying new furniture or redecorating, to get a simulation or a mock-up of what the room might look like with the new furniture, right?

Amy Pavel: Yes, that's correct. Just to say a little more about what visual attributes might be interesting when you're placing furniture: some are physical attributes, like the size of the furniture. Does the furniture fit in the space provided, for instance? And then you might imagine that the color or the texture relative to other objects in the room might also be important. We prototyped pretty simple versions of this application just to try out a couple of different interactions. The example I was talking about is placing a lamp in the environment: it would give you a few options for where you could place it, for instance, would you like to place it on the desk? On the table? But it was kind of difficult to get more information about the visual aspects around the scene.
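To make the lamp example concrete, here is a minimal Python sketch of the option-based placement interaction Amy describes: the app announces a short list of candidate surfaces for the screen reader to speak, instead of requiring freehand visual placement. The surfaces, names, and phrasing here are hypothetical stand-ins for the prototype's manually coded scene annotations, not the actual prototype code.

```python
# Hypothetical sketch of screen-reader-friendly AR placement:
# offer a few detected surfaces as discrete placement targets.
from dataclasses import dataclass

@dataclass
class Surface:
    name: str        # e.g. "desk"
    context: str     # nearby landmarks worth announcing
    height_m: float  # rough surface height, a possible reachability cue

def placement_prompts(item: str, surfaces: list[Surface]) -> list[str]:
    """Build the announcements a screen reader would speak, one per option."""
    return [
        f"Place the {item}: option {i} of {len(surfaces)}, "
        f"on the {s.name}, {s.context}."
        for i, s in enumerate(surfaces, start=1)
    ]

if __name__ == "__main__":
    scene = [
        Surface("desk", "near the window", 0.75),
        Surface("side table", "to the left of the couch", 0.55),
    ]
    for prompt in placement_prompts("lamp", scene):
        print(prompt)
```

The hard part Amy points to is upstream of this sketch: deciding which surfaces and surrounding objects belong in that list at all, and how much visual context (the desk, clutter, artwork on the wall) is worth announcing.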
Amy Pavel: So that's definitely an interesting area for future work.

Shiri Azenkot: Yeah, and I can imagine that you're also very dependent on computer vision, and on whatever computer vision libraries are available to you.

Amy Pavel: Right, exactly. There are two major parts of computer vision I think are important here. One is segmenting the 3D scene: for instance, where are the surfaces in the scene at all? And the second part is how you can represent the scene in a more descriptive way. For instance, we might have a really good recognizer for desks, but not for other types of things in your environment.

Shiri Azenkot: Yeah. I think it's also interesting that a lot of the information you want to convey in this sort of user experience would, for a typically sighted user, be very subjective, you know? So figuring out what kind of information we're looking for, and how we can convey that in a useful, accessible way that allows a person with a visual impairment to make a judgment call, would be really interesting.

Amy Pavel: Yeah, and I think you could find a couple of interesting things to do here as well. Maybe it could suggest a couple of good placements, or it could give you a summary of the information and find out where you want more depth, maybe about how well the color matches something else in your room, or what the color itself is. What information is important to you can also change depending on what task you're doing. So it's a super interesting and challenging problem; I think there's so much room for more development there.

Shiri Azenkot: Yeah, yeah. Kyle, what about you? What were some of the key challenges?

Kyle Rector: Yes, so at first we were thinking, well, how do we convey the ball sound? We had to toy a bit with the audio. We ended up going with binaural sound for right to left, making it very clear, based on which ear you're hearing it in, where the ball is with respect to you. A further challenge was conveying the depth of the ball, and we eventually converged on a custom roll-off curve, plus some other audio effects, to make the ball on the opposite side of the table sound quieter than on the closer side. But once we got there, we realized there was another key challenge, and that is having others try out this experience. The student who worked with me on this (he's now at Microsoft) and I were able to pick up the game on our own pretty well. But when we had others just try it out informally, we realized that it's not just about making it work; that wasn't enough. Here's this bizarre new sound that you're hearing that doesn't sound like a ball; in fact it's like a wave, and the reason for that is that a varying sound is easier to track when an object is moving than a constant sound at the same exact pitch. So we had to give hints to, or more specifically scaffold, players as they're playing the game. We wanted to keep it enjoyable at the same time, so we had to figure out how to embed the hints during gameplay and then slowly roll them off, or remove them, as players advanced in level. We have these different levels based on the score you got, and at first you heard really detailed hints and constructive feedback, and we slowly removed that.
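A rough illustration of the audio mapping Kyle describes, in Python with made-up constants: binaural left/right panning from the ball's lateral position, a custom roll-off curve so the far end of the table sounds clearly quieter than the near end, and a mid-court verbal hint based on where the player's hand is. The curve shape, table dimensions, and function names are assumptions for illustration; the study's actual implementation was built in Unity.

```python
# Sketch of the three audio/hint mappings described above (all values assumed).
TABLE_LENGTH_M = 3.6  # assumed table length; depth 0 = player's edge
TABLE_WIDTH_M = 1.2

def pan(ball_x: float) -> float:
    """Map lateral position (meters from the centerline) to stereo pan in [-1, +1]."""
    half_width = TABLE_WIDTH_M / 2
    return max(-1.0, min(1.0, ball_x / half_width))

def gain(ball_depth: float) -> float:
    """Custom roll-off: exaggerate the near/far volume contrast.

    Kyle describes converging on a custom curve; this quadratic is just
    one plausible shape for making the far side clearly quieter.
    """
    t = max(0.0, min(1.0, ball_depth / TABLE_LENGTH_M))
    return 1.0 - 0.8 * t * t  # never fully silent while the ball is in play

def midcourt_hint(ball_x: float, hand_x: float) -> str | None:
    """Once the ball crosses halfway, say which way to move the hand."""
    error = ball_x - hand_x
    if abs(error) < 0.05:  # within 5 cm: close enough, no hint needed
        return None
    return "move right" if error > 0 else "move left"
```

In the actual game these hints were also leveled: spoken in full detail at first, then rolled off as the player's score increased.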
Kyle Rector: And so the challenge was how to allow people to pick up this new experience, without any prior experience of course, in such a way that they're immediately able to play, and then make it so that it becomes challenging enough later on. That was one of the big things we dealt with in the design.

Shiri Azenkot: You've mentioned this word scaffold several times, or hints, as you call them. Could you say a little bit more about what exactly you mean by this?

Kyle Rector: Yes, yeah. Scaffolding is something that's used in education as well. It's where you're providing enough information as you're doing the activity, so that it's all built in. In the example of our virtual reality game, we had some hints in advance, where you could hear where the ball starts and where it will end, said verbally, so you could get used to the 3D audio in your headphones and to things getting louder as they get closer to you. Then, as the ball crossed the halfway point, we paid attention to where your hand currently was, and we said whether you had to move your hand further to the right or to the left to hit the ball. And finally, if you hit the ball, of course you got cheers, and if you missed, we'd give constructive verbal feedback in the form of saying how far your hand should have gone. Sometimes we did that in literal measurements; other times we used metaphors, like the length of a pen or the length of your forearm. So we had these three different ways of building that information into gameplay, instead of just having a tutorial beforehand and then, here you go, play, good luck. It's trying to immerse it all together at the same time.

Shiri Azenkot: Do you think the scaffolds are necessary for, let's say, a future world where the audio simulations are going to be very, very realistic?

Kyle Rector: Hmm, I see what you're saying. So if it sounded like an actual ball, and you could tell exactly where it is in space, would you need this scaffolding?

Shiri Azenkot: Yeah.

Kyle Rector: Perhaps, well, perhaps not. If it sounds very realistic, then likely you'd be able to pick it up faster, right? At this time we tried using realistic sounds, and that was not possible, because it was really hard to tell where the ball was at a given moment. We actually tried a recording of a ball rolling on wood, replayed in a loop, and it just didn't quite work in terms of assessing the ball's depth.

Shiri Azenkot: Yeah, I'm sure it would be a very long time until we'd get to the point where it sounded just like a real ball. But that's really interesting, because I think this whole concept of scaffolding is going to be very important. It's a relatively new platform, and like we were talking about earlier, there are so many different setups for the platform that there are going to be learning curves for everyone. Martez, what were some of the key challenges that you faced in your work?

Martez Mott: Yeah, so it's been really difficult to try to understand the application-level accessibility of these VR systems, because the physical devices and the software are so tightly coupled. If you have inaccessible hardware, it's really difficult to assess, well, what's the overall accessibility of the platform?
Because you can't really get past this first part, which is: can people actually use the hardware that's required to interact with the software? We tried some investigations early on to get past this. For example, one attempt we made was to understand how we could have more accessible bimanual interactions in VR, but with unimanual input. If you play VR games or applications, you might encounter something like, oh, now I have to climb this ladder, and I have to use two hands to perform a climbing motion. Or I get to this door, or this safe, and I need to use two hands to turn a wheel to open it. But that could be a problem for people who might have less strength on one side of their body, or for a person who's an amputee and only wants to use one controller. So then there's this question of, well, how do we get past some of these software barriers if the hardware itself doesn't really allow us to do that? There was some work we did there to understand how we could, with just a single controller's input, make it more accessible to have bimanual interactions inside VR applications. So that's been one big challenge, just trying to understand how we get past or overcome these hardware inaccessibility barriers. And then the second, and this has especially been true over the last year and a half, which is when I started doing this work, is that not many people with physical disabilities personally own VR hardware, like VR headsets and these VR platforms. So it's really difficult to do co-design type work, which is what we would prefer to do, and actually work side by side with people. Due to COVID we haven't been able to be in person to do any of this work, and there are also very few opportunities for us to, for example, create some software or prototypes and actually have people try them out in their homes, because people just don't have the hardware. So it's really difficult to understand what kind of small tweaks we need to make, because a lot of these things are based on perception, right? We could build something, but if it doesn't really work for the people we're intending it to work for, because they don't have the opportunity to actually try it, it really makes it difficult for us to understand the efficacy of the things we're building. Those have been two big challenges we've been facing so far.

Shiri Azenkot: Yeah, that's really interesting, and I know from the work that I've done on making VR more accessible to people with visual disabilities, you're kind of starting from this place of, well, it's not accessible at all and they don't use the technology at all, so it's hard to even figure out what the specific challenges are. But you raised a really good point, Martez, that you wanted to do co-design with people with disabilities, to involve them in the design process itself. I know the work that all three of you do, and I know that's a priority for all of you, so I'm wondering if you can say a little bit more about your methods, and how you involve people with disabilities in this work in particular that we've been discussing. Amy, maybe we could go back to you?
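A minimal Python sketch of the single-controller bimanual mapping Martez describes above, using his safe-wheel example: one physical controller's twist drives two virtual grips, so an application's two-handed check still passes. The names and the mirroring rule are illustrative assumptions, not the team's actual implementation.

```python
# Hypothetical proxy for two-handed VR actions driven by one controller.
from dataclasses import dataclass

@dataclass
class VirtualHand:
    angle_deg: float = 0.0  # grip angle around the wheel's axis
    gripping: bool = False

def update_wheel(controller_twist_deg: float,
                 left: VirtualHand, right: VirtualHand) -> float:
    """Apply one controller's twist to both virtual hands.

    The physical controller drives the right grip; the proxy left grip
    mirrors it 180 degrees across the wheel, so software that requires
    two moving grips sees the motion it expects. Returns the wheel
    rotation applied this frame.
    """
    if not (left.gripping and right.gripping):
        return 0.0
    right.angle_deg += controller_twist_deg
    left.angle_deg = right.angle_deg + 180.0
    return controller_twist_deg
```

The same idea could extend to the ladder example: a single controller's up-and-down motion alternating between two virtual hands to produce the climbing gesture the application expects.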
Amy Pavel: Sure. For every project I take a slightly different approach, but for the AR project specifically, we essentially surveyed the applications that were already out there, and used guidelines on what types of things you might want to do with these applications, in order to create a taxonomy of the possible interactions that might be inaccessible. And then, because all of it was inaccessible, we involved blind users at the point where we had created a few different variations of prototypes, so that we could get early-stage feedback on the advantages and disadvantages of those variations, and on how they could inform better prototypes going forward. In other types of projects I might involve users with disabilities a little earlier, so we can get feedback at an even earlier stage. But in this specific project it was at the stage of comparing a few possible prototypes for making the base level of these AR applications accessible.

Shiri Azenkot: Yeah, that dovetails with what I was just saying: we'd like to involve people very early on, just in figuring out what challenges they're currently experiencing, but right now the answer is, well, I can't use this.

Amy Pavel: Exactly, so it's starting from basically zero, unfortunately. People might still have other ways to get around it, though, and I think that might be interesting to look at in the future. For instance, maybe you have a sighted partner help you with the parts of the application that you're unable to do right now. So there are still ways that people work around using these applications that could be informative, and definitely worth exploring in the future too.

Shiri Azenkot: Yeah, that's a really good point. Kyle, what about you?

Kyle Rector: Yeah, in this particular project the whole conception of the idea came from immersing myself and going to events like a sports camp for people with visual impairments, and just observing, because it's like a whole new world for me. So that was the motivation. Then over the course of the design process, I got very informal feedback, like, try this out, let me know what you think. Not systematic or fancy in any way, but that was actually what gave me the idea of the scaffolding, because it was clear that some people got it right away, and others were not even sure how to get started. That's when I thought, we need to scaffold them. And I wouldn't have thought of that if I hadn't had people informally try it out and talk to me about it. Then of course I did the official user studies with people, with proper procedures and all the data collection. And from the findings, one interesting tidbit that I definitely want to follow up on is that people were playing with forehand or backhand swings, but people were also doing other techniques, like jousting at the ball, poking at it, or just holding their hand stationary. So that interaction will actually inform future research, when I start thinking about how to use your body to interact with a moving object: not just whether you hit it, but how you hit it.
So when I start on that, I've already had some valuable conversations through my formal research to keep going. More generally in my research, I love to interview people, it's one of my favorite things to do, in addition to building stuff and testing it. I've done interview studies before, or embedded an interview early on in a project, talking to people informally as I go just to make sure I'm on the right track, so that by the time I'm officially testing, I have some level of confidence in what I'm doing.

Shiri Azenkot: Yeah, what you mentioned about talking to people informally, that's something I try to emphasize with my students all the time. That level of just immersing yourself in the community and doing all of this informal work is so important, even though it's not necessarily something that we as researchers can write up in our papers and get credit for in that way.

Kyle Rector: Absolutely, yeah. I remember even in graduate school, I volunteered at various things in the state of Washington, where I was. And it was just as rewarding as it was helpful for my research, just a lot of fun.

Shiri Azenkot: Yeah. Okay, so we only have a few more minutes before going into Q&A, but I wanted to ask each of you: what are some of the more general, pressing challenges that you see for the field of making XR accessible moving forward? Amy, what do you think?

Amy Pavel: Yeah, so there are three main challenges I've been thinking about. One is: how do we represent the physical world and its interactions with our digital content, knowing that we will likely need to use computer vision algorithms to recognize and then summarize the content in the scene? I think there are a lot of interaction questions as well as deep computer vision questions there. The second main challenge is that, at least in my experience, augmented reality can be inaccessible because you also need to move around a lot. Similar to Martez's work: a lot of AR applications require walking, or using very fine-grained motion to move your camera around the scene. So working on how we can make AR more accessible to people with mobility impairments will be really important. And a third thing is that right now we're often retrofitting applications to be accessible after they're already designed, and I think Kyle's project is a really great example of making an accessible-first application. In the future, more applications could think about accessibility from the very beginning, involve people in their design process, and maybe come up with even better solutions than the after-the-fact accessibility improvements we have today.

Shiri Azenkot: Kyle, how about you? What are some of the pressing challenges for the field of XR accessibility moving forward?

Kyle Rector: Absolutely, yeah. Thinking about sound for a second: I was using sounds so that you could actually play something, but there's also sound that's meant for immersiveness, or realism, I guess those are one and the same, and for entertainment. So how do you blend all those sounds together, so that you can use it, but also enjoy it fully and have a realistic experience like you do when you hear things in the real world?
So that's one thing. Another thing I found interesting, and this was based on an interaction I had: a person was actually relieved that the VR game had no headset, because they were prone to seizures. That was something I hadn't even considered when I was building this, and now it's apparent to me that it's not just one class of disability, although that's how I often frame my work. We have to think of comorbidities: people aren't necessarily going to have just one disability, or maybe it's a hidden disability. So we're actually generalizing much more broadly than the target audience we're thinking of, and people with comorbidities could have even more valuable insights that you hadn't considered before. That was helpful for me, and I guess it ties back into this whole modular thing. And I appreciate what Martez is doing as well, thinking about different controllers: what are all the different pieces that you need to make an inclusive VR experience?

Shiri Azenkot: Yeah, and also just the fact that it's easier for us to put people in buckets, in different categories, but ultimately there's so much more variation within each of these categories that it's very challenging to account for. Martez, what are some of the pressing challenges for the field moving forward?

Martez Mott: Yeah, I think one of the biggest challenges I've identified so far has been that we really need to broaden our perception of what we think these technologies should be. One of the more frustrating experiences I've had talking to people is that content creators and application developers might have a very specific idea of what they want their VR experience to look like. They want it to feel so realistic, they want you to actually feel like you've been transported to these different places, and they really want this sense of embodiment and immersion. That can be all well and good in some instances, but by going through that approach, what we end up doing is actually importing some of the inaccessibility of the physical world into the virtual world. And we don't need to. That makes it really difficult when we then try to develop new methods or techniques to improve the accessibility of VR, because we're so tightly tying what we think these VR environments should be to what our physical lives are like. We really want this immersion of embodying avatars, and for it to be lifelike, but we're not also giving room to understand how we can create more accessible experiences that allow people to be and exist in different ways and still have all of these great experiences in the VR world. We don't want people who face accessibility barriers in the physical world to then find those same accessibility barriers in the virtual world. So I think we as a field really need to broaden our conception of what VR and AR technologies should be in a lot of these different instances.

Shiri Azenkot: Yeah, I think that's a really interesting point, and that's also something we can try to do collectively as the XR Access Initiative: changing or broadening people's conception of what AR and VR should be, which obviously should take into account people's range of abilities. Okay, great. Let's see if we have any questions from Slack.
So Jessie, what are you seeing there? Do we have any questions?

Jessie Taft: Thank you, Shiri, Martez, Kyle, and Amy, for that great talk! As a reminder, you can post questions to the panelists in the topic-research channel on Slack. I'm going to send this first question to Martez for the industry perspective, but would love to hear from others as well. We have seen so much great work, yours included, in making XR accessible. What can researchers do, whether they are independent, at a university, or in some other situation, to help get their findings put into practice by developers or platform or hardware companies, so that end users end up benefiting? Or, vice versa, how can companies benefit from this type of fundamental research?

Shiri Azenkot: Sorry, before Martez answers, let me just interject really quickly that that's one of the key goals of XR Access, right, to try to make sure there's a line of communication and more translation of the research we're doing. Okay, sorry, Martez.

Martez Mott: No, yeah, I agree with Shiri's point. That's been something interesting for me, as a trained academic who's now in an industry research environment, to understand how I can take the things we may be doing at Microsoft Research and share our findings, share our learnings, with people and product teams so that they can best take advantage of them. Sometimes, and I'll just use myself as an example, I might naively say, oh yeah, we wrote a paper on this, here's a link, go read my paper. And people might be like, okay, I don't have time, or they just don't want to read through this 15-page paper you sent them, right? So there also has to be this practice of distilling down some of our findings, and some of the work that we do, into forms that are more easily digestible for people. People on product teams have their own set of deadlines and their own set of requirements they're trying to meet, and we need to make it as easy as possible for them to consume what we're producing, and to make it as useful as possible for them. So I would try things like XR Access, or other types of forums, to share the work in different ways, and produce alternative writeups. Not just the paper: if you can produce a small blog post that's like, hey, here's a 500-word breakdown of this 10,000-word paper I just wrote, that might be a much easier way for people to digest some of the findings. Especially if you can produce it in a way like, hey, here's this 500-word blog post, here's a link to our GitHub where we have some code online, take a look at this demo. I think that can go a long way toward helping people understand what's been going on in the research field.

Shiri Azenkot: Yeah, and we're also going to be looking at additional ways of just getting people together and giving short talks through XR Access to disseminate research to other interested parties. So stay tuned for that.

Jessie Taft: So the next question on Slack is for Amy. You mentioned using machine learning to help create accessible experiences. Can you talk a little bit more about how you do that, and what are some of the challenges, maybe pros and cons, of using that kind of technology in your work?

Amy Pavel: Yes, so it's challenging.
Right now a lot of the technology is quite limited, and a couple of different types of limitations I've run into are, first, the regular problems we're used to thinking about, like the accuracy of an algorithm. If I say this is a table, or I say this is a floor, how accurate is that? Another thing I've run into is that sometimes the descriptions we can recognize for something aren't yet adequate. If you think about doing machine learning in your home, you might have personal objects, or things that are unique to you, that you might want described differently than the system's default. This can be even more challenging when describing people, who might have different identities they want represented differently in the machine learning descriptions, and it can be challenging with objects in your room as well. So there's accuracy, and there's training descriptions so that they're actually relevant to you and not just generic, which can happen right now. And then a third thing is that you get back a lot of information. Say I was to recognize every object in your room right now: that would be way too many objects to describe to you personally. So I think a really interesting challenge for the future is how we can allow end users to customize for themselves what types of objects they would like prioritized in descriptions, and what types of objects are less interesting to them to have described. A major challenge of machine learning going forward is making sure that end users actually have more say in what these techniques recognize about themselves and their environment, rather than trying to solve all of that on the side of the developer, who might have quite specialized knowledge. So those are a couple of the challenges I could think of so far.

Jessie Taft: Great, thank you. So we have time for one more question. I'm going to send this to Shiri, but I'd love to hear from others as well. And that is: how can we get students, whether at the undergraduate or graduate level, involved in XR design and research? How do we train that next generation of researchers?

Shiri Azenkot: Ah, well, there are many ways to do this. I mean, it depends on who the question is coming from. Do we mean in a general sense, or specifically someone who's at a university or at a certain company?

Jessie Taft: I think just in general: what efforts are going on to make sure that people have training to become knowledgeable in this area?

Shiri Azenkot: So again, this is one of the goals of XR Access, and I'll just plug that this summer, and in the next few summers, we have funding to support undergraduates in a summer research experience on making XR accessible. So that's ongoing now, and if you have or know students who might be interested for next summer, we'll be advertising that when the time comes. I also think it's very important to introduce accessibility into the undergraduate and graduate, but especially undergraduate, curriculum.
So if you or someone you know has a class on XR, then it's incredibly important to somehow make sure that the perspective of people with disabilities is represented, whether it's highlighting some of the research that we've been talking about here, or bringing in a person with a disability to talk about some of the challenges they experience when using a device. I think that exposure and that awareness is incredibly important, and it's a starting point. Ideally I'd like to say that we should all, as professors, be teaching classes on XR accessibility, but, you know, we need to do both: we need classes focused on accessibility, and we need to make sure accessibility is incorporated into any design, development, or introductory class.

Jessie Taft: Awesome, thank you very much. That's all the time we have for questions for the research panel. We would love to have you all continue the discussion on Slack, and I'm hoping the panelists may be able to answer some additional specific questions about their work there as well, because we do have some of those coming up. So thank you again to Shiri, Amy, Martez, and Kyle. Now we're going to take a quick break, and we'll return in 10 minutes for the final panel of the day, on building a diverse talent pipeline.