Co-designing with People with Low Vision: Using Augmented Reality to Make Physical Games Accessible
Designing For Inclusive Fun: Accessibility is Not Only About Function
Team: Amy Asadi, Shiva Ghasemi, Matthew Ryson
According to the Centers for Disease Control and Prevention (CDC), 3.3 million Americans over the age of 40 have low vision. Many assistive technologies already address basic user needs in the visually impaired community, but little attention has been given to sports and recreation for blind and low vision individuals. People with low vision often miss out on the tangibility of physical recreational games and resort to video games or other digital games for fun. While these can be played indoors, they lack the holistic richness of physical games, which promote both physical and mental health.
Inclusive design is the practice of creating experiences that enable everyone to participate. Instead of exploring yet more purely digital versions of recreational activities, we set out to create access to physical recreational sports and activities for people with low vision. We found this possible through augmented reality. AR technology is inherently visual; even so, our study showed that AR smart glasses can provide substantial benefits to people with low vision.
In our study, we designed an AR tool using both visual and auditory feedback to assist people with low vision and provide a high level of customization with edge enhancement, brightness and contrast adjustment, magnification, audio feedback, and customizable classification attributes. The mentioned features are intended to help the visually impaired community experience inclusion in recreational activities.
We used the following methods in collaboration with a participant who has low vision.
User interview: to understand the unique needs of visually impaired individuals.
Participatory Design (PD) session: to explore and ideate different solutions about select problems with our participant.
Evaluation: to validate our design with our participant and acquire quick feedback for improvement.
The User Interview
We began our project with an initial interview with our participant to understand his personal and everyday experiences. At the time of the interview, he had a visual acuity of 20/300 in one eye and 20/400 in the other, along with advanced digital literacy. To prepare, we compiled a list of initial assumptions, which we then formulated as questions. The interview was semi-structured, giving us initial guidance while leaving our participant free rein to describe his context of interest. We set out to learn what a typical day is like for him and how work, recreation, and other activities are integrated into his life.
Through our discovery interview, we recognized that many of our initial assumptions were incorrect. For example, we had assumed that people with low vision may feel more vulnerable in potentially dangerous environments such as neighborhoods with high crime rates. However, our participant noted that feeling unsafe in such environments is not an issue for him. Another assumption we questioned was the accessibility of public places: initially, we assumed the current infrastructure of public areas is rather inaccessible.
On the contrary, our participant noted how accessible public places are, including public transportation and restaurants. During our interview, our participant mentioned that most of his functional needs are already addressed through current technology. A few he uses include Google Maps and a white cane. Noting this, we pivoted our conversation to discuss recreational activities of interest. Our participant mentioned his current interests in boxing, golf, and chess to name a few. He expressed limitations with fully participating in such recreational activities. For example, he stated that he cannot fully participate in indoor boxing because he is unable to visually note his opponents’ moves quickly enough in close proximity without suffering more punches than necessary. From this insight, our team was directed to our opportunity space: exploring ways to make recreational activities more accessible to people with low vision.
The Participatory Design Session
Transitioning from the interview phase to our second phase, participatory design planning entailed devising a series of potential design opportunities to share with our participant. Through ideation, our team created sketches of our ideas. These ideas included sensors in boxing gloves with embedded audio signals to magnify sound from a boxing opponent and variations of augmented reality overlays on various recreational artifacts such as baseballs or chess pieces.
Our co-design protocol included fleshing out two design ideas as cultural probes. To do so, we created photoshopped images and GIFs of boxing opponents outlined in high contrast colors and overlays.
During our session, we used our sketches and cultural probes to help initiate blue sky thinking for our participant. The key to our session was to ensure our participant felt like a co-designer rather than a passive evaluator of ideas. To facilitate this, we set up a Google Jamboard “Bags of Stuff” activity after sharing our cultural probes. This activity included whiteboards with photos of tools such as sensors, recreational helmets, and smart glasses. Our participant was prompted to use the tools to devise his own technology that would make recreational activities accessible, using the example sketches and probes as inspiration to think creatively.
Participatory Design Insights
During our co-design session, our participant chose to explore ways to make chess accessible rather than other games such as billiards, boxing, and bike riding, since boxing and bike riding pose safety concerns for testing. He had played chess actively before his sight began to diminish in recent years.
Because playing chess is a widely adopted recreational activity, our participant felt that expanding access to chess among people with visual impairments would benefit a greater population. He mentioned that the Queen’s Gambit TV series on Netflix inspired him to indulge in chess professionally, but he has difficulties in distinguishing black pieces and is unable to discern pieces such as kings and queens.
Due to his worsening visual acuity, he suggested a tool that would provide gesture recognition, audio feedback, and high contrast for identification of chess pieces, squares, and potential moves. Our participant was particularly excited about including a feature that could help him train professionally. This included a means to understand particular moves and strategies. Together, we felt confident about designing an assistive technology that could assist low vision and blind people with playing chess. Our consensus was to develop a pair of smart glasses that combines features from multiple assistive technologies as a comprehensive solution that could be applied to a wide array of recreational activities and sports.
The Interactive Prototype: Bringing Our Participant’s Design to Life
When it became time to prototype, we decided to focus on developing a solution that consists of both hardware and software: AR smart glasses and a complementary application that could be accessed via the AR smart glasses or a mobile phone.
The AR Smart Glasses
We intended to create a slim and stylish pair of smart glasses, prioritizing aesthetics in inclusive design. The AR smart glasses sync with a mobile device over Bluetooth and apply the customization settings configured in the companion application to enable inclusion for people with low vision. These settings include edge enhancement, brightness and contrast adjustment, magnification, audio feedback, and customizable classification attributes (assigning objects new attributes that are easier to discern in physical environments). All applications can be accessed through voice commands or simple toggle switches on the AR smart glasses.
PlayAR: A Complementary Application for Inclusive Recreation
PlayAR is a complementary application that can be opened with the AR smart glasses. With computer vision, more specifically object recognition, the AR smart glasses can recognize the recreational activity one is about to play. After confirming that activity, PlayAR offers several ways to customize the physical experience through enhancements with augmented reality.
Data captured through object recognition is presented to the user as a semi-transparent overlay of that object. The overlay highlights detected edges in bright colors that users can customize to their preference, improving spatial understanding and depth perception.
During the interview, our participant noted that various high contrast settings on display devices enhance his ability to focus. Our initial aim was to sharpen edges to help separate important objects like chess pieces from irrelevant areas of the scene. We learned which color combinations work best for him and included these in our prototype.
Since he has difficulty discerning chess pieces like kings and queens, the edge enhancement will assign outlines around these objects to more easily identify their location on the board. We decided to augment the chess squares and perimeter of the board with edge enhancement using a high chroma yellow hue. Customizability for edge enhancement is essential, so our prototype includes multiple options to select an appropriate color preference for edge enhancement.
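The edge-enhancement overlay described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the actual PlayAR implementation): it detects strong intensity edges with finite differences and paints them in a user-chosen high-chroma yellow; the color value and threshold are illustrative defaults a user profile might store.

```python
import numpy as np

# Illustrative RGB value for a high-chroma yellow; a real profile would
# let the user pick from several high-contrast options.
HIGH_CHROMA_YELLOW = (255, 230, 0)

def edge_mask(gray: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask of strong intensity edges via finite differences."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

def overlay_edges(frame: np.ndarray, gray: np.ndarray,
                  color=HIGH_CHROMA_YELLOW) -> np.ndarray:
    """Paint edge pixels in the user's chosen high-contrast color."""
    out = frame.copy()
    out[edge_mask(gray)] = color
    return out
```

A production version would likely use a proper edge detector (e.g., Canny) and render the result as a semi-transparent AR layer rather than overwriting pixels, but the customization hook is the same: the color and threshold come from the user's settings.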
In addition to visual feedback, our tool recognizes hand gestures and objects while providing audio feedback through bone conduction headphones. Providing auditory feedback avoids overloading the user with visual enhancements.
Our participant has a form of color blindness which, in combination with his low vision, compounds the difficulties he experiences. We therefore had to be selective about the colors chosen to provide contrast.
At the same time, our participant noted that color can be used to enhance recognition. In our prototype, we converted irrelevant objects and background elements in the periphery to grayscale and reserved color for the chessboard and chess pieces as standout objects. All colors are customizable, giving users the liberty to select among several options to accommodate their visual deficiencies.
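The selective-grayscale idea can be sketched as follows. This is a hypothetical illustration, not the shipped feature: given a mask marking the region of interest (e.g., the detected chessboard), everything outside the mask is desaturated while the masked pixels keep their original color.

```python
import numpy as np

def focus_color(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep masked pixels (e.g., the detected chessboard) in full color
    and convert everything else to grayscale."""
    # Luminance via the standard BT.601 weighting of R, G, B.
    gray = (frame @ np.array([0.299, 0.587, 0.114])).astype(frame.dtype)
    out = np.repeat(gray[..., None], 3, axis=2)  # gray replicated to 3 channels
    out[mask] = frame[mask]                      # restore color inside the mask
    return out
```

In practice the mask would come from the object-recognition step, and the grayscale conversion could itself be a user-adjustable dimming level rather than a hard switch.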
Classifying Objects with Customized Attributes
Games are often built around a classification system for objects that perform a particular role within the game. In chess, pieces are classified by color (usually black and white), which distinguishes one's own pieces from the opponent's, and by shape, which signifies the strategy associated with each piece.
Chess requires an understanding of this classification system in one view in order to make a strategic decision about one’s next move. Our prototype enables the player to optionally customize how they’d like to classify game piece attributes.
This allows them to choose attributes they find more accessible, such as colors with varying levels of hue and contrast, as well as shapes or patterns. These settings can be accessed from the game mode reference screen at any time during the game and can serve as an alternative or enhancement to the semi-transparent overlays on the pieces themselves.
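A customizable classification profile like this is essentially a lookup from recognized piece type to user-chosen display attributes. The sketch below is illustrative only; the piece names, colors, and markers are hypothetical defaults standing in for whatever the user saves in their profile.

```python
from dataclasses import dataclass

@dataclass
class PieceStyle:
    color: str   # overlay color chosen by the user
    marker: str  # shape or pattern drawn with the piece

# Hypothetical default profile; a real app would load this from user settings.
DEFAULT_PROFILE = {
    "king":   PieceStyle(color="yellow", marker="diamond"),
    "queen":  PieceStyle(color="cyan",   marker="circle"),
    "knight": PieceStyle(color="orange", marker="triangle"),
}

def style_for(piece: str, profile=DEFAULT_PROFILE) -> PieceStyle:
    """Look up the user's customized attributes for a recognized piece,
    falling back to a plain outline when no custom style is saved."""
    return profile.get(piece, PieceStyle(color="white", marker="outline"))
```

Because the mapping is data, not logic, the same mechanism extends to other games: billiard balls or playing cards would simply use a different set of keys and attributes.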
Our participant noted the significance of contrast in deciphering visual information for people with low vision. We therefore wanted to ensure that our content has sufficient contrast options to customize. Some users with low vision may need even more contrast. Others may benefit from specific color combinations, such as blue text on a yellow background.
Magnification is among the most common assistive technologies used by people with low vision. We designed dedicated toggle switch buttons on our AR smart glasses for magnification, allowing people with visual impairments to customize the view of the scene based on their needs.
While co-designing our product, we determined that magnification should only be adjustable via the main toggle switch of the smart glasses. We considered an auto-zoom function triggered by a pointing gesture, but our consensus was that this would be suboptimal: magnification should remain constant and be easily adjustable manually, giving the user more control.
Overall, our participant liked the concept of wearable AR smart glasses that embrace aesthetics and provide gesture recognition. He valued all of the customization features because everyone's needs are different. He found that high contrast yellow outlines helped him differentiate chess pieces easily. He liked the color-coded diamonds as a new classification system, as he could recognize the locations of chess pieces at a glance. He found the display of chess strategies through object and gesture recognition valuable, especially for his training needs.
1. How well did the solution end up addressing the problem?
This tool can be applied to any indoor or outdoor game. Our mission was to make physical games accessible for people with low vision, improving their quality of life and bringing them fun. Incorporating a high level of customization, such as zoom, color, contrast, customized classification systems, and captions, helps people with low vision who have varying needs.
2. Who else may be able to benefit from this design?
This tool can be applied to any individual with low vision as well as sighted individuals. The training feature for chess also helps sighted people learn chess strategies and envision a few moves ahead.
Our mission for the visually impaired community is to introduce fun and engage them in social and physical activities to reduce isolation and exclusion.
3. What aspects of the design might exclude other users (thinking of users with a range of abilities)? How might you iterate on the design to reduce those exclusions?
The audio feedback will not reach people with hearing impairments, but other features can complement it. Although this tool is designed for people with low vision, we hope the AR experience will make physical activities accessible enough that blind people can also enjoy the tangibility of physical games.
It is assumed that the audio feedback can be an information source for blind people. However, further testing and feedback will be required to determine if audio feedback is sufficient in circumstances where scanning environments for simultaneous information such as locations of chess pieces on a board is required. Providing simultaneous audio or haptic feedback as an alternative classification system for game pieces may be an opportunity to explore.
Moreover, ideating for the needs of people with motor impairments should be further explored as well. Having a feature where movement of chess pieces could be made through the direction of eye movements could be an opportunity area to enable access for this community.
4. What aspects of the participatory design process worked well or didn’t work well?
The remote participatory design process was challenging when it came to designing wearable assistive technology. We resorted to the Wizard of Oz method or other methods involving envisioning hypothetical scenarios to gather feedback. It is possible that this placed limitations on the quality of feedback we could receive.
Lack of access to AR tools such as the Microsoft HoloLens and lack of familiarity with AR software also presented a challenge. Most prototyping software for augmented reality requires programming, which was not an avenue we could take due to time constraints, so we resorted to alternative prototyping methods to simulate the augmented experience.
Utilizing cultural probes and the bags of stuff activity was helpful to initiate blue sky thinking and help our participant see the range of design opportunities he could explore as a co-designer.
5. In working with your participant, what were the biggest surprises — the things you learned that you would not have predicted?
This project really exposed us to our own ability bias. For example, we predicted that our participant would need to rely on the voice assistant to walk through the customization menu items.
However, the contrast provided within the typography was suitable for his needs. Similarly, we questioned whether an alternative classification system would be confusing; our participant found it helpful and useful.
6. To what extent did the methods you chose for your design sessions and prototypes get at what you were looking for?
Using a semi-structured interview to understand our user's needs allowed us to test our assumptions through the questions we asked, but it also enabled us to identify the right problem to solve, because it gave our participant room to guide us toward what he deemed the right problem.
Secondly, performing a participatory design session was fruitful to place us in the right direction. We were able to design what our participant envisioned would best suit his needs. This lessened the possibility that translated needs would get lost since our participant was directly involved in designing.
7. In hindsight, would a different approach (process, not specifics of your interface) have been better?
Testing a wearable assistive technology, and particularly AR with a Microsoft HoloLens, would have provided our participant with a more authentic experience. This would have allowed him to articulate nuances in his experience that could not otherwise be captured.
8. If you had more time and money for the design process, would you have done anything differently?
If our team had had more money and time, we would have learned Unity and C# to build a more polished prototype. Access to a HoloLens or other AR smart glasses would have helped us bring our envisioned design into reality, and we would have tested it with more participants.