Abstract
Residual visual capabilities and the associated phenomenological experience can differ significantly between persons with similar visual acuity and the same diagnosis. There is substantial variance in the situations and tasks that persons with low vision find challenging. Smartglasses provide the opportunity to present individualized visual feedback targeted to each user’s requirements. Here, we interviewed nine persons with low vision to obtain insight into their subjective perceptual experience associated with factors such as illumination, color, contrast, and movement, as well as context factors. Further, we contribute a collection of everyday activities that rely on visual perception as well as strategies participants employ in their everyday lives. We find that our participants rely on their residual vision as the dominant sense in many different everyday activities. They prefer vision to other modalities if they can perceive the information visually, which highlights the need for assistive devices with visual feedback.
1 Introduction
In 2019, the World Health Organization estimated that 82.5 million persons worldwide have cataract, glaucoma, or age-related macular degeneration [1]. More than 40 million others have been diagnosed with less common eye diseases. For persons with such a visual impairment, prescription glasses or contact lenses cannot fully restore visual acuity. However, many have substantial residual visual capabilities. Even people classified as “blind” according to the WHO classification [1] can have a visual acuity of up to 0.05 (or 1/20). Only a very small portion of individuals with a vision impairment has no light perception whatsoever. In our work, all participants except for P9 are “blind” according to the WHO classification. Such a classification reinforces the common misconception that a blind person cannot see at all [12]. This misconception, together with the limited technology available in the past, may have hindered the development of visual assistive technology and led to assistive devices often using other modalities, such as audio [8] or haptic [9] feedback.
Leveraging residual vision when designing an assistive device is challenging because of interindividual differences and because visual acuity is a poor descriptor of visual perception [2]. Even persons with the same diagnosis and visual acuity may have vastly different subjective experiences. Nevertheless, visual feedback does not rely on the user’s hearing or a well-developed sense of touch, which especially benefits elderly users.
Assistive technology with visual feedback is often designed as a general-purpose device that disregards all context and shows an enhanced and enlarged image to the user. Such devices range from optical magnifiers and electronic magnifiers to magnifying software on computers or smartphones. Most of these solutions are either stationary or need to be held in the user’s hand. Novel devices such as smartglasses (SG) offer a hands-free solution for visual support. Some companies bring electronic magnifiers to the user’s head [7], and research explores other magnification approaches [11], but SG offer more than magnification; they offer context. This allows a device to carefully select the task-specific information the user needs and to display it in a way the user can perceive [4, 14, 15].
We interviewed nine people with very low vision and a range of visual impairments to obtain information about their subjective perceptual experience as well as everyday activities in which they use their visual sense or would like to use it. We find that the visual sense is used frequently and preferentially wherever possible, rather than as a supporting sense, and that participants expressed a desire for visual adjustments to the environment, such as made possible by SG. As the reported subjective experience and visual strategies differ substantially, we conclude that vision aids should offer options to customize the visual presentation of the feedback.
2 Related Work
Assistive Technology and Smartglasses. Assistive technology ranges from non-electronic devices like a white cane to advanced technology employing machine learning, like the OrCam [8]. The OrCam provides audio feedback and can read text, recognize colors, and identify people. Other research combines audio with tactile feedback for navigation [9]. Both devices share one advantage: they are hands-free. Another emerging technology that provides hands-free visual feedback is smartglasses, such as the Microsoft HoloLens [6]. Augmented reality (AR) applications can assist users by visually highlighting the desired product during a shopping task [15] or by augmenting stairs for safer navigation [14]. Visual augmentations can support users during interactions with touchscreens [4] and provide information in the context of reading clock faces or the facial expression of a conversational partner [5].
Visual Perception and Preferences. Different parameters can be measured to characterize visual perception, such as visual acuity, visual field, and contrast sensitivity [3]. However, these measures only hint at how a person perceives a visual stimulus. Zhao et al. evaluated how visual stimuli in SG are perceived by persons with low vision [13] and found that colors, shapes, and text displayed in the SG can be perceived by users with visual impairment. Sandnes asked three people with low vision about challenges out of their control in their everyday lives and where SG could assist them [10]. They identified recognizing facial expressions and reading text as the biggest challenges for which a solution through assistive technology is desired.
3 Methodology
We conducted semi-structured interviews with nine participants with low vision over the phone due to restrictions caused by an ongoing global pandemic. The interviews lasted around 1.5 h on average, and the participants were reimbursed with the equivalent of around 13.5 USD per hour. We sent participants a consent form and information regarding their data privacy and rights via email in advance. The participants could abort or interrupt the interview at any point; however, no participant made use of this option. Afterwards, the interview recordings were transcribed and analyzed for themes.
Participants. We recruited participants with severe visual impairment and with residual vision by contacting local support organizations. The group of participants consists of five female, four male, and zero diverse participants with a mean age of 54 (SD: 22.6) years. Five participants have central vision loss (P2 age-related macular degeneration, P4 and P9 Stargardt’s disease, P6 and P7 cone dystrophy), P8 has had peripheral vision loss since birth, P3 has strong myopia due to albinism, P1 has a retinal detachment, and P5 has nystagmus. Except for P9, all participants are “blind” according to the WHO classification [1]. A detailed overview of the participants and their visual acuity is shown in Table 1.
Questions. In addition to demographic and medical information, we structured the questions into two categories. First, we focused on subjective visual perception. We asked participants to describe their perception of color and contrast; the influence of flicker, movement, and motion; how their perception changes with distance; their preferred lighting as well as the occurrence of glare; reading; recognizable object sizes and shapes; and unpleasant visual stimuli in general. In the second part, we inquired about everyday activities, strategies, and assistive devices. Specifically, we asked participants about the assistive devices they use and about everyday situations in which they mainly use their visual sense, their sense of hearing, or their sense of touch, and whether they would prefer using their visual sense given visual adaptations of the environment. If not mentioned by the participant, we specifically asked about using stairs, cooking, doing laundry, computer and smartphone usage, shopping, social interaction, and traveling with public transport. In the following three sections, we report the results and supplement them with suggestions for the design of visual assistive technology.
4 Subjective Visual Perception
Reported individual capabilities and subjective experiences differed widely. However, some aspects were similar for most participants. For instance, color vision and prior knowledge play a crucial part in the processing of visual information.
Top-down Processing. Our brain takes prior knowledge about where things are located and what is happening in the environment and uses this information to structure and interpret the inputs provided by our sensory system. This is termed top-down processing. All but one participant mentioned profiting from top-down processing when they impose strict order onto their environment, e.g., in the fridge (P1, P4, P7), storage space (P4, P7), or working space (P3, P2, P6, P7, P9). This effect is not merely a memory effect, but directly influences visual perception, as P2 describes, “I cannot find it because I cannot see it. […] If I know it lies there […] then, I can see it.” Suppose one knows the location, shape, or color of an object or expects certain persons in a situation. In that case, picking the right object or identifying a person (P4) is possible based on visual features. However, structuring the environment increases cognitive load: “[…], then you need to remember everything. I could not be on the phone while cooking” - P7. Therefore, visual aids could be particularly useful if they expose structure in the environment, e.g., alert the user when they disrupt an existing order, i.e., misplace an object, or directly display the correct position where the user should place an object to reduce cognitive load.
Colors and Contrast. All participants reported seeing colors and using them in their everyday lives. Three participants can distinguish colors easily; the others need high contrasts or strong colors. P6 only uses the brightness contrast of colors in his central visual field but sees colors in the peripheral field of view. Shades of grey are often challenging to perceive (P2, P5, P8). For example, P5 reported that traffic light poles are hardly visible due to their grey color, “but I can see the yellow [button to indicate pedestrians want to cross] then I know that is a traffic light”. Participants also use color to mark objects, e.g., a colorful post-it on P3’s phone (“I am famous for this, I mark everything with colors”), or to structure their environment (P4, P7, P1). P1 even notes, “This is important. Seeing colors complements the missing vision.” Color perception also depends on contrast, as P4 and P6 only recognize a coin if it lies on a contrasting surface. Visual feedback should therefore be colorful and contrast with other visual elements and with the background, i.e., the real world.
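The contrast requirement can be made operational in software. As a sketch (the paper prescribes no formula; we borrow the WCAG 2.1 definition of relative luminance and contrast ratio), a smartglasses renderer could check a candidate feedback color against a sampled background color before displaying it:

```python
def _linearize(c: float) -> float:
    """Linearize one sRGB channel (value in 0..1), per WCAG 2.1."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[float, float, float]) -> float:
    """WCAG relative luminance of an sRGB color with channels in 0..1."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[float, float, float],
                   bg: tuple[float, float, float]) -> float:
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# White feedback on a black background yields the maximum ratio of 21:1.
print(round(contrast_ratio((1.0, 1.0, 1.0), (0.0, 0.0, 0.0)), 1))  # 21.0
```

A renderer could, for instance, reject or recolor an augmentation whose ratio against the camera-sampled background falls below a user-specific threshold.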
Object Size and Shape. All participants agree that contrast is more important to them than shape and, to some extent, the size of objects. Within the same color, perceptibility is better for larger objects, but a small contrast-rich object can be perceived more easily than a large object with little contrast. However, small objects disappear in central vision loss and require peripheral fixation to be perceived (P6). Only three participants expressed a preference regarding shape. Rectangles are easier to recognize (P8), or it is clearer where and how to grab them (P7). P3 prefers objects with characteristic and unusual outlines. Despite the minor role of the shape and size of an object compared to its color and contrast, a developer of a visual aid should keep these in mind and offer customization.
Movement, Motion, and Distance. One participant cannot perceive distance at all; the others recognize a moving car from two meters (P7) up to 30 m to 50 m (P4, P9). For larger distances, movement (P2, P3, P6, P9), especially movement through the visual field (e.g., left to right) (P4, P7), is reported as a crucial cue. Only P1 prefers still objects over movement. P2 stated, “If it does not move, I do not detect it,” but also clarified that a still object is required for perceiving details. In contrast to movement, motion (in the sense of movement at a constant position in the visual field, e.g., a person waving) is favorable compared to a still object for only half of the participants; the others see it as of little help (P2, P4, P7) or as useful only in extreme cases (P1). Therefore, movement can serve as a cue to get the user’s attention but is probably not particularly useful to convey detailed information.
Illumination and Glaring Lights. Most participants prefer daylight or bright white light (P1, P2, P4, P5, P7, P8, P9); P3 prefers uniform lighting independent of the brightness, and P6 prefers warm light at a low brightness level. However, except for P4, all participants indicated that they are easily dazzled, e.g., by white walls or white furniture in well-lit rooms. Many use sunglasses or cut-off filter glasses when outside. Flickering or blinking lights are often perceived as noticeable but irritating, especially if multiple are present (P1). P3 states that if a police car drives by, “I just stand still because I notice that my perception is disturbed [by the police lights],” and P5 finds not only flickering lights but also lights moving through her visual field hard to perceive and irritating. P2, P4, and P8 are not disturbed by blinking lights, but P4 and P8 additionally do not notice blinking lights in well-lit situations. Therefore, visual feedback needs to be adjustable, and SG should support wearing sunglasses or cut-off filter glasses underneath. Flicker and blinking should only be used sparingly, e.g., for emergency warnings.
5 Everyday Activities
In the following, we present the results regarding everyday activities, in which participants use their visual perception or employ strategies to compensate for the visual impairment. From few situations to daily usage, all participants have situations where they rely on their vision. As P4 puts it, “I see really little […], but I am used to doing a lot visually with that little. Through years of training.”
Navigation and Stairs. Navigation is a crucial element in everyday life. Multiple participants reported using their visual perception to “recognize the border between the grass [and the sidewalk]” (P8) or navigate through crowded areas “thanks to my peripheral vision” (P6). Participants also read the destination boards for public transport, although this is not always possible (P1, P4). For recognizing other traffic participants, especially cars when crossing a street, P5 does not rely on perceiving the cars themselves; instead, she uses vision to follow other pedestrians. Others use their hearing (P4, P7, P8), vision (P3), or a combination (P9). P1 and P2 adjust according to the situation: “if they have lights on, I can see them; otherwise I do not” - P2; “if it is quiet, then a quick look left and right is sufficient. When there is a lot of noise then […] I have to trust my vision more” - P1. For obstacles, especially stairs, participants use a combination of their vision and touch through a white cane or their foot. If unknown stairs cannot be visually perceived, touch is used. Three participants mentioned contrast-rich stairs as visually accessible for them, and four more agreed when asked that such markings reduce the need for using touch. Therefore, assistive devices can already provide a benefit with small augmentations that increase the contrast of objects. Designers should be careful not to occlude visual cues from the real world.
Food Preparation. All participants reported cooking or baking themselves. Four mentioned strong counter-top lighting as a necessity. Four participants use their hearing (talking kitchen scale) and all use their sense of touch at least as supporting information, e.g., when peeling carrots. On the other hand, three participants mention peeling potatoes as a mainly visual task. Others adjust their surroundings to use their visual sense: P7 uses bowls with strong colors and contrast to the kitchen counter and one participant uses a thermal imaging camera to fry steak. Thus, even a manual task like cooking is supported by the users’ vision and some participants already use visual “assistive” devices.
Household Appliances. All participants base their interactions with household appliances on training and knowledge (“I looked at the scale with my electronic magnifier and said ‘if the pointer is at 9 o’clock, then it is 180 °C’” - P1). Some combine it with visual or tactile markings. For example, P2 uses visual markers for her iron and stove, but tactile markers for the washing machine. P9 uses the sense of touch when she is certain about her interaction, otherwise she visually confirms her input. An assistive device should generally focus on unknown devices, but support the user for known devices by checking and confirming the user’s action.
Reading. None of the participants can read the newspaper. However, six participants (P1, P3, P4, P5, P6, P8) frequently read text with a large font size or use magnifiers. For long texts, these six use text to speech or screen readers, as do the others for all text. While text in a sufficient font size is useful for many participants, it should only be used as visual feedback if necessary.
Computer and Smartphone. All participants reported frequently using smartphones and computers. P7 sorted her desktop so that she can use the PC with magnification only, and P4 uses vision for all tasks but reading long texts. P6 reads news only on the smartphone because the PC websites are too cluttered. One participant (P3) even reported working for eight hours at a time on his PC, including reading text. Further, the smartphone was frequently mentioned as a magnifying assistive device (taking a photo) for other tasks, highlighting the use of visual assistive devices.
Social Interactions. All participants mentioned that they cannot recognize other persons unless they are spoken to. Then most can use the voice to identify the speaker. P4 and P6 can recognize well-known persons if they expect them to appear in a certain situation by visually recognizing the shape and stride of the persons. Facial expressions are not recognizable, but eight participants recognize body language. Together with the voice, this is enough to tell the coarse emotions of a conversational partner, but not enough to tell if they are bored or distracted (P3, P4), or hold eye contact (P6, P9). A device providing the status of the conversational partner’s attention, eye contact and facial expression would provide a great benefit for persons with visual impairment.
6 Personal Strategies and Adjustments to the World
All participants noted that they prefer their visual sense wherever possible and have regular activities where their visual sense is crucial for their perception. Accordingly, they often use visual aids and only P8 does not use magnifiers at all. Three participants reported carrying a flashlight to illuminate dark scenes or spotlight objects in front of them to perceive details. One uses a thermal imaging camera during cooking. A similar focus on the visual sense is apparent in the adjustments participants make. P1, P2, P3 use visual markers and large-text labels in their home and others rely on differently colored objects to file documents. When asked for any changes in the environment, six participants requested visual adjustments, such as contrast-rich markings for stairs or white lines on the sidewalk as orientation in cities. P9 indicated that although she can accomplish a task without vision in many cases, it is usually just barely within the threshold of being possible and “it just takes way longer”.
7 Guidelines and Challenges
Our results show that persons with low vision can profit from assistive technology using visual feedback. Here, we condense our findings into guidelines for developers of mixed-reality assistive technology. Specifically, the following properties are desirable in a visual aid: it should be adjustable by the user, automatically adapt to the user and the environment, carefully select the information the user needs at the moment, and automatically infer the user’s intention.
Adjustability. It should be possible for the user to adjust the device to their subjective perception. The most important properties for this are color and brightness. However, the form and level of detail of the presented information, as well as the tasks for which aid is desired, are important factors, too. The physical and software-based user interface should be accessible so that the user can make quick changes without asking other persons for help.
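As a minimal sketch of what such adjustability could look like in software (all names, defaults, and fields are hypothetical illustrations, not part of the study), a per-user profile might bundle the tunable properties in one place so the interface can expose them directly:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackProfile:
    """Hypothetical per-user settings for visual feedback in smartglasses."""
    highlight_color: tuple[int, int, int] = (255, 255, 0)  # strong colors preferred
    brightness: float = 1.0           # 0.0 (off) .. 1.0 (maximum)
    detail_level: int = 2             # 0 = alerts only, 2 = full annotations
    use_flicker: bool = False         # reserve blinking for emergency warnings
    assisted_tasks: set[str] = field(default_factory=lambda: {"stairs", "reading"})

profile = FeedbackProfile()
profile.brightness = 0.6              # a quick, user-driven adjustment
assert "stairs" in profile.assisted_tasks
```

Keeping all adjustable properties in a single, serializable profile also makes it easy to store and restore them per user.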
Context-Sensitivity. The system should automatically adapt to the user and the environment. For example, the brightness can be changed temporarily based on the illumination in the environment, and the visual complexity of the displayed information can be adapted to the user’s day-to-day condition. If the user’s actions indicate that the presented information is not correctly processed, permanent adjustments can be made, e.g., reducing the level of detail. Additionally, the system can successively introduce other modalities, e.g., sound, if the user is gradually losing their remaining visual capabilities due to a progressive medical condition.
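The temporary brightness adaptation could be realized as follows, sketched under the assumption that the headset exposes an ambient-light reading in lux (the function, thresholds, and scale factors are illustrative, not from the study):

```python
def adapted_brightness(base: float, ambient_lux: float) -> float:
    """Scale the user's chosen brightness with ambient illumination.

    Dim the overlay in dark scenes to avoid dazzling the user, boost it
    in bright daylight so it stays visible, and clamp to the displayable
    range 0..1. The user's base setting is never overwritten.
    """
    if ambient_lux < 50:          # dim indoor scene
        factor = 0.5
    elif ambient_lux < 1000:      # typical indoor lighting
        factor = 1.0
    else:                         # bright daylight
        factor = 1.5
    return max(0.0, min(1.0, base * factor))

print(adapted_brightness(0.6, 20))     # 0.3 (dimmed)
print(adapted_brightness(0.6, 300))    # 0.6 (unchanged)
print(adapted_brightness(0.8, 5000))   # 1.0 (boosted, clamped)
```

Because the adjustment is applied on top of the user's base setting rather than replacing it, the behavior stays predictable when the environment changes back.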
Selectivity. Participants expressed the concern that the visual feedback may be too much, e.g., if stimuli for multiple tasks are presented simultaneously, or if they occlude the real world. Therefore, the user should be able to clearly specify tasks they need assistance with as well as an order of priority, in which aid for simultaneous tasks should be presented.
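The priority ordering participants asked for could be sketched as a simple filter over pending cues; the task names, priority map, and cap are hypothetical, and the study specifies no such data model:

```python
# User-defined priority: lower number = more urgent; unlisted tasks are dropped
# entirely, reflecting that the user opted out of assistance for them.
PRIORITY = {"obstacle_warning": 0, "stairs": 1, "text": 2}

def select_cues(pending: list[str], max_simultaneous: int = 2) -> list[str]:
    """Keep only tasks the user opted into, most urgent first, capped to
    avoid cluttering the limited field of view."""
    wanted = [task for task in pending if task in PRIORITY]
    wanted.sort(key=PRIORITY.__getitem__)
    return wanted[:max_simultaneous]

print(select_cues(["text", "face_info", "obstacle_warning", "stairs"]))
# ['obstacle_warning', 'stairs']
```

The cap on simultaneous cues directly addresses the concern about occluding the real world: even when many tasks compete, only the most urgent augmentations are shown.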
Intention Recognition. The system should recognize the user’s intention and provide aid only for tasks the user is currently performing or attempting to perform. For example, other traffic participants on the road should only be highlighted if the user actually wants to cross the street, not while they are walking down the sidewalk.
Challenges. At present, building such an application poses a challenge to designers. Current hardware provides a limited field of view for visual augmentations, often paired with further disadvantages such as the considerable size and weight of the headset as well as a short battery life. While this challenge is likely to be solved with time, other issues are not. To perfectly adapt to the user, the device requires medical information and permanent monitoring of the user’s actions and the environment. Taking the data privacy of the user and of bystanders into account, some features may become difficult to implement from an ethical point of view.
Personalized assistive technology that uses machine learning and artificial intelligence to learn the user’s preferences and subjective perception can provide optimal task and user dependent visual feedback.
8 Conclusion and Outlook
We contribute an overview of the functional vision of persons with low vision and a summary of their everyday activities. Rather than focusing on what participants cannot see, cannot do, or find challenging, this overview covers an extensive range of their capabilities and learned strategies. We further show that for each individual there are specific situations and activities in which the visual sense is dominant and crucial for the task, and we give generalized guidelines for designers and developers. These insights can raise awareness of the impact novel visual solutions can have for “blind” individuals and provide guidance for the design of vision aids.
References
Blindness and vision impairment. World Health Organization (2021). https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
Colenbrander, A.: Vision rehabilitation is part of AMD care. Vision 2(1), 4 (2018). https://doi.org/10.3390/vision2010004, https://www.mdpi.com/2411-5150/2/1/4
Hyvärinen, L.: Visual perception in ‘low vision’. Perception 28(12), 1533–1537 (1999). https://doi.org/10.1068/p2856, PMID: 10793885
Lang, F., Machulla, T.: Pressing a button you cannot see: evaluating visual designs to assist persons with low vision through augmented reality. In: Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, VRST 2021. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3489849.3489873
Lang, F., Schmidt, A., Machulla, T.: Augmented reality for people with low vision: symbolic and alphanumeric representation of information. In: Miesenberger, K., Manduchi, R., Covarrubias Rodriguez, M., Peňáz, P. (eds.) Computers Helping People with Special Needs, pp. 146–156. Springer, Cham (2020)
Microsoft (2022). https://www.microsoft.com/hololens
NuEyes (2022). https://nueyes.com/
OrCam (2022). https://www.orcam.com
Patil, K., Jawadwala, Q., Shu, F.C.: Design and construction of electronic aid for visually impaired people. IEEE Trans. Hum. Mach. Syst. 48(2), 172–182 (2018). https://doi.org/10.1109/THMS.2018.2799588
Sandnes, F.E.: What do low-vision users really want from smart glasses? Faces, text and perhaps no glasses at all. In: Miesenberger, K., Bühler, C., Penaz, P. (eds.) Computers Helping People with Special Needs, pp. 187–194. Springer, Cham (2016)
Stearns, L., Findlater, L., Froehlich, J.E.: Design of an augmented reality magnification aid for low vision users. In: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 28–39. ASSETS 2018. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3234695.3236361
Thevin, L., Machulla, T.: Three common misconceptions about visual impairments. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 523–524 (2020). https://doi.org/10.1109/VRW50115.2020.00113
Zhao, Y., Hu, M., Hashash, S., Azenkot, S.: Understanding low vision people’s visual perception on commercial augmented reality glasses. In: CHI 2017, pp. 4170–4181. Association for Computing Machinery, New York, NY, USA (2017). https://doi.org/10.1145/3025453.3025949
Zhao, Y., Kupferstein, E., Castro, B.V., Feiner, S., Azenkot, S.: Designing AR visualizations to facilitate stair navigation for people with low vision. In: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019, pp. 387–402. Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3332165.3347906
Zhao, Y., Szpiro, S., Knighten, J., Azenkot, S.: CueSee: exploring visual cues for people with low vision to facilitate a visual search task. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016), pp. 73–84. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2971648.2971730
Acknowledgments
This research was supported by the German Federal Ministry of Education and Research as part of the project IDeA (grant no. 16SV8102) and Hive (grant no. 16SV8183). We would also like to thank the reviewers and ACs for their work and valuable feedback.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
Lang, F., Schmidt, A., Machulla, T. (2022). Mixed Reality as Assistive Technology: Guidelines Based on an Assessment of Residual Functional Vision in Persons with Low Vision. In: Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P. (eds) Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science, vol 13342. Springer, Cham. https://doi.org/10.1007/978-3-031-08645-8_57
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-08644-1
Online ISBN: 978-3-031-08645-8