INCLUSIVE EDUCATION: A CASE STUDY ON ITS CHALLENGES AND LONG-TERM IMPACT ON VISUALLY IMPAIRED INDIVIDUALS

  • International Journal of Modern Education 2(4):30-42

Hind Elnour, Universiti Kebangsaan Malaysia





Published: 26 January 2023

Early detection of visual impairment in young children using a smartphone-based deep learning system

  • Wenben Chen 1   na1 ,
  • Ruiyang Li 1   na1 ,
  • Qinji Yu 2   na1 ,
  • Andi Xu 1   na1 ,
  • Yile Feng 3   na1 ,
  • Ruixin Wang 1 ,
  • Lanqin Zhao   ORCID: orcid.org/0000-0002-5182-3678 1 ,
  • Zhenzhe Lin 1 ,
  • Yahan Yang 1 ,
  • Duoru Lin 1 ,
  • Xiaohang Wu 1 ,
  • Jingjing Chen 1 ,
  • Zhenzhen Liu 1 ,
  • Yuxuan Wu 1 ,
  • Kang Dang 3 ,
  • Kexin Qiu 3 ,
  • Zilong Wang   ORCID: orcid.org/0000-0002-6760-1471 3 ,
  • Ziheng Zhou 3 ,
  • Dong Liu 1 ,
  • Qianni Wu 1 ,
  • Mingyuan Li 1 ,
  • Yifan Xiang   ORCID: orcid.org/0000-0002-5559-3654 1 ,
  • Xiaoyan Li 1 ,
  • Zhuoling Lin 1 ,
  • Danqi Zeng 1 ,
  • Yunjian Huang 1 ,
  • Silang Mo 4 ,
  • Xiucheng Huang 4 ,
  • Shulin Sun 5 ,
  • Jianmin Hu 6 ,
  • Jun Zhao 7 ,
  • Meirong Wei 8 ,
  • Shoulong Hu 9 , 10 ,
  • Liang Chen 11 ,
  • Bingfa Dai 6 ,
  • Huasheng Yang 1 ,
  • Danping Huang 1 ,
  • Xiaoming Lin 1 ,
  • Lingyi Liang 1 ,
  • Xiaoyan Ding 1 ,
  • Yangfan Yang 1 ,
  • Pengsen Wu 1 ,
  • Feihui Zheng 12 ,
  • Nick Stanojcic 13 ,
  • Ji-Peng Olivia Li   ORCID: orcid.org/0000-0001-8130-2913 14 ,
  • Carol Y. Cheung 15 ,
  • Erping Long   ORCID: orcid.org/0000-0002-3502-5596 1 ,
  • Chuan Chen 16 ,
  • Yi Zhu 17 ,
  • Patrick Yu-Wai-Man   ORCID: orcid.org/0000-0001-7847-9320 14 , 18 , 19 , 20 ,
  • Ruixuan Wang 21 ,
  • Wei-shi Zheng   ORCID: orcid.org/0000-0001-8327-0003 21 ,
  • Xiaowei Ding   ORCID: orcid.org/0000-0003-3263-9266 2 , 3 &
  • Haotian Lin   ORCID: orcid.org/0000-0003-4672-9721 1 , 22 , 23  

Nature Medicine volume 29, pages 493–503 (2023)


  • Eye manifestations
  • Machine learning
  • Paediatrics
  • Translational research

Early detection of visual impairment is crucial but is frequently missed in young children, who are capable of only limited cooperation with standard vision tests. Although certain features of visually impaired children, such as facial appearance and ocular movements, can assist ophthalmic practice, applying these features to real-world screening remains challenging. Here, we present a mobile health (mHealth) system, the smartphone-based Apollo Infant Sight (AIS), which identifies visually impaired children with any of 16 ophthalmic disorders by recording and analyzing their gazing behaviors and facial features under visual stimuli. Videos from 3,652 children (≤48 months in age; 54.5% boys) were prospectively collected to develop and validate this system. For detecting visual impairment, AIS achieved an area under the receiver operating curve (AUC) of 0.940 in an internal validation set and an AUC of 0.843 in an external validation set collected in multiple ophthalmology clinics across China. In a further test of AIS for at-home implementation by untrained parents or caregivers using their smartphones, the system was able to adapt to different testing conditions and achieved an AUC of 0.859. This mHealth system has the potential to be used by healthcare professionals, parents and caregivers for identifying young children with visual impairment across a wide range of ophthalmic disorders.


Visual impairment is one of the most important causes of long-term disability in children worldwide and has a detrimental impact on education and socioeconomic achievement 1,2. Infancy and toddlerhood (early childhood) are critical periods for visual development 3, during which early detection and prompt treatment of ocular pathology can prevent irreversible visual loss 4,5. Young children cannot articulate visual difficulties, and because they are often unwilling to cooperate, or find it difficult to cooperate, with standard vision tests (for example, optotype tests), age-appropriate tests such as grating acuity cards are commonly used to observe their reactions to visual stimuli 6,7. However, evaluating the vision of young children with these tests requires highly trained operators, which greatly hinders their wider adoption, especially in low-income and middle-income countries, which bear the highest prevalence of visual impairment yet have the fewest medical resources 8. In addition, even when performed by experienced pediatric ophthalmologists, these tests have shown low repeatability in large-scale population screening studies 9,10,11. Therefore, it is imperative to develop an easy-to-use and effective detection tool that enables timely diagnosis of visual impairment in young children and prompt intervention.

Ocular abnormalities causing visual impairment in children often manifest with typical phenotypic features, such as leukocoria (white eye) in cataract 12 and retinoblastoma 13 , eyelid drooping in congenital ptosis 14 , and a cloudy and enlarged cornea in congenital glaucoma 15 . In addition, previous studies have found that dynamic aberrant behavioral features such as abnormal ocular movement, fixation patterns or visual preference can also point toward an underlying ocular pathology in children 16 , 17 . These phenotypic manifestations are frequently seen in ocular diseases, such as amblyopia and strabismus, and they can provide valuable clues for diagnosing visual impairment in young children 18 , 19 , 20 . However, systematically recording and applying these features to real ophthalmic practice are still in their infancy due to the lack of practical and effective tools.

Given the rapid development of mobile health (mHealth) and artificial intelligence (AI) algorithms in identifying or monitoring disease states 21 , 22 , the use of mobile devices, such as smartphones, to record and analyze phenotypic features to help identify visual impairment in young children presents great opportunities. However, developing such a system for large-scale ophthalmic application is hindered by three main challenges: (1) collecting phenotypic data that reliably reflect the visual status of the children in complex environments, (2) generalizing the system for large-scale applications and (3) providing evidence of its feasibility. The major bottleneck that impedes the widespread adoption of many medical AI systems is the limited feasibility and reliability when applied to settings with various data distributions in the real world 23 , 24 . A lack of cooperation is very common in pediatric ophthalmic practice, with constant head movement during examinations introducing test noise that poses several challenges to the stability of the system 25 . For the nascent technology of mHealth, rigorous evidence of clinical application is necessary but generally lacking 21 . These major difficulties explain the current lack of an effective and practical tool for detecting visual impairment in young children.

In this prospective, multicenter, observational study, we developed and validated a smartphone-based system, the Apollo Infant Sight (AIS), to identify visual impairment in young children in real-world settings. AIS was designed to induce a steady gaze in children by using cartoon-like video stimuli and collect videos that capture phenotypic features (facial appearance and ocular movements) for further analysis using deep learning (DL) models with robust quality control design against test noises. We collected more than 25,000,000 frames of videos from 3,652 children using AIS for DL model training and testing. We evaluated the system for detecting visual impairment caused by any of 16 ophthalmic disorders in five clinics at different institutions. Furthermore, we validated this system under different conditions with various test noise levels or ambient interference presented in real-world settings. We also evaluated AIS used by untrained parents or caregivers at home to test its wider applicability. This preliminary study indicates that AIS shows potential for early detection of visual impairment in young children in both clinical and community settings.

Overview of the study

We conducted this prospective, multicenter and observational study (identifier: NCT04237350 ) in three stages from 14 January 2020 to 30 January 2022 and collected a total of 3,865 videos with 25,972,800 frames of images from 3,652 Chinese children (aged ≤48 months) to develop and validate the AIS system in clinical and at-home settings (Fig. 1 ). The AIS system was developed and comprehensively tested (internal validation and reliability analyses under different testing conditions) at the clinic of Zhongshan Ophthalmic Center (ZOC) in the first stage, and was further tested in four other centers (external validation) and community settings (at-home implementation) in the second and third stages, respectively.

Figure 1

a , Workflow of the system. The smartphone-based AIS system consists of two key components: an app for user education, testing preparation and data collection and a DL-based back end for data analysis. Parents or other users utilize the app to induce children to gaze at the smartphone, allowing the app to record their phenotypic states as video data. Then, the phenotypic videos are sent to a quality control module to discard low-quality frames. After automatic quality checking, multiple sets of consecutive qualified frames are extracted from the original video as clips, and the child’s facial regions are cropped from the clips to serve as candidate inputs to the detection/diagnostic models. A small rectangle indicates input or output data, a large rectangle indicates mathematical operation, and a trapezoid indicates DL or machine learning algorithm. b , Participant flow diagram. Children were recruited at multiple clinics to develop and comprehensively test the AIS system in stage 1 and stage 2. Children were recruited online to perform an at-home validation by untrained parents or caregivers in stage 3. I , input video; O , clip-level model outputs; P , key point coordinates; Q , qualified clips; S face , facial regions of the clips; FC, fully connected.

Development of the mHealth AIS system

We developed AIS for detecting visual impairment in young children tailored to the present study (Fig. 1a and Supplementary Video 1 ). A child-friendly app was designed to attract children to maintain their gaze using cartoon-like stimuli (Extended Data Fig. 1 ). The inbuilt front camera of the smartphone recorded 3.5-min videos that captured phenotypic features of the facial appearance and ocular movements during gazing. In this process, the mHealth app interactively guided users (healthcare professionals, volunteers, parents and caregivers) to familiarize themselves with the system and complete standardized preparations, including choosing and maintaining a suitable testing setting (Extended Data Fig. 2 ). After data collection was completed, DL models were applied to analyze the collected features and identify visually impaired children. To ensure the system’s performance in chaotic settings (environments with various interference factors or biases that can impact the system’s performance), a series of algorithm-based quality checking operations, including face detection (max-margin object detection (MMOD) convolutional neural network (CNN)); facial key point localization (ensemble of regression trees); and crying, occlusion and interference factor detections (the EfficientNet-B2 backbone shown in Extended Data Fig. 3a,b ), was first automatically performed by a quality control module to extract consecutive frames of high quality from the original video as short clips. Facial areas were cropped out to further eliminate environmental interference before the qualified clips were sent to a DL-based detection model for identifying visually impaired children and a diagnostic model for discriminating multiple ocular disorders (the EfficientNet-B4 backbone shown in Extended Data Fig. 3c ). 
The final results were returned to the mHealth app to alert users to promptly refer children at high risk of visual impairment to experienced pediatric ophthalmologists for timely diagnosis and intervention.
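The analysis flow described above (quality checking of frames, extraction of consecutive qualified clips, face cropping and clip-level scoring) can be sketched as follows. All model calls here are hypothetical stand-ins for the components named in the text (the MMOD-CNN face detector, the regression-tree key point localizer, the EfficientNet-B2 quality-control networks and the EfficientNet-B4 detection backbone), and the clip length is an assumed value, not one reported in the paper:

```python
# Minimal sketch of the clip-based screening flow; the model functions are
# placeholder stubs, not the actual trained networks.

CLIP_LEN = 16  # frames per clip (illustrative assumption)

def detect_face(frame):
    # stand-in for the MMOD-CNN face detector: returns a bounding box
    return (0, 0, len(frame), len(frame))

def is_low_quality(frame):
    # stand-in for the crying/occlusion/interference-factor CNNs
    return frame is None

def detection_model(clip):
    # stand-in for the EfficientNet-B4 backbone: per-clip probability
    # of visual impairment
    return 0.5

def screen_video(frames):
    """Quality-check frames, group them into clips and average clip scores
    into a child-level predicted probability."""
    qualified = [f for f in frames if not is_low_quality(f) and detect_face(f)]
    clips = [qualified[i:i + CLIP_LEN]
             for i in range(0, len(qualified) - CLIP_LEN + 1, CLIP_LEN)]
    if not clips:
        return None  # too little usable footage: ask the user to re-record
    scores = [detection_model(clip) for clip in clips]
    return sum(scores) / len(scores)
```

Returning `None` when no qualified clips survive mirrors the system's behavior of prompting users to recollect low-quality videos.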

We first developed the data quality control module. Two facial detection and key point localization models were pretrained on publicly available datasets and adopted from an open-source library 26 . Additionally, we developed three CNNs for crying, interference and occlusion detection using images sampled from raw videos collected at the ZOC clinic (Extended Data Fig. 3d and Supplementary Table 1 ). Then, we trained and validated the detection/diagnostic models on the development dataset collected by trained volunteers using iPhone-7/8 smartphones at the clinic of ZOC (Extended Data Fig. 3e ). A total of 2,632 raw videos from 2,632 children were collected, and after automatic quality control, videos of 2,344 children (89.1%) were reserved as the development dataset (Fig. 1b ), including 871 (37.2%) for children in the ‘nonimpairment’ group, 861 (36.7%) in the ‘mild impairment’ group and 612 (26.1%) in the ‘severe impairment’ group. Detailed information on the qualified dataset is provided in Table 1 . Before model training, the development dataset was randomly split into training, tuning and validation sets stratified on sex, age and the ophthalmic condition (Supplementary Table 2 ). The videos utilized for quality control module development were excluded from the detection/diagnostic model validation.
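A random split stratified on sex, age and ophthalmic condition, as described above, can be sketched as follows; the composite stratification key and the split fractions are illustrative assumptions, not values reported in the paper:

```python
import random
from collections import defaultdict

def stratified_split(children, key, fractions=(0.7, 0.1, 0.2), seed=0):
    """Split records into training/tuning/validation sets while preserving
    the joint distribution of the stratification key (e.g. sex x age group
    x ophthalmic condition). Fractions are assumed for illustration."""
    strata = defaultdict(list)
    for child in children:
        strata[key(child)].append(child)
    rng = random.Random(seed)
    train, tune, val = [], [], []
    for records in strata.values():
        rng.shuffle(records)
        n = len(records)
        n_train = round(n * fractions[0])
        n_tune = round(n * fractions[1])
        train += records[:n_train]
        tune += records[n_train:n_train + n_tune]
        val += records[n_train + n_tune:]
    return train, tune, val
```

Splitting within each stratum, rather than globally, keeps rare sex/age/condition combinations represented in all three sets.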

Performance of the detection model in real clinical settings with trained volunteers

The detection model was trained to discriminate visually impaired children from nonimpaired children based on the high-quality clips extracted from the phenotypic videos. At the clip level, the detection model achieved an area under the receiver operating curve (AUC) of 0.925 (95% confidence interval (95% CI), 0.914–0.936) in the internal validation (Extended Data Fig. 4a ). Furthermore, we evaluated the performance of the detection model via an independent external validation performed by trained volunteers using iPhone-7/iPhone-8 smartphones at the routine clinics of four other centers. In this stage, quality checking was embedded in the data acquisition process, and the quality control module automatically reminded volunteers to recollect data when the videos were of low quality (Fig. 1b ). Qualified videos for 298 children undergoing ophthalmic examinations were utilized for final validation, including 188 (63.1%) nonimpaired children, 67 (22.5%) mildly impaired children and 43 (14.4%) severely impaired children (Table 1 ). At the clip level, the detection model achieved an AUC of 0.814 (95% CI, 0.790–0.838) in the external validation (Extended Data Fig. 4b ).

The performance of the detection model in identifying visually impaired children was evaluated by averaging the clip-level predictions. Figure 2a shows distinct clip predicted probability patterns for children with various visual conditions. At the child level, the detection model achieved an AUC of 0.940 (95% CI, 0.920–0.959), an accuracy of 86.5% (95% CI, 83.4%–89.0%), a sensitivity of 84.1% (95% CI, 80.2%–87.4%) and a specificity of 91.9% (95% CI, 86.9%–95.1%) in the internal validation (Fig. 2b and Supplementary Table 3 ). It achieved a child-level AUC of 0.843 (95% CI, 0.794–0.893), an accuracy of 82.6% (95% CI, 77.8%–86.4%), a sensitivity of 80.9% (95% CI, 72.6%–87.2%) and a specificity of 83.5% (95% CI, 77.6%–88.1%) in the external validation (Fig. 2c and Supplementary Table 3 ).
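Child-level AUCs of this kind can be computed from the binary labels and averaged predicted probabilities with the standard rank-based (Mann-Whitney) estimator; this is a generic sketch, not the authors' code:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen impaired child (label 1) receives a higher score than a randomly
    chosen nonimpaired child (label 0); ties count one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The paper reports DeLong CIs for its AUC values; bootstrapping over children with this same estimator is a common alternative way to obtain 95% CIs.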

Figure 2

a , Typical predicted probability patterns of the detection model. b , c , Receiver operating characteristic (ROC) curves of the detection model for distinguishing visually impaired children from nonimpaired children in the internal validation set ( b ) and in the external validation set ( c ). Center lines show ROC curves and shaded areas show 95% CIs. d , The predicted probabilities of children with the indicated ophthalmic disorders and nonimpaired children in the internal validation set. Results are expressed as mean ± s.d. * P  < 0.001 (ranging from 4.83 × 10 −27 for congenital cataract (CC) to 2.40 × 10 −5 for high ametropia (HA) compared with nonimpairment (NI), two-tailed Mann–Whitney U- tests). e , f , ROC curves of the detection model for distinguishing nonimpaired children from children with the indicated ophthalmic disorders that overlap ( e ) or did not overlap ( f ) with those in the training set (AUCs range from 0.747 for limbal dermoid (LD) to 0.989 for congenital ptosis (CP)). g , i , k , The predicted probabilities of the detection model for the nonimpaired, mildly impaired and severely impaired groups in the internal validation set ( g ), in the external validation set ( i ) and in the at-home implementation ( k ). Results are expressed as mean ± s.d. # P  < 0.001, two-tailed Mann–Whitney U- tests. h , j , ROC curves of the detection model for distinguishing mildly or severely impaired children from nonimpaired children in the internal validation set ( h ) and in the external validation set ( j ). l , ROC curves of the detection model for distinguishing impaired, mildly impaired or severely impaired children from nonimpaired children in the at-home implementation. m , The confusion matrix of the diagnostic model. 
n , ROC curves of the diagnostic model for discriminating each category of ophthalmic disorder from the other categories (aphakia (AA), AUC = 0.947 (0.918–0.976); congenital glaucoma (CG), AUC = 0.968 (0.923–1.000); NI, AUC = 0.976 (0.959–0.993); CP, AUC = 0.996 (0.989–1.000); strabismus (SA), AUC = 0.918 (0.875–0.961)). 95% DeLong CIs are shown for AUC values. MO, microphthalmia; NA, nystagmus; OF, other fundus diseases; PA, Peters’ anomaly; PFV, persistent fetal vasculature; PM, pupillary membrane; RB, retinoblastoma; SSOM, systemic syndromes with ocular manifestations; VI, visual impairment.

Furthermore, we investigated whether our system could identify visual impairment associated with any of 16 common ophthalmic disorders at the child level (Table 2 and Supplementary Table 4 ). For each ophthalmic disorder, the predicted probabilities of the detection model were significantly higher than those for nonimpairment (Fig. 2d ). AIS achieved AUCs above 0.800 in 15 of 16 binary classification tasks distinguishing visual impairment of various causes from nonimpairment (Fig. 2e,f and Supplementary Table 5 ), the exception being limbal dermoid, with an AUC of 0.747 (95% CI, 0.646–0.849). Even for diseases not present in the training set, our system showed effective discriminative capability, suggesting wider extendibility and generalizability to other conditions (Fig. 2f ). In addition, we initially recruited children with aphakia (including iatrogenic aphakia cases with common features of visual impairment, accounting for 10.2% of the visually impaired participants enrolled) to increase the diversity of the training samples and improve the robustness of the system. Therefore, to evaluate the performance of AIS in the natural population, without iatrogenic cases or cases with prior medical intervention, we removed the children with aphakia from the validation datasets for further analysis, and AIS remained reliable (Supplementary Table 6 ). These results indicate that AIS can detect the common causes of visual impairment in young children.

Additionally, the performance of AIS in discriminating mild or severe impairment from nonimpairment was assessed at the child level (Fig. 2g–j and Supplementary Table 3 ). Significantly lower predicted probabilities of AIS were obtained for the nonimpaired group than for the mild or severe impairment groups. For discriminating mild impairment from nonimpairment, an AUC of 0.936 (95% CI, 0.912–0.960) and an AUC of 0.833 (95% CI, 0.774–0.892) were obtained for the internal validation and the external validation, respectively. For discriminating severe impairment from nonimpairment, an AUC of 0.944 (95% CI, 0.919–0.969) and an AUC of 0.859 (95% CI, 0.779–0.939) were obtained for the internal validation and the external validation, respectively.

To further evaluate the performance of AIS when applied to a population with a rare-case prevalence of visual impairment, we conducted a ‘finding a needle in a haystack’ test based on the internal validation dataset, with the simulated prevalences ranging from 0.1% to 9%. AIS successfully identified visually impaired children at different simulated prevalences, with AUCs stabilized around 0.940 (Supplementary Table 7 ).
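A "needle in a haystack" test of this kind can be approximated by resampling child-level scores to a target prevalence, as in this sketch; the sample size and the with-replacement resampling scheme are assumptions, not details taken from the paper:

```python
import random

def simulate_prevalence(impaired_scores, healthy_scores, prevalence,
                        n=10_000, seed=0):
    """Resample child-level predicted probabilities so that impaired
    children make up the target prevalence of the simulated population.
    Returns (labels, scores) for downstream metric computation."""
    rng = random.Random(seed)
    n_pos = max(1, round(n * prevalence))
    pos = rng.choices(impaired_scores, k=n_pos)       # sampled with replacement
    neg = rng.choices(healthy_scores, k=n - n_pos)
    labels = [1] * n_pos + [0] * (n - n_pos)
    return labels, pos + neg
```

Because AUC depends only on the score distributions of the two classes, not on their mixing ratio, its expected value is unchanged by the simulated prevalence, which is consistent with the AUCs stabilizing around 0.940 across prevalences of 0.1% to 9%.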

Performance of the detection model in at-home settings with untrained parents or caregivers

After validation in real clinical settings, we further implemented a more challenging application in at-home settings, where parents or caregivers used their own smartphones following the system's instructions (Fig. 1b ). Of the 125 children recruited online from the Guangdong area, 122 (97.6%) successfully completed qualified video collection, among whom 120 children undergoing ophthalmic examinations were enrolled. Other detailed information on the qualified data is summarized in Table 1 . Given the substantial difference in data distributions between home environments and clinics, we fine-tuned the detection model using qualified videos from 32 children and then tested it on a validation set subsequently collected from another 88 children. In the validation set, 31 (35.2%) children were classified as nonimpaired and 57 (64.8%) as visually impaired. AIS performed effectively in the at-home implementation, with an AUC of 0.817 (95% CI, 0.756–0.881) for discriminating clips of visually impaired children from those of nonimpaired children (Extended Data Fig. 4c ). At the child level, significantly lower predicted probabilities were obtained for nonimpaired children than for mildly or severely impaired children (Fig. 2k ). An AUC of 0.859 (95% CI, 0.767–0.950), an accuracy of 77.3% (95% CI, 67.5%–84.8%), a sensitivity of 77.2% (95% CI, 64.8%–86.2%) and a specificity of 77.4% (95% CI, 60.2%–88.6%) were attained for discriminating visual impairment from nonimpairment (Fig. 2l and Supplementary Table 3 ).

Model visualization and explanation

We improved the interpretability of the detection model outputs by visualizing the model results in the internal validation set. After being projected into a two-dimensional space, the feature information extracted by the detection model exhibited distinct patterns between the visually impaired and nonimpaired clips (Fig. 3a ). The attention patterns of the detection model presented by the average heat maps varied with the children’s visual functions and underlying ophthalmic disorders (Fig. 3b,c ). Among the visually impaired children, the detection model focused more on the eyes and areas around the neck (Fig. 3c ). In particular, for the clips extracted from visually impaired samples, those classified by human experts as having abnormal patterns were more likely to be predicted by our system as ‘visual impairment’ than those that were randomly extracted (Fig. 3d,e and Supplementary Table 8 ), indicating that the detection model might pay more attention to the morphological appearance or behavioral patterns of the eye and head regions, as we previously reported 16 .

Figure 3

a , The t -SNE algorithm was applied to visualize the detection model at the clip level. b , Facial detection and facial landmark localization algorithms were applied to detect and crop the facial regions of the children before data served as inputs to the AIS system. c , Average heat maps obtained from the detection model based on the inputs of facial regions in ( b ) for nonimpaired children and for children with the indicated ophthalmic disorders. d , The predicted probabilities for various types of clips were compared: clips randomly extracted from the videos of nonimpaired children, clips randomly extracted from the videos of visually impaired children and clips labeled by experienced ophthalmologists as having abnormal behavioral patterns extracted from videos of visually impaired children. e , Predicted probabilities of the detection model for various types of clips in ( d ) were compared: motionless fixation, n  = 48; squinting, n  = 18; nystagmus (NA), n  = 95; head position, n  = 115; suspected strabismus (SA), n  = 360; random visual impairment (VI), n  = 1,000; nonimpairment (NI), n  = 1,000. Results are expressed as mean ± s.d. * P < 0.01 for comparisons with random VI (motionless fixation, P  = 4.60 × 10 −8 ; suspected SA, P  = 1.09 × 10 −15 ; NA, P  = 1.52 × 10 −7 ; head position, P  = 0.005; two-tailed Mann–Whitney U- tests). AA, aphakia; CC, congenital cataract; CG, congenital glaucoma; CP, congenital ptosis; HA, high ametropia; LD, limbal dermoid; MO, microphthalmia; OF, other fundus diseases; PA, Peters’ anomaly; PFV, persistent fetal vasculature; PM, pupillary membrane; RB, retinoblastoma; SSOM, systemic syndromes with ocular manifestations.

Additionally, the clips misidentified by the system exhibited clustering characteristics different from those of correctly recognized clips (true visually impaired or true nonimpaired clips), with more of the misidentified clips falling in the intermediate zone between the two clusters of correctly recognized clips (Extended Data Fig. 5 ). Moreover, among the 20% of samples with the lowest predicted confidence values, the false identification rate was significantly higher than in the other groups, indicating that the system was equivocal on these samples. To handle cases where the system was unreliable, we filtered out equivocal samples for manual review by ophthalmologists. The results show that system performance improved substantially as the proportion of cases sent for manual review increased. For instance, when cases with confidence values below 0.071 were selected for manual review, accounting for 3% of the total, sensitivity improved from 84.1% to 85.1% and specificity from 91.9% to 93.1%; when cases with confidence values below 0.193 were selected, accounting for 7% of the total, sensitivity and specificity improved to 85.4% and 94.2%, respectively (Extended Data Fig. 6 ).
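A referral strategy of this kind can be sketched as a confidence-based triage rule. Here, confidence is taken as the distance of the predicted probability from the decision threshold, which is an assumption for illustration rather than the paper's own definition of its confidence values:

```python
def triage(predictions, threshold=0.5, review_fraction=0.03):
    """Split predicted probabilities into automatic decisions and manual
    review, routing the lowest-confidence fraction to ophthalmologists.
    Confidence is |p - threshold| (an illustrative assumption)."""
    ranked = sorted(predictions, key=lambda p: abs(p - threshold))
    n_review = round(len(ranked) * review_fraction)
    # lowest-confidence cases go to manual review, the rest are automatic
    return ranked[n_review:], ranked[:n_review]
```

The default `review_fraction` of 3% mirrors the first operating point discussed above; raising it trades reviewer workload for higher sensitivity and specificity.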

Multiple-category classification of ophthalmic disorders

Considering that our system exhibited different attention patterns for visual impairment caused by specific ophthalmic disorders (Fig. 3c ), we further developed a DL-based diagnostic model to differentiate ophthalmic disorders with characteristic attention patterns by the detection model (aphakia, congenital glaucoma, congenital ptosis and strabismus) and nonimpairment at the child level. In the diagnostic validation, our system effectively discriminated multiple ophthalmic disorders, achieving AUCs ranging from 0.918 for strabismus to 0.996 for congenital ptosis (Fig. 2m,n ).

Reliability and adjusted analyses

Stable performance is critical for real-world applications of mHealth and medical AI systems. Thus, we investigated the reliability of AIS at the clinic of ZOC. We first evaluated the influence of patient-related factors, including sex, age, laterality of the eye disorder and the conspicuousness of the phenotypic features, on the performance of AIS. Stratified by sex, AIS achieved an AUC of 0.948 (95% CI, 0.921–0.971) in the boys group and an AUC of 0.931 (95% CI, 0.899–0.961) in the girls group (Fig. 4a ). The predicted probability pattern of AIS remained stable across age conditions (Fig. 4b ), and the system achieved AUCs ranging from 0.909 for age group 4 to 0.954 for age group 3 (Fig. 4c ). Additionally, AIS effectively identified visually impaired children with bilateral or unilateral eye disorders, with an AUC of 0.921 (95% CI, 0.891–0.952) in the unilateral group and an AUC of 0.952 (95% CI, 0.932–0.973) in the bilateral group (Fig. 4d ). AIS also achieved satisfactory performance, with an AUC of 0.939 (95% CI, 0.918–0.960), in identifying hard-to-spot visually impaired children, whose phenotypic features can be insidious and easily overlooked by community ophthalmologists (Supplementary Table 9 ).

Figure 4

a , Performance of AIS in detecting children with visual impairment (VI) based on sex: girls, n  = 254; boys, n  = 315. b , Scatterplot of dispersion of the AIS predicted probability changes by age (months). c , Receiver operating characteristic (ROC) curves of AIS for detecting children with VI by age groups: age group 1, age ≤ 12 months, n  = 98, AUC = 0.925 (0.847–1.000); age group 2, 12 months < age ≤ 24 months, n  = 160, AUC = 0.936 (0.895–0.977); age group 3, 24 months < age ≤ 36 months, n  = 189, AUC = 0.954 (0.928–0.980); age group 4, 36 months < age ≤ 48 months, n  = 122, AUC = 0.909 (0.855–0.964). d , Performance of AIS for identifying children with unilateral or bilateral VI: unilateral, n  = 158; bilateral, n  = 238; nonimpairment (NI), n  = 173. e , Performance of AIS for detecting children with VI under various testing distance conditions: long distance, n  = 47; medium distance, n  = 432; short distance, n  = 90. f , Scatterplot of dispersion of the AIS predicted probability changes by room illuminance (in lux (lx)). g , ROC curves of AIS for distinguishing children with VI under various room illuminance conditions: illuminance group 1, room illuminance ≤ 200 lx, n  = 125, AUC = 0.936 (0.895–0.976); illuminance group 2, 200 lx < room illuminance ≤ 400 lx, n  = 317, AUC = 0.932 (0.901–0.963); illuminance group 3, room illuminance > 400 lx, n  = 127, AUC = 0.950 (0.915–0.985). h , Predicted probabilities of the detection model for repeated detection tests (NI, n  = 102; VI, n  = 85). i , Performance curves of AIS by video duration. In a , d , e , results are expressed as means and 95% CIs with DeLong CIs for AUC values and 95% Wilson CIs for other metrics. ACC, accuracy; SEN, sensitivity; SPE, specificity.

Furthermore, we investigated the reliability of AIS under different data capture conditions, including testing distance, room illuminance, repeated testing and duration of the video recording. Similarly, AIS obtained stable detection performance among groups of different testing distances, with the lowest AUC of 0.935 (95% CI, 0.912–0.958) in the medium-distance group (Fig. 4e ). Additionally, the AIS predicted probability pattern remained stable under different room illuminance conditions (Fig. 4f ). Our system achieved the lowest AUC of 0.932 (95% CI, 0.901–0.963) in the medium illuminance group (Fig. 4g ). In the retest analysis, the system remained robust with an intraclass correlation coefficient for predicted probabilities of 0.880 (95% CI, 0.843–0.908) and a Cohen’s κ for predicted categories of 0.837 (95% CI, 0.758–0.916) in another independent validation population recruited at ZOC (Fig. 4h and Table 1 ). In addition, as the duration of the video recording increased, AIS remained stable and achieved a maximal AUC of 0.931 (95% CI, 0.914–0.956) with a video duration longer than 30 s (Fig. 4i ).
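The Cohen's κ reported for the retest analysis can be computed from the two sets of predicted categories; a minimal sketch for binary screening outcomes (impaired versus nonimpaired):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for test-retest agreement between two runs of the
    screening system on the same children: observed agreement corrected
    for the agreement expected by chance."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from the marginal label frequencies of each run
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n)
        for c in set(ratings_a) | set(ratings_b)
    )
    return (observed - expected) / (1 - expected)
```

By the usual rules of thumb, the reported κ of 0.837 indicates almost perfect test-retest agreement of the predicted categories.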

To further verify that the detection results of our system were reliable and not solely mediated by baseline characteristics as confounders, we examined the odds ratios (ORs) of the AIS predictions adjusted for baseline characteristics at the child level. Even after controlling for potential baseline confounders, the AIS predictions had statistically significant adjusted ORs for detecting visual impairment in the internal and external validations and in the at-home implementation (P < 0.001). The adjusted ORs ranged from 3.034 to 3.248 for tasks in the internal validation (Supplementary Table 10) and from 2.307 to 2.761 for tasks in the external validation (Supplementary Table 11). For the at-home implementation, the AIS predictions had a statistically significant adjusted OR of 2.496 (95% CI, 1.748–3.565, P = 4.815 × 10−7) for detecting visual impairment (Supplementary Table 12).

Performance of the AIS across different smartphone platforms

To test the stability of our system in more complex settings, we applied various gradients of blurring, brightness, color and Gaussian noise adjustments to a dataset randomly sampled from the ZOC validation set, simulating the diversity of data quality collected by different smartphone cameras. Our system remained reliable, achieving AUCs of over 0.800 with blurring factors of no more than 25 or brightness factors of no more than 0.7, AUCs of over 0.930 under different color adjustments and AUCs of over 0.820 under various Gaussian noise adjustments (Extended Data Fig. 7).
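
Perturbations of this kind can be simulated with simple per-pixel operations. The sketch below (pure Python on a hypothetical uniform-gray 4 × 4 frame) illustrates brightness scaling and Gaussian noise injection of the kind used to mimic different camera characteristics; it is an illustration, not the study’s augmentation code:

```python
import random

def adjust_brightness(frame, factor):
    """Scale pixel intensities (0-255); factor > 1 brightens, < 1 darkens."""
    return [[min(255, max(0, round(px * factor))) for px in row]
            for row in frame]

def add_gaussian_noise(frame, sigma, rng):
    """Add zero-mean Gaussian noise with standard deviation sigma, then clip."""
    return [[min(255, max(0, round(px + rng.gauss(0.0, sigma)))) for px in row]
            for row in frame]

rng = random.Random(0)
frame = [[100] * 4 for _ in range(4)]      # dummy uniform-gray 4x4 frame
dark = adjust_brightness(frame, 0.7)       # simulate a dimmer capture
noisy = add_gaussian_noise(frame, 10.0, rng)  # simulate sensor noise
```

In practice such adjustments would be applied to full-resolution video frames before they are fed to the quality control module and detection model.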

Furthermore, an independent validation set from 389 children was collected at ZOC using Huawei Honor-6 Plus and Redmi Note-7 smartphones running the Android operating system to evaluate the performance of AIS (Fig. 1b and Supplementary Table 13). After data quality checking, videos of 361 children (92.8%) were reserved, including 87 (24.1%) children without visual impairment, 169 (46.8%) with mild visual impairment and 105 (29.1%) with severe visual impairment (Table 1). AIS showed significantly higher predicted probabilities for mild or severe impairment than for nonimpairment and achieved an AUC of 0.932 (95% CI, 0.902–0.963) for identifying visual impairment on the Android system at the child level (Extended Data Fig. 8).

Given the high incidence of visual problems during the first few years of life, timely intervention to counter pathological visual deprivation mechanisms during this critical developmental period can prevent or minimize long-term visual loss 3. However, early detection of visual impairment in young children is challenging owing to the lack of accurate and easy-to-use tools applicable to both clinical and community environments. To overcome these challenges, we developed and validated a smartphone-based system (AIS) that provides a holistic and quantitative technique to identify visual impairment in young children in real-world settings. We comprehensively evaluated this system for 16 important causes of childhood vision loss. Our system achieved an AUC of 0.940 in the internal validation and an AUC of 0.843 in the external validation at the clinics of four different hospitals. Furthermore, our system proved reliable when used by parents or caregivers at home, achieving an AUC of 0.859 under these testing conditions.

One of the merits of AIS is its applicability to different ocular diseases. Previous studies have utilized photographs to detect ocular and visual abnormalities in childhood 27,28. These technologies, which focus on a single static image, are not suitable for large-scale applications due to their limited effectiveness and inability to handle multiple abnormalities with variable patterns. Given the complexity of ocular pathologies in children, the prospect of accurately assessing a broad range of ocular conditions is attractive. In our prospective multicenter study, we analyzed more than 25,000,000 frames of information-rich phenotypic videos and accurately identified visual impairment caused by a wide range of sight-threatening eye diseases. Strikingly, AIS was able to detect most of the common causes of visual impairment in childhood, including anterior and posterior segment disorders, strabismus, ocular neoplasms, developmental abnormalities and ocular manifestations of systemic and genetic diseases 29. Although cases such as congenital cataracts tend to be easily diagnosed by experienced doctors in specialist settings, they are still frequently missed in the community, especially in areas with shortfalls in pediatric ophthalmic resources 28. To apply AIS to various scenarios, we recruited cases covering a broad range of eye disorders with variable severity in terms of their impact on vision. Our system was reasonably accurate in identifying mildly impaired children, whose subtle phenotypic features make them easy to miss. Furthermore, our results indicate that AIS can be extended to diseases not encountered during training, demonstrating its broader applicability.

The use of smartphones to detect visual impairment caused by extraocular or systemic diseases is an important future application, but its feasibility remains to be verified. Some systemic diseases, such as cardiovascular, hepatobiliary and renal diseases, can exhibit ocular manifestations that are recognizable by algorithms, as also indicated by our findings in small samples 30,31,32. Furthermore, disorders of the nervous system can impact vision and cause cerebral visual impairment, in which the pathology lies outside the eye; this is a common type of visual impairment in developed countries but was not represented in this study 33,34. Therefore, future work is needed to evaluate the merit of AIS in detecting visual impairment caused by a broader range of diseases, such as cerebral visual impairment, and in reducing the extraocular morbidity associated with systemic diseases in a larger population: for example, the cardiovascular complications linked with Marfan syndrome.

A major strength of AIS is its reliability in real-world practice. Although a large number of medical AI systems have demonstrated high performance in laboratory settings, only a few have proven medically feasible in the real world 23,25. Bias from training data and low stability of model designs greatly limit the generalizability of these AI systems. Previously, we evaluated the feasibility of identifying visual impairment in children by analyzing their phenotypic characteristics using DL algorithms 16. In that study, the evaluation was conducted by experienced experts under a tightly controlled, standardized laboratory setting to strictly control for interference factors, which is not possible in routine ophthalmic practice. In this study, we prospectively collected a large amount of phenotypic data (facial features and ocular movements) to develop a DL system with a highly reliable design. Our results show that AIS exhibited high stability and prediction effectiveness under various testing conditions. Importantly, AIS remained effective in multicenter external validation and, crucially, when rolled out in the community and used by parents or caregivers at home. When transferred to at-home settings, factors such as environmental interference, blurring, brightness, the pixel resolution of different cameras and the influence of untrained operators may impact the system’s performance. Therefore, we used a pilot dataset to fine-tune our system for generalizability to various home environments and broader applications. AIS achieved an acceptable AUC of 0.859 in the subsequent implementation, indicating that it could benefit from further model updating on larger-scale datasets for broader applications.
Importantly, AIS remained stable across 88 different home environments after one round of fine-tuning, demonstrating its potential for general use in a variety of complex environments without requiring regular adaptation or fine-tuning in future applications.

Our findings demonstrate that sensory states, especially vision, can be derived from phenotypic video data recorded using consumer-grade smartphones. Two types of underlying features seemed to be captured by smartphones. First, changes in facial appearance caused by ocular pathologies can be directly recorded by mobile devices, especially those of the ocular surface or adnexa: for example, eyelid drooping in congenital ptosis. Second and more importantly, individuals may display aberrant behaviors to adapt to changes in their sensory modality, a process conserved from arthropods to mammals 35 , 36 and confirmed in human children 16 . Our results show that the model can focus on behavioral features replicated in various eye diseases, such as abnormal ocular movement or alignment/fixation patterns. These common behavioral patterns may broaden the applicability of AIS to multiple ocular diseases, including posterior segment abnormalities that are more challenging to diagnose based on phenotypic video data.

A smartphone-based system to detect ocular pathology in children has obvious clinical implications. Early identification by parents or caregivers of ocular abnormalities facilitates timely referral to pediatric ophthalmologists and prompt intervention. AIS does not require professional medical equipment; smartphones and simple stabilization are sufficient. This low-barrier system is a promising tool for the timely testing of children in the community, which is a major advantage given the rapidly changing nature of the ocular pathology encountered in children. This could have a major impact by improving vision-related outcomes and even survival rates in cases such as retinoblastoma 37 , 38 . Furthermore, AIS is a promising tool to screen young children for ocular abnormalities remotely, which can reduce ophthalmologists’ exposure risk to infectious agents, as exemplified by the impact of the coronavirus disease 2019 (COVID-19) pandemic, in the so-called ‘new normal’ period 39 .

This study has several limitations. First, although we may have missed recruiting some patients with conditions causing only slight visual impairment in specialist clinical settings, our system was satisfactorily accurate in identifying mildly impaired children with subtle phenotypic features. Importantly, AIS maintained reliable performance in detecting visually impaired children who were hard to spot even for community ophthalmologists, suggesting that future work can be expanded to the general population and to groups of children with mild or early-stage ocular pathology. Second, to develop the quality control module and analyze the influencing factors, only a single video was collected for each child at ZOC, accounting for the relatively high rate of unsuccessful cases at this stage. However, our system allowed users to repeat video recordings until qualified videos were acquired, which greatly improved the success rate of identification. Although our tool may not be appropriate for a proportion of uncooperative children, AIS has greatly lowered the minimal operating threshold for untrained users, indicating its potential for general application. Third, our cohorts recruited in clinical settings may not represent the real-world population. Although AIS effectively identified visually impaired children in the finding a needle in a haystack test, with a prevalence simulated to match the general population, a large-scale screening trial is needed to validate the utility of the AIS system in real-world applications. Fourth, AIS requires collecting facial information from children, which may pose a risk of privacy exposure. To mitigate potential privacy risks, future techniques such as lightweight model backbones 40 and model pruning 41 could be applied to deploy the DL system on individual smartphones with no requirement for additional computing resources.
In addition, digital fingerprint technology, such as blockchain 42, can be applied to monitor data usage and effectively mitigate abuse. We also developed a real-time three-dimensional facial reconstruction technology that irreversibly erases biometric attributes while retaining gaze patterns and eye movements 43, which could be used in the future to safeguard children’s privacy when using AIS.

In conclusion, we developed and validated an innovative smartphone-based technique to detect visual impairment in young children affected by a broad range of eye diseases. Given the ubiquity of smartphones, AIS is a promising tool that can be applied in real-world settings for the secondary prevention of visual loss in this particularly vulnerable age group.

Ethics approval

The predefined protocol of the clinical study was approved by the Institutional Review Board/Ethics Committee of ZOC and prospectively registered at ClinicalTrials.gov (identifier: NCT04237350); the protocol is provided in the Supplementary Note. Consent for publication was obtained from all individuals whose eyes or faces are shown in the figures or videos. Before data collection, informed written consent was obtained from at least one parent or guardian of each child. The investigators followed the requirements of the Declaration of Helsinki throughout the study.

Study design and study population

This prospective, multicenter and observational study was conducted between 14 January 2020 and 30 January 2022 to recruit children for the development and validation of the mHealth system in three stages (Fig. 1b). Major eligibility criteria included an age of 48 months or younger and informed written consent obtained from at least one parent or guardian of each child. We did not include children with central nervous system diseases, mental illnesses or other known illnesses that could affect their behavioral patterns in the absence of ocular manifestations. Children who could not cooperate to complete the ophthalmic examinations or the AIS detection test were excluded. We also excluded children who had received ocular interventions or treatments in the month immediately preceding data collection.

In the first stage, completed from 14 January 2020 to 15 September 2021, children were enrolled at the clinic of ZOC (Guangdong Province) to develop and comprehensively validate (internal validation and reliability analyses) the system. In the second stage, which occurred from 22 September 2021 to 19 November 2021, children were enrolled at the clinics of the Second Affiliated Hospital of Fujian Medical University (Fujian Province), Shenzhen Eye Hospital (Guangdong Province), Liuzhou Maternity and Child Healthcare Hospital (Guangxi Province) and Beijing Children’s Hospital of Capital Medical University (Beijing) to additionally evaluate the system (external validation). We selected these sites from three provinces across northern and southern China to represent the variation in clinical settings. In the first two stages, recruited children underwent ophthalmic examinations by clinical staff, and phenotypic videos were collected by trained volunteers using mHealth apps installed on iPhone-7 or iPhone-8 smartphones at each center. In the third stage, conducted from 24 November 2021 to 30 January 2022, we advertised our study through the online platform of the Pediatric Department of ZOC and through WeChat social media. We recruited children and their parents or caregivers online from the Guangdong area for at-home implementation. The investigators recruited the children following the same eligibility criteria as in the previous two stages by collecting their basic information and medical history online. In addition, children who could not come to ZOC for an ophthalmic assessment or who had been included in other stages of this study were excluded. Untrained parents or caregivers recorded the phenotypic videos with their smartphones according to the instructions of the AIS app at home (Extended Data Figs. 1 and 2). The quality control module automatically reminded parents or caregivers to repeat data collection when video recordings were of insufficient quality.
In this stage, all children who completed successful video recordings underwent ophthalmic examinations at ZOC. A total of 3,652 children were finally enrolled, contributing more than 25,000,000 video frames for the development and validation of the system.

Definition of visual impairment

Comprehensive functional and structural examinations were performed to stratify children’s visual conditions for developing and validating the DL-based AIS. For unified examination, Teller acuity cards (Stereo Optical Company) were used to measure children’s monocular visual acuity 44. In addition, high-resolution slit lamp examinations, fundoscopy and cycloplegic refraction were used to detect ocular abnormalities. Additional examinations, such as intraocular pressure measurement, ultrasound, computed tomography scans and genetic tests, were performed at the discretion of experienced pediatric ophthalmologists when necessary.

According to the results of the abovementioned examinations and a referenced distribution of monocular visual acuity 45 , experienced pediatric ophthalmologists comprehensively stratified children’s visual conditions into three groups. Children with the best-corrected visual acuity (BCVA) of both eyes in the 95% referenced range with no abnormalities of structure or other examination results were assigned to the nonimpaired group. Children with the BCVA in the 99% referenced range in both eyes with abnormalities of structure or other examination results were assigned to the mildly impaired group. Children with the BCVA of at least one eye outside the 99% referenced range or worse than light perception with structural abnormalities or other examination results were assigned to the severely impaired group 16 . We recruited visually impaired children with primary diagnoses of the following 16 ocular disorders: aphakia, congenital cataract, congenital glaucoma, high ametropia, Peters’ anomaly, nystagmus, congenital ptosis, strabismus, persistent fetal vasculature, retinoblastoma, other fundus diseases, limbal dermoid, microphthalmia, pupillary membranes, systemic syndromes with ocular manifestations and other ocular conditions (Table 2 and Supplementary Table 4 ). A tiered panel consisting of two groups of experts assigned and confirmed the primary diagnosis as the most significant diagnostic label for each child. The first group of experts consisted of two pediatric ophthalmologists with over 10 years of experience in each recruiting ophthalmic center who separately provided the preliminary labeling information. If a consensus was not reached at this stage, a second group of more senior pediatric ophthalmologists with over 20 years of experience at ZOC verified the diagnostic labels as the ground truth. The diagnoses of children recruited online for the at-home implementation were made by experts at ZOC following the same criteria.
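
The three-group stratification rule described above can be expressed as a small decision function. This is a hypothetical sketch of the logic, with Boolean inputs standing in for the examination results, not the clinical protocol itself:

```python
def stratify_visual_condition(bcva_in_95, bcva_in_99, has_abnormality):
    """Sketch of the three-group rule. bcva_in_95 / bcva_in_99: (left eye,
    right eye) booleans for whether each eye's BCVA lies inside the 95% /
    99% referenced range; has_abnormality: whether structural or other
    examination abnormalities were found."""
    if all(bcva_in_95) and not has_abnormality:
        return "nonimpaired"
    if has_abnormality and all(bcva_in_99):
        return "mildly impaired"
    if has_abnormality and not all(bcva_in_99):
        return "severely impaired"
    return "indeterminate"  # remaining combinations: expert adjudication

group = stratify_visual_condition((True, True), (True, True), False)
```

In the study, borderline or conflicting findings were resolved by the tiered expert panel rather than by a mechanical rule.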

Concept of the AIS system

The AIS system consisted of a smartphone app (available for iPhone and Android operating systems) for data collection and a DL back end for data analysis (Fig. 1a and Extended Data Fig. 1 ). To ensure the quality of data collected in real-world settings, AIS interactively instructed users to follow a standardized preparation sequence for data collection (Extended Data Fig. 2 ). Before data collection, a short demo video was displayed to instruct users on the standard operation and how to choose an appropriate environment to minimize testing biases (for example, room illuminance, background, testing distance and interference). Once the smartphone was firmly in place, a face-positioning frame was shown on the screen to help adjust the distance and position of the child in relation to the smartphone. After all preparations were completed properly, AIS played a cartoon-like video stimulus lasting approximately 3.5 min to attract children’s attention, and the inbuilt front camera recorded the children’s phenotypic features (ocular movements and facial appearance) in video format.

Then, the collected data were transferred to the DL-based back end, where the quality control module automatically performed quality checking on each frame first. To eliminate background interference, the children’s facial regions were then cropped out of consecutive frames of sufficient quality to form short video clips as inputs of the subsequent DL models for final decision-making (a detection model to distinguish visually impaired children from nonimpaired individuals and a diagnostic model to discriminate multiple ocular disorders). The DL models produced classification probabilities for short video clips, which were eventually merged into the video-level classification probability as the final outcome by averaging. The final results were returned to the mHealth app to alert users to promptly refer children at high risk of visual impairment to experienced pediatric ophthalmologists for further diagnosis and intervention.

Deep quality control module

To ensure prediction reliability, we adopted a strict data quality control strategy to ensure that the input clips of the detection/diagnostic models satisfied certain quality criteria (Fig. 1a ). First, for each frame, the child’s facial area was detected, and frames without successful face detection were rejected. If two or more faces were detected in a given frame, it suggested that the child’s parents or other persons were inside the scene, and such a frame was also rejected. The facial region detection algorithm was based on MMOD CNN 46 , which consisted of a series of convolutional layers for feature learning and max-margin operation during model training. In this study, the MMOD CNN face detector pretrained on publicly available datasets was adopted from the Dlib Python Library, which has been proven to be effective and robust in facial detection tasks 26 .
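
The face-count rejection rule can be sketched as follows; the detector itself (for example, Dlib’s MMOD CNN) is abstracted away, and only hypothetical per-frame face counts are assumed:

```python
def keep_frame(num_faces):
    """Keep a frame only if exactly one face (the child's) is detected:
    zero faces means detection failed, and two or more faces suggest a
    parent or another person is in the scene."""
    return num_faces == 1

# Hypothetical per-frame face counts, as a detector would report them.
face_counts = [1, 1, 0, 2, 1, 1]
kept_indices = [i for i, n in enumerate(face_counts) if keep_frame(n)]
```

Frames failing this rule are discarded before the subsequent landmark and quality checks.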

Second, a facial key point localization algorithm was applied to the detected facial area to extract the landmarks of facial regions, including the left eye, right eye, nose tip, chin and mouth corners, which served as the reference coordinates for the cropping of facial regions. The facial key point localization algorithm was realized based on a pretrained ensemble of regression trees, which was also provided by the Dlib Python Library 47 , 48 . We adopted a cascade of regressors to take the facial region of the frame as the input. The network was able to learn coarse-to-fine feature representations of the child’s face, especially details of the facial patterns. The output of this model was then fitted to the coordinates representing facial structures to generate 68 key target points. The coordinates of the key points then served as the reference for facial region cropping. All video data and image data processing were performed using the FFmpeg toolkit and OpenCV Python Library 49 .
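
Landmark-guided cropping can be sketched as an axis-aligned bounding box around the key points, expanded by a relative margin; the margin value below is an assumption for illustration, not the study’s parameter:

```python
def crop_box_from_landmarks(points, margin=0.2):
    """Bounding box around facial key points ((x, y) pairs), expanded on
    each side by `margin` times the box width/height, as a sketch of
    landmark-guided facial-region cropping."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)

# Four hypothetical landmarks standing in for the 68-point set.
box = crop_box_from_landmarks([(0, 0), (10, 0), (10, 10), (0, 10)])
```

In the pipeline, the resulting box would be used to crop the child’s facial region out of each frame before clip formation.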

Then, a combination of crying, interference and occlusion classification models based on EfficientNet-B2 networks (Extended Data Fig. 3a,b) was applied to each frame; these models were trained on the data collected at ZOC (Extended Data Fig. 3d and Supplementary Table 1) 50. During model training and inference, the input frame was first rescaled to 384 × 384 resolution and then sent into the models for deep feature representation learning (Supplementary Table 14). Positive outputs by the models indicated that the child was crying, was being interfered with or had their facial region blocked by objects such as toys or another person’s hands, and the corresponding frames were discarded. In practice, we fine-tuned models pretrained on the ImageNet public dataset 51.

Eventually, the remaining frames were considered high-quality candidates, and consecutive high-quality frames were selected to form short video clips. Each clip lasted at least 1.5 s and at most 5 s. The child’s facial region within each clip was then cropped out to serve as the final input of the subsequent detection/diagnostic models based on the facial key point coordinates to eliminate the interference of the background region. A qualified video should contain more than ten clips; otherwise, the video was treated as a low-quality sample and discarded.
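
The clip-forming step can be sketched as follows, assuming a hypothetical frame rate. The 1.5–5 s bounds and the more-than-ten-clips rule come from the text; splitting long runs into maximal-length clips is an assumption of this sketch:

```python
def extract_clips(frame_ok, fps, min_s=1.5, max_s=5.0):
    """Group consecutive high-quality frames into clips lasting between
    min_s and max_s seconds; runs longer than max_s are split."""
    min_len, max_len = int(min_s * fps), int(max_s * fps)
    clips, run_start = [], None
    for i, ok in enumerate(frame_ok + [False]):  # sentinel closes last run
        if ok and run_start is None:
            run_start = i
        elif not ok and run_start is not None:
            start = run_start
            while i - start >= min_len:
                end = min(start + max_len, i)
                clips.append((start, end))  # half-open frame range
                start = end
            run_start = None
    return clips

def video_is_qualified(clips):
    """A qualified video must contain more than ten clips."""
    return len(clips) > 10

demo = extract_clips([True] * 30 + [False] * 5 + [True] * 10, fps=10)
```

At 10 frames per second, the 30-frame run yields one 3 s clip, while the trailing 10-frame run (1 s) is too short to form a clip.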

DL framework of the detection/diagnostic models

Two models with different clinical purposes were developed in this study: a detection model to distinguish visually impaired children from nonimpaired children and a five-category diagnostic model to discriminate specific ophthalmic disorders (aphakia, congenital glaucoma, congenital ptosis and strabismus) and nonimpairment. The backbone of each DL model was built on a deep convolutional network known as EfficientNet-B4 (Extended Data Fig. 3c and Supplementary Table 14) 50. The models made predictions on the children’s cropped facial regions. Specifically, spatial cues of the input clips were learned by cascaded convolutional layers, while temporal cues were integrated by temporal average pooling layers, an approach inspired by successful applications in gait recognition 52. The temporal average pooling operator was given by \(\frac{1}{n}\sum_{i=1}^{n}\vec{x}_i\), where n was the number of frames in the input clip and \(\vec{x}_i\) was the feature map of each frame output by the last convolutional layer of the network. Before training, all convolutional blocks were initialized with the parameters of models pretrained on the ImageNet dataset 51. At the inference stage, class scores given by the models were treated as the final clip-level probability outcomes. For the detection model, the output of the last classification layer, denoted \(x_i\), was normalized to the range 0.00–1.00 for each clip using the sigmoid function \(p_i = \frac{1}{1 + \exp(-x_i)}\), representing the final probability of the ith clip being classified as a visually impaired candidate.
To train the detection model, the cost function was given by the classic binary cross-entropy loss \(L = -\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i\log(p_i) + (1-\hat{y}_i)\log(1-p_i)\right)\), where N was the number of clips within each batch, \(\hat{y}_i\) was the ground truth label of the ith clip and \(p_i\) was the output classification probability of the model.
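
These operations are standard; the following pure-Python sketch writes out the temporal average pooling, sigmoid and binary cross-entropy formulas above with toy values (an illustration of the formulas, not the study’s EfficientNet-B4 training code):

```python
import math

def temporal_average_pool(frame_features):
    """(1/n) * sum of the per-frame feature vectors from the last conv layer."""
    n = len(frame_features)
    dim = len(frame_features[0])
    return [sum(f[j] for f in frame_features) / n for j in range(dim)]

def sigmoid(x):
    """p_i = 1 / (1 + exp(-x_i)), the clip-level impairment probability."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_cross_entropy(probs, labels):
    """L = -(1/N) * sum_i [y_i*log(p_i) + (1 - y_i)*log(1 - p_i)]."""
    n = len(probs)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / n

pooled = temporal_average_pool([[1.0, 2.0], [3.0, 4.0]])  # -> [2.0, 3.0]
probs = [sigmoid(x) for x in [2.0, -1.0, 0.0]]            # toy clip logits
loss = binary_cross_entropy(probs, [1, 0, 1])
```

In the actual model these operations act on high-dimensional convolutional feature maps rather than two-element vectors.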

The diagnostic model was developed based on the same EfficientNet-B4 backbone as the detection model. The only difference was that the output of the diagnostic model was activated by a five-category softmax function that indicated the probability of each class: \(p_k = \frac{e^{x_k}}{\sum_{j=1}^{5} e^{x_j}}\), where \(x_k\) was the output of the last classification layer for the kth class. The cost function of the network was given by the stochastic cross-entropy loss \(L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{5}\hat{y}_k^i \log(p_k^i)\), where N was the batch size and \(\hat{y}_k^i \in \{0,1\}\) was a Boolean indicator variable for the ith input clip within each batch, denoting whether the kth class matched the ground truth label of the ith clip.
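
Similarly, the softmax activation and cross-entropy loss can be written out directly (a sketch of the formulas with toy values, not the study’s implementation):

```python
import math

def softmax(logits):
    """p_k = exp(x_k) / sum_j exp(x_j); subtract the max for stability."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(prob_rows, onehot_rows):
    """L = -(1/N) * sum_i sum_k y_k^i * log(p_k^i) over a batch of clips."""
    n = len(prob_rows)
    return -sum(y * math.log(p)
                for probs, onehot in zip(prob_rows, onehot_rows)
                for p, y in zip(probs, onehot) if y) / n

uniform = softmax([0.3, 0.3, 0.3, 0.3, 0.3])  # equal logits -> equal probs
loss = cross_entropy([[0.7, 0.1, 0.1, 0.05, 0.05]], [[1, 0, 0, 0, 0]])
```

With equal logits the five class probabilities are each 0.2, and a single correctly ranked clip contributes a loss of −log(0.7).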

Child-level classification was based on clip-level predictions. A sliding window integrated with the quality control module was applied along the temporal dimension of the whole video to extract high-quality clips. Such clips then served as the candidate inputs for detection/diagnostic models. For the detection model, if the average score of the clips exceeded 0.50 within each video, the child was eventually classified as a visually impaired individual. For the diagnostic model, the category with the highest average probability was treated as the final prediction outcome.
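
The child-level decision rule can be sketched as follows (hypothetical clip probabilities; the averaging and the 0.50 threshold are from the text):

```python
def detect_child(clip_probs, threshold=0.50):
    """Average the clip-level probabilities over a video; flag the child
    as visually impaired if the mean score exceeds the threshold."""
    score = sum(clip_probs) / len(clip_probs)
    return score, score > threshold

def diagnose_child(clip_prob_rows, classes):
    """Average per-class probabilities across clips and predict the class
    with the highest mean probability."""
    n = len(clip_prob_rows)
    means = [sum(row[k] for row in clip_prob_rows) / n
             for k in range(len(classes))]
    return classes[means.index(max(means))]

score, impaired = detect_child([0.8, 0.7, 0.4])  # hypothetical clip scores
label = diagnose_child(
    [[0.1, 0.2, 0.5, 0.1, 0.1], [0.2, 0.1, 0.6, 0.05, 0.05]],
    ["aphakia", "congenital glaucoma", "congenital ptosis", "strabismus", "NI"],
)
```

Averaging over many short clips damps the effect of any single noisy clip on the final child-level prediction.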

Model training and internal validation

We first developed the data quality control module using both publicly available datasets and the ZOC dataset (Supplementary Table 1 ). Then, we trained and validated the detection/diagnostic models with the ground truth of the visual conditions using the development dataset at ZOC. In this stage, data collection preceded the development of the quality control module, so raw videos without quality checking were collected. In total, raw videos from 2,632 children undergoing ophthalmic examinations were collected by trained volunteers using the mHealth apps installed on iPhone-7 or iPhone-8 smartphones. After initial quality checking by the quality control module, qualified videos of 2,344 (89.1%) children were reserved as the development dataset, which was randomly split into training, tuning and validation (internal validation) sets using a stratified sampling strategy according to sex, age and the category of ophthalmic disorder to train and internally validate the detection/diagnostic models (Fig. 1b , Extended Data Fig. 3e and Supplementary Table 2 ). The age distribution and the proportions of children with unilateral and bilateral severe visual impairment for different datasets are shown in Supplementary Tables 15 and 16 , respectively. Internal validation refers to the assessment of the performance of the selected optimized model, after training and hyperparameter selection and tuning, on the independent datasets from the same settings as training datasets. The top-performing checkpoint was selected on the basis of accuracy on the tuning set. In particular, the videos utilized for quality control module development did not overlap with those in the detection/diagnostic model validation.

Finding a needle in a haystack test

To estimate the performance of the AIS system in the general population with a rare-case prevalence of visual impairment, we simulated a gradient of prevalences ranging from 0.1% to 9% to conduct a finding a needle in a haystack test. For each simulated prevalence, we resampled 10,000 children based on the internal validation dataset in a bootstrap manner to test whether the AIS system could pick up the ‘needle’ (visually impaired children at the simulated prevalence) in the ‘haystack’ (10,000 resampled children) and repeated this process 100 times to estimate the 95% CIs.
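
One round of such a simulation can be sketched as follows; the sensitivity and specificity values here are illustrative placeholders, not the reported AIS metrics, and the real test resampled actual validation-set children rather than synthetic outcomes:

```python
import random

def simulate_screen(prevalence, n_pop, sensitivity, specificity, rng):
    """One bootstrap round: draw n_pop children at the simulated prevalence
    and count how a screen with the given operating characteristics would
    classify them (tp, fp, fn, tn)."""
    tp = fp = fn = tn = 0
    for _ in range(n_pop):
        impaired = rng.random() < prevalence
        p_positive = sensitivity if impaired else 1.0 - specificity
        positive = rng.random() < p_positive
        if impaired and positive:
            tp += 1
        elif impaired:
            fn += 1
        elif positive:
            fp += 1
        else:
            tn += 1
    return tp, fp, fn, tn

rng = random.Random(42)
tp, fp, fn, tn = simulate_screen(0.01, 10_000, 0.90, 0.95, rng)
```

Repeating such rounds (100 times in the study) yields the distribution from which the 95% CIs are estimated.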

Data augmentation

To ensure better model capacity and reliability in complex environments, data augmentation was performed during model training using brightness and contrast adjustments together with blurring techniques. Specifically, the brightness of the input frames was randomly adjusted by a factor of 0.40, and the contrast was randomly adjusted by a factor of 0.20. Blurring techniques included Gaussian blur, median blur and motion blur, with the factor of all blurring techniques set to five. Each input frame had a probability of 0.50 of undergoing data augmentation (Supplementary Table 17). All data augmentation processes were based on a publicly available Python library known as Albumentations 53.

Multicenter external validation

External validation refers to the assessment of the performance of the AI system using independent datasets captured from different clinical settings, thereby ensuring the generalizability of the system. Trained volunteers used mHealth apps installed on iPhone-7 or iPhone-8 smartphones to perform external validation in the ophthalmology clinics of the Second Affiliated Hospital of Fujian Medical University, Shenzhen Eye Hospital, Liuzhou Maternity and Child Healthcare Hospital and Beijing Children’s Hospital of Capital Medical University. In this stage, the quality control module automatically reminded volunteers to repeat data collection when videos were of low quality. In total, 305 children were recruited, and qualified videos were successfully collected for 301 children (98.7%). Qualified videos for 298 children undergoing ophthalmic examinations were reserved for final validation of the detection model (see Fig. 1b and Table 1 for details of the participants and the dataset used for external validation).

Implementation by untrained parents or caregivers at home

We further challenged our system in an application administered by untrained parents or caregivers using their own smartphones in daily routines (Fig. 1b ). Children (independent of the development and external validation participants) were recruited online, and their parents or caregivers used AIS at home on their own, following the system’s instructions to collect qualified videos and perform tests without pretraining and without controlling for any sources of bias before testing, such as the brand and model of smartphone or the home environment. This process generated data with highly variable distributions, placing stringent demands on the generalizability and extensibility of the DL-based system. Thus, before final implementation, we performed a pilot study to collect a dataset for fine-tuning our system to uncontrolled home environments. To efficiently evaluate the performance of AIS for identifying visual impairment in at-home settings, a sufficient proportion of visually impaired children with various ocular diseases was recruited. Of the 125 children recruited, 122 (97.6%) successfully completed the detection tests and collected qualified videos; among them, 120 children undergoing ophthalmic examinations were enrolled to fine-tune and evaluate the detection model. We fine-tuned the detection model using the qualified videos collected first, from 32 children, and then tested it on the validation set collected subsequently from another 88 children. See Fig. 1b and Table 1 for more information on the fine-tuning and implementation.

Reliability analyses and adjusted analyses

To test the stability and generalizability of AIS under various conditions, investigators conducted a batch of reliability analyses and adjusted analyses (Fig. 1b and Table 1 ).

Reliability across different smartphone platforms

We applied blur, brightness, color or Gaussian-noise adjustments at different gradients to a dataset ( n  = 200 children and n  = 200 qualified videos) randomly sampled from the ZOC validation set to simulate the characteristics of data collected by various cameras and evaluate the reliability of AIS. Furthermore, we collected another dataset from an independent population of children at ZOC to assess the reliability of the AIS system across different operating systems. In total, raw videos from 389 children undergoing ophthalmic examinations were collected by trained volunteers using two Android smartphones, the Redmi Note 7 and the Huawei Honor 6 Plus. After initial quality checking, qualified videos of 361 (92.8%) children were reserved for testing. The technical specifications of the smartphones used in this study are summarized in Supplementary Table 13 .

Retest reliability analysis

To evaluate retest reliability, two volunteers each performed the detection test for every child in another independent population recruited at ZOC, with the two tests at least 1 day apart. Raw videos from 213 children undergoing ophthalmic examinations were collected using iPhone-7 or iPhone-8 smartphones. Qualified videos of 187 (87.8%) children were reserved for retest analysis after initial quality checking (Fig. 1b and Table 1 ). An intraclass correlation coefficient was calculated for the repeated predicted probabilities of the detection model, and a Cohen’s κ was calculated for the repeated predicted categories.
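
For the agreement of repeated predicted categories, Cohen's κ has the standard form κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the chance agreement. A minimal sketch (not the study's code; the function name is ours):

```python
import numpy as np

def cohens_kappa(cat1, cat2):
    """Cohen's kappa for two repeated categorical predictions.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the agreement expected by chance from the marginal frequencies
    of each rater (here, each test occasion).
    """
    cat1, cat2 = np.asarray(cat1), np.asarray(cat2)
    classes = np.union1d(cat1, cat2)
    p_o = np.mean(cat1 == cat2)
    p_e = sum(np.mean(cat1 == c) * np.mean(cat2 == c) for c in classes)
    return (p_o - p_e) / (1.0 - p_e)
```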

Hard-to-spot test

To investigate the influence of the conspicuousness of phenotypic features on the AIS system, a panel of 14 community ophthalmologists with 3–5 years of clinical experience identified ‘likely impaired’ children from the phenotypic videos in the ZOC validation dataset. True impaired and nonimpaired children were mixed at a ratio of 1:1 during identification, and each case was independently reviewed by three ophthalmologists. When no more than one ophthalmologist labeled a truly impaired child as ‘likely impaired’, that child was classified as a hard-to-spot case with insidious phenotypic features rather than as a relatively evident case. The performance of the AIS system was then assessed separately for relatively evident and hard-to-spot cases.

Other reliability analyses

We tested AIS under different room illuminance conditions. Photometers (TESTES-1330A; TES Electrical Electronic Corp.) were used to measure the mean room illuminance intensity before and after data collection. The following criteria were applied to estimate the distances between the children and the smartphones to assess the reliability of AIS in different testing distance groups. When most of the vertical lengths of a child’s head regions were less than one-third of the height of the smartphone screen at the frame level, the video was determined to be taken from a long distance. When most of the lengths were between one-third and one-half of the height of the screen, the video was judged to be taken from a medium distance, and when most of the lengths were larger than one-half of the height of the screen, the video was judged to be taken at a close distance. For each full-length video, subvideos with various durations were generated to serve as inputs to evaluate the influence of the duration of the video recording on the performance of AIS. We also evaluated the performances of AIS grouped by patient-related factors including sex, age and laterality of the eye disorder.
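
The testing-distance criteria above amount to a simple majority rule over per-frame head-height fractions. A sketch (the function name and input encoding are ours):

```python
def classify_distance(head_height_fracs):
    """Classify a video's testing distance from per-frame head heights.

    head_height_fracs: for each frame, the vertical length of the child's
    head region as a fraction of the smartphone-screen height. Following
    the criteria described above, the video is assigned to the group into
    which most frames fall: long (< 1/3), medium (1/3 to 1/2) or
    close (> 1/2).
    """
    counts = {"long": 0, "medium": 0, "close": 0}
    for frac in head_height_fracs:
        if frac < 1 / 3:
            counts["long"] += 1
        elif frac <= 1 / 2:
            counts["medium"] += 1
        else:
            counts["close"] += 1
    return max(counts, key=counts.get)
```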

Adjusted analyses

To further verify that the predictions of this system were not merely mediated by sample characteristics acting as confounders, we performed adjusted analyses, using logistic regression models to examine the ORs of the system’s predictions after adjustment for sample characteristics.
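
In such a model, the adjusted OR for each predictor is the exponentiated regression coefficient. A self-contained NumPy sketch (the study would have used a standard statistics package; the gradient-descent fit and function name here are ours, for illustration only):

```python
import numpy as np

def odds_ratios(X, y, lr=0.1, n_iter=5000):
    """Fit a logistic regression by gradient ascent and return odds ratios.

    X: (n, p) matrix of predictors (e.g. the system's prediction plus
    sample characteristics as covariates); y: binary outcome. The odds
    ratio for each predictor is exp(coefficient), i.e. the multiplicative
    change in odds per unit increase, adjusted for the other columns.
    """
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w += lr * X.T @ (y - p) / len(y)      # log-likelihood gradient step
    return np.exp(w[1:])  # drop the intercept
```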

Detection model visualization and explanation

Two strategies were used to interpret and visualize the detection model: t -distributed stochastic neighbor embedding ( t -SNE) and gradient-weighted class activation mapping (Grad-CAM) 54 , 55 , 56 . The former was used to visualize the high-dimensional activation status of the deep CNN at the clip level by projecting its feature vector into a two-dimensional space, and the latter was adopted to create a heat map showing the area within each frame of the clip that contributed most to the output class of the network. In practice, the feature vectors output by the temporal average pooling layer and flatten operation and the feature maps output by the last convolutional layer before the temporal average pooling operation were chosen to visualize the results generated by t -SNE and Grad-CAM, respectively. Specifically, 1,200 visually impaired clips and 1,200 nonimpaired clips were randomly selected from the ZOC validation set to perform t -SNE analysis. To generate average heat maps, we randomly sampled ten videos for each ophthalmic disorder from the internal validation dataset. Since each video had multiple clips, we ranked these clips according to the model predicted probabilities and selected the two clips with the highest probabilities. For each selected clip, we took 30 frames at equal intervals to generate the corresponding average heat map. In summary, we had a total of 600 heat maps for each type of disorder, and we summed and averaged these heat maps to obtain the typical heat map for a certain disease. A public machine learning Python library named Scikit-learn was used to generate two-dimensional coordinates of t -SNE results, and Grad-CAM analysis was performed based on an open-source GitHub code set 57 .
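
The Grad-CAM step can be summarized as weighting each feature map by the global average of its gradient and rectifying the weighted sum. An illustrative NumPy computation on synthetic arrays (not the model's actual tensors; the function name is ours):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from a convolutional layer's outputs.

    feature_maps: (K, H, W) activations A_k of the last convolutional
    layer; gradients: (K, H, W) gradients of the class score with respect
    to A_k. Each channel weight alpha_k is the global average of its
    gradient, and the heat map is ReLU(sum_k alpha_k * A_k), rescaled to
    [0, 1] for display.
    """
    alphas = gradients.mean(axis=(1, 2))              # (K,) channel weights
    cam = np.tensordot(alphas, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```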

Additionally, we compared the model-predicted probabilities of three groups of clips (clips randomly sampled from videos of nonimpaired children, clips randomly sampled from videos of visually impaired children, and clips annotated by experts as having abnormal behavioral patterns from videos of visually impaired children) to investigate whether the detection model focused on specific behavioral patterns in children (Fig. 3d and Supplementary Table 8 ).

Triage-driven approach to select equivocal cases for manual review

We assessed a triage strategy for situations in which the system was likely to be unreliable, by selecting equivocal cases in the internal validation set for manual review. An equivocal case was a child for whom the AIS system produced a prediction with a low confidence value, given by | p  − 0.50|, where p is the predicted probability for that child. Three ophthalmologists from ZOC with over 10 years of clinical experience vetted the phenotypic videos of the equivocal cases and the AIS predictions by voting; additional information, including baseline information and medical histories, was provided when necessary. To evaluate this triage strategy, an increasing ratio, from 0 to 19%, of the equivocal cases with the lowest confidence values was chosen for manual review.
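
The selection rule above, confidence = |p − 0.50| with the lowest-confidence fraction routed to review, can be sketched as (function name ours):

```python
def select_for_review(probs, review_ratio):
    """Pick the indices of equivocal cases for manual review.

    probs: predicted probabilities of visual impairment, one per child.
    Confidence is |p - 0.50|; the review_ratio fraction of cases with the
    lowest confidence values is routed to ophthalmologist review.
    """
    confidence = [abs(p - 0.50) for p in probs]
    order = sorted(range(len(probs)), key=lambda i: confidence[i])
    n_review = round(len(probs) * review_ratio)
    return sorted(order[:n_review])
```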

Statistical analysis

The primary outcomes were the AUCs of the detection/diagnostic models. The secondary outcomes included the accuracy, sensitivity and specificity of the models and the reliability of the detection model under various settings. The 95% CIs of the AUC, accuracy, sensitivity and specificity were estimated. Specifically, DeLong CIs of the AUCs were calculated at the child level. To eliminate bias due to the association of multiple clips from the same child, bootstrap CIs of the AUCs of the detection model were calculated at the clip level: one clip for each child was randomly taken to form a bootstrap sample, and this process was repeated 1,000 times. Wilson CIs were reported for the other proportional metrics. Descriptive statistics, including means, s.d., numbers and percentages, were used. Mann–Whitney U-tests were used to compare means of continuous variables, and Fisher’s exact tests were used to compare distributions of categorical variables. A two-sided P value of <0.05 indicated statistical significance. All statistical analyses were performed in R (v.4.1.2) or Python (v.3.9.7), and plots were created with the ggplot2 package (v.3.3.5) in R.
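
The clip-level bootstrap (one randomly chosen clip per child per replicate) can be sketched as follows. This is an illustration, not the study's code; the AUC is computed via the Mann-Whitney U statistic, and the function names and toy data are ours:

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic (ties get half credit)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def clip_level_bootstrap_auc_ci(clip_scores, labels, n_boot=1000, rng=None):
    """95% bootstrap CI of AUC with one clip sampled per child.

    clip_scores: one array of clip-level probabilities per child;
    labels: one binary label per child. Drawing a single random clip per
    child in each replicate avoids the within-child correlation of
    multiple clips from the same video.
    """
    rng = np.random.default_rng(rng)
    aucs = []
    for _ in range(n_boot):
        sample = [rng.choice(s) for s in clip_scores]
        aucs.append(auc(labels, sample))
    return tuple(np.percentile(aucs, [2.5, 97.5]))
```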

Computational hardware

Hardware information for this study is shown as follows: graphics processing unit (GPU), Nvidia Titan RTX 24 GB memory × 4, Driver v.440.82, Cuda v.10.2; central processing unit (CPU), Intel(R) Xeon(R) CPU E5-2678 v.3 @ 2.50 GHz × 2, 48 threads; random access memory (RAM), Samsung 64 GB RAM × 8, configured speed 2,133 MHz.

Use of human data

The ethical review of this study was approved by the Institutional Review Board/Ethics Committee of ZOC. The test was prospectively registered at ClinicalTrials.gov (identifier: NCT04237350 ).

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The data that support the findings of this study are divided into two groups: published data and restricted data. The authors declare that the published data supporting the main results of this study can be obtained within the paper and its Supplementary Information . For research purposes, a representative video deidentified using digital masks on children’s faces for each disorder or behavior in this study is available. In the case of noncommercial use, researchers can sign the license, complete a data access form provided at https://github.com/RYL-gif/Data-Availability-for-AIS and contact H.L. Submitted license and data access forms will be evaluated by the data manager. For requests from verified academic researchers, access will be granted within 1 month. Due to portrait rights and patient privacy restrictions, restricted data, including raw videos, are not provided to the public.

Code availability

Because we made use of proprietary libraries in our study, releasing our code for system development and validation to the public is not feasible. We detail the methods and experimental protocol in this paper and its Supplementary Information to provide enough information to reproduce the experiments. Several major components of our work are available in open-source repositories: PyTorch (v.1.7.1): https://pytorch.org ; Dlib Python Library (v.19.22.1): https://github.com/davisking/dlib (frameworks for facial region detection and facial key point localization); EfficientNet-PyTorch: https://github.com/lukemelas/EfficientNet-PyTorch (frameworks for the models in the quality control module and the detection/diagnostic models); Albumentations (v.0.5.2): https://github.com/albumentations-team/albumentations (data augmentation); and OpenCV Python Library (v.4.5.3.56): https://github.com/opencv/opencv-python (video and image data processing).

Kliner, M., Fell, G., Pilling, R. & Bradbury, J. Visual impairment in children. Eye 25 , 1097–1097 (2011).


Mariotti, A. & Pascolini, D. Global estimates of visual impairment. Br. J. Ophthalmol. 96 , 614–618 (2012).


Bremond-Gignac, D., Copin, H., Lapillonne, A. & Milazzo, S. Visual development in infants: physiological and pathological mechanisms. Curr. Opin. Ophthalmol. 22 , S1–S8 (2011).

Teoh, L., Solebo, A. & Rahi, J. Temporal trends in the epidemiology of childhood severe visual impairment and blindness in the UK. Br. J. Ophthalmol. https://doi.org/10.1136/bjophthalmol-2021-320119 (2021).

Gothwal, V. K., Lovie-Kitchin, J. E. & Nutheti, R. The development of the LV Prasad-Functional Vision Questionnaire: a measure of functional vision performance of visually impaired children. Investigative Ophthalmol. Vis. Sci. 44 , 4131–4139 (2003).


Brown, A. M. & Yamamoto, M. Visual acuity in newborn and preterm infants measured with grating acuity cards. Am. J. Ophthalmol. 102 , 245–253 (1986).


Dutton, G. N. & Blaikie, A. J. How to assess eyes and vision in infants and preschool children. BMJ Br. Med. J. 350 , h1716 (2015).

Blindness and Vision Impairment (World Health Organization, 2021); https://www.who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment

Mayer, D. L. & Dobson, V. in Developing Brain Behaviour (ed. Dobbing, J.) 253–292 (Academic, 1997).

Quinn, G. E., Berlin, J. A. & James, M. The Teller acuity card procedure: three testers in a clinical setting. Ophthalmology 100 , 488–494 (1993).

Johnson, A., Stayte, M. & Wortham, C. Vision screening at 8 and 18 months. Steering Committee of Oxford Region Child Development Project. Br. Med. J. 299 , 545–549 (1989).


Long, E. et al. Monitoring and morphologic classification of pediatric cataract using slit-lamp-adapted photography. Transl. Vis. Sci. Technol. 6 , 2 (2017).


Balmer, A. & Munier, F. Differential diagnosis of leukocoria and strabismus, first presenting signs of retinoblastoma. Clin. Ophthalmol. 1 , 431 (2007).


SooHoo, J. R., Davies, B. W., Allard, F. D. & Durairaj, V. D. Congenital ptosis. Surv. Ophthalmol. 59 , 483–492 (2014).

Mandal, A. K. & Chakrabarti, D. Update on congenital glaucoma. Indian J. Ophthalmol. 59 , S148 (2011).

Long, E. et al. Discrimination of the behavioural dynamics of visually impaired infants via deep learning. Nat. Biomed. Eng. 3 , 860–869 (2019).

Brown, A. M. & Lindsey, D. T. Infant color vision and color preferences: a tribute to Davida Teller. Vis. Neurosci. 30 , 243–250 (2013).

Holmes, J. M. & Clarke, M. P. Amblyopia. Lancet 367 , 1343–1351 (2006).

Abadi, R. & Bjerre, A. Motor and sensory characteristics of infantile nystagmus. Br. J. Ophthalmol. 86 , 1152–1160 (2002).

Wright, K. W., Spiegel, P. H. & Hengst, T. Pediatric Ophthalmology and Strabismus (Springer, 2013).

Sim, I. Mobile devices and health. N. Engl. J. Med. 381 , 956–968 (2019).

Grady, C. et al. Informed consent. N. Engl. J. Med. 376 , 856–867 (2017).

Beede, E. et al. A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. In Proc. 2020 CHI Conference on Human Factors in Computing Systems 1–12 (Association for Computing Machinery, 2020).

Davenport, T. H. & Ronanki, R. Artificial intelligence for the real world. Harvard Bus. Rev. 96 , 108–116 (2018).


Lin, H. et al. Diagnostic efficacy and therapeutic decision-making capacity of an artificial intelligence platform for childhood cataracts in eye clinics: a multicentre randomized controlled trial. eClinicalMedicine 9 , 52–59 (2019).

King, D. E. Dlib-ml: a machine learning toolkit. J. Mach. Learn. Res. 10 , 1755–1758 (2009).

Munson, M. C. et al. Autonomous early detection of eye disease in childhood photographs. Sci. Adv. 5 , eaax6363 (2019).

Long, E. et al. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts. Nat. Biomed. Eng. 1 , 0024 (2017).

Gogate, P., Gilbert, C. & Zin, A. Severe visual impairment and blindness in infants: causes and opportunities for control. Middle East Afr. J. Ophthalmol 18 , 109–114 (2011).

Cheung, C. Y. et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat. Biomed. Eng. 5 , 498–508 (2021).

Sabanayagam, C. et al. A deep learning algorithm to detect chronic kidney disease from retinal photographs in community-based populations. Lancet Digital Health 2 , e295–e302 (2020).

Xiao, W. et al. Screening and identifying hepatobiliary diseases through deep learning using ocular images: a prospective, multicentre study. Lancet Digital Health 3 , e88–e97 (2021).

Pehere, N., Chougule, P. & Dutton, G. N. Cerebral visual impairment in children: causes and associated ophthalmological problems. Indian J. Ophthalmol. 66 , 812–815 (2018).

Gilbert, C. & Foster, A. Childhood blindness in the context of VISION 2020—the right to sight. Bull. World Health Organ 79 , 227–232 (2001).


Dey, S. et al. Cyclic regulation of sensory perception by a female hormone alters behavior. Cell 161 , 1334–1344 (2015).

Klein, M. et al. Sensory determinants of behavioral dynamics in Drosophila thermotaxis. Proc. Natl Acad. Sci. USA 112 , E220–E229 (2015).

Finger, P. T. & Tomar, A. S. Retinoblastoma outcomes: a global perspective. Lancet Glob. Health 10 , e307–e308 (2022).

Wong, E. S. et al. Global retinoblastoma survival and globe preservation: a systematic review and meta-analysis of associations with socioeconomic and health-care factors. Lancet Glob. Health 10 , E380–E389 (2022).

Romano, M. R. et al. Facing COVID-19 in ophthalmology department. Curr. Eye Res. 45 , 653–658 (2020).

Howard, A. et al. Searching for mobilenetv3. In Proc. IEEE/CVF International Conference on Computer Vision 1314–1324 (IEEE, 2019).

Hoefler, T., Alistarh, D., Ben-Nun, T., Dryden, N. & Peste, A. Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks. J. Mach. Learn. Res. 22 , 1–124 (2021).

Leeming, G., Ainsworth, J. & Clifton, D. A. Blockchain in health care: hype, trust, and digital health. Lancet 393 , 2476–2477 (2019).

Yang, Y. et al. A digital mask to safeguard patient privacy. Nat. Med. 28 , 1883–1892 (2022).

Drover, J. R., Wyatt, L. M., Stager, D. R. & Birch, E. E. The teller acuity cards are effective in detecting amblyopia. Optom. Vis. Sci. 86 , 755 (2009).

Mayer, D. L. et al. Monocular acuity norms for the Teller Acuity Cards between ages one month and four years. Investigative Ophthalmol. Vis. Sci. 36 , 671–685 (1995).


King, D. E. Max-margin object detection. Preprint at https://ui.adsabs.harvard.edu/abs/2015arXiv150200046K (2015).

Zhou, E., Fan, H., Cao, Z., Jiang, Y. & Yin, Q. Extensive facial landmark localization with coarse-to-fine convolutional network cascade. In 2013 IEEE International Conference on Computer Vision Workshops 386–391 (IEEE, 2013).

Kazemi, V. & Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In 2014 IEEE Conference on Computer Vision and Pattern Recognition 1867–1874 (IEEE, 2014).

Bradski, G. The openCV library. Dr. Dobb’s J. Softw. Tools 25 , 120–123 (2000).

Tan, M. & Le, Q. EfficientNet: rethinking model scaling for convolutional neural networks. In Proc. 36th International Conference on Machine Learning 6105–6114 (PMLR, 2019).

Deng, J. et al. Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition 248–255 (IEEE, 2009).

Chao, H., He, Y., Zhang, J. & Feng, J. GaitSet: regarding gait as a set for cross-view gait recognition. In Proc. Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence Article 996 (AAAI Press, 2019).

Buslaev, A. et al. Albumentations: fast and flexible image augmentations. Information 11 , 125 (2020).

Hinton, G. E. & Roweis, S. Stochastic Neighbor Embedding. In Advances in Neural Information Processing Systems 15 (Eds. Becker, S., Thrun, S. and Obermayer, K.) 833–840 (NIPS, 2002).

Belkina, A. et al. Automated optimized parameters for T-distributed stochastic neighbor embedding improve visualization and analysis of large datasets. Nat. Commun. 10 , 5415 (2019).

Selvaraju, R. R. et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV) 618–626 (IEEE, 2017).

Zuppichini, F. S. FrancescoSaverioZuppichini/cnn-visualisations. GitHub https://github.com/FrancescoSaverioZuppichini/cnn-visualisations (2018).


Acknowledgements

We thank all the participants and the institutions for supporting this study. We thank H. Sun, T. Wang, T. Li, W. Lai, X. Wang, L. Liu, T. Cui, S. Zhang, Y. Gong, W. Hu, Y. Huang, Y. Pan and C. Lin for supporting the data collection; M. Yang for the help with statistical suggestions and Y. Mu for the help with our demo video. This study was funded by the National Natural Science Foundation of China (grant nos. 82171035 and 91846109 to H.L.), the Science and Technology Planning Projects of Guangdong Province (grant no. 2021B1111610006 to H.L.), the Key-Area Research and Development of Guangdong Province (grant no. 2020B1111190001 to H.L.), the Guangzhou Basic and Applied Basic Research Project (grant no. 2022020328 to H.L.), the China Postdoctoral Science Foundation (grant no. 2022M713589 to W.C.), the Fundamental Research Funds of the State Key Laboratory of Ophthalmology (grant no. 2022QN10 to W.C.) and Hainan Province Clinical Medical Center (H.L.). P.Y.-W.-M. is supported by an Advanced Fellowship Award (NIHR301696) from the UK National Institute of Health Research (NIHR). P.Y.-W.-M. also receives funding from Fight for Sight (UK), the Isaac Newton Trust (UK), Moorfields Eye Charity (GR001376), the Addenbrooke’s Charitable Trust, the National Eye Research Centre (UK), the International Foundation for Optic Nerve Disease, the NIHR as part of the Rare Diseases Translational Research Collaboration, the NIHR Cambridge Biomedical Research Centre (BRC-1215-20014) and the NIHR Biomedical Research Centre based at Moorfields Eye Hospital National Health Service Foundation Trust and University College London Institute of Ophthalmology. The views expressed are those of the author(s) and not necessarily those of the National Health Service, the NIHR or the Department of Health. The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

These authors contributed equally: Wenben Chen, Ruiyang Li, Qinji Yu, Andi Xu, Yile Feng.

Authors and Affiliations

State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China

Wenben Chen, Ruiyang Li, Andi Xu, Ruixin Wang, Lanqin Zhao, Zhenzhe Lin, Yahan Yang, Duoru Lin, Xiaohang Wu, Jingjing Chen, Zhenzhen Liu, Yuxuan Wu, Dong Liu, Qianni Wu, Mingyuan Li, Yifan Xiang, Xiaoyan Li, Zhuoling Lin, Danqi Zeng, Yunjian Huang, Huasheng Yang, Danping Huang, Xiaoming Lin, Lingyi Liang, Xiaoyan Ding, Yangfan Yang, Pengsen Wu, Erping Long & Haotian Lin

Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China

Qinji Yu & Xiaowei Ding

VoxelCloud, Shanghai, China

Yile Feng, Kang Dang, Kexin Qiu, Zilong Wang, Ziheng Zhou & Xiaowei Ding

School of Medicine, Sun Yat-sen University, Shenzhen, China

Silang Mo & Xiucheng Huang

Department of Urology, Peking University Third Hospital, Peking University Health Science Center, Beijing, China

Department of Ophthalmology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China

Jianmin Hu & Bingfa Dai

Shenzhen People’s Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China

Liuzhou Maternity and Child Healthcare Hospital, Affiliated Women and Children’s Hospital of Guangxi University of Science and Technology, Liuzhou, China

Meirong Wei

National Center for Children’s Health, Department of Ophthalmology, Beijing Children’s Hospital, Capital Medical University, Beijing, China

Shoulong Hu

Department of Ophthalmology, Zhengzhou Children’s Hospital, Zhengzhou, China

Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China

Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore

Feihui Zheng

Department of Ophthalmology, St. Thomas’ Hospital, London, UK

Nick Stanojcic

Moorfields Eye Hospital, London, UK

Ji-Peng Olivia Li & Patrick Yu-Wai-Man

Department of Ophthalmology & Visual Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, China

Carol Y. Cheung

Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA

Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL, USA

University College London Institute of Ophthalmology, University College London, London, UK

Patrick Yu-Wai-Man

Cambridge Eye Unit, Addenbrooke’s Hospital, Cambridge University Hospitals, Cambridge, UK

Cambridge Center for Brain Repair and Medical Research Council (MRC) Mitochondrial Biology Unit, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK

School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China

Ruixuan Wang & Wei-shi Zheng

Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China

Haotian Lin

Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China


Contributions

W.C., R.L. and H.L. contributed to the concept of the study and designed the research. W.C., R.L., A.X., Ruixin Wang, Yahan Yang, D. Lin, X.W., J.C., Z. Liu, Y.W., K.Q., Z.Z., D. Liu, Q.W., Y.X., X.L., Zhuoling Lin, D.Z., Y.H., S.M., X.H., S.S., J.H., J.Z., M.W., S.H., L.C., B.D., H.Y., D.H., X.L., L.L., Xiaoyan Ding, Yangfan Yang and P.W. collected the data. W.C., R.L., Q.Y., Y.F., Zhenzhe Lin, K.D., Z.W., M.L. and Xiaowei Ding conducted the study. W.C., R.L. and L.Z. analyzed the data. W.C., R.L., Q.Y., Y.F. and H.L. cowrote the manuscript. D. Lin, X.W., F.Z., N.S., J.-P.O.L., C.Y.C., E.L., C.C., Y.Z., P.Y.-W.-M., Ruixuan Wang and W.-s.Z. critically revised the manuscript. Zhenzhe Lin, Ruixuan Wang, W.-s.Z, Xiaowei Ding and H.L. performed the technical review. All authors discussed the results and provided comments regarding the manuscript.

Corresponding authors

Correspondence to Xiaowei Ding or Haotian Lin .

Ethics declarations

Competing interests.

Zhongshan Ophthalmic Center and VoxelCloud have filed for patent protection for W.C., R.L., A.X., Y.F., Zhenzhe Lin, K.D., K.Q., Xiaowei Ding and H.L. for work related to the methods of detection of visual impairment in young children. All other authors declare no competing interests.

Peer review

Peer review information.

Nature Medicine thanks Pete Jones, Ameenat Lola Solebo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Michael Basson, in collaboration with the Nature Medicine team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 The app for data collection.

a , The operation interface of the app. b , Use of the smartphone for data collection in real-world settings.

Extended Data Fig. 2

The standard preparation sequence guided by the app for data collection.

Extended Data Fig. 3 Development of deep learning models of the AIS system.

a , Basic building blocks and architecture of EfficientNet. Two model architectures, EfficientNet-B2 and EfficientNet-B4, were used for data quality control and for the detection/diagnostic tasks, respectively. b , Architecture of the EfficientNet-B2 model. c , Architecture of the EfficientNet-B4 model. d , ROC curves of the models trained for the quality control module. e , The training and tuning curves of the detection model at the clip level. Conv 2d, 2-dimensional convolutional layer; ReLU, rectified linear unit; Temporal Avg Pooling, average pooling along the temporal dimension; ROC curve, receiver operating characteristic curve; AIS, Apollo Infant Sight.

Extended Data Fig. 4 Performance of the detection model at the clip level.

a , ROC curves of the detection model in the internal validation (NI, n = 6,735; mild, n = 8,310; severe, n = 6,685; VI versus NI, AUC = 0.925 (0.914–0.936); mild versus NI, AUC = 0.916 (0.904–0.928); severe versus NI, AUC = 0.935 (0.924–0.946)). b , ROC curves of the detection model in the external validation (NI, n = 7,392; mild, n = 2,580; severe, n = 1,569; VI versus NI, AUC = 0.814 (0.790–0.838); mild versus NI, AUC = 0.802 (0.770–0.831); severe versus NI, AUC = 0.834 (0.807–0.863)). c , ROC curves of the detection model in the at-home implementation by parents or caregivers (NI, n = 947; mild, n = 943; severe, n = 809; VI versus NI, AUC = 0.817 (0.756–0.881); mild versus NI, AUC = 0.809 (0.735–0.884); severe versus NI, AUC = 0.825 (0.764–0.886)). Parentheses show 95% bootstrap CIs. A cluster-bootstrap biased-corrected 95% CI was computed, with individual children as the bootstrap sampling clusters. NI, nonimpairment; VI, visual impairment; ROC curve, receiver operating characteristic curve; AUC, area under the curve; CI, confidence interval.

Extended Data Fig. 5 Visualization of the clips correctly classified or misclassified by the detection model.

a , The t-distributed stochastic neighbor embedding (t-SNE) algorithm was applied to visualize the clustering patterns of clips correctly classified or misclassified by the detection model. b , Distances from true VI and false clips to the center of true VI clips in the t-SNE scatter plot were compared. * P < 0.001 (true VI clip, n = 999; false clip, n = 317; P < 1.00 × 10 −36 , two-tailed Mann–Whitney U-test). c , Distances from true NI and false clips to the center of true NI clips in the t-SNE scatter plot were compared. * P  < 0.001 (true NI clip, n = 1,084; false clip, n = 317; P < 1.00 × 10 −36 , two-tailed Mann–Whitney U-test). The thick central lines denote the medians, the lower and upper box limits denote the first and third quartiles, and the whiskers extend from the box to the outermost extreme value but no further than 1.5 times the interquartile range (IQR). VI, visual impairment; NI, nonimpairment.

Extended Data Fig. 6 The triage-driven approach to select the equivocal cases with the lowest predicted confidence values for manual review.

a, The false prediction rate (both false positive and false negative) in different percentile intervals of predicted confidence values. *P < 0.001 (0th–9th, n = 51; 10th–20th, n = 61; 20th–30th, n = 59; 30th–40th, n = 57; 40th–50th, n = 57; 50th–60th, n = 56; 60th–70th, n = 57; 70th–80th, n = 57; 80th–90th, n = 57; 90th–100th, n = 57; 0th–9th percentile versus other percentile intervals, P ranging from 7.92 × 10⁻⁸ for 90th–100th to 1.45 × 10⁻³ for 20th–30th; 10th–20th percentile versus other percentile intervals, P ranging from 2.02 × 10⁻⁶ for 90th–100th to 2.02 × 10⁻² for 20th–30th; two-tailed Fisher's exact tests). Results are expressed as means and the 95% Wilson confidence intervals (CIs). b, The performance of the triage-driven system with increasing manual review ratios for the equivocal cases. SPE, specificity; SEN, sensitivity; ACC, accuracy.
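The triage logic — route the lowest-confidence slice of predictions to a human reader — can be sketched as follows. The assumption that reviewed cases are corrected by the reader is ours, made to mirror the accuracy-versus-review-ratio curve in panel b; the function name is hypothetical.

```python
import numpy as np

def triage_accuracy(confidence, correct, review_fraction):
    """Accuracy after routing the lowest-confidence fraction to manual review.

    confidence: per-case predicted confidence values;
    correct: whether the model's automated prediction was right;
    review_fraction: share of cases sent to the human reader.
    Reviewed cases are assumed to be corrected; the rest keep the model output.
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    n = len(confidence)
    n_review = int(round(review_fraction * n))
    order = np.argsort(confidence)            # least confident first
    reviewed = np.zeros(n, dtype=bool)
    reviewed[order[:n_review]] = True
    # A case is counted correct if the model got it right or a human saw it.
    return float(np.mean(reviewed | correct))
```

Sweeping `review_fraction` from 0 to 1 traces out the kind of sensitivity/specificity/accuracy curve shown in panel b.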

Extended Data Fig. 7 Performance of the detection model under blurring, brightness, color, and noise adjustment gradients.

a, Cartoon diagram showing the effect of blurring factors on the input data. b, Cartoon diagram showing the effect of brightness factors on the input data. c, Cartoon diagram showing the effect of color factors on the input data. d, Cartoon diagram showing the effect of noise factors on the input data. e, ROC curves of the detection model for identifying visual impairment under different blurring factors (AUCs range from 0.683 for factor 37 to 0.951 for factor 0). f, ROC curves of the detection model for identifying visual impairment under different brightness factors (AUCs range from 0.551 for factor 0.9 to 0.951 for factor 0). g, ROC curves of the detection model for identifying visual impairment under different color factors (AUCs range from 0.930 for factor 70 to 0.952 for factor 20). h, ROC curves of the detection model for identifying visual impairment under different noise factors (AUCs range from 0.820 for factor 1800 to 0.951 for factor 0). NI, n = 60; VI, n = 140; ROC curve, receiver operating characteristic curve; VI, visual impairment; NI, nonimpairment.
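Perturbation gradients like these can be generated with simple image operators. The versions below are generic NumPy stand-ins (the paper's exact factor definitions are not reproduced here); blur, brightness, and noise are shown, and a color shift would follow the same pattern.

```python
import numpy as np

def adjust_brightness(img, factor):
    """Blend a [0, 1] grayscale image toward white by `factor` in [0, 1]."""
    return img * (1.0 - factor) + factor

def add_noise(img, scale, rng):
    """Add zero-mean Gaussian noise; `scale` plays the role of a noise factor."""
    return np.clip(img + rng.normal(0.0, scale, img.shape), 0.0, 1.0)

def box_blur(img, k):
    """Naive k x k box blur on a 2-D grayscale image (windows clipped at edges)."""
    if k <= 1:
        return img.copy()
    out = np.empty_like(img)
    h, w = img.shape
    r = k // 2
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out
```

Re-scoring a fixed validation set after each perturbation level, then recomputing the AUC, yields robustness curves like those in panels e–h.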

Extended Data Fig. 8 Performance of the AIS system using Huawei Honor-6 Plus/Redmi Note-7 smartphones.

a, Comparisons of the predicted probabilities for the AIS system between the nonimpairment, mild impairment, and severe impairment groups. *P < 0.001 (NI versus mild, P = 8.10 × 10⁻²⁸; NI versus severe, P = 1.51 × 10⁻²⁷; two-tailed Mann-Whitney U tests). The cross symbols denote the means, the thick central lines and triangle symbols denote the medians, the lower and upper box limits denote the first and third quartiles, and the whiskers extend from the box to the outermost extreme value but no further than 1.5 times the interquartile range (IQR). b, ROC curves of the AIS system with Android smartphones. c, Performance of the AIS system in the across-smartphone analysis. VI, visual impairment; NI, nonimpairment; ROC curve, receiver operating characteristic curve; AIS, Apollo Infant Sight.

Supplementary information

Supplementary information.

Supplementary Note and Tables 1–17.

Reporting Summary

Supplementary video 1.

Demo video of using the Apollo Infant Sight (AIS) smartphone-based system.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Cite this article.

Chen, W., Li, R., Yu, Q. et al. Early detection of visual impairment in young children using a smartphone-based deep learning system. Nat Med 29, 493–503 (2023). https://doi.org/10.1038/s41591-022-02180-9


Received: 16 June 2022

Accepted: 09 December 2022

Published: 26 January 2023

Issue Date: February 2023

DOI: https://doi.org/10.1038/s41591-022-02180-9


This article is cited by

A sustainable approach to universal metabolic cancer diagnosis.

  • Ruimin Wang
  • Shouzhi Yang

Nature Sustainability (2024)


Cost-effectiveness and cost-utility of a digital technology-driven hierarchical healthcare screening pattern in China

  • Xiaohang Wu

Nature Communications (2024)

Potential roles of lncRNA MALAT1-miRNA interactions in ocular diseases

  • Ava Nasrolahi
  • Fatemeh Khojasteh Pour
  • Maryam Farzaneh

Journal of Cell Communication and Signaling (2023)




Teaching Students with Visual Impairments in Inclusive Classrooms.

Abdulwakil Hassen

The purpose of this study was to assess primary school teachers' attitudes, perceptions, and sense of efficacy towards teaching students with visual impairments in inclusive classrooms, and to determine whether teachers' attitudes, perceptions, and sense of self-efficacy are related. The total population of the study was 76 primary school teachers, and the necessary data were gathered using questionnaires and semi-structured interviews. The questionnaires consisted of 17 Likert-scale attitude questions, 9 efficacy questions, and 15 perception questions. A one-sample t-test and the Pearson product-moment correlation coefficient were employed for data analysis. The one-sample t-test revealed a statistically significant positive teacher attitude towards the inclusion of students with visual impairments in inclusive classrooms; teachers' perception and sense of efficacy in teaching students with visual impairment were likewise statistically significant. Pearson's correlation coefficient indicated a statistically significant positive correlation between teachers' attitudes, perceptions, and sense of efficacy in teaching students with visual impairments in inclusive classrooms. Assessing the challenges facing primary school teachers in teaching students with visual impairments in inclusive classrooms was another intent of the study, and the interviews revealed that primary school teachers face many challenges, including lack of knowledge, skills, and material resources.
Furthermore, teachers who had positive attitudes and perceptions were found to be efficacious in teaching students with visual impairments in inclusive classrooms. However, teachers lacked adequate training in how to teach students with visual impairments, or students with disabilities generally, in inclusive classrooms. Overall, it is possible to conclude that primary school teachers were positive in their attitudes towards teaching students with visual impairments. Finally, the investigator recommended that teachers be given adequate pre-service and in-service training to overcome the shortcomings associated with teaching in inclusive classrooms, that teachers' attitudes be developed positively, and that the authorities allocate enough budget for educating students with visual impairments in inclusive classrooms.
Keywords: inclusive education, teacher self-efficacy, teacher attitudes towards inclusion, perception
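The two analyses named in the abstract — a one-sample t-test and a Pearson product-moment correlation — look like this in practice. The Likert scores below are invented for illustration (the study's data are not reproduced here), and testing against a neutral midpoint of 3 is our assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical mean Likert scores (1-5 scale) for eight teachers;
# these values are illustrative only, not the study's data.
attitude = np.array([4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3])
efficacy = np.array([3.9, 3.6, 4.5, 3.8, 4.1, 4.0, 3.5, 4.4])

# One-sample t-test: is mean attitude above the scale midpoint (3 = neutral)?
t_stat, p_one_sample = stats.ttest_1samp(attitude, popmean=3.0)

# Pearson product-moment correlation between attitude and efficacy scores.
r, p_corr = stats.pearsonr(attitude, efficacy)
```

A positive t statistic with a small p value corresponds to the "statistically significant positive attitude" finding, and a positive r to the reported attitude-efficacy correlation.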





Frontiers in Psychology

Viewing Strategies in Children With Visual Impairment and Children With Normal Vision: A Systematic Scoping Review

Anke Fonteyn-Vinke

1 Royal Dutch Visio, Nijmegen, Netherlands

2 Behavioral Science Institute, Radboud University, Nijmegen, Netherlands

Bianca Huurneman

3 Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, Netherlands

Frouke N. Boonstra

Associated Data

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Viewing strategies are strategies used to support visual information processing. These strategies may differ between children with cerebral visual impairment (CVI), children with ocular visual impairment, and children with normal vision, since visual impairment might have an impact on viewing behavior. In current visual rehabilitation practice, a variety of strategies is used without consideration of the differences in etiology of the visual impairment or in the spontaneous viewing strategies used. This systematic scoping review focuses on viewing strategies used during near school-based tasks like reading and on possible interventions aimed at viewing strategies. The goal is threefold: (1) creating a clear concept of viewing strategies, (2) mapping differences in viewing strategies between children with ocular visual impairment, children with CVI and children with normal vision, and (3) identifying interventions that can improve visual processing by targeting viewing strategies. Four databases were used to conduct the literature search: PubMed, Embase, PsycINFO and Cochrane. Seven hundred and ninety-nine articles were screened by two independent reviewers using PRISMA reporting guidelines, of which 30 were included for qualitative analysis. Only five studies explicitly mentioned strategies used during visual processing, namely gaze strategies, reading strategies and search strategies. We define a viewing strategy as a conscious and systematic way of viewing during task performance. The results of this review are integrated with different attention network systems, which provide direction on how to design future interventions targeting the use of viewing strategies to improve different aspects of visual processing.

Introduction

The etiology of visual impairment in childhood is very heterogeneous. Visual impairment in children is caused by ocular diseases and/or genetic factors affecting the eye or by cerebral visual impairment (CVI). CVI comprises visual malfunction due to retro-chiasmal and visual association pathway pathology (Philip and Dutton, 2014). In Western countries CVI is the most prevalent cause of visual impairment (Fazzi et al., 2007; Kran et al., 2019). Since the etiology of visual impairment in children is diverse, a variety of visual problems can occur and visual rehabilitation can focus on different stages of visual processing. For example, deficits in lower visual functions, like visual acuity, visual field or contrast sensitivity, are often compensated by the use of aids or adjustments to the environment. On the other hand, deficits in higher order visual functions can be trained (e.g., strategic eye movement training). What remains unclear in the field is how children implement viewing strategies. The aim of this review is to provide an overview of the viewing strategies used during near school-based tasks, such as reading, in children with and without (C)VI, and of the available interventions.

Below, the four antecedents leading to this review are elaborated: (1) the benefits of compensatory strategies in subjects with brain damage, (2) the variety of strategies used in current visual rehabilitation without consideration of the different etiologies of visual impairment, (3) the potential benefits viewing strategies hold for visual attention, and (4) the insights oculomotor behavior provides into viewing strategies.

The diversity of visual deficits in CVI is considerable, and the impact of CVI on education and rehabilitation strategies is less well-understood than for ocular visual impairments. A recent review states that educational strategies employed for children with ocular impairments are largely ineffective for children with CVI (Martin et al., 2016). There is evidence for cross modal brain reorganization in blind individuals as demonstrated by robust activation in the occipital cortex while performing non-visual tasks (Kujala et al., 2000; Fine and Park, 2018). Still, an absence of vision during the critical period seems to weaken auditory and tactile spatial representations (Pasqualotto and Newell, 2007; Finocchietti et al., 2015; Cappagli et al., 2017), while temporal tactile judgements can be superior in congenitally blind subjects (Roder et al., 2004). There is little research on compensatory mechanisms in CVI. Bryck and Fisher (2012) mention that strengthening compensatory or strategic processes can support information processing in children with acquired cerebral damage. An example of such a strategy is teaching self-verbalization to help guide action. Another example of strategic processes in rehabilitation is the use of eye movement training in adults after stroke (resulting in visual field deficits or neglect). Compensatory strategies can involve visual imagery, visual search strategies and compensating eye movements to improve visual search time and accuracy as well as to improve attention and daily living activities. These compensatory processes increase brain activation reflecting neural processes supporting task performance (Bryck and Fisher, 2012).

Since the etiology of CVI is located in the brain, visual training might target the use of viewing strategies to guide visual behavior. In current clinical rehabilitation practice, viewing strategy training is commonly employed. A viewing strategy can be defined as the systematic way in which a visual task is consciously approached. Optimally, the chosen strategy fits the task at hand and contributes to accurate and fast visual information processing. Yet, little is known about viewing strategies used by children with (cerebral) visual impairment. Based on the variety of causes of (cerebral) visual impairment and clinical practice it would be conceivable that children with CVI use different viewing strategies compared to children with ocular visual impairment and children with normal vision. Knowledge about these strategies is needed to adapt rehabilitation techniques and training to the different problems in visual functioning.

CVI is most often accompanied by impairments in higher order visuospatial processing, visual selective attention, and object recognition (Bennett et al., 2020; Zuidhoek, 2020). Targeting visual attention is likely to have a widespread beneficial impact on multiple aspects of higher and lower level stages of visual processing. A commonly accepted framework of attention networks has been presented by Posner (Petersen and Posner, 2012). This framework suggests that there are three attentional systems in the human brain: an alerting network (which focuses on brain stem arousal systems along with right hemisphere systems related to sustained vigilance), an orienting network (involving, among other regions, the parietal cortex and frontal eye fields, related to the ability to direct visual attention and prioritize sensory input by selecting a modality or location) and an executive network (including the midline frontal/anterior cingulate cortex, related to the ability to direct focal attention and cope with interference). Viewing strategies are plans that are consciously adopted before starting a visual task to facilitate achieving a desired goal (e.g., accuracy or speed of performance). These strategies can put a demand on different aspects of the attention networks. For example, a viewing strategy can involve instructing children to look in a structured way (i.e., left to right during reading) while keeping their attention directed at the task at hand (e.g., by using a warning signal or check points). This strategy is directed at the alerting system. An example of a viewing strategy directed at the orienting system is a strategy in which a child selects a visual array fit to the task at hand: i.e., a larger part of the picture for global processing in scene interpretation or zooming in on details (local processing). Finally, an example of a viewing strategy which puts a large demand on the executive network is a strategy in which the child focuses on a specific target, which restricts the perceptual awareness of surrounding targets.

Williams et al. (2011) showed that problems in higher level visual functions, especially difficulties with interpreting cluttered scenes (visual selective attention) and visuomotor functions, are associated with underachievement in reading and in mathematics. Children with CVI demonstrate more random visual search patterns (Kooiker et al., 2016), which may be associated with less strategic visual processing. Results from a review in dyslexic and normal readers (Kulp and Schmidt, 1996) indicate that during primary school years oculomotor skills improve and are associated with increases in reading rate and ability. For example, fixation duration decreases, saccade length increases, the number of regressions (backward saccades) decreases and the perceptual span decreases during the primary school years. The authors concluded that accurate eye movements are integral to achieving reading proficiency. Some visual impairments are accompanied by oculomotor abnormalities, such as nystagmus (i.e., involuntary bilateral oscillating eye movements). Thomas et al. (2011) demonstrated that individuals with infantile nystagmus (IN) on average show slightly lower reading speeds than controls, but that there are also individuals with IN showing higher reading speeds than normal controls despite their foveal and ocular deficits. Adults with IN use several strategies to manipulate their nystagmus during reading, such as: (1) suppression of corrective quick phases allowing involuntary slow phases to achieve the desired goal, (2) voluntarily changing the character of the involuntary slow phases using quick phases, and (3) correction of involuntary slow phases using quick phases. This research provided evidence that accurate eye movements might not be as integral as once thought to reach good reading speeds and demonstrated that individuals with oculomotor deficits can adapt to and compensate for their condition by using strategies with which they can reach (near) normal reading speeds.

The threefold goal of this scoping review is: (1) to provide a clear concept of viewing strategies, (2) to map differences in viewing strategies between children with ocular visual impairment, children with CVI and children with normal vision and (3) to identify interventions that can improve visual processing by targeting viewing strategies.

Search Strategy

Studies were identified through electronic database searches in PubMed, EMBASE, PsycINFO and the Cochrane Library. The final search was run on 20 December 2020. In addition to the articles found with our search query, reference lists of included articles were scanned and experts were consulted. No limitation regarding year of publication was applied. The search was developed by an experienced clinical librarian. The selection was made by the first two authors of the article. The following search strategy, using MeSH terms and keywords, was applied to all databases: visual perception (MeSH), viewing strategy or viewing skills, reading strategy or reading skills, visual processing or perceptual exploration or perceptual processing or fixation strategy or perceptual learning or visual development. An example of the search strategy used in PubMed is shown in Table 1.

Search history in PubMed.

#1: Visual Perception[MeSH] AND (viewing strategy OR viewing skill* OR viewing difficult* OR reading strateg* OR reading skill* OR reading difficult* OR visual processing OR perceptual exploration* OR perceptual processing OR fixation strateg* OR perceptual learning OR visual development)
#2: Child[MeSH] OR child* OR schoolchild* OR pediatri* OR paediatr* OR boy OR boys OR boyhood OR girl OR girls OR girlhood OR youth OR youths
#3: "Visually Impaired Persons"[MeSH] OR "Vision, Low"[MeSH] OR "Vision Disorders"[MeSH] OR Visually impaired OR visual impairment OR CVI OR low vision
#4: #1 AND #2 AND #3
#5: "Video Games"[MeSH] OR "Education"[MeSH] OR Training OR Game* OR Intervention* OR rehabilitation
#6: #4 AND #5
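The numbered blocks in Table 1 combine with boolean AND. A small sketch of assembling the final query string (term lists shortened for readability; the table gives the full expansions):

```python
# Abbreviated versions of the Table 1 search blocks, keyed by their number.
blocks = {
    1: 'Visual Perception[MeSH] AND (viewing strategy OR reading strateg* '
       'OR visual processing OR fixation strateg* OR perceptual learning)',
    2: 'Child[MeSH] OR child* OR schoolchild* OR pediatri* OR paediatr*',
    3: '"Visually Impaired Persons"[MeSH] OR visual impairment OR CVI OR low vision',
    5: '"Video Games"[MeSH] OR "Education"[MeSH] OR Training OR Intervention*',
}
# Block 4 intersects the topic, population, and condition blocks;
# block 6 additionally requires an intervention term.
blocks[4] = " AND ".join(f"({blocks[i]})" for i in (1, 2, 3))
blocks[6] = f"({blocks[4]}) AND ({blocks[5]})"
```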

Study Selection

Two authors (AF and BH) independently reviewed the list of results and identified relevant articles based on predefined inclusion and exclusion criteria regarding viewing strategies of school-aged children. The inclusion criteria are presented in Table 2. All studies concerning viewing strategies, viewing skills, reading strategies and efficiency, accuracy or speed of visual processing, or ways to train viewing behavior (strategy, interventions, visual training using strategies) were included. Articles involving non-school-aged children were excluded (subjects <4 or >12 years), because the focus in this review is on children and developmental stage affects viewing behavior. Also, studies about children with intellectual disabilities, children with psychiatric disorders (DSM-related diagnoses such as ADHD and ASD), and children with amblyopia were excluded, to create a clear picture of viewing strategies in typically developing children vs. children with (cerebral) visual impairment. An exception was made for studies in the above-mentioned groups if the study contained information about viewing strategies used by a typically developing control group, in which case the study was included. Animal or machine studies, studies concerning medical diagnosis (for example describing imaging techniques like PET/MEG/MRI/CT) and studies aimed at viewing behavior regarding moving stimuli were excluded. We decided to exclude studies with moving stimuli because we expect that different viewing strategies are used during tasks with moving stimuli compared to school-based tasks. Disagreements during selection were resolved by application of the criteria, discussion and consensus.

Inclusion criteria.

Population:
• Children with ocular visual impairment, 4–12 years
• Children with cerebral visual impairment, 4–12 years
• Children with normal vision, 4–12 years
Intervention:
• (Longitudinal) cohort
• Cross-sectional
• Randomized controlled trials
• Non-randomized controlled trials
Comparison:
• Differences in viewing strategies between (age) groups
• Training of viewing strategies
Outcome measurements:
• Description of viewing strategy
• Temporal or spatial aspects of visual processing (e.g., visual acuity, speed)
• Reading strategies
• Viewing strategy interventions

Included articles focused on the concept of viewing strategies or viewing behavior during near school-based tasks in a variety of children. Since limited research on children with (cerebral) visual impairment is available, we chose to include articles describing viewing behavior of children with normal vision (i.e., studies describing control group data). For example, we included articles concerning reading strategies of disabled or dyslexic readers and controls. Control data provide information about viewing strategies used by children with normal vision, which makes their inclusion relevant for this scoping review. By consensus, five articles regarding the use of Irlen filters were included, since the use of aids may count as a strategy used by the visually impaired child. Since visual training programs are commonly assigned to children in primary school, the age range was limited to 4–12 years.

Quality Assessment

The included studies appeared very heterogeneous in subjects, paradigms and outcome measures. Too little information was provided on quantitative outcome measures regarding viewing strategies to conduct a meta-analysis. Therefore, the results of the studies are described in a narrative manner. To assess the quality of the articles included in this scoping review, we modified the QUADAS tool (Whiting et al., 2011) for the non-intervention studies to fit it to our review purposes (Table 3). For the intervention studies we used the Cochrane Collaboration's tool for assessing risk of bias (see Supplementary Tables 4a,b for the results of the quality assessment).

QUADAS criteria and modifications used in the current study.

1. Was the spectrum of patients representative of the patients who will receive the test in practice?
This question was not applicable. The scoping review focused on three target groups; studies were only included if they fitted the target groups.
2. Were selection criteria clearly described?
This question was unmodified.
3. Is the reference standard likely to correctly classify the target condition?
This question was not applicable.
4. Is the time period between reference standard and index test short enough to be reasonably sure that the target condition did not change between the two tests?
This question was not applicable. We did not compare results obtained with a reference and index test.
5. Did the sample or a random selection of the sample receive verification using a reference standard of diagnosis?
This question was not applicable.
6. Did patients receive the same reference standard regardless of the index test result?
This question was not applicable.
7. Was the reference standard independent of the index test (i.e., the index test did not form part of the reference standard)?
This question was not applicable.
8. Was the execution of the index test described in sufficient detail to permit replication of the test?
We modified questions 8 and 9 into a more general question regarding test procedure: "Was the execution of the test procedure described in sufficient detail to permit replication of the test?"
9. Was the execution of the reference standard described in sufficient detail to permit its replication?
See 8.
10. Were the index test results interpreted without knowledge of the results of the reference standard?
This question was not applicable.
11. Were the reference standard results interpreted without knowledge of the results of the index test?
This question was not applicable.
12. Were the same clinical data available when test results were interpreted as would be available when the test is used in practice?
This question was not applicable. We did not aim at one particular reference test.
13. Were uninterpretable/intermediate test results reported?
This question was unmodified.
14. Were withdrawals from the study explained?
This question was unmodified.

Results of Search and Selection Process

The search of PubMed, EMBASE, PsycINFO and the Cochrane Library yielded a total of 799 articles. Duplicates were removed (n = 75) and 724 articles were screened by abstract. In addition, one article was identified by an expert (Barsingerhorn et al., 2018). A total of 686 articles were discarded for not meeting the inclusion criteria: 400 articles did not contain our primary outcome measures, 123 articles included children with additional impairments (intellectual disabilities, amblyopia, psychiatric disorders) and no control data, and 84 articles only concerned medical diagnosis. Another 41 articles were excluded because the age of the subjects fell outside the primary school years (<4 or >12 years). After full-text inspection, eight articles were discarded (see PRISMA Flow Chart, Figure 1). The remaining 30 quantitative studies consisted of 12 non-randomized controlled trials (non-RCTs), 10 cross-sectional studies, two cohort studies, and six case-control studies.

Figure 1. PRISMA 2009 Flow Chart.

Description of the Included Studies

Thirty-three studies were included. None of the included articles described viewing strategies as an outcome measure. To facilitate comprehensibility, we categorized the included studies as follows: (1) studies regarding viewing strategies, (2) studies describing viewing behavior in children with normal vision, (3) studies describing viewing behavior in children with (cerebral) visual impairment, and (4) intervention studies focused on improving viewing behavior. We found five studies regarding viewing strategies (Robinson and Conway, 1994; Robinson and Foreman, 1999; Wilkinson et al., 2008; Pollux et al., 2014; Vinuela-Navarro et al., 2017). None of these five studies involved children with (C)VI.

Fourteen of the included studies focused on viewing behavior in children with normal vision. A large part of these articles described differences between good readers and children with reading difficulties. Three studies were found that evaluated viewing behavior in children with (C)VI. Finally, thirteen studies evaluated the effectiveness of behavioral treatment on visual processing proficiency in children with normal vision and/or children with (C)VI. The characteristics and outcomes of these studies are presented in Supplementary Tables 1–3. The quality assessment is presented in Supplementary Table 4a (non-intervention studies) and Supplementary Table 4b (intervention studies).

Viewing Strategies

A variety of viewing strategies were mentioned in five studies. None of these five studies involved children with (cerebral) visual impairment. The tasks for which viewing strategies were described varied across these studies: three studies mentioned strategies during reading (Robinson and Conway, 1994; Robinson and Foreman, 1999; Vinuela-Navarro et al., 2017), one study described gaze strategies during categorization of subtle facial expressions (Pollux et al., 2014) and one study described search strategies during symbol discrimination (Wilkinson et al., 2008). Study characteristics and the descriptions of the viewing strategies mentioned are presented in Table 4.

Study characteristics and descriptions of viewing strategies.

Pollux et al. (2014), cohort study
• Participants: children with normal vision (N = 16); adults with normal vision (N = 16, not further regarded)
• Age: 8;2–9;3 years
• Task: four training sessions on four consecutive days with a self-paced, free-viewing facial expression categorization task using emotional faces with varying intensity levels
• Outcome measures: behavioral measures (accuracy, response times, incorrect responses); eye movement measures (number of fixations, proportion of fixations and viewing times on different facial features, proportion of fixations and viewing times on different facial features during the second fixation)
• Gaze strategy: to categorize subtle facial expressions, a holistic gaze strategy is necessary to extract relevant facial cues from all internal features.

Robinson and Conway (1994), non-RCT
• Participants: experimental group (N = 29 children with reading or study problems); control group (N = 31 children with similar reading and learning problems, age matched)
• Age: 9–14 years
• Task: four months of Irlen filter (color overlay) use
• Outcome measures: questionnaire relating to reading and writing performance; series of visual tasks
• Reading strategy: reading strategies in poor readers (based on analyses of reading errors) involved (1) guessing words from single-letter cues, (2) rereading of lines, and (3) skipping words or lines.

Robinson and Foreman (1999), non-RCT
• Participants: experimental group (N = 113 children with reading difficulties); control group (N = 35 children with reading difficulties, age matched)
• Age: 9–13 years
• Task: initial test, placebo tint (4 months), diagnosed tint (8 months) and diagnosed tint (20 months after baseline)
• Outcome measures: questionnaire relating to reading and writing performance; series of visual tasks and assessment
• Reading strategy: see above.

Vinuela-Navarro et al. (2017), case control
• Participants: N = 120 children without delayed reading skills; N = 43 children with delayed reading skills
• Age: 4–11 years
• Outcome measures: main sequence (collected with the Tobii TX300 eye tracker by showing children cartoon characters horizontally from −20° to +20° in steps of 5°); fixation stability (by showing children an animated stimulus in the center of the screen for 8 s); saccade number; saccade amplitude
• Reading strategy: good readers showed a similar eye movement strategy for each line of text during reading (comparable numbers of saccades, fixations and fixation durations); poor readers performed very differently on each line (unstructured and disorganized eye movement strategy).

Wilkinson et al. (2008), cross sectional
• Participants: N = 10 children with Down Syndrome (DS); N = 8 typically developing children >4 years (TDO); N = 8 typically developing children <4 years (TDY)
• Age: DS: 106–201 months; TDO: 48–57 months; TDY: 40–46 months
• Task: line drawings of symbols in two color conditions (clustered and distributed arrays); (1) auditory-visual matching of food stimuli, (2) visual-visual matching of clothing stimuli, (3) visual-visual matching of activity stimuli
• Outcome measures: accuracy (percentage correct); speed (reaction time)
• Search strategy: color cueing facilitates visual search for symbols.

Vinuela-Navarro et al. (2017) explicitly addressed oculomotor strategies used by children with and without reading problems. Children without reading problems showed similar eye movement strategies for each line of text (similar numbers of saccades and fixation durations), while children with reading problems showed a disorganized oculomotor strategy that differed per line (more fixations and extended fixation durations, both progressive and regressive).

In the two studies by Robinson and Conway (1994) and Robinson and Foreman (1999), reading strategies were deduced from the types of reading errors made by children with reading problems, namely pauses and refusals (grapheme substitutions or semantic substitutions). Based on an oral reading error analysis, the findings indicate that the use of an Irlen lens decreases the number of pauses and increases reading rate. The authors conclude that enhanced print clarity may facilitate the use of rereading as a strategy for monitoring word recognition, thereby enhancing reading comprehension.

Pollux et al. (2014) investigated gaze distributions in facial expression categorization. In this study, participants (adults and 9-year-olds) received feedback directly after categorizing a facial expression with a variety of meanings (happy, sad, fearful). Eye movements were measured during the first and fourth training sessions. This study showed that a holistic gaze strategy is used to extract relevant facial cues from all internal features when categorizing subtle expressions; more holistic processing is needed to categorize subtle expressions more accurately.

Wilkinson et al. (2008) addressed search strategies in a study on the use of a visual communication array for children with Down Syndrome. Reaction times to visual symbols in an array were measured for typically developing children and children with Down Syndrome. Both groups showed shorter reaction times to a verbal or visual cue to find the target symbol in an array when symbols were clustered by color, indicating that color cueing facilitates visual search and can be used as a visual strategy to shorten reaction times.

Viewing Behavior: Normal Vision

Fourteen studies regarding viewing behavior in school-based tasks, mainly reading, in children with normal vision were included (see Supplementary Table 1). One study evaluated the relationship between optometric parameters, e.g., refractive error and binocular vision measures, and reading skills (Vinuela-Navarro et al., 2017). The authors found no relationship between optometric measures (visual acuity, refractive error, ocular alignment, convergence, stereopsis and accommodation accuracy) and reading skills. Riddell et al. (1990) compared children with good and poor vergence control on spatial localization skills. Children with unstable vergence had trouble localizing small objects in visual space, which the authors attributed to problems maintaining an accurate visuospatial map. For reading, the ability to localize letters within a word might be impaired in children with poor vergence control.

Oculomotor measure comparisons between average and poor readers were collected in three studies (Medland et al., 2010; Nilsson Benfatto et al., 2016; Vinuela-Navarro et al., 2017). These studies demonstrated that: (1) average readers required fewer fixations while reading a line of text than children with reading difficulties, while fixation stability was comparable (Vinuela-Navarro et al., 2017); (2) in average readers, fixation durations (progressive and regressive) were shorter than in children with reading problems (Nilsson Benfatto et al., 2016; Vinuela-Navarro et al., 2017); (3) saccade amplitudes were greater for children without reading problems than for children with reading problems (Nilsson Benfatto et al., 2016), despite normal saccadic main sequences in children with reading problems (Vinuela-Navarro et al., 2017); and (4) children without reading problems showed faster reading times for the habitual reading direction (left to right for English readers vs. right to left for Arabic readers) (Medland et al., 2010). The studies by Vinuela-Navarro et al. (2017) and Medland et al. (2010) demonstrated that oculomotor functions are integral to reading processes, but they do not give insight into causality (i.e., whether reading problems result in abnormal eye movements or abnormal eye movements result in reading problems).

Visual attention was measured in two studies comparing normal and dyslexic readers (Solan et al., 2007; Franceschini et al., 2017). Franceschini et al. (2017) provided evidence for differences in visual attentional processes between children with dyslexia and normal controls; in their study, five behavioral experiments provided evidence for global perception as predictive of reading skills. Solan et al. (2007) evaluated the relation between reading comprehension, visual attention and magnocellular processing (as measured with a coherent motion task) in Grade 7 students (19 good readers and 23 poor readers). Group differences were found on all measures. The authors concluded that diagnostic batteries for students identified as reading disabled should include magnocellular and visual attention tests.

The relation between visual temporal processing and reading performance was evaluated using spatial visual stimuli (dot counting) in the study by Eden et al. (1995). Three groups of 5th graders were compared with regard to performance on the temporal and spatial dot counting tasks: children with normal vision (n = 39), reading disabled children (n = 26), and backward reading children (n = 12). The control group performed better on the temporal dot counting task than the reading disabled children, while there were no group differences on the spatial dot counting task. This study emphasizes the relationship between visual temporal processing skills and reading performance. Spatial abilities might, however, play a role in predicting reading ability in young children: Fisher et al. (1985) found evidence for a relation between visuospatial abilities (left-right coding) and (pre)reading skills in 4–7-year-old children.

The relation between visual learning and reading skills for children with and without reading problems was evaluated by Tong et al. (2019) and Garcia et al. (2019). Both studies showed that children without reading problems are better at learning structures in visual elements (visual statistical learning; Tong et al., 2019) and fixed combinations (fixed bindings in four-element nonsense words and shapes; Garcia et al., 2019) than children with reading problems. Visual statistical learning, the ability to unconsciously extract and integrate the structure of various (visual or auditory) elements to produce a unitary structure for further learning, appeared to be a significant predictor of Chinese word reading (Tong et al., 2019).

Lutzer (1986) measured color discrimination in children with varying intellectual capacities. This study showed no difference in the ability to match colors to a sample after training in ranking colors vs. match-to-sample training. Children with lower cognitive abilities made more errors in color discrimination than average and gifted children.

In sum, oculomotor studies demonstrate that: (1) average readers use fewer fixations and regressions during reading, and (2) the shorter fixation durations found in average readers are linked with greater saccade amplitudes during reading. Eye movements take a developmental leap when children learn to read more fluently, but are not essential for learning to read. Visual attentional processes seem to influence reading performance as well: average readers more often used global before local processing and performed better on visual temporal processing tasks (like dot counting and coherent motion) than children with reading problems.

Viewing Behavior: (Cerebral) Visual Impairment

Three studies were included with viewing behavior as an outcome measure in children with (C)VI (see Supplementary Table 2). These studies evaluated viewing behavior in children with (C)VI and children with normal vision (Kooiker et al., 2014, 2015; Barsingerhorn et al., 2018). Barsingerhorn et al. (2018) used a speed-acuity test in which children with (C)VI were asked to indicate the orientation of a Landolt C symbol for a range of acuities surrounding their threshold acuity. Both children with ocular VI and children with CVI showed longer reaction times to the visual symbols than controls: children with ocular pathology were 170 ± 28 (SD) ms slower than children with normal vision, and children with CVI were 232 ± 36 (SD) ms slower. In addition, reaction times for children with ocular VI and children with CVI were also longer for simple visual and auditory detection tasks, which might point to a more general underlying problem in sensorimotor functioning. Kooiker et al. (2014, 2015) measured reaction-to-fixation times while presenting a cartoon image on a screen. Quantitative measurement of orienting responses showed longer reaction times for children at risk for CVI and children with a clinical diagnosis of CVI compared to typically developing children. Overall, reaction times were shortest for cartoon stimuli, followed by contrast and form stimuli. Kooiker et al. also showed an increased fixation area on cartoons in children with low visual acuity and children with nystagmus compared to normal controls.

Overall, children with (C)VI showed longer reaction times to visual stimuli than controls. In addition, children with low visual acuity and children with nystagmus showed increased gaze fixation areas on cartoons.

Interventions

Twelve non-randomized controlled trials (non-RCTs) and one cohort study were included in this review (see Supplementary Table 3). Seven of the non-RCT studies involved children with (C)VI (Obrzut et al., 1982; Huurneman et al., 2013, 2016a, b, c, 2020; Yu et al., 2020). Five intervention studies focused on children with reading problems versus controls (Bieger, 1974; Robinson and Conway, 1994; Robinson and Foreman, 1999; Hall et al., 2013; Zhao et al., 2019).

One training program focused on improving visual processing skills in children with poor visual processing (Obrzut et al., 1982). The training program, "Learning to Look and Listen," consists of three sections aimed at viewing strategies: (1) hierarchical analysis (part vs. whole), (2) systematic scanning and (3) dimensional differences. After visual information processing training, performance on visual perceptual tasks such as the Bender Gestalt Closure task improved for the training group compared to a contrast group and a control group. The training group maintained their progress 5–6 weeks after the training. However, academic performance did not change after training.

Three studies used visual training programs to improve reading skills (Bieger, 1974; Huurneman et al., 2016b; Zhao et al., 2019). One of these studies was conducted in children with visual impairment (i.e., infantile nystagmus) (Huurneman et al., 2016a, b, c). Huurneman et al. (2016b) compared reading performance in children with infantile nystagmus (IN) and controls. Maximum reading speed and acuity reserve did not differ between the groups. However, reading acuity and critical print size were larger for children with IN than for normal controls. After ten computerized crowding training sessions, both reading acuity and critical print size improved. The results of the reading study indicate that not only visual acuity but also crowding is related to reading performance in children with IN, and that training on a computerized crowded letter discrimination task can contribute to improvements in reading performance. The study by Bieger (1974) showed no improvement in the ability to discriminate single words after visual training (i.e., the Frostig program plus visual components). Although non-readers with perceptual problems improved on perceptual skills after visual training, reading skills did not improve. Zhao et al. (2019) created research groups based on visual attentional span (VAS; intact or dysfunctional) and reading performance (dyslexic vs. normal readers). For children with dyslexia and VAS dysfunction, visual training improved not only VAS function but reading performance as well. Children with dyslexia and intact VAS did not improve in reading performance after VAS-based training. VAS-based training included bottom-up attention (length estimation task), top-down attentional modulation (visual search and digit canceling) and eye movement control (visual tracking).

Three studies evaluated the effect of the use of colored overlays. These studies showed inconsistent evidence that colored overlays improve reading performance (Robinson and Conway, 1994; Robinson and Foreman, 1999; Hall et al., 2013).

Huurneman et al. conducted a variety of studies on perceptual learning in children with visual impairment (Huurneman et al., 2013, 2016a, b, c, 2020). Perceptual learning has been shown to improve visual functions such as near visual acuity, stereopsis and crowding in children with visual impairment, and these improvements are retained over time (Huurneman et al., 2020). In their first study, comparing different types of pen-and-paper perceptual learning paradigms (i.e., a magnifier group, an uncrowded letter training group and a crowded letter training group; Huurneman et al., 2013), task-specific improvements were shown in all groups. Only the crowded perceptual learning group showed transfer to crowded near visual acuity (Huurneman et al., 2013). The improvements in near visual acuity after pen-and-paper visual training did not transfer to distance visual acuity or fine motor skills after six weeks (Huurneman et al., 2020). Improvements in distance visual acuity and reading, indicating broad learning transfer, were shown after a computerized crowded letter discrimination training (Huurneman et al., 2016a, b, c). A study by Yu et al. (2020) indicates that perceptual learning can be enhanced by the use of electronic visual devices (5–10× magnification) for children with moderate to severe visual impairment.

In sum, although different visual processing skills improved after training (i.e., gestalt closure, reading acuity), effects on academic outcome measures are often not reported or absent (Obrzut et al., 1982). The extent of learning transfer seems to depend on the training paradigm used. The use of colored overlays to improve detailed word analysis showed varying results. Computerized crowded letter discrimination training and VAS-based training showed potential positive effects on reading performance. Perceptual learning was shown to improve visual functions such as near visual acuity and crowding in children with visual impairment, and improvements seem to be retained over time. For moderate to severe visual impairment, additional use of electronic aids might boost the effect of perceptual learning paradigms.

In this scoping review we focused on viewing strategies used during near school-based tasks and on possible interventions targeting viewing strategies for children with (cerebral) visual impairment. The main goals were to define a concept of viewing strategies and to compare viewing strategies between children with normal vision and children with (cerebral) visual impairment. We found no published research regarding viewing strategies for children with (cerebral) visual impairment, which makes a comparison between groups impossible. This lack of published research illustrates a wide gap between daily child vision rehabilitation practice, in which viewing strategies are a key component, and scientific evidence. Even for school-aged children with normal vision, literature about viewing strategies is scarce.

To create a concept of viewing strategies, information could be extracted from a total of five studies in which the use of viewing strategies is mentioned (Robinson and Conway, 1994; Robinson and Foreman, 1999; Wilkinson et al., 2008; Pollux et al., 2014; Vinuela-Navarro et al., 2017). In the introduction of this paper, a viewing strategy was described as a conscious and systematic way of approaching a task. Since the use of the term viewing strategies was sparse, all descriptions of more implicit or unconscious strategies were included in this review. Viewing strategies appear to be task-dependent. Within the broader concept of viewing strategies, a differentiation seems necessary depending on the visual task at hand: gaze strategies, reading strategies and search strategies.

A remarkable finding is that none of the five studies mentioning viewing strategies address the role of attentional mechanisms. From the descriptions of viewing strategies presented in Table 4, it is clear that the strategies are not restricted to one form of visual processing, but rather target the activation of attention systems. We have therefore connected the viewing strategies described in this review with the attention networks defined by Posner (see Figure 2; Petersen and Posner, 2012). For visual search, the strategy mentioned was to use color cueing to facilitate visual search (Wilkinson et al., 2008). Color cueing might support the alerting network, because it can function as a pre-attentive signal to guide visual attention to a certain visual array (Wolfe and Utochkin, 2019). Using pop-out stimuli involves a sensory-driven or bottom-up mechanism directing perception toward a subset of stimuli, which might be a useful strategy for children with attention deficits. For some children, the use of colored overlays may help to arouse and/or sustain visual attention (Robinson and Conway, 1994). The opposite of bottom-up is top-down processing, which involves goal-directed mechanisms determined by the individual's goals (Donnelly et al., 2007). Top-down "guided" search strategies are more commonly used in conjunctive search tasks in which an individual has to look for a combination of features (i.e., consider color and shape). From an attention network point of view, top-down processing is especially dependent on executive network activity (Petersen and Posner, 2012). Executive network control improves during development and can be the aim of viewing strategy training. Gaze strategies refer to the part of the visual array that is processed, a strategy that can be linked to the orienting attention network. The orienting attention network enables prioritizing sensory input by selecting a modality or location, a process in which both frontal and parietal areas are implicated (Petersen and Posner, 2012). A holistic strategy appears to be helpful for extracting all relevant visual cues from a visual stimulus, for example in recognizing facial expressions and reading (Pollux et al., 2014; Franceschini et al., 2017). Reading strategies can involve a structured way of moving the eyes through the lines of text (Vinuela-Navarro et al., 2017). Average readers showed similar oculomotor behavior for each line of text, and oculomotor functions seem to improve during the primary school years (Medland et al., 2010). The analysis of oral reading errors by Robinson and Conway (1994) showed that a more detailed word analysis supported reading comprehension; children with reading problems more often guessed words from single-letter cues, reread lines, and skipped words or lines. The executive network is involved in the visual discrimination of distinguishing visual features, i.e., letters during reading or search. We propose that viewing strategies which a child can use consciously and adjust to the task at hand will support the executive network system.

Figure 2. Viewing strategies in relation to Posner's attention networks.

The final goal of this scoping review was to identify possible interventions targeting viewing strategies that can improve visual processing. Due to the diversity in interventions and outcome measures, outcomes were presented in a narrative manner. We did not find any intervention studies targeting viewing strategies for children with (C)VI. Since improving viewing strategies is a common training goal in visual rehabilitation, more research concerning viewing strategies used by children with (C)VI is needed. To date, it remains unknown whether there are differences in viewing strategies between children with normal vision and children with (C)VI. Even if school-aged children use the same kinds of viewing strategies, it remains unclear whether the quality of the viewing strategies differs between these groups, for example in speed of processing or accuracy of visual identification. Three studies showed lower visual processing speed in children with (C)VI (Kooiker et al., 2014, 2015; Barsingerhorn et al., 2018), although no relation with viewing strategies was investigated. To learn more about viewing strategies in typical and impaired childhood vision, future research should be directed at comparing spatial (gaze patterns) and temporal aspects of oculomotor behavior during task performance (e.g., reading and search).

Despite the lack of concrete measures in the field of viewing strategies, the included studies offer some starting points for the development of a training program targeting viewing strategies in children with (C)VI. The influence of attentional processes on viewing behavior is shown in a variety of studies and stresses the importance of investigating the potential benefits of targeting task-relevant attention networks in visual training. VAS dysfunction, or more specifically disorders in visual selective attention processes, can be one of the visual dysfunctions in children with CVI (Zuidhoek, 2020). VAS-based training for children with dyslexia and VAS dysfunction showed improvements not only in visual attention span, but also in reading performance (Zhao et al., 2019). Including VAS-based elements (e.g., a length estimation task, digit canceling and visual tracking to train eye movement control) in viewing strategy training could have an impact on attention network activity. Although eye movements appear to develop when children learn to read more fluently (Medland et al., 2010), training a structured way of moving the eyes over a line of text or images might improve reading speed as well. At the moment, it is unclear whether merely practicing a viewing strategy, like moving the eyes to specific spots on a line, is enough to improve attentional processes in children with (C)VI. For complex skills such as reading (especially in children with deficits in visual attentional processing), enhancing "guided" top-down control, e.g., by consciously adopting a task-appropriate viewing strategy, can be expected to have a larger beneficial impact than oculomotor training.

Vision training targeting improvements in general visual processing, such as the "Frostig Visual Perception Training" or the "Learning to Look and Listen" program, showed ambiguous results (Bieger, 1974; Obrzut et al., 1982). Although children showed task-specific improvements after training, there was no learning transfer to educational measures. Perceptual learning, defined as a consistent change in the perception of a stimulus array following practice or experience with this array, resulted in improvements in a variety of visual functions, such as near visual acuity, binocular vision and spatial aspects of reading, in children with VI (Huurneman et al., 2013, 2016a, b, c, 2020; Yu et al., 2020). We did not find any intervention studies evaluating the effect of perceptual learning in children with CVI. Perceptual learning paradigms do not aim at the use of conscious viewing strategies, but lead to performance improvements on a perceptual task as a result of experience or repeated exposure to the task. Learning by perceptual experience differs from explicit top-down guided instruction to support visual processing. Traditional perceptual learning paradigms require a considerable amount of attention from a child. If there is a deficit in visual attention, a more logical first step in rehabilitation might be to teach the child viewing strategies that support visual attention.

In conclusion, this scoping review provides new leads toward the development of a viewing strategy training that can support visual attention processing. However, the specific relation between strategic viewing processes and academic performance remains unclear. A possible limitation of the current review is that the term 'search strategies' was not included in the literature query, because including it would have widened the scope too much. Zhao et al. (2019) described visual search as a top-down process when target-distractor similarity is high. In visual search studies, both bottom-up and top-down processes are regularly described. For example, Donnelly et al. (2007) showed that targets with one unique feature (i.e., color or size) can be processed quickly regardless of the number of distractors (a flat search curve). If a target can only be distinguished on the basis of a combination of features (i.e., conjunction search), search time increases as a function of the number of distractors. Interestingly, only conjunction search (and not feature search) efficiency seems to be affected by age: older children show shorter search times as the number of distractors increases (Donnelly et al., 2007). Future research is needed to examine the relation between the use of viewing strategies and visual performance, and the role of development in the use of viewing strategies.
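The feature- versus conjunction-search contrast described above is often summarized as a linear "search slope" model: predicted response time grows roughly linearly with the number of distractors, with a near-zero slope for feature (pop-out) search and a positive slope for conjunction search. The following minimal sketch illustrates only the shape of that relation; the 500 ms intercept and 40 ms/item slope are assumed, illustrative values, not estimates reported by Donnelly et al. (2007):

```python
# Toy search-slope model: RT = intercept + slope * number of distractors.
# All parameter values are illustrative assumptions, not empirical data.

def predicted_rt(n_distractors: int, slope_ms: float,
                 intercept_ms: float = 500.0) -> float:
    """Predicted response time (ms) for a visual search display."""
    return intercept_ms + slope_ms * n_distractors

FEATURE_SLOPE_MS = 0.0       # "flat curve": a unique color/size pops out
CONJUNCTION_SLOPE_MS = 40.0  # serial-like search over candidate items

for n in (4, 8, 16):
    feat = predicted_rt(n, FEATURE_SLOPE_MS)
    conj = predicted_rt(n, CONJUNCTION_SLOPE_MS)
    print(f"set size {n:2d}: feature {feat:.0f} ms, conjunction {conj:.0f} ms")
```

In this toy model, the developmental effect reported for conjunction search corresponds to a smaller slope value in older children, while the feature-search slope stays near zero at all ages.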

Conclusions

This scoping review shows that viewing strategies involved in daily school-based tasks such as reading are a relatively new concept. We identified a substantial gap between the current scientific literature and daily rehabilitation practice, in which training programs aimed at viewing strategies are common. It remains unclear if and how the viewing strategies of children with (cerebral) visual impairment differ from those of children with normal vision. Clear definitions of viewing strategies were not found in the included studies; gaze strategies, search strategies, and reading strategies were described as elements contributing to visual processing. The use of viewing strategies might influence different visual attention networks (i.e., the alerting, orienting, and executive networks), and we propose that the role of all three networks should be considered in the development of an effective viewing strategy training. Although the scoping review revealed attentional processes that are involved in viewing behavior, we did not find any interventions targeting visual attentional processing in children with (cerebral) visual impairment. The relation between strategic viewing processes and academic performance therefore remains unclear.

Data Availability Statement

Author Contributions

Literature screening and selection, data extraction, and synthesis were performed by AF-V and BH. AF-V prepared the first draft of the manuscript. BH and FB reviewed and approved the manuscript. All authors read and approved the final manuscript.

Funding

This research was supported by Novum. The funder was not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors wish to express their appreciation to Alice Tilema, Radboud University, for her assistance during the literature search.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.898719/full#supplementary-material

  • Barsingerhorn A. D., Boonstra F. N., Goossens J. (2018). Symbol discrimination speed in children with visual impairments. Invest. Ophthalmol. Vis. Sci. 59, 3963–3972. doi: 10.1167/iovs.17-23167
  • Bennett C. R., Bauer C. M., Bailin E. S., Merabet L. B. (2020). Neuroplasticity in cerebral visual impairment (CVI): assessing functional vision and the neurophysiological correlates of dorsal stream dysfunction. Neurosci. Biobehav. Rev. 108, 171–181. doi: 10.1016/j.neubiorev.2019.10.011
  • Bieger E. (1974). Effectiveness of visual perceptual training on reading skills of non-readers, an experimental study. Percept. Mot. Skills 38, 1147–1153. doi: 10.2466/pms.1974.38.3c.1147
  • Bryck R. L., Fisher P. A. (2012). Training the brain: practical applications of neural plasticity from the intersection of cognitive neuroscience, developmental psychology, and prevention science. Am. Psychol. 67, 87–100. doi: 10.1037/a0024657
  • Cappagli G., Cocchi E., Gori M. (2017). Auditory and proprioceptive spatial impairments in blind children and adults. Dev. Sci. 20, e12374. doi: 10.1111/desc.12374
  • Donnelly N., Cave K., Greenway R., Hadwin J. A., Stevenson J., Sonuga-Barke E. (2007). Visual search in children and adults: top-down and bottom-up mechanisms. Q. J. Exp. Psychol. 60, 120–136. doi: 10.1080/17470210600625362
  • Eden G. F., Stein J. F., Wood H. M., Wood F. B. (1995). Temporal and spatial processing in reading disabled and normal children. Cortex 31, 451–468. doi: 10.1016/S0010-9452(13)80059-7
  • Fazzi E., Signorini S. G., Bova S. M., La Piana R., Ondei P., Bertone C., et al. (2007). Spectrum of visual disorders in children with cerebral visual impairment. J. Child Neurol. 22, 294–301. doi: 10.1177/08830738070220030801
  • Fine I., Park J. M. (2018). Blindness and human brain plasticity. Annu. Rev. Vis. Sci. 4, 337–356. doi: 10.1146/annurev-vision-102016-061241
  • Finocchietti S., Cappagli G., Gori M. (2015). Encoding audio motion: spatial impairment in early blind individuals. Front. Psychol. 6, 1357. doi: 10.3389/fpsyg.2015.01357
  • Fisher C. B., Bornstein M. H., Gross C. G. (1985). Left-right coding and skills related to beginning reading. J. Dev. Behav. Pediatr. 6, 279–283. doi: 10.1097/00004703-198510000-00009
  • Franceschini S., Bertoni S., Gianesini T., Gori S., Facoetti A. (2017). A different vision of dyslexia: local precedence on global perception. Sci. Rep. 7, 17462. doi: 10.1038/s41598-017-17626-1
  • Garcia R. B., Tomaino A., Cornoldi C. (2019). Cross-modal working memory binding and learning of visual-phonological associations in children with reading difficulties. Child Neuropsychol. 25, 1063–1083. doi: 10.1080/09297049.2019.1572729
  • Hall R., Ray N., Harries P., Stein J. (2013). A comparison of two-colored filter systems for treating visual reading difficulties. Disabil. Rehabil. 35, 2221–2226. doi: 10.3109/09638288.2013.774440
  • Huurneman B., Boonstra F. N., Cox R. F., van Rens G., Cillessen A. H. (2013). Perceptual learning in children with visual impairment improves near visual acuity. Invest. Ophthalmol. Vis. Sci. 54, 6208–6216. doi: 10.1167/iovs.13-12220
  • Huurneman B., Boonstra F. N., Goossens J. (2016a). Perceptual learning in children with infantile nystagmus: effects on 2D oculomotor behavior. Invest. Ophthalmol. Vis. Sci. 57, 4229–4238. doi: 10.1167/iovs.16-19555
  • Huurneman B., Boonstra F. N., Goossens J. (2016b). Perceptual learning in children with infantile nystagmus: effects on reading performance. Invest. Ophthalmol. Vis. Sci. 57, 4239–4246. doi: 10.1167/iovs.16-19556
  • Huurneman B., Boonstra F. N., Goossens J. (2016c). Perceptual learning in children with infantile nystagmus: effects on visual performance. Invest. Ophthalmol. Vis. Sci. 57, 4216–4228. doi: 10.1167/iovs.16-19554
  • Huurneman B., Boonstra F. N., Goossens J. (2020). Specificity and retention of visual perceptual learning in young children with low vision. Sci. Rep. 10, 8873. doi: 10.1038/s41598-020-65789-1
  • Kooiker M. J., Pel J. J., van der Steen J. (2014). Viewing behavior and related clinical characteristics in a population of children with visual impairments in the Netherlands. Res. Dev. Disabil. 35, 1393–1401. doi: 10.1016/j.ridd.2014.03.038
  • Kooiker M. J., Pel J. J., van der Steen J. (2015). The relationship between visual orienting responses and clinical characteristics in children attending special education for the visually impaired. J. Child Neurol. 30, 690–697. doi: 10.1177/0883073814539556
  • Kooiker M. J., Pel J. J., van der Steen-Kant S. P., van der Steen J. (2016). A method to quantify visual information processing in children using eye tracking. J. Vis. Exp. 16, 1–11. doi: 10.3791/54031
  • Kran B. S., Lawrence L., Mayer D. L., Heidary G. (2019). Cerebral/cortical visual impairment: a need to reassess current definitions of visual impairment and blindness. Semin. Pediatr. Neurol. 31, 25–29. doi: 10.1016/j.spen.2019.05.005
  • Kujala T., Alho K., Näätänen R. (2000). Cross-modal reorganization of human cortical functions. Trends Neurosci. 23, 115–120. doi: 10.1016/S0166-2236(99)01504-0
  • Kulp M. T., Schmidt P. P. (1996). Effect of oculomotor and other visual skills on reading performance: a literature review. Optom. Vis. Sci. 73, 283–292. doi: 10.1097/00006324-199604000-00011
  • Lutzer V. D. (1986). Perceptual learning by educable mentally retarded, average, and gifted children of primary school age. Percept. Mot. Skills 62, 959–966. doi: 10.2466/pms.1986.62.3.959
  • Martin M. B., Santos-Lozano A., Martin-Hernandez J., Lopez-Miguel A., Maldonado M., Baladron C., et al. (2016). Cerebral versus ocular visual impairment: the impact on developmental neuroplasticity. Front. Psychol. 7, 1958. doi: 10.3389/fpsyg.2016.01958
  • Medland C., Walter H., Woodhouse J. M. (2010). Eye movements and poor reading: does the Developmental Eye Movement test measure cause or effect? Ophthalmic Physiol. Opt. 30, 740–747. doi: 10.1111/j.1475-1313.2010.00779.x
  • Nilsson Benfatto M., Oqvist Seimyr G., Ygge J., Pansell T., Rydberg A., Jacobson C. (2016). Screening for dyslexia using eye tracking during reading. PLoS ONE 11, e0165508. doi: 10.1371/journal.pone.0165508
  • Obrzut J. E., Hansen R. L., Heath C. P. (1982). The effectiveness of visual information processing training with Hispanic children. J. Gen. Psychol. 107, 165–174. doi: 10.1080/00221309.1982.9709922
  • Pasqualotto A., Newell F. N. (2007). The role of visual experience on the representation and updating of novel haptic scenes. Brain Cogn. 65, 184–194. doi: 10.1016/j.bandc.2007.07.009
  • Petersen S. E., Posner M. I. (2012). The attention system of the human brain: 20 years after. Annu. Rev. Neurosci. 35, 73–89. doi: 10.1146/annurev-neuro-062111-150525
  • Philip S. S., Dutton G. N. (2014). Identifying and characterising cerebral visual impairment in children: a review. Clin. Exp. Optom. 97, 196–208. doi: 10.1111/cxo.12155
  • Pollux P. M., Hall S., Guo K. (2014). Facial expression training optimises viewing strategy in children and adults. PLoS ONE 9, e105418. doi: 10.1371/journal.pone.0105418
  • Riddell P. M., Fowler M. S., Stein J. F. (1990). Spatial discrimination in children with poor vergence control. Percept. Mot. Skills 70(3 Pt 1), 707–718. doi: 10.2466/pms.1990.70.3.707
  • Robinson G. L., Conway R. N. (1994). Irlen filters and reading strategies: effect of colored filters on reading achievement, specific reading strategies, and perception of ability. Percept. Mot. Skills 79(1 Pt 2), 467–483. doi: 10.2466/pms.1994.79.1.467
  • Robinson G. L., Foreman P. J. (1999). Scotopic sensitivity/Irlen syndrome and the use of colored filters: a long-term placebo-controlled study of reading strategies using analysis of miscue. Percept. Mot. Skills 88, 35–52. doi: 10.2466/pms.1999.88.1.35
  • Roder B., Rosler F., Spence C. (2004). Early vision impairs tactile perception in the blind. Curr. Biol. 14, 121–124. doi: 10.1016/j.cub.2003.12.054
  • Solan H. A., Shelley-Tremblay J. F., Hansen P. C., Larson S. (2007). Is there a common linkage among reading comprehension, visual attention, and magnocellular processing? J. Learn. Disabil. 40, 270–278. doi: 10.1177/00222194070400030701
  • Thomas M. G., Gottlob I., McLean R. J., Maconachie G., Kumar A., Proudlock F. A. (2011). Reading strategies in infantile nystagmus syndrome. Invest. Ophthalmol. Vis. Sci. 52, 8156–8165. doi: 10.1167/iovs.10-6645
  • Tong X., Leung W. W. S., Tong X. (2019). Visual statistical learning and orthographic awareness in Chinese children with and without developmental dyslexia. Res. Dev. Disabil. 92, 103443. doi: 10.1016/j.ridd.2019.103443
  • Vinuela-Navarro V., Erichsen J. T., Williams C., Woodhouse J. M. (2017). Saccades and fixations in children with delayed reading skills. Ophthalmic Physiol. Opt. 37, 531–541. doi: 10.1111/opo.12392
  • Whiting P. F., Rutjes A. W. S., Westwood M. E., Mallett S., Deeks J. J., Reitsma J. B., et al. (2011). QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 155, 529–536. doi: 10.7326/0003-4819-155-8-201110180-00009
  • Wilkinson K., Carlin M., Thistle J. (2008). The role of color cues in facilitating accurate and rapid location of aided symbols by children with and without Down syndrome. Am. J. Speech Lang. Pathol. 17, 179–193. doi: 10.1044/1058-0360(2008/018)
  • Williams C., Northstone K., Sabates R., Feinstein L., Emond A., Dutton G. N. (2011). Visual perceptual difficulties and under-achievement at school in a large community-based sample of children. PLoS ONE 6, e14772. doi: 10.1371/journal.pone.0014772
  • Wolfe J. M., Utochkin I. S. (2019). What is a preattentive feature? Curr. Opin. Psychol. 29, 19–26. doi: 10.1016/j.copsyc.2018.11.005
  • Yu M., Liu W., Chen M., Dai J. (2020). The assistance of electronic visual aids with perceptual learning for the improvement in visual acuity in visually impaired children. Int. Ophthalmol. 40, 901–907. doi: 10.1007/s10792-019-01257-8
  • Zhao J., Liu H., Li J., Sun H., Liu Z., Gao J., et al. (2019). Improving sentence reading performance in Chinese children with developmental dyslexia by training based on visual attention span. Sci. Rep. 9, 18964. doi: 10.1038/s41598-019-55624-7
  • Zuidhoek S. (2020). CVI in the Picture. When the Brain Is the Cause of Visual Impairment in Children. Huizen: Visio.


A Systematic Review on Inclusive Education of Students with Visual Impairment


1. Introduction

  • What are the perceptions of general education teachers toward the inclusion of students with visual impairment?
  • What factors impact attitudes of general education teachers toward inclusion?
  • What are the challenges in accessing academic subjects for students with visual impairment in inclusive settings?
  • What elements increase accessibility to academic subjects?

2.1. Steps 1 through 4

  • The articles must have been published in English in a peer-reviewed journal between 1980 and September 2018.
  • The articles must be related to students in a compulsory education program (that is, first grade to twelfth grade) with visual impairment (blind or low vision) and learning in general education classrooms/inclusive settings.
  • The articles must be related only to students with visual impairment, with no other disabilities.

2.2. Steps 5 and 6

3.1. General Education Teachers' Perception toward the Inclusion of Students with Visual Impairment

3.2. Factors That Impact Attitudes of General Education Teachers

3.2.1. Teacher-Related Factors

3.2.2. Student-Related Factors

3.2.3. Environment-Related Factors

3.3. Challenges Pertaining to Access to Academic Subjects for Students with Visual Impairment

3.4. The Elements That Increase Accessibility in Academic Subjects

3.4.1. General Education Teachers Possessing a Generic Set of Effective Pedagogical Strategies

3.4.2. Effective Teaching-Learning Tools

3.4.3. External Support

4. Discussion

4.1. Overview of Findings

4.2. Salient Suggestions to Improve Current Inclusive Education

4.2.1. Teacher Training

4.2.2. Holistic Support System with External Specialist Support

5. Conclusions

6. Limitations

Conflicts of Interest

  • Miyauchi, H.; Peter, P. Perceptions of students with visual impairment on inclusive education: A narrative meta-analysis. Hum. Res. Rehabil. 2020, 10, 4–25.
  • United Nations. Convention on the Rights of Persons with Disabilities and Optional Protocol. 2006. Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-with-disabilities.html#Fulltext (accessed on 1 August 2020).
  • Baker, E.T.; Wang, M.C.; Walberg, H.J. The effects of inclusion on learning. Synth. Res. 1995, 52, 33–35.
  • Jessup, G.; Bundy, A.C.; Broom, A.; Hancock, N. The social experiences of high school students with visual impairments. J. Vis. Impair. Blind. 2017, 111, 5–19.
  • Chang, S.C.H.; Schaller, J. The views of students with visual impairments on the support they received from teachers. J. Vis. Impair. Blind. 2002, 96, 558–575.
  • Brydges, C.; Mkandawire, P. Perceptions and concerns about inclusive education among students with visual impairments in Lagos, Nigeria. Int. J. Disabil. Dev. Educ. 2017, 64, 211–225.
  • Haegele, J.A.; Porretta, D. Physical activity and school-age individuals with visual impairments: A literature review. Adapt. Phys. Act. Q. 2015, 32, 68–82.
  • Haegele, J.A.; Zhu, X. Experiences of individuals with visual impairments in integrated physical education: A retrospective study. Res. Q. Exerc. Sport 2017, 88, 425–435.
  • Royal National Institute for the Blind. Children and Young People-England; RNIB Evidence-Based Review: London, UK, 2017.
  • Lieberman, L.J.; Lepore, M.; Lepore-Stevens, M.; Ball, L. Physical education for children. Am. Phys. Educ. Rev. 2019, 90, 30–38.
  • Ahsan, T.; Sharma, U. Pre-service teachers' attitudes towards inclusion of students with high support needs in regular classrooms in Bangladesh. Br. J. Spec. Educ. 2018, 45, 81–97.
  • Ministry of Education New Zealand. Criteria and Definitions for Ongoing Resourcing Scheme (ORS). Available online: https://www.education.govt.nz/school/student-support/special-education/ors/criteria-for-ors/ (accessed on 1 August 2020).
  • Spungin, S.J.; Huebner, K.M. Historical Perspectives. In Foundations of Education, History and Theory of Teaching Children and Youths with Visual Impairments, 3rd ed.; Holbrook, M.C., McCarthy, T., Kamei-Hannan, C., Eds.; American Foundation for the Blind: Arlington County, VA, USA, 2017; Chapter 1; pp. 3–49.
  • Kalloniatis, M.; Johnston, A.W. Visual environmental adaptation problems of partially sighted children. J. Vis. Impair. Blind. 1994, 88, 234–243.
  • Opie, J.; Deppeler, J.; Southcott, J. You have to be like everyone else: Support for students with vision impairment in mainstream secondary schools. Support Learn. 2017, 32, 267–287.
  • Finkelstein, S.; Sharma, U.; Furlonger, B. The inclusive practices of classroom teachers: A scoping review and thematic analysis. Int. J. Incl. Educ. 2019, 0, 1–28.
  • Monsen, J.J.; Ewing, D.L.; Kwoka, M. Teachers' attitudes towards inclusion, perceived adequacy of support and classroom learning environment. Learn. Environ. Res. 2014, 17, 113–126.
  • de Boer, A.; Pijl, S.J.; Minnaert, A. Regular primary schoolteachers' attitudes towards inclusive education: A review of the literature. Int. J. Incl. Educ. 2011, 15, 331–353.
  • Wall, R. Teachers' exposure to people with visual impairments and the effect on attitudes toward inclusion. RE View 2002, 34, 111–119.
  • George, A.L.; Duquette, C. The psychosocial experiences of a student with low vision. J. Vis. Impair. Blind. 2006, 100, 152–163.
  • Lieberman, L.J.; Robinson, B.L.; Rollheiser, H. Youth with visual impairments: Experiences in general physical education. RE View 2006, 38, 152–163.
  • de Verdier, K.; Ek, U. A longitudinal study of reading development, academic achievement, and support in Swedish inclusive education for students with blindness or severe visual impairment. J. Vis. Impair. Blind. 2014, 108, 461–472.
  • Hatlen, P. A personal odyssey on schools for blind children. J. Vis. Impair. Blind. 1993, 87, 171–173.
  • Sapp, W.; Hatlen, P. The expanded core curriculum: Where we have been, where we are going, and how we can get there. J. Vis. Impair. Blind. 2010, 104, 338–348.
  • Jessup, G.M.; Bundy, A.C.; Hancock, N.; Broom, A. Being noticed for the way you are: Social inclusion and high school students with vision impairment. Br. J. Vis. Impair. 2018, 36, 90–103.
  • Hess, I. Visually impaired pupils in mainstream schools in Israel: Quality of life and other associated factors. Br. J. Vis. Impair. 2010, 28, 19–33.
  • Ravenscroft, J.; Davis, J.; Bilgin, M.; Wazni, K. Factors that influence elementary school teachers' attitudes towards inclusion of visually impaired children in Turkey. Disabil. Soc. 2019, 34, 629–656.
  • Selickaitė, D.; Hutzler, Y.; Pukėnas, K.; Block, M.E.; Rėklaitienė, D. The analysis of the structure, validity, and reliability of an Inclusive Physical Education Self-Efficacy Instrument for Lithuanian physical education teachers. SAGE Open 2019, 9, 215824401985247.
  • Mushoriwa, T. Research Section: A study of the attitudes of primary school teachers in Harare towards the inclusion of blind children in regular classes. Br. J. Spec. Educ. 2001, 28, 142–147.
  • Pliner, S.; Hannah, M. The role of achievement in teachers' attitudes toward handicapped children. Acad. Psychol. Bull. 1985, 7, 327–335.
  • Bardin, J.A.; Lewis, S. A survey of the academic engagement of students with visual impairments in general education classes. J. Vis. Impair. Blind. 2008, 102, 472–483.
  • Bandura, A. Self-efficacy: Toward a unifying theory of behavioral change. Adv. Behav. Res. Ther. 1978, 1, 139–161.
  • Haegele, J.A. Inclusion illusion: Questioning the inclusiveness of integrated physical education. Quest 2019, 71, 387–397.
  • Abrahamson, D.; Flood, V.J.; Miele, J.A.; Siu, Y.T. Enactivism and ethnomethodological conversation analysis as tools for expanding Universal Design for Learning: The case of visually impaired mathematics students. ZDM Math. Educ. 2019, 51, 291–303.
  • Klingenberg, O.G.; Holkesvik, A.H.; Augestad, L.B. Digital learning in mathematics for students with severe visual impairment: A systematic review. Br. J. Vis. Impair. 2020, 38, 38–57.
  • Klingenberg, O.G.; Holkesvik, A.H.; Augestad, L.B.; Erdem, E. Research evidence for mathematics education for students with visual impairment: A systematic review. Cogent Educ. 2019, 6.
  • Koehler, K.; Wild, T. Students with visual impairments' access and participation in the science curriculum: Views of teachers of students with visual impairments. J. Sci. Educ. Stud. Disabil. 2019, 22, 1–17.
  • Teke, D.; Sozbilir, M. Teaching energy in living systems to a blind student in an inclusive classroom environment. Chem. Educ. Res. Pract. 2019, 20, 890–901.
  • Pino, A.; Viladot, L. Teaching-learning resources and supports in the music classroom: Key aspects for the inclusion of visually impaired students. Br. J. Vis. Impair. 2019, 37, 17–28.
  • Rogers, S. Learning braille and print together—The mainstream issues. Br. J. Vis. Impair. 2007, 25, 120–132.
  • West, J.; Houghton, S.; Taylor, M.; Ling, P.K. The perspectives of Singapore secondary school students with vision impairments towards their inclusion in mainstream education. Australas. J. Spec. Educ. 2004, 28, 18–27.
  • Haegele, J.A.; Lieberman, L.J. The current experiences of physical education teachers at schools for blind students in the United States. J. Vis. Impair. Blind. 2016, 110, 323–334.
  • Brody, L.E.; Mills, C.J. Gifted children with learning disabilities: A review of the issues. J. Learn. Disabil. 1997, 30, 282–296.
  • Whitmore, J.; Maker, C.J. Intellectual Giftedness in Disabled Persons; Aspen Publication: New York, NY, USA, 1985.
  • Besnoy, K.D.; Manning, S.; Karnes, F.A. Screening students with visual impairments for intellectual giftedness: A pilot study. RE view Rehabil. Educ. Blind. Vis. Impair. 2006, 37, 134–141.
  • Johnson, L. Teaching the visually impaired gifted youngster. J. Vis. Impair. Blind. 1987, 81, 51–52.
  • Porter, J.; Lacey, P. Safeguarding the needs of children with a visual impairment in non-VI special schools. Int. J. Disabil. Dev. Educ. 2014, 65, 108–118.
  • Morris, C.; Sharma, U. Facilitating the inclusion of children with vision impairment: Perspectives of itinerant support teachers. Australas. J. Spec. Educ. 2011, 35, 191–203.
  • Ben-Yehuda, S.; Leyser, Y.; Last, U. The inclusive practices of classroom teachers: A scoping review and thematic analysis. Int. J. Incl. Educ. 2011, 15, 17–34.
  • Sirriyeh, R.; Lawton, R.; Gardner, P.; Armitage, G. Reviewing studies with diverse designs: The development and evaluation of a new tool. J. Eval. Clin. Pract. 2012, 18, 746–752.


Bardin and Lewis (2008) [Academic engagement]
Country: USA
Purpose: To clarify the academic engagement of students with VI in general education classrooms.
Methods: A modified version of the Student Participation Questionnaire (SPQ) developed by Finn, Pannozzo, and Voelkl (1995) was formatted as an electronic survey and posted.
Participants: 77 general education teachers (preschool to 12th grade) who had a student with VI placed for academic instruction.
Findings: Based on the teachers' perceptions, about half (52%) of the students with VI were performing at grade level, 21.1% above grade level, and 26.7% below grade level. Teachers reported that VI students were engaged between half and most of the time in class. There was a discrepancy between the engagement level perceived by teachers and student performance levels. Possible explanations for this outcome are that the modified SPQ was not appropriate for measuring engagement levels of VI students, or that teachers were more generous in their overall estimates of student performance levels but rated them more precisely when using the modified SPQ.

Hess (2010) [Attitude]
Country: Israel
Purpose: To clarify whether school climate, staff attitude towards inclusion, and VI students' quality of life (QoL) are correlated, based on two hypotheses.
Methods: The research model included multiple variables. To measure school climate components and staff attitudes towards inclusion, a questionnaire developed by Halpin and Croft (1963) was used. For VI students' QoL, six different questionnaires and scales were used, including the self-esteem scale by Rosenberg (1965).
Participants: 63 VI pupils (ages 12 to 19) and 200 teachers from 40 different schools. The research sample was selected randomly.
Findings: When the school climate and teachers' attitudes towards inclusion were positive, there was a significant correlation between the self-reports of pupils and teacher evaluations regarding pupils' emotional and social status. In addition, when both the climate and attitudes were positive, pupils' Felt Stigma was lower, meaning that the impact of stigma was less severely experienced.

Pliner and Hannah (1985) [Attitude] [Factors]
Country: USA
Purpose: To investigate general education teachers' attitudes towards four types of children with disabilities in relation to children's levels of achievement.
Methods: A Pupil Placement Scale (PPS) was developed and used. The four disabilities targeted were orthopedically impaired, VI, hard of hearing, and emotionally disturbed.
Participants: 83 general education teachers in six elementary schools, aged 30 to 39.
Findings: Teachers held negative attitudes only when the child's level of achievement was low. When it was at an acceptable level, teachers were quite positive toward the child with disabilities.

Ravenscroft et al. (2019) [Attitude] [Factors]
Country: Turkey
Purpose: To clarify the attitudes of elementary school teachers towards inclusion of VI children and the factors that influence their attitudes.
Methods: Two questionnaires were administered to teachers from rural and urban areas of Turkey.
Participants: 253 elementary school teachers (72.1% response rate). Stratified random sampling was used; 64% were working in urban districts and 35% in rural areas. Thirty-eight percent of teachers had at least one student with disabilities included in their classrooms.
Findings: Teachers held a positive attitude towards the inclusion of VI children. Rural teachers' positivity score was higher than that of urban teachers. Previous research suggesting that teacher age and teaching experience do not influence teachers' attitudes towards inclusion was confirmed. However, the results did not replicate previous research suggesting that female teachers have more positive attitudes than male teachers and that younger teachers are more enthusiastic about inclusion. One factor that contributed to a positive attitude was the teacher's initial and in-service training, highlighting that it is important for schoolteachers to feel prepared to teach. This suggests the need for greater post-qualification training to facilitate inclusion of VI children.

Selickaite et al. (2019) [Attitude] [Factors]
Country: Lithuania
Purpose: To investigate the validity and reliability of the inclusive Self-Efficacy Instrument for Physical Education Teacher scale (SE-PETE-D) using Lithuanian PE teachers, and the impact of the type of disability and personal attributes.
Methods: The English version of the SE-PETE-D (Black et al., 2013) was used. The scale was translated to Lithuanian using the back-translation technique described by Brislin (1986).
Participants: 193 PE teachers working in Lithuanian schools, 60 males and 132 females, aged 22 to 65.
Findings: The content and construct validity of the instrument were supported. The type of student disability influenced the teachers' self-efficacy, and inclusion of students with VI into PE lessons appeared to be a greater challenge for PE teachers than the inclusion of students with intellectual or physical disabilities. Adapted PE courses or seminars had a significant positive influence on PE teachers' self-efficacy toward inclusion of students with disabilities, including VI. In addition, PE teachers who had experience with students with VI in their PE classes and/or had friends with VI tended to have higher self-efficacy toward inclusion than those who did not.

Mushoriwa (2001) [Attitudes] [Factors]
Country: Zimbabwe
Purpose: To examine the attitudes of primary school teachers towards the inclusion of blind children in general education classrooms.
Methods: A Likert-type questionnaire adopted from Booth and Ainscow (1998), modified to fit the context of Harare, was used.
Participants: 400 teachers in the Harare area.
Findings: The majority of teachers had a negative attitude towards the inclusion of blind children, and male and female teachers equally rejected the idea. The majority thought that including a blind child would not increase their circle of friends and felt that such a child would be likely to be less well-adjusted socially. In addition, many felt that because the child would use a different mode (braille) to read, they might not grasp concepts at the same pace as others, and, therefore, placement in regular classes would not benefit them. The majority indicated that they were not happy to have blind children in their classes, as they were not prepared to teach them.

Wall (2002) [Attitude] [Factors]
Country: Canada
Purpose: To explore whether the amount of teachers' previous exposure to people with VI affected their attitudes toward the inclusion of students with VI in general education classrooms.
Methods: A questionnaire survey was used.
Participants: 96 teachers categorized into three groups: group 1 with experience teaching VI children, group 2 with indirect experience with VI students, and group 3 with randomly selected teachers without any experience of teaching VI students.
Findings: Teachers with direct or indirect experience with students with VI held a more positive attitude toward inclusion than randomly selected teachers, but only toward students with low vision. All three groups demonstrated similar attitudes and reactions to the inclusion of students with blindness. Teachers with the least experience interacting with VI students tended to place those students in more restrictive placements, to have less confidence in their abilities to interact, and to hold less positive attitudes towards the inclusion of students with VI. Narrative responses also showed that a person's attitude depends on ancillary factors, such as the setting, the moods of the people involved, and the comfort level of the interaction.
Abrahamson et al. (2018) [Mathematics] [Solution to challenges]
Country: USA
Purpose: To illustrate how utilizing enactivism and ethnomethodological conversation analysis (EMCA) can enhance universal design for learning (UDL) efforts, by contextualizing the thesis and proposing a tool for sensorily heterogeneous students.
Methods: A narrative review (without a systematic literature search).
Participants: NA
Findings: Math content such as the spatial relationships constituting mathematical structures can be apprehended through non-visual sensory modalities. By applying enactivism, it is crucial that students with VI are engaged in experiences of a particular concept through sensorimotor means. Based on EMCA, the produced social encounters allow students to share experiences and the process, and help shape each student's perception of their surroundings. This provides an important analytic complement to enactivism, which enables classrooms with sensorily heterogeneous students to learn together effectively. By combining the concepts of UDL, the paper proposed an instructional activity for ratio and proportion that enabled sensorily heterogeneous students to collaborate in achieving the enactment, mutual sensation, and mathematical signification of coordinated movements.

De Verdier and Ek (2014) [Braille/literacy]
Country: Sweden
Purpose: To examine the reading development and academic achievement of students with VI learning in inclusive settings and the support they received.
Methods: Semi-structured interviews and documents, such as observation reports and grades for each subject, were collected and analyzed.
Participants: Six students with blindness or severe VI in inclusive educational settings, and their parents and teachers.
Findings: The outcome varied in all three aspects. Two students had satisfactory support from the school; however, most had an unsatisfactory level of support. Overall, no difference in reading comprehension between sighted and VI readers was found. Differences were seen in decoding and reading speed. All students who attended general education classes had average grades.

Haegele (2019) [Physical education]
Country: USA
Purpose: To clarify the difference between inclusion and integration and to examine whether current integrated physical education is inclusive.
Methods: For clarification of the terms, a narrative review was conducted. To examine the current situation, a telephone interview was conducted.
Participants: One 24-year-old male with VI.
Findings: Several concerns that emerged in the existing literature on integrated physical education were also evident in the empirical study. The participant experienced challenging social interactions with his peers, particularly when the peers misunderstood their impact when attempting to help. The teacher's actions in perpetuating social issues with peers were also impactful. These challenging experiences in integrated physical education had a long-term impact on the participant, as they led to his apprehension toward participating in leisure sports as an adult.

Haegele and Zhu (2017) [Physical education]
Country: USA
Purpose: To examine the experiences of adults with VI during school-based integrated physical education.
Methods: Semi-structured audiotaped telephone interviews were conducted, and an interpretative phenomenological analysis (IPA) research approach was used.
Participants: 16 adults with VI, aged 21 to 48, 10 females and 6 males. None considered themselves to be elite athletes.
Findings: Three interrelated themes depicting the feelings, experiences, and reflections of the participants were uncovered, related to: (a) frustration and inadequacy from being excluded from activities; (b) debilitating feelings arising from PE teachers' attitudes and being treated differently; and (c) feelings about peer interactions. PE seems to highlight perceived differences between VI individuals and their peers.

Klingenberg et al. (2019) [Mathematics] [Solutions to challenges]
Country: NA
Purpose: To conduct a systematic review and synthesize the evidence-based literature on math education among students with VI.
Methods: A systematic review of English-language, peer-reviewed articles published from 2000–2017. The Quality Assessment Tool for Studies with Diverse Designs (QATSDD) was used to evaluate the quality of the articles.
Participants: NA
Findings: Eleven publications met the inclusion criteria. The studies focused on various topics, such as teachers' attitudes and experiences, the use of the abacus, tactile graphics, and the development of mathematical concepts. The ability to choose suitable teaching strategies requires qualified and enthusiastic teachers who allow students to experience a sense of accomplishment and success. Only four studies reported eye disorder diagnoses.

Klingenberg et al. (2020) [Mathematics] [Solutions to challenges]
Country: NA
Purpose: To summarize current evidence-based knowledge about e-learning in mathematics among students with severe VI.
Methods: A systematic review of English-language, peer-reviewed articles published from 2000–2017. The QATSDD was used to evaluate the quality of the articles.
Participants: NA
Findings: Thirteen publications met the inclusion criteria: 12 reported studies with an intervention or an experimental design, and the thirteenth had a cross-sectional design. The number of VI students in each study varied from 3 to 16. With the QATSDD, three were classified as "high quality" and 10 as "good quality". The number of subjects in each study was small, and only a few studies included math skills testing before the start of the study. Eight papers reported the use of audio-based applications as learning aids. Interactive e-learning with audio and tactile learning programs was suggested as a useful resource; however, weaknesses in the scientific evidence were evident.

Koehler and Wild (2019) [Science]
Country: USA and Canada
Purpose: To clarify what pedagogical practices, accommodations, modifications, adaptive equipment, and instructional practices are used in the general science classroom to educate students with VI.
Methods: An online survey of 35 questions asking how students with VI accessed the science classroom, what instruments they used, what modifications and accommodations were made, and what assistive technology was used.
Participants: 51 specialist teachers of students with VI and Orientation & Mobility specialists. Convenience sampling was used to access participants, and 47% had been teaching more than 15 years in settings from preschool to post-high school.
Findings: The majority of VI students spent instructional time in science within the general education classroom and received a standards-based education. However, most were not supported by teachers of students with VI during science. Over half of the teachers said that none of their students took advanced placement science classes. The most common accommodations were preparing tactile images and providing accommodations such as verbal descriptions, extended time for tests, and large-print materials. The laboratory participation of VI students was low.

Lieberman et al. (2006) [Physical education]
Country: USA
Purpose: To examine the experiences of students with VI in inclusive general physical education classes, the types of modifications made, and their awareness of their individual education plans (IEPs).
Methods: A survey containing three parts: (1) questions about modifications to equipment and rules; (2) listing the most and least liked sports activities; and (3) knowledge of IEPs. Intuitive and inductive processes were used for analysis.
Participants: 60 students with VI (9 to 23 years old) enrolled in inclusive general physical education classes who also attended a one-week sports camp.
Findings: Results varied depending on the level of vision loss. Students with severe VI had experienced more modifications related to sounds and physical and verbal assistance. Students with severe VI liked open sports, although these sports were difficult to modify. The severe VI group was aware of their IEPs, but some of the students with less severe VI were not.

Pino and Viladot (2019) [Music] [Solution to challenges]
Country: Spain
Purpose: To clarify teaching-learning resources and needed support in inclusive music classrooms for VI students, particularly focusing on topics related to teaching-learning strategies and specialized support.
Methods: A semi-structured interview was conducted and analyzed based on the ideas of Grounded Theory (Glaser and Strauss, 1967).
Participants: Two music specialists, one music teacher with two VI students in the class, and one VI student who studied music and specialized in piano performance.
Findings: The study confirmed that teaching-learning resources (strategies, adaptations, and materials) lie at the core of inclusive teaching in music classrooms. Although the responsibility for inclusive teaching lies with the teachers, specialized centers that provide technical and transcribed materials as well as support to teachers and VI students are vital. It was evident that for class teachers to introduce the teaching-learning resources needed for the inclusion of students with blindness, they needed instruction on teaching methods. Similarly, providing support requires more than mastery of the discipline of music; specialized knowledge is necessary. The study confirmed the indispensable role played by specialized centers.

Rogers (2007) [Braille/literacy]
Country: England
Purpose: To clarify the conditions and challenges faced by VI children who read large print but also need to learn braille to increase access to the curriculum in general education classrooms.
Methods: A national survey of all local education authorities in England, with follow-up telephone interviews.
Participants: 232 questionnaires were sent out, with a 60% response rate providing information on 107 VI children. Follow-up interviews were conducted with teachers providing information on eight children ranging in age from Reception to Year 7.
Findings: Almost all pupils began learning with print in Reception, and the majority began learning braille at Key Stage 1. Just over half of the children attended mainstream schools, while 41% attended resourced mainstream schools. Three distinct groups were identified: children who used print as their dominant medium; those who used braille as their dominant medium; and those who successfully used both. The print-user group contained children who disliked or were reluctant to learn braille because they did not want to be seen as "different". The decision of whether to pursue braille or print (or both) was complex. The negative impact on attitudes when families and professionals do not agree about the need for braille was also highlighted.

Teke and Sozbilir (2019) [Science] [Solution to challenges]
Country: Turkey
Purpose: To identify the needs of a blind student in an inclusive chemistry classroom and to design and develop tactile materials to teach "energy in living systems".
Methods: A single case study design. In-depth interviews and classroom observations were conducted.
Participants: A tenth-grade, congenitally blind male student in a public school who was literate in braille.
Findings: The student obtained information through the teacher's verbal description or by reading the textbook on his own. The blind student's needs were not being met, and he did not understand the symbolic representations in chemistry. After he was provided with written materials, 2D embossed drawings, and 3D models, the student was able to develop a better understanding.

Miyauchi, H. A Systematic Review on Inclusive Education of Students with Visual Impairment. Educ. Sci. 2020, 10, 346. https://doi.org/10.3390/educsci10110346


Bodily-tactile early intervention for a mother and her child with visual impairment and additional disabilities: a case study

Affiliations.

  • 1 Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland.
  • 2 Pediatric Research Center, New Children's Hospital, University of Helsinki and Helsinki University Hospital, Helsinki, Finland.
  • 3 Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland.
  • 4 Department for Deafblindness and Combined Vision and Hearing Impairments, STATPED, Oslo, Norway.
  • 5 Sense Scotland, Scotland, UK.
  • 6 City of Helsinki, Social Services and Health Care Division, Maternity and Child Health Clinics, Helsinki, Finland.
  • 7 Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
  • PMID: 35786127
  • DOI: 10.1080/09638288.2022.2082563

Purpose: Congenital visual impairment and additional disabilities (VIAD) may hamper the development of a child's communication skills and the quality of overall emotional availability between a child and his/her parents. This study investigated the effects of bodily-tactile intervention on a Finnish 26-year-old mother's use of the bodily-tactile modality, the gestural and vocal expressions of her one-year-old child with VIAD, and emotional availability between the dyad.

Materials and methods: Mixed methods were used in the video analysis. The child's and his mother's bodily-tactile and gestural expressions were analyzed using a coding procedure. Applied conversation analysis was used to further analyze the child's emerging gestural expressions in their sequential interactive context. Emotional availability scales were used to analyze the emotional quality of the interaction.

Results: The results showed that the mother increased her use of the bodily-tactile modality during the intervention, especially in play and tactile signing. The child imitated new signs and developed new gestural expressions based on his bodily-tactile experiences during the intervention sessions. His vocalizations did not change. Emotional availability remained stable.

Conclusions: The case study approach allowed the in-depth investigation of the components contributing to the emergence of gestural expressions in children with VIAD.

Implications for rehabilitation:

  • Bodily-tactile modality may compensate for the absence of a child's vision in child-parent interactions.
  • Bodily-tactile early intervention may be effective in guiding caregivers to use the bodily-tactile modality in interacting with their child with VIAD.
  • Caregivers' use of the bodily-tactile modality in interactions may contribute to the development of gestural expressions in a child with VIAD.
  • The use of the bodily-tactile modality in interactions may improve the emotional connection between children with VIAD and their caregivers.

Keywords: Congenital visual impairment; L1 syndrome; augmentative and alternative communication; bodily-tactile modality; early intervention; emotional availability; parent empowerment; tactile communication.




The accessibility of digital technologies for people with visual impairment and blindness: a scoping review

  • Open access
  • Published: 01 August 2024
  • Volume 27, article number 24 (2024)


  • Sara Hamideh Kerdar   ORCID: orcid.org/0000-0001-7479-8607 1 , 2 ,
  • Liane Bächler 2 &
  • Britta Marleen Kirchhoff   ORCID: orcid.org/0000-0002-3054-5613 1  


This scoping review aimed to improve the understanding of important factors in digital accessibility for people with visual impairment and blindness, focusing on the first-hand experiences and challenges faced by this target group while using digital technologies. Keywords related to ‘digital technologies,’ ‘accessibility,’ ‘visual impairment,’ and ‘blindness’ were used in searching two databases (n = 683), with additional articles identified by means of manual searches (n = 60). Two reviewers independently screened the titles and abstracts to select 97 articles for full-text screening, of which 49 articles met the inclusion criteria and were selected for review based on the WCAG guidelines, highlighting details for consideration and improvement of the guidelines. The analysis revealed that users suffered from inaccessibility in several ways. For example, many applications or websites are developed for sighted users, where information is communicated through visual content without providing alternatives for assistive technology users. In addition, the lack of keyboard accessibility, shortcuts, or compatibility with different assistive technologies remains a consistent challenge. Furthermore, it was highlighted that simple accessibility measures are not followed adequately or consistently, such as providing alternative text for images or labels for links and buttons. This review highlighted the challenges and consequences of the inaccessibility of digital technologies, providing a detailed explanation regarding the elements that should be considered in the development of digital technologies. It is recommended that people with disabilities should be involved in the design of technology and its accessibility assessment.


1 Introduction

Increasing digitalization brings with it numerous advantages in terms of information procurement, networking, communication, and efficiency; it has revolutionized the everyday and business worlds and driven innovation. While the digital age is constantly opening new doors, speeding up processes, and enabling communication on a global level, the questions of whether this progress is equally accessible to all people, who benefits from it, and who is excluded by barriers are now more relevant than ever. This highlights the relevance of digital accessibility: people with disabilities should also be able to use digital technologies, including hardware as well as software [ 1 ], without difficulties and barriers [ 2 , 3 , 4 ]. New barriers to accessibility keep emerging as the digital world evolves [ 5 ]; thus, beyond universal design, guidelines can provide a unified framework for the development of accessible digital technologies.

For example, the Web Content Accessibility Guidelines (WCAG) of the World Wide Web Consortium (W3C) are an international standard for accessible website design. The guidelines are categorized according to the POUR principles: perceivability, operability, understandability, and robustness. Perceivability aims to ensure that all information, whether in textual, image, or multimedia format, is perceptible to users. Operability requires that all users can operate the navigation and user interface effectively, including interaction with the keyboard, mouse, and other input devices. Understandability requires that digital content be clear and understandable to all users. Robustness aims to ensure that digital content is robust enough to be interpreted by different assistive technologies (AT) (see [ 6 ]). It should be noted, however, that the WCAG are a work in progress: there remains room for improvement in the guidelines [ 7 ].
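To make the POUR categories concrete, the sketch below organizes them as a simple audit checklist in Python. The example criteria are paraphrased, commonly cited accessibility requirements chosen for illustration; they are not quotations of WCAG success criteria, whose normative wording lives in the W3C documents themselves.

```python
# Illustrative sketch: the four POUR principles as an audit checklist.
# The criteria listed are paraphrased examples, not normative WCAG text.
POUR_CHECKLIST = {
    "perceivable": [
        "Images carry text alternatives (alt text)",
        "Multimedia provides captions or transcripts",
    ],
    "operable": [
        "All functionality is reachable via the keyboard alone",
        "Focus order is logical and the focus indicator is visible",
    ],
    "understandable": [
        "Form fields have descriptive labels",
        "Error messages identify the problem and suggest a fix",
    ],
    "robust": [
        "Markup parses cleanly for assistive technologies",
        "Custom widgets expose name, role, and value",
    ],
}

def summarize(checklist: dict[str, list[str]]) -> dict[str, int]:
    """Return the number of example criteria recorded per principle."""
    return {principle: len(items) for principle, items in checklist.items()}

print(summarize(POUR_CHECKLIST))
# → {'perceivable': 2, 'operable': 2, 'understandable': 2, 'robust': 2}
```

An audit tool structured this way can report failures grouped by principle, which mirrors how the reviewed studies classify user-reported barriers.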

Although digital technologies such as smartphones, apps, and websites have opened new pathways of inclusion for visually impaired and blind people, many aspects of accessibility are ignored in their development [ 8 ]. In our qualitative pre-study [ 9 ], we investigated the positive and negative user experiences, barriers, challenges, and opportunities of video-conference tools as faced by people with visual impairment and blindness. The challenges the target group faced in using these platforms were enormous, causing not only disruption to their work or university activities but also emotional and psychosomatic problems. In a study evaluating the accessibility of e-commerce websites for blind users on the basis of the WCAG, Gonçalves et al. [ 10 ] reported that blind users found it difficult to complete the tasks and conclude a purchase because the process was complicated and the website was not sufficiently accessible. Regarding alternative text (alt text) for images, Crane et al. [ 11 ] assessed more than 200 journals against the WCAG to determine how accessible their images are to visually impaired and blind users. Notably, they used an automated accessibility evaluation tool rather than members of the target group; the results showed that, as expected, not all journals provided alt text for the images: 14.8% of cases had no alt text, 9.6% had insufficient alt text, and none of the journals provided a full interpretation of the meaning of the images. Khan and Khusro [ 12 ] reviewed the use of smartphones as ATs and their potential risks and challenges for the target group, highlighting the challenges of touchscreens (e.g., difficulties with text input or the lack of keyboards and buttons), the incompatibility of devices with other ATs, and crowded screens. Tsatsou [ 13 ] examined the role of accessibility in the social inclusion and stigmatization of people with disabilities, concluding that accessible digital technologies make them feel part of society, not separate from it. Moreover, studies have highlighted that the barriers to digital technologies experienced by visually impaired or blind people hinder their achievements at different stages of their lives, for example, in education [ 5 , 14 ] or employment [ 15 ].
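An automated alt-text check of the kind described above can be sketched in a few lines. The example below uses only Python's standard-library HTML parser to flag `img` elements whose `alt` attribute is missing or empty; the sample markup is invented for illustration, and a real audit would also need to distinguish purely decorative images, which legitimately carry empty alt text.

```python
# Minimal sketch of an automated alt-text audit. The sample HTML and the
# pass/fail rule are illustrative assumptions, not taken from any cited tool.
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Counts <img> tags and flags those with missing or empty alt text."""

    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            alt = dict(attrs).get("alt")
            # Informative images need a text alternative; here an absent or
            # blank alt attribute counts as a failure (decorative images
            # would be a legitimate exception in a real audit).
            if not alt or not alt.strip():
                self.missing_alt += 1

def audit(html: str) -> tuple[int, int]:
    """Return (total images, images missing usable alt text) for a page."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.total_images, auditor.missing_alt

sample = ('<img src="fig1.png" alt="Flow diagram">'
          '<img src="fig2.png">'
          '<img src="fig3.png" alt="">')
print(audit(sample))  # → (3, 2)
```

Checks like this are cheap to run at scale, which is why the studies above used automated tools; their limitation, as the reviewed literature stresses, is that they cannot judge whether the alt text actually conveys the image's meaning to an assistive-technology user.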

In a systematic review of the usability of mobile applications for the target group, Al-Razgan et al. [ 16 ] summarized that studies on accessibility had mainly focused on accessibility assessments, challenges faced by the target group, and suggestions for improvement. Senjam et al. [ 17 ] expanded on this by focusing on smartphones and apps in their literature review for the target group, concluding that guidelines for developers, and corresponding information for people in contact with the target group, e.g., parents, teachers, or medical staff, would be a valuable addition to the field. Oh et al. [ 18 ] conducted a systematic review of the accessibility of images for screen reader users on touchscreens, focusing on the methods used and on whether both types of images, i.e., with and without motion, were investigated. They concluded that most studies had not involved the target group in the design process, where their input as AT users could play an important role.

In conclusion, although many studies focus on the accessibility of digital technologies for people with visual impairment and blindness and suggest improvements, there is no comprehensive overview of the shortcomings and challenges. Furthermore, based on the literature as well as suggestions for future research, a gap was identified in which an accessible and understandable overview of important accessibility factors for all stakeholders is needed. To the best of our knowledge, no scoping review has evaluated the accessibility of a wide range of digital technologies for blind and visually impaired users based on the POUR principles of the WCAG using first-hand user experience. Therefore, given the importance of digital accessibility for this target group, a scoping review was conducted to explore this gap and investigate the challenges, issues or facilitators in the use of digital technologies by visually impaired and blind people. The review focuses on detailed factors of digital accessibility by extending the POUR items specifically tailored for this target group.

2 Materials and methods

This scoping review was performed according to the PRISMA guidelines for scoping reviews (PRISMA-ScR) [ 19 ] as well as the framework proposed by Arksey and O’Malley [ 20 ] in five phases:

2.1 Identifying the research question

The purpose of this review was to understand the experiences of people with visual impairment and blindness in using digital technologies, as well as the challenges faced by the target group due to the inaccessibility of digital technologies. Furthermore, it aimed to investigate detailed factors that support better accessibility of digital technologies for the target group and to explore whether following the existing guidelines alone is sufficient for the development of accessible digital technologies.

2.2 Identifying relevant studies

Relevant keywords were selected and pilot-tested for use in two databases, namely Web of Science and PsycInfo. They were combinations of different terms in English and German for “visual impairment” and “blindness” AND “digital technology” AND “accessibility”, used in conjunction with truncation symbols such as asterisks (*) and Boolean operators (AND, OR and NOT) depending on the database (Table  1 ). It is noteworthy that digital technology was used as an umbrella term for this review, covering both hardware, such as smartphones and tablets, and software, such as apps and websites [ 1 ]. The search results were exported to Rayyan [ 21 ] for further analysis. Additionally, several databases (ACM, IEEE, Google Scholar, EbscoHost, ResearchGate, and the journal Technology and Disability) were manually searched, and recently published articles were also considered. Articles published in English and German were reviewed. Since technologies change, improve and evolve rapidly, this review intended to identify patterns or issues that are consistent over a broad time period. Therefore, no publication timeframe was considered. Moreover, the context of the studies (e.g., focused only on work or education) was not limited, as the intention was to investigate the first-hand experiences of the target group in general as well as the important factors of accessibility from various perspectives.

2.2.1 Inclusion criteria

Studies were included if they focused on the experiences of people with visual impairment and blindness in using websites, applications, software, educational platforms, etc., the accessibility of digital technologies, ways of using these technologies, and challenges in using these technologies (as reported by the target group).

2.2.2 Exclusion criteria

The following articles were excluded:

Doctoral theses (as the full text was not available in most cases).

Usability studies without a focus on accessibility.

Studies focused on prototypes without reporting their accessibility aspects.

Articles in which participants with different disabilities and outcomes were not specifically and separately reported.

Studies with sighted individuals who were blindfolded, as the focus of the current review was on understanding the first-hand experiences of people with visual impairment and blindness.

Studies in languages other than English and German.

In the data-charting process, articles were excluded if they did not describe accessibility aspects of digital technologies or if they did not include feedback from the target group on the products. Information described in very general terms was also not documented during data extraction, for example, “software faults were also present” [ 22 ], as it was not clear what the faults were. Discussions and authors’ interpretations were not documented, as the review focus was on the reported experiences of visually impaired or blind users.

2.3 Study selection

Titles and abstracts of all articles (n = 683) were independently reviewed by two reviewers in Rayyan. Regular meetings were held to discuss uncertainties and conflicts, resulting in further refinement of the inclusion and exclusion criteria. In the case of uncertainty or a lack of consensus, the opinion of a third reviewer was sought. In total, 97 articles were selected for a full-text review, of which 39 articles were included in the review. During the full-text review process, any uncertainties were discussed among the authors until an agreement was reached.

2.4 Charting the data

A data extraction table was created in Excel, and the first draft was pilot-tested on six articles by two reviewers independently and further discussed among the authors. An explanation of the items was prepared and distributed to all authors [ 23 ] to reduce confusion and bias. During the data extraction process, regular meetings were held between the authors to discuss uncertainties, confusion, questions, etc. The key information recorded included the article title, author, year of publication, journal, study location, aim, technology studied, assistive devices used by the participants, participants’ characteristics, tasks, and key results. The table was exported to MAXQDA 2020 (VERBI Software, 2021) for further qualitative analysis.

2.5 Collating, summarizing and reporting results

Several articles followed the WCAG in different ways. Therefore, this information was documented in detail based on the POUR elements, as it could provide helpful details for all of the stakeholders in the context of the accessibility of digital technologies for people with visual impairment and blindness. Various checklists were reviewed and used as references to organize the items; for example, the POUR list was initially prepared using the categorizations of WCAG 2.1 [ 6 ] as well as the University of Washington’s accessibility checklist [ 24 ]. This information was recorded and, subsequently, further details were added based on the included papers. It is noteworthy that some items, e.g., headings or code validation [ 24 ], were not found in any of the selected studies and, therefore, are not listed in the results. The results presented were developed based on the included papers and the reported experiences of the target group in these studies.

Some items are common to several categories; for example, forms (e.g., on websites) need to be designed in an accessible way so that if an error occurs, screen reader users are appropriately informed, so this item could fall under both “forms” and “input assistance.” However, it was decided to gather all error-related items under “input assistance” to make it easier to navigate the items related to a specific topic. In the example given, the intention was to show how errors should be managed for accessibility and in which areas they need to be considered. Another example is that if a list of links is presented on a website, they have to be accessible via keyboards and shortcuts, so this item could be grouped under both “lists” and “keyboard accessibility” but was grouped only under “lists.” Such categorization is especially helpful in maintaining the focus within a subject (e.g., popups, keyboard), thus helping people without IT expertise to understand and maintain accessibility guidelines.

3 Results

Figure  1 presents the flow of this scoping review. The study characteristics of the included papers are documented in “supplement file 1.” This section explains the POUR items found in the included articles, with more detailed information and examples presented in “supplement file 2.” It should be noted that the results of this scoping review covered some of the WCAG elements as well as other details that should be considered for the accessibility of digital technologies. User experiences are quoted as examples for clarity, and the article in which the quote was originally used is cited immediately afterward.

Figure 1. Scoping review flow

3.1 Perceivable

3.1.1 Lists

Different problems with lists were reported by participants in different studies, for example, dealing with a long list to select a date or year [ 25 ] or understanding a long list of search results [ 26 ], whereas well-placed links [ 27 ] and an accessible list of search results [ 28 ] were mentioned as facilitators. Therefore, it is recommended that, firstly, lengthy lists should either be avoided or be skippable so that screen reader users experience easier navigation. Secondly, a group of links should be created as a list with clear labeling, for example, “the menu is a list of links, which were labeled and well described in order to enable the user to navigate through the application without making mistakes” [ 29 ]. Thirdly, the list must also be accessible via different ATs, as well as via a keyboard and shortcuts. Shortcut keys were mentioned several times as a means of facilitating the use of long lists for AT users [ 26 , 27 , 30 ]; this is particularly helpful because it allows users to operate without the need for a mouse and scrolling (for more examples regarding lists see [ 22 , 26 , 27 , 30 , 31 , 32 ]). Furthermore, a list should be marked up semantically as a list so that the screen reader announces it as one [ 24 ], making it easier for the user to understand and navigate.
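Whether a group of links is actually marked up as a list can be checked programmatically. The following Python sketch is illustrative only (it is not tooling from any of the reviewed studies, and a production audit would use a full accessibility engine); it counts links that are, and are not, wrapped in list items, since screen readers announce a real list with an item count while a run of bare links gives the user no such overview:

```python
from html.parser import HTMLParser

class LinkListAudit(HTMLParser):
    """Count links inside vs. outside <li> elements (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # number of currently open <li> elements
        self.in_list = 0    # links announced as list items
        self.bare = 0       # links with no list context

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.depth += 1
        elif tag == "a":
            if self.depth > 0:
                self.in_list += 1
            else:
                self.bare += 1

    def handle_endtag(self, tag):
        if tag == "li" and self.depth:
            self.depth -= 1

# Hypothetical menu markup: two links in a list, one bare link.
menu = ('<ul><li><a href="/home">Home</a></li>'
        '<li><a href="/help">Help</a></li></ul>'
        '<a href="/legal">Legal</a>')
audit = LinkListAudit()
audit.feed(menu)
print(audit.in_list, audit.bare)  # 2 1
```

A report of bare links flags places where a semantic list (and thus easier skipping for screen reader users) is missing.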

3.1.2 Layout

Websites and applications usually follow a standard structure so that users know how to work with them or search for the information that they need. When the usual structure (i.e., layout) is not followed, screen reader users could face difficulties in navigating them. Thus, buttons (e.g., menu buttons) should be located consistently and be of a size and in a place that are easily detectable [ 33 , 34 ]. Important elements should be positioned within reach; for example, if important links are located deep within a website, screen reader users will need to listen to all of the content until they reach their goal [ 22 , 27 ]. Participants in Yeong’s study found zigzag text alignment to be inaccessible and preferred a left-aligned format so that information was not lost when using magnification tools; they also mentioned that a lack of consistency in text alignment made pages inaccessible [ 35 ]. Furthermore, using a table as the layout should be avoided, as the screen reader will treat it as a table and this could create confusion; Accessible Rich Internet Applications (ARIA) markup must be used for tables [ 36 ] (see Sect.  3.4.3 ). A simple and less crowded layout is recommended, according to Kamei-Hannan [ 37 ]: “Long reading passages required the test taker to use a scroll bar to view the entire passage. Because of the various fields on the screen, the scroll bar was inaccessible, and a mouse was required to scroll through the text.”

Based on WCAG 1.4.10, content should be presented without a loss of information or functionality, without the need for scrolling in two dimensions (i.e., vertical and horizontal) [ 25 , 38 , 39 ] to allow people with low vision to zoom in so as to enlarge text without losing content. This information was also mentioned in some of the included articles; for example, participants stated that “zooming/enlarging the page was properly done” [ 31 ] as an indication of accessibility.

3.1.3 Video/audio

Transcripts should be available for videos and audio based on WCAG 1.2.5, but this was mentioned in only one article. In the study by Alajarmeh [ 25 ], participants were not able to understand some of the content because the videos did not provide audio descriptions or alt text for the videos.

3.1.4 Forms

A lack of information on the functionality of different elements leads to the exclusion of AT users. For example, Alajarmeh [ 25 ] explained that in the case of complex forms or where information is not presented for different functions of a form, participants faced great challenges and experienced inaccessibility. Therefore, forms must be as simple as possible, with clear labels indicating their functionality [ 40 ]. Options for easy navigation should be provided, for example, if typing in a field is not possible and the user has to go through a list of options (see lists) [ 25 ]. Feedback messages, where the user is informed that their input has (not) been successful, should not disappear in a short time and should be detectable by screen readers [ 25 ]. Alajarmeh [ 25 ] reported that participants with visual impairment and blindness often face such inaccessibility because the initial coding of programs and applications is not based on the needs of AT users.
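The requirement that form fields carry clear labels can also be verified mechanically. The sketch below is a minimal illustration (not the method of the cited studies; it ignores aria-label, aria-labelledby and wrapped labels, which a real audit must also consider) of checking that every input is programmatically linked to a label, which is what lets a screen reader announce the field’s purpose:

```python
from html.parser import HTMLParser

class LabelAudit(HTMLParser):
    """Flag inputs without a <label for=...> association (sketch)."""
    def __init__(self):
        super().__init__()
        self.inputs = []       # ids of inputs found
        self.labeled = set()   # ids referenced by <label for=...>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input":
            self.inputs.append(attrs.get("id", ""))
        elif tag == "label" and "for" in attrs:
            self.labeled.add(attrs["for"])

    def unlabeled(self):
        return [i for i in self.inputs if i not in self.labeled]

# Hypothetical form: the email field is labeled, the second input is not.
form = ('<label for="email">Email address</label><input id="email">'
        '<input id="newsletter">')
audit = LabelAudit()
audit.feed(form)
print(audit.unlabeled())  # ['newsletter']
```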

3.1.5 Alternative text

Alternative text plays an important role for AT users, particularly those with visual impairment and blindness. Alt text should be provided for graphical and non-textual links [ 29 , 38 ], images [ 38 , 40 ], CAPTCHAs [ 41 , 42 ], logos, and non-textual visual content, such as diagrams or charts that present information within an image [ 36 , 37 ]. For example, in a study of digital tests using ATs, Kamei-Hannan [ 37 ] showed that students using ATs could not answer questions when pictures and diagrams did not have alternative text. Singleton and Neuber [ 36 ] highlighted that material should be accessible to all types of AT. In their study, although the blind users had no problems regarding the images, as alt text was available, the users who used magnifiers found the PDFs to be inaccessible (as information was lost when the PDFs were resized). In the context of online shopping, Alluqmani [ 43 ] reported that a lack of description was challenging; therefore, images of items must offer detailed and clear descriptions. For example, a participant found an item description of “Take this with you on your summer holiday” [ 43 ] not to be comprehensible, as they could not imagine its meaning. In such cases, users have to seek the help of sighted users (for more detailed examples see [ 43 ]). Information should not be communicated solely through visual characteristics, but should rather include alternative explanations that are accessible for AT users [ 37 ]. Decorative non-textual content should be marked as decorative so that the screen reader can ignore it (see WCAG 1.1.1).
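Missing alt attributes of the kind described above are straightforward to detect automatically. The following Python sketch illustrates the idea (it is not the tool used by Crane et al. [ 11 ] or any other reviewed study); note that an empty alt="" is valid for decorative images, whereas a *missing* alt attribute leaves screen readers to announce the filename:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect <img> elements that lack an alt attribute (sketch)."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # alt="" marks decoration; only a missing attribute is flagged.
        if "alt" not in attrs:
            self.missing.append(attrs.get("src", "?"))

# Hypothetical page: the chart image has no alt attribute at all.
page = ('<img src="logo.png" alt="ACME logo">'
        '<img src="chart.png">'
        '<img src="divider.png" alt="">')
audit = AltTextAudit()
audit.feed(page)
print(audit.missing)  # ['chart.png']
```

Automated checks like this can only confirm that alt text exists, not that it is meaningful, which is exactly why the reviewed studies stress involving the target group.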

3.1.6 Tables

Providing facilitators such as shortcuts and accessible navigational links helps AT users to navigate webpages and applications more quickly and more easily. Participants in the study by Singleton and Neuber [ 36 ] found it very helpful when PDF or Word files had a linked table of contents. These features help AT users to find content more easily without scrolling or using a mouse [ 36 ].

3.1.7 Meaningful sequence

According to WCAG 1.3.2, “when the sequence in which content is presented affects its meaning, a correct reading sequence can be programmatically determined.” Therefore, users need to be informed when a particular sequence needs to be followed. For example, some participants were confused when they had to follow a particular way of choosing dates without receiving any information on it [ 40 ]. Sequence disruptions such as popups and advertisements should be avoided, as the user could lose the content on which they were focusing and it could be difficult for AT users to return to it [ 25 ]. This shift in focus could be problematic, specifically for screen reader users.

As mentioned previously, any type of accessible navigation link could facilitate the navigation experiences of AT users. In this respect, the use of a heading helps a user to follow the sequence of content, but the content needs to be relevant to its heading level to avoid confusion [ 44 ].

3.1.8 Sensory characteristics

According to WCAG 1.3.3, “instructions provided for understanding and operating content do not rely solely on sensory characteristics of components such as shape, color, size, visual location, orientation, or sound.” The reviewed articles emphasized that symbols and special characters must be described in textual format for screen reader users to understand [ 29 , 37 ].

Additionally, the following elements should be considered: visual clutter on the screen should be avoided [ 39 , 42 ]; if icons are used, their meanings should be clear to the user [ 33 ]; and patterned and transparent backgrounds should be avoided [ 25 , 33 ]. Visual elements such as color or lines should not be used to communicate information. For example, in the study by Kamei-Hannan [ 37 ], students using Braille displays were unable to answer some questions because the questions relied solely on visual elements (e.g., when answers were based on underlined words that were not recognizable on the Braille display). The same applies to blank lines inserted purely for visual readability, as these cause problems for blind users [ 22 ]. For visually impaired users, it is helpful to be able to change the size of text, zoom in and magnify [ 45 ] when necessary without losing information (this includes both apps and websites). Fonts and colors used on a website should be accessible and consistent [ 35 ]. The color contrast of fonts and backgrounds (neither too low nor too bright) should be considered for people with visual impairment [ 31 ]. Alajarmeh [ 25 ] reported that the inability to change colors and fonts according to user needs is a severe problem. Thus, it is recommended that, where possible, applications provide the ability to change colors, fonts, etc. [ 25 , 45 ] so that they can be accessed by different types of AT, for example, on educational platforms [ 46 ].

3.1.9 Color contrast

WCAG 1.4.3 details aspects of color contrast and its exceptions (see [ 25 , 31 , 33 , 35 , 38 ] for further information). According to WCAG 1.4.3, the visual presentation of text and images of text should have a contrast ratio of at least 4.5:1. The adjustment of color contrast should be offered so that users can set colors based on their needs. In the example of touchscreens, Huang suggests that such flexibility is specifically helpful for visually impaired users in finding information on their touchscreen devices [ 48 ]. In a study of an accessible educational platform, Sapp provided users with such adaptability, and these features were positively evaluated by the study’s participants [ 39 ]. However, in Huffman’s study [ 47 ], although users were able to adjust the contrast, the highest contrast available was not sufficient for the participants’ needs. This highlights that different needs must be considered in the design and development of technologies.
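The 4.5:1 threshold can be computed directly from the WCAG definitions of relative luminance and contrast ratio. The following Python sketch implements those formulas; the sample colors are illustrative:

```python
def channel(c):
    """Linearize one 8-bit sRGB channel per the WCAG definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1 (lighter color over darker)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white yields the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# A dark gray (#555555) on white comfortably exceeds the 4.5:1 minimum.
print(contrast_ratio((85, 85, 85), (255, 255, 255)) >= 4.5)  # True
```

Because the ratio is symmetric in foreground and background, the same function can evaluate user-adjusted color schemes of the kind the reviewed studies recommend offering.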

3.1.10 Text

The font color and its contrast with the background play an important role in the accessibility of digital technologies, and a failure to follow such measures could lead to eye fatigue [ 33 , 39 ]. A participant in Kim et al.’s study expressed such inaccessibility as follows: “It is difficult to read text because of the unclear color of text. This can easily cause eye fatigue to people with low vision” [ 33 ]. Users should be able to adjust the text and font size based on their requirements [ 25 , 32 , 39 , 45 ]. Following accessibility measures in detail, such as the text and font size, is very important for an accessible experience for the target group. A participant in a study by Yeong et al. described consistency in text and font size as a positive attribute: “The text all have the same size throughout which is good as I did not have to change my magnification level often” [ 35 ] (see WCAG 1.4.4, 1.4.12, and 3.3.1 for further details regarding text accessibility). These measures also extend to adaptability: users should be able to adjust text sizes based on their needs without losing information [ 31 , 39 , 48 ]. For example, a participant in a study by Yeong et al. found such adjustability to be a facilitator: “I am still able to see the whole page at 200% magnification” [ 35 ].

3.1.11 Popups

As mentioned previously, “screen focus” [ 41 ] is important for AT users, as it may not always be easy to return to the original page or content once the focus has been changed. Popup windows change the focus of screen readers, which could potentially cause confusion and disorientation for users [ 41 , 44 ]. Examples of popups include notifications such as email reminder notifications [ 41 ], authentication methods on web applications that cause new screens to pop up [ 42 ], or ads [ 49 ]. In the case of popups, users should be notified of how to enable (e.g., advertisements), dismiss [ 24 ] or handle any form of popup [ 22 , 49 ].

3.2 Operable

3.2.1 Keyboard accessibility

Content, elements and navigation should be accessible via the keyboard (e.g., tab key and shortcuts) [ 29 , 50 , 51 ] and follow a meaningful sequence. This option allows screen reader users and those using a Braille display to navigate and access websites. Vollenwyder et al. investigated the accessibility of websites in two groups of websites with either low or high compliance with accessibility guidelines, with the low-compliance group highlighting the lack of keyboard accessibility as a problem: “CTRL + Click worked, but the tab order in their forms is all wrong and the search window is also very keyboard-unfriendly. As expected, pretty bad UX,” or “the tab order in the contact form is wrong… when you tab through it jumps from name all the way down and then back up, and the main field can’t be tabbed at all” [ 31 ]. In this regard, see also WCAG 2.1 and Sect.  3.1.7 . Accessibility through keystrokes and shortcuts provides users with a better experience when using digital technologies, as well as saving time. For example, a participant in a study by Candan et al. [ 28 ] found their accessible prototype to be a time saver because it allowed them to perform functions with fewer keystrokes. Another participant also appreciated this specifically for the search results: “When you clicked on a result, it displayed results in another window so you could just alt-tab to that window, hit alt-/’, and you landed right on your target” [ 28 ]. Furthermore, familiar formats should be maintained, as many AT users utilize their memory for navigation (see Sect.  3.1.2 ). This problem was mentioned in a study by Watanabe et al., who explained that the “first issue occurred when participants were not able to operate the ‘Select File’ push button with the Enter key. The problem was circumvented by pressing the Space key” [ 52 ].
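One common cause of a broken tab order of the kind the participants describe is the use of explicit positive tabindex values, which override the natural document order. The sketch below is illustrative only (not from the reviewed studies; a real audit would also check focusability, visibility and scripted focus changes) and simply flags such values:

```python
from html.parser import HTMLParser

class TabOrderAudit(HTMLParser):
    """Flag explicit positive tabindex values, which hijack the
    natural keyboard tab order (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        value = dict(attrs).get("tabindex")
        # tabindex="0" and "-1" are legitimate; positive values reorder.
        if value is not None and value.lstrip("-").isdigit() and int(value) > 0:
            self.flagged.append((tag, int(value)))

# Hypothetical form: the street field jumps ahead of everything else.
form = ('<input id="name"><input id="street" tabindex="5">'
        '<textarea id="message"></textarea>')
audit = TabOrderAudit()
audit.feed(form)
print(audit.flagged)  # [('input', 5)]
```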

3.2.2 Links and buttons

Links should have sufficient and clear link text, and buttons should have clear labels [ 29 , 31 , 34 , 38 , 39 ], to allow users to understand the purpose of the links and buttons and the destinations to which they will be led, hence facilitating their navigation. For example, blind participants in a study by Carvalho et al. [ 40 ] could not complete their tasks due to the “lack of text for links.” They further explained that “by clicking of the icons, screen reader read them as ‘link’. Hence, task was not completed as blind users could not find their destination” [ 40 ]. As mentioned previously, visual clutter on a screen should be avoided, especially for visually impaired users who use magnifying ATs. In this regard, the number of links on a page or menu should also be limited [ 29 , 38 ]. For instance, a participant in Yeong et al.’s study found too many links to be a problem: “There are too many links to read with a screen reader on Wikipedia” [ 35 ]. Buttons should be easily located [ 34 , 35 , 44 , 53 ]; for example, button sizes should not be too small, so that visually impaired users can locate them without difficulty [ 33 ].

3.2.3 Headings and labels

Headings and their appropriate labels are very important for the accessibility of digital technologies. Navigation using headings with clear labels could save AT users’ time and facilitate the use of digital technologies [ 29 , 31 , 41 ]. Headings must be organized in a logical order [ 35 , 53 ]. For example, a participant in Yeong et al.’s study stated: “I found it easy to navigate through the different headings for each page using the ‘Insert-F6’ function on JAWS which lists all the headings” [ 35 ]. In addition, headings and page titles should represent the content intended. For example, “in many cases, the content appeared to be visually divided into heading levels; however, the underlying representation of the content was not implemented using structural heading levels. This caused problems in navigating over the content, especially on mobile Apps” [ 34 ]. For more information see WCAG 2.4.6.
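The requirement that underlying heading levels match the visual structure can be linted automatically. The following Python sketch (illustrative only, not a tool from the reviewed studies) collects heading levels and reports skips such as a jump from h2 straight to h4, which breaks the per-heading navigation that the quoted JAWS user relies on:

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Collect h1..h6 levels and report skips in the hierarchy (sketch)."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

    def skipped_levels(self):
        # A jump of more than one level breaks AT heading navigation.
        return [(a, b) for a, b in zip(self.levels, self.levels[1:])
                if b - a > 1]

# Hypothetical page: h3 is missing between the h2 and the h4.
page = "<h1>Title</h1><h2>Section</h2><h4>Oops</h4>"
outline = HeadingOutline()
outline.feed(page)
print(outline.levels)            # [1, 2, 4]
print(outline.skipped_levels())  # [(2, 4)]
```

Content that is only visually styled as a heading (e.g., bold text) never appears in this outline at all, which is precisely the problem described in [ 34 ].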

3.2.4 Navigation

Navigation was one of the items mentioned most frequently in the included articles. Some of the important factors are reported here; further detailed information is presented in “supplement file 2.”

Each page should clearly indicate its content so that screen reader users know the purpose of a page and can navigate it easily; for example, clear page titles could facilitate navigation using ATs [ 34 , 38 , 40 , 54 ]. Options to facilitate navigation should be offered, for example, “jump to top” buttons at the end of long webpages [ 35 ] or clear indications of the correct sequence of actions. A blind participant in Zeinullin and Hersh’s study [ 55 ] found the lack of clarity in navigation to be a problem: “I wish the app guided me while I was exploring images. Like, in which direction I should move in order to get to the object” [ 55 ]. Icons, links and items should be located in places that AT users can find easily [ 25 , 26 , 28 , 31 , 33 , 34 , 39 , 42 , 44 , 46 , 53 , 56 ], for example, finding products easily on online shopping websites [ 31 ] or attachment buttons on an educational platform [ 46 ].

One of the frustrating problems that users faced was being stuck in a loop, as a clear exit from a webpage was not detectable [ 38 , 53 ]. This also applied to cases in which, after clicking on an unwanted link or button, they could not navigate back to where they were; for example, popups could confuse screen reader users’ navigation [ 42 ]. For further examples see [ 25 , 26 , 49 ]. It is helpful when important navigational links are easily detectable to all users [ 25 , 27 , 35 ]. For example, they can be placed in menus that require less scrolling (e.g., drop-down menus [ 35 ]) or positioned strategically in the layout, where AT users can detect them easily [ 39 ]. Keyboard accessibility for navigation is an important factor [ 50 ], where the consistency of keystrokes should be maintained; for example, a participant in a study by Vollenwyder et al. stated: “In all input fields you could navigate with the tab key (first name, last name, street etc.), but in the last field Your message you could not get to it with the tab key you have to search for the field explicitly” [ 31 ].

The studies also frequently mentioned the search and results function. Search results should have clear and sufficient labels and headings so that screen reader users can understand and navigate them [ 29 ]. Participants in a study by Vigo and Harper [ 38 ] were confused because their search did not return any results. In such cases, users should be clearly informed when no search results are found [ 38 , 54 ]. In addition, search results should be structured clearly so that screen reader users can understand and navigate them [ 22 , 29 , 31 , 38 ]. Organizing long search results into clusters was found to be helpful and time-efficient by participants in the study by Aqle et al. [ 57 ], who tested an accessible search engine prototype. Therefore, it is recommended that when too much information or too many links are found, a cluster of search results should be offered to help screen reader users. In this example, the screen reader user can skip the unwanted clusters rather than have each result read aloud, which could cause confusion or be time-consuming: “This is very helpful to categorize search results” [ 57 ]. Mobile apps with a large amount of content should offer a search function so that the user does not have to scan the content completely [ 25 ]. See WCAG 2.4 and “supplement file 2” for further information and examples regarding navigation.
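The clustering idea can be sketched very simply. The code below is an illustrative outline of the general technique, not the implementation of Aqle et al.’s prototype [ 57 ]: fixed-size groups let a screen reader user skip a whole cluster of results instead of hearing every item:

```python
def cluster_results(results, size=5):
    """Group a long result list into fixed-size clusters so screen
    reader users can skip whole groups (illustrative sketch)."""
    return [results[i:i + size] for i in range(0, len(results), size)]

# Twelve hypothetical search hits become three skippable clusters.
hits = [f"Result {n}" for n in range(1, 13)]
clusters = cluster_results(hits)
print(len(clusters))                      # 3
print(clusters[0][0], clusters[-1][-1])   # Result 1 Result 12
```

In a real interface each cluster would additionally carry a heading or label (e.g., by topic rather than position) so that the user can decide whether to enter or skip it.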

3.2.5 Auto-updating content

Carousels create difficulties for screen reader users because the information that they provide is automatically updated and the user has no control over the speed of these changes. This issue is specifically challenging for screen reader users, as the fast change of focus could lead to a loss of information as well as confusion [ 25 ]. Both blind and visually impaired participants in Alajarmeh’s study [ 25 ] found auto-updating content to be a serious problem. In this regard, Yeong et al. concluded that time-sensitive carousel content could lead to user confusion and frustration, particularly for those who use ATs [ 35 ]. For further examples see [ 35 , 38 , 48 ] and WCAG 2.2.2 and 2.2.4.

3.2.6 Time limit

If a form, application, learning platform, etc. has a time limit, the user should be made aware of this and allowed to postpone or extend the time limit [ 24 ]. In cases of a malfunction of a timed task or process, the user should be able to find their way back as easily as sighted users. This issue occurred in the study by Muwanguzi and Lin [ 58 ], where a participant needed two hours to retake a timed test, whereas a sighted user required only 15 min. Errors in timed tasks were also mentioned in Alajarmeh’s study [ 25 ], where participants had to repeat the process. Such errors cause AT users to invest more time in completing their tasks. See WCAG 2.2.1, 2.2.5 and 2.2.6 for more details.
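The warn-and-extend behavior described above can be sketched as a small state machine. This is an illustrative outline of WCAG 2.2.1-style timing (the class, parameters and thresholds are hypothetical, not from the reviewed studies): the user is warned before the limit expires and can extend it, rather than being silently timed out:

```python
class ExtendableTimeout:
    """Session time limit that warns before expiry and can be extended
    by the user (illustrative sketch of WCAG 2.2.1-style timing)."""
    def __init__(self, limit_s, warn_before_s=60):
        self.limit_s = limit_s
        self.warn_before_s = warn_before_s
        self.elapsed_s = 0

    def tick(self, seconds):
        """Advance the clock and report the current state."""
        self.elapsed_s += seconds
        if self.elapsed_s >= self.limit_s:
            return "expired"
        if self.limit_s - self.elapsed_s <= self.warn_before_s:
            return "warn"  # announce via AT and offer an Extend control
        return "running"

    def extend(self, seconds):
        """Called when the user chooses to extend the limit."""
        self.limit_s += seconds

t = ExtendableTimeout(limit_s=300)
print(t.tick(250))  # warn (50 s left, inside the 60 s warning window)
t.extend(300)
print(t.tick(100))  # running (250 s left of the extended limit)
```

The essential accessibility point is the "warn" state: the warning must itself be perceivable to AT users (e.g., announced by the screen reader), otherwise the extension control is useless.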

3.2.7 Flashing and flickering content

In a study by Vollenwyder et al. [ 31 ], users found a lack of flashing content to be a positive attribute, but no other study included in this review has mentioned this item.

3.3 Understandable

3.3.1 Language

In several studies, complicated and technical information was a challenge for the target group. For example, Nimmolrat et al. [ 59 ] reported that visually impaired users avoided some applications because the information provided was difficult to understand. In a usability evaluation study of web-based tourist information software, Stary and Totter [ 50 ] reported that the target group considered well-explained information to be an important factor and concluded that content must be developed based on the needs of the target group. Therefore, clear, readable and understandable language for the target group should be used [ 31 , 33 , 35 , 43 , 50 , 59 , 60 ]. Unexpected and unfamiliar characters should be avoided [ 38 ]. Acronyms should be explained in context; a lack of such information could specifically cause confusion and difficulties for screen reader users following the content, as screen readers treat acronyms as words: “For example, the acronym ACM is incomprehensibly read as one word that confuses the user” [ 25 ].

3.3.2 Predictability

WCAG 3.2 states that webpages should appear and operate in predictable ways. For example, unexpected changes or functions like banners [ 38 ] could cause difficulties for users.

Usually, specific links, icons or features are positioned as standard in certain places on websites or apps; users eventually learn these positions, develop a mental model and, consequently, look for them in their designated places. For example, a participant in a study by Vollenwyder et al. criticized the fact that the contact details were not placed where they usually are, either in the footer or in the menu [ 31 ]. In this regard, unfamiliar layouts, icons, buttons, links and features should be avoided [ 33 , 38 , 40 ]. When these conventions are not followed, users struggle to navigate or accomplish their goals. Conversely, Craven [ 27 ] reported that visually impaired users outperformed sighted users in some cases when navigation links were positioned appropriately.

Users expect a particular response when performing specific actions. Yeong et al. [ 35 ] noted that predictable navigation is an essential part of accessibility. In their case, participants were confused when using their prototype because the search and results function did not work like the common search engines to which they were accustomed. When the search function of a website or app is used, users expect to obtain a list of results relevant to their search. If the focus thereafter is not on the search results, screen reader users may face difficulties in navigating (see [ 25 , 31 , 35 , 38 ] for examples).

3.3.3 Input assistance and errors

WCAG 3.3 advises that users should receive support to avoid and correct mistakes. Sighted users usually find it easier to rectify errors than do blind users, according to Shimomura et al. [ 22 ]. Alnfiai and Sampalli [ 42 ] confirmed this in their study, in which participants’ mistakes, such as entering information in the wrong places and failing to log in, led to confusion and time spent on correcting these errors (for more examples see [ 22 , 42 , 50 ]). In this regard, when an action is performed, users should receive accessible feedback on that action, allowing them to understand its result. For example, regarding online shopping experiences, a participant in a study by Vollenwyder et al. stated that: “Although I could select all products, only in the shopping cart I have not brought them. Each time came: Your shopping cart is empty. There was no feedback that a product was assigned to the cart, nor a button, add to cart or anything else” [ 31 ]. In a study of a prototype smart home system, de Oliveira et al. [ 34 ] explained that accessible feedback plays an important role for visually impaired users. Feedback allows users to know whether their actions have produced the expected response, for example, whether or not the lights have been turned off. In this regard, WCAG 3.3.1 indicates that users should be made aware that an error has occurred and be able to determine what went wrong. The error message should be as specific as possible, and errors should consequently be described in textual form for screen reader users [ 25 , 44 ].
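
A minimal sketch of what "specific, textual" error feedback can look like in practice; the form fields, validation rules and message wording below are invented for the example and not drawn from the cited studies:

```python
# Sketch of WCAG 3.3.1/3.3.3-style error reporting: name the field, say what
# went wrong, and suggest a correction, all in plain text so a screen reader
# can announce it. The validation rules and wording are illustrative.
import re

def validate_signup(email, password):
    """Return specific, textual error descriptions instead of a generic failure."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append(
            f"Email: '{email}' is not a valid address. "
            "Expected the form name@example.com."
        )
    if len(password) < 8:
        errors.append(
            f"Password: only {len(password)} characters were entered; "
            "at least 8 are required."
        )
    return errors  # an empty list means the form can be submitted


assert validate_signup("user@example.com", "longenough") == []
messages = validate_signup("user-at-example", "abc")
assert len(messages) == 2 and messages[0].startswith("Email:")
```

Delivering such messages as page text (for instance, in a live region) rather than only coloring the offending field keeps the feedback available to screen reader users.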

3.4 Robust

3.4.1 Compatible

WCAG 4.1 suggests enabling compatibility with ATs, and several studies have mentioned the importance of websites and applications being compatible with different types of AT [ 40 , 51 , 61 ]. On the positive side, such compatibility increases user satisfaction and users’ ability to use digital technologies; participants in a study by Yeong et al. stated: “The website works really well and is really fluid when using Voiceover on my iPhone,” or “I found the website easy to understand and JAWS friendly” [ 35 ]. However, when compatibility with different ATs is not offered, AT users are challenged and sometimes avoid using digital technologies altogether. For example, a participant in a study by Alnfiai and Sampalli expressed: “I do not open the bank account from the phone because I feel it is not safe and the web application is not accessible because it does not support the VoiceOver service” [ 42 ]. Such difficulties were also observed on tablets and smartphones in a study by Leporini and Buzzi [ 32 ], which explored the use of eBooks by the target group. A participant stated: “Many editing functions are not available or are impracticable via screen reader and gestures on the touch-screen” [ 32 ]. Mobile apps, including browser apps, should be accessible to ATs as well, for example, for visually impaired users who need to magnify the screen [ 45 ]. Similarly, CAPTCHAs and images should be accessible to screen readers, and content should be offered via different alternatives [ 49 ]. When a CAPTCHA is inaccessible, e.g., its audio version is unclear [ 42 ] or it cannot be read by screen readers [ 49 ], users have no choice but to seek sighted help [ 25 ].

In cases of educational platforms or online tests, compatibility plays an important role in the inclusion of AT users. For an example of challenges caused by underlined words and the lack of compatibility with Braille, see Sect.  3.1.8 and [ 37 ].

3.4.2 Dynamic content

There was an overlap of meaning between the categories of “auto-updating content” and “dynamic content.” Several articles considered dynamic content to be content that plays automatically at short intervals and over which the user has little control. However, dynamic content can also mean content that changes based on the user’s data and preferences; examples include advertisements on websites or the way in which sites such as YouTube work. Most comments used the former definition. Therefore, we included all such comments under “auto-updating content,” even when users used the words “dynamic content.”

Regarding dynamic content, AT users usually rely on their memory and build mental models of how a website or app functions. Dynamic content (such as ads) that changes based on users’ behavior can create difficulties for users who rely on these mental models [ 56 ].

3.4.3 Accessible Rich Internet Applications (ARIA)

ARIA can be added to HTML so that screen readers and other types of AT can better understand the elements on a webpage, and can be used to access dynamic content or page regions [ 24 ]. Buzzi and Leporini [ 29 ] mentioned that the use of ARIA could have increased the accessibility of Wikipedia for screen reader users, improving navigation and comprehension (see Sect.  3.1.2 ).
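
As a minimal illustration of what such markup can look like, the server-rendered snippet below adds ARIA landmark roles and a live region. The page content and function name are invented for the example; this is a sketch of the general technique, not markup from the cited studies:

```python
# Illustrative sketch: ARIA landmark roles let screen readers jump between
# page regions, and an aria-live region announces dynamic updates without
# the user having to hunt for the changed content. Page content is invented.

def render_results_page(status_message):
    return f"""
    <nav role="navigation" aria-label="Main menu">
      <a href="/articles">Articles</a>
    </nav>
    <main role="main">
      <h1>Search results</h1>
      <!-- role="status" + aria-live="polite" makes the screen reader
           announce the updated text when the results change -->
      <div role="status" aria-live="polite">{status_message}</div>
    </main>
    """


html = render_results_page("12 results found")
assert 'role="navigation"' in html
assert 'aria-live="polite"' in html
```

In the Wikipedia case above, landmarks of this kind are what would let a screen reader user skip directly to the main content or navigation region instead of reading the page linearly.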

4 Discussion

This scoping review focused on the accessibility of digital technologies for people with visual impairment and blindness, identifying items facilitating their use based on first-hand experiences of the target group and the existing accessibility guidelines. The studies included in this review covered a variety of contexts, including work, education, and daily activities such as online shopping.

It was noted that users would like to be able to make changes based on their needs, for example, in terms of color contrast, text font, text color, and text size [ 39 ]. This flexibility is particularly helpful for people with visual impairment, for example, when they use magnifier applications to access online platforms and websites. Additionally, font shape, the space between letters and lines, and stroke width were identified as factors that influence the readability of text, and that consequently might contribute to eye fatigue: “It is difficult to read text because of the unclear color of text. This can easily cause eye fatigue to people with low vision” [ 33 ]. This highlights the importance of being able to customize the details of apps and websites to suit one’s needs and preferences. It is therefore advisable to offer such flexibility, especially in mobile applications, so that customization is possible.

In this review, no restrictions were placed on the year of publication, allowing patterns over time to be detected. It was observed that some accessibility issues remain consistent: for example, keyboard accessibility on websites has been a problem for screen reader users from as early as 2004 [ 27 ] to 2022 [ 49 ]. Following a universal design perspective when developing new technologies could prevent such persistent limitations that people with disabilities face while using technologies.

It was repeatedly mentioned that communication relying solely on visual elements must be avoided on websites, apps, social media, and so on. Using images as icons or as a means of communication without alternative text makes such tools inaccessible to screen reader users. The role of each element, e.g., each UI element, must be clear [ 25 ]. Many studies also reported that websites and applications are designed primarily for sighted users; for example, users with blindness could not use an online banking website because the information on the homepage was highlighted only visually, so while sighted users could find their way around the site, users with blindness were excluded [ 26 ].
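
As a rough illustration of the kind of check this implies, the sketch below flags `<img>` elements that carry no alternative text at all. A real audit would use a proper HTML parser and full WCAG 1.1.1 tooling; the regex-based function here, including its name, is invented for the example:

```python
# Minimal, regex-based sketch that flags <img> tags with no alt attribute,
# the pattern the studies above describe as excluding screen reader users.
# (It deliberately does not flag decorative images marked with an empty
# alt="", which is a separate, legitimate case.)
import re

def images_missing_alt(html):
    """Return the <img> tags that provide no alternative text at all."""
    missing = []
    for tag in re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE):
        if not re.search(r'\balt\s*=', tag):
            missing.append(tag)
    return missing


page = '<img src="logo.png" alt="Company logo"><img src="icon-cart.png">'
assert images_missing_alt(page) == ['<img src="icon-cart.png">']
```

An icon-only control like the second image above is exactly what a screen reader can announce only as "image", leaving its role unclear.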

Compatibility with different ATs must be taken into account. Singleton and Neuber [ 36 ] showed that even though an accessible PDF file (e.g., containing alt text for images) was easy for screen reader users to navigate, visually impaired users who use screen magnifiers were challenged: when zooming in, the text, images and columns would collide and the users would lose some information, affecting their educational performance. In this case, measures could be taken to avoid such limitations (e.g., text size). Additionally, CAPTCHAs should be accessible for users of ATs; for example, the audio version of the text must be clear [ 42 ]. In such cases of inaccessibility, users have to either abandon the technology or seek assistance from people without visual impairment [ 25 ].

In online shopping, sighted users can look at images to understand details and make decisions, but this is challenging for blind users because the descriptions provided are usually short and do not give enough information about how the products look. For example, a participant in a study by [ 43 ] stated: “Quite often, it will just say blue and white, but it wouldn’t tell you [if] it’s blue and white stripes or blue with white flowers or, you know, blue and white spots… I don’t know which one is most… which one has the most color.” Furthermore, some users stated that they ignore images on websites altogether because, in their experience, they are never accessible: “I ignore images on a website as they’ve never had useful descriptions” and “images do not affect shopping experience as alt-text is normally not good” [ 60 ]. Users claimed that, instead of listening to the alt text of images, they usually read the review comments from users who have already purchased these items because they provide detailed information such as the color, whether the items are formal, etc. In this regard, Tigwell et al. [ 62 ] explored the use of emojis from the perspectives of people with visual impairment and blindness. While sighted users might benefit from using these features easily, screen reader users can be challenged. Firstly, some emojis do not have appropriate alt text, confusing screen reader users. For example, “Email subjects have emoji now; Ebay put a [truck emoji] to show your order has been sent. For a long time, I [was] puzzled as to why they’d write the word ‘truck’ there.” Secondly, searching the long list of emojis and finding the one that they need is laborious: “…finding the right one to send. I either don’t know whether it exists or what it is, or where to find it. Sighted people just glance at a screen and can find them pretty quickly, while we have to go through all of them.” Lastly, for some blind users, comprehending the meaning of such emojis is not easy, as without eyesight or assistance their use is not possible: “Some emoji [are] useless or just have a bad design (I was told the ‘pray’ emoji … is actually a ‘high five’),” or “Emoji is something fun for sighted texters…but for me it’s just an extra string of words. …like the grinning face emoji; it looks fun and cute when you look at it, but Voiceover describes it as ‘grinning face with clenched teeth emoji’ which sounds more like a grimace than a big smile.”

Individual differences are widely observed in the context of disability, and the accessibility of digital technologies is also influenced to some extent by such differences. For example, color contrast preferences vary among individuals: “most participants preferred a black background with white text, but one participant preferred black background with yellow text” [ 63 ]. Another example of individual differences was detected in the preference for using menus. While the participants in the study by Yeong et al. [ 35 ] found drop-down menus to be helpful (as less scrolling is required), participants in the study by Sapp [ 39 ] were happy to have alternatives to drop-down menus.

Regarding accessibility, even though guidelines offer constructive and helpful frameworks, they do not suffice for the complete accessibility of digital technologies for people with disabilities [ 25 , 31 , 54 ], and there remains room for improvement. This scoping review highlights details of first-hand experiences of the target group, and such information is a helpful source for improving accessibility. In addition, prototypes that are focused on improving digital accessibility provide constructive details that could be transferred to the development and design of accessible digital technologies. However, this type of information is usually lost in the pool of available information. Therefore, this review contributes to bringing together all of these attempts so that all stakeholders may benefit from existing research. The examples provided in this review, as further explanations of WCAG elements, can also help all stakeholders, even those with little or no IT knowledge, to better understand accessibility and try to implement it in their products. Despite all of these benefits, this review has some limitations. Firstly, the review focused on only one type of disability, so further research is needed that focuses on different types of disabilities. In addition, other databases could be considered to include further publications.

This review’s findings are consistent with those of Khan and Khusro’s literature review [ 12 ] on smartphones, as they highlight influential factors such as the importance of meaningful sequences, accessible navigation, compatibility with other ATs, consistency, and flexibility. However, where this review provides a more detailed description of items based on the target group’s experiences, Khan and Khusro’s review [ 12 ] offers a summary of the importance of each factor. Furthermore, this review is in line with other studies in finding that accessibility guidelines need to be improved. For example, on the topic of the accessibility of mobile applications, Di Gregorio et al. [ 3 ] argued that, although there has been increased interest in accessibility and the development of ATs, there is room for improvement in accessibility guidelines for mobile applications. Similarly, in agreement with Ashraf et al. [ 64 ], all stakeholders, including technology developers, need to be informed about these guidelines and asked to implement them in the earlier stages of design and development. It is noteworthy that previous studies and reviews have predominantly concentrated on a specific technology or certain features of technologies; for instance, Shera et al. [ 65 ] developed a checklist for the user interface of mobile applications based on the accessibility issues of the target group. This review, however, encompassed a broader range of technologies, focusing on accessibility and identifying both challenges and facilitators. In conclusion, accessibility extends from the moment at which technologies are designed and developed to the moment at which they are implemented. Consequently, it is important not only to educate product and application developers about accessibility but also to train AT users to use these technologies competently. A checklist will be developed based on the review findings, further analysis of the results, and consultation with the target group. This checklist is intended to be a tool for stakeholders in the accessible development and use of digital technologies.

Data availability

No datasets were generated or analysed during the current study.

Lindqvist MH. The uptake and use of digital technologies and professional development: exploring the university teacher perspective. In: Elçi A, Beith LL, Elçi A, editors. Handbook of research on faculty development for digital teaching and learning. Hershey: IGI Global; 2019. p. 505–25.

Kulkarni M. Digital accessibility: challenges and opportunities. IIMB Manag Rev. 2019;31(1):91–8.

Di Gregorio M, Di Nucci D, Palomba F, Vitiello G. The making of accessible Android applications: an empirical study on the state of the practice. Empir Softw Eng. 2022;27(6):145.

Iwarsson S, Ståhl A. Accessibility, usability and universal design–positioning and definition of concepts describing person-environment relationships. Disabil Rehabil. 2003;25(2):57–66.

Botelho FHF. Accessibility to digital technology: Virtual barriers, real opportunities. Assist Technol. 2021;33(sup1):27–34.

Web Content Accessibility Guidelines (WCAG) 2.1. How to Meet WCAG (Quick Reference): W3C World Wide Web Consortium Recommendation; 2023. https://www.w3.org/WAI/WCAG22/quickref/?versions=2.1. Accessed 16 Feb 2024.

Johansson S, Gulliksen J, Gustavsson C. Disability digital divide: the use of the internet, smartphones, computers and tablets among people with disabilities in Sweden. Univ Access Inf Soc. 2021;20(1):105–20.

Croxford S, Rundle C. Blind people and apps on mobile phones and tablets—challenges and possibilities; 2013. p. 343–6.

Haury I, Hamideh Kerdar S, Kirchhoff BM. Barrierefreiheit digitaler Arbeitswelten am Beispiel von Webkonferenztools: Eine Interviewbefragung blinder und sehbehinderter Nutzer*innen von Webkonferenztools am Arbeitsplatz. Sicher ist Sicher; 2023. p. 26–32.

Gonçalves R, Rocha T, Martins J, Branco F, Au-Yong-Oliveira M. Evaluation of e-commerce websites accessibility and usability: an e-commerce platform analysis with the inclusion of blind users. Univ Access Inf Soc. 2018;17(3):567–83.

Crane MA, Nguyen M, Lam A, Berger ZD, Paulus YM, Romley JA, et al. Figure accessibility in journals: analysis of alt-text in 2021–23. Lancet. 2023;402(10419):2287–9.

Khan A, Khusro S. An insight into smartphone-based assistive solutions for visually impaired and blind people—issues, challenges and opportunities. Univ Access Inf Soc. 2021;19:1–25.

Tsatsou P. Is digital inclusion fighting disability stigma? Opportunities, barriers, and recommendations. Disability Soc. 2021;36(5):702–29.

Almeida AMP, Beja J, Pedro L, Rodrigues F, Clemente M, Vieira R, et al. Development of an online digital resource accessible for students with visual impairment or blindness: challenges and strategies. Work. 2020;65:333–42.

Halbach T, Fuglerud KS, Fyhn T, Kjæret K, Olsen TA, et al. The role of technology for the inclusion of people with visual impairments in the workforce. Universal access in human-computer interaction user and context diversity. Cham: Springer International Publishing; 2022.

Al-Razgan M, Almoaiqel S, Alrajhi N, Alhumegani A, Alshehri A, Alnefaie B, et al. A systematic literature review on the usability of mobile applications for visually impaired users. PeerJ Comput Sci. 2021;7: e771.

Senjam S, Manna S, Bascaran C. Smartphones-based assistive technology: accessibility features and apps for people with visual impairment, and its usage, challenges, and usability testing. Clin Optom. 2021;13:311–22.

Oh U, Joh H, Lee Y. Image accessibility for screen reader users: a systematic review and a road map. Electronics. 2021;10(8):953.

Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.

Shimomura Y, Hvannberg ET, Hafsteinsson H. Accessibility of audio and tactile interfaces for young blind people performing everyday tasks. Univ Access Inf Soc. 2010;9(4):297–310.

Pollock D, Peters MDJ, Khalil H, McInerney P, Alexander L, Tricco AC, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth. 2023;21(3):520–32.

University of Washington. https://www.washington.edu/accessibility/checklist/. Accessed 14 Feb 2024.

Alajarmeh N. The extent of mobile accessibility coverage in WCAG 2.1: sufficiency of success criteria and appropriateness of relevant conformance levels pertaining to accessibility problems encountered by users who are visually impaired. Univ Access Inf Soc. 2022;21(2):507–32.

Brunsman-Johnson C, Narayanan S, Shebilske W, Alakke G, Narakesari S. Modeling web-based information seeking by users who are blind. Disabil Rehabil Assist Technol. 2011;6(6):511–25.

Craven J, editor. Linear searching in a non-linear environment: the information seeking behaviour of visually impaired people on the world wide web. Computers helping people with special needs. Berlin: Springer; 2004.

Candan KS, Dönderler ME, Hedgpeth T, Kim JW, Li Q, Sapino ML. SEA: segment-enrich-annotate paradigm for adapting dialog-based content for improved accessibility. ACM Trans Inf Syst. 2009;27(3):15.

Buzzi M, Leporini B. Editing wikipedia content by screen reader: easier interaction with the accessible rich internet applications suite. Disabil Rehabil Assist Technol. 2009;4(4):264–75.

Hochheiser H, Lazar J. Revisiting breadth vs. depth in menu structures for blind users of screen readers. Interact Comput. 2010;22(5):389–98.

Vollenwyder B, Petralito S, Iten GH, Brühlmann F, Opwis K, Mekler ED. How compliance with web accessibility standards shapes the experiences of users with and without disabilities. Int J Hum Comput Stud. 2023;170: 102956.

Leporini B, Buzzi M. Visually-impaired people studying via eBook: investigating current use and potential for improvement. In: Proceedings of the 2022 6th international conference on education and E-learning; Yamanashi, Japan: association for computing machinery; 2023. p. 288–95.

Kim HK, Han SH, Park J, Park J. The interaction experiences of visually impaired people with assistive technology: a case study of smartphones. Int J Ind Ergon. 2016;55:22–33.

de Oliveira GAA, Oliveira ODF, de Abreu S, de Bettio RW, Freire AP. Opportunities and accessibility challenges for open-source general-purpose home automation mobile applications for visually disabled users. Multimedia Tools Appl. 2022;81(8):10695–722.

Yeong JL, Thomas P, Buller J, Moosajee M. A newly developed web-based resource on genetic eye disorders for users with visual impairment (Gene.Vision): usability study. J Med Internet Res. 2021;23(1): e19151.

Singleton KJ, Neuber KS. Examining how students with visual impairments navigate accessible documents. J Vis Impairment Blind. 2020;114(5):393–405.

Kamei-Hannan C. Examining the accessibility of a computerized adapted test using assistive technology. J Vis Impairment Blind. 2008;102(5):261–71.

Vigo M, Harper S. Coping tactics employed by visually disabled users on the web. Int J Hum Comput Stud. 2013;71(11):1013–25.

Sapp W. MySchoolDayOnline: Applying universal design principles to the development of a fully accessible online scheduling tool for students with visual impairments. J Vis Impairment Blind. 2007;101(5):301–7.

Carvalho MCN, Dias FS, Reis AGS, Freire AP. Accessibility and usability problems encountered on websites and applications in mobile devices by blind and normal-vision users. In: Proceedings of the 33rd annual ACM symposium on applied computing; Pau, France: association for computing machinery; 2018. p. 2022–9.

Wentz B, Hochheiser H, Lazar J. A survey of blind users on the usability of email applications. Univ Access Inf Soc. 2013;12(3):327–36.

Alnfiai MM, Sampalli S. BraillePassword: accessible web authentication technique on touchscreen devices. J Ambient Intell Humaniz Comput. 2019;10(6):2375–91.

Alluqmani A, Harvey MA, Zhang Z. The barriers to online clothing websites for visually impaired people: an interview and observation approach to understanding needs. In: Proceedings of the 2023 ACM designing interactive systems conference; Pittsburgh: association for computing machinery; 2023. p. 753–64.

Swati MA, Madni TM, Janjua UI, Ahmad I, editors. Accessibility of social media application for blinds. In: 2021 4th international conference on computing & information sciences (ICCIS); 2021 29–30 Nov; 2021.

Watanabe T, Yamaguchi T, Minatani K. Advantages and drawbacks of smartphones and tablets for visually impaired people—analysis of ICT user survey results. IEICE Trans Inf Syst. 2015;E98.D:922–9.

Muwanguzi S, Lin L. Wrestling with online learning technologies: blind students’ struggle to achieve academic success. IJDET. 2010;8:43–57.

Huffman LA, Uslan MM, Burton DM, Eghtesadi C. A study of multifunctional document centers that are accessible to people who are visually impaired. J Vis Impairment Blind. 2009;103(4):223–9.

Huang H. Blind users’ expectations of touch interfaces: factors affecting interface accessibility of touchscreen-based smartphones for people with moderate visual impairment. Univ Access Inf Soc. 2018;17(2):291–304.

Senjam SS, Primo SA. Challenges and enablers for smartphone use by persons with vision loss during the COVID-19 pandemic: a report of two case studies. Front Public Health. 2022;10: 912460.

Stary C, Totter A. Measuring the adaptability of universal accessible systems. Behav IT. 2003;22(2):101–16.

Godfrey AJR. Statistical software from a blind person’s perspective. R J. 2013;5:73–9.

Watanabe T, Araki K, Yamaguchi T, Minatani K. Development of Tactile graph generation web application using R statistics software environment. IEICE Trans Inf Syst. 2016;E99.D(8):2151–60.

Mirri S, Salomoni P, Prandi C, Muratori LA. GAP for APE: an augmented browsing system to improve Web 2.0 accessibility. N Rev Hypermedia Multimedia. 2012;18(3):205–29.

Leuthold S, Bargas-Avila JA, Opwis K. Beyond web content accessibility guidelines: design of enhanced text user interfaces for blind internet users. Int J Hum Comput Stud. 2008;66(4):257–70.

Zeinullin M, Hersh M. Tactile audio responsive intelligent system. IEEE Access. 2022;10:122074–91.

Rodrigues A, Nicolau H, Montague K, Guerreiro J, Guerreiro T. Open challenges of blind people using smartphones. Int J Human-Comput Interact. 2020;36(17):1605–22.

Aqle A, Al-Thani D, Jaoua A. Can search result summaries enhance the web search efficiency and experiences of the visually impaired users? Univ Access Inf Soc. 2022;21(1):171–92.

Muwanguzi S, Lin L. Wrestling with online learning technologies: blind students’ struggle to achieve academic success. Int J Distance Educ Technol (IJDET). 2010;8(2):43–57.

Nimmolrat A, Khuwuthyakorn P, Wientong P, Thinnukool O. Pharmaceutical mobile application for visually-impaired people in Thailand: development and implementation. BMC Med Inform Decis Mak. 2021;21(1):217.

Sreedhar R, Tan N, Zhang J, Jin K, Gregson S, Moreta-Feliz E, et al. AIDE: an automatic image description engine for review imagery. In: Proceedings of the 19th international web for all conference; Lyon, France: association for computing machinery; 2022. p. Article 9.

Locke K, McRae L, Peaty G, Ellis K, Kent M. Developing accessible technologies for a changing world: understanding how people with vision impairment use smartphones. Disabil Soc. 2022;37(1):111–28.

Tigwell GW, Gorman BM, Menzies R. Emoji Accessibility for Visually Impaired People. Proceedings of the 2020 CHI conference on human factors in computing systems; Honolulu, HI, USA: association for computing machinery; 2020. p. 1–14.

Kim HN. User experience of assistive apps among people with visual impairment. Technol Disabil. 2022;34:165–74.

Ashraf M, Hasan N, Lewis L, Hasan M, Ray P. A Systematic literature review of the application of information communication technology for visually impaired people. Int J Disabil Manag. 2017;11: e6.

Shera MA, Iqbal MW, Shahzad SK, Gul M, Mian NA, Naqvi MR, et al. Blind and visually impaired user interface to solve accessibility problems. Intell Autom Soft Comput. 2021;30(1):285–301.

Funding

Open Access funding enabled and organized by Projekt DEAL. No funding was received for conducting this study.

Author information

Authors and Affiliations

Federal Institute of Occupational Safety and Health, 44149, Dortmund, Germany

Sara Hamideh Kerdar & Britta Marleen Kirchhoff

Department of Rehabilitation and Special Education, University of Cologne, 50674, Cologne, Germany

Sara Hamideh Kerdar & Liane Bächler

Contributions

All authors contributed to the review’s idea. The literature search was performed by Sara Hamideh Kerdar. Data screening was performed by Sara Hamideh Kerdar and Liane Bächler. Data analysis was performed mainly by Sara Hamideh Kerdar and supported by Liane Bächler and Britta Kirchhoff. The manuscript was drafted by Sara Hamideh Kerdar and critically reviewed by Britta Kirchhoff and Liane Bächler.

Corresponding author

Correspondence to Sara Hamideh Kerdar .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hamideh Kerdar, S., Bächler, L. & Kirchhoff, B.M. The accessibility of digital technologies for people with visual impairment and blindness: a scoping review. Discov Computing 27 , 24 (2024). https://doi.org/10.1007/s10791-024-09460-7

Received : 24 April 2024

Accepted : 26 July 2024

Published : 01 August 2024

DOI : https://doi.org/10.1007/s10791-024-09460-7


Keywords

  • Accessibility
  • Digital technologies
  • Visual impairment
  • Find a journal
  • Publish with us
  • Track your research

RELATED PUBLICATIONS

  1. PDF Case Study 5: 12 month old boy with severe/complex visual impairment

    Report included: Luke is a 12 month old infant with a history of epilepsy, right-sided limb weakness and delayed development of uncertain aetiology. He is under the ongoing care of ophthalmology and receives regular occupational and physiotherapy input. His seizures are currently well controlled and mum reports a reduction in the roving eye ...

  2. (PDF) INCLUSIVE EDUCATION: A CASE STUDY ON ITS ...

    The present study examines the perceptions of visually impaired individuals towards inclusive education in Malaysia, including the challenges faced. The impact on their social lives was also ...

  3. Visual impairment, severe visual impairment, and blindness in children

    Study design and case definition. BCVIS2 was a prospective UK-wide, cross-sectional, observational study to establish an inception cohort of children newly diagnosed with visual impairment. ... Thus, ophthalmologists reported all eligible children (visual impairment, severe visual impairment, and blindness) and paediatricians reported those ...

  4. Educating Students with Visual Impairments in the General Education Setting

    experiences. A case study approach was used to gather the data in a naturalistic setting. In this case study, all student participants were individuals with visual impairments along the spectrum of being legally blind. Findings of this study revealed four emerging themes that produced evidence of the unique participant's experiences.

  5. Vision-related tasks in children with visual impairment: a multi-method

    Children with visual impairments may experience delays in acquiring the crucial skill of dressing due to the absence or reduction of visual input, as revealed in a study (Hayton et al., 2019). Therefore, training in independent living skills is a component of habilitation practice that aims to promote autonomy in children and adolescents with ...

  6. Early detection of visual impairment in young children using a

    After data quality checking, videos of 361 children were reserved (92.8%), including 87 (24.1%) children without visual impairment, 169 (46.8%) children with mild visual impairment and 105 (29.1% ...

  7. PDF How to help children with neurodevelopmental and visual problems: a

    Severe visual impairment or blindness in children is often associated with other impairments [1]. Children ... We found a CCT, a before/after study and a number of case reports describing the effects of prescribing distance spectacles for children with VND. The CCT [11] involved four groups of

  8. Including children with a visual impairment in the mainstream primary

    The inquiry comprised a multiple case study of children with visual impairment in 17 mainstream primary schools. Classroom observation and interviews were the main methods used. Interviews were conducted with all those who had a direct impact on the quality of the children's inclusion in the classrooms, such as the teaching assistant, class ...

  9. PDF A study to understand the inclusion of learners with and without visual

    children with visual impairment. The study was conducted using a qualitative research approach, and a case study format was adopted. Eight participants (aged 16-23; 5 girls and 3 boys) participated in the study. Two focus groups were formed: one comprised 4 learners without visual impairment, and another 4 learners with visual impairment.

  10. PDF Creating Audio Books for Children with Visual Impairment: The

    conclusion of this case study approach was the learning outcome of the students who felt it was a time-consuming effort, but a rewarding experience. Keywords: audio books, children with visual impairments, virtual learning, collaborative learning, India INTRODUCTION Virtual learning has revolutionized learning among all ages. Students use computers

  11. Teaching Students with Visual Impairments in Inclusive Classrooms

    Children with visual impairments often have talents that they will be unable to develop without guidance to help them learn by using different sensory modes. A variety of teaching approaches will serve to enhance their learning and abilities in all areas of their lives. ... The teachers in the study were presented with 14 case study description ...

  12. Visual impairment, severe visual impairment, and blindness in children

    The British Childhood Visual Impairment and Blindness Study 2 (BCVIS2) was done to address this evidence gap. ... Thus, the study sample comprised 784 children with permanent newly-diagnosed all-cause visual impairment, severe visual impairment, or blindness. 559 (72%) of 778 children had clinically significant non-ophthalmic impairments or ...

  13. Visual Impairment in Preschool Children in the United States

    In 2015, more than 174 000 children aged 3 to 5 years in the United States were visually impaired. Almost 121 000 of these cases (69%) arose from simple uncorrected refractive error, and 43 000 (25%) from bilateral amblyopia. By 2060, the number of children aged 3 to 5 years with VI is projected to increase by 26%.

  14. PDF Visual impairment, severe visual impairment, and blindness in children

    Study design and case definition BCVIS2 was a prospective UK-wide, cross-sectional, ... reported all eligible children (visual impairment, severe visual impairment, and blindness) and paediatricians reported those with severe visual impairment and blindness. Cases were ascertained over a 12-month period

  15. PDF Case Study 7: 3 year old girl with severe developmental disability, CP

    CP, deafness and significant visual impairment Report included: Olivia is a three year old girl with a history of kernicterus and severe dystonic four limb movement disorder. She has sensorineural hearing loss and a cochlear implant. Olivia attends a pre-school for children with learning and physical impairments.

  16. PDF Students with Visual Impairments in a Dual-language Program: A Case

    The participants in the study included Sarah and Madison, two students who are visually impaired; Sarah's father and Madison's mother; 10 general education teachers; and 2 teachers of students with visual impairments (both the former and current teacher of visually impaired students). All 12 teachers were bilingual and biliterate, but the ...

  17. Children With Cortical Visual Impairment and Complex Communication

    However, the leading cause of visual impairment in children today is cortical visual impairment (CVI; Good et al., 2001; Jan et al., 2006). CVI is a brain-based visual impairment caused by damage to, or atypical structures of, visual pathways and/or visual processing centers of the brain (Huo et al., 1999).

  18. Understanding cortical visual impairment in children

    This article presents a review of the literature and a case study on a child with cortical visual impairment. The literature review covers the diagnosis, etiology, prevalence, prognosis, and a comparison of the differences between children with cortical visual impairment and those with ocular impairment. The case study presents occupational ...

  19. Viewing Strategies in Children With Visual Impairment and Children With

    Visual impairment in children is caused by ocular diseases and/or genetic factors affecting the eye or by cerebral visual impairment (CVI). ... Cohort Study • Children with normal vision (N = 16) • Adults with normal vision (N = 16, not further regarded) ... Case control • N = 120 children without delayed reading skills

  20. A Systematic Review on Inclusive Education of Students with Visual

    This was a systematic review on the inclusive education of students with visual impairment. This study focused on two of the most addressed topics: the perceptions of general education teachers and challenges faced by students with visual impairment in accessing academic subjects. It synthesized the findings of 18 peer-reviewed articles published in English from 1980 to 2020. General education ...

  21. Thomas the Writer: Case Study of a Child With Severe Physical, Speech

    A case study is presented to describe the development of augmentative and alternative communication (AAC) and literacy skills by a 9-year-old child, Thomas, who has quadriplegic cerebral palsy and a central vision impairment.

  22. Bodily-tactile early intervention for a mother and her child with

    Purpose: Congenital visual impairment and additional disabilities (VIAD) may hamper the development of a child's communication skills and the quality of overall emotional availability between a child and his/her parents. This study investigated the effects of bodily-tactile intervention on a Finnish 26-year-old mother's use of the bodily-tactile modality, the gestural and vocal expressions of ...

  23. PDF Social Experiences of High School Students with Visual Impairments

    This study explores the social experiences in high school of students with visual impairments. Methods: Experience sampling methodology was used to examine (a) how socially included students with visual impairments feel, (b) the internal qualities of their activities, and (c) the factors that influence a sense of inclusion.

  24. The accessibility of digital technologies for people with visual

    This scoping review was performed according to the PRISMA guidelines for scoping reviews (PRISMA-ScR) as well as the framework proposed by Arksey and O'Malley, in five phases, the first being identifying the research question. The purpose of this review was to understand the experiences of people with visual impairment and blindness in using digital technologies, as well as the challenges faced by ...