Abstract
Introduction
Identifying vision problems after stroke is important for appropriate referral and vision rehabilitation in healthcare services. In Norway, vision assessment is not a standard routine or integrated into stroke care, owing to a lack of knowledge, guidelines, and validated Norwegian assessment tools for healthcare professionals (HCPs) without formal vision competence. This study aimed to validate and assess the reliability of the KROSS (Competence, Rehabilitation of Sight after Stroke) tool for identifying vision problems in stroke patients.
Methods
The KROSS tool has 21 items, covering symptoms, observations, and assessment of visual acuity, visual field, eye movements and visual inattention. The primary outcome is whether a vision problem is present. Sixty-seven stroke survivors (mean age 69.8 years, 28 females) were assessed twice. The first KROSS assessment was performed by an HCP without formal vision competence and compared to a reference assessment by an optometrist/KROSS specialist within 2 days. Sensitivity, specificity, positive and negative predictive values (PPV/NPV) and inter-rater reliability (Gwet’s AC1/Cohen’s kappa) were calculated with 95% confidence intervals.
Results
The KROSS tool demonstrated high sensitivity (98%) and specificity (83%), with excellent reliability (AC1 > 0.86/kappa > 0.83) and observer agreement (93%) for the primary outcome. A vision problem was identified in 64% of patients, while only 44% reported a vision symptom. The PPV and NPV for identifying a vision problem were 0.91 and 0.95, respectively. Sensitivity was excellent (> 80%) for the visual acuity, reading, and visual inattention assessments, and specificity was high for all items (> 70%). Most items showed excellent or substantial agreement (AC1 > 0.7/kappa > 0.6). The lowest agreement was for motility (AC1 > 0.8/kappa > 0.4) and peripheral visual fields (AC1 > 0.8/kappa > 0.5).
Conclusions
This study indicates that the KROSS tool is a promising means of integrating vision assessment into stroke health services. It has high sensitivity and specificity, and excellent reliability, indicating high accuracy for identifying a vision problem. The KROSS tool can therefore be used reliably by HCPs without formal vision competence to identify a vision problem. The fact that many stroke survivors were identified with vision problems using the KROSS tool, even when they did not complain of visual symptoms, supports the inclusion of structured vision assessment in stroke care.
1 Introduction
Worldwide, stroke is a leading cause of death and disability [1, 2]. Among the many sequelae of stroke are visual impairments (VIs), which affect up to 60% of all stroke survivors [3, 4]. VIs after stroke include visual field defects, eye movement disorders, reduced visual acuity and perceptual disorders [3, 4], and are associated with increased depression, a higher risk of falls, decreased participation in activities and a reduced effect of general rehabilitation, among other issues [5,6,7]. Appropriate vision rehabilitation, provided at the right time, can assist in performing daily activities such as reading, driving, and walking. Moreover, optimised vision, coupled with the right information and education, can support coping, enhance family life, and facilitate participation in work and social activities, leading to improved quality of life [8,9,10]. It is therefore essential that VIs are identified after stroke.
However, symptoms of VIs before and after stroke can be difficult to identify and risk being overlooked or delayed initially, when life-saving acute treatment is needed [8, 10,11,12,13]. Vision-related symptoms may present in many ways, including dizziness, reading problems, headache, balance problems, or fatigue, and are often not experienced as a visual problem by the stroke survivor [8, 14]. Almost 40% of stroke survivors with new visual problems fail to report them to healthcare professionals, and simply asking about VI is not enough [11, 12, 15, 16]. Many of these symptoms may also represent impairments that are not vision related, and it is important to find the underlying cause of the visual symptoms. Unless visual function is properly examined, many visual problems are difficult for healthcare professionals to identify and may be overlooked or perceived as symptoms of other impairments [16,17,18]. To secure proper care and referral to relevant rehabilitation, it is crucial that visual problems are identified before discharge. This means that both visual functions (the performance of visual system components, such as visual acuity) and functional vision (the ability to use vision to perform activities such as reading, walking, or pouring a cup of coffee) [18] must be assessed within the health services [19, 20]. A basic vision assessment should, at a minimum, include objective assessments of visual acuity, eye movements, visual fields, and visual attention (neglect), in addition to history, symptoms and clinical observations of functional vision [15, 16, 21].
In stroke services, several functional assessment tools are implemented, but no standard tool includes a full vision assessment [22]. One example is the English Vision Impairment Screening Assessment (VISA) tool, which has been validated and enables clinicians who are not vision specialists to identify VIs and refer patients with suspected VIs to vision experts [23]. In Norway, the assessment tool Competence, Rehabilitation Of Sight after Stroke (KROSS) has been developed and piloted in two stroke units [24] and in Kongsberg municipal healthcare service [16, 25]. The KROSS tool includes objective tests of visual acuity, visual field, eye movements and neglect. In addition, patients are asked about symptoms, and clinical observations are described. It was designed to give healthcare professionals without formal vision competence an easy-to-use tool for identifying VIs after stroke during treatment in the stroke unit [16, 24, 25].
In Norway, the hospital stay in stroke units is short (median 4 days) and municipal healthcare services are the main providers of rehabilitation and follow-up after the initial treatment [26]. A recent article from our group confirmed that Norwegian stroke survivors experienced a lack of attention to and follow-up of VIs after stroke, and that healthcare professionals in both specialist and municipal healthcare services directed their attention to other consequences of stroke rather than vision problems [8]. This highlights the need for vision competence, attention, and a Norwegian assessment tool integrated into interdisciplinary stroke care and rehabilitation pathways. The main aim of this study was to validate the KROSS tool, which combines basic established assessments, observations and questions about visual functions and functional vision, for identifying vision problems after stroke in Norwegian stroke health services. Secondary aims were to determine sensitivity, specificity, predictive values, and inter-rater agreement for the main outcome and the individual visual function items of the KROSS tool. We compared the use of the KROSS tool by healthcare professionals without formal vision competence to its use by KROSS assessors with formal vision competence.
Methods
1.1 Study design and setting
This study used a cross-sectional design with prospective data collection from an acute stroke unit in Vestre Viken Hospital Trust and Vikersund rehabilitation centre. Participants gave written informed consent, having been informed that participation was voluntary and came in addition to, without affecting, their standard treatment and rehabilitation. If a vision problem was identified, the patient was informed and referred appropriately to an optometrist or ophthalmologist.
1.2 Participants
Patients were recruited consecutively by a member of staff, and all patients who fulfilled the inclusion criteria were invited to participate (April 2022–June 2023). Inclusion criteria were: acute stroke, TIA or stroke mimic diagnosis; age over 18 years; able to participate in the KROSS assessment; and able to provide written informed consent. Seventy-three patients were recruited, and 67 completed both KROSS assessments; see Fig. 2 and Table 1.
1.3 Procedure
Five healthcare professionals without formal vision competence (a physician, nurse, physiotherapist, occupational therapist and assistant nurse, termed KROSS testers) and four experienced KROSS specialists/optometrists (termed KROSS references) independently assessed the patients’ vision with the KROSS tool. The two assessments took place in separate sessions, one to three days apart, and followed the standard instructions in the KROSS tool guideline. The KROSS reference assessment was considered the reference standard, and the KROSS reference was blinded to the KROSS tester’s results to avoid bias when comparing them.
1.4 Data collection
The KROSS tool includes screening of: habitual monocular visual acuity (0.5, 0.3 and 0.1 logMAR at 3 m; decimal acuity: 0.3, 0.5 and 0.8), binocular reading (newspaper print, N6 or IReST [27]), peripheral visual field (confrontation (Donders), monocular Facial Amsler, Rapid finger-counting [28, 29]), oculomotor function (motility, observation for strabismus and eye alignment) [30], and neglect (Heart cancellation task [31]). In addition, the patients' subjective symptoms are documented and clinical observations are described. Figure 1 shows the 18 items in the KROSS tool (KROSS is available for free (in Norwegian or English) on request from https://nettskjema.no/a/krosslisens#/page/1).
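The acuity levels above are given in both logMAR and decimal notation. The two are related by decimal acuity = 10^(−logMAR), so the listed pairs can be checked directly. A minimal sketch (the conversion itself is standard; the code is only illustrative):

```python
# Decimal visual acuity is the reciprocal of the minimum angle of
# resolution (MAR), and logMAR is log10(MAR), so decimal = 10**(-logMAR).
def logmar_to_decimal(logmar: float) -> float:
    return 10 ** (-logmar)

for logmar in (0.5, 0.3, 0.1):
    print(f"logMAR {logmar} ~ decimal {logmar_to_decimal(logmar):.2f}")
# 0.32, 0.50 and 0.79, i.e. the chart's nominal 0.3, 0.5 and 0.8 after rounding.
```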
The primary outcome measure was the presence or absence of a vision problem identified with the KROSS tool. Secondary outcomes were the sensitivity, specificity, and positive and negative predictive values of the KROSS tool. The vision data comprise binary results from the KROSS assessment: whether a vision problem was identified and whether further referral or follow-up was needed.
1.5 Data analysis
A sample size calculation was performed prior to the start of the study. The primary outcome was to test the difference in agreement between two raters of a binary outcome measure (vision problem present or absent). With π = 0.6 (expected probability of a vision problem [3]), kappa0 = 0.6 (corresponding to ‘substantial’ agreement [32]), kappa1 = 0.9 (expected interobserver agreement), α = 0.05 (significance level), and 1-β = 0.80 (power), the minimum required sample size was 58 participants [33]. To account for dropout, 73 patients were recruited to the study (see Fig. 2).
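The sample size was derived with the analytic method of Shoukri et al. [33]. As an illustration only (not the authors' method), the same design parameters can be fed into a simulation-based check that estimates the power to reject kappa0 = 0.6 when the true kappa is 0.9; a Monte Carlo estimate of this kind need not reproduce the analytic value exactly:

```python
import numpy as np

def cell_probs(p, kappa):
    """Joint probabilities of a 2x2 rating table for two raters with
    equal marginal prevalence p and agreement kappa."""
    q = p * (1 - p)
    return np.array([p * p + kappa * q,        # both raters positive
                     q * (1 - kappa),          # rater 1 pos, rater 2 neg
                     q * (1 - kappa),          # rater 1 neg, rater 2 pos
                     (1 - p) ** 2 + kappa * q])  # both raters negative

def kappa_hat(counts):
    a, b, c, d = counts
    n = a + b + c + d
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (po - pe) / (1 - pe)

def power(n, p=0.6, k0=0.6, k1=0.9, alpha=0.05, sims=4000, seed=1):
    rng = np.random.default_rng(seed)
    # Critical value: upper (1 - alpha) quantile of kappa-hat under H0 (kappa = k0).
    k_null = [kappa_hat(rng.multinomial(n, cell_probs(p, k0))) for _ in range(sims)]
    crit = np.quantile(k_null, 1 - alpha)
    # Power: how often kappa-hat under H1 (kappa = k1) exceeds the critical value.
    k_alt = [kappa_hat(rng.multinomial(n, cell_probs(p, k1))) for _ in range(sims)]
    return float(np.mean(np.array(k_alt) > crit))

print(power(58))  # Monte Carlo power estimate at the study's minimum sample size
```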
Inter-rater reliability was analysed using binary data from the two KROSS assessments for identifying the presence or absence of a vision problem, and for categories of vision problems (reduced visual acuity, visual field loss, oculomotor deficits, reduced attention/neglect). All inter-rater agreement analyses, in which the vision specialist’s KROSS result was considered the reference, were expressed as Cohen’s kappa and Gwet’s agreement coefficient (AC1) with 95% confidence intervals (95% CI) [34,35,36]. Gwet’s AC1 is not affected by trait prevalence or rater bias in the same way as kappa, particularly when agreement between raters is high [35]. To describe the strength of agreement we used the criteria of Landis and Koch [32], who described a kappa value of 0–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1 as almost perfect agreement, or excellent [37].
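For the binary primary outcome, both coefficients correct the observed agreement for chance but define chance agreement differently: kappa uses the product of the raters' marginals, whereas Gwet's AC1 uses 2π(1−π), with π the mean of the two marginal prevalences [35]. A minimal sketch, applied to a 2×2 table that is consistent with the reported primary-outcome figures (the exact cell counts are not published, so this table is an illustrative reconstruction):

```python
def agreement_stats(a, b, c, d):
    """a: both raters positive, b: tester pos / reference neg,
    c: tester neg / reference pos, d: both raters negative."""
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    # Cohen's kappa: chance agreement from the product of the marginals.
    pe_k = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    kappa = (po - pe_k) / (1 - pe_k)
    # Gwet's AC1: chance agreement 2*pi*(1-pi), pi = mean marginal prevalence.
    pi = ((a + b) + (a + c)) / (2 * n)
    pe_g = 2 * pi * (1 - pi)
    ac1 = (po - pe_g) / (1 - pe_g)
    return po, kappa, ac1

# Reconstructed counts consistent with the reported primary-outcome values:
# 42 true positives, 4 false positives, 1 false negative, 20 true negatives.
po, kappa, ac1 = agreement_stats(42, 4, 1, 20)
print(f"agreement {po:.0%}, kappa {kappa:.2f}, AC1 {ac1:.2f}")
# -> agreement 93%, kappa 0.83, AC1 0.87, matching the reported values
```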
Sensitivity (the ability to correctly identify a present vision problem in a person with vision problems), specificity (the ability to correctly identify an absent vision problem in a person without vision problems), positive and negative predictive values were calculated with 95% CI. Descriptive statistics were presented as means, 95% CI, frequencies, and percentages. Alpha was set to 0.05, and analyses were performed in Microsoft Excel and IBM SPSS Statistics (Version 24, US).
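The diagnostic accuracy measures follow directly from the 2×2 counts, and the reported confidence intervals are consistent with Wilson score intervals (the CI method is not named in the text, so Wilson is an assumption here; the cell counts are likewise reconstructed from the reported percentages for illustration):

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),   # problem present, correctly flagged
            "specificity": tn / (tn + fp),   # problem absent, correctly cleared
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

# Reconstructed primary-outcome counts (illustrative): tp=42, fp=4, fn=1, tn=20.
m = diagnostic_metrics(42, 4, 1, 20)
lo, hi = wilson_ci(20, 24)  # CI for specificity, 20 of 24 reference-negatives
print(m, f"specificity 95% CI ({lo:.2f}-{hi:.2f})")
# sensitivity 0.98, specificity 0.83, PPV 0.91, NPV 0.95; CI (0.64-0.93)
```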
2 Results
A total of 67 patients (41.8% females) completed two KROSS assessments and were included in the data analysis (Fig. 2). The mean age was 69.8 years (range 21–97). Stroke type was predominantly ischemic (70.1%); the mean NIHSS score for the 43 patients in the stroke unit was 1.9, and the mean Barthel index for the 24 patients in the rehabilitation centre was 0.95. The median time since stroke at the first KROSS assessment was 3 days in the stroke unit and 300 days in the rehabilitation centre (see Table 1 for details).
The KROSS tool showed a sensitivity of 98% (95% CI 0.88–0.99) and a specificity of 83% (95% CI 0.64–0.93) for identifying a vision problem (see Table 2).
Further, the positive predictive value (PPV) was 0.91 and the negative predictive value (NPV) was 0.95. The level of agreement between the KROSS tester and the KROSS reference test was excellent (93% observer agreement, Gwet’s AC1: 0.87, Kappa: 0.83) for the main outcome, to identify a vision problem.
Table 2 shows that the observer agreement scores were high (82–100%), and most individual vision problem items had excellent or substantial agreement (Gwet’s AC1: 0.67–0.98; kappa: 0.4–0.85). For identifying reading problems, the kappa of 0.63 indicates substantial agreement; however, an AC1 of 0.93 and an observer agreement of 94% suggest that this variable had a skewed distribution and therefore an artificially low kappa coefficient.
The same pattern was seen for identifying strabismus (eye deviation), with an excellent AC1 of 0.95, an observer agreement of 96% and a substantial kappa of 0.64. For the visual field and motility assessments, the kappa coefficients were moderate (0.53 and 0.4, respectively) while the AC1 values were excellent (> 0.80) and the observer agreements high (> 88%), again indicating that the kappa coefficient underestimated agreement. For central visual field loss, Facial Amsler showed complete observer agreement (100%), identifying five persons (7%) with central visual field loss, and finger counting had 99% agreement, with one false negative in addition to the four persons (6%) who were correctly identified. However, care should be taken when generalizing the diagnostic scores in Table 2, as complete agreement between two testers is theoretically possible but not plausible in real clinical practice.
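The divergence between kappa and AC1 for low-prevalence items can be illustrated numerically. For reading problems, a 2×2 table consistent with the reported figures (6 positives per rater out of 67, 94% agreement; the exact counts are an illustrative reconstruction) reproduces the kappa/AC1 pair discussed above:

```python
def chance_corrected(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n
    pe_k = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # kappa's chance term
    pi = ((a + b) + (a + c)) / (2 * n)
    pe_g = 2 * pi * (1 - pi)                               # AC1's chance term
    return (po - pe_k) / (1 - pe_k), (po - pe_g) / (1 - pe_g)

# 4 agreed positives, 2 + 2 disagreements, 59 agreed negatives (illustrative):
kappa, ac1 = chance_corrected(4, 2, 2, 59)
print(round(kappa, 2), round(ac1, 2))  # 0.63 0.93
# With only ~9% prevalence, kappa's chance term is already ~0.84, so the same
# 94% raw agreement yields a much lower kappa than AC1.
```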
Table 2 also shows that the sensitivity scores for identifying individual visual problem items ranged from 40 to 94%; assessment of impaired visual acuity, reading, and visual inattention were all excellent (> 80%). The sensitivity for identifying a motility problem was the only one below 60%, indicating that the healthcare professionals without formal vision competence did not identify all eye movement problems. Specificity scores were high, ranging from 70 to 100% across the individual visual problem items; reading, visual field confrontation, eye deviation, motility and visual neglect were all above 90%.
The lowest specificity score was for overall visual acuity (70%), where 8 persons were wrongly identified as having impaired visual acuity. Visual acuity was assessed with six acuity items (2 × 3: each eye separately, at three different letter sizes), and Table 3 shows that the level of agreement varied between letter sizes. Across all 402 eyes measured, the level of agreement was excellent or substantial (AC1: 0.81/kappa: 0.7), with good sensitivity (81%) and excellent specificity (90%). Observer agreement dropped, and false positives and negatives increased, with declining letter size. For the two largest letter sizes (decimal acuity 0.3 and 0.5), the levels of agreement were excellent or substantial (AC1 > 0.8/kappa > 0.7); the lowest agreement was for the smallest letter size (AC1: 0.63/kappa: 0.63).
2.1 Identified visual problems and symptoms
Visual problems were identified in 43 (64.2%) patients overall by the KROSS reference; see Table 4 for details. Most patients (89.5%) used glasses for both distance and near viewing, but 22 (32.8%) did not have them available for testing. For individual vision items, problems with visual acuity were identified in 40 (59.7%) patients and reading problems in 6 (9%). Peripheral visual field defects were identified in 10 (14.9%) patients and central visual field defects in five (7.5%), three of whom also had peripheral field loss. Eighteen (26.9%) patients failed the heart cancellation assessment, 8 (11.9%) of whom failed on either the left or the right side. Four patients (5.9%) were identified with strabismus, and ten (14.9%) with an ocular motility problem.
Thirty-three (49.9%) patients reported known eye diseases, cataract being the most common (20 patients, 29.9%). Three patients reported known low vision in one eye due to amblyopia, trauma, or age-related macular degeneration. Symptoms of a vision problem were reported by 30 (44.8%) patients. When asked specifically, 23 (34.4%) patients reported problems with distance or near vision, 14 (20.1%) acknowledged visual field or mobility problems, and 14 (20.1%) reported problems with double vision or depth perception.
The KROSS tool also includes items on functional observations reported by a healthcare professional or family member; on this basis, a vision problem was suspected in 12 (17.9%) patients overall. Healthcare professionals further suspected visual field problems in three (4.5%) patients, based on observation of mobility, and strabismus in five (7.5%) patients. Of the 43 patients identified with a vision problem, 40 were referred for further follow-up: the majority (39 patients) to an optometrist, one to an ophthalmologist, and five to either an optometrist, ophthalmologist, or orthoptist.
3 Discussion
The results show that the KROSS tool had high sensitivity and specificity for identifying a vision problem after stroke. The analyses also demonstrated a high degree of agreement between healthcare professionals without formal vision competence and the experienced KROSS reference specialists/optometrists when using the KROSS tool. This indicates that healthcare professionals without formal vision competence can use the KROSS tool to identify a vision problem, confirming results from previous studies on stroke patients using similar basic screening tools, such as the VISA tool [23, 38]. The KROSS tool’s sensitivity and specificity were 98% and 83%, respectively, and there was substantial inter-rater agreement for identifying the presence or absence of a vision problem. For most of the individual visual problem items, the KROSS tool also had high sensitivity and specificity, and excellent or substantial inter-rater agreement, in line with the results for the VISA tool [23, 38]. The lowest sensitivity and agreement scores were found for identification of peripheral visual field defects using confrontation testing and for eye movement motility. It is well known that the sensitivity of peripheral visual field confrontation is low [39]; our study showed that the KROSS testers were well within the expected sensitivity of the test itself. Others have also shown that small eye movement errors are difficult for healthcare professionals without formal vision competence to identify [23, 38].
Regarding visual acuity, the agreement between the KROSS testers and the KROSS reference was highest for the largest letter size and less accurate for the smallest letters. The routine of performing a visual acuity assessment was completely new to the KROSS testers, and the differences in agreement for small letters may be explained by their lack of experience with the test. When assessing visual acuity, many patients tend to be cautious and only read letters they can easily see, which is often well above their acuity threshold [30]. Encouraging patients to guess the letters when they stop reading can, in many cases, improve their visual acuity score; this approach is well known among vision specialists to reduce underestimation of visual acuity [40, 41]. The instructions were clear that three of the five letters needed to be correct, but the point of asking the person to guess should perhaps be stated more explicitly in the KROSS assessment instructions.
The population in this study was recruited from both an acute stroke unit and a rehabilitation centre, with the time from stroke onset to the KROSS assessment differing between the two sites. Our study showed that the KROSS tool was useful for identifying vision problems irrespective of when the assessment was conducted. This complements studies from acute stroke settings [38, 42] and supports the clinical usefulness of integrating the KROSS tool into stroke rehabilitation in the municipal healthcare services [25]. However, we suggest that the KROSS assessment be offered as soon as the patient is able to undertake it, so that the rehabilitation pathway, including interventions for vision rehabilitation, can be planned in the best way. Early detection of impaired vision post stroke is the first step towards improving visual care after stroke and reducing the negative impacts of vision impairments. In line with international studies [19, 38], the current study describes the effectiveness of the KROSS tool for an interdisciplinary group of healthcare professionals without formal vision competence. A vision assessment tool like KROSS may build confidence in assessing vision and add objective findings to clinical observations of functional vision in stroke care [16, 25, 43].
A strength of this study is that it validates KROSS, the first Norwegian vision assessment tool designed for use after stroke. The KROSS tool was developed using a knowledge translation approach by an interdisciplinary team including an optometrist, ophthalmologist, neurologist, nurse and patients’ organizations. These stakeholders understood the importance of actively involving and collaborating with the healthcare professionals who intended to incorporate the KROSS tool into their standard clinical practice [16, 24, 25]. Another strength is the high inter-rater agreement we demonstrated, which reflects the potential for healthcare professionals without formal vision competence to develop the skills needed to identify vision problems in stroke survivors. It is also an advantage that the KROSS tool is efficient regardless of where or when the assessment is conducted, whether in the acute stroke unit, where the length of stay is short and the assessment is close to stroke onset, or in rehabilitation services, which have access to stroke survivors over a longer period, long after stroke onset. This study thus adds to international studies in acute stroke settings by demonstrating the tool’s use in rehabilitation [38, 42].
There are also limitations to consider. Some patients from the rehabilitation centre were recruited many months after their stroke, and factors other than the stroke could have influenced their vision. Nevertheless, many of these participants were identified with visual problems that matter for their rehabilitation process and everyday life; even problems caused by factors other than the stroke still need to be addressed. Another limitation is that using reading as a measure of visual function may be challenging for patients with aphasia, dyslexia or cognitive impairment. To minimize these problems, the KROSS tool allows different types of reading material to be used, and any problem with reading requires further examination. A further potential limitation is that the participants had low NIHSS and high Barthel scores, indicating a relatively mild burden of post-stroke symptoms. Despite this, a key strength of our study is that it highlights the importance of vision assessment even in patients with relatively mild stroke symptoms, as the majority of participants were found to have some form of vision impairment.
4 Conclusion
This validation study demonstrates that the KROSS tool can enhance the integration of vision assessment into stroke healthcare services. The KROSS tool exhibits high sensitivity and specificity, along with excellent reliability, demonstrating its effectiveness in identifying vision problems. The scores for most of the individual KROSS items were also excellent or substantial. This indicates that the KROSS tool can reliably be used by healthcare professionals without formal vision competence to identify vision problems. The ability of the KROSS tool to detect vision problems in many stroke survivors, including those who may not report visual symptoms, is a critical finding. This not only highlights the tool’s sensitivity, but also underscores the potential for underdiagnosed vision problems among stroke survivors, and strongly advocates for the integration of structured vision assessments like KROSS into standard stroke healthcare delivery. In this way, vision problems can be identified and addressed promptly, even in patients who may not be aware of, or able to articulate, their symptoms, which could significantly improve the quality of care and overall recovery outcomes. The use of KROSS or similar tools should therefore be considered an essential component of comprehensive stroke care.
Data availability
The dataset analysed during the current study is available from the corresponding author on reasonable request.
Abbreviations
- AC1: Gwet’s agreement coefficient
- CI: Confidence interval
- FN: False negative
- FP: False positive
- KROSS: Competence, Rehabilitation of Sight after Stroke
- logMAR: Logarithm of the minimum angle of resolution
- NIHSS: National Institutes of Health Stroke Scale
- NPV: Negative predictive value
- Obs. agr.: Observer agreement
- PPV: Positive predictive value
- VA: Decimal visual acuity
- VI: Vision impairment
References
Feigin VL, Forouzanfar MH, Krishnamurthi R, Mensah GA, Connor M, Bennett DA, Moran AE, Sacco RL, Anderson L, Truelsen T, et al. Global and regional burden of stroke during 1990–2010: findings from the Global Burden of Disease Study 2010. Lancet. 2014;383(9913):245–54.
Donkor ES. Stroke in the 21st century: a snapshot of the burden, epidemiology, and quality of life. Stroke Res Treat. 2018;2018:3238165.
Rowe FJ, Hepworth L, Howard C, Hanna K, Cheyne C, Currie J. High incidence and prevalence of visual problems after acute stroke: an epidemiology study with implications for service delivery. PLoS ONE. 2019;14(3):e0213035–e0213035.
Hepworth L, Rowe F, Walker M, Rockliffe J, Noonan C, Howard C, Currie J. Post-stroke visual impairment: a systematic literature review of types and recovery of visual conditions. Ophthal Res. 2015;5(1):1–43.
Sand KM, Wilhelmsen G, Naess H, Midelfart A, Thomassen L, Hoff JM. Vision problems in ischaemic stroke patients: effects on life quality and disability. Eur J Neurol. 2016;23(Suppl 1):1–7.
Campbell GB, Matthews JT. An integrative review of factors associated with falls during post-stroke rehabilitation. J Nurs Scholarsh. 2010;42(4):395–404.
Pedersen SG, Løkholm M, Friborg O, Halvorsen MB, Kirkevold M, Heiberg G, Anke A. Visual problems are associated with long-term fatigue after stroke. J Rehabil Med. 2023;55:jrm00374.
Falkenberg HK, Mathisen TS, Ormstad H, Eilertsen G. “Invisible” visual impairments. A qualitative study of stroke survivors’ experience of vision symptoms, health services and impact of visual impairments. BMC Health Serv Res. 2020;20(1):302.
Rowe FJ. Stroke survivors’ views and experiences on impact of visual impairment. Brain Behav. 2017;7(9): e00778.
Smith TM, Pappadis MR, Krishnan S, Reistetter TA. Stroke survivor and caregiver perspectives on post-stroke visual concerns and long-term consequences. Behav Neurol. 2018;2018:1463429.
Berthold-Lindstedt M, Ygge J, Borg K. Visual dysfunction is underestimated in patients with acquired brain injury. J Rehabil Med. 2017;49(4):327–32.
Hepworth LR, Howard C, Hanna KL, Currie J, Rowe FJ. “Eye” don’t see: an analysis of visual symptom reporting by stroke survivors from a large epidemiology study. J Stroke Cerebrovasc Dis. 2021;30(6): 105759.
Sand KM, Naess H, Nilsen RM, Thomassen L, Hoff JM. Less thrombolysis in posterior circulation infarction—a necessary evil? Acta Neurol Scand. 2017;135(5):546–52.
Berthold Lindstedt M, Johansson J, Ygge J, Borg K. Vision-related symptoms after acquired brain injury and the association with mental fatigue, anxiety and depression. J Rehabil Med. 2019;51(7):499–505.
Berthold-Lindstedt M, Johansson J, Ygge J, Borg K. How to assess visual function in acquired brain injury—asking is not enough. Brain Behav. 2021;11(2): e01958.
Mathisen TS, Eilertsen G, Ormstad H, Falkenberg HK. ‘If we don’t assess the patient’s vision, we risk starting at the wrong end’: a qualitative evaluation of a stroke service knowledge translation project. BMC Health Serv Res. 2022;22(1):351.
Bould J, Hepworth L, Howard C, Currie J, Rowe F. The impact of visual impairment on completion of cognitive screening assessments: a post-hoc analysis from the IVIS study. Br Ir Orthopt J. 2022;18(1):65–75.
Bennett CR, Bex PJ, Bauer CM, Merabet LB. The assessment of visual function and functional vision. Semin Pediatr Neurol. 2019;31:30–40.
Roberts PS, Wertheimer J, Ouellette D, Hreha K, Watters K, Fielder J, Graf MJP, Weden KM, Rizzo JR. Feasibility and clinician perspectives of the visual symptoms and signs screen: a multisite pilot study. Top Geriatr Rehabil. 2024;40(1):69–76.
Roberts PS, Rizzo JR, Hreha K, Wertheimer J, Kaldenberg J, Hironaka D, Riggs R, Colenbrander A. A conceptual model for vision rehabilitation. J Rehabil Res Dev. 2016;53(6):693–704.
Rowe FJ, Hepworth L, Kirkham J. Development of core outcome sets for vision screening and assessment in stroke: a Delphi and consensus study. BMJ Open. 2019;9(9): e029578.
Hanna KL, Hepworth LR, Rowe F. Screening methods for post-stroke visual impairment: a systematic review. Disab Rehab. 2017;39(25):2531–43.
Rowe FJ, Hepworth L, Hanna K, Howard C. Visual impairment screening assessment (VISA) tool: pilot validation. BMJ Open. 2018;8(3): e020562.
Falkenberg HK, Langeggen I, Ormstad HK, Eilertsen G. Improving outcome in stroke survivors with visual problems: Knowledge translation in a multidisciplinary stroke unit intervention study. Optom Vis Sci. 2016;93: E-abstract 165147.
Mathisen TS, Eilertsen G, Ormstad H, Falkenberg HK. Barriers and facilitators to the implementation of a structured visual assessment after stroke in municipal health care services. BMC Health Serv Res. 2021;21(1):497.
Skogseth-Stephani S, Varmdal T, Halle KK, Bjerkvik TF, Krokan TGH, Indredavik B. Norwegian Stroke Register: annual report for 2023. arsrapport-norsk-hjerneslagregister-2023.pdf (stolav.no).
Nachtnebel D, Falkenberg HK. Validation of the Norwegian International Reading Speed Texts (IReST) in a sample of adult readers with normal and low vision. Scand J Optom Vis Sci. 2024;17(1):2802.
Anderson AJ, Shuey NH, Wall M. Rapid confrontation screening for peripheral visual field defects and extinction. Clin Exp Optom. 2009;92(1):45–8.
Elliott DB, North I, Flanagan J. Confrontation visual field tests. Ophthalmic Physiol Opt. 1997;17:S17–24.
Elliott DB. Clinical procedures in primary eye care. 5th ed. Amsterdam: Elsevier; 2021.
Mancuso M, Demeyere N, Abbruzzese L, Damora A, Varalta V, Pirrotta F, Antonucci G, Matano A, Caputo M, Caruso MG, et al. Using the Oxford cognitive screen to detect cognitive impairment in stroke patients: a comparison with the mini-mental state examination. Front Neurol. 2018;9:101.
Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.
Shoukri MM, Asyali MH, Donner A. Sample size requirements for the design of reliability study: review and new results. Stat Methods Med Res. 2004;13(4):251–71.
Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Measur. 1960;20(1):37–46.
Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol. 2008;61(1):29–48.
Gwet KL. Handbook of inter-rater reliability: the definitive guide to measuring the extent of agreement among raters. 4th ed. Gaithersburg, MD: Advanced Analytics, LLC; 2014.
Fleiss JL, Levin B, Paik MC. Statistical methods for rates and proportions. 3rd ed. Hoboken: Wiley; 2003.
Rowe FJ, Hepworth L, Howard C, Bruce A, Smerdon V, Payne T, Jimmieson P, Burnside G. Vision screening assessment (VISA) tool: diagnostic accuracy validation of a novel screening tool in detecting visual impairment among stroke survivors. BMJ Open. 2020;10(6): e033639.
Kerr NM, Chew SSL, Eady EK, Gamble GD, Danesh-Meyer HV. Diagnostic accuracy of confrontation visual field tests. Neurology. 2010;74(15):1184–90.
Carkeet A. Modeling logMAR visual acuity scores: effects of termination rules and alternative forced-choice options. Optom Vis Sci. 2001;78(7):529–38.
Elliott DB, Whitaker D. Clinical contrast sensitivity chart evaluation. Ophthalmic Physiol Opt. 1992;12(3):275–80.
Rowe FJ, Hepworth LR, Howard C, Hanna KL, Helliwell B. Developing a stroke-vision care pathway: a consensus study. Disabil Rehabil. 2020. https://doi.org/10.1080/09638288.2020.1768302.
Gasque H, Morrow C, Grattan E, Woodbury M. Understanding occupational therapists’ knowledge and confidence when assessing for spatial neglect: a special issue review. Am J Occup Ther. 2024;78(2):7802180140.
Funding
Open access funding provided by University Of South-Eastern Norway. This study was financed by the National Centre for Optics, Vision and Eyecare, Department of Optometry, Radiography and Lighting Design, University of South-Eastern Norway (project hours and equipment), Kongsberg Hospital and Vikersund Rehabilitation Centre (project hours and equipment).
Author information
Contributions
Study conceptualization: HKF, GE; study design: HKF, TM, RMK, MR, IL; data collection: TM, IL, RMK, MR, HKF; data analysis and initial draft of the manuscript: HKF; interpretation, revision, and editing: HKF, TM. All authors read, critically revised, and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Participation was voluntary, and all patients provided written informed consent. There was no discomfort and no anticipated risk of harm to participants in this project. The project followed the tenets of the Declaration of Helsinki (World Medical Association, 2013). The need for ethics approval was waived by the Regional Committees for Medical and Health Research Ethics (REC) in South East Norway (480709). The project was approved by the Norwegian Agency for Shared Services in Education and Research (SIKT; 610040) and the Vestre Viken Hospital Trust data protection officer (21/10674).
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Falkenberg, H.K., Langeggen, I., Munthe-Kaas, R. et al. Validation of the interdisciplinary Norwegian vision assessment tool KROSS in stroke patients admitted to hospital or rehabilitation services. Discov Health Systems 3, 57 (2024). https://doi.org/10.1007/s44250-024-00123-4