With the increased accessibility provided by AI integration into systematic DR screening programs, the repetitive nature of retinal exams may act as a failsafe, identifying disease that was previously missed. Photo: Julie Torbit, OD.
Diabetes has increased exponentially worldwide in recent decades, with 2021 projections from the International Diabetes Federation estimating that 537 million adults aged 20 to 79, or one in 10, live with the disease. This translates into roughly one billion eyes needing screening for diabetic retinopathy (DR) at least once annually, or around three million eyes daily. The number of images that must be graded in a timely manner has placed an overwhelming burden on human graders, spurring research into artificial intelligence (AI) tools that could help.
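The scale of that screening burden follows from simple arithmetic; the short Python sketch below reproduces the figures quoted above under the assumption of two eyes per person and one screening per eye per year.

```python
# Back-of-the-envelope check of the screening volume described above.
# Assumes two eyes per person and one screening per eye per year;
# the prevalence figure is the IDF 2021 estimate quoted in the article.
adults_with_diabetes = 537_000_000   # adults aged 20-79 living with diabetes
eyes_per_person = 2

eyes_per_year = adults_with_diabetes * eyes_per_person
eyes_per_day = eyes_per_year / 365

print(f"Eyes to screen annually: ~{eyes_per_year / 1e9:.2f} billion")
print(f"Eyes to screen daily: ~{eyes_per_day / 1e6:.1f} million")
# Prints roughly 1.07 billion eyes per year, or about 2.9 million per day.
```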
One new study, published in Ophthalmology Science, prospectively evaluated the performance of mydriatic handheld retinal imaging assessed by point-of-care AI, compared with grading by a centralized reading center, for identifying DR and diabetic macular edema (DME). A total of 5,585 eyes from 2,793 patients with diabetes were evaluated.
By reading center evaluation, DR severity in the sample was broken down as:
- no DR—67.3%
- mild nonproliferative DR—9.7%
- moderate nonproliferative DR—8.6%
- severe nonproliferative DR—4.8%
- proliferative DR—3.8%
- ungradable images—5.8%
DME by reading center evaluation was as follows:
- no DME, 80.4%
- non–center-involving DME, 7.7%
- center-involving DME, 4.4%
- ungradable images, 7.5%
Referable DR was present in 25.3% of eyes and vision-threatening DR in 17.5%. Ungradable images were twice as likely with AI, at 15.4%; however, there was substantial agreement between AI and the reading center for referable DR and moderate agreement for vision-threatening DR.
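The article characterizes agreement as "substantial" and "moderate," descriptors conventionally tied to Cohen's kappa bands (0.41 to 0.60 moderate, 0.61 to 0.80 substantial, per Landis and Koch). The study's actual statistic and counts are not given here, so the sketch below is only a minimal illustration of how kappa would be computed from a hypothetical AI-versus-reading-center agreement table.

```python
# Minimal sketch of Cohen's kappa for inter-grader agreement.
# The 2x2 counts are hypothetical and not taken from the study.

def cohens_kappa(table):
    """table[i][j] = number of eyes graded category i by AI and j by the reading center."""
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    # Chance agreement expected from the row and column marginals.
    expected = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

# Rows: AI (referable, not referable); columns: reading center (same order).
hypothetical_counts = [[1200, 180],
                       [230, 3600]]
print(f"kappa = {cohens_kappa(hypothetical_counts):.2f}")  # ~0.80 for these made-up counts
```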
Sensitivity/specificity of AI evaluation was 0.86/0.86 for referable DR and 0.92/0.80 for vision-threatening DR. Based on these rates, the AI meets the FDA thresholds of 85% sensitivity and 82.5% specificity for referable DR, but its 80% specificity falls short of the threshold for vision-threatening DR.
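The underlying counts are not reprinted here, but sensitivity and specificity follow directly from a two-by-two comparison against the reading-center grades. As a minimal sketch, the code below uses hypothetical counts (not the study's) to show how the threshold check works.

```python
# Sensitivity/specificity from a 2x2 confusion matrix versus the reading center,
# checked against the 85% sensitivity and 82.5% specificity thresholds cited above.
# The counts are hypothetical, chosen only to illustrate the calculation.

def sensitivity(tp, fn):
    return tp / (tp + fn)   # proportion of referable eyes correctly flagged

def specificity(tn, fp):
    return tn / (tn + fp)   # proportion of non-referable eyes correctly passed

tp, fn, tn, fp = 1220, 198, 3370, 550   # hypothetical counts for referable DR

sens, spec = sensitivity(tp, fn), specificity(tn, fp)
meets_thresholds = sens >= 0.85 and spec >= 0.825
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, meets thresholds: {meets_thresholds}")
```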
The study authors elaborate on their findings, speculating that one potential reason for the AI's high ungradable image rate is that the algorithm was trained on a different camera type than the one used in the study: a tabletop retinal camera supplied the training images, whereas a handheld device was used here. While the authors are hopeful this AI technology can be used in the future, they caution that the high failure rate will need to be addressed in future versions of the software, since it affects the efficiency and acceptability of AI screening. To do so, the algorithm may need to be optimized on a training set of images comparable to those of the intended population.
Despite this limitation, the AI proved effective for one of the two main categories, which the authors see as a future answer to the ever-increasing burden on human graders and clinicians, who may struggle to grade all images captured within a day and face backlogs as a result. The AI refers only patients at risk of losing their sight for an in-person consult, relieving part of that burden.
As the authors explained in their paper, the use of point-of-care AI and handheld imaging as a DR screening tool “has the potential to decrease the burden on reading centers especially in low-income settings or geographically isolated communities. Reliable AI assessment of DR at point of care with real-time output can guide clinical decision-making and referral recommendations.”
Salongcay RP, Aquino LAC, Alog GP, et al. Accuracy of integrated artificial intelligence grading using handheld retinal imaging in a Community Diabetic Eye Screening Program. Ophthalmol Sci. December 14, 2023. [Epub ahead of print].