
RimNet: A Deep Neural Network Pipeline for Automated Identification of the Optic Disc Rim

Open Access · Published: November 03, 2022 · DOI: https://doi.org/10.1016/j.xops.2022.100244

      Abstract

      Purpose

Accurate neural rim measurement based on optic disc imaging is important for glaucoma severity grading and is often performed by trained glaucoma specialists. We aim to improve upon existing automated tools by building a fully automated system (RimNet) for direct rim identification in glaucomatous eyes and measurement of the minimum rim-to-disc ratio (mRDR) in intact rims, the angle of absent rim width (ARW) in incomplete rims, and the rim-to-disc area ratio (RDAR), with the goal of optic disc damage grading.

      Design

      Retrospective cross-sectional study.

      Participants

1208 optic disc photographs with evidence of glaucomatous optic nerve damage from 1021 eyes of 903 patients with any form of primary glaucoma were included. Mean age was 63.7 (±14.9) years. The average visual field mean deviation was -8.03 (±8.59) dB.

      Methods

      The images were required to be of adequate quality, have signs of glaucomatous damage, and be free of significant concurrent pathology as determined independently by glaucoma specialists. Rim and optic cup masks for each image were manually delineated by glaucoma specialists. The database was randomly split into 80/10/10 for training, validation, and testing, respectively. RimNet consists of a deep learning rim and cup segmentation model, a computer vision mRDR measurement tool for intact rims, and an ARW measurement tool for incomplete rims. mRDR is calculated at the thinnest rim section while ARW is calculated in regions of total rim loss. The rim-to-disc area ratio (RDAR) was also calculated. Evaluation on the Drishti-GS dataset provided external validation (Sivaswamy 2015).

      Main Outcome Measures

      Median Absolute Error (MAE) between glaucoma specialists and RimNet for mRDR and ARW.

      Results

On the test set, RimNet achieved an mRDR MAE of 0.03 (0.05), an ARW MAE of 31 (89) degrees, and an RDAR MAE of 0.09 (0.10). On the Drishti-GS dataset, an mRDR MAE of 0.03 (0.04) and an RDAR MAE of 0.09 (0.10) were observed.

      Conclusions

      RimNet demonstrated acceptably accurate rim segmentation and mRDR and ARW measurements. The fully automated algorithm presented here would be a valuable component in an automated mRDR-based glaucoma grading system. Further improvements could be made by improving identification and segmentation performance on incomplete rims and expanding the number and variety of glaucomatous training images.

      Key Words

      Abbreviations/Acronyms:

MAE (Median Absolute Error), mRDR (minimum Rim-to-Disc Ratio), CDR (Cup-to-Disc Ratio), ISNT (Inferior>superior>nasal>temporal), IoU (Intersection over Union), RimIoU (Rim Intersection over Union), CupIoU (IoU of the optic cup), DiscIoU (IoU of the optic disc), CupDice (Dice score of the optic cup), DiscDice (Dice score of the optic disc), DDLS (Disc Damage Likelihood Scale), ARW (Absent Rim Width), RDAR (Rim-to-Disc Area Ratio)

      Introduction

      Glaucoma is the leading cause of irreversible blindness and the second leading cause of blindness worldwide

      Giangiacomo A, Coleman AL. The Epidemiology of Glaucoma. In: Glaucoma. Berlin, Heidelberg: Springer Berlin Heidelberg; :13–21.

      . Roughly half of glaucoma cases are undiagnosed according to population-based studies

      Michelson G, Hornegger J, Wärntges S, Lausen B. The Papilla as Screening Parameter for Early Diagnosis of Glaucoma. Dtsch Arztebl Int 2008;105:583. Available at: /pmc/articles/PMC2680559/ [Accessed March 13, 2022].

      ,
Shaikh Y, Yu F, Coleman AL. Burden of undetected and untreated glaucoma in the United States.
      . Early treatment preserves patient quality of life and reduces disease burden

      Cristina Leske M, Heijl A, Hussein M, et al. Factors for Glaucoma Progression and the Effect of Treatment The Early Manifest Glaucoma Trial. 2003. Available at: https://jamanetwork.com/.

      . Therefore, identification of early glaucoma is key to preventative care.
Glaucoma diagnosis and grading are performed, in part, by evaluation of the neuroretinal rim of the optic disc. Metrics often include the cup-to-disc ratio (CDR), the minimum rim-to-disc ratio (mRDR), and the ISNT rule, which compares the regional widths of the neuroretinal rim

      Spaeth GL, Henderer J, Liu C, et al. The disc damage likelihood scale: reproducibility of a new method of estimating the amount of optic nerve damage caused by glaucoma. Trans Am Ophthalmol Soc 2002;100:181. Available at: /pmc/articles/PMC1358961/?report=abstract [Accessed March 12, 2022].

      . Recent studies have shown the advantages of mRDR compared to ISNT and CDR for glaucoma classification accuracy

      Kumar JRH, Seelamantula CS, Kamath YS, Jampala R. Rim-to-Disc Ratio Outperforms Cup-to-Disc Ratio for Glaucoma Prescreening. Scientific Reports 2019 9:1 2019;9:1–9. Available at: https://www.nature.com/articles/s41598-019-43385-2 [Accessed March 13, 2022].

      .
      The mRDR cannot adequately account for the degree of damage in optic discs with localized rim loss where the neuroretinal rim is noncontinuous or ‘incomplete’. A solution can be found in the Disc Damage Likelihood Scale (DDLS) proposed by Spaeth et al

      Spaeth GL, Henderer J, Liu C, et al. The disc damage likelihood scale: reproducibility of a new method of estimating the amount of optic nerve damage caused by glaucoma. Trans Am Ophthalmol Soc 2002;100:181. Available at: /pmc/articles/PMC1358961/?report=abstract [Accessed March 12, 2022].

      . DDLS accounts for incomplete rims by measuring the angle for which a rim is absent. This is called the absent rim width (ARW). Additionally, the scale accounts for disc size which affects the significance of the mRDR or ARW

      Spaeth GL, Henderer J, Liu C, et al. The disc damage likelihood scale: reproducibility of a new method of estimating the amount of optic nerve damage caused by glaucoma. Trans Am Ophthalmol Soc 2002;100:181. Available at: /pmc/articles/PMC1358961/?report=abstract [Accessed March 12, 2022].

      . It is commonly accepted and has been incorporated into eye health guidelines for optometrists and ophthalmologists

      Formichella P, Annoh R, Zeri F, Tatham AJ. The role of the disc damage likelihood scale in glaucoma detection by community optometrists. Ophthalmic and Physiological Optics 2020;40:752–759.

      .
Tong W, Romero M, Lim V, et al. Reliability of Graders and Comparison with an Automated Algorithm for Vertical Cup-Disc Ratio Grading in Fundus Photographs.
DDLS is limited as a diagnostic tool by the need for expert time to accurately grade images. Automated high-efficacy DDLS grading could offer a powerful screening method.
      In recent years, a confluence of several factors has led to efforts in automated glaucoma diagnosis and grading. First, studies have shown that automated algorithms can offer more consistent and reliable grading than human graders
Tong W, Romero M, Lim V, et al. Reliability of Graders and Comparison with an Automated Algorithm for Vertical Cup-Disc Ratio Grading in Fundus Photographs.
      . Second, there has been a rapid advancement in image segmentation, image processing, and deep learning neural networks. In other fields, several neural networks outperformed human graders in image classification tasks
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey.
      . This could allow for unprecedented accuracy in optic rim segmentation and glaucoma grading

      Joshua AO, Nelwamondo F v., Mabuza-Hocquet G. Segmentation of Optic Cup and Disc for Diagnosis of Glaucoma on Retinal Fundus Images. In: Proceedings - 2019 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa, SAUPEC/RobMech/PRASA 2019. Institute of Electrical and Electronics Engineers Inc.; 2019:183–187.

. Finally, the optic disc exhibits characteristic alterations in glaucomatous patients, making it a prime candidate for automated segmentation and analysis. Together, these factors make automated glaucoma diagnosis and grading a possibility.
      While DDLS also requires disc size analysis, automated rim segmentation with mRDR calculation for intact neuroretinal rims and ARW calculation for incomplete neuroretinal rims offers a step towards creating an efficacious, high-throughput diagnostic system for glaucomatous disc damage. Such a segmentation algorithm would need to be broadly applicable. Additionally, it would require an expansive learning capacity that could be applied to a variety of fundus images taken with different imaging modalities and with concurrent pathologies and normal variations. Convolutional neural networks offer such an approach

      Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 1989 2:4 1989;2:303–314. Available at: https://link.springer.com/article/10.1007/BF02551274 [Accessed March 20, 2022].

      .
      The goal of this paper is to present a novel convolutional neural network algorithm for neuroretinal rim segmentation, automated mRDR calculation for intact rims, and ARW calculation for incomplete neuroretinal rims. This neural network algorithm offers an important step towards building an automated DDLS screening tool.

      Methods

      The study adhered to the tenets of the Declaration of Helsinki, was approved by UCLA's Human Research Protection Program, and conformed to the Health Insurance Portability and Accountability Act (HIPAA) policies.

      Dataset

      Optic disc photographs were taken from the UCLA Stein Eye Glaucoma database. The images were of varied magnifications and taken from slides and three different digital fundus cameras. All cameras were visible light cameras. No infra-red, laser scanning, red-free, autofluorescence, or hand-held smartphone-based cameras were used. Slide films were scanned and digitized at a third-party location.
The enrolled images met the following inclusion and exclusion criteria as determined by two board-certified glaucoma specialists. Inclusion criteria were: (i) evidence of glaucomatous damage in the posterior pole; (ii) images in focus, with discernible posterior pole and vasculature details. Exclusion criteria were concurrent non-glaucoma diseases, including optic neuritis, optic disc neovascularization, and vitreous hemorrhage, that would impair visualization of the posterior pole. It was ensured that the full spectrum of glaucomatous damage, from early-stage intact neuroretinal rims to late-stage incomplete rims, was included while abiding by these criteria. Figure 4 shows the mRDR distributions of the training, validation, and test sets. The neuroretinal rim and optic cup were then manually segmented by one of three glaucoma specialists using a smart tablet and the image editing program GIMP. These masks were used as ground truth. The diagnostic categories for patients are shown in Table 1.
Figure 4. Distribution of mRDRs for the training, validation, and test datasets. For each dataset, a frequency histogram is shown above the corresponding box plot.
Table 1. Demographic data for the dataset: gender, age, and racial distribution by camera type.
| | Slide Images | Digital Camera 1 | Digital Camera 2 | Digital Camera 3 |
Gender Distribution
F | 407 | 119 | 55 | 12
M | 302 | 85 | 44 | 11
Age Distribution
Mean | 60.72 | 67.13 | 72.80 | 66.92
SD | 13.48 | 17.43 | 12.75 | 17.71
Median | 61.87 | 71.06 | 73.91 | 72.37
IQR | 15.79 | 16.33 | 12.86 | 22.54
Min | 9.36 | 6.92 | 16.19 | 17.48
Max | 90.05 | 96.10 | 94.41 | 86.17
Race Distribution
Asian | 90 | 34 | 24 | 2
Black | 63 | 22 | 8 | 1
Hispanic | 66 | 20 | 16 | 6
White | 366 | 100 | 45 | 12
Other | 53 | 5 | 3 | 0
Unknown | 71 | 22 | 3 | 2

      RimNet Model and Hyperparameter Architecture

A deep learning model for rim segmentation was developed as the centerpiece of the RimNet pipeline. The model was developed with Python 3.9.7. Libraries used include TensorFlow 2.6.0, Segmentation Models 1.0.1, Keras Tuner 1.04, OpenCV Python 4.5.3, NumPy 1.19.5, SciPy 1.7.1, and Scikit-learn 0.24.2.
      Optimizing the deep learning model requires a careful choice of model architecture and hyperparameters. The choice of hyperparameters can greatly influence the prediction speed, processing requirements, and accuracy of a neural network model
Elsken T, Metzen JH, Hutter F. Neural Architecture Search: A Survey.
      . These hyperparameters include the decoder, learning rate, optimizer, and loss function as shown in Table 3. The optimal combination of these parameters is task-dependent. Whereas trial and error has been used in the past, newer architecture search techniques allow for the rapid evaluation of combinations of hyperparameters with the goal of optimizing a selected metric
Elsken T, Metzen JH, Hutter F. Neural Architecture Search: A Survey.
      .
Table 3. Hyperparameter search space for RimNet, including the encoders, decoders, loss functions, learning rates, and optimizers. The optimized metric was the intersection over union of the rim.
Hyperparameter | Values
Encoders | MobileNetV2, ResNet34, EfficientNetB0, InceptionV3, ResNet101, VGG16, ResNet50
Decoders | U-Net, FPN, LinkNet, PSPNet
Loss Function | Binary_Crossentropy, Binary_Focal_Loss
Learning Rate | 10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶
Optimizer | Adam, SGD
      To narrow the search space, an encoder of InceptionV3 was chosen based on literature review and computational efficiency. InceptionV3 was first published in 2015, outperforming popular encoders at the time with a fraction of the computation costs

      Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2015;2016-December:2818–2826. Available at: https://arxiv.org/abs/1512.00567v3 [Accessed March 13, 2022].

      . It has previously been used for medical segmentation
Shoaib M, Sayed N. YOLO Object Detector and Inception-V3 Convolutional Neural Network for Improved Brain Tumor Segmentation.
Salama WM, Aly MH. Deep learning in mammography images segmentation and classification: Automated CNN approach.
. Our workstation used NVIDIA RTX 2080 Ti graphics cards. Given this limited computational budget, the selection of InceptionV3 was appropriate.
      Transfer learning with ImageNet weights was used to initialize InceptionV3. No transfer learning was done for the decoder. Augmentations were used including a 20-degree rotation, a 10% vertical shift, a 10% horizontal shift, a horizontal flip, a vertical flip, up to a 30% random crop, a brightness change by +/- 50 units, and a contrast limited adaptive histogram equalization (CLAHE) filter. Image down sampling was completed via a nearest neighbor algorithm. Color information was encoded using RGB channels with 8 bits per channel. The encoder and decoder were coupled using the Segmentation Models 1.0.1 library. The total number of trainable parameters was 29,896,979. No dropout layers were manually added.
      Finally, a random search was performed using the Keras Tuner library
O’Malley Tom, Bursztein Elie, Long, et al.
      . The search parameters included the decoder, loss function, learning rate, and the optimizer. The rim Intersection over Union (IoU) was used as the segmentation metric. The full search space is documented in Table 3.
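The random-search loop itself is simple to state. As a minimal illustration of the technique (not the Keras Tuner implementation used in the paper), the sketch below samples the Table 3 space and keeps the best-scoring configuration; the objective here is a caller-supplied callable, since the real objective, validation rim IoU, requires a full training run:

```python
import random

# Search space from Table 3; the encoder is fixed to InceptionV3 as
# described above, so only these four hyperparameters are sampled.
SEARCH_SPACE = {
    "decoder": ["U-Net", "FPN", "LinkNet", "PSPNet"],
    "loss": ["binary_crossentropy", "binary_focal_loss"],
    "learning_rate": [1e-3, 1e-4, 1e-5, 1e-6],
    "optimizer": ["Adam", "SGD"],
}

def sample_config(rng):
    """Draw one hyperparameter combination uniformly at random."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def random_search(objective, n_trials=64, seed=0):
    """Evaluate n_trials random configurations and keep the best scorer.

    In RimNet the objective would train a model and return its validation
    rim IoU; any callable mapping a config dict to a score works here.
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

With 64 trials over this 224-combination space, a substantial fraction of the space is explored without the cost of an exhaustive grid search.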

      End-to-End mRDR Calculation Procedure

mRDR, ARW, and rim-to-disc area ratio (RDAR) measurements are the final output of RimNet, produced by accurate rim segmentation followed by image analysis. These two steps, along with preprocessing, form the final RimNet framework shown in Figure 1.
Figure 1. RimNet Pipeline. The raw image first undergoes preprocessing, where CLAHE is applied and the image is resized to model specifications. A mask is generated by applying the segmentation model to the preprocessed image. Image analysis then calculates the RDAR; if the rim is intact, the mRDR is calculated, and if the rim is incomplete, the ARW is calculated.
      The optic disc photographs were first resized to 224x224 with nearest neighbor interpolation in order to meet model specifications. A contrast limited adaptive histogram equalization (CLAHE) filter was then applied to highlight distinctive features. The preprocessed image was submitted to the neural network model which generated a segmentation mask of the optic rim and cup. While a segmentation of the optic cup is not directly needed for mRDR or RDAR calculations, it was found that training the model to identify and segment the optic cup improved identification of incomplete rims and ARW calculations. Finally, the rim segmentation mask was resized to the dimensions of the original image to allow for accurate mRDR calculation and submitted to image analysis algorithms.
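The nearest-neighbor resize step can be sketched in plain NumPy. This is an illustrative implementation, not the pipeline's actual code; in practice the resize and the CLAHE filter would come from a library such as OpenCV (e.g. its `createCLAHE` routine):

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbour resize of an HxW or HxWxC image array,
    matching the 224x224 model input size used by RimNet."""
    h, w = img.shape[:2]
    # Source index for each output pixel (floor of the scaled position).
    rows = (np.arange(out_h) * h) // out_h
    cols = (np.arange(out_w) * w) // out_w
    return img[rows[:, None], cols]
```

Nearest-neighbor interpolation keeps mask values binary, which matters when the same resizing is applied to segmentation masks.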
For mRDR, the algorithm first identified the center of the segmented optic cup using OpenCV. Vectors were created from the center of the cup to the boundary points, which were also found using OpenCV; the number of vectors depended on the number of boundary points detected in the segmented rim. The intersection between each vector and the segmented rim was taken as the rim width. The shortest rim width was identified and, through boundary point analysis of the rim, the disc diameter was found. The mRDR was then calculated by dividing the shortest rim width by the disc diameter. The RDAR was calculated by dividing the number of pixels in the segmented rim by the number of pixels in the optic disc.
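The measurement steps above can be sketched with ray casting on binary masks. This is a simplified illustration, not the production OpenCV code: one ray is cast per degree from the cup centroid, the rim width along each ray is the length of rim crossed, and the disc diameter is measured along the thinnest-rim direction (that ray plus its opposite):

```python
import numpy as np

def rim_metrics(rim, cup, n_rays=360, r_step=0.5):
    """mRDR and RDAR from binary rim and cup masks (illustrative sketch)."""
    disc = rim | cup
    ys, xs = np.nonzero(cup)
    cy, cx = ys.mean(), xs.mean()                      # cup centre
    h, w = rim.shape
    radii = np.arange(0.0, float(np.hypot(h, w)), r_step)
    rim_width = np.zeros(n_rays)
    disc_radius = np.zeros(n_rays)
    for k in range(n_rays):                            # one ray per degree
        t = 2.0 * np.pi * k / n_rays
        ry = np.clip(np.round(cy + radii * np.sin(t)).astype(int), 0, h - 1)
        rx = np.clip(np.round(cx + radii * np.cos(t)).astype(int), 0, w - 1)
        rim_width[k] = rim[ry, rx].sum() * r_step      # rim length on this ray
        hits = np.nonzero(disc[ry, rx])[0]
        disc_radius[k] = radii[hits[-1]] if hits.size else 0.0
    k_min = int(rim_width.argmin())
    diameter = disc_radius[k_min] + disc_radius[(k_min + n_rays // 2) % n_rays]
    mrdr = rim_width[k_min] / diameter                 # thinnest rim / disc diameter
    rdar = rim.sum() / disc.sum()                      # rim-to-disc area ratio
    return mrdr, rdar
```

In RimNet, OpenCV boundary-point analysis plays the role of the ray casting here; the sketch trades fidelity for self-containment.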
The absent rim width (ARW) was calculated by first applying contour hierarchies to identify shapes within the rim segmentation. Intact rims have a ‘second shape’ within the segmentation: the elliptical or circular form of the optic cup. Incomplete rims do not. If the rim is classified as broken, 360 radial segments are drawn from the center to the edge of the rim. The radial segments that do not intersect the rim are those within the ‘broken’ portion of the neuroretinal rim. The number of radial segments within the incomplete portion is summed to give the ARW, one radial segment for each degree. If there were two breaks in a neuroretinal rim, the angles were added together and reported as one ARW. Examples can be found in the neuroretinal rim segmentations shown in Figure 2.
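The radial-segment count can be sketched the same way. Note one assumption in this sketch relative to the published pipeline: the break is detected directly by the ray test rather than by contour hierarchies:

```python
import numpy as np

def absent_rim_width(rim, cup, n_rays=360, r_step=0.5):
    """Degrees of absent rim: the number of 1-degree radial segments
    from the cup centre that never intersect the rim mask (sketch).
    Returns 0 for an intact rim."""
    ys, xs = np.nonzero(cup)
    cy, cx = ys.mean(), xs.mean()
    h, w = rim.shape
    radii = np.arange(0.0, float(np.hypot(h, w)), r_step)
    missing = 0
    for k in range(n_rays):
        t = 2.0 * np.pi * k / n_rays
        ry = np.clip(np.round(cy + radii * np.sin(t)).astype(int), 0, h - 1)
        rx = np.clip(np.round(cx + radii * np.cos(t)).astype(int), 0, w - 1)
        if not rim[ry, rx].any():
            missing += 1          # this degree lies within a break
    return missing
```

Because every non-intersecting degree is counted regardless of which break it falls in, multiple breaks are automatically summed into one ARW, matching the convention above.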
Figure 2. Segmentation Results. This figure demonstrates several examples of RimNet segmentation compared to physician segmentation. The left-most column shows the raw image. The middle column overlays the physician segmentation (white) over the raw image. The right-most column overlays the RimNet segmentation (white) over the raw image. In intact rims, the green line shows the diameter and the dark blue line the thinnest rim section. In incomplete rims, the dark blue lines show the edges of the segmentation.

      External Validation

      The Drishti-GS database is a publicly available dataset of retinal images of glaucomatous eyes with manual cup and disc segmentations
Sivaswamy J, Chakravarty A, Datt Joshi G, Abbas Syed T. A Comprehensive Retinal Image Dataset for the Assessment of Glaucoma from the Optic Nerve Head Analysis.
      ,

      Sivaswamy J, Krishnadas SR, Joshi GD, et al. Drishti-GS: Retinal image dataset for optic nerve head(ONH) segmentation. 2014 IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014 2014:53–56. [Accessed March 20, 2022].

. The images were first cropped around the optic disc, as they are provided with a 30-degree field of view. Rim segmentations were then acquired by subtracting the Drishti-GS cup segmentations from the disc segmentations and used as ground truth for validation testing. The database has been used to compare performance between published optic cup and disc segmentation models through metrics such as IoU and the Dice coefficient

      Joshua AO, Nelwamondo F v., Mabuza-Hocquet G. Segmentation of Optic Cup and Disc for Diagnosis of Glaucoma on Retinal Fundus Images. In: Proceedings - 2019 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa, SAUPEC/RobMech/PRASA 2019. Institute of Electrical and Electronics Engineers Inc.; 2019:183–187.

      ,
Zilly J, Buhmann JM, Mahapatra D. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation.

      Sevastopolsky A. Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network. Pattern Recognition and Image Analysis 2017 27:3 2017;27:618–624. Available at: https://link.springer.com/article/10.1134/S1054661817030269 [Accessed March 20, 2022].

      Edupuganti VG, Chawla A, Kale A. Automatic optic disk and cup segmentation of fundus images using deep learning. In: Proceedings - International Conference on Image Processing, ICIP. IEEE Computer Society; 2018:2227–2231.

Al-Bander B, Williams BM, Al-Nuaimy W, et al. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis.
Yu S, Xiao D, Frost S, Kanagasingam Y. Robust optic disc and cup segmentation with deep learning for glaucoma detection.
      . Few investigators have attempted rim segmentations on the Drishti-GS database
Al-Bander B, Williams BM, Al-Nuaimy W, et al. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis.
      . Therefore, RimNet rim segmentations were used to recreate cup segmentations to allow for comparison with other segmentation models. The intersection over union for cup segmentations (CupIoU) and disc segmentations (DiscIoU) were reported. Additionally, the Dice scores for the cup (CupDice) and disc (DiscDice) were reported.
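The mask arithmetic in this section reduces to set operations on binary arrays. A minimal sketch follows; the hole-filling route for recreating the cup is an assumption of this sketch, since the paper does not specify the exact method used:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def rim_from_disc_and_cup(disc, cup):
    """Drishti-GS preparation: rim ground truth = disc minus cup."""
    return disc & ~cup

def cup_from_rim(rim):
    """Recreate a cup mask from an intact rim segmentation by filling
    the ring's interior hole and removing the rim pixels themselves.
    Assumes a closed rim; incomplete rims would need extra handling."""
    return binary_fill_holes(rim) & ~rim
```

For incomplete rims the interior is not a closed hole, which is why the cup-recovery step is restricted here to intact rims.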

      Evaluation Criteria

      The main outcome measures are the median absolute error (MAE) difference between the glaucoma specialists and RimNet for three metrics: mRDR, RDAR, and ARW. A secondary measure is the RimIoU, the IoU of the RimNet rim segmentation compared to that of the glaucoma specialists.
The mRDR, RDAR, and ARW have been explained above. Two measures of segmentation accuracy are also reported: the Intersection over Union and the Dice score. The Intersection over Union (IoU), also known as the Jaccard index, compares the ground truth with the segmentation by reporting the ratio of the intersection area to the union area. The Dice scores for cup and disc segmentations are reported for the Drishti-GS dataset to compare segmentation performance; the Dice score is the ratio of twice the intersection area to the summed areas of the ground truth and the segmentation.
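All three evaluation metrics reduce to a few lines of NumPy; a minimal sketch:

```python
import numpy as np

def iou(a, b):
    """Intersection over Union (Jaccard index) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    """Dice score: twice the intersection over the summed areas."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def median_abs_error(pred, truth):
    """MAE as used in this paper: the *median* absolute error."""
    return float(np.median(np.abs(np.asarray(pred) - np.asarray(truth))))
```

Note that MAE here is the median, not the mean, of the absolute errors, which makes it robust to the occasional badly segmented image.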

      Results

A database of 1208 optic disc photographs of 1021 eyes from 903 glaucoma patients was used for training, validation, and testing in an 80/10/10 split. Both scanned slides and original digital images were represented in the dataset. The average (±SD) age of the patients was 63.7 (±14.9) years, with a 43:57 male-to-female ratio. Full demographics including gender, age, and race are listed in Table 1. The average (±SD) visual field mean deviation (MD) was -8.03 (±8.59) dB (range: -31.64 to 3.59). Of the 1208 optic disc photographs, 340 had incomplete neuroretinal rims. The diagnoses for the patients are listed in Table 2.
Table 2. Glaucoma diagnoses for all 1208 optic disc photographs included in the RimNet dataset.
Diagnosis | Count
Primary Open-Angle Glaucoma | 530
Glaucoma Suspect | 403
Chronic Angle-Closure Glaucoma | 71
Low-Tension Glaucoma | 47
Secondary Open-Angle Glaucoma | 35
Capsular Glaucoma with Pseudoexfoliation | 33
Anatomical Narrow Angle | 27
Glaucoma Secondary to Eye Infection | 24
Pigmentary Glaucoma | 15
Secondary Angle Closure | 11
Congenital Glaucoma | 7
Juvenile Glaucoma | 3
Acute Angle-Closure Glaucoma | 2

      Hyperparameter Architecture

Optimized parameters were found through a random search of 64 model combinations, detailed in Table 3. The combination of the InceptionV3 backbone and the LinkNet architecture proved to be the most accurate

      Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2015;2016-December:2818–2826. Available at: https://arxiv.org/abs/1512.00567v3 [Accessed March 13, 2022].

      ,

      Chaurasia A, Culurciello E. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. 2017 IEEE Visual Communications and Image Processing, VCIP 2017 2017;2018-January:1–4. Available at: http://arxiv.org/abs/1707.03718 [Accessed March 13, 2022].

. LinkNet is a lightweight decoder first published in 2017

      Chaurasia A, Culurciello E. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. 2017 IEEE Visual Communications and Image Processing, VCIP 2017 2017;2018-January:1–4. Available at: http://arxiv.org/abs/1707.03718 [Accessed March 13, 2022].

. Other parameters identified were binary cross-entropy as the loss function, a learning rate of 10⁻³, and the Adam optimizer

      Kingma DP, Ba JL. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings 2014. Available at: https://arxiv.org/abs/1412.6980v9 [Accessed March 13, 2022].

      .

      Segmentation Network Results

The code used to train, run, and evaluate RimNet can be found in our public repository at https://github.com/TylerADavis/GlaucomaML. On the test set, an mRDR MAE (IQR) of 0.03 (0.05) was achieved on intact rims, while an ARW MAE (IQR) of 31 (89) degrees was achieved on incomplete rims. 22 of 34 eyes with incomplete rims were correctly identified as incomplete on segmentation. An RDAR MAE (IQR) of 0.09 (0.10) was achieved on all images. A RimIoU of 0.68 was achieved on intact rims, and a RimIoU of 0.45 on incomplete rims. The results of RimNet are presented in Table 4. Figure 2 demonstrates examples of RimNet segmentation results. To better examine the accuracy of the mRDR and RDAR calculations, the differences between the estimated values and the ground truths were calculated. Bland-Altman plots comparing the estimated and ground truth mRDR and RDAR are shown in Figure 3.
Table 4. RimNet results on the internal test set and the Drishti-GS dataset. ARW cannot be calculated on the Drishti-GS dataset because all of its rims are intact.
Metric | Internal | Drishti-GS
mRDR | 0.04 (±0.03) | 0.04 (±0.04)
ARW | 48.9 (±35.9) | N/A
RDAR | 0.10 (±0.09) | 0.10 (±0.08)
RimIoU | 0.68 | 0.67
N | 120 (87 intact, 33 incomplete) | 101 (101 intact, 0 incomplete)
Figure 3. Bland-Altman plots showing the agreement in mRDR and RDAR between clinicians and RimNet on test images. Red dashed lines indicate 95% confidence limits.
A comparison of RimNet segmentation on the Drishti-GS dataset to other published works is presented in Table 5. The mRDR MAE (IQR) was 0.03 (0.04) and the RDAR MAE (IQR) was 0.09 (0.10). The IoU of the optic cup (CupIoU) was 0.77 and the IoU of the optic disc (DiscIoU) was 0.91. The Dice score of the cup (CupDice) was 0.86 and the Dice score of the optic disc (DiscDice) was 0.95.
Table 5. Drishti-GS segmentation performance of RimNet compared with published segmentation models.

Model | CupIoU | DiscIoU | CupDice | DiscDice
RimNet | 0.77 | 0.91 | 0.86 | 0.95
Zilly et al. (2017) | 0.85 | - | 0.87 | 0.87
Sevastopolsky (2017) | 0.75 | - | - | -
Edupuganti et al. (2018) | 0.81 | 0.69 | - | -
Al-Bander and Zheng et al. (2018) | - | - | 0.83 | 0.95
Joshua et al. (2019) | 0.79 | - | - | -
Yu et al. (2019) | - | - | 0.88 | 0.97

      Discussion

These results demonstrate that RimNet is capable of reasonably accurate segmentation and analysis of optic discs with both intact and incomplete rims. Spaeth et al. distinguished different DDLS grades by mRDR steps of 0.15. The MAE of the mRDR is well within this value, showing that RimNet segmentations are clinically relevant. For more advanced glaucoma with DDLS grades of 6 and above, the neuroretinal rim is incomplete, and Spaeth et al. use the ARW to distinguish grades. The five categories are less than 45 degrees, 45 to 90 degrees, 90 to 180 degrees, 180 to 270 degrees, and greater than 270 degrees. The minimum step is 45 degrees; the ARW MAE of 31 degrees falls below that step, with 22 of 34 incomplete rims correctly identified as incomplete. However, the IQR demonstrates a broad range of ARW errors. These errors echo the difficulties faced by the glaucoma specialists. While segmenting these severely glaucomatous rims to create the ‘ground truth’ masks, the glaucoma specialists often differed on where rims were interrupted and on whether rims were incomplete or intact. Though a forced consensus was eventually reached, this demonstrates the difficulty of the task and the variability of this ‘ground truth’. RimNet offers 65% accuracy in identifying incomplete rims and a relatively low ARW MAE. To our knowledge, RimNet is the first published algorithm to offer such capabilities.
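The ARW banding quoted above can be made explicit with a small helper (hypothetical, for illustration; boundary handling at exactly 45, 90, 180, and 270 degrees is an assumption, since the scale text does not specify it):

```python
def arw_band(arw_degrees):
    """Map an absent rim width (degrees) to the five DDLS bands for
    incomplete rims: <45, 45-90, 90-180, 180-270, >270 degrees."""
    bands = [(45, "<45"), (90, "45-90"), (180, "90-180"), (270, "180-270")]
    for bound, label in bands:
        if arw_degrees < bound:
            return label
    return ">270"
```

Under this banding, an ARW error of 31 degrees is smaller than the narrowest band, which is what makes the reported MAE clinically usable.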
      This work offers three improvements in the current landscape of optic disc segmentation. First, we utilized a dataset of 1208 images with external validation on Drishti-GS

      Sivaswamy J, Krishnadas SR, Joshi GD, et al. Drishti-GS: Retinal image dataset for optic nerve head(ONH) segmentation. In: 2014 IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014. Institute of Electrical and Electronics Engineers Inc.; 2014:53–56.

      . Second, while we have still reported IoU and Dice scores, we have focused on more clinically relevant metrics such as mRDR, RDAR, and ARW. Third, our study is the first to focus on accurate segmentation of incomplete rims. RimNet is a useful step towards completely automating the DDLS algorithm.
      Automated segmentation of the optic disc and cup has been previously explored. The original studies were based on image processing functions such as thresholding, level sets, active contours, clustering, and component extraction, with success on local and publicly available datasets (Thakur and Juneja). As early as 2001, Chrástek et al. offered an automated method of optic disc segmentation with filtering and edge detection, which achieved a segmentation accuracy of 82%. In 2008, Liu and collaborators used level set and thresholding methods to achieve 97% accuracy when comparing algorithm-determined CDR to clinical CDR on a dataset of 73 images from the Singapore Eye Research Centre. In 2015, Lotankar et al. used active contouring to achieve 99% pixel-to-pixel accuracy on a private database of 150 images. However, each of these approaches was limited in scope. Level sets and thresholding fail on images with decreased or increased intensity caused by pathological findings commonly seen on optic disc photographs, such as peripapillary atrophy, leading to overestimated or underestimated CDRs. Active contouring may similarly be affected by abnormal pathology or bright artifacts, fixating on local maxima or minima within the image. Therefore, though these methods have proven efficacy, they can be improved upon.
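The thresholding failure mode described above can be seen in a toy example (purely illustrative, not any cited method): a bright region mimicking peripapillary atrophy crosses the same fixed intensity threshold as the cup, inflating the segmented cup area and therefore the estimated CDR.

```python
import numpy as np

# Synthetic 100x100 "fundus" image: dark background, bright cup region.
img = np.zeros((100, 100))
img[40:60, 40:60] = 0.9            # bright cup, 20x20 pixels

clean_cup_px = (img > 0.5).sum()   # 400 pixels segmented as cup

# Add a bright crescent-like artifact, as peripapillary atrophy might appear.
img_ppa = img.copy()
img_ppa[20:30, 20:30] = 0.8        # artifact also exceeds the threshold

noisy_cup_px = (img_ppa > 0.5).sum()  # 500 pixels: cup area overestimated
```

A learned segmentation model can, in principle, use texture and context rather than raw intensity alone, which is the motivation for the deep learning approaches discussed next.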
      An automated grading system for glaucoma diagnosis and progression needs an efficient, broadly applicable segmentation algorithm with an expansive learning capacity, one that can handle funduscopic images acquired with different imaging modalities and containing concurrent pathologies and variations. Though further work must be done, deep learning and convolutional neural networks may play an important role in the solution: they have an enormous learning capacity relative to their size (Cybenko). Rapid advances in computational memory and processing speed have made neural networks more accessible for optic disc segmentation. Zilly et al. used ensemble learning to achieve 89% IoU on disc segmentation and 84% IoU on cup segmentation on the Drishti-GS dataset. Sevastopolsky furthered this work by using a modified U-Net to achieve comparable accuracy in less than a tenth of the time. More on segmentation efforts, covering both image processing functions and neural networks, can be found in the review article by Thakur and Juneja.
      Several groups have pursued automated mRDR and RDAR calculations. In 2019, Kumar et al. proposed using an image processing technique called active discs to segment the optic disc and cup and to perform general glaucoma classification (normal, moderate, severe) based on mRDR. Though direct mRDR accuracy was not reported, the mRDR-based approach demonstrated high classification accuracy. In 2020, Martins et al. proposed a smartphone-based glaucoma diagnosis pipeline that focuses on glaucoma classification and calculates RDAR; however, RDAR results were not directly reported. More recently, Pachade et al. proposed NENet, a model consisting of EfficientNetB4 and adversarial learning, which achieved an area under the curve (AUC) of 0.901 on RDAR calculation for Drishti-GS.
      To the best of our knowledge, RimNet is the first engineering attempt to pursue segmentation and glaucoma grading efforts with incomplete neuroretinal rims. Thus, direct comparison of RimNet to other segmentation models is difficult. However, through the Drishti-GS dataset, an artificially derived segmentation comparison is possible by recreating cup and disc masks from the RimNet rim segmentations. Table 5 demonstrates that RimNet performed well overall compared to recent segmentation models on the Drishti-GS dataset. While it outperformed several other models in CupDice, DiscIoU, and DiscDice, it was below average in CupIoU. These results must be understood in the context of three factors. First, the Drishti-GS images were available as 30-degree fields of view, whereas RimNet requires images centered and cropped near the optic disc margin; RimNet therefore suffers significant information loss compared with models that use the full 30-degree field of view. Second, RimNet is unique in that it has been trained on both complete and incomplete rims; the models compared against have been trained only on complete rims, and it is reasonable to expect higher segmentation accuracy in those cases. Finally, the cup and disc segmentations produced by RimNet were artificially derived from RimNet's rim segmentation; by not directly predicting the cup and disc, accuracy was lost. Considering these three factors, RimNet's performance on Drishti-GS is acceptable. This is corroborated by the Drishti-GS mRDR MAE of 0.03 (0.04) and RDAR MAE of 0.09 (0.10), both of which are low.
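The mask derivation described above can be sketched as follows (assumed logic, not the authors' published code): because the rim is an annulus surrounding the cup, the disc can be recovered by filling the rim's central hole, and the cup is the disc minus the rim; RDAR then falls out as a ratio of mask areas.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Derive disc and cup masks from a rim mask (assumes the rim is a
# closed annulus), and compute the rim-to-disc area ratio (RDAR).
def masks_from_rim(rim: np.ndarray):
    rim = rim.astype(bool)
    disc = binary_fill_holes(rim)   # fill the cup hole -> full disc
    cup = disc & ~rim               # disc minus rim -> cup
    rdar = rim.sum() / disc.sum()   # rim area over disc area
    return disc, cup, rdar
```

For an incomplete rim the annulus is broken and hole-filling no longer recovers the disc, which is one concrete reason derived cup/disc masks lose accuracy relative to direct prediction.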
      The findings of this study need to be interpreted with its shortcomings in mind. First, the hyperparameter and architecture search was limited by the computing and memory limits of our workstation, which uses NVIDIA RTX 2080 Ti graphics cards; we could not include larger models such as ResNet152 in our search due to these memory constraints. Second, the number of ground truth masks and optic disc images, particularly those of more severe glaucoma, is limited. Greater numbers of diverse samples would allow RimNet to better learn mRDR and ARW calculations.
      RimNet brings glaucomatous damage detection and DDLS grading a step closer to full automation (Spaeth et al.). Automated grading of disc size is a necessary step toward fully autonomous DDLS grading. A future goal would be not only to pursue full automation of DDLS grading but also to test such systems as diagnostic tools. One promising avenue for further investigation is screening with smartphone fundoscopy: the increasing quality of smartphone cameras has made smartphone fundoscopy viable as a screening method. This, combined with automated DDLS grading, could provide a powerful screening tool to revolutionize glaucoma detection.
      In conclusion, RimNet provides a method for high-efficacy rim segmentation and for mRDR and ARW calculation. It also provides an example of how ophthalmic care can be augmented by artificial intelligence. Though more work remains to be done, we believe that the detection, diagnosis, and care of glaucoma can integrate approaches such as these, aiding ophthalmologists in decision making and providing higher quality care for a global population of patients.


      References

      1. Giangiacomo A, Coleman AL. The Epidemiology of Glaucoma. In: Glaucoma. Berlin, Heidelberg: Springer Berlin Heidelberg; :13–21.

      2. Michelson G, Hornegger J, Wärntges S, Lausen B. The Papilla as Screening Parameter for Early Diagnosis of Glaucoma. Dtsch Arztebl Int 2008;105:583. Available at: /pmc/articles/PMC2680559/ [Accessed March 13, 2022].

      Shaikh Y, Yu F, Coleman AL. Burden of undetected and untreated glaucoma in the United States. Am J Ophthalmol 2014;158:1121–1129.e1. Available at: https://pubmed.ncbi.nlm.nih.gov/25152501/ [Accessed March 13, 2022].
      3. Leske MC, Heijl A, Hussein M, et al. Factors for Glaucoma Progression and the Effect of Treatment: The Early Manifest Glaucoma Trial. 2003. Available at: https://jamanetwork.com/.

      4. Spaeth GL, Henderer J, Liu C, et al. The disc damage likelihood scale: reproducibility of a new method of estimating the amount of optic nerve damage caused by glaucoma. Trans Am Ophthalmol Soc 2002;100:181. Available at: /pmc/articles/PMC1358961/?report=abstract [Accessed March 12, 2022].

      5. Kumar JRH, Seelamantula CS, Kamath YS, Jampala R. Rim-to-Disc Ratio Outperforms Cup-to-Disc Ratio for Glaucoma Prescreening. Scientific Reports 2019 9:1 2019;9:1–9. Available at: https://www.nature.com/articles/s41598-019-43385-2 [Accessed March 13, 2022].

      6. Formichella P, Annoh R, Zeri F, Tatham AJ. The role of the disc damage likelihood scale in glaucoma detection by community optometrists. Ophthalmic and Physiological Optics 2020;40:752–759.

      Tong W, Romero M, Lim V, et al. Reliability of Graders and Comparison with an Automated Algorithm for Vertical Cup-Disc Ratio Grading in Fundus Photographs. Ann Acad Med Singap 2019;48:282–289.

      Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evol Intell 2021;15:1–22.
      7. Joshua AO, Nelwamondo F v., Mabuza-Hocquet G. Segmentation of Optic Cup and Disc for Diagnosis of Glaucoma on Retinal Fundus Images. In: Proceedings - 2019 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern Recognition Association of South Africa, SAUPEC/RobMech/PRASA 2019. Institute of Electrical and Electronics Engineers Inc.; 2019:183–187.

      8. Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 1989 2:4 1989;2:303–314. Available at: https://link.springer.com/article/10.1007/BF02551274 [Accessed March 20, 2022].

      9. van Rossum G, Drake FL. Python 3 Reference Manual; CreateSpace. Scotts Valley, CA 2009:242. Available at: https://www.python.org/ [Accessed September 17, 2022].

      Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 2011;12:2825–2830. Available at: http://scikit-learn.sourceforge.net [Accessed September 8, 2022].
      10. Harris CR, Millman KJ, van der Walt SJ, et al. Array programming with NumPy. Nature 2020 585:7825 2020;585:357–362. Available at: https://www.nature.com/articles/s41586-020-2649-2 [Accessed September 8, 2022].

      11. Abadi M, Barham P, Chen J, et al. TensorFlow: A System for Large-Scale Machine Learning. 2016. Available at: https://tensorflow.org [Accessed September 8, 2022].

      O’Malley T, Bursztein E, Long J, et al. Keras Tuner. 2019.

      Elsken T, Metzen JH, Hutter F. Neural Architecture Search: A Survey. Journal of Machine Learning Research 2019;20:1–21. Available at: http://jmlr.org/papers/v20/18-598.html [Accessed March 13, 2022].
      12. Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2015;2016-December:2818–2826. Available at: https://arxiv.org/abs/1512.00567v3 [Accessed March 13, 2022].

      Shoaib M, Sayed N. YOLO Object Detector and Inception-V3 Convolutional Neural Network for Improved Brain Tumor Segmentation. Traitement du Signal 2022;39:371–380 [Accessed September 4, 2022].

      Salama WM, Aly MH. Deep learning in mammography images segmentation and classification: Automated CNN approach. Alexandria Engineering Journal 2021;60:4701–4709 [Accessed September 4, 2022].

      Sivaswamy J, Chakravarty A, Datt Joshi G, Abbas Syed T. A Comprehensive Retinal Image Dataset for the Assessment of Glaucoma from the Optic Nerve Head Analysis. JSM Biomed Imaging Data Pap 2015;2:1004 [Accessed March 20, 2022].
      13. Sivaswamy J, Krishnadas SR, Joshi GD, et al. Drishti-GS: Retinal image dataset for optic nerve head(ONH) segmentation. 2014 IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014 2014:53–56. [Accessed March 20, 2022].

      Zilly J, Buhmann JM, Mahapatra D. Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Computerized Medical Imaging and Graphics 2017;55:28–41 [Accessed March 20, 2022].
      14. Sevastopolsky A. Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network. Pattern Recognition and Image Analysis 2017 27:3 2017;27:618–624. Available at: https://link.springer.com/article/10.1134/S1054661817030269 [Accessed March 20, 2022].

      15. Edupuganti VG, Chawla A, Kale A. Automatic optic disk and cup segmentation of fundus images using deep learning. In: Proceedings - International Conference on Image Processing, ICIP. IEEE Computer Society; 2018:2227–2231.

      Al-Bander B, Williams BM, Al-Nuaimy W, et al. Dense fully convolutional segmentation of the optic disc and cup in colour fundus for glaucoma diagnosis. Symmetry (Basel) 2018;10.

      Yu S, Xiao D, Frost S, Kanagasingam Y. Robust optic disc and cup segmentation with deep learning for glaucoma detection. Computerized Medical Imaging and Graphics 2019;74:61–71.
      16. Mannor S, Peleg B, Rubinstein R. The cross entropy method for classification. ICML 2005 - Proceedings of the 22nd International Conference on Machine Learning 2005:561–568. [Accessed June 10, 2022].

      17. Chaurasia A, Culurciello E. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. 2017 IEEE Visual Communications and Image Processing, VCIP 2017 2017;2018-January:1–4. Available at: http://arxiv.org/abs/1707.03718 [Accessed March 13, 2022].

      18. Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2018:4510–4520. Available at: https://arxiv.org/abs/1801.04381v4 [Accessed June 10, 2022].

      19. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2015;2016-December:770–778. Available at: https://arxiv.org/abs/1512.03385v1 [Accessed June 10, 2022].

      20. Tan M, Le Q v. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. 36th International Conference on Machine Learning, ICML 2019 2019;2019-June:10691–10700. Available at: https://arxiv.org/abs/1905.11946v5 [Accessed June 10, 2022].

      21. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings 2014. Available at: https://arxiv.org/abs/1409.1556v6 [Accessed June 10, 2022].

      22. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2015. Available at: https://arxiv.org/abs/1505.04597v1 [Accessed June 10, 2022].

      23. Lin T-Y, Dollár P, Girshick R, et al. Feature Pyramid Networks for Object Detection. 2016. Available at: https://arxiv.org/abs/1612.03144v2 [Accessed June 10, 2022].

      24. Zhao H, Shi J, Qi X, et al. Pyramid Scene Parsing Network. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 2016;2017-January:6230–6239. Available at: https://arxiv.org/abs/1612.01105v2 [Accessed June 10, 2022].

      25. Ruder S. An overview of gradient descent optimization algorithms. 2016. Available at: https://arxiv.org/abs/1609.04747v2 [Accessed June 10, 2022].

      26. Kingma DP, Ba JL. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings 2014. Available at: https://arxiv.org/abs/1412.6980v9 [Accessed March 13, 2022].

      27. Sivaswamy J, Krishnadas SR, Joshi GD, et al. Drishti-GS: Retinal image dataset for optic nerve head(ONH) segmentation. In: 2014 IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014. Institute of Electrical and Electronics Engineers Inc.; 2014:53–56.

      Thakur N, Juneja M. Survey on segmentation and classification approaches of optic cup and optic disc for diagnosis of glaucoma. Biomed Signal Process Control 2018;42:162–189 [Accessed March 12, 2022].

      Chrástek R, Wolf M, Donath K, et al. Automated segmentation of the optic nerve head for diagnosis of glaucoma. Med Image Anal 2005;9:297–314. Available at: https://pubmed.ncbi.nlm.nih.gov/15950894/ [Accessed March 12, 2022].
      28. Liu J, Wong DWK, Lim JH, et al. Optic cup and disk extraction from retinal fundus images for determination of cup-to-disc ratio. 2008 3rd IEEE Conference on Industrial Electronics and Applications, ICIEA 2008 2008:1828–1832. [Accessed March 20, 2022].

      29. Lotankar M, Noronha K, Koti J. Detection of optic disc and cup from color retinal images for automated diagnosis of glaucoma. 2015 IEEE UP Section Conference on Electrical Computer and Electronics, UPCON 2015 2016. [Accessed March 20, 2022].

      Martins J, Cardoso JS, Soares F. Offline computer-aided diagnosis for Glaucoma detection using fundus images targeted at mobile devices. Comput Methods Programs Biomed 2020;192:105341 [Accessed June 12, 2022].

      Pachade S, Porwal P, Kokare M, et al. NENet: Nested EfficientNet and adversarial learning for joint optic disc and cup segmentation. Med Image Anal 2021;74:102253 [Accessed June 12, 2022].
      30. Panwar N, Huang P, Lee J, et al. Fundus Photography in the 21st Century—A Review of Recent Technological Advances and Their Implications for Worldwide Healthcare. Telemedicine Journal and e-Health 2016;22:198. Available at: /pmc/articles/PMC4790203/ [Accessed March 12, 2022].

      31. Nazari Khanamiri H, Nakatsuka A, El-Annan J. Smartphone Fundus Photography. JoVE (Journal of Visualized Experiments) 2017:e55958. Available at: https://www.jove.com/v/55958/smartphone-fundus-photography [Accessed March 12, 2022].