Figure 1.
The pipeline of the proposed method consists of four steps: preprocessing, retinal region segmentation, intraretinal layer segmentation, and postprocessing.
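For concreteness, the cascade can be sketched as follows. This is a minimal illustration of the four steps, assuming Python/NumPy; `preprocess`, `rs_net`, `is_net`, and `postprocess` are hypothetical placeholders rather than the authors' actual API, and the two-channel IS-Net input follows the n = 2 reported in Table 2.

```python
import numpy as np

def preprocess(bscan):
    # Illustrative step 1: intensity normalization to [0, 1].
    lo, hi = bscan.min(), bscan.max()
    return (bscan - lo) / (hi - lo + 1e-8)

def segment_volume(bscans, rs_net, is_net, postprocess=lambda m: m):
    """Cascade: preprocessing -> RS-Net -> IS-Net -> postprocessing."""
    labels = []
    for bscan in bscans:
        x = preprocess(bscan)                    # step 1: normalize
        retina_mask = rs_net(x)                  # step 2: retina vs. above/below
        x2 = np.stack([x, retina_mask])          # 2-channel IS-Net input (n = 2)
        layer_map = is_net(x2)                   # step 3: 9-class layer map
        labels.append(postprocess(layer_map))    # step 4: e.g., hole filling
    return np.stack(labels)
```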
Figure 2.
Visual representation of the different retinal layers and boundaries after segmentation. Our method segments 9 different regions (7 highlighted with a yellow background on the left side of the image; the other two regions are above the ILM and below the BM) delimited by 8 different retinal boundaries (highlighted with a cyan background on the right side of the image). Layer abbreviations are positioned within their respective layers; boundary abbreviations are positioned on top of their respective segmentation lines. Abbreviations: ILM—inner limiting membrane, mRNFL—macular retinal nerve fibre layer, GCL—ganglion cell layer, IPL—inner plexiform layer, GCIPL—macular ganglion cell/inner plexiform layer, INL—inner nuclear layer, OPL—outer plexiform layer, ONL—outer nuclear layer, ELM—external limiting membrane, MZ—myoid zone, EZ—ellipsoid zone, OS—outer segment, RPE—retinal pigment epithelium, OL—outer layer, and BM—Bruch's membrane.
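The relationship in Figure 2 between the 8 boundaries and the 9 regions (7 layers plus the areas above the ILM and below the BM) can be made concrete with a short sketch: given one row index per boundary and A-scan, a pixel's region label is simply the number of boundaries lying above it. This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def boundaries_to_regions(boundaries, height):
    """boundaries: (8, W) array of row indices, ordered ILM ... BM, one
    column per A-scan. Returns an (H, W) label map with 9 classes:
    0 = above ILM, 1..7 = the seven retinal layers, 8 = below BM."""
    rows = np.arange(height)[:, None]                  # (H, 1) row coordinates
    # Count how many boundaries lie at or above each pixel: 0..8.
    return (rows >= boundaries[:, None, :]).sum(axis=0)
```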
Figure 3.
The general architecture of the proposed compressed U-Net. RS-Net and IS-Net share the same architecture but differ in channel depth c, kernel size f, number of input channels n, and number of output classes o.
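A minimal sketch of such a parameterized U-Net is given below, assuming PyTorch. The encoder depth and block layout are assumptions for illustration; only the four knobs named in the caption (c, f, n, o) are taken from the figure.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, f):
    # Two same-padding convolutions with ReLU, as in a standard U-Net stage.
    p = f // 2
    return nn.Sequential(
        nn.Conv2d(cin, cout, f, padding=p), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, f, padding=p), nn.ReLU(inplace=True))

class CompressedUNet(nn.Module):
    def __init__(self, n=1, o=3, c=4, f=5, depth=3):
        super().__init__()
        chans = [c * 2**i for i in range(depth)]       # e.g., 4, 8, 16
        self.enc, cin = nn.ModuleList(), n
        for ch in chans:
            self.enc.append(conv_block(cin, ch, f))
            cin = ch
        self.pool = nn.MaxPool2d(2)
        self.up, self.dec = nn.ModuleList(), nn.ModuleList()
        for ch in reversed(chans[:-1]):
            self.up.append(nn.ConvTranspose2d(cin, ch, 2, stride=2))
            self.dec.append(conv_block(2 * ch, ch, f))  # skip + upsampled
            cin = ch
        self.head = nn.Conv2d(cin, o, 1)               # per-pixel class scores

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x if i == 0 else self.pool(x))
            skips.append(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)
```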
Figure 4.
A comparison of different loss functions on a single class (INL). The weighted focal loss produces the fewest false positives.
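A minimal sketch of a class-weighted focal loss of the kind compared in Figure 4 is shown below, assuming PyTorch; gamma and the per-class weights are generic hyperparameters, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, target, weights, gamma=2.0):
    """logits: (B, C, H, W), target: (B, H, W) int64, weights: (C,) tensor.
    Down-weights easy pixels by (1 - p_t)^gamma and rescales rare classes."""
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of true class
    p_t = logp_t.exp()
    w = weights[target]                                      # per-pixel class weight
    return (-w * (1 - p_t) ** gamma * logp_t).mean()
```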
Figure 5.
The first column shows the small holes that appear during classification in both networks. The second column shows the same images after the holes have been removed. The last column shows 3D reconstructed surfaces of different layers.
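The hole removal between the first and second columns can be approximated with per-class morphological hole filling, sketched below using SciPy; the authors' actual postprocessing may differ.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def fill_label_holes(label_map, num_classes=9):
    """Fill enclosed holes inside each class region of an (H, W) label map."""
    out = label_map.copy()
    for c in range(num_classes):
        mask = label_map == c
        filled = binary_fill_holes(mask)
        out[filled & ~mask] = c        # assign hole pixels to surrounding class
    return out
```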
Figure 6.
The visualization of the segmentation lines computed by the proposed method for three different data sets.
Figure 7.
All nine different classes segmented by the proposed method.
Figure 8.
Example segmentation of a low-quality scan, showing the performance of the proposed method compared to state-of-the-art methods (HeyEx—device segmentation, AURA [45], OCTExp [49]).
Table 1.
Background and foreground pixel imbalance: the table shows the percentage of foreground pixels (FP) with respect to the total pixels in the training data for each class.
| Data | Vitreous | mRNFL | GCIPL | INL | OPL | ONL | MZ | OL | b-BM |
|---|---|---|---|---|---|---|---|---|---|
| FP | 33% | 5% | 9% | 5% | 3% | 9% | 3% | 8% | 24% |
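The percentages in Table 1 follow directly from the training label maps; a minimal sketch (NumPy assumed, with illustrative names):

```python
import numpy as np

def foreground_percentages(label_maps, num_classes=9):
    """label_maps: (N, H, W) integer array of training annotations.
    Returns each class's share of all pixels, in percent."""
    counts = np.bincount(label_maps.ravel(), minlength=num_classes)
    return 100.0 * counts / label_maps.size
```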
Table 2.
The compression experiments with different architectural parameters for both RS and IS networks. The selected configuration for each network is highlighted in bold.
| Network Architecture | f | c | Validation DSC | Test DSC | # Parameters | o | n | h | w |
|---|---|---|---|---|---|---|---|---|---|
| **RS-Net** | **5** | **4** | **0.9979** | **0.9976** | **79K** | **3** | **1** | **512** | **512** |
| RS-Net | 5 | 8 | 0.9984 | 0.9975 | 0.31M | 3 | 1 | 512 | 512 |
| **IS-Net** | **5** | **16** | **0.9933** | **0.9933** | **1.2M** | **9** | **2** | **512** | **512** |
| IS-Net | 3 | 16 | 0.9917 | 0.9891 | 48K | 9 | 2 | 512 | 512 |
| IS-Net | 3 | 8 | 0.9826 | 0.9828 | 0.12M | 9 | 2 | 512 | 512 |
| IS-Net | 3 | 32 | 0.9933 | 0.9936 | 1.9M | 9 | 2 | 512 | 512 |
Table 3.
A comparison between the original U-Net and the compressed U-Net (proposed). The prediction time (in seconds) is measured on CPU only, for a single B-scan. Abbreviations: VDSC—validation DSC, TDSC—test DSC, MS—model size (MB), TT—training time, PT—prediction time. The entries corresponding to the proposed network are highlighted in bold.
| Network Arch. | Stage | # Parameters | VDSC | TDSC | MS (MB) | TT | PT |
|---|---|---|---|---|---|---|---|
| U-Net [22] | RS | 31M | 0.9986 | 0.9985 | 121 | 114 | 1.97 |
| U-Net [22] | IS | 31M | 0.9935 | 0.9929 | 121 | 129 | 2.06 |
| ResU-Net [46] | RS | 13M | 0.9986 | 0.9985 | 52 | 112 | 1.08 |
| ResU-Net [46] | IS | 13M | 0.9943 | 0.9944 | 52 | 130 | 1.51 |
| **Compressed U-Net** | **RS** | **79K** | **0.9979** | **0.9976** | **0.4** | **18** | **0.11** |
| **Compressed U-Net** | **IS** | **1.2M** | **0.9933** | **0.9933** | **5** | **51** | **0.31** |
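The parameter counts and model sizes in Tables 2 and 3 are consistent with 4 bytes per float32 weight (e.g., 31M parameters ≈ 124 MB, close to the reported 121 MB). A minimal sketch of this bookkeeping, assuming PyTorch:

```python
def model_stats(model):
    # Count trainable parameters and estimate on-disk size for
    # float32 weights (no optimizer state included).
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    size_mb = n_params * 4 / 1e6
    return n_params, size_mb
```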
Table 4.
Training-related metrics. Abbreviations: NetA—network architecture, Tr—training, Te—test, Val—validation, Acc—accuracy.
| NetA | Tr Acc | Tr DSC | Tr Loss | Val Acc | Val DSC | Val Loss | Te Acc | Te DSC | Te Loss | Epoch |
|---|---|---|---|---|---|---|---|---|---|---|
| RS | 0.9985 | 0.9992 | 0.0064 | 0.9984 | 0.9992 | 0.0064 | 0.9985 | 0.9991 | 0.0061 | 28 |
| IS | 0.9953 | 0.9974 | 0.0175 | 0.9952 | 0.9974 | 0.0179 | 0.9950 | 0.9973 | 0.0185 | 11 |
Table 5.
A comparison between HeyEx segmentation and the proposed method (CCU-INSEG) using MAE (standard deviation) in μm. Abbreviations: HeyEx—Heidelberg eye explorer (device integrated segmentation software).
| Method | ILM | RNFL-GCL | IPL-INL | INL-OPL | OPL-ONL | ELM | EZ-OS | BM | Total |
|---|---|---|---|---|---|---|---|---|---|
| HeyEx | 3.5 (3.2) | 6.3 (2.6) | 6.6 (2.5) | 7.6 (2.5) | 4.8 (3.0) | 3.5 (2.8) | 5.2 (2.9) | 4.0 (2.7) | 5.2 (2.6) |
| CCU | 2.1 (0.4) | 3.0 (0.6) | 2.6 (0.4) | 2.8 (0.5) | 2.6 (0.7) | 1.7 (0.4) | 1.7 (0.4) | 1.9 (0.6) | 2.3 (0.4) |
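The boundary-wise MAE in Tables 5 and 7 compares boundary positions per A-scan and converts pixel offsets to micrometers; a minimal sketch, with an illustrative (not paper-specified) axial pixel spacing:

```python
import numpy as np

def boundary_mae_um(pred_rows, true_rows, um_per_pixel=3.87):
    """pred_rows, true_rows: (N_ascans,) row indices of one boundary.
    um_per_pixel is a device-dependent axial spacing (value assumed)."""
    return float(np.mean(np.abs(pred_rows - true_rows)) * um_per_pixel)
```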
Table 6.
DSC between CCU-INSEG’s outcome and the manually segmented data.
| Data | Vitreous | mRNFL | GCIPL | INL | OPL | ONL | MZ | OL | b-BM | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| CUB | 0.99 | 0.92 | 0.95 | 0.92 | 0.89 | 0.96 | 0.90 | 0.97 | 0.99 | 0.95 |
| UCLA | 0.99 | 0.89 | 0.94 | 0.92 | 0.88 | 0.96 | 0.88 | 0.97 | 0.99 | 0.94 |
| JHU | 0.99 | 0.90 | 0.94 | 0.84 | 0.83 | 0.94 | 0.85 | 0.95 | 0.99 | 0.92 |
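The per-class DSC reported in Tables 6 and 9 is the standard Dice coefficient; a minimal sketch:

```python
import numpy as np

def dice(pred, target, cls):
    """pred, target: integer label maps of equal shape; cls: class index.
    Returns 2|P ∩ T| / (|P| + |T|) for the given class."""
    p, t = pred == cls, target == cls
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * (p & t).sum() / denom
```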
Table 7.
MAE (standard deviation) in μm between CCU-INSEG's outcome and the manually corrected segmentation for each boundary.
| Data | ILM | RNFL-GCL | IPL-INL | INL-OPL | OPL-ONL | ELM | EZ-OS | BM | Total |
|---|---|---|---|---|---|---|---|---|---|
| CUB | 2.1 (0.4) | 3.1 (0.6) | 2.5 (0.4) | 2.9 (0.4) | 2.6 (0.7) | 1.8 (0.4) | 1.7 (0.4) | 1.8 (0.4) | 2.3 (0.4) |
| UCLA | 2.2 (0.2) | 4.2 (1.0) | 2.7 (0.4) | 3.2 (0.5) | 2.9 (0.4) | 2.0 (0.3) | 2.0 (0.4) | 2.0 (0.5) | 2.6 (0.3) |
| JHU | 3.2 (0.4) | 4.0 (0.7) | 3.9 (0.6) | 5.3 (1.0) | 3.8 (1.0) | 2.6 (0.4) | 2.6 (0.4) | 4.0 (0.9) | 3.7 (0.3) |
Table 8.
The reliability of the standard parameters (top three rows) and the fidelity of the parameters computed from CCU-INSEG's segmentation with respect to the manual segmentation (MAE in μm; last three rows). The suffixes T and V denote thickness and volume, respectively. Abbreviations: ICC—intra-class correlation coefficient, LCI—lower confidence interval, UCI—upper confidence interval, mRNFL—macular retinal nerve fiber layer, GCIPL—macular ganglion cell and inner plexiform layer, INL—inner nuclear layer, GCC—ganglion cell complex, TM—total macula.
| Param | mRNFL T | GCIPL T | INL T | GCC T | TM T | mRNFL V | GCIPL V | INL V | GCC V | TM V |
|---|---|---|---|---|---|---|---|---|---|---|
| ICC | 0.93 | 0.99 | 0.96 | 0.99 | 0.99 | 0.93 | 0.99 | 0.96 | 0.99 | 0.99 |
| LCI | 0.88 | 0.98 | 0.93 | 0.97 | 0.98 | 0.88 | 0.98 | 0.93 | 0.97 | 0.98 |
| UCI | 0.96 | 0.99 | 0.98 | 0.99 | 0.99 | 0.96 | 0.99 | 0.98 | 0.99 | 0.99 |
| CUB | 1.2 | 0.9 | 0.8 | 0.5 | 0.7 | 0.024 | 0.018 | 0.016 | 0.010 | 0.014 |
| UCLA | 2.0 | 1.6 | 0.8 | 0.5 | 0.6 | 0.039 | 0.032 | 0.016 | 0.009 | 0.013 |
| JHU | 0.9 | 1.8 | 6.9 | 1.4 | 2.4 | 0.018 | 0.035 | 0.140 | 0.028 | 0.044 |
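The paper does not state which ICC variant underlies Table 8; a common choice is the two-way random-effects, absolute-agreement, single-measurement ICC(2,1) of Shrout and Fleiss, sketched below:

```python
import numpy as np

def icc_2_1(x):
    """x: (n_subjects, k_raters) matrix of measurements."""
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```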
Table 9.
DSCs computed on 45 manually delineated B-scan images for AURA, OCTExp, RelayNet, and CCU. ★ RelayNet and OCTExp segment ONL + MZ as a single class; therefore, the MZ column is empty for both, and their ONL column shows the DSC for the combined ONL and MZ. ▲ RelayNet segments OL as two different classes; for this comparison, we combined both classes into OL and computed the DSC with respect to the manual segmentation. The best outcome for each layer is highlighted in bold.
| Method | Vitreous | mRNFL | GCIPL | INL | OPL | ONL | MZ | OL | b-BM | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| AURA [45] | **0.99** | 0.57 | 0.62 | 0.86 | 0.85 | **0.96** | **0.92** | **0.98** | **0.99** | 0.86 |
| OCTExp [49] | **0.99** | 0.56 | 0.76 | 0.73 | 0.69 | 0.89 ★ | - | 0.85 | **0.99** | 0.80 |
| RelayNet [20] | 0.87 | 0.70 | 0.73 | 0.64 | **0.86** | 0.77 ★ | - | 0.90 ▲ | 0.92 | 0.80 |
| CCU | **0.99** | **0.88** | **0.93** | **0.88** | 0.84 | **0.96** | 0.89 | **0.98** | **0.99** | **0.93** |
Table 10.
A comparison between the proposed method (CCU, in bold) and the state-of-the-art methods in terms of correction time (in seconds) and the deviation (in μm) between the corrected and uncorrected versions of the selected 15 volumes. MAE: mean absolute error. The best outcomes are highlighted in bold.
| Method | Avg Correction Time (Min–Max) (s) | Correction Needed (Volumes) | MAE (Min–Max) (in μm) |
|---|---|---|---|
| HeyEx | 162 (99–320) | 11 (73.3%) | 0.016 (0–0.301) |
| OCTExp [49] | 209 (86–638) | 6 (40%) | 0.034 (0–0.950) |
| AURA [45] | 596 (204–1658) | 15 (100%) | 0.164 (0–5.938) |
| **CCU** | **97 (82–150)** | **2 (13.3%)** | **0.001 (0–0.115)** |