Figure 1.
Detection examples. This figure illustrates detection examples of the polyp-detection system on our own data (EndoData).
Figure 2.
Training datasets overview. This figure illustrates all the data we combined and gathered for training the polyp-detection system. Open-source data are combined with our data collected from different German private practices to create one dataset with 506,338 images. Storz, Pentax, and Olympus are different endoscope manufacturers, and we collected the data using their endoscope processors. The different open-source datasets contain the following numbers of images: ETIS-Larib: 196, CVC-Segmentation: 56, SUN Colonoscopy: 157,882, Kvasir-SEG: 1000, EDD2020: 127, and CVC-EndoSceneStill, which consists of CVC-ColonDB (300) and CVC-ClinicDB (612). Overall, this sums to 160,173 open-source images.
Figure 3.
Data augmentation for polyp detection. This figure shows the isolated augmentations we perform to create new training samples. All of these are executed together, each with a certain probability, in our implementation.
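The probabilistic application of augmentations described in the caption can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the specific augmentations (horizontal flip, Gaussian noise) and their probabilities are assumptions, and a real detection pipeline would also have to transform the bounding boxes alongside the pixels.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, p_flip=0.5, p_noise=0.5):
    """Apply each augmentation independently with its own probability.

    Illustrative only: for polyp detection, bounding boxes would need
    to be transformed together with the image; this sketch handles
    pixels only.
    """
    if rng.random() < p_flip:
        image = np.fliplr(image)  # horizontal flip
    if rng.random() < p_noise:
        # additive Gaussian noise, clipped to the valid pixel range
        image = np.clip(image + rng.normal(0.0, 5.0, image.shape), 0, 255)
    return image
```

Setting a probability to 1.0 or 0.0 forces an augmentation on or off, which is handy for unit-testing the individual transforms.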
Figure 4.
Overview of the polyp-detection system. This figure shows all the steps of the whole polyp-detection system. The start is an input polyp sequence ending with the last frame from the endoscope (t). From this sequence, ws frames are extracted and fed into the CNN architecture. Detections are then performed with YOLOv5, and the predicted boxes are post-processed by RT-REPP. Afterward, the final filtered detections are calculated.
Figure 6.
Overview of the PANet of YOLOv5. This overview shows a more detailed view of the PANet structure in YOLOv5. The starting point is a polyp input image. The FPN (feature pyramid network) architecture is illustrated in interaction with the PANet. Finally, three outputs are produced, specially designed for small (p5), medium (p4), and large (p3) objects.
Figure 7.
The REPP modules used for video object detection post-processing. The object detector predicts a polyp for a sequence of frames and links all bounding boxes across frames with the help of the defined similarity. Lastly, detections are refined to minimize FPs. This figure is adapted from Sabater et al. [13].
Figure 8.
Real-time REPP. The system receives a stream of video frames, where each frame is forwarded to a detection network. The result of the current frame is stored in the buffer (green), and REPP is executed afterward. The improved results are then displayed.
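The buffered loop described in the caption can be sketched as below. This is an assumed interface, not the paper's code: `detect` and `repp` stand in for the detection network and the REPP post-processing step, and the window size of 8 frames is illustrative.

```python
from collections import deque

def rt_repp_stream(frames, detect, repp, window=8):
    """Sketch of the RT-REPP loop: keep a bounded sliding buffer of
    per-frame detections, run REPP over the whole buffer, and emit the
    refined result for the newest frame only."""
    buffer = deque(maxlen=window)  # the green buffer in Figure 8
    for frame in frames:
        buffer.append(detect(frame))   # raw detections for this frame
        refined = repp(list(buffer))   # post-process the buffered window
        yield refined[-1]              # display the current frame's result
```

Because the buffer is bounded, latency stays constant per frame, which is what makes this variant real-time capable while full REPP (which needs the whole video) is not.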
Figure 9.
This figure illustrates the setup of the examination room.
Figure 10.
The AI pipeline. This figure depicts the AI pipeline used to apply the created polyp-detection system in a clinical environment.
Figure 11.
The display pipeline. This figure depicts the display pipeline used to display the final detection results to the gastroenterologist.
Figure 12.
Detection shift through latency.
Figure 13.
Example images of the EndoData dataset for evaluation.
Figure 14.
Heatmaps for polyp detection. This figure illustrates the detections of the model using the Grad-CAM algorithm. The pixels most relevant for the detection are marked in warm colors such as red, and less relevant pixels in cold colors such as blue. The CNN has three detection outputs for small, medium, and large objects.
Figure 15.
Examples of errors in video 12 of the CVC-VideoClinicDB dataset. The left image shows a correct polyp detection, the middle image misidentifies the size of the polyp, and the right image shows no detection due to oversaturation.
Figure 16.
Examples of errors in video 15 of the CVC-VideoClinicDB dataset. The left image shows a missed polyp and the middle image a proper detection. In the right image, one polyp in the frame is detected while the other is missed.
Figure 17.
Examples of errors in video 17 of the CVC-VideoClinicDB dataset. The left image shows the detection of a flat polyp. The middle image shows the same polyp being missed because it is blocked by the colon wall. The right image shows a (short) re-detection.
Table 1.
Overview of polyp detection models on still image datasets. The table includes the following abbreviations: DenseNet-UDCS: densely connected neural network with unbalanced discriminant and category sensitive constraints; ADGAN: attribute-decomposed generative adversarial networks; CenterNet: center network; SSD: single shot detector; YOLO: you only look once; R-CNN: region-based convolutional neural network.
Author | Year | Method | Test Dataset | F1-Score | Speed |
---|---|---|---|---|---
Yuan et al. [25] | 2020 | DenseNet-UDCS | Custom | 81.83% | N/A
Liu et al. [26] | 2020 | ADGAN | Custom | 72.96% | N/A
Wang et al. [27] | 2019 | CenterNet | CVC-ClinicDB | 97.88% | 52 FPS
Liu et al. [28] | 2019 | SSD | CVC-ClinicDB | 78.9% | 30 FPS
Zhang et al. [29] | 2019 | SSD | ETIS-Larib | 69.8% | 24 FPS
Zheng et al. [30] | 2018 | YOLO | ETIS-Larib | 75.7% | 16 FPS
Mo et al. [31] | 2018 | Faster R-CNN | CVC-ClinicDB | 91.7% | 17 FPS
Table 2.
Overview of polyp detection models on video datasets.
Author | Year | Method | Test Dataset | F1-Score | Speed |
---|---|---|---|---|---
Nogueira et al. [53] | 2022 | YOLOv3 | Custom | 88.10% | 30 FPS |
Xu et al. [54] | 2021 | CNN + SSIM | CVC-VideoClinicDB | 75.86% | N/A |
Livovsky et al. [50] | 2021 | RetinaNet | Custom | N/A | 30 FPS |
Misawa et al. [11] | 2021 | YOLOv3 | SUN-Colonoscopy | 87.05% | 30 FPS |
Qadir et al. [55] | 2020 | Faster R-CNN | CVC-VideoClinicDB | 84.44% | 15 FPS |
| | SSD | CVC-VideoClinicDB | 71.82% | 33 FPS |
Yuan et al. [25] | 2020 | DenseNet-UDCS | Custom | 81.83% | N/A |
Zhang et al. [40] | 2019 | SSD-GPNet | Custom | 84.24% | 50 FPS |
Misawa et al. [52] | 2019 | 3D-CNN | Custom | N/A | N/A |
Itoh et al. [51] | 2019 | 3D-ResNet | Custom | N/A | N/A |
Shin et al. [33] | 2018 | Inception ResNet | ASU-Mayo-Video-DB | 86.9% | 2.5 FPS |
Yuan et al. [24] | 2017 | AlexNet | ASU-Mayo-Video-DB | N/A | N/A |
Tajbakhsh et al. [23] | 2016 | AlexNet | Custom | N/A | N/A |
Table 3.
Results of the 5-fold cross-validation for selecting the final deep learning model. Values displayed in bold font indicate the highest or most optimal results. The abbreviation “adv.” stands for “advanced”.
Model | Precision | Recall | F1 | mAP | Speed (FPS) | Parameters |
---|---|---|---|---|---|---
Faster R-CNN [32] | 81.79 | 85.58 | 83.64 | 79.43 | 15 | 91 M |
YOLOv3 [35] | 80.45 | 82.46 | 81.44 | 81.92 | 41 | 65 M |
YOLOv4 [79] | 83.04 | 83.68 | 82.36 | 83.54 | 47 | 81 M |
YOLOv5 (adv.) | 88.02 | 89.38 | 88.70 | 86.44 | 43 | 79 M |
SSD [36] | 75.52 | 76.19 | 75.85 | 78.69 | 30 | 64 M |
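The F1 column in Table 3 is the harmonic mean of precision and recall, which can be checked row by row; for example, the Faster R-CNN row:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (both given in %)."""
    return 2 * precision * recall / (precision + recall)

# Faster R-CNN row of Table 3: precision 81.79, recall 85.58
print(round(f1_score(81.79, 85.58), 2))  # 83.64
```

Small discrepancies against the reported values can arise because the published precision and recall are themselves rounded to two decimals.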
Table 4.
Prototype components.
Component | Type | Info |
---|---|---
CPU | AMD Ryzen 7 3800X | 8 Cores, 3.9 GHz |
GPU | MSI GeForce RTX 3080 Ti | 12 GB GDDR6X |
RAM | G.Skill RipJaws V DDR4-3200 | 2 × 8 GB |
Disk | Samsung SSD 970 EVO Plus | 500 GB |
Mainboard | B550 Vision D | - |
Frame Grabber | DeckLink Mini Recorder 4K | - |
Table 5.
A 5000-frame system test. This table shows the speed of the detection system on two GPUs, considering an image input stream of 50 FPS.
GPU | AI Exe. Count | AI Avg. Exe. Time | AI Evaluation Rate |
---|---|---|---
RTX 3080 Ti | 2996 | 19.5 ms | 29.4 FPS |
GTX 1050 Ti | 313 | 306.7 ms | 3.1 FPS |
Table 6.
Evaluation on the CVC-VideoClinicDB dataset. This table compares six different polyp-detection approaches on the benchmark dataset CVC-VideoClinicDB. The first two models are baseline models, and the third is the best model from the current literature. The last three models are different stages of our polyp-detection system. Precision, Recall, F1, and mAP are given in %, and the speed is given in FPS. Values displayed in bold font indicate the highest or most optimal results. The abbreviation “adv.” stands for “advanced”.
Model | Precision | Recall | F1 | mAP | Speed | RT Capable |
---|---|---|---|---|---|---
YOLOv5 (base) | 92.15 | 69.98 | 79.55 | 73.21 | 44 | yes |
Faster R-CNN | 93.84 | 74.79 | 83.24 | 79.78 | 15 | no |
Qadir et al. [55] | 87.51 | 81.58 | 84.44 | - | 15 | no |
YOLOv5 (adv.) | 98.53 | 76.44 | 86.09 | 77.99 | 44 | yes |
REPP | 99.71 | 87.05 | 92.95 | 86.98 | 42 | no |
RT-REPP | 99.06 | 82.86 | 90.24 | 83.15 | 43 | yes |
Table 7.
Detailed comparison of detection approaches on the benchmark dataset CVC-VideoClinicDB. The first two models are baseline models, and the last three are different stages of our polyp-detection system. F1 and mAP are given in %. The abbreviation “adv.” stands for “advanced”.
Video | YOLOv5 (Base) | | F-RCNN | | YOLOv5 (Adv.) | | REPP | | RT-REPP |
---|---|---|---|---|---|---|---|---|---|---
| mAP | F1 | mAP | F1 | mAP | F1 | mAP | F1 | mAP | F1
1 | 78.22 | 87.41 | 92.56 | 88.14 | 85.17 | 91.47 | 94.56 | 97.44 | 89.38 | 94.18 |
2 | 87.35 | 91.87 | 89.48 | 89.19 | 94.62 | 96.91 | 97.48 | 98.48 | 96.48 | 97.96 |
3 | 75.58 | 80.09 | 81.48 | 77.71 | 80.18 | 84.42 | 86.48 | 87.64 | 82.65 | 85.01 |
4 | 90.04 | 92.16 | 93.35 | 90.39 | 98.00 | 98.99 | 98.35 | 99.50 | 98.29 | 98.99 |
5 | 76.29 | 82.53 | 78.01 | 85.85 | 78.40 | 87.64 | 83.01 | 90.71 | 78.88 | 88.27 |
6 | 86.23 | 88.59 | 87.05 | 89.42 | 90.07 | 94.83 | 92.05 | 95.43 | 88.41 | 92.83 |
7 | 60.75 | 67.15 | 69.56 | 78.38 | 66.23 | 76.15 | 74.56 | 85.71 | 71.95 | 82.35 |
8 | 53.93 | 69.52 | 77.22 | 82.65 | 59.16 | 73.66 | 82.22 | 90.11 | 82.22 | 90.11 |
9 | 74.27 | 77.29 | 84.10 | 87.21 | 76.50 | 87.01 | 89.10 | 94.18 | 85.15 | 91.89 |
10 | 75.28 | 77.36 | 86.33 | 86.00 | 78.22 | 87.25 | 91.33 | 95.29 | 86.61 | 92.61 |
11 | 90.17 | 92.19 | 94.19 | 94.92 | 95.41 | 97.44 | 99.19 | 99.50 | 98.65 | 99.50 |
12 | 30.81 | 46.22 | 42.51 | 60.09 | 36.78 | 54.01 | 47.51 | 64.86 | 39.85 | 57.14 |
13 | 84.48 | 89.48 | 84.68 | 87.06 | 89.37 | 94.29 | 89.68 | 93.83 | 90.00 | 94.74 |
14 | 74.35 | 80.49 | 82.20 | 86.42 | 79.09 | 87.88 | 87.20 | 93.05 | 82.20 | 90.11 |
15 | 48.88 | 62.62 | 52.51 | 66.56 | 52.18 | 69.04 | 57.51 | 73.15 | 55.65 | 71.79 |
16 | 89.45 | 92.97 | 93.63 | 90.32 | 94.54 | 97.44 | 98.63 | 99.50 | 98.36 | 98.99 |
17 | 52.25 | 64.61 | 56.29 | 68.15 | 57.77 | 72.59 | 61.29 | 75.78 | 49.80 | 65.75 |
Mean | 73.21 | 79.55 | 79.78 | 83.24 | 77.99 | 86.09 | 86.98 | 92.95 | 83.15 | 90.24 |
Table 8.
Details of EndoData. This table shows the details of our own evaluation data (EndoData). Width and height indicate the size of the frames used.
Video | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---
Frames | 14,947 | 18,026 | 1960 | 1923 | 9277 | 14,362 | 347 | 4627 | 6639 | 766 |
Polyps | 2 | 5 | 1 | 1 | 2 | 5 | 1 | 2 | 4 | 1 |
Width | 1920 | 1920 | 1920 | 1920 | 1920 | 1920 | 1920 | 1920 | 1920 | 1920 |
Height | 1080 | 1080 | 1080 | 1080 | 1080 | 1080 | 1080 | 1080 | 1080 | 1080 |
Table 9.
Evaluation of EndoData. This table compares five different polyp-detection approaches on our EndoData dataset. The first two models are baseline models. The last three models are different stages of our polyp-detection system. F1 and mAP are given in %. Values displayed in bold font indicate the highest or most optimal results. The abbreviation “adv.” stands for “advanced”.
Model | Precision | Recall | F1 | mAP | Speed | RT Capable |
---|---|---|---|---|---|---
YOLOv5 (base) | 78.39 | 80.54 | 79.45 | 77.09 | 44 | yes |
Faster R-CNN | 81.85 | 86.20 | 83.97 | 81.74 | 15 | no |
YOLOv5 (adv.) | 86.21 | 86.43 | 86.32 | 82.28 | 44 | yes |
REPP | 90.63 | 89.32 | 89.97 | 87.24 | 42 | no |
RT-REPP | 88.11 | 87.83 | 87.97 | 84.29 | 43 | yes |
Table 10.
Time to first detection on our own dataset (EndoData). This table compares five different polyp detection approaches on EndoData with our new metric, the time to first detection (FDT). The first two models are baseline models, and the last three are different stages of our polyp-detection system. FDT is measured in seconds. FP denotes the number of false positives in the video. Values displayed in bold font indicate the highest or most optimal results. The abbreviation “adv.” stands for “advanced”.
Video | YOLOv5 (Base) | | F-RCNN | | YOLOv5 (Adv.) | | REPP | | RT-REPP |
---|---|---|---|---|---|---|---|---|---|---
| FDT | FP | FDT | FP | FDT | FP | FDT | FP | FDT | FP
1 | 0.07 | 201 | 0.00 | 159 | 0.00 | 155 | 0.00 | 109 | 0.00 | 150 |
2 | 0.68 | 13 | 0.62 | 11 | 0.51 | 4 | 0.51 | 8 | 0.51 | 5 |
3 | 0.10 | 21 | 0.00 | 17 | 0.00 | 30 | 0.00 | 12 | 0.00 | 13 |
4 | 0.00 | 234 | 0.00 | 198 | 0.00 | 145 | 0.00 | 135 | 0.00 | 123 |
5 | 1.33 | 663 | 1.07 | 572 | 0.93 | 425 | 0.93 | 379 | 0.93 | 352 |
6 | 0.13 | 35 | 0.07 | 31 | 0.03 | 127 | 0.03 | 22 | 0.03 | 68 |
7 | 5.00 | 50 | 3.40 | 33 | 2.60 | 51 | 2.67 | 22 | 2.63 | 28 |
8 | 0.20 | 99 | 0.08 | 83 | 0.05 | 152 | 0.05 | 58 | 0.05 | 50 |
9 | 0.68 | 41 | 0.32 | 35 | 0.32 | 83 | 0.32 | 25 | 0.32 | 115 |
10 | 0.03 | 22 | 0.00 | 19 | 0.00 | 15 | 0.00 | 13 | 0.00 | 9 |
Mean | 0.82 | 137.9 | 0.56 | 118.7 | 0.44 | 113.5 | 0.45 | 78.3 | 0.44 | 91.3 |
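The FDT metric from the Table 10 caption can be computed as sketched below. The exact definition is an assumption inferred from the caption: seconds between the first frame in which the polyp is visible and the first frame with a true-positive detection, at a given frame rate.

```python
def first_detection_time(tp_frames, polyp_first_frame, fps=25):
    """Return FDT in seconds, or None if the polyp is never detected.

    tp_frames: sorted frame indices that contain a true-positive detection.
    polyp_first_frame: first frame in which the polyp is visible.
    fps: frame rate used to convert the frame gap to seconds (illustrative).
    """
    for frame_idx in tp_frames:
        if frame_idx >= polyp_first_frame:
            return (frame_idx - polyp_first_frame) / fps
    return None
```

For example, a polyp appearing at frame 10 and first detected at frame 12 yields an FDT of 2/25 = 0.08 s at 25 FPS; an FDT of 0.00 in the table means the polyp was detected in its very first visible frame.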
Table 11.
False positive rate (FPR) on our own dataset (EndoData). This table extends Table 10 by providing the FPR for five different polyp-detection approaches on EndoData. The first two models are baseline models, and the last three are different stages of our polyp-detection system. Values displayed in bold font indicate the highest or most optimal results. The abbreviation “adv.” stands for “advanced”. The FPR is given in %.
Video | YOLOv5 (Base) | F-RCNN | YOLOv5 (Adv.) | REPP | RT-REPP |
---|---|---|---|---|---
1 | 88.15 | 90.39 | 90.60 | 93.20 | 90.88 |
2 | 99.28 | 99.39 | 99.78 | 99.56 | 99.72 |
3 | 90.32 | 92.02 | 86.73 | 94.23 | 93.78 |
4 | 45.11 | 49.27 | 57.01 | 58.75 | 60.99 |
5 | 58.32 | 61.86 | 68.58 | 71.00 | 72.49 |
6 | 97.62 | 97.89 | 91.88 | 98.49 | 95.48 |
7 | 40.97 | 51.26 | 40.49 | 61.20 | 55.34 |
8 | 82.37 | 84.79 | 75.27 | 88.86 | 90.25 |
9 | 94.18 | 94.99 | 88.89 | 96.37 | 85.24 |
10 | 77.69 | 80.13 | 83.62 | 85.49 | 89.49 |
Mean | 77.40 | 80.20 | 78.29 | 84.72 | 83.37 |
Table 12.
Detailed evaluation of EndoData. This table shows a comparison of five different polyp-detection approaches on our EndoData dataset. The first two models are baseline models, and the last three models are different stages of our polyp-detection system. F1 and mAP are given in %. The abbreviation “adv.” stands for “advanced”.
Video | YOLOv5 (Base) | | F-RCNN | | YOLOv5 (Adv.) | | REPP | | RT-REPP |
---|---|---|---|---|---|---|---|---|---|---
| mAP | F1 | mAP | F1 | mAP | F1 | mAP | F1 | mAP | F1
1 | 72.77 | 72.69 | 84.23 | 82.26 | 79.25 | 82.23 | 89.84 | 89.43 | 82.98 | 84.26 |
2 | 86.30 | 86.71 | 86.04 | 90.51 | 89.06 | 94.18 | 92.83 | 95.91 | 90.01 | 94.74 |
3 | 85.65 | 85.71 | 93.10 | 92.88 | 91.20 | 91.50 | 99.10 | 97.99 | 98.51 | 97.00 |
4 | 70.57 | 73.88 | 82.88 | 78.17 | 76.96 | 79.99 | 85.43 | 85.36 | 83.67 | 83.99 |
5 | 39.45 | 54.84 | 44.23 | 56.79 | 45.84 | 58.98 | 49.60 | 63.98 | 49.28 | 62.40 |
6 | 90.22 | 90.94 | 94.02 | 92.11 | 96.13 | 96.00 | 98.38 | 97.48 | 96.75 | 97.50 |
7 | 15.12 | 34.89 | 29.13 | 47.81 | 21.66 | 43.40 | 31.72 | 53.33 | 28.41 | 46.39 |
8 | 91.14 | 86.35 | 96.32 | 92.71 | 96.66 | 94.43 | 99.46 | 98.48 | 98.67 | 97.00 |
9 | 77.49 | 80.87 | 78.48 | 84.72 | 82.61 | 87.44 | 85.11 | 89.29 | 81.61 | 86.59 |
10 | 88.73 | 87.08 | 88.28 | 89.10 | 91.95 | 94.43 | 95.82 | 96.50 | 92.28 | 94.91 |
Mean | 77.09 | 79.45 | 81.74 | 83.97 | 82.28 | 86.32 | 87.24 | 89.97 | 84.29 | 87.97 |