Figure 1.
Installation site and camera cruise route.
Figure 2.
Examples of tomato leaf classes: healthy, Tomato bacterial spot (TBS), Tomato early blight (TEB), Tomato late blight (TLB), Tomato leaf mold (TLM), Tomato mosaic virus (TMV), Tomato septoria leaf spot (TSLS), Tomato target spot (TTS), Tomato two-spotted spider mite (TTSSM), and Tomato yellow leaf curl virus (TYLCV), respectively.
Figure 3.
Preprocessing result. The green box marks the target bounding box.
Figure 4.
An example of image expansion for a TMV sample: original, horizontal mirroring, vertical mirroring, diagonal mirroring, horizontal-vertical mirroring, diagonal-horizontal mirroring, diagonal-vertical mirroring, and diagonal-horizontal-vertical mirroring, respectively.
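The eight mirrored variants shown in Figure 4 can be produced with plain array flips and a transpose; a minimal NumPy sketch (the function name and the reading of "diagonal mirroring" as a height/width transpose are assumptions, not taken from the paper):

```python
import numpy as np

def mirror_expand(img):
    """Generate the eight mirrored variants of an image used for
    dataset expansion in Figure 4: original, horizontal, vertical,
    diagonal (H/W transpose), and their combinations."""
    h = np.fliplr(img)            # horizontal mirroring
    v = np.flipud(img)            # vertical mirroring
    d = np.swapaxes(img, 0, 1)    # diagonal mirroring (assumed: transpose of H and W)
    return [
        img,                      # original
        h,                        # horizontal
        v,                        # vertical
        d,                        # diagonal
        np.flipud(h),             # horizontal-vertical
        np.fliplr(d),             # diagonal-horizontal
        np.flipud(d),             # diagonal-vertical
        np.flipud(np.fliplr(d)),  # diagonal-horizontal-vertical
    ]
```

These eight variants form the symmetry group of the square, so applying the expansion to any of its own outputs yields no new images.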
Figure 5.
Difficulties present in greenhouse images. (a) Image blur, with wrapping paper and printed text. (b) Ground water pipes. (c) Uneven brightness and shading. (d) Differing plant morphology across growth cycles.
Figure 6.
Preliminary segmentation results. (a) Original image, (b) Bounding box annotation, and (c) Preliminary segmentation.
Figure 7.
Weakly supervised organ segmentation results. Left: Original; Right: Segmentation results.
Figure 8.
Multiscale residual learning module.
Figure 9.
Lightweight residual learning module (module 1).
Figure 10.
Lightweight residual learning module (module 2).
Figure 11.
Reduction module.
Figure 12.
Framework of the leaf disease recognition model.
Figure 13.
Overall framework of the two-step strategy.
Figure 14.
Practical examples of failed disease diagnoses. Actual tomato disease: (a) TBS, (b) TBS, (c) TLB, (d) TLM, (e) TSLS, (f) TTS, (g) TTS, (h) TTS, (i) TTSSM, (j) TTSSM, (k) TTSSM, (l) TYLCV, respectively. Incorrect diagnosis: (a) TTS, (b) TYLCV, (c) TEB, (d) TSLS, (e) TEB, (f) TBS, (g) TMV, (h) TTSSM, (i) healthy, (j) TTS, (k) TTS, (l) TTSSM, respectively.
Figure 15.
Framework of the five-stage leaf disease recognition model.
Figure 16.
Practical examples of successful leaf segmentation and disease detection.
Table 1.
Detailed information of the tomato leaf disease dataset.
Classes | Original Images | Expanded Images |
---|---|---|
healthy | 1591 | 3182 |
TBS | 2127 | 4254 |
TEB | 1000 | 3000 |
TLB | 1909 | 3818 |
TLM | 952 | 3808 |
TMV | 373 | 2984 |
TSLS | 1771 | 3542 |
TTS | 1404 | 4212 |
TTSSM | 1676 | 3352 |
TYLCV | 5357 | 5357 |
Total | 18,160 | 37,509 |
Table 2.
Algorithm implementation process.
Step 1 | Pixels outside the rectangle are marked as background; pixels inside it are marked as unknown. |
Step 2 | Create an initial segmentation: unknown pixels are classified as foreground and background pixels as background. |
Step 3 | Create a GMM each for the initial foreground and background. |
Step 4 | Each pixel in the foreground class is assigned to the most probable Gaussian component of the foreground GMM; the background class is treated likewise. |
Step 5 | Update the GMMs according to the pixel assignments of the previous step. |
Step 6 | Build a graph and run the Graph cut [23] algorithm to generate a new pixel classification (probable foreground and background). |
Step 7 | Repeat steps 4–6 until convergence. |
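Steps 4 and 5 of Table 2 can be sketched in NumPy: a hard assignment of each pixel to its most probable Gaussian component, followed by re-estimation of each component's parameters. This is an illustrative simplification of one GrabCut iteration (the function name, the in-place update, and the covariance regularization term are assumptions; the graph-cut step is not shown):

```python
import numpy as np

def assign_and_update(pixels, means, covs, weights):
    """One GMM refinement pass in the style of Table 2, steps 4-5.
    pixels: (n, 3) colors of one class; means: (k, 3); covs: (k, 3, 3);
    weights: (k,). Returns the component assignment and updated GMM."""
    n, k = pixels.shape[0], means.shape[0]
    logp = np.empty((n, k))
    for j in range(k):
        diff = pixels - means[j]
        inv = np.linalg.inv(covs[j])
        # Mahalanobis distance of every pixel to component j.
        mahal = np.einsum('ni,ij,nj->n', diff, inv, diff)
        logdet = np.linalg.slogdet(covs[j])[1]
        logp[:, j] = np.log(weights[j]) - 0.5 * (mahal + logdet)
    comp = logp.argmax(axis=1)          # step 4: most probable component
    for j in range(k):                  # step 5: re-estimate parameters
        pk = pixels[comp == j]
        if len(pk) > 1:
            means[j] = pk.mean(axis=0)
            covs[j] = np.cov(pk.T) + 1e-6 * np.eye(pixels.shape[1])
            weights[j] = len(pk) / n
    return comp, means, covs, weights
```

In the full algorithm this pass is run once for the foreground GMM and once for the background GMM before the graph-cut relabeling of step 6.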
Table 3.
Model module output size.
Layer | Input | Stage1 | MaxPool | Reduction1 | Stage2 | Reduction2 |
---|---|---|---|---|---|---|
Output Size | 256 × 256 × 3 | 128 × 128 × 64 | 64 × 64 × 64 | 32 × 32 × 128 | 32 × 32 × 128 | 16 × 16 × 256 |

Layer | Stage3 | Reduction3 | Stage4 | AvgPool | FC | Softmax |
---|---|---|---|---|---|---|
Output Size | 16 × 16 × 256 | 8 × 8 × 512 | 8 × 8 × 512 | 512 | 10 | 10 |
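The size progression in Table 3 follows a regular pattern: Stage1 and MaxPool each halve the spatial resolution, and each Reduction halves it again while doubling the channel count, with the intervening Stages preserving size. A small sketch reproducing the table's feature-map sizes under that assumed pattern (AvgPool/FC/Softmax are omitted since they collapse the spatial dimensions):

```python
def trace_table3():
    """Reproduce the convolutional output sizes of Table 3 as
    (height, width, channels), assuming each Reduction halves H and W
    and doubles C while each Stage keeps the size unchanged."""
    sizes = {}
    h, c = 256, 3
    h, c = h // 2, 64               # Stage1: stride-2 entry, 64 channels
    sizes['Stage1'] = (h, h, c)
    h = h // 2                      # MaxPool halves the feature map
    sizes['MaxPool'] = (h, h, c)
    for i in range(1, 4):
        h, c = h // 2, c * 2        # Reduction_i: halve H,W; double C
        sizes[f'Reduction{i}'] = (h, h, c)
        sizes[f'Stage{i+1}'] = (h, h, c)
    return sizes
```

The five-stage variant of Table 6 extends the same pattern with one more Reduction/Stage pair, ending at 4 × 4 × 1024.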
Table 4.
Image pixel segmentation accuracy.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
FCM [31] | 0.483 | 0.689 | 0.554 | 0.852 | 0.785 | 0.841 | 0.723 | 0.862 | 0.723 | 0.897 |
Coseg [32] | 0.858 | 0.798 | 0.891 | 0.673 | 0.735 | 0.564 | 0.745 | 0.884 | 0.885 | 0.913 |
LDA [33] | 0.612 | 0.710 | 0.563 | 0.465 | 0.687 | 0.715 | 0.543 | 0.737 | 0.674 | 0.687 |
Proposed | 0.963 | 0.961 | 0.975 | 0.925 | 0.892 | 0.963 | 0.917 | 0.905 | 0.937 | 0.954 |
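The per-image scores in Table 4 are presumably pixel-level segmentation accuracy, i.e. the fraction of pixels whose predicted foreground/background label matches the ground truth; a minimal sketch of that metric (the exact definition used in the paper is not restated here, so this is an assumption):

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels where the predicted mask agrees with the
    ground-truth mask. Both arrays must share the same shape."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    assert pred.shape == gt.shape
    return float((pred == gt).mean())
```

For example, a 2 × 2 prediction differing from the ground truth in one pixel scores 0.75.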
Table 5.
Comparison of recognition metrics of different deep models.
Models | Accuracy (%) | GFLOPs | FPS (images/s) | Model Loading Time (s) |
---|---|---|---|---|
VGG16 [19] | 94.65 | 35.82 | 73 | 1.81 |
VGG19 [19] | 95.19 | 46.69 | 64 | 2.15 |
ResNet-18 [20] | 97.2 | 6.98 | 368 | 0.27 |
ResNet-50 [20] | 96.95 | 14.06 | 119 | 0.85 |
Inception V4 [34] | 97.35 | 33.91 | 90 | 2.11 |
Inception-ResNet V2 [34] | 98.24 | 25.29 | 115 | 1.64 |
MobileNet-V1 [27] | 96.52 | 1.49 | 291 | 0.59 |
MobileNet-V2 [35] | 95.14 | 0.96 | 229 | 0.74 |
Res2net-50 [36] | 97.26 | 9.53 | 112 | 1.88 |
Proposed | 98.61 | 2.80 | 276 | 1.07 |
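Throughput figures like the FPS column of Table 5 are typically obtained by timing repeated forward passes after a warm-up period; a framework-agnostic sketch (the `infer` callable, batch contents, and iteration counts are illustrative assumptions, not the paper's protocol):

```python
import time

def measure_fps(infer, batch, n_iter=100, warmup=10):
    """Rough throughput in images/sec: run `infer` on `batch` a few
    times untimed (warm-up), then average over `n_iter` timed passes."""
    for _ in range(warmup):
        infer(batch)
    t0 = time.perf_counter()
    for _ in range(n_iter):
        infer(batch)
    dt = time.perf_counter() - t0
    return n_iter * len(batch) / dt
```

With a GPU-backed model, a synchronization call would also be needed before each timestamp so that queued kernels are not counted as free.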
Table 6.
Output dimensions of each module of the model.
Layer | Image | Stage1 | MaxPool | Reduction1 | Stage2 | Reduction2 | Stage3 |
---|---|---|---|---|---|---|---|
Output Size | 256 × 256 × 3 | 128 × 128 × 64 | 64 × 64 × 64 | 32 × 32 × 128 | 32 × 32 × 128 | 16 × 16 × 256 | 16 × 16 × 256 |

Layer | Reduction3 | Stage4 | Reduction4 | Stage5 | AvgPool | FC | Softmax |
---|---|---|---|---|---|---|---|
Output Size | 8 × 8 × 512 | 8 × 8 × 512 | 4 × 4 × 1024 | 4 × 4 × 1024 | 1024 | 10 | 10 |
Table 7.
Comparison of recognition metrics for models of different depths.
Models | Accuracy (%) | GFLOPs | FPS (images/s) | Model Loading Time (s) |
---|---|---|---|---|
Proposed-S5 | 98.72 | 3.74 | 187 | 1.12 |
Proposed | 98.61 | 2.80 | 276 | 1.07 |