Design of a Non-Destructive Seed Counting Instrument for Rapeseed Pods Based on Transmission Imaging
Abstract
1. Introduction
- For seeds that have already been removed from their pods, various counting methods have been developed to enhance accuracy and efficiency. WU et al. introduced an automatic evaluation system for soybean thousand-seed weight using a marker-controlled watershed algorithm and an area-threshold method, effectively separating touching soybean seeds to mitigate the impact of seed contact on counting accuracy [5]. LIU Shuangxi proposed an adhesion segmentation algorithm based on concave-point matching for rice seed recognition, achieving an average recognition accuracy of 97.50% with an average processing time of 0.95 s per 100 seeds [6]. PENG Shunzheng developed a rapeseed counting method that combines hole detection with corner-point detection matching, achieving a detection accuracy of 88% [7]. TAN et al. presented a counting algorithm for touching rice seeds that combines a watershed algorithm, an improved corner-point detection algorithm, and a BP neural network classifier, attaining an accuracy of 94.63% under complex contact scenarios [8]. WU et al. devised an automatic identification method for corn seed counting that remains robust under varying lighting conditions, with a counting accuracy exceeding 93% [9]. PENG et al. introduced a dynamic rice seed counting algorithm based on stack elimination, achieving 100% MOTA and 83% MOTP, with a mAP of 99.5% for the improved YOLOv7 model [10]. MA et al. proposed a lightweight real-time wheat seed detection model, YOLOv8-HD, which outperformed other mainstream networks with an average detection accuracy (mAP) of 99.3% across all scenes [11]. SONG et al. proposed an improved watershed algorithm that first segments and then fuses images of touching kernels, achieving a detection accuracy of 98% for kernels on a single corn ear [12].
WANG Ling integrated the Swin Transformer module into the YOLO v7-ST model, which accurately and quickly detected grains under occlusion and adhesion at different dispersion levels, with an average counting accuracy of 99.16%, an F1 score of 93%, and an average counting time of 1.19 s [13]. CHEN et al. used continuous image sequences to associate specific soybean seed shapes based on a forced adjacency association criterion, avoiding duplicate counts and obtaining multi-angle shape features [14]. XI Xiaobo developed a recognition and detection method based on improved concave-point segmentation for wheat seed distribution during sowing, achieving a detection accuracy of up to 95% [15]. LI Qiong designed a new algorithm for detecting soybean particles using MATLAB, involving spatial filtering for noise removal and the Otsu method for optimal global threshold segmentation, showing a strong correlation between soybean particle area and hundred-seed weight [16]. MUSSADIQ et al. evaluated four open-source image analysis programs for cereal crop seed counting and found CellProfiler to be faster, more reliable, and more reproducible than the standard manual method [17].
- For counting seeds enclosed in pods, non-destructive seed counting techniques have seen significant advancements through the application of deep learning and imaging technologies. KHAKI et al. utilized deep learning to count corn kernels on single or multiple corn ears without prior threshing, achieving a root mean square error (RMSE) of 33.11 and an R2 of 95.86% across 20 different corn ears [18]. WANG Ying developed a soybean seed counting method based on VGG-T, integrating density estimation with convolutional neural network (CNN) methods to enhance accuracy [19]. SUN et al. proposed an improved Faster R-CNN algorithm for precise counting of overlapping rice seeds using pre-labeling with contour grouping, resulting in a low error rate of 1.06% [20]. DOMHOEFER et al. employed a CT system to generate 2D X-ray projection images of individual peanut pods and predicted kernel and shell weights through X-ray image processing, establishing regression equations between the predicted and actual values, and achieved an R2 of 0.93 and an average error estimate of 0.17 [21]. ZHAO et al. used backlighting to capture images of seeds within pods and applied OTSU, Faster-RCNN, and DeepLabV3+ methods for segmentation and counting of rapeseed under varying light intensities, with the DeepLabV3+ method demonstrating the highest accuracy for rapeseed segmentation and counting (R and F1 scores of 91% and 94%, respectively) [22]. UZAL et al. developed a custom feature extraction (FE) and support vector machine (SVM) classification model to estimate the number of seeds per soybean pod, achieving a recognition accuracy of 86.2% [23]. LI et al. combined density estimation with two-column neural networks to accurately calculate the number of seeds in single-pod images from one perspective, reporting a mean absolute error of 13.2 and a mean squared error of 17.62 [24]. ZHAO et al. employed an improved P2PNet for field soybean seed counting and localization, reducing the mean absolute error from 105.5 to 12.94 [25]. YAO Yehao proposed a non-destructive image processing algorithm for measuring pod length and established a relationship model between pod length and the number of grains per pod; the grain number estimated from measured pod length reached an accuracy of only 83.87%, significantly lower than that of manual counting [26]. Because this accuracy falls short of the standard required for practical applications, the method cannot be used in actual yield measurement. Additionally, TRAN et al. developed deep learning networks for segmenting melon surface spots, introducing new variants of atrous spatial pyramid pooling (ASPP) and waterfall spatial pooling (WASP) based on the multi-head self-attention (MHSA) method, enhancing the original structure and providing an effective approach for particle detection in two-dimensional image-like tasks [27].
2. Materials and Methods
2.1. Experimental Materials and Data Acquisition
2.2. Design and Implementation of Rape Pod Transmission Imaging Device
2.2.1. Integral Structure Design
2.2.2. Semi-Automated Transmission Imaging System for Rapeseed Pod Analysis
- Manual Handling and Placement:
- Trigger Mechanism Activation:
- Signal Transmission and Image Capture Initiation:
- Camera Triggering via RJ-45 Interface:
- Image Storage and Processing:
- Results Display and Documentation:
2.2.3. Workflow
- Step 1: Turn on the main power switch and the microcomputer power switch. This action will automatically activate the transmission light source, preparing the device for operation;
- Step 2: Launch the human–machine interface (HMI) software on the microcomputer. This software serves as the control center for the entire imaging process, allowing users to adjust the settings, initiate imaging sequences, and review the captured images;
- Step 3: Adjust the brightness of the LED light source while observing the transmitted image of the pod by eye, until the internal structure of the pod is clearly visible. This calibration needs to be performed only once per batch of pods; the same settings suffice for all subsequent imaging within that batch;
- Step 4: Carefully hold a rapeseed pod in an upright position and place it into the designated slot of the imaging device. Ensure that the pod is correctly oriented so that its internal seeds are fully visible and can be completely penetrated by the transmission light, guaranteeing a comprehensive image capture.
2.2.4. Comprehensive Design Overview of the Transmission Imaging Device for Rapeseed Pod Analysis
- 1. Image Acquisition Module Design
- The microcomputer (Intel® Core™ i5-12400, USA/UHD Graphics 730/8 GB RAM/512 GB SSD) acts as the host computer, controlling the entire imaging process, including sending control commands, managing the camera’s image capture sequence, and running the image processing algorithms. The microcomputer is mounted within the device frame through fixed holes;
- The touchscreen (EIMIO E16S, China) serves as the user interface, allowing operators to interact with the system, initiate imaging processes, and review the captured images;
- The industrial camera (Hikvision MV-CS060-10GM, China, 3072 × 2048 pixels) captures the high-resolution images essential for detailed seed analysis;
- The lens (MVL-HF0628M-6MPE, China, 6 mm focal length) ensures sharp focus and clear imaging of the rapeseed pods;
- The LED white light source (Optical Reach 12 V 5050 LED Strip, China) provides optimal illumination for transmission imaging. After evaluating various light sources (white, purple, orange, blue, red, green), white LED was determined to offer the best transmission effect and image quality;
- The LED control module (Geput DC12-24 V 30 A, China) adjusts the brightness of the LED light source to accommodate different pod translucencies, ensuring consistent imaging quality across diverse pod varieties.
- 2. Control Circuit Design
- The core board (QiXingChong stm32f103zet6, China) was developed to handle the control logic of the device;
- Photoelectric switches (Jiance JC-04PT Type, Wuhan) were installed 3 cm above the LED strip, with two switches spaced 5 cm apart, to detect when a pod is properly positioned for imaging. These switches output a 12 V signal;
- The optocoupler isolation module converts the 12 V signal from the photoelectric switches to 3.3 V to match the microcontroller pin voltage, ensuring safe and effective signal transmission;
- 3. Power Circuit Design
- The battery supply (Wheel Fun 24 V 2 Ah lithium battery, China) provides a 24 V DC power source to the device, which is then converted to meet the specific voltage requirements of each component;
- For the buck converter modules, one module steps down the voltage to 19.5 V for the microcomputer while another converts it to 12 V DC for the photoelectric switches, imaging camera, and the LED intensity controller;
- For the charging port, a hole on the side of the device facilitates battery recharging, ensuring uninterrupted operation.
2.2.5. Software Design
- Click “open camera” to test and connect the camera. The live video feed will be displayed on the touchscreen. If the camera connection fails, check the device for issues, resolve them, and then click “open camera” again;
- Click “start measurement” to create folders for saving images and an Excel file for storing detection results. The system is now prepared to detect a batch of pods. Manually place the pod in the correct position at the detection port, which will automatically trigger the photoelectric switch to capture and process the pod image. The detection results are stored in the created Excel file, enabling convenient data management and querying;
- Click “undo” to delete all currently saved images and results data. This is applicable when the detection results observed by the human eye have significant errors and require reloading of the sample for re-detection;
- “Manual capture” is used for manually capturing images of the special-shaped pods that cannot be normally triggered by the automatic system;
- Click “end measurement” to conclude the detection of the current batch of pods and prepare for the next batch of samples to be inspected.
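The batch-management logic behind these buttons can be sketched as below. This is an illustrative sketch only, not the authors' code: the folder layout and function names are assumptions, and a CSV file stands in for the Excel file the device actually writes.

```python
import csv
import os
import shutil

def start_measurement(batch_dir):
    # "start measurement": create the image folder and a results file
    # (CSV stands in here for the Excel file used on the device)
    os.makedirs(os.path.join(batch_dir, "images"), exist_ok=True)
    results = os.path.join(batch_dir, "results.csv")
    with open(results, "w", newline="") as f:
        csv.writer(f).writerow(["pod_id", "image_file", "seed_count"])
    return results

def record_result(results, pod_id, image_file, seed_count):
    # called after each photoelectric-switch trigger (or "manual capture"):
    # append one detection result per pod for later querying
    with open(results, "a", newline="") as f:
        csv.writer(f).writerow([pod_id, image_file, seed_count])

def undo(batch_dir):
    # "undo": discard all images and results saved for the current batch
    shutil.rmtree(batch_dir, ignore_errors=True)
```

"end measurement" would then simply close out the current `batch_dir` and call `start_measurement` again for the next batch.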
2.3. Seed Recognition Algorithm of Rape Pod Transmission Image
2.3.1. Pedicle and Beak Segmentation and Removal Algorithm
- Use the trained U-Net network (Figure 3A(b)) to segment the original color image of the pod (Figure 3A(a)), obtaining a mask image of the pod pedicle and beak (Figure 3A(c)). This mask is then applied to the binary image of the complete pod (Figure 3A(f)) to obtain the binary image of just the pod pedicle and beak (Figure 3A(d));
- Subtract the binary image of the pod pedicle and beak from the binary image of the complete pod to obtain a binary image containing only the main body of the pod (Figure 3A(g));
- Apply the mask of the main body of the rape pod to the original color image of the pod to obtain the color image of just the main body of the pod (Figure 3A(h)).
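The three masking steps above reduce to simple array operations. The sketch below assumes boolean masks and NumPy arrays; it is illustrative only (the exact OpenCV calls used in the paper are not given here).

```python
import numpy as np

def remove_pedicle_and_beak(pod_color, pod_binary, pb_mask):
    """pod_color: H x W x 3 uint8 image; pod_binary, pb_mask: H x W boolean.
    pb_mask is the U-Net segmentation output marking the pedicle and beak."""
    # binary image of just the pedicle and beak (mask applied to full pod)
    pb_binary = pod_binary & pb_mask
    # main body = complete pod minus pedicle and beak
    body_binary = pod_binary & ~pb_binary
    # color image of just the main body (zero out everything else)
    body_color = pod_color * body_binary[..., None].astype(np.uint8)
    return body_binary, body_color
```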
2.3.2. Rape Kernel Segmentation and Counting
- Subtract the result of step 2 from the result of step 1 to obtain a binary image containing only the rapeseeds (Figure 3C(f));
- Detect connected components in the final segmentation image, calculate the average area of all connected components, denoted as M, and use Formula (1) to estimate the number of rapeseeds:
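Formula (1) itself is not reproduced in this excerpt. One plausible reading, consistent with the description above, is to divide each connected component's area by the mean component area M and round, so that a blob of two touching seeds contributes two to the count. A hedged sketch:

```python
def estimate_seed_count(component_areas):
    """Estimate the seed number from connected-component pixel areas.
    This is an ASSUMED reading of Formula (1), not the published form."""
    # M: average area of all connected components
    M = sum(component_areas) / len(component_areas)
    # each component contributes round(area / M) seeds, so merged
    # (touching) seeds are still counted individually
    return sum(round(a / M) for a in component_areas)
```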
2.3.3. Adaptive Contrast Enhancement Algorithm for Rape Pod Transmission Image
- Seed grayscale image preparation: The binary seed image obtained from the first step (Figure 3C(a)) and the pod color image without the stem and beak (Figure 3B(a)) are masked and converted to grayscale to produce the seed grayscale image (Figure 3C(c)). This image is then used to calculate the standard deviation of the seeds;
- Pod skin grayscale image preparation: The R (red) channel of the main body of the pod color image (Figure 3C(e)) is extracted and subjected to fixed threshold segmentation (threshold set at 235) to obtain the binary image of the main body of the pod (Figure 3C(f)). By subtracting the central ridge and seeds, the binary image of the pod skin (Figure 3C(g)) is obtained. This binary image is then masked with the pod’s main body color image (Figure 3C(d)) to obtain the pod skin color image (Figure 3C(h)), which is then converted to grayscale to produce the pod skin grayscale image (Figure 3C(i)). This image is used to calculate the standard deviation of the pod skin;
- Image contrast calculation: Using Formula (2), the image contrast between the seeds and the pod skin is calculated.
- Contrast enhancement of the G-channel image: Based on the calculated value, contrast enhancement is applied to the G-channel image of the pod (Figure 3C(j)) using the formula provided in Equation (3).
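Formulas (2) and (3) are referenced but not reproduced in this section. The sketch below therefore makes two loudly flagged assumptions: a Michelson-style contrast between the mean gray levels of the seed and pod-skin regions for Formula (2), and a linear stretch of the G channel about its mean for Formula (3).

```python
import numpy as np

def seed_skin_contrast(seed_gray, skin_gray):
    # ASSUMED form of Formula (2): Michelson-style contrast between the
    # mean gray levels of the seed and pod-skin regions
    mu_seed = float(seed_gray.mean())
    mu_skin = float(skin_gray.mean())
    return abs(mu_seed - mu_skin) / (mu_seed + mu_skin)

def enhance_g_channel(g, gain):
    # ASSUMED form of Formula (3): linear contrast stretch of the
    # G channel about its mean, clipped back to the 8-bit range
    g = g.astype(np.float32)
    out = (g - g.mean()) * gain + g.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```

In the adaptive scheme described above, the `gain` would be chosen from the measured seed/skin contrast, with low-contrast pods receiving the strongest stretch.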
3. Performance Test of Automatic Yield Measuring Instrument for Rape
3.1. Testing of Pedicle and Beak Image Segmentation and Removal Algorithm
3.2. Test of Adaptive Contrast Enhancement Algorithm for Pod Transmission Image
3.3. Comparison Between the Algorithm in This Paper and the Deep Learning Method
3.4. Performance Test of Rapeseed Pod Grain Counter
4. Conclusions
- A rapeseed pod transmission image processing algorithm was designed. The U-Net network was used to segment the pedicle and beak in the transmission images of the pod, with the stem and beak removed to eliminate their impact on grain counting. Using the symmetrical structure characteristics of the central axis of the pod, a targeted rapeseed grain image segmentation algorithm was designed, which effectively dealt with adhesive grains and accurately counted the number of grains. To overcome the errors in grain segmentation caused by differences in pod translucency, an adaptive contrast enhancement algorithm was proposed, which appropriately converted the contrast between the skin and seeds, significantly improving the effect of grain image segmentation. Overall, this image processing algorithm has high detection accuracy for rapeseed pods of different varieties, sizes, and maturities during the green ripeness period, and its performance exceeds that of deep learning methods;
- To address the laborious and time-consuming task of counting rapeseed pod grains, a semi-automatic counting instrument was developed. It is easy to operate and portable, works offline for up to 3.5 h, and achieves a grain detection accuracy of 97.2% at a throughput of 372 pods per hour. Compared to manual counting, the instrument offers significant advantages in accuracy, throughput, and labor intensity, and is expected to become a powerful assistant for rapeseed researchers.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Li, F.; Guo, K.; Liao, X. Risk Assessment of China Rapeseed Supply Chain and Policy Suggestions. Int. J. Environ. Res. Public Health 2022, 20, 465. [Google Scholar] [CrossRef] [PubMed]
- Chen, J.; Li, Q.; Tan, Q.; Gui, S.; Wang, X.; Yi, D.; Jiang, D.; Zhou, J. Combining lightweight wheat spikes detecting model and offline Android software development for in-field wheat yield prediction. Trans. Chin. Soc. Agric. Eng. 2021, 37, 156–164. [Google Scholar] [CrossRef]
- Ashtiani, S.-H.M.; Javanmardi, S.; Jahanbanifard, M.; Martynenko, A.; Verbeek, F.J. Detection of Mulberry Ripeness Stages Using Deep Learning Models. IEEE Access 2021, 9, 100380–100394. [Google Scholar] [CrossRef]
- Ding, Y.; Wang, K.; Du, C.; Liu, X.; Chen, L.; Liu, W. Design and experiment of high-flux small-size seed flow detection device. Trans. Chin. Soc. Agric. Eng. 2020, 36, 20–28. [Google Scholar] [CrossRef]
- Wu, W.; Zhou, L.; Chen, J.; Qiu, Z.; He, Y. GainTKW: A Measurement System of Thousand Kernel Weight Based on the Android Platform. Agronomy 2018, 8, 178. [Google Scholar] [CrossRef]
- Liu, S.; Liu, Y.; Hu, A.; Zhang, Z.; Wang, H.; Li, J. Online Identification of Weedy Rice Seeds Based on ECMM Segmentation. Trans. Chin. Soc. Agric. Mach. 2022, 53, 323–333. [Google Scholar] [CrossRef]
- Peng, S.Z.; Yue, Y.B.; Feng, E.Y.; Li, L.J.; Sun, C.Q.; Zhao, Z.Y. Development and design of rapeseed counting system based on machine vision. J. Comput. Appl. 2020, 40, 142–146. [Google Scholar]
- Tan, S.; Ma, X.; Mai, Z.; Qi, L.; Wang, Y. Segmentation and counting algorithm for touching hybrid rice grains. Comput. Electron. Agric. 2019, 162, 493–504. [Google Scholar] [CrossRef]
- Wu, D.; Cai, Z.; Han, J.; Qin, H. Automatic kernel counting on maize ear using RGB images. Plant Methods 2020, 16, 79. [Google Scholar] [CrossRef]
- Peng, J.; Yang, Z.; Lv, D.; Yuan, Z. A dynamic rice seed counting algorithm based on stack elimination. Measurement 2024, 227, 114275. [Google Scholar] [CrossRef]
- Ma, N.; Su, Y.; Yang, L.; Li, Z.; Yan, H. Wheat Seed Detection and Counting Method Based on Improved YOLOv8 Model. Sensors 2024, 24, 1654. [Google Scholar] [CrossRef] [PubMed]
- Song, P.; Zhang, H.; Wang, C.; Luo, B.; Zhao, Y.; Pan, D. Design and Experiment of Maize Kernel Traits Acquisition Device. Trans. Chin. Soc. Agric. Mach. 2017, 48, 19–25. [Google Scholar]
- Wang, L.; Zhang, Q.; Feng, T.; Wang, Y.; Li, Y.; Chen, D. Wheat Grain Counting Method Based on YOLO v7-ST Model. Trans. Chin. Soc. Agric. Mach. 2023, 54, 188–197+204. [Google Scholar] [CrossRef]
- Chen, Z.; Fan, W.; Luo, Z.; Guo, B. Soybean seed counting and broken seed recognition based on image sequence of falling seeds. Comput. Electron. Agric. 2022, 196, 106870. [Google Scholar] [CrossRef]
- Xi, X.; Zhao, J.; Shi, Y.; Qu, J.; Gan, H.; Zhang, R. Online Detection Method for Wheat Seeding Distribution Based on Improved Concave Point Segmentation. Trans. Chin. Soc. Agric. Mach. 2024, 55, 75–82. [Google Scholar]
- Li, Q.; Yao, Y.; Yang, Q.; Shu, W.; Li, J.; Zhang, B.; Zhang, D.; Geng, Z. A Study on Soybean Seed Detection Method Based on MATLAB Image Processing. Chin. Agric. Sci. Bull. 2018, 34, 20–25. [Google Scholar]
- Mussadiq, Z.; Laszlo, B.; Helyes, L.; Gyuricza, C. Evaluation and comparison of open source program solutions for automatic seed counting on digital images. Comput. Electron. Agric. 2015, 117, 194–199. [Google Scholar] [CrossRef]
- Khaki, S.; Pham, H.; Han, Y.; Kuhl, A.; Kent, W.; Wang, L. Convolutional Neural Networks for Image-Based Corn Kernel Detection and Counting. Sensors 2020, 20, 2721. [Google Scholar] [CrossRef]
- Wang, Y.; Li, Y.; Wu, T.; Sun, S.; Wang, M. Counting Method of Soybean Seeds Based on Density Estimation and VGG-Two. Smart Agric. 2021, 3, 111–122. [Google Scholar] [CrossRef]
- Sun, J.; Zhang, Y.; Zhu, X.; Zhang, Y. Deep learning optimization method for counting overlapping rice seeds. J. Food Process Eng. 2021, 44, e13787. [Google Scholar] [CrossRef]
- Domhoefer, M.; Chakraborty, D.; Hufnagel, E.; Claußen, J.; Wörlein, N.; Voorhaar, M.; Anbazhagan, K.; Choudhary, S.; Pasupuleti, J.; Baddam, R.; et al. X-ray driven peanut trait estimation: Computer vision aided agri-system transformation. Plant Methods 2022, 18, 76. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Y.; Wu, W.; Zhou, Y.; Zhu, B.; Yang, T.; Yao, Z.; Ju, C.; Sun, C.; Liu, T. A backlight and deep learning based method for calculating the number of seeds per silique. Biosyst. Eng. 2021, 213, 182–194. [Google Scholar] [CrossRef]
- Uzal, L.; Grinblat, G.; Namías, R.; Larese, M.; Bianchi, J.; Morandi, E.; Granitto, P. Seed-per-pod estimation for plant breeding using deep learning. Comput. Electron. Agric. 2018, 150, 196–204. [Google Scholar] [CrossRef]
- Li, Y.; Jia, J.; Zhang, L.; Khattak, A.M.; Sun, S.; Gao, W.; Wang, M. Soybean Seed Counting Based on Pod Image Using Two-Column Convolution Neural Network. IEEE Access 2019, 7, 64177–64185. [Google Scholar] [CrossRef]
- Zhao, J.; Kaga, A.; Yamada, T.; Komatsu, K.; Hirata, K.; Kikuchi, A.; Hirafuji, M.; Ninomiya, S.; Guo, W. Improved Field-Based Soybean Seed Counting and Localization with Feature Level Considered. Plant Phenomics 2023, 5, 0026. [Google Scholar] [CrossRef] [PubMed]
- Yao, Y.; Li, Y.; Chen, Y.; Ding, Q.; He, R. Testing method for the seed number per silique of oilrape based on recognizing the silique length images. Trans. Chin. Soc. Agric. Eng. 2021, 37, 153–160. [Google Scholar] [CrossRef]
- Tran, K.-D.; Ho, T.-T.; Huang, Y.; Le, N.Q.K.; Tuan, L.Q.; Ho, V.L. MASPP and MWASP: Multi-head self-attention based modules for UNet network in melon spot segmentation. Food Meas. 2024, 18, 3935–3949. [Google Scholar] [CrossRef]
| Voltage | Module |
|---|---|
| 24 V | Lithium battery |
| 19.5 V | Microcomputer |
| 12 V | LED backlight, photoelectric switch, camera |
| Hardware/Software | Version |
|---|---|
| CPU | 12th Gen Intel® Core™ i9-12900K, 3.20 GHz |
| RAM | 64 GB |
| GPU | NVIDIA GeForce RTX 3090 Ti (24 GB) |
| System | Windows 11 |
| Programming language | Python 3.7 |
| Deep learning framework | PyTorch 1.7.1 |
| Computer vision library | OpenCV 4 |
| CUDA | CUDA 11.0 |
| cuDNN | cuDNN 8.0.5.39 |
| Detection Object | Image Quantity | Original Contrast Mean | Average Contrast After Enhancement | Kernel Count Accuracy (No Enhancement) | Kernel Count Accuracy (With Enhancement) |
|---|---|---|---|---|---|
| Low-contrast image | 100 | 0.076 | 0.315 | 92.5% | 95.7% |
| Medium-contrast image | 100 | 0.184 | 0.342 | 93.2% | 96.2% |
| High-contrast image | 100 | 0.256 | 0.332 | 93.7% | 96.4% |
| Method | Accuracy at Low Contrast (%) | Accuracy at Medium Contrast (%) | Accuracy at High Contrast (%) |
|---|---|---|---|
| YOLO | 95.9 | 96.6 | 97.2 |
| Method of this paper | 96.5 | 97.3 | 97.5 |
| Test Item | Manual | Automatic |
|---|---|---|
| Detection accuracy (%) | 93.6 | 97.2 |
| Throughput (pods/h) | 150 | 372 |
| Labor intensity | Higher | Lower |
Xu, S.; Xu, R.; Ma, P.; Huang, Z.; Wang, S.; Yang, Z.; Liao, Q. Design of a Non-Destructive Seed Counting Instrument for Rapeseed Pods Based on Transmission Imaging. Agriculture 2024, 14, 2215. https://doi.org/10.3390/agriculture14122215