Article

Design of a Non-Destructive Seed Counting Instrument for Rapeseed Pods Based on Transmission Imaging

Shengyong Xu, Rongsheng Xu, Pan Ma, Zhenhao Huang, Shaodong Wang, Zhe Yang and Qingxi Liao
1 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
2 Key Laboratory of Agricultural Equipment for the Middle and Lower Reaches of the Yangtze River, Ministry of Agriculture, Huazhong Agricultural University, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(12), 2215; https://doi.org/10.3390/agriculture14122215
Submission received: 2 November 2024 / Revised: 26 November 2024 / Accepted: 28 November 2024 / Published: 4 December 2024
(This article belongs to the Section Agricultural Technology)

Abstract

Pod counting of rapeseed is a critical step in breeding, cultivation, and agricultural machinery research. Currently, this process is performed entirely by hand, which is both labor-intensive and inefficient. This study aims to develop a semi-automatic counting instrument based on transmission image processing and proposes a new algorithm for processing transmission images of pods to achieve non-destructive, accurate, and rapid determination of the seed count per pod. Initially, the U-Net network was used to segment and remove the pedicel and beak from the pod image; subsequently, adaptive contrast enhancement was applied to adjust the contrast of the G-channel image of the pod to an appropriate range, effectively eliminating the influence of different varieties and maturity levels on the translucency of the pod skin. After contrast enhancement, the Sauvola algorithm was employed for threshold segmentation to remove the pod skin, followed by thinning and dilation of the binary image to extract and remove the central ridge line, after which the number and area of connected domains were detected. Finally, the seed count was determined from the ratio of each connected domain’s area to the mean area of all connected domains. A transmission imaging device that mimics the human eye’s method of counting seeds was designed, incorporating an LED transmission light source, a photoelectric switch-triggered imaging slot, an industrial camera, and an integrated packaging frame. Human–machine interaction software based on PyQt5 was developed, integrating functions such as communication between the upper and lower machines, image acquisition, storage, and processing. Operators simply place the pod in an upright position into the imaging device, where its transmission image is automatically captured and processed. The results are displayed on a touchscreen and stored in Excel spreadsheets. The experimental results show that the instrument is accurate, user-friendly, and significantly reduces labor intensity. For various varieties of rapeseed pods, the seed counting accuracy reached 97.2% with a throughput of 372 pods/h; both figures are significantly better than manual counting, indicating considerable potential for practical application.

1. Introduction

Rapeseed, ranking among the top four global oil crops, is a primary source of edible vegetable oils and plant proteins. It is also the most extensively cultivated and important oil crop in China. In the fields of rapeseed breeding, cultivation, and mechanization research, yield is a critical indicator [1]. Rapeseed yield depends on three factors: the number of pods per plant, the average number of seeds per pod, and the weight of 1000 seeds [2]. Among these, the most crucial yet labor-intensive aspect is the identification and counting of seeds per pod. Traditional methods involve manually holding the pods under light and visually counting the number of seeds. According to surveys, the labor cost for manual seed counting ranges between $20 and $30 per hour. A detailed seed count measurement involving a large sample set may require $50–$100 or more. For tasks requiring extensive rapeseed seed counts with numerous samples, the daily labor and time costs can accumulate to $400–$800. Additionally, the accuracy of manual counting is quite low, with an average error rate of about 10%. Moreover, prolonged working hours lead to operator fatigue, further reducing counting precision. These limitations highlight the inefficiency and inaccuracy of manual counting in rapeseed yield assessment, underscoring the necessity for automated or semi-automated solutions to improve efficiency and accuracy.
Currently, seed counting techniques are primarily categorized into electronic sensor counting and computer vision counting. The application of deep learning networks in agricultural product detection has made it increasingly convenient, fast, and accurate to measure indicators such as pod ripeness and quantity [3]. This paper therefore discusses the contributions and limitations of different seed counting methods. Counting methods based on photoelectric and piezoelectric sensors can achieve large-scale seed counts in a relatively short time, with high efficiency and good repeatability, and have been extensively researched and applied in seed counting [4]. In contrast, counting methods based on computer vision offer advantages such as high speed, efficiency, precision, low cost, and non-destructiveness, meeting the broader needs of current agricultural development and advancing significantly in recent years.
According to whether or not the seeds are removed for detection, the existing seed detection methods can be divided into two main categories:
  • After removing the seeds for counting, various methods have been developed to enhance accuracy and efficiency. WU et al. introduced an automatic evaluation system for soybean thousand-seed weight using a marker-controlled watershed algorithm and an area threshold method, effectively separating touching soybean seeds to mitigate the impact of seed contact on counting accuracy [5]. LIU Shuangxi proposed an adhesion segmentation algorithm based on concave point matching for rice seed recognition, achieving an average recognition accuracy of 97.50% with an average processing time of 0.95 s per 100 seeds [6]. PENG Shunzheng developed a rapeseed counting method that utilizes hole detection and corner point detection matching, resulting in a detection accuracy rate of 88% [7]. TAN et al. presented a separation and counting algorithm for touching rice seeds that combines a watershed algorithm, an improved corner point detection algorithm, and a BP neural network classification algorithm, attaining an accuracy of 94.63% under complex contact scenarios [8]. WU et al. devised an automatic identification method for corn seed counting that maintains robustness under varying lighting conditions, with a counting accuracy exceeding 93% [9]. PENG et al. introduced a dynamic rice seed counting algorithm based on stack elimination, achieving 100% MOTA and 83% MOTP, with an mAP of 99.5% for the improved YOLOv7 model [10]. MA et al. proposed a lightweight real-time wheat seed detection model, YOLOv8-HD, which outperformed other mainstream networks with a mean average precision (mAP) of 99.3% across all scenes [11]. SONG et al. proposed an improved watershed algorithm based on first segmenting and then fusing images of touching kernels, achieving a detection accuracy of 98% for single-ear corn kernels [12]. WANG Ling integrated the Swin Transformer module into the YOLO v7-ST model, which accurately and quickly detected grain occlusion and adhesion at different dispersion levels, with an average counting accuracy of 99.16%, an F1 value of 93%, and an average counting time of 1.19 s [13]. CHEN et al. used continuous images to associate specific soybean seed shapes based on a forced adjacency association criterion, avoiding duplicate counts and obtaining multi-angle shape features [14]. XI Xiaobo developed a recognition and detection method based on improved concave point segmentation for wheat seed distribution during sowing, achieving a detection accuracy of up to 95% [15]. LI Qiong designed a new algorithm for detecting soybean particles using MATLAB, involving spatial filtering for noise removal and the Otsu method for optimal global threshold segmentation, showing a strong correlation between the soybean particle area and the hundred-seed weight [16]. MUSSADIQ et al. evaluated four open-source image analysis programs for cereal crop seed counting and found CellProfiler’s image analysis pipeline to be faster, more reliable, and more reproducible than the standard manual method [17];
  • For counting seeds enclosed in pods, non-destructive seed counting techniques have seen significant advancements through the application of deep learning and imaging technologies. KHAKI et al. utilized deep learning to count corn kernels on single or multiple corn ears without prior threshing, achieving a root mean square error (RMSE) of 33.11 and an R2 of 95.86% across 20 different corn ears [18]. WANG Ying developed a soybean seed counting method based on VGG-T, integrating density estimation with convolutional neural network (CNN) methods to enhance accuracy [19]. SUN et al. proposed an improved Faster R-CNN algorithm for precise counting of overlapping rice seeds using pre-labeling with contour grouping, resulting in a low error rate of 1.06% [20]. DOMHOEFER et al. employed a CT system to generate 2D X-ray projection images of individual peanut pods and predicted kernel and shell weights through X-ray image processing, establishing regression equations between the predicted and actual values and achieving an R2 of 0.93 and an average error estimate of 0.17 [21]. ZHAO et al. used backlighting to capture images of seeds within pods and applied OTSU, Faster-RCNN, and DeepLabV3+ methods for segmentation and counting of rapeseed under varying light intensities, with the DeepLabV3+ method demonstrating the highest accuracy for rapeseed segmentation and counting (R and F1 scores of 91% and 94%, respectively) [22]. UZAL et al. developed a custom feature extraction (FE) and support vector machine (SVM) classification model to estimate the number of seeds per soybean pod, achieving a recognition accuracy of 86.2% [23]. LI et al. combined density estimation with a two-column neural network to calculate the number of seeds in single-pod images from one perspective, reporting a mean absolute error of 13.2 and a mean squared error of 17.62 [24]. ZHAO et al. employed an improved P2PNet for field soybean seed counting and localization, reducing the mean absolute error from 105.5 to 12.94 [25]. YAO Yehao proposed a non-destructive image processing algorithm for measuring pod length and established a relationship model between pod length and the number of grains per pod; the grain number estimated from pod length reached an accuracy of only 83.87%, significantly lower than that of manual measurement, so the method falls short of the standard required for practical yield measurement [26]. Additionally, TRAN et al. developed deep learning networks for segmenting melon surface spots, introducing new variants of atrous spatial pyramid pooling (ASPP) and waterfall spatial pooling (WASP) based on the multi-head self-attention (MHSA) method, enhancing the original structure and providing an effective approach for particle detection in two-dimensional image-like tasks [27].
In summary, technical approaches to counting seeds after removal are well developed. However, methods for in situ counting of seeds without threshing hold greater research and application value, because they preserve the seed information of each pod and eliminate procedures such as threshing, despite presenting greater research challenges. For pods, there are two imaging methods: orthographic and transillumination. Orthographic imaging is convenient and low-cost, indirectly identifying seeds by observing the impact of the seeds on the exterior of the pod, but its recognition rate is difficult to guarantee. Transillumination imaging can directly capture seed information inside the pod and, with advanced computer vision algorithms, can achieve a high recognition rate, but it is more expensive and the imaging setup is more complex. Both X-rays and visible light can be used as transillumination sources. Compared with X-rays, visible light is lower in cost and higher in throughput, making it more suitable as a transillumination source. Therefore, this article proposes a semi-automatic, non-destructive seed counting solution for rapeseed pods. A transillumination image acquisition device based on white LED light was designed and manufactured, along with a pod transillumination image processing algorithm and human–machine software, integrated into a highly portable automatic yield measurement instrument. This instrument can accurately measure the number of seeds in a single upright pod, promising to replace the cumbersome and inefficient manual measurement method, and holds significant potential for widespread application.
In this paper, we first provide an overview of the background related to the research topic (Section 1), highlighting the motivations and necessity of the study. Next, in Section 2, we describe the materials and methods used in this study, including the specific parameters of the designed device and the proposed algorithm. Section 3 presents the performance of the device and discusses it in detail, particularly in comparison to existing studies. Finally, Section 4 summarizes the key findings of this research and suggests future directions for further investigation.

2. Materials and Methods

2.1. Experimental Materials and Data Acquisition

From April to May 2023, a total of 20 plants of two rapeseed varieties, ‘Huayou No. 62’ and ‘Zhongshuang’, at the green maturity stage were collected from the experimental rapeseed field at Huazhong Agricultural University, Wuhan, China. Using scissors, the pods were individually clipped from the plants along the growth direction of the stem, yielding single, intact rapeseed pods that included the pedicel and beak.
Transmission images of the pods were captured using a self-developed pod transmission imaging system. During the image acquisition process, the industrial camera was positioned vertically downward with the lens approximately 10 cm from the pod (optimal imaging distance). The experiment was conducted in a naturally lit room, with the rapeseed pods placed on an adjustable brightness LED backlight panel set to a color temperature of 5500 K. Backlit image acquisition was controlled using self-developed software. For each plant, 100 pods were randomly selected for imaging, resulting in a total of 4000 pods being captured. Subsequently, the number of seeds per pod was counted manually in two steps: first, by visually counting the seeds in each pod, then by carefully opening the pods to count the seeds accurately. The time taken for both counting methods was also recorded and compared.

2.2. Design and Implementation of Rape Pod Transmission Imaging Device

2.2.1. Integral Structure Design

To simulate the human eye’s observation of the pods, the light source and observation angle were positioned on either side of the pod for backlight observation. A transmission imaging system was designed based on this setup to provide high-quality image sources for the transmission image processing algorithm. Given the significant variation in size and shape among rapeseed pods, developing a fully automated loading mechanism proved challenging; hence, manual loading and unloading were adopted instead. The schematic diagram of the device is illustrated in Figure 1.
The device featured an integrated enclosure that ensured structural stability, compactness, portability, and ease of use by operators, thereby minimizing user fatigue. Its main frame was constructed from 1 mm thick steel plates through stamping and welding processes. All components were powered by a unified 12 V DC supply, with connections facilitated by pluggable connectors. Screw mounting holes and cable passageways were strategically incorporated to simplify the assembly, disassembly, wiring, and securing of various modules.
To assemble the transmission illumination source, custom LED strips equipped with diffusers were employed. An LED controller allowed for stepless adjustment of the light intensity, accommodating different batches, varieties, and moisture content levels of the rapeseed pods to ensure optimal image quality while also reducing the power consumption and light pollution. A deep-channel image capture area was situated above the backlight source, equipped with an opposed photoelectric switch arrangement that enabled automatic triggering of the imaging process.

2.2.2. Semi-Automated Transmission Imaging System for Rapeseed Pod Analysis

  • Manual Handling and Placement:
The user manually holds the rapeseed pod in an upright position and places it into the detection slot of the device. This step requires careful alignment to ensure that the pod is correctly positioned for imaging;
  • Trigger Mechanism Activation:
Upon insertion, the pod activates two sets of photoelectric switches located within the detection slot. These switches are strategically positioned to detect the presence and proper orientation of the pod, ensuring consistent imaging conditions;
  • Signal Transmission and Image Capture Initiation:
The activation of the photoelectric switches generates a trigger signal, which is sent to the lower-level machine (slave unit). In response, the lower-level machine sends an image capture command to the upper-level machine (master unit);
  • Camera Triggering via RJ-45 Interface:
The upper-level machine receives the command and, via an RJ-45 interface, triggers the camera to capture a transmission image of the rapeseed pod. This interface ensures reliable communication between the control unit and the camera, facilitating precise timing for image acquisition (a host-side sketch of this handshake follows the list);
  • Image Storage and Processing:
Once captured, the images are automatically saved and processed using pre-configured algorithms. These algorithms analyze the internal structure of the rapeseed pod, identifying and counting the individual seeds with high accuracy, even in complex arrangements where the seeds may overlap or be partially obscured;
  • Results Display and Documentation:
The processed results, including the seed counts and any relevant morphological data, are displayed on a human–machine interface (HMI) software screen for immediate review by the operator. Additionally, the system saves these results in an Excel document, allowing for easy record-keeping, data analysis, and sharing with other stakeholders or for further research purposes.
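Steps 2–4 amount to a simple master–slave handshake. The following Python sketch illustrates one way the host side could poll for the trigger; the one-byte trigger message, the serial port parameters, and the capture_image/save_and_process helpers are assumptions made for illustration, not the instrument’s actual protocol.

```python
import serial  # pyserial; assumes the STM32 slave is reachable over a serial/USB link


def capture_image():
    """Placeholder for the camera SDK grab over the RJ-45 (GigE) interface."""
    raise NotImplementedError


def save_and_process(image):
    """Placeholder: store the frame, then run the seed counting pipeline."""
    raise NotImplementedError


def wait_for_pod_and_capture(port="COM3", baud=115200):
    """Poll the slave; capture one transmission image per photoelectric trigger."""
    with serial.Serial(port, baud, timeout=0.1) as link:
        while True:
            if link.read(1) == b"T":  # assumed one-byte trigger message from the slave
                save_and_process(capture_image())
```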

2.2.3. Workflow

Here is a comprehensive guide through each step:
  • Step 1: Turn on the main power switch and the microcomputer power switch. This action will automatically activate the transmission light source, preparing the device for operation;
  • Step 2: Launch the human–machine interface (HMI) software on the microcomputer. This software serves as the control center for the entire imaging process, allowing users to adjust the settings, initiate imaging sequences, and review the captured images;
  • Step 3: Adjust the brightness of the LED light source while simultaneously observing the pod through the transmission effect with your eyes. The goal is to achieve a clear and well-defined view of the internal structure of the pod. It is important to note that this calibration needs to be performed only once per batch of pods, as the same settings should suffice for all of the subsequent imaging within that batch;
  • Step 4: Carefully hold a rapeseed pod in an upright position and place it into the designated slot of the imaging device. Ensure that the pod is correctly oriented so that its internal seeds are fully visible and can be completely penetrated by the transmission light, guaranteeing a comprehensive image capture.
Once the pod is properly placed, the device automatically triggers the camera to take a transmission image of the pod. This image is then automatically saved for further processing and analysis. The system is designed to handle this process seamlessly, reducing user intervention and ensuring consistency across multiple images.

2.2.4. Comprehensive Design Overview of the Transmission Imaging Device for Rapeseed Pod Analysis

The design of the transmission imaging device for the rapeseed pod analysis encompasses three critical components: the image acquisition module, control circuit, and power circuit. Each component plays a pivotal role in ensuring the system’s operational efficiency and accuracy in capturing high-quality images for seed analysis.
1. Image Acquisition Module Design
The image acquisition module is the core of the imaging device and is responsible for capturing the transmission images of the rapeseed pods. It consists of several key elements:
  • The microcomputer (Intel® Core i5-12400, USA; UHD Graphics 730; 8 GB RAM; 512 GB SSD) acts as the upper machine, controlling the entire imaging process, including sending control commands, managing the camera’s image capture sequence, and deploying image processing algorithms. The microcomputer is mounted within the device frame through fixed holes;
  • The touchscreen (EIMIO E16S, China) serves as the user interface, allowing operators to interact with the system, initiate imaging processes, and review the captured images;
  • The industrial camera (Hikvision MV-CS060-10GM, China, 3072 × 2048 pixels) captures the high-resolution images essential for detailed seed analysis;
  • The lens (MVL-HF0628M-6MPE, China, 6 mm focal length) ensures sharp focus and clear imaging of the rapeseed pods;
  • The LED white light source (Optical Reach 12 V 5050 LED Strip, China) provides optimal illumination for transmission imaging. After evaluating various light sources (white, purple, orange, blue, red, green), white LED was determined to offer the best transmission effect and image quality;
  • The LED control module (Geput DC12-24 V 30 A, China) adjusts the brightness of the LED light source to accommodate different pod translucencies, ensuring consistent imaging quality across diverse pod varieties.
2. Control Circuit Design
The control circuit acts as the lower machine, interfacing between the hardware components and the upper machine:
  • The core board (QiXingChong stm32f103zet6, China) was developed to handle the control logic of the device;
  • Photoelectric switches (Jiance JC-04PT Type, Wuhan) were installed 3 cm above the LED strip, with two switches spaced 5 cm apart, to detect when a pod is properly positioned for imaging. These switches output a 12 V signal;
  • The optocoupler isolation module converts the 12 V signal from the photoelectric switches to 3.3 V to match the microcontroller pin voltage, ensuring safe and effective signal transmission;
3. Power Circuit Design
Power management is crucial for the reliable operation of the imaging device:
  • The battery supply (Wheel Fun 24 V 2 Ah lithium battery, China) provides a 24 V DC power source to the device, which is then converted to meet the specific voltage requirements of each component;
  • For the buck converter modules, one module steps down the voltage to 19.5 V for the microcomputer while another converts it to 12 V DC for the photoelectric switches, imaging camera, and the LED intensity controller;
  • For the charging port, a hole on the side of the device facilitates battery recharging, ensuring uninterrupted operation.
This meticulously designed imaging device integrates advanced technology to deliver precise and efficient seed analysis, catering to the needs of agricultural researchers and enhancing the overall productivity and accuracy of rapeseed pod assessments.

2.2.5. Software Design

The physical device and accompanying software for the rapeseed pod seed counter are shown in Figure 2. The interface design was carried out using Qt Designer, the visual interface design tool provided with PyQt5. The UI file generated from the system interface designed in Qt Designer was converted into a Python file using the PYUIC toolkit and can be directly invoked by a Python program. The software integrates four core functions: camera connection and detection start/stop control; image capture, display, and storage; rapeseed seed counting; and operation cancellation and re-inspection. These functions are arranged top-down on the human–machine interface (HMI). The main function buttons are “open camera”, “start measurement”, “end measurement”, “undo”, and “manual capture”. The workflow is as follows (a minimal interface skeleton is given after the list):
  • Click “open camera” to test and connect the camera. The live video feed will be displayed on the touchscreen. If the camera connection fails, check the device for issues, resolve them, and then click “open camera” again;
  • Click “start measurement” to create folders for saving images and an Excel file for storing detection results. The system is now prepared to detect a batch of pods. Manually place the pod in the correct position at the detection port, which will automatically trigger the photoelectric switch to capture and process the pod image. The detection results are stored in the created Excel file, enabling convenient data management and querying;
  • Click “undo” to delete all currently saved images and results data. This is applicable when the detection results observed by the human eye have significant errors and require reloading of the sample for re-detection;
  • “Manual capture” is used for manually capturing images of the special-shaped pods that cannot be normally triggered by the automatic system;
  • Click “end measurement” to conclude the detection of the current batch of pods and prepare for the next batch of samples to be inspected.
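As a concrete illustration of the button layout and wiring described above, here is a minimal PyQt5 skeleton. The class name and the empty slot bodies are placeholders; the actual software additionally embeds the live video feed, captured-image display, and Excel export.

```python
import sys
from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton,
                             QVBoxLayout, QWidget)


class SeedCounterHMI(QMainWindow):
    """Minimal sketch of the HMI: the five main buttons stacked top-down."""

    def __init__(self):
        super().__init__()
        self.setWindowTitle("Rapeseed Pod Seed Counter")
        layout = QVBoxLayout()
        for label, slot in [("open camera", self.open_camera),
                            ("start measurement", self.start_measurement),
                            ("end measurement", self.end_measurement),
                            ("undo", self.undo),
                            ("manual capture", self.manual_capture)]:
            button = QPushButton(label)
            button.clicked.connect(slot)
            layout.addWidget(button)
        root = QWidget()
        root.setLayout(layout)
        self.setCentralWidget(root)

    def open_camera(self): ...        # test and connect the camera
    def start_measurement(self): ...  # create image folders and the Excel results file
    def end_measurement(self): ...    # close the current batch
    def undo(self): ...               # delete the current images and results
    def manual_capture(self): ...     # manual trigger for special-shaped pods


if __name__ == "__main__":
    app = QApplication(sys.argv)
    hmi = SeedCounterHMI()
    hmi.show()
    sys.exit(app.exec_())
```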

2.3. Seed Recognition Algorithm of Rape Pod Transmission Image

The overall process for processing the transmitted images of rapeseed pods is illustrated in Figure 3. It encompasses three primary stages: pedicel and beak segmentation and removal (Figure 3A), seed segmentation and counting (Figure 3B), and adaptive contrast enhancement of the transmitted image (Figure 3C). The actual execution sequence of the program is A->B->C->B, with the process in Figure 3B being executed twice. During the first run, preliminary segmentation of the rapeseed images is performed and the result is passed to Figure 3C; after adaptive contrast enhancement has been applied to the source image, the process in Figure 3B is run again to output the final detection results.
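The A->B->C->B order can be written down as a short driver function. This is a control-flow sketch only: the three stage functions are stand-ins for the algorithms of Sections 2.3.1–2.3.3, and their names and signatures are illustrative, not the authors’ code.

```python
# Stage stubs keep the sketch self-contained; the real implementations are
# outlined in the sections that follow. All names here are illustrative.
def remove_pedicel_and_beak(pod_bgr):           # stage A (Figure 3A)
    raise NotImplementedError

def segment_seeds(image):                       # stage B (Figure 3B): (binary, count)
    raise NotImplementedError

def enhance_contrast(body_bgr, seeds_binary):   # stage C (Figure 3C)
    raise NotImplementedError


def count_seeds(pod_bgr):
    """Run the A -> B -> C -> B sequence of Figure 3 on one pod image."""
    body = remove_pedicel_and_beak(pod_bgr)
    preliminary, _ = segment_seeds(body)            # first pass: rough seed mask
    enhanced = enhance_contrast(body, preliminary)  # adaptive gamma on the G channel
    _, number_of_seeds = segment_seeds(enhanced)    # second pass: final count
    return number_of_seeds
```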

2.3.1. Pedicel and Beak Segmentation and Removal Algorithm

The pedicel and beak of the rape pod are prone to being misidentified as seeds, because under low-light conditions they resemble seeds at their junction with the main body of the pod; they therefore require prior removal. A U-Net semantic segmentation network was used to segment the pedicel and beak regions of the pod images. The PASCAL VOC dataset format was used as a template, and a training dataset consisting of 4000 images of rape pods was constructed. Manual annotation was performed using the Labelme image annotation software, with the pedicel and beak treated as separate classes labeled ‘GB’ and ‘GH’, respectively. The annotation information was saved in JSON files corresponding to the original images. To enhance the generalization ability of the model, data augmentation operations such as translation, brightness change, and noise addition were applied to both the annotated images and the corresponding JSON files, expanding the dataset to 12,000 images. The expanded dataset was divided into three non-overlapping subsets: 8400 images for training, 2400 for validation, and 1200 for testing. The model was trained on a Windows 11 system, with the hardware and software configurations listed in Table 2. During model training, the input image size was set to 960 × 960 and stochastic gradient descent (SGD) was used as the optimizer, with a maximum learning rate of 0.01. Mosaic data augmentation was enabled and the weight decay was set to 5 × 10−4. Label smoothing was applied with a factor of 0.005. The batch size was set to 8 and training was conducted over 300 epochs. Cosine annealing scheduling was utilized to accelerate the reduction in training loss. The model was saved every 10 epochs, and the best-performing model on the validation set was selected as the final model.
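For orientation, the stated optimizer and schedule can be assembled in PyTorch roughly as follows. The momentum value, the stand-in model, and the omitted data/loss pipeline are assumptions of this sketch, not details reported in the text.

```python
import torch

# Stand-in module so the snippet runs; replace with the actual U-Net model.
unet = torch.nn.Conv2d(3, 2, kernel_size=1)

optimizer = torch.optim.SGD(unet.parameters(), lr=0.01,      # maximum learning rate
                            momentum=0.9,                    # assumed, not reported
                            weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    # ... forward pass on 960x960 batches of 8, cross-entropy with label
    # smoothing (0.005), Mosaic-augmented data, backward pass ...
    optimizer.step()
    scheduler.step()
    if (epoch + 1) % 10 == 0:                                # checkpoint every 10 epochs
        torch.save(unet.state_dict(), f"unet_epoch{epoch + 1:03d}.pth")
```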
The process for removing the pod pedicel and beak is shown in Figure 3A. The specific steps are as follows (a consolidated sketch follows the list):
  • Use the trained U-Net network (Figure 3A(b)) to segment the original color image of the pod (Figure 3A(a)), obtaining a mask image of the pod pedicle and beak (Figure 3A(c)). This mask is then applied to the binary image of the complete pod (Figure 3A(f)) to obtain the binary image of just the pod pedicle and beak (Figure 3A(d));
  • Extract the red (R) channel image from the original color image of the pod (Figure 3A(e)). Apply inverse thresholding with a fixed threshold of 230, setting the pixels below this value to 255 (white) and the rest to 0 (black), resulting in the binary image of the complete pod (Figure 3A(f));
  • Subtract the binary image of the pod pedicle and beak from the binary image of the complete pod to obtain a binary image containing only the main body of the pod (Figure 3A(g));
  • Apply the mask of the main body of the rape pod to the original color image of the pod to obtain the color image of just the main body of the pod (Figure 3A(h)).
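The four steps condense into a few OpenCV calls. The sketch below assumes the U-Net mask for the pedicel and beak has already been predicted as a 0/255 image of the same size as the input; the function signature is illustrative.

```python
import cv2
import numpy as np


def remove_pedicel_and_beak(pod_bgr: np.ndarray, pedicel_beak_mask: np.ndarray):
    """Return the color image and binary mask of the pod body only (steps 1-4)."""
    r = pod_bgr[:, :, 2]  # red channel (OpenCV stores images in BGR order)
    # Inverse fixed threshold at 230: pixels below 230 -> 255 (white), the rest -> 0.
    _, pod_binary = cv2.threshold(r, 230, 255, cv2.THRESH_BINARY_INV)
    # Subtract the pedicel/beak mask to keep only the main body of the pod.
    body_binary = cv2.subtract(pod_binary, pedicel_beak_mask)
    # Apply the body mask to the original color image.
    body_bgr = cv2.bitwise_and(pod_bgr, pod_bgr, mask=body_binary)
    return body_bgr, body_binary
```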

2.3.2. Rape Kernel Segmentation and Counting

The seeds in the pod are distributed relatively evenly on both sides of the central ridge, and there is a brightness difference between the seeds and the pod skin. These characteristics were used to design the seed segmentation algorithm. The specific process for segmenting and counting rapeseeds is shown in Figure 3B; a consolidated sketch follows the formulas below. The processing steps are as follows:
  • Extract the green (G) channel of the color image of the main body of the pod (Figure 3B(a)), then apply the Sauvola thresholding algorithm to obtain a binary image containing only the seeds and the central ridge (Figure 3B(c));
  • Extract the skeleton from the binary image (Figure 3B(d)) and perform moderate morphological dilation to extract the central ridge (Figure 3B(e));
  • Subtract the result of step 2 from the result of step 1 to obtain a binary image containing only the rapeseeds (Figure 3B(f));
  • Input the result of Figure 3B(f) into the process shown in Figure 3C, enhance the contrast, then rerun steps 1–3 to obtain the final segmentation result of the rapeseeds (Figure 3B(g));
  • Detect connected components in the final segmentation image, calculate the average area of all connected components, denoted as M, and use Formula (1) to estimate the number of rapeseeds:
$$\text{Number of seeds} = \frac{\text{Total area of segmented seeds}}{M} \quad (1)$$
This formula estimates the total number of seeds from the total area occupied by all detected seeds and the average area per seed. Because touching seeds merge into a single connected domain, the count of each connected domain is assigned from the ratio of its area to M:
$$COUNT_i = \begin{cases} 0, & S_i/M < 0.1 \\ 1, & 0.1 \le S_i/M < 1.8 \\ 2, & 1.8 \le S_i/M < 2.6 \\ 3, & 2.6 \le S_i/M < 3.4 \\ 4, & S_i/M \ge 3.4 \end{cases}$$
In the equation, COUNT_i represents the number of rapeseeds in the i-th connected component and S_i represents the area of the i-th connected component.
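Steps 1–5, Formula (1), and the piecewise counting rule can be sketched in Python as follows. The Sauvola window size and the dilation kernel are assumed values standing in for the paper’s tuned parameters, and the skeleton step is a simplified stand-in for the ridge extraction.

```python
import cv2
import numpy as np
from skimage.filters import threshold_sauvola
from skimage.morphology import skeletonize


def segment_seeds(body_bgr):
    """Return the seed binary image and the seed count (steps 1-5)."""
    g = body_bgr[:, :, 1]                                       # green channel
    # Sauvola local thresholding: keep dark regions (seeds + central ridge).
    binary = ((g < threshold_sauvola(g, window_size=25))        # window size assumed
              .astype(np.uint8) * 255)
    # Thin to a skeleton, then dilate moderately to cover the central ridge.
    ridge = skeletonize(binary > 0).astype(np.uint8) * 255
    ridge = cv2.dilate(ridge, np.ones((5, 5), np.uint8))        # kernel size assumed
    seeds = cv2.subtract(binary, ridge)                         # seeds only
    # Connected-domain analysis; label 0 is the background.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(seeds)
    areas = stats[1:, cv2.CC_STAT_AREA].astype(float)
    if areas.size == 0:
        return seeds, 0
    m = areas.mean()                                            # mean domain area M
    bins = [0.1, 1.8, 2.6, 3.4]                                 # thresholds on S_i / M
    count = int(sum(np.searchsorted(bins, a / m, side="right") for a in areas))
    return seeds, count
```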

2.3.3. Adaptive Contrast Enhancement Algorithm for Rape Pod Transmission Image

The varying thickness of the rape pod skin leads to inconsistent image translucency, which often results in significant errors when directly identifying the seeds. This issue is particularly pronounced in low-contrast pod images, where the overall darkness of the image region and the small difference between the rapeseeds and the pod skin can cause severe misdetection. The use of a piecewise gamma transformation algorithm enhances the contrast of the G-channel images of the main body of the pod. This process adjusts the contrast of images with different translucencies, bringing the contrast between the pod skin and seed regions to within an appropriate range. Consequently, this effectively improves the accuracy and robustness of the subsequent seed segmentation.
In the transmission images, the brightness of the transparent material backlight panel is the highest, followed by the pod skin area, then the seed area, with the central ridge having the lowest brightness. Based on this characteristic, an adaptive contrast image enhancement method was designed, as shown in Figure 3C, with the following steps:
  • Seed grayscale image preparation: The binary seed image obtained from the first step (Figure 3C(a)) and the pod color image without the stem and beak (Figure 3B(a)) are masked and converted to grayscale to produce the seed grayscale image (Figure 3C(c)). This image is then used to calculate the standard deviation of the seeds;
  • Pod skin grayscale image preparation: The R (red) channel of the main body of the pod color image (Figure 3C(e)) is extracted and subjected to fixed threshold segmentation (threshold set at 235) to obtain the binary image of the main body of the pod (Figure 3C(f)). By subtracting the central ridge and seeds, the binary image of the pod skin (Figure 3C(g)) is obtained. This binary image is then masked with the pod’s main body color image (Figure 3C(d)) to obtain the pod skin color image (Figure 3C(h)), which is then converted to grayscale to produce the pod skin grayscale image (Figure 3C(i)). This image is used to calculate the standard deviation of the pod skin;
  • Image contrast calculation: Using Formula (2), the image contrast between the seeds and the pod skin is calculated.
$$CON = \sqrt{\frac{1}{N_1}\sum_{i=1}^{N_1}\left(x_i - u_1\right)^2} \Bigg/ \sqrt{\frac{1}{N_2}\sum_{i=1}^{N_2}\left(y_i - u_2\right)^2} \quad (2)$$
In the equation, CON represents the image contrast between the seeds and the pod skin; N_1 and x_i are the total number of pixels and the value of the i-th pixel in the seed grayscale image, respectively; N_2 and y_i are the corresponding quantities for the pod skin grayscale image; and u_1 and u_2 are the mean pixel values of the seed and pod skin grayscale images, respectively.
  • Contrast enhancement of the G-channel image: Based on the calculated value, contrast enhancement is applied to the G-channel image of the pod (Figure 3C(j)) using the formula provided in Equation (3).
$$S = \begin{cases} r^{2.5}, & CON < 0.1 \\ r^{1.6}, & 0.1 \le CON < 0.22 \\ r^{1.25}, & 0.22 \le CON < 0.3 \\ r^{1.0}, & CON \ge 0.3 \end{cases} \quad (3)$$
In the equation, S represents the pixel value of the enhanced grayscale image and r represents the pixel value of the original grayscale image. The gamma coefficients are empirical parameters, determined from contrast calculations and comparisons of the enhancement effect across a large number of rapeseed pods. A consolidated sketch of the contrast measurement and enhancement is given below.
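Reading Formula (2) as the ratio of the two standard deviations, the contrast measurement and piecewise gamma enhancement can be sketched as follows. The skin-mask construction is compressed relative to steps 1–2 above, and the [0, 1] normalization of pixel values is an assumption of this sketch.

```python
import cv2
import numpy as np


def enhance_contrast(body_bgr, seeds_binary):
    """Return the contrast-enhanced G-channel image (Formulas (2) and (3))."""
    gray = cv2.cvtColor(body_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Pod body from the R channel (fixed threshold 235); skin = body minus seeds.
    _, body_binary = cv2.threshold(body_bgr[:, :, 2], 235, 255, cv2.THRESH_BINARY_INV)
    skin_binary = cv2.subtract(body_binary, seeds_binary)
    seed_pixels = gray[seeds_binary > 0]
    skin_pixels = gray[skin_binary > 0]
    # CON: ratio of the seed and skin standard deviations (Formula (2)).
    con = seed_pixels.std() / max(skin_pixels.std(), 1e-6)
    # Piecewise gamma coefficient (Formula (3)).
    gamma = 2.5 if con < 0.1 else 1.6 if con < 0.22 else 1.25 if con < 0.3 else 1.0
    g = body_bgr[:, :, 1].astype(np.float32) / 255.0
    return (np.power(g, gamma) * 255.0).astype(np.uint8)
```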

3. Performance Test of the Automatic Yield Measuring Instrument for Rapeseed

3.1. Testing of the Pedicel and Beak Image Segmentation and Removal Algorithm

The performance of the U-Net network in segmenting the pedicel and beak regions in pod images was evaluated. The trained model was comprehensively assessed using the average precision (AP), average recall (AR), mean intersection over union (mIoU), and mean average precision (mAP). The results indicated that the pedicel segmentation model achieved an AP of 96.18%, an AR of 97.42%, an mIoU of 95.02%, and an mAP of 98.06%. Similarly, the beak segmentation model attained an AP of 97.26%, an AR of 97.33%, an mIoU of 96.42%, and an mAP of 97.63%. These values demonstrate that the image segmentation network performed exceptionally well in segmenting both the pedicel and beak. Furthermore, the impact of the pedicel and beak on counting accuracy was investigated. After removing the pedicel and beak, the recognition accuracy of the rapeseed grain count improved by an average of 1.26%. This result confirms that the presence of the pedicel and beak affects detection accuracy and underscores the necessity of their removal.

3.2. Test of Adaptive Contrast Enhancement Algorithm for Pod Transmission Image

This paper proposes an adaptive contrast enhancement algorithm aimed at eliminating the low contrast between the seeds and the outer skin of rape pods in the images, which may result from variations in pod placement and pod skin thickness; these factors can hinder effective feature extraction. The algorithm adjusts the contrast of different images to a range of 0.3–0.4, identified through experiments as the optimal range. This ensures that experimental data collection can accommodate a wider range of lighting conditions, and the approach can also be utilized in counting tasks involving deep learning networks. The following tests examine the impact of the adaptive contrast enhancement algorithm on rapeseed counting performance. Figure 4 shows three sets of rapeseed pod transmission images with different contrasts and their contrast enhancement effects. Before contrast enhancement, the grayscale images of low- and medium-contrast pods were blurred, making it difficult to identify the seeds. After enhancement, the brightness difference between the seeds and skin became obvious in all images, and the seeds were clearly visible.
A random selection of 100 pod images for each of the three contrast levels was made to test and analyze the changes in contrast values and seed count measurement accuracy before and after contrast enhancement. The experimental results are shown in Table 3. After adaptive contrast enhancement, the contrast of the low-, medium-, and high-contrast images was raised into a consistent range, and the detection accuracy of the seed count increased by 3.2, 3.0, and 2.7 percentage points, respectively.

3.3. Comparison Between the Algorithm in This Paper and the Deep Learning Method

In addition, a rape pod seed recognition algorithm based on YOLOv8 was developed for comparison with the proposed algorithm; the results are presented in Table 4. Compared with the method described in reference [22], a more advanced YOLOv8 model was employed. However, the YOLOv8 network has a complex structure designed to accommodate a broad range of application scenarios, so its feature extraction lacks specificity. Furthermore, the small size and dense arrangement of seeds within the rape pods posed additional challenges: the network’s receptive field may not fully encompass individual seeds or effectively distinguish between different features within the same pod. Consequently, the algorithm proposed in this paper achieved a higher accuracy rate than the YOLO network. Detection was performed on low-, medium-, and high-contrast rapeseed pod images, and one set of the resulting images is shown in Figure 5, Figure 6 and Figure 7. YOLO detection errors are marked with yellow boxes, and the detection results are displayed at the top right corner of each image. The main error types and their causes are as follows. (1) Single grain missed detection, mainly due to thicker skin, immature rapeseed, adhesion of rapeseed to the pedicel or beak, and impurities on the skin, which change the apparent shape of individual rapeseed grains. (2) Missed and repeated detection of adhered grains, mainly due to changes in the shape of image regions formed by the adhesion of multiple rapeseed grains, causing grains to be unrecognized or recognized repeatedly. Compared with deep learning methods, the algorithm in this paper can better detect individual rapeseed grains, accurately segment adhered rapeseed regions, and determine the number of grains based on area, effectively reducing counting errors.

3.4. Performance Test of Rapeseed Pod Grain Counter

A total of 1000 rapeseed pods were randomly selected to compare traditional manual counting with the automatic counting method along three dimensions: accuracy, throughput, and labor intensity. The results are shown in Table 5. The accuracy of manual counting was 93.6%, with a throughput of 150 pods per hour and high labor intensity. Using the counting instrument, the accuracy reached 97.2%, with a throughput of 372 pods per hour and lower labor intensity. The throughput of the automatic counting instrument was thus about 2.5 times that of manual counting, its accuracy exceeded manual counting by 3.6 percentage points, and the labor intensity was significantly reduced.

4. Conclusions

This paper designed a non-destructive rapeseed pod grain counting instrument. The main conclusions are as follows:
  • A rapeseed pod transmission image processing algorithm was designed. The U-Net network was used to segment the pedicel and beak in the transmission images of the pod, and these were removed to eliminate their impact on grain counting. Using the symmetrical structure of the pod about its central axis, a targeted rapeseed grain image segmentation algorithm was designed, which effectively dealt with adhered grains and accurately counted the number of grains. To overcome grain segmentation errors caused by differences in pod translucency, an adaptive contrast enhancement algorithm was proposed, which brought the contrast between the skin and seeds into an appropriate range, significantly improving grain image segmentation. Overall, this image processing algorithm has high detection accuracy for rapeseed pods of different varieties, sizes, and maturities during the green ripeness period, and its performance exceeds that of deep learning methods;
  • To address the laborious task of counting rapeseed pod grains, a semi-automatic counting instrument was developed. It is easy to operate, portable, and can work offline for up to 3.5 h, with a grain detection accuracy of 97.2% and a throughput of 372 pods per hour. Compared with manual counting, the instrument has significant advantages in accuracy, throughput, and labor intensity, and is expected to become a powerful assistant for rapeseed researchers.
In practical applications, this seed counting device has demonstrated good performance in detecting and recognizing seeds in most rapeseed pod shapes. However, it fails to achieve the expected accuracy for unusually shaped or extremely small pods. Similarly, pods damaged by pests or diseases, which are characterized by blemishes on their surfaces, cannot be accurately processed. This limitation is likely due to the irregular shape of these pods, which alters their light transmittance during two-dimensional image acquisition, resulting in algorithmic detection failures in certain regions.
In future work, we plan to upgrade the algorithm and both the hardware and software components to improve the robustness of the system and ensure more consistent accuracy across diverse pod conditions.

Author Contributions

Conceptualization, S.X. and Z.H.; Methodology, S.X., R.X., P.M., Z.H. and S.W.; Software, R.X. and Z.H.; Validation, R.X., P.M. and Z.Y.; Data curation, Z.H.; Writing—original draft, R.X. and P.M.; Writing—review & editing, S.X.; Visualization, S.W. and Z.Y.; Supervision, Q.L.; Project administration, S.X. and Q.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (Program No. 2018YFD1000904) and the Fundamental Research Funds for the Central Universities (Program No. 2662022GXYJ001).

Institutional Review Board Statement

This study did not involve humans or animals.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, F.; Guo, K.; Liao, X. Risk Assessment of China Rapeseed Supply Chain and Policy Suggestions. Int. J. Environ. Res. Public Health 2022, 20, 465.
  2. Chen, J.; Li, Q.; Tan, Q.; Gui, S.; Wang, X.; Yi, D.; Jiang, D.; Zhou, J. Combining lightweight wheat spikes detecting model and offline Android software development for in-field wheat yield prediction. Trans. Chin. Soc. Agric. Eng. 2021, 37, 156–164.
  3. Ashtiani, S.-H.M.; Javanmardi, S.; Jahanbanifard, M.; Martynenko, A.; Verbeek, F.J. Detection of Mulberry Ripeness Stages Using Deep Learning Models. IEEE Access 2021, 9, 100380–100394.
  4. Ding, Y.; Wang, K.; Du, C.; Liu, X.; Chen, L.; Liu, W. Design and experiment of high-flux small-size seed flow detection device. Trans. Chin. Soc. Agric. Eng. 2020, 36, 20–28.
  5. Wu, W.; Zhou, L.; Chen, J.; Qiu, Z.; He, Y. GainTKW: A Measurement System of Thousand Kernel Weight Based on the Android Platform. Agronomy 2018, 8, 178.
  6. Liu, S.; Liu, Y.; Hu, A.; Zhang, Z.; Wang, H.; Li, J. Online Identification of Weedy Rice Seeds Based on ECMM Segmentation. Trans. Chin. Soc. Agric. Mach. 2022, 53, 323–333.
  7. Peng, S.Z.; Yue, Y.B.; Feng, E.Y.; Li, L.J.; Sun, C.Q.; Zhao, Z.Y. Development and design of rapeseed counting system based on machine vision. J. Comput. Appl. 2020, 40, 142–146.
  8. Tan, S.; Ma, X.; Mai, Z.; Qi, L.; Wang, Y. Segmentation and counting algorithm for touching hybrid rice grains. Comput. Electron. Agric. 2019, 162, 493–504.
  9. Wu, D.; Cai, Z.; Han, J.; Qin, H. Automatic kernel counting on maize ear using RGB images. Plant Methods 2020, 16, 79.
  10. Peng, J.; Yang, Z.; Lv, D.; Yuan, Z. A dynamic rice seed counting algorithm based on stack elimination. Measurement 2024, 227, 114275.
  11. Ma, N.; Su, Y.; Yang, L.; Li, Z.; Yan, H. Wheat Seed Detection and Counting Method Based on Improved YOLOv8 Model. Sensors 2024, 24, 1654.
  12. Song, P.; Zhang, H.; Wang, C.; Luo, B.; Zhao, Y.; Pan, D. Design and Experiment of Maize Kernel Traits Acquisition Device. Trans. Chin. Soc. Agric. Mach. 2017, 48, 19–25.
  13. Wang, L.; Zhang, Q.; Feng, T.; Wang, Y.; Li, Y.; Chen, D. Wheat Grain Counting Method Based on YOLO v7-ST Model. Trans. Chin. Soc. Agric. Mach. 2023, 54, 188–197+204.
  14. Chen, Z.; Fan, W.; Luo, Z.; Guo, B. Soybean seed counting and broken seed recognition based on image sequence of falling seeds. Comput. Electron. Agric. 2022, 196, 106870.
  15. Xi, X.; Zhao, J.; Shi, Y.; Qu, J.; Gan, H.; Zhang, R. Online Detection Method for Wheat Seeding Distribution Based on Improved Concave Point Segmentation. Trans. Chin. Soc. Agric. Mach. 2024, 55, 75–82.
  16. Li, Q.; Yao, Y.; Yang, Q.; Shu, W.; Li, J.; Zhang, B.; Zhang, D.; Geng, Z. A Study on Soybean Seed Detection Method Based on MATLAB Image Processing. Chin. Agric. Sci. Bull. 2018, 34, 20–25.
  17. Mussadiq, Z.; Laszlo, B.; Helyes, L.; Gyuricza, C. Evaluation and comparison of open source program solutions for automatic seed counting on digital images. Comput. Electron. Agric. 2015, 117, 194–199.
  18. Khaki, S.; Pham, H.; Han, Y.; Kuhl, A.; Kent, W.; Wang, L. Convolutional Neural Networks for Image-Based Corn Kernel Detection and Counting. Sensors 2020, 20, 2721.
  19. Wang, Y.; Li, Y.; Wu, T.; Sun, S.; Wang, M. Counting Method of Soybean Seeds Based on Density Estimation and VGG-Two. Smart Agric. 2021, 3, 111–122.
  20. Sun, J.; Zhang, Y.; Zhu, X.; Zhang, Y. Deep learning optimization method for counting overlapping rice seeds. J. Food Process Eng. 2021, 44, e13787.
  21. Domhoefer, M.; Chakraborty, D.; Hufnagel, E.; Claußen, J.; Wörlein, N.; Voorhaar, M.; Anbazhagan, K.; Choudhary, S.; Pasupuleti, J.; Baddam, R.; et al. X-ray driven peanut trait estimation: Computer vision aided agri-system transformation. Plant Methods 2022, 18, 76.
  22. Zhao, Y.; Wu, W.; Zhou, Y.; Zhu, B.; Yang, T.; Yao, Z.; Ju, C.; Sun, C.; Liu, T. A backlight and deep learning based method for calculating the number of seeds per silique. Biosyst. Eng. 2021, 213, 182–194.
  23. Uzal, L.; Grinblat, G.; Namías, R.; Larese, M.; Bianchi, J.; Morandi, E.; Granitto, P. Seed-per-pod estimation for plant breeding using deep learning. Comput. Electron. Agric. 2018, 150, 196–204.
  24. Li, Y.; Jia, J.; Zhang, L.; Khattak, A.M.; Sun, S.; Gao, W.; Wang, M. Soybean Seed Counting Based on Pod Image Using Two-Column Convolution Neural Network. IEEE Access 2019, 7, 64177–64185.
  25. Zhao, J.; Kaga, A.; Yamada, T.; Komatsu, K.; Hirata, K.; Kikuchi, A.; Hirafuji, M.; Ninomiya, S.; Guo, W. Improved Field-Based Soybean Seed Counting and Localization with Feature Level Considered. Plant Phenomics 2023, 5, 0026.
  26. Yao, Y.; Li, Y.; Chen, Y.; Ding, Q.; He, R. Testing method for the seed number per silique of oilrape based on recognizing the silique length images. Trans. Chin. Soc. Agric. Eng. 2021, 37, 153–160.
  27. Tran, K.-D.; Ho, T.-T.; Huang, Y.; Le, N.Q.K.; Tuan, L.Q.; Ho, V.L. MASPP and MWASP: Multi-head self-attention based modules for UNet network in melon spot segmentation. Food Meas. 2024, 18, 3935–3949.
Figure 1. 3D modeling drawings and objects of rapeseed pod transmission imaging device (1. Side panel, 2. camera, 3. LED light source, 4. transparent plate, 5. lithium battery, 6. microcomputer host, 7. touch screen, 8. light source adjustment knob, 9. photoelectric switch).
Figure 2. Seed counter of rapeseed silique and its software.
Figure 3. Rapeseed identification and testing flowchart.
Figure 4. Contrast enhancement effect of images with different contrast.
Figure 5. Comparison of seed detection by different detection methods in low contrast. (a) Single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.
Figure 6. Comparison of seed detection with different detection methods in medium contrast. (a) Single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.
Figure 7. Comparison of seed detection by different detection methods in high contrast. (a) Single missed detection, (b) missed detection due to adhesion, (c) duplicate detection of adhesion.
Table 1. Power distribution.

Voltage | Module
24 V    | Lithium battery
19.5 V  | Microcomputer
12 V    | LED backlight, photoelectric switch, camera
Table 2. Software and hardware platform information.

Hardware/Software       | Version
CPU                     | 12th Gen Intel® Core™ i9-12900K 3.20 GHz
RAM                     | 64 GB
GPU                     | NVIDIA GeForce RTX 3090 Ti (24 GB)
System                  | Windows 11
Algorithmic language    | Python 3.7
Deep learning framework | PyTorch 1.7.1
Computer vision library | OpenCV 4
CUDA                    | CUDA 11.0
CUDNN                   | cuDNN 8.0.5.39
Table 3. Comparison of the influence of image contrast enhancement.

Detection Object      | Image Quantity | Original Contrast Mean | Average Contrast After Enhancement | Detection Accuracy (No Enhancement) | Detection Accuracy (With Enhancement)
Low-contrast image    | 100 | 0.076 | 0.315 | 92.5% | 95.7%
Medium-contrast image | 100 | 0.184 | 0.342 | 93.2% | 96.2%
High-contrast image   | 100 | 0.256 | 0.332 | 93.7% | 96.4%
Table 4. Comparison of algorithm results (accuracy, %, under different contrasts).

Method               | Low  | Medium | High
YOLO                 | 95.9 | 96.6   | 97.2
Method of this paper | 96.5 | 97.3   | 97.5
Table 5. Comparison of manual and automatic counting methods.

Test Item                 | Manual | Automatic
Accuracy of detection (%) | 93.6   | 97.2
Throughput (pods/h)       | 150    | 372
Intensity of labor        | Higher | Lower
