Article

Mapping Gaps in Sugarcane Fields in Unmanned Aerial Vehicle Imagery Using YOLOv5 and ImageJ

by
Inacio Henrique Yano
1,2,*,
João Pedro Nascimento de Lima
1,
Eduardo Antônio Speranza
1 and
Fábio Cesar da Silva
1,3
1
Brazilian Agricultural Research Corporation (Embrapa), Embrapa Digital Agriculture, Campinas 13083-886, SP, Brazil
2
Paula Souza Center (CEETEPS), Technology Faculty of Santana de Parnaiba (FATEC-SPB), Santana de Parnaiba 06529-001, SP, Brazil
3
Paula Souza Center (CEETEPS), Technology Faculty “Deputado Roque Trevisan” of Piracicaba (FATEC Piracicaba), Piracicaba 13414-155, SP, Brazil
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7454; https://doi.org/10.3390/app14177454
Submission received: 24 May 2024 / Revised: 5 July 2024 / Accepted: 1 August 2024 / Published: 23 August 2024
(This article belongs to the Collection Agriculture 4.0: From Precision Agriculture to Smart Farming)

Abstract

Sugarcane plays a pivotal role in the Brazilian economy as a primary crop. This semi-perennial crop allows for multiple harvests throughout its life cycle. Given its longevity, farmers need to be mindful of avoiding gaps in sugarcane fields, as these interruptions in planting lines negatively impact overall crop productivity over the years. Recognizing and mapping planting failures becomes essential for replanting operations and productivity estimation. Due to the scale of sugarcane cultivation, manual identification and mapping prove impractical. Consequently, solutions utilizing drone imagery and computer vision have been developed to cover extensive areas, showing satisfactory effectiveness in identifying gaps. However, recognizing small gaps poses significant challenges, often rendering them unidentifiable. This study addresses this issue by identifying and mapping gaps of any size while allowing users to determine the gap size. Preliminary tests using YOLOv5 and ImageJ 1.53k demonstrated a high success rate, with a 96.1% accuracy in identifying gaps of 50 cm or larger. These results are favorable, especially when compared to previously published works.

1. Introduction

Sugarcane was the initial crop during the early days of Brazilian colonization, and today, it remains a pivotal component of the country’s economy. Nearly all regions across Brazil engage in sugarcane cultivation, with the state of São Paulo prominently leading the nation in production. Brazil holds the global distinction as the largest sugar producer and the second-largest ethanol producer [1]. Consequently, sugarcane plays a crucial role in fostering economic growth, generating employment opportunities, and contributing substantially to the country’s foreign exchange reserves [2].
Sugarcane stands out as a semi-perennial crop, setting it apart from annual crops. Unlike most crops that follow an annual planting and harvesting cycle, sugarcane boasts a longer life span, typically ranging from three to six years before it is necessary to establish a new sugarcane crop; i.e., sugarcane fields are renewed, on average, every five years [3].
This semi-perennial characteristic arises from its capacity to regenerate and yield stalks after harvest, thus ensuring its sustained presence in the field. However, maintaining the health and productivity of the crop over the years demands effective management practices, including proper fertilization, control of pests and diseases, and precise harvest management. Cultivating sugarcane also entails a substantial initial investment, particularly in tasks such as land preparation, planting, and the initial phases of crop development [4].
Because the crop remains in the field for several years, farmers must prevent planting failures in sugarcane fields, since these gaps reduce the productivity of the crop throughout the life of the sugarcane field. The crop's economic return depends on good plantation maintenance and is also affected by the presence of planting failures [5].
Among the factors that cause planting failures are the attack of pests such as nematodes, termites, and sugarcane boll weevil, among others. In addition, failures can be caused by weed infestations, trampling on the stumps, and miscalibrated or worn machinery in harvesting operations [6]. The identification of field gaps allows for the estimation of production breaks, and if gaps exceed 10% [7], gap filling operations are conducted by sugarcane producers [8] to maintain the planned planting density [9]. Gaps are considered empty spaces, measured from center-to-center of the culms at ground level, that are greater than 50 cm [10].
Because sugarcane fields occupy large areas, manual identification of field gaps must be performed by sampling, which does not produce location maps and may introduce bias. Therefore, several works identify field gaps by analyzing imagery captured from unmanned aerial vehicles (UAVs) [11,12]. UAVs cover large areas in a short time, justifying their use. Unlike satellite imagery, UAV imagery has sufficient spatial and temporal resolution to identify objects of interest such as gaps in sugarcane crops. UAVs are also affordable for most producers, especially when compared with conventional aircraft flights, which are excessively expensive [13,14].
In this study, we utilized the deep learning neural network YOLOv5 [15] because of its lower computational demands compared to R-CNNs and its high accuracy. The neural network can identify gaps of any size, including those smaller than 50 cm. Subsequently, using ImageJ software, rural producers can determine the minimum size of the field gaps to be displayed on the map in a parameterized manner.

2. Literature Review

2.1. Computer Vision and Object Detection

Computer vision is a multidisciplinary research area within artificial intelligence and machine learning that aims to enable computers to interpret and understand visual information [16]. Object detection, a critical aspect of computer vision, has garnered significant attention [17,18].
Deep neural networks (DNNs) are a type of neural network that is widely employed in computer vision, primarily for identifying objects in images. Convolutional neural networks (CNNs), a specialized form of DNNs, are frequently used to process and analyze visual data [16]. CNNs excel at automatically extracting features from images, which are then used to create models for object classification [19].

2.2. YOLO and Object Detection Techniques

Object detection systems are generally categorized into two groups: two-stage and single-stage detectors. Two-stage detectors, which have a more complex architecture, first identify regions of interest and then pass these regions to a convolutional neural network for further analysis. Single-stage detectors, on the other hand, identify objects in one step using a simpler architecture [18].
Typically, two-stage detection systems, such as R-CNNs, achieve a higher accuracy than single-stage systems. However, single-stage detectors perform faster detections. Notably, the advent of YOLO (you only look once) and its subsequent iterations has significantly enhanced the accuracy of single-stage systems, sometimes surpassing that of two-stage systems [20].
Both YOLO and Faster R-CNN are transformative in precision agriculture for tasks like detecting and monitoring crop pests and diseases, thereby aiding in resource optimization and management. While both systems are effective for various environmental monitoring tasks, YOLO’s flexible architecture makes it particularly adaptable to specific needs [21].
Although R-CNNs generally offer better accuracy than YOLO, the latter also provides substantial accuracy and can sometimes outperform R-CNNs. In this study, YOLO was chosen because R-CNNs require more advanced computational resources, such as state-of-the-art GPUs, making them less accessible for many users [18,20].

2.3. ImageJ for Image Processing

ImageJ is a Java-based platform for image processing, developed by Wayne Rasband at the National Institutes of Health (NIH). Since its launch in 1997, it has been widely utilized in various image-processing applications [22].
While object detection systems like YOLO are designed for identifying and counting objects in images, they lack the capability of detailed image manipulation provided by software like ImageJ. ImageJ offers a range of features, including measuring line lengths, calculating areas, and identifying and counting objects in images. It also supports advanced operations like filtering and performing mathematical operations on pixel values. For color images, these operations are carried out channel by channel, such as the red, green, and blue channels in RGB images [23].
One notable feature of ImageJ is its ability to generate customizable reports that are tailored to specific applications. In this study, it was used to map planting failures in images. The location of an object within an image can be specified in pixels, such as the object’s centroid or the top-left pixel. Additionally, pixel measurements can be converted to other units, such as meters or centimeters, provided that a standard scale is available [23].

2.4. Related Works

In [14], the authors used the linear discriminant analysis (LDA) technique to assess the quality of a sugarcane crop on a farm in Nicaragua, achieving an overall accuracy of 92.9%.
In [11,12], the authors tested the performance of the Inforow software (https://inforow.com.br/en) in detecting planting failures using pixel sizes of 3.5, 6.0, and 8.2 cm in an experimental field with plant heights of 0.5, 0.9, 1.5, 2.0, and 2.5 m. The software was unable to identify planting failures smaller than 1.0 m when using pixel sizes larger than 3.5 cm. Even with the highest spatial resolution of 3.5 cm, the software could not detect planting failures between 0.5 and 1.0 m for sugarcane plants that were 0.9 m or taller.

3. Materials and Methods

This project comprises four activities outlined in Figure 1, illustrating the work breakdown structure (WBS) for developing a system to map gaps in sugarcane fields using RGB drone imagery. Below is a concise overview of each activity:
(1)
The initial activity involves capturing field images and, following image capture, generating orthomosaics from the acquired images.
(2)
The second activity involves segmenting samples from the orthomosaic to train the convolutional neural network and generate models for mapping field gaps.
(3)
In the third phase, the optimal model will be utilized to identify field gaps within the orthomosaics.
(4)
Lastly, the fourth activity entails producing the mapping of field gaps, accompanied by a comprehensive report detailing the location of these gaps.

3.1. Taking Field Images and Generating Orthomosaics

This task involves generating an image database sourced from the sugarcane plantation, which will later serve as the foundation for mapping the locations of field gaps. For this purpose, a DJI Phantom 4 Standard drone equipped with a 20 MP RGB camera was employed (Figure 2). The experimental fields were situated in the municipality of Tambaú, state of São Paulo, Brazil, at the following coordinates: latitude −21.7023, longitude −47.2814. The drone flew over the crops, capturing images from an altitude of 60 m when the sugarcane plants were between three and five months old after sprouting. This altitude provided a GSD of 1.4 cm/pixel, which was sufficient to identify, select, and map the planting failures in a test image.
This task also involved creating orthomosaics (Figure 2). A total of 545 images were captured from the experimental field and stored in a database for orthomosaic construction using the OpenDroneMap 1.9.14 software [24]. The images were taken using the Pix4D (https://www.pix4d.com/) application, which allows for flight plan tracing and the adjustment of image overlap. In this instance, the overlap was set at 90% to facilitate the construction of the orthomosaic [25].
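To illustrate how the reported ground sample distance relates flight altitude to camera geometry, the short sketch below computes the GSD from the standard photogrammetric relation; the sensor width, focal length, and image width used here are assumed placeholder values, not the exact specification of the camera employed in this study.

# Hedged sketch: ground sample distance (GSD) from camera geometry.
# The sensor parameters below are illustrative placeholders, not the exact
# specification of the camera used in this study.

def gsd_cm_per_pixel(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    # GSD (cm/pixel) = (sensor width x flight altitude) / (focal length x image width)
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Example with assumed one-inch-sensor values at the 60 m altitude used in the study.
print(round(gsd_cm_per_pixel(13.2, 8.8, 60.0, 5472), 2))  # ~1.64 cm/pixel with these assumed values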

3.2. Training the YOLO Model

During the training phase of the convolutional neural network, it is crucial to have multiple input images of consistent dimensions, each containing samples of the objects that the neural network is designed to detect. These input images must have a resolution that is a multiple of 32 pixels. In this study, the input images were obtained by manually cropping sub-images from the orthomosaic. The chosen size for the input images was 416 × 416 pixels, a standard dimension for such purposes [26,27]. The neural network in this study was tasked with identifying two classes: field gaps, designated as class 0, and plants, designated as class 1, as depicted in Figure 3.
Annex A includes the command utilized for neural network training. Subsequently, image samples and corresponding annotations for the two classes were employed to train the YOLOv5 neural network using the Google Colab platform [28,29] (Figure 4), negating the need to invest in computers equipped with GPUs for this endeavor. LabelImg software 1.8.6 [30,31] was utilized for class annotations (Figure 5).
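As an illustration of how sub-images of 416 × 416 pixels could be produced from an orthomosaic, the sketch below tiles a mosaic with a regular stride; the file names and stride are illustrative assumptions, since in this study the sub-images were cropped manually.

# Hedged sketch: tiling an orthomosaic into 416 x 416 px training images.
# File names and stride are illustrative; in this study, sub-images were cropped manually.
import os
from PIL import Image

TILE = 416  # YOLOv5 input sizes are normally multiples of 32

def tile_orthomosaic(mosaic_path, out_dir, stride=TILE):
    os.makedirs(out_dir, exist_ok=True)
    mosaic = Image.open(mosaic_path)
    width, height = mosaic.size
    count = 0
    for top in range(0, height - TILE + 1, stride):
        for left in range(0, width - TILE + 1, stride):
            tile = mosaic.crop((left, top, left + TILE, top + TILE))
            tile.save(os.path.join(out_dir, f"tile_{top}_{left}.jpg"))
            count += 1
    return count

# Hypothetical usage:
# tile_orthomosaic("orthomosaic.tif", "train_data35/images/train")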

3.3. Applying the YOLO Best Model

Upon completion of YOLOv5 training, executing the “detect” command with the best-model file proceeds with the identification of field gaps within the orthomosaics (Figure 6). Figure 7 illustrates field gaps identified by red rectangles. In this figure, the rectangles emphasize only the field gap class to enhance visualization; the confidence index, which typically indicates the certainty of each detection, the class name (field gap or plant), and any potential overlaps are omitted. Annex B presents the command utilized for field gap detection.
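In addition to the detect.py command described in Annex B, the trained weights can also be applied programmatically through the PyTorch Hub interface published in the YOLOv5 repository; the sketch below is an assumed alternative with placeholder file names, not the command actually used in this study.

# Hedged sketch: running the trained YOLOv5 model via PyTorch Hub.
# "best.pt" and "orthomosaic_tile.jpg" are placeholder file names.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.25    # confidence threshold, matching the detect command in Annex B
model.iou = 0.0      # NMS IoU threshold, matching the detect command in Annex B
model.classes = [0]  # keep only class 0 (field gaps)

results = model("orthomosaic_tile.jpg")
print(results.pandas().xyxy[0])  # bounding boxes: xmin, ymin, xmax, ymax, confidence, class, name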

3.4. Generate Gaps Mapping

The final task in the field gap mapping process involves selecting the identified gaps based on a parameterized size measured in pixels². This step is necessary because convolutional neural networks excel at identifying objects regardless of their size in the image, whereas only empty spaces longer than 50 cm are considered commercially significant planting failures. The goal is to map only these substantial failures.
To address this, the mapping process is conducted in two stages. Initially, convolutional neural networks are used to identify planting failures of any size. In the second stage, commercially significant planting failures are selected. These failures will then be mapped using ImageJ digital image processing software.
Since YOLO identifies objects within rectangular bounding boxes, the selection of planting failures identified by YOLO must be refined using ImageJ based on the size of the area demarcated by YOLO. Typically, gaps exceeding 50 cm in length are classified as failures. To achieve the selection based on area, the width of the bounding box is multiplied by its length. However, these dimensions are in pixels, so a conversion is necessary to determine the length in centimeters.
In Figure 8, the red line represents the distance between cultivation rows, which is 150 cm for sugarcane. The red line, as shown in Table 1, measures 114.54 pixels. This pixel length was obtained from ImageJ by drawing a line and selecting Analyze > Measure. The same method was applied to the lines in orange, brown, blue, green, and pink. Using the known measurement of 150 cm and the corresponding pixel length, an index was derived to convert all pixel lengths to centimeters, as shown in Table 1.
To differentiate between failure sizes to be recorded and those to be discarded, the area in pixels² must be calculated. Figure 8 shows an example using a subimage of Figure 16, in which the black rectangle marked with the number 3 is selected as a failure to be recorded, while the white-bordered rectangles are discarded. Referring to Table 1, an area threshold of 4200 pixels² can be used to distinguish the black rectangle from the white-bordered ones (Figure 8a,b).
The index for converting pixel lengths to centimeters is calculated as follows:
Ind_pixel_cm = red line length in cm/red line length in pixels = 150/114.54 = 1.3096 cm/pixel
Using this index, the lengths in pixels can be converted to centimeters for the rectangles in Figure 8. In Figure 8c, the widths, highlighted in orange, are approximately the same size, ranging from 52 to 61 pixels. The following formula is used to calculate the area for a length of 50 cm in pixels:
length in pixels = length in cm/Ind_pixel_cm
and for a length of 50 cm:
length in pixels for 50 cm = 50/1.3096 = 38.18 pixels
To estimate the area of a failure with a length of 50 cm, the following formula is used:
Area in pixels² = avg_width × length in pixels
where avg_width denotes the average width of the failures, here taken as 55.07 pixels. Thus,
Area for a 50 cm length = 38.18 × 55.07 = 2102.57 pixels²
Therefore, an area of 2100 pixels² will be used as the benchmark for a 50 cm length in this study. To illustrate the entire process, an area value of 4200 pixels² will also be used to demonstrate the exclusion of smaller planting failures.
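The conversion above can be summarized in a short helper that reproduces the 2100 pixels² benchmark; the sketch below simply re-implements the arithmetic from Table 1 and the formulas given in this section.

# Hedged sketch: pixel-to-centimeter index and area thresholds, reproducing the
# values derived above (Table 1 and the formulas in this section).
ROW_SPACING_CM = 150.0   # known distance between sugarcane rows (red line in Figure 8)
ROW_SPACING_PX = 114.54  # same distance measured in pixels with ImageJ
IND_PIXEL_CM = ROW_SPACING_CM / ROW_SPACING_PX  # ~1.3096 cm per pixel

def area_threshold_px2(gap_length_cm, avg_width_px=55.07):
    # Bounding-box area (pixels^2) corresponding to a gap of the given length.
    length_px = gap_length_cm / IND_PIXEL_CM
    return length_px * avg_width_px

print(round(area_threshold_px2(50), 1))   # ~2102.6, rounded to 2100 in this study
print(round(area_threshold_px2(100), 1))  # ~4205.1, rounded to 4200 in this study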
Once the planting failures have been selected using the ImageJ script, a report is generated detailing the location and size of each gap within the image. We utilized the freely available software ImageJ [23,32] to execute the selection process and generate the report. Figure 9 illustrates the steps for obtaining the field gap map, followed by a description of each step in the ImageJ script.
The initial step involves splitting the RGB image into three channels (Image > Color > Split Channels) [33], each containing the field gaps identified by YOLOv5, because most ImageJ functions work with 8-bit images. This process permits the selection of field gaps based on size, along with other operations such as field gap numbering (Figure 10).
In the next step, the red channel is used to select field gaps of a specified minimum size, utilizing the built-in Analyze Particles plugin of ImageJ [34]. For this instance, the size parameter was set to 4200 pixels². This method was chosen because the neural network can sometimes identify smaller empty spaces that are not relevant for commercial field gap analysis, and this step corrects such potential misidentifications. Figure 11a illustrates the outcome of this procedure. Additionally, the analysis is set to generate a report in table format listing the location of each field gap, as demonstrated in Table 2.
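For readers who prefer a scripted equivalent of the Analyze Particles step, the sketch below reproduces the same size-based selection with scikit-image; it is an illustrative assumption (placeholder file name, illustrative binarization threshold), not the ImageJ script used in this study.

# Hedged sketch: size-based particle selection analogous to ImageJ's Analyze Particles.
# The file name and binarization threshold are placeholders.
from skimage import io, measure

MIN_AREA_PX2 = 4200  # threshold used in the example run

red = io.imread("yolo_output_red_channel.png")  # assumed single-channel (8-bit) image
mask = red > 128                                # gaps drawn by YOLO become foreground

labels = measure.label(mask)
gaps = []
for region in measure.regionprops(labels):
    if region.area >= MIN_AREA_PX2:
        y, x = region.centroid
        min_row, min_col, max_row, max_col = region.bbox
        gaps.append({"Area": int(region.area), "X": round(x, 2), "Y": round(y, 2),
                     "BX": min_col, "BY": min_row,
                     "Width": max_col - min_col, "Length": max_row - min_row})

for number, gap in enumerate(gaps, start=1):
    print(number, gap)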
In Figure 11b, the gaps are counted and numbered using the Analyze Particles plugin [35] based on their position, following a horizontal arrangement from top to bottom and a vertical arrangement from left to right. These numbers correspond to the lines in the field gap map table (Table 2), where each line represents a gap with the same numbering, as seen in the image. This table serves to verify the accuracy of the identification system and can be used in sugarcane replanting operations.
Table 2 displays the identified gaps, with the gap numbers in the image aligning with the “Gap” column, allowing for easy association of data between the table and the numbered field gaps in the image (Figure 11b). The second column in Table 2 indicates the area of each gap in pixels². The “X” and “Y” columns represent the centroid coordinates of the field gap in the image, which will eventually be replaced with georeferenced data, enabling precise identification and replanting operations in these areas if required by farmers in a continuation of this work. The “XM” and “YM” columns denote the center of mass of each gap. The “BX” and “BY” coordinates specify the upper-left corner of the gap rectangle. The “Width” and “Length” columns provide the dimensions of each gap.
Figure 11c presents an inversion of the values from Figure 11b (Edit > Invert) [36], where the identified gaps are represented by maximum pixel values (255). Subsequently, subtracting Figure 11c from the original image highlights the field gaps and their corresponding numbers, as depicted in Figure 15. Since the field gaps are assigned maximum pixel values (255), subtracting these values (Process > Image Calculator) from the red, green, and blue channels [37] results in rectangles with zeroed-out pixels, effectively rendering them as black rectangles (Image > Color > Merge Channels) [38].
The subsequent step involves dividing the original image into channels (Figure 12). This process is designed to subtract the selected and numbered field gaps from each channel, as illustrated in Figure 13, and then merge (Image > Color > Merge Channels) these channels (Figure 14a for red, Figure 14b for green, and Figure 14c for blue) into a new image that displays the selected field map, depicted in Figure 14d. This newly merged image represents the field gap map report.
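The same split, subtract, and merge arithmetic can also be expressed directly on the image arrays; the NumPy sketch below mirrors the ImageJ steps described above, with variable names assumed for illustration.

# Hedged sketch: mirroring the ImageJ split/subtract/merge steps with NumPy.
# `original` is the RGB image and `gap_mask` is the inverted mask of Figure 11c,
# in which the selected gaps and their numbers have the value 255.
import numpy as np

def burn_gaps_black(original: np.ndarray, gap_mask: np.ndarray) -> np.ndarray:
    # Subtract the mask from each RGB channel, so the gap rectangles become black.
    out = original.astype(np.int16).copy()
    for channel in range(3):  # red, green, blue
        out[..., channel] -= gap_mask.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)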
Figure 15a depicts the outcome of the selection process, while Figure 15b illustrates the output of YOLOv5's detection of planting failures. In Figure 15c, the selected field gaps are highlighted in black, and the rejected marks, deemed too small, are outlined with white rectangles (Analyze > Analyze Particles, with the minimum size set to 4200 pixels²). Figure 16 showcases the final result of the field gap selection, providing improved visualization.

4. Results

The field gap map relies on two processes managed by two distinct files. The first file contains the YOLOv5 model, which is responsible for identifying all field gaps. Within this file are the neural network weights used for field gap identification. The training parameters included an input size of 416 × 416 pixels, a batch size of 16, and a planned duration of 300 epochs. However, training was stopped early at 210 epochs, after approximately 2 h, due to a lack of improvement in the validation metrics (early stopping). The final training loss achieved was 0.728, with a mean average precision (mAP) of 0.691.
The second file is the ImageJ script, which selects field gaps based on the minimum and maximum values specified during the Analyze Particles procedure. Additionally, it generates a table containing information such as the area and location of each selected field gap. After the selection process, the script assigns a unique number to each field gap according to the information provided in the table, enabling visualization of each field gap on a map and access to its corresponding data in the table.
Figure 17 shows the results of applying the best model to a larger image. The next step involves using an ImageJ script for selection. First, planting failures larger than 4200 pixels², which represent gaps greater than 100 cm, will be selected and mapped. Subsequently, failures larger than 2100 pixels², representing gaps larger than 50 cm, will be identified and mapped.
Figure 18 illustrates the output of the ImageJ script for failures exceeding 4200 pixels², identifying 53 planting failures with 3 instances of displacement errors, as shown in Figure 19. The accuracy is obtained by dividing the number of correct identifications (50) by the total number of planting failures (53), resulting in 94.34%. The report detailing planting failures larger than 4200 pixels² is available in Table A1 of Appendix A.
Figure 20 shows the detection of failures larger than 2100 pixels², resulting in the identification of 154 planting failures. In this second analysis, in addition to the three displacement errors, three false negatives were identified (Figure 21). These false negatives, which are smaller than 4200 pixels², were only considered in this second run. With 6 errors (three misidentifications and three false negatives) out of 154 gaps, the accuracy of this second analysis is 148/154, or 96.1%. The report mapping failures larger than 2100 pixels² can be found in Table A2 of Appendix B.
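As a quick check of the two accuracy figures reported above, the arithmetic is:

# Accuracy check for the two runs reported above.
print(round((53 - 3) / 53 * 100, 2))    # 94.34% for the 4200 pixels^2 run
print(round((154 - 6) / 154 * 100, 1))  # 96.1% for the 2100 pixels^2 run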

5. Discussion and Conclusions

Both analyses, for gaps larger than 4200 pixels² and for those larger than 2100 pixels², outperformed the results reported in [14]. The developed application shows promise for effectively identifying small field gaps between 50 and 100 cm. The YOLOv5 neural network successfully identified nearly all the field gaps, allowing users to customize the threshold for gap size consideration and choose whether to discard smaller gaps based on their specific needs.
Field gaps present a significant challenge to sugarcane productivity, especially given the crop’s semi-perennial nature. These gaps not only affect the yield of the current year but also have repercussions on productivity in subsequent years. The identification and mapping of field gaps play a crucial role in informing replanting strategies and estimating productivity.
Some studies have focused on identifying and mapping field gaps, but they often struggle to detect small gaps due to spatial resolution limitations and occlusions from sugarcane leaves [11,12]. In our solution, using imagery with a GSD of 1.4 cm per pixel, we successfully mapped almost all the small gaps of 50 cm or larger in an image of 2309 × 2309 pixels. Since ImageJ selects particles based on area, the 50 cm gap length was converted to an area of 2100 pixels².
The application presented in this work has the potential to be an innovative solution by integrating computer vision, machine learning, convolutional neural networks, and digital image processing. Recognizing objects in images is challenging due to variations in scale, as objects may appear at different distances, yet their actual sizes still need to be determined. For instance, when detecting planting failures, objects of various sizes can be considered failures, but economically significant failures are typically those longer than 50 cm. Convolutional neural networks are not inherently suited to this size-based selection, so using a digital image processing tool like ImageJ enables the accurate selection of empty spaces that truly qualify as failures. This approach also offers flexibility for producers to tailor the identification process, accommodating factors such as failures caused by improperly calibrated harvesting machines and other equipment.
Since occlusions from leaves or weeds reduce the visible space between tillers, thereby concealing planting failures, we plan to treat these spaces as indicators of planting failures in future research. This approach will enhance the accuracy of identifying and mapping planting failures.

Author Contributions

Conceptualization, I.H.Y. and E.A.S.; methodology, I.H.Y.; scripting and software configuration, I.H.Y. and J.P.N.d.L.; validation, I.H.Y. and J.P.N.d.L.; research, I.H.Y. and E.A.S.; sample annotation, J.P.N.d.L. and I.H.Y.; resources, I.H.Y. and E.A.S.; data curation, I.H.Y.; writing—original draft preparation, I.H.Y.; writing—review, E.A.S. and F.C.d.S.; supervision, E.A.S.; project administration, F.C.d.S.; funding acquisition, F.C.d.S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors thank the Embrapa and Embracal project 30.21.90.004.00.00—Improvement of technical recommendations for correcting soil acidity and its phytotechnical implications in sugarcane fields for the support provided to carry out this work.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors thank Coplacana employees Fabio Salvaia, Daniel Christofoletti, and Gabriel Camarinha for the technical support in the experimental areas located in Piracicaba, SP, Brazil. The authors also thank AgroAzul Company located in Sertaozinho/SP, Brazil for their excellent work in providing the imagery services.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. List of planting failures greater than 4200 pixels² with sizes given in pixels² and their location in Figure 18.
Gap | Area | X | Y | XM | YM | BX | BY | Width | Length
146111793.5833.801793.1933.72175448060
217,4693097.9766.133098.3365.82301610172122
35539530.5481.40530.7381.33488458769
463681934.4690.461933.7890.391886569668
54593663.66143.40663.44143.076261127662
670872312.32144.992311.71144.84224611813254
766032428.00156.452427.84156.3823841188876
846552264.53213.662263.92213.7122301807068
973933152.99233.013152.43232.96309619911367
1064352534.51260.002534.23260.04247723211556
1163351081.59326.701081.61326.6110362929370
1277601956.02429.101956.01429.1019103859287
1343282566.47433.492566.26433.5025304027262
14488965.96436.1465.36436.17244078461
1554991733.49466.011733.56466.0016934328168
1653923113.41780.123112.97780.2030727468268
1743432744.471517.402743.891516.89270714867362
1851803118.361552.083118.361552.02307215198865
1956612825.381671.022825.161670.73277816409462
204330139.071743.97138.781743.6010217147660
215358485.541770.50485.361770.4344317388765
2267013087.521775.463087.281775.213036174210466
2344193068.491862.013068.161861.87302918347956
2450241294.601897.341294.241897.01125618647866
257979938.111930.96937.771930.83886186699118
2642561064.582010.361064.342010.25102619807858
2749102564.822010.422564.472010.222513198210054
285489389.882085.57389.962085.55340205810056
2942311345.082121.361344.332121.21130820927458
3061052073.642200.352073.682199.932022217010460
3146981454.522275.031454.322275.00141022488955
3294872326.712311.402327.032311.142258227613870
337766110.062326.41109.822326.2460228610080
345116769.072356.35768.572356.1673023227868
3553401274.152426.991273.802426.79123123968762
367767216.532521.59216.262521.62154249012464
3759282694.502559.422694.322559.25264825269266
3884232828.432582.592828.382582.582770254611674
3973441926.122591.371926.242591.221857256413954
4042741770.942642.681771.062642.95173026168254
414400728.022673.00727.902672.9169026447658
4246732180.982682.012180.422681.92214226527860
4374621030.022700.601029.782700.4998426609283
4451123131.532709.253131.072708.94308726808958
4550261494.502764.451493.522764.19145627307668
467834464.092796.32464.002796.0341627549684
4776121611.042906.011610.912905.921560286810276
4855053120.992918.003120.502917.953068289210652
494228216.392929.32216.252929.2518228957165
5060361229.492973.781229.232973.79118229429664
5151221904.053027.441903.693027.16186229968462
524612301.063059.96300.843059.8126530277365
535271760.013146.02759.953146.1071631168860

Appendix B

Table A2. List of planting failures greater than 2100 pixels² with sizes given in pixels² and their location in Figure 20.
Gap | Area | X | Y | XM | YM | BX | BY | Width | Length
13041897.4222.01896.8122.1486207044
231131153.0120.011152.9019.83111407840
346111793.5833.801793.1933.72175448060
429841927.3529.591927.0929.73189087244
517,4693097.9766.133098.3365.82301610172122
637111489.0040.061488.5140.081452147453
734811570.5370.021570.0569.971537446952
85539530.5481.40530.7381.33488458769
932362186.5873.422186.3672.992156466254
1063681934.4690.461933.7890.391886569668
113665809.57110.03809.33109.99778806461
1224992058.49134.442058.03133.7820341085052
1327171755.45135.971755.27136.0417261115949
144593663.66143.40663.44143.076261127662
1570872312.32144.992311.71144.84224611813254
1666032428.00156.452427.84156.3823841188876
1726042689.48167.022689.26167.2026661384658
1846552264.53213.662263.92213.7122301807068
1973933152.99233.013152.43232.96309619911367
2028142048.46248.362048.20248.2820212225552
2164352534.51260.002534.23260.04247723211556
2240461685.36324.041685.05323.7116522906666
2363351081.59326.701081.61326.6110362929370
2434212511.57330.022511.40330.0524842985665
2535272271.99332.012271.81331.9522383066852
2629702920.02359.522919.58359.7028923325656
2731311509.48371.501509.42371.8414813445756
2831821835.45394.021835.16394.0118083645460
2933082406.96393.452406.97393.3123763646058
3034822200.01405.002199.74405.0221703746062
3177601956.02429.101956.01429.1019103859287
3243282566.47433.492566.26433.5025304027262
33488965.96436.1465.36436.17244078461
3454991733.49466.011733.56466.0016934328168
3538121543.11503.931542.55503.8815094767056
3639902763.02533.012762.82532.9327265067454
3726442940.99606.452940.81606.3729205744264
382704687.46606.46686.84606.366605805452
3936971822.44608.351821.94608.2017875806956
4023761924.48638.931924.43638.7519036104358
412438661.52698.02661.18697.746386724852
4223572518.91769.402518.88769.2724957424754
4353923113.41780.123112.97780.2030727468268
4431821126.50892.981126.11893.0510998645558
4524551373.51892.551373.32892.7713528644458
4633762564.88901.162564.67901.3025348726168
472615926.07923.85925.88923.839008985252
4832481222.54926.981222.48927.1711909026650
4926912530.54989.502530.30989.4325069625055
503842453.431070.61452.901070.4942210386170
5133521645.981098.001645.441097.58161610706056
523027923.481161.55923.151161.5589611345556
5327172406.001166.042406.001166.19237811405652
5428623170.041354.423169.871354.54314413265256
5530131762.071371.971761.491372.03173313465952
5631993147.551450.123147.281450.00311614226357
5725552846.431466.012846.231465.74282214384856
5830642413.471497.952413.031498.02238314706054
5929152132.451502.522131.981502.43210414765654
6043432744.471517.402743.891516.89270714867362
6151803118.361552.083118.361552.02307215198865
6238311746.491564.991746.241565.04171015387354
633660100.981597.57100.801597.317015686260
6425482646.461594.042646.481594.05262115685153
6530212328.001596.012327.491595.96230015695655
663458710.021650.00709.841649.7867816226456
672150227.581664.96227.291665.1320616404450
6856612825.381671.022825.161670.73277816409462
69230719.491692.4219.441692.62016624061
703031369.551721.38369.081721.3234016946154
713486890.461740.67890.081740.3886217105762
724330139.071743.97138.781743.6010217147660
735358485.541770.50485.361770.4344317388765
7467013087.521775.463087.281775.213036174210466
7529702841.461799.002841.441799.04281517705358
7637731098.031828.461097.951828.29106617986460
7730482657.081844.312656.911844.04263218125064
7844193068.491862.013068.161861.87302918347956
7950241294.601897.341294.241897.01125618647866
807979938.111930.96937.771930.83886186699118
8130822840.971901.552840.781901.55281218745856
823076544.001914.48543.721914.5151618865656
8338331065.551930.641065.461930.92103219026858
8432942842.562007.642842.402007.64281819745069
8542561064.582010.361064.342010.25102619807858
8649102564.822010.422564.472010.222513198210054
8728041355.512026.511355.382026.59132920005353
883972122.452087.67122.012087.748820586860
895489389.882085.57389.962085.55340205810056
902648714.962101.60714.822101.7569020745056
9142311345.082121.361344.332121.21130820927458
9238061698.522129.161698.472129.09166720986362
933847241.502130.39240.972130.5320821006660
943450857.512153.38856.912153.3282521266554
953020352.542181.41352.032181.3932621525358
964035640.952195.64640.322195.6361021626269
9761052073.642200.352073.682199.932022217010460
9836321884.972227.561884.912227.65185222006656
9946981454.522275.031454.322275.00141022488955
10094872326.712311.402327.032311.142258227613870
1013525660.972310.49660.882310.5762822826556
1027766110.062326.41109.822326.2460228610080
10341571558.172313.961557.942313.73152122867556
1045116769.072356.35768.572356.1673023227868
10536131937.102363.451937.152363.34190323367054
10637761143.982383.531143.522383.76111223546461
10753401274.152426.991273.802426.79123123968762
1082650811.012480.48810.542480.4978524545352
109360128.492517.6028.102517.77024865864
1107767216.532521.59216.262521.62154249012464
11140872080.012532.392079.822532.30204225047656
11231311485.502533.481485.242533.37145825055657
11334723033.022549.503032.372549.23299825247052
11459282694.502559.422694.322559.25264825269266
11533062491.952572.412491.652572.32246225396061
11684232828.432582.592828.382582.582770254611674
1174069232.692589.20232.892589.2519425628057
11873441926.122591.371926.242591.221857256413954
1193955574.992624.00574.712624.1253625987852
12042741770.942642.681771.062642.95173026168254
1214400728.022673.00727.902672.9169026447658
12226581854.982673.531854.822673.64182826485454
12346732180.982682.012180.422681.92214226527860
12474621030.022700.601029.782700.4998426609283
12532362479.452688.582479.112688.67244926626154
12651123131.532709.253131.072708.94308726808958
12750261494.502764.451493.522764.19145627307668
1287834464.092796.32464.002796.0341627549684
12934131754.542847.981754.122847.88172628185860
1303015349.032862.97348.822862.9432228345458
13123552387.572869.962387.362869.52236428444852
13224892721.542885.482721.302885.37269828584854
13376121611.042906.011610.912905.921560286810276
13432811366.502915.481366.132915.30133228906850
13555053120.992918.003120.502917.953068289210652
1364228216.392929.32216.252929.2518228957165
13731072545.002934.012544.882934.18251429086252
13830772616.552952.972616.452952.96258829265854
13960361229.492973.781229.232973.79118229429664
14036762725.503007.132725.443007.20269429766264
1413751176.233020.24176.213020.3514329906864
14228462798.513020.002798.193020.03277029945751
14351221904.053027.441903.693027.16186229968462
14426442508.463024.002508.393024.04248329985152
1454612301.063059.96300.843059.8126530277365
1462742789.013067.12788.953067.1276530374959
1472588579.553070.01579.333070.0455430425254
14837041472.133077.961471.793077.59143830507056
1495271760.013146.02759.953146.1071631168860
1503972554.443149.65554.113149.7252031206860
15138921096.003155.351096.083155.36106231266858
15226973186.943168.423186.703168.64316231354765
15339111470.393170.931469.953170.58143431406862
15424083008.063186.513008.033186.49298031645845
  • Annex A—Command used for YOLOv5 Training [15]:
  • The command:
  • !python /content/yolov5/train.py --img 416 --batch 16 --epochs 300 --data falha.yaml --weights yolov5s.pt --cache
  • where:
  • --img 416:
  • Sets the input image size during training to 416 × 416 pixels. Larger image sizes can lead to a better accuracy, but they require more GPU memory and training time. Smaller image sizes may result in faster training but could sacrifice some detection performance.
  • --batch 16:
  • Defines the batch size used during training. The batch size defines how many images are processed in one forward and backward pass. A larger batch size may speed up training but requires more GPU memory. Smaller batch sizes might be slower but can be beneficial if there is limited GPU memory.
  • --epochs 300:
  • Sets the number of training epochs, i.e., the number of times the model goes through the entire training dataset. Training for more epochs might lead to better convergence and accuracy, but there is a risk of overfitting if the model is trained for too long.
  • --data falha.yaml:
  • Specifies the path to the data configuration file (falha.yaml in this case), which contains information about the dataset, including the paths to image and label files, the number of classes, etc.
  • content of falha.yaml:
  • path: ../. # dataset root dir
  • train: ./train_data35/images/train # train images (relative to ‘path’)
  • val: ./train_data35/images/val # val images (relative to ‘path’)
  • test: # test images (optional)
  • # Classes
  • names:
  • 0: gap
  • 1: plant
  • # Download script/URL (optional)
  • --weights yolov5s.pt:
  • Specifies the path to the initial weights file to initialize the YOLOv5 model before training. In this case, it starts with the yolov5s.pt weights, which represent the “small” version of the YOLOv5 model.
  • --cache:
  • This parameter enables caching during data loading. Caching can speed up the training process, especially when using large datasets. Cached data are stored on the disk for faster retrieval during subsequent epochs.
  • Annex B—Command used for YOLOv5 Detect [15]:
  • The command:
  • !python /content/yolov5/detect.py --weights /content/best.pt --img 1309 --conf 0.25 --source /content/tambau_1309.jpg --hide-conf --hide-labels --class 0 --iou 0
  • Where:
  • --weights /content/best.pt:
  • This parameter indicates the path to the model weights file to be used for detection. In this case, the model loaded is best.pt, located in the “/content” directory.
  • --img 1309:
  • Sets the size of the input image during detection. In this case, the input images will have dimensions of 1309 × 1309 pixels.
  • --conf 0.25:
  • This parameter sets the confidence threshold to filter detections during inferencing. Only detections with a confidence score above 0.25 will be considered, which is the default score.
  • --source /content/tambau_1309.jpg:
  • Specifies the path to the source image that will be used for detection. In this case, the file “tambau_1309.jpg” located in the directory “/content” will be used as input.
  • --hide-conf:
  • With this parameter, the confidence score of the detections will not be displayed in the output.
  • --hide-labels:
  • This parameter causes the labels (class names) of detections not to be displayed in the output.
  • --class 0:
  • Specifies the index of the class you want to detect. In this case, the value “0” indicates that only the class with index 0 will be detected. The index of classes is based on the order in which they were defined during training.
  • --iou 0:
  • Defines the IoU (intersection over union) threshold used for non-maximum suppression. With a value of 0, any pair of overlapping detections is reduced to the single highest-confidence box, so no overlapping boxes remain in the output.

References

  1. Marin, F.R.; Martha, G.B., Jr.; Cassman, K.G.; Grassini, P. Prospects for increasing sugarcane and bioethanol production on existing crop area in Brazil. BioScience 2016, 66, 307–316. [Google Scholar] [CrossRef] [PubMed]
  2. de Souza Assaiante, B.A.; Cavichioli, F.A. A utilização de veículos aéreos não tripulados (VANT) na cultura da cana-de-açúcar. Rev. Interface Tecnológica 2020, 17, 444–455. [Google Scholar] [CrossRef]
  3. Molin, J.P.; Veiga, J.P.S. Spatial variability of sugarcane row gaps: Measurement and mapping. Ciência Agrotecnologia 2016, 40, 347–355. [Google Scholar] [CrossRef]
  4. Maciel, L.L.L. Biomassa: Uma fonte renovável para geração de energia elétrica no Brasil. Rev. Trab. Acadêmicos-Universo Campos Goytacazes 2020, 1, 13. [Google Scholar]
  5. Molin, J.P.; Veiga, J.P.S.; Cavalcante, D.S. Measuring and Mapping Sugarcane Gaps; University of São Paulo: São Paulo, Brazil, 2014. [Google Scholar]
  6. Oliveira, M.P.D. VANT-RTK: Uma Tecnologia Precisa e Acurada Para Mapeamento de Falhas em Cana-de-açúcar; Universidade Estadual Paulista (Unesp): São Paulo, Brazil, 2023. [Google Scholar]
  7. Shukla, S.K.; Sharma, L.; Jaiswal, V.P.; Pathak, A.D.; Awasthi, S.K.; Zubair, A.; Yadav, S.K. Identification of appropriate agri-technologies minimizing yield gaps in different sugarcane-growing states of India. Sugar Tech 2021, 23, 580–595. [Google Scholar] [CrossRef]
  8. Montibeller, M.; da Silveira, H.L.F.; Sanches, I.D.A.; Körting, T.S.; Fonseca, L.M.G.; Aragão, L.E.O.e.C.e.; Picoli, M.C.A.; Duft, D.G. Identification of gaps in sugarcane plantations using UAV images. In Proceedings of the Simpósio Brasileiro de Sensoriamento Remoto, Santos, Brazil, 28–31 May 2017. [Google Scholar]
  9. Singh, S.N.; Yadav, D.V.; Singh, T.; Singh, G.K. Optimizing plant population density for enhancing yield of ratoon sugarcane (Saccharum spp) in sub-tropical climatic conditions. Indian J. Agric. Sci. 2011, 81, 571. [Google Scholar]
  10. Stolf, R. Metodologia de avaliação de falhas nas linhas de cana-de-açúcar. Stab Piracicaba 1986, 4, 22–36. [Google Scholar]
  11. Barbosa Júnior, M.R. Mapeamento de falhas em cana-de-açúcar por imagens de veículo aéreo não tripulado. Master’s Dissertation, Universidade Estadual Paulista (Unesp), São Paulo, Brazil, 2021. [Google Scholar]
  12. Barbosa Júnior, M.R.; Tedesco, D.; Corrêa, R.D.G.; Moreira, B.R.D.A.; Silva, R.P.D.; Zerbato, C. Mapping gaps in sugarcane by UAV RGB imagery: The lower and earlier the flight, the more accurate. Agronomy 2021, 11, 2578. [Google Scholar] [CrossRef]
  13. Rocha, B.M.; Vieira, G.S.; Fonseca, A.U.; Sousa, N.M.; Pedrini, H.; Soares, F. Detection of Curved Rows and Gaps in Aerial Images of Sugarcane Field Using Image Processing Techniques. IEEE Can. J. Electr. Comput. Eng. 2022, 45, 303–310. [Google Scholar] [CrossRef]
  14. Luna, I.; Lobo, A. Mapping crop planting quality in sugarcane from UAV imagery: A pilot study in Nicaragua. Remote Sens. 2016, 8, 500. [Google Scholar] [CrossRef]
  15. Ultralytics. YOLOv5. GitHub. 2021. Available online: https://github.com/ultralytics/yolov5 (accessed on 3 July 2024).
  16. Karn, A. Artificial intelligence in computer vision. Int. J. Eng. Appl. Sci. Technol. 2021, 6, 249–254. [Google Scholar] [CrossRef]
  17. Gupta, A.K.; Seal, A.; Prasad, M.; Khanna, P. Salient object detection techniques in computer vision—A survey. Entropy 2020, 22, 1174. [Google Scholar] [CrossRef] [PubMed]
  18. Diwan, T.; Anirudh, G.; Tembhurne, J.V. Object detection using YOLO: Challenges, architectural successors, datasets and applications. Multimed. Tools Appl. 2023, 82, 9243–9275. [Google Scholar] [CrossRef]
  19. Thenmozhi, K.; Reddy, U.S. Crop pest classification based on deep convolutional neural network and transfer learning. Comput. Electron. Agric. 2019, 164, 104906. [Google Scholar] [CrossRef]
  20. Joiya, F. Object detection: Yolo vs Faster R-CNN. Int. Res. J. Mod. Eng. Technol. Sci. 2022, 9, 1911–1915. [Google Scholar]
  21. Rane, N. YOLO and Faster R-CNN object detection for smart Industry 4.0 and Industry 5.0: Applications, challenges, and opportunities. 2023. Available online: https://ssrn.com/abstract=4624206 (accessed on 28 June 2024).
  22. Rueden, C.T.; Schindelin, J.; Hiner, M.C.; DeZonia, B.E.; Walter, A.E.; Arena, E.T.; Eliceiri, K.W. ImageJ2: ImageJ for the next generation of scientific image data. BMC Bioinform. 2017, 18, 529. [Google Scholar] [CrossRef] [PubMed]
  23. Ferreira, T.; Rasband, W. ImageJ User Guide; National Institutes of Health: Bethesda, MD, USA, 2011. [Google Scholar]
  24. Santos, T.T.; Koenigkan, L.V. Produção de ortomapas com VANTs e OpenDroneMap; Embrapa São Paulo: Campinas, SP, Brazil, 2018. [Google Scholar]
  25. Kameyama, S.; Sugiura, K. Effects of differences in structure from motion software on image processing of unmanned aerial vehicle photography and estimation of crown area and tree height in forests. Remote Sens. 2021, 13, 626. [Google Scholar] [CrossRef]
  26. Aishwarya, N.; Prabhakaran, K.M.; Debebe, F.T.; Reddy, M.S.S.A.; Pranavee, P. Skin cancer diagnosis with YOLO deep neural network. Procedia Comput. Sci. 2023, 220, 651–658. [Google Scholar] [CrossRef]
  27. Montalbo, F.J.P. A computer-aided diagnosis of brain tumors using a fine-tuned YOLO-based model with transfer learning. KSII Trans. Internet Inf. Syst. (TIIS) 2020, 14, 4816–4834. [Google Scholar]
  28. Ranjan, A.; Machavaram, R. Detection and localisation of farm mangoes using YOLOv5 deep learning technique. In Proceedings of the 2022 IEEE 7th International conference for Convergence in Technology (I2CT), Mumbai, India, 7–9 April 2022; pp. 1–5. [Google Scholar]
  29. Alnajjar, M. Image-based detection using deep learning and Google Colab. Int. J. Acad. Inf. Syst. Res. (IJAISR) 2021, 5, 30–35. [Google Scholar]
  30. Yang, W.; Zhang, X.; Ma, B.; Wang, Y.; Wu, Y.; Yan, J.; Liu, Y.; Zhang, C.; Wan, J.; Wang, Y.; et al. An open dataset for intelligent recognition and classification of abnormal condition in longwall mining. Sci. Data 2023, 10, 416. [Google Scholar] [CrossRef] [PubMed]
  31. Tzutalin. Tzutalin/Labelimg. 2018. Available online: https://github.com/tzutalin/labelImg (accessed on 29 July 2023).
  32. Rishi, K.; Rana, N. Particle size and shape analysis using ImageJ with customized tools for segmentation of particles. Int. J. Comput. Sci. Commun. Netw. 2015, 4, 23–28. [Google Scholar]
  33. Ramadhani, D.; Rahardjo, T.; Nurhayati, S. Automated Measurement of Haemozoin (Malarial Pigment) Area in Liver Histology Using ImageJ 1.6. In Proceedings of the 6th Electrical Power, Electronics Communication, Control and Informatics Seminar (EECCIS), Malang, Indonesia, 30–31 May 2012. [Google Scholar]
  34. Haeri, M.; Haeri, M. ImageJ plugin for analysis of porous scaffolds used in tissue engineering. J. Open Res. Softw. 2015, 3, e1. [Google Scholar] [CrossRef]
  35. O’Brien, J.; Hayder, H.; Peng, C. Automated quantification and analysis of cell counting procedures using ImageJ plugins. J. Vis. Exp. (JoVE) 2016, 117, 54719. [Google Scholar] [CrossRef]
  36. Mirabet, V.; Dubrulle, N.; Rambaud, L.; Beauzamy, L.; Dumond, M.; Long, Y.; Milani, P.; Boudaoud, A. NanoIndentation, an ImageJ Plugin for the Quantification of Cell Mechanics. In Plant Systems Biology: Methods and Protocols; Springer: New York, NY, USA, 2021; pp. 97–106. [Google Scholar]
  37. Broeke, J.; Pérez, J.M.M.; Pascau, J. Image Processing with ImageJ; Packt Publishing Ltd.: Birmingham, UK, 2015. [Google Scholar]
  38. Gallagher, S.R. Digital image processing and analysis with ImageJ. Curr. Protoc. Essent. Lab. Tech. 2014, 9, A.3C.1–A.3C.29. [Google Scholar] [CrossRef]
Figure 1. Work breakdown structure (WBS) of the gap mapping process.
Figure 2. Sugarcane field location and the drone used to take images for orthomosaic generation.
Figure 3. YOLO training after image sampling and object annotation.
Figure 4. Google Colab platform used for training YOLOv5.
Figure 5. Object annotations using LabelImg 1.8.6 software.
Figure 6. Use of YOLOv5 (detect command) to identify field gaps.
Figure 7. Example of field gap image identification.
Figure 8. (a) Two sizes of rectangles; (b) black rectangle selection; (c) rectangles with width and length identified by color code, and the distance between crop rows marked in red.
Figure 9. ImageJ script flowchart to print the field gap map.
Figure 10. Result of the RGB image split into three channels, which contain the field gap map: (a) original image, (b) red channel, (c) green channel, and (d) blue channel.
Figure 11. Process of selecting and numbering the field gaps: (a) selection of field gaps based on size; (b) numbered field gaps; (c) inversion of pixel values for subsequent mathematical operations based on the selected and numbered field gaps.
Figure 12. Division of the original image (a) into red (b), green (c), and blue (d) channels.
Figure 13. Results of subtracting the failures in the red (a), green (b), and blue (c) channels from the original image.
Figure 14. Merge of the red, green, and blue channels of the selected field gaps into a final composite that constructs the RGB mapping of the sugarcane field gaps: (a) red channel, (b) green channel, (c) blue channel, and (d) composite of the three channels.
Figure 15. (a) Selection of gaps not less than 4200 pixels²; (b) result of YOLOv5 identification; (c) field gap selection in black, discarded field gaps outlined in white.
Figure 16. Map of gaps not less than 4200 pixels².
Figure 17. Gaps identified using YOLOv5.
Figure 18. Gaps greater than 4200 pixels² selected using the ImageJ script.
Figure 19. Three wrong identifications: (a) misidentification joined with another gap; (b) misplaced gap; (c) misplaced gap joined with another gap.
Figure 20. Gaps greater than 2100 pixels² selected using the ImageJ script.
Figure 21. Three gaps not identified (false negatives).
Table 1. Lengths of lines in Figure 8.
Color | X | Y | Length (pixels) | Area (pixels²) | Length (cm)
Red | 329.00 | 213.00 | 114.54 | | 150.00
Orange | 215.50 | 103.00 | 54.08 | 2595.84 | 70.82
Brown | 190.00 | 73.00 | 48.00 | | 62.86
Orange | 154.50 | 191.50 | 61.01 | 4209.69 | 79.90
Blue | 116.50 | 218.00 | 69.00 | | 90.36
Orange | 249.00 | 218.00 | 52.15 | 2816.10 | 68.29
Green | 222.00 | 240.00 | 54.00 | | 70.72
Orange | 404.00 | 295.50 | 53.04 | 2917.73 | 69.46
Pink | 377.50 | 321.50 | 55.01 | | 72.04
Table 2. List of planting failures with sizes in pixels² and their location in the image.
Gap | Area | X | Y | XM | YM | BX | BY | Width | Length
1 | 5670 | 533.72 | 21.60 | 533.53 | 21.72 | 468 | 0 | 132 | 45
2 | 5188 | 654.63 | 133.92 | 654.34 | 133.81 | 609 | 104 | 91 | 60
3 | 4255 | 117.99 | 192.41 | 117.77 | 192.16 | 81 | 161 | 73 | 61
4 | 5189 | 1053.96 | 379.69 | 1053.62 | 379.81 | 1013 | 348 | 82 | 64
5 | 4938 | 788.24 | 397.10 | 788.13 | 397.13 | 741 | 368 | 94 | 60
6 | 13590 | 752.44 | 661.61 | 751.91 | 661.73 | 642 | 608 | 218 | 123
7 | 4710 | 173.57 | 727.64 | 173.62 | 727.65 | 130 | 695 | 85 | 66
8 | 4781 | 1184.46 | 755.00 | 1184.29 | 754.98 | 1144 | 724 | 81 | 62
9 | 6976 | 51.97 | 803.14 | 51.43 | 802.98 | 4 | 766 | 97 | 74
10 | 4686 | 527.03 | 881.00 | 526.98 | 880.94 | 488 | 850 | 78 | 62
11 | 7368 | 646.00 | 905.74 | 646.18 | 905.66 | 584 | 876 | 124 | 60
12 | 4906 | 46.46 | 933.42 | 46.17 | 933.40 | 12 | 896 | 68 | 74
13 | 4946 | 392.00 | 935.42 | 391.36 | 935.03 | 353 | 901 | 77 | 67
14 | 5235 | 164.97 | 960.40 | 164.50 | 960.14 | 126 | 926 | 78 | 68
15 | 13881 | 568.30 | 1001.60 | 568.03 | 1001.53 | 488 | 958 | 160 | 88
16 | 9342 | 1006.56 | 1024.18 | 1006.84 | 1023.92 | 950 | 982 | 115 | 84
17 | 4435 | 35.58 | 1038.10 | 35.42 | 1037.84 | 0 | 1006 | 74 | 64
18 | 4580 | 586.93 | 1107.25 | 586.55 | 1107.28 | 550 | 1073 | 72 | 67
19 | 5792 | 122.02 | 1174.49 | 122.09 | 1174.15 | 76 | 1142 | 92 | 64
20 | 5598 | 228.96 | 1201.99 | 228.83 | 1201.77 | 184 | 1170 | 90 | 65
21 | 4410 | 806.00 | 1278.01 | 806.00 | 1277.93 | 760 | 1254 | 92 | 48
