Article

Development and Assessment of a Field-Programmable Gate Array (FPGA)-Based Image Processing (FIP) System for Agricultural Field Monitoring Applications

Sabiha Shahid Antora, Young K. Chang, Tri Nguyen-Quang and Brandon Heung

1 Department of Engineering, Faculty of Agriculture, Dalhousie University, Truro, NS B2N 5E3, Canada
2 Department of Agricultural & Biosystems Engineering, South Dakota State University, Brookings, SD 57006, USA
3 Department of Plant, Food, and Environmental Sciences, Faculty of Agriculture, Dalhousie University, Truro, NS B2N 5E3, Canada
* Author to whom correspondence should be addressed.
AgriEngineering 2023, 5(2), 886-904; https://doi.org/10.3390/agriengineering5020055
Submission received: 21 March 2023 / Revised: 2 May 2023 / Accepted: 8 May 2023 / Published: 11 May 2023
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

Abstract: Field imagery is an effective way to capture the state of an entire field; yet, when accounting for image resolution and processing speed, current field inspection approaches using existing imaging systems do not always enable real-time inspection. This project develops an FPGA-based image processing (FIP) device that addresses the technical limitations of the agricultural imaging services currently available in the market and will lead to the development of a market-ready service solution. The FIP prototype developed in this study was tested in both a laboratory and an outdoor environment, using a digital single-lens reflex (DSLR) camera and a web camera, respectively, as the reference systems. The FIP system had a high accuracy, with Lin's concordance correlation coefficients of 0.99 and 0.91 against the DSLR and web camera reference systems, respectively. The proposed technology has the potential to support on-the-spot decisions, which, in turn, will improve the compatibility and sustainability of different land-based systems.

1. Introduction

The agricultural sector is increasingly challenged to feed a rising population, and there is a clear need for the optimization and sustainable intensification of global crop production [1]. However, conventional agriculture relies on manual labor, which is not only physically intensive [2] but also expensive [3]. In this digital era, agriculture has continued to evolve toward data-driven technologies, often involving global positioning systems (GPS), geographic information systems (GIS), and precision agriculture (PA) to inform seeding and harvesting practices, as well as the application of agricultural inputs [4].
Precision agriculture constitutes a suite of technologies that capture and analyze field data to inform the targeted management of farms while increasing cost-efficiency and productivity and minimizing environmental impacts [5]. For example, the real-time, site-specific management of cultivated lands can lead to productivity growth by using PA technologies for the targeted application of agrochemicals at optimal frequencies and amounts [5]. Another technological development has been the advent of agricultural robots, or 'agrobots', which may assist farmers with a wide range of operations, such as weeding, pest control, and crop harvesting [6].
One important area of research in PA involves digital image analysis, which provides the means to detect, recognize, and describe objects to support management decisions [7]. These image processing techniques are often based on the color, shape, and geometric features of target objects. Depending on the need to acquire real-time farm data, digital image analysis techniques may be used in applications such as crop row detection, canopy measurement, weed detection, fruit sorting and grading, the monitoring of plant growth and fruit defects, and the measurement of vegetative indices [8,9]. However, image processing techniques for discriminating crops and weeds based on color, shape, and texture may be computationally complex and unsuitable for real-time field applications due to the computational time required [10]. Burgos-Artizzu et al. [10] also described economic cost as another obstacle to the commercialization of real-time weed or crop detection systems.
To address these limitations, on-board processing systems based on field-programmable gate arrays (FPGAs) offer a reconfigurable, lightweight platform that may be used for real-time object identification and tracking [11,12,13,14,15]. FPGAs are flexible, may be integrated into a wide range of devices, reduce the amount of hardware required, and increase the cost-effectiveness of systems [14]. Furthermore, FPGAs are described as reconfigurable hardware because the same device may be reused for different purposes by downloading a new program. FPGAs require only hours to carry out the design-implement-test-debug cycle; in comparison, application-specific integrated circuit (ASIC) designs require days to complete the same process [13,14]. Lastly, their ability to implement parallel processing ensures that multiple local operations may be carried out simultaneously [14] and applied to image processing techniques, as described by Price et al. [15].
FPGA implementations do not yet have widespread acceptance in the vision community [14], although a few studies have included vehicle identification systems [16], real-time license plate localization systems [17], and human presence detection systems [16]. In addition, they have been tested across a range of low-resolution dimensions, for example, 180 × 180 px to 600 × 600 px [15], 128 × 128 px [11], 256 × 256 px [13], and 640 × 480 px [17,18]; however, FPGAs have the potential to be applied to imagery at resolutions higher than VGA (640 × 480 px). Hence, the development of a real-time image acquisition and processing system based on FPGA hardware for crop monitoring purposes is the major focus of this research. As the first step of the development, we started with an 800 × 600 px resolution to verify the system fabrication, while aiming to extend up to 4K imagery in the next step.
From the literature review of current image processing techniques in PA, it was found that processing images on-board while collecting them imposes computational limitations [10,19]. While FPGA hardware provides the flexibility to perform both operations on a single processor [16,17], it has not yet been tested in agricultural crop monitoring applications [20,21,22]. Therefore, this study proposes an FPGA-based image processing (FIP) system with multiple operational blocks, as a prototype, to perform computationally efficient crop monitoring at a resolution higher than VGA. The specific objectives of this study are as follows: (1) to design the high-resolution FIP system; (2) to develop a system for transferring the processed image data in real time; and (3) to evaluate the performance of the FIP system in both a lab and an outdoor environment using reference camera systems.

2. Materials and Methods

To summarize the methods of this study, image acquisition was performed using a mobile industry processor interface (MIPI)-based D8M camera board (Terasic Inc.; Hsinchu City, Taiwan) with a resolution of 800 × 600 px. The captured imagery was processed using a DE2-115 FPGA development board (Terasic Inc.; Hsinchu City, Taiwan) from the Altera Cyclone IV FPGA family (Intel Inc.; Santa Clara, CA, USA). Real-time processing utilized various module blocks of the Altera Cyclone IV processor, which applied three different color ratio filters and a threshold filter. The processed data consisted of the number of pixels detected, and the detected pixel area was transferred to another computing device in real time following a serial communication protocol. The performance of the proposed system was evaluated in both the lab and outdoor environments, where the real-time data were compared with manually processed images of the same target, captured by a DSLR camera as the reference in the lab environment and a web camera as the reference in the outdoor environment.

2.1. Overview of the FIP System

An Altera Cyclone IV FPGA device with a DE2-115 development board was selected as the main controller of the total system. It had a universal serial bus (USB) blaster (onboard) port to download programs for specific applications. Furthermore, a part of the 2 MB static random-access memory (SRAM) and 128 MB synchronous dynamic random-access memory (SDRAM) memory buffers were used primarily to store the camera sensor outputs that needed to be processed. In addition, a few pushbuttons and slide switches were used to control the algorithms for image processing. To display the processed image, a video graphics array (VGA) and 8-bit high-speed triple digital-to-analog converter (DAC) integrated circuits (ICs) with a VGA-out connector were used as output pipelines. A Recommended Standard 232 (RS232) transceiver IC with a 9-pin connector and flow control was used to transfer the detected pixel area in real-time. Another important component of the DE2-115 development board was a 40-pin expansion header with diode protection and a General-Purpose Input/Output (GPIO) interface to communicate with the camera board.
The overall functionality of this FIP system is summarized in Figure 1. Firstly, the system acquired real-time camera sensor input by using look-up tables. Following this, the 10-bit serial image stream was written to the SDRAM memory buffer and read using an application-specific image resolution mask. The read image frame from the memory buffer was then buffered again using horizontal and vertical control signals in the SRAM line buffer. Next, a high-level control signal was used to convert the raw 10-bit image data from the line buffer into a 24-bit RGB image. To read pixels within the image frame, the control signal depended on the VGA clock, vertical synchronization signal, and read request control signal generated from the VGA controller system.
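For illustration, the raw-to-RGB conversion step can be mimicked in software. The following is a minimal NumPy sketch, assuming an RGGB Bayer layout and a simple 2 × 2 block demosaic that halves the frame size (the hardware instead interpolates on streaming line buffers to keep the full 800 × 600 px frame); it is an illustration of the operation, not the Verilog module itself.

```python
import numpy as np

def bayer10_to_rgb24(raw: np.ndarray) -> np.ndarray:
    """Convert a 10-bit RGGB Bayer frame (H x W) to 24-bit RGB by
    collapsing each 2x2 Bayer cell into one pixel. A simplification:
    the FPGA interpolates on line buffers and keeps full resolution."""
    r  = raw[0::2, 0::2].astype(np.uint32)   # red sites
    g1 = raw[0::2, 1::2].astype(np.uint32)   # first green site
    g2 = raw[1::2, 0::2].astype(np.uint32)   # second green site
    b  = raw[1::2, 1::2].astype(np.uint32)   # blue sites
    g = (g1 + g2) // 2                       # average the two greens
    # Scale 10-bit values (0-1023) down to 8 bits per channel (0-255).
    rgb = np.stack([r >> 2, g >> 2, b >> 2], axis=-1)
    return rgb.astype(np.uint8)

# Example: a synthetic 600 x 800 ten-bit frame.
frame = np.random.randint(0, 1024, size=(600, 800), dtype=np.uint16)
print(bayer10_to_rgb24(frame).shape)         # (300, 400, 3)
```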
The image processing unit utilized the 24-bit RGB images and processed them in two steps by using the R/G/B ratio filter and the thresholding filter. As the image processing system used three different basic color detection algorithms (R-ratio, G-ratio, and B-ratio filters), a switching logic was developed with the combination of four switches from the development board to provide four different output images. Finally, the processing unit provided two different outputs: the original or binary images on the VGA monitor; and the number of pixels, detected as red, green, or blue, to the external computing device for data analysis.
Several software packages were used to design, program, and extract the image data needed for developing the FIP system (Figure 2). Once the design was completed, the Quartus Prime software (Intel Inc.; Santa Clara, CA, USA) was used to generate an SRAM object file (SOF) in a file directory. The SOF contained the data for configuring all SRAM-based Altera devices supported by the Quartus Prime software. The USB blaster circuitry provided the program download interface to the Altera device's processor using a Type A-B USB cable. Finally, the FPGA hardware was configured with the developed design by using the Programmer Tool.

2.2. Fabrication of System

2.2.1. Image Acquisition Unit

The image acquisition hardware consisted of a digital camera development package (Figure 3), the D8M, which included an MIPI camera module and an MIPI decoder that provided 10-bit parallel Bayer-pattern image data. The MIPI camera module outputted 4 lanes of MIPI interface image data, which could be converted to parallel data by passing through the MIPI decoder IC to the GPIO interface. The D8M board was connected to the DE2-115 FPGA development board via a 2 × 20 GPIO pin connector. Both the MIPI camera module and the MIPI decoder of the D8M camera were controlled by the FPGA using an inter-integrated circuit (I2C) communication protocol.
The D8M board was equipped with an MIPI image sensor, the OV8865, with a lens size of 1/3.2" and a pixel size of 1.4 × 1.4 μm (OmniVision Inc.; Santa Clara, CA, USA). The OV8865 sensor could acquire an RGB image with a 70° view angle. It should be noted that the sensor had additional flexibility in acquiring imagery at multiple resolutions using the windowing and cropping functions, while maintaining the corresponding field of view.
For programming the real-time image acquisition unit, the Verilog hardware description language was used in the Quartus Prime Lite 18.0 software tool. To change the output image resolution, the OV8865 needed to be configured via I2C so the camera could output the desired image format. Furthermore, the analog gain, digital gain (i.e., the red, green, and blue channel gains), and exposure gain were chosen through several experiments and adjustments for the required 800 × 600 px resolution. The clock frequency required for acquiring the imagery was determined by adjusting the parameters in Quartus Prime's IP resources. For this study, an output clock of 40 MHz was used to acquire 800 × 600 px imagery at 40 frames per second.
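The relationship between the pixel clock and the frame rate can be sanity-checked with a short calculation: the frame rate is the pixel clock divided by the total number of clock cycles per frame, including blanking. The blanking totals below are illustrative assumptions chosen to reproduce the reported 40 fps, not the actual OV8865 register settings.

```python
# Frame rate = pixel clock / total clock cycles per frame (incl. blanking).
# h_total and v_total are assumed values, not the OV8865 timing registers.
pixel_clock_hz = 40_000_000   # 40 MHz output clock used in this study
h_total = 1000                # 800 active px + assumed horizontal blanking
v_total = 1000                # 600 active lines + assumed vertical blanking

fps = pixel_clock_hz / (h_total * v_total)
print(f"{fps:.1f} frames per second")        # 40.0 with these assumptions
```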

2.2.2. Image Processing Unit

The image processing hardware consisted of the DE2-115 development board. No additional hardware was used for the processing except for the image processing filter pipeline between the raw image to RGB converter and the VGA display controller. The raw image to RGB converter received a 10-bit raw image output from the D8 M camera board and converted that into 24-bit RGB images.
After establishing communication between the D8M camera board and the DE2-115 FPGA board (Figure 4), the raw image was converted to RGB image data consisting of three color components (i.e., red, green, and blue) using a high-level logic control derived from the VGA display controller module. These three color components were used to display the original RGB image of the object, which was placed in front of the D8M camera board and viewed on the VGA display monitor. The image processing unit inputted the 24-bit RGB image data (i.e., the 8-bit R, G, and B components of the color image) and applied one of the color ratio filters (R-ratio, G-ratio, or B-ratio) followed by a threshold filter on the R, G, and B color components. However, only one of four processing operations (i.e., the original RGB image, the binary image of red objects, the binary image of green objects, or the binary image of blue objects) could be performed at a time using the developed switching logic. A sample of the original object, its original RGB image, and the detected binary image is shown in Figure 5.
Previous studies have used the G-ratio formula (255 × G)/(R + G + B) for 24-bit RGB image analysis in wild blueberry fields for the spot-application of herbicide [23]. This formula was modified to produce the R-ratio and B-ratio filters, (255 × R)/(R + G + B) and (255 × B)/(R + G + B), respectively. A threshold intensity of 90 was selected for each color ratio filter to produce a binary image, with the detected area rendered white by setting the processed R, G, and B output color components (Ro, Go, and Bo) to the maximum intensity, 255. The final formulas for the three color detection techniques in the real-time image processing unit are shown in Equations (1)–(3):
R-ratio: if R > 0 and (255 × R)/(R + G + B) > 90, then Ro, Go, and Bo = 255; otherwise 0  (1)
G-ratio: if G > 0 and (255 × G)/(R + G + B) > 90, then Ro, Go, and Bo = 255; otherwise 0  (2)
B-ratio: if B > 0 and (255 × B)/(R + G + B) > 90, then Ro, Go, and Bo = 255; otherwise 0  (3)
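Equations (1)–(3) translate directly into array operations. The following is a minimal NumPy sketch of one ratio filter applied to a 24-bit RGB frame; it reproduces in software the binary output image and the detected-pixel count that the FPGA evaluates per pixel in hardware.

```python
import numpy as np

def ratio_filter(rgb: np.ndarray, channel: int, threshold: int = 90):
    """Apply the R-, G-, or B-ratio filter of Equations (1)-(3)
    (channel 0, 1, or 2) to a 24-bit RGB image (H x W x 3, uint8).
    Returns the binary image and the number of detected pixels."""
    c = rgb.astype(np.float64)
    total = np.maximum(c.sum(axis=-1), 1.0)   # avoid division by zero
    ratio = 255.0 * c[..., channel] / total
    mask = (rgb[..., channel] > 0) & (ratio > threshold)
    binary = np.zeros_like(rgb)
    binary[mask] = 255                        # Ro, Go, and Bo set to 255
    return binary, int(mask.sum())

# Example: count pixels detected as green in an 800 x 600 px frame.
frame = np.random.randint(0, 256, size=(600, 800, 3), dtype=np.uint8)
binary_img, detected = ratio_filter(frame, channel=1)
print(detected)
```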
The developed switching logic used four switches to select one of the four image processing operations (Table 1). To display the original RGB image of the region of interest (ROI), all four switches were set to low. To select any of the R, G, or B color detection techniques, switch 3 was set to high together with the corresponding switch 0, 1, or 2; a software model of this logic is sketched below.
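The sketch models the decode in Table 1 only; the actual implementation is Verilog.

```python
# Maps the four slide-switch states to the selected processing operation,
# following Table 1. Any other combination is treated as undefined.
def decode_switches(sw0: int, sw1: int, sw2: int, sw3: int) -> str:
    table = {
        (0, 0, 0, 0): "original color image",
        (1, 0, 0, 1): "binary image of red objects",
        (0, 1, 0, 1): "binary image of green objects",
        (0, 0, 1, 1): "binary image of blue objects",
    }
    return table.get((sw0, sw1, sw2, sw3), "undefined switch state")

print(decode_switches(0, 1, 0, 1))   # binary image of green objects
```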

2.2.3. Real-Time Data Transfer Unit

The number of pixels detected as R, G, or B by the respective color ratio filters needed to be read out of the system. Each time a pixel in an image frame satisfied the specified color detection formula (Equations (1)–(3)), the corresponding pixel was changed from a color pixel to a white pixel and counted as a detected pixel inside the ROI. When a pixel did not satisfy the formula, it was rendered as a black pixel; hence, a binary image of the ROI was produced. After completing the real-time processing of one frame, the image processing unit provided two types of data for two different outputs. Firstly, the R, G, and B components of the binary image, controlled by the VGA controller, were displayed on the VGA monitor. Secondly, the total number of pixels in the image frame that satisfied the specific color detection formula was determined.
A real-time data transfer unit was developed to transfer the total number of detected pixels to an external processor to record the percentage of an area that was detected as R, G, or B. Here, the universal asynchronous receiver and transmitter (UART) communication protocol, along with the RS232 standard for serial communication, were used (Figure 6).
The transmitter software was designed using the Quartus Prime Lite 18.0 programming tool on a personal computer (PC). For this communication channel, two modules, Transmit Trigger and RS232 Transmitter, were created. The Transmit Trigger module inputted the six-digit pixel count as six bytes and sent it to the RS232 Transmitter module byte by byte, maintaining a one-byte time interval; a baud rate of 115,200 was used in this study, and the UART module ran on a 50 MHz clock. The receiver software was designed using the Python programming language. The Python 3.8.8 application package (Python Software Foundation; Wilmington, DE, USA) was installed on the receiver PC using the Anaconda Navigator, which was also used to manage several Conda packages, such as Spyder and PySerial.
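For illustration, a receiver-side sketch using PySerial is shown below. The port name and the assumption that the six digits arrive as ASCII bytes are ours for the sake of a runnable example; they are not details taken from the original receiver source.

```python
import serial  # PySerial, one of the Conda packages used in this study

# Receiver-side sketch (port name and ASCII framing are assumptions):
# the FPGA transmits the detected pixel count as six digit bytes at
# 115,200 baud, one byte at a time.
ser = serial.Serial(port="COM3", baudrate=115200, timeout=1)

while True:
    packet = ser.read(6)              # one six-byte pixel-count message
    if len(packet) == 6:
        detected_pixels = int(packet.decode("ascii"))
        print(detected_pixels)
```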

2.3. Testing of the FIP System—Lab Environment

The experimental setup for the lab evaluation of the FIP system comprised the DE2-115 FPGA development board, the D8M camera board, the receiver PC, a VGA display monitor, a custom-built wooden frame, an additional direct current (DC) light source with an SMD2835 light-emitting diode (LED; Vision Global Media Group Inc.; Waterloo, ON, Canada), and a digital lux meter from Aoputtriver® (Figure 7). The wooden frame consisted of a 122 × 61 cm base on which to place the test object and a 152.4 × 5 cm vertical board on which to mount the DE2-115 FPGA board along with the D8M camera board.
To ensure consistent lighting conditions, the DC light and the alternating current (AC) lights installed in the lab ceiling were used. Before testing the system, the same light intensity, 600–601 lux, was ensured for each object, with a room temperature of 21–22 °C. For evaluation purposes, several objects with different structures were formed using 28 × 22 cm color sheets. The colors included re-entry red, gamma green, and blast-off blue (Astrobrights Inc.; Alpharetta, GA, USA; Figure 8).
To test the image processing unit, 16 objects with different shapes were made by resizing the three-color sheets (Table 2). In this experiment, rectangle (RA)-, triangle (T)-, circle (C)-, square (S)-, diamond (D)-, and oval (O)-shaped objects were used with three different colors (R, G, and B) for each [24].

2.3.1. Data Collection Using the FIP System

The FIP system was mounted 99 cm above the flat surface of the custom-built wooden frame for data collection, where the different objects were placed for imaging. Images were collected covering a ground area of 27.5 × 21.5 cm. During the data collection period, 10 sets of pixel data were recorded for each of the 16 objects, resulting in 160 values for each color detection algorithm.

2.3.2. Acquisition of Reference Data

To compare the performance of the developed FIP system, a Canon EOS 600D DSLR camera (Canon Inc.; Tokyo, Japan) with a Canon EFS Lens EF-S 55–250 mm f/4–5.6 IS II (Canon Inc.; Tokyo, Japan) was used. During the acquisition of the reference images, the same experimental setup as the FPGA data collection was maintained. Here, the F-stop, exposure time, and ISO speed of the camera were maintained at f/5.6, 1/30 s, and ISO-200, respectively.
After collecting all the reference images, they were cropped and resized to match the ROI area and resolution of the FPGA imaging system. The Adobe Photoshop CC 2019 software (Adobe Inc.; San Jose, CA, USA) was used to make these modifications so the images matched the 27.5 × 21.5 cm ground area and the 800 × 600 px image resolution, and all the reference images were saved in a file directory for data analysis. Lastly, the Python programming tool and the corresponding color detection formulas were applied to the 160 reference images to determine the pixel areas, which were saved in a text file.
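A batch version of this post-processing step might look like the following sketch, where the directory layout and file naming are hypothetical and the counting function mirrors Equations (1)–(3):

```python
from pathlib import Path
import numpy as np
from PIL import Image  # Pillow

def ratio_count(rgb: np.ndarray, channel: int, threshold: int = 90) -> int:
    """Count pixels passing the Equations (1)-(3) test for one channel."""
    c = rgb.astype(np.float64)
    total = np.maximum(c.sum(axis=-1), 1.0)   # avoid division by zero
    mask = (rgb[..., channel] > 0) & (255.0 * c[..., channel] / total > threshold)
    return int(mask.sum())

# Hypothetical layout: the cropped/resized 800 x 600 px reference images
# are stored as PNG files in one directory.
ref_dir = Path("reference_images")
with open("reference_pixel_areas.txt", "w") as out:
    for path in sorted(ref_dir.glob("*.png")):
        rgb = np.array(Image.open(path).convert("RGB"))
        for channel, name in enumerate(("R", "G", "B")):
            out.write(f"{path.name}\t{name}\t{ratio_count(rgb, channel)}\n")
```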

2.4. Testing of the FIP System—Outdoor Environment

To evaluate the effectiveness of the FIP system in the outdoor environment, the system was tested at the Agricultural Campus of Dalhousie University, Truro, Canada (45.374° N, 63.264° W). The data collection unit was placed as a stationary unit, which included the following: the FIP system, installed on the custom-built T-shaped wooden frame and placed on top of a tripod; a battery and inverter to supply power; the PC to store collected data; a Live Streamer CAM 313 (PW313) 1080p web camera (AVerMedia Inc.; New Taipei City, Taiwan) to collect the reference images; and other necessary cables (Figure 9). The D8M camera was placed 29 cm below the tripod top and 121 cm above the object, and the web camera was placed 82 cm below the tripod top and 68 cm above the object. Two legs of the tripod were 152.4 cm in length, and one was 139.70 cm. The container that carried the objects was 34.29 × 29.21 cm, and the wooden frame used to maintain the 800 × 600 px image resolution of the FIP system from 121 cm was 30 × 22.5 cm.
The test date was selected based on the weather conditions; the test occurred on a bright, clear day with a temperature of 15 °C, a wind speed of 17 km/h, a humidity of 72%, and an atmospheric pressure of 100.6 kPa. A location was selected with consistent shade and a lighting intensity of 3900–4000 lux. For data collection purposes, 22 live lettuce (Lactuca sativa L. var. longifolia) plants were collected from the field and placed, with their soil, inside the container to create a field prototype and avoid system movement during this primary validation stage. Lettuce was selected due to its popularity within cropping systems and the ease of validating its broad leaves. The number of plants was increased from the lab test sample number (n = 16) to account for the variability of outdoor lighting conditions and wind. For each plant, 10 samples of the FPGA-detected pixel area were collected using the G-ratio detection formula and the Python programming tool, along with 10 reference images using the web camera. The 220 processed data points from the FIP system were saved in a text file. Next, the 220 reference images were cropped using the custom-built blue frame to match the ground area of the FPGA camera and processed using the same formula to detect the plant leaf area, generating a corresponding set of 220 reference data points for performance evaluation.

2.5. Performance Evaluation of the FIP System

The DSLR and web camera imagery were used to compare and evaluate the performance of the FIP system, as these two image acquisition sources have been widely used in real-time image processing systems over the past few years [25,26,27]. Since this research focused on providing a cheaper, faster, and reliable real-time image processing system alternative, the performance of the developed system was compared with high-end image acquisition systems. Statistical analysis was performed to evaluate the developed system, whereby the mean, standard deviation (SD), and the percentage root mean square error (RMSE) of the detected pixel area (Equation (4)) were the main metrics for comparison.
%RMSE = (RMSE in pixels / Total pixel area considered) × 100  (4)
For the lab evaluation of the FIP system, there were 10 samples for each of the 16 objects, totaling 160 samples from the FIP output and 160 samples from the DSLR reference system for each of the three corresponding ratio algorithms. The FIP and DSLR data were averaged over the 10 samples for each object, yielding 48 combinations for each system across the three color ratio algorithms (16 × 3 = 48). For the outdoor evaluation of the FIP system, there were 10 samples for each of the 22 plants, totaling 220 samples from the FIP and web camera systems. The FIP and web camera data were averaged over the 10 samples for each plant to obtain 22 samples for each system. These data were analyzed and compared using the G-ratio algorithm for real-time detection in an outdoor environment.
The detected areas determined from the FIP system were correlated with the areas detected using the DSLR and the web camera via regression analyses. Lin's concordance correlation coefficient (CCC) was calculated from the lab and outdoor test results and used to measure the accuracy of the FIP results [28]. For hypothesis testing, Lin [28] indicates that rather than simply testing whether the CCC is zero, it is more logical to test whether the CCC is greater than a threshold value, CCC0. The threshold was calculated using Equation (5), where Xa is the measure of accuracy calculated using Equation (6), υ² is the squared difference in means relative to the product of the two standard deviations, ω is a scale shift expressed as the ratio of the two standard deviations, ρ represents the Pearson correlation coefficient when the FIP data were regressed on the reference data, and d is the percentage loss in precision that can be tolerated [28].
CCC0 = Xa × ρ^(2d)  (5)
Xa = 2/(υ² + ω + 1/ω)  (6)
This is analogous to a non-inferiority test of the CCC. The null and alternative hypotheses are H0: CCC ≤ CCC0 (there is no significant concordance between the FIP data and the reference data) and H1: CCC > CCC0 (there is a significant concordance between the FIP data and the reference data), respectively. If CCC > CCC0, the null hypothesis is hence rejected, and the concordance of the new test procedure is established. In addition, the RMSE was calculated and used to compare the performance of the FIP system with the reference systems using the same algorithms. As the FIP system was a combination of image acquisition and image processing systems, the reference images acquired were processed pixel by pixel using Python by applying the same algorithms used in the FIP system’s image processing unit.
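For completeness, the evaluation metrics can be written out in software. The sketch below follows Equations (4)–(6) and Lin [28], assuming Equation (5) is evaluated with d expressed as the tolerated percentage loss in precision (e.g., d = 5 for a 5% loss):

```python
import numpy as np

def percent_rmse(fip: np.ndarray, ref: np.ndarray, total_area: int) -> float:
    """Equation (4): RMSE of the detected pixel areas as a percentage of
    the total pixel area considered (here 800 x 600 = 480,000 px)."""
    rmse = np.sqrt(np.mean((fip - ref) ** 2))
    return 100.0 * rmse / total_area

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient [28]."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

def ccc_threshold(x: np.ndarray, y: np.ndarray, d: float = 5.0) -> float:
    """Equations (5) and (6): CCC0 = Xa * rho**(2d), where v2 is the
    squared mean difference relative to the product of the two SDs,
    w is the ratio of the two SDs, rho is the Pearson correlation, and
    d is the tolerable % loss in precision."""
    sx, sy = x.std(), y.std()
    v2 = (x.mean() - y.mean()) ** 2 / (sx * sy)
    w = sx / sy
    xa = 2.0 / (v2 + w + 1.0 / w)
    rho = np.corrcoef(x, y)[0, 1]
    return xa * rho ** (2.0 * d)
```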

3. Results

The output of the FIP system was a text file listing the detected pixel areas from the ROI on the ground for each of the 16 objects and 22 plants selected for the lab and outdoor experiments, respectively. The detected areas, in pixels, of the selected objects were quantified using the serial data transfer module of the FIP system. The collected reference data were post-processed by applying the color ratio and threshold filters.

3.1. Testing of the FIP System—Lab Environment

The developed FIP system’s output from the lab experiment was the number of pixels detected as R, G, or B by applying the color detection algorithms. The detected pixel areas were compared with the pixel areas detected from the reference images captured by the DSLR camera. The numerical representation of the complete dataset is shown in Table 3 for three ratio filters.
From the numerical data analyses, the variability in the SD of the 16 objects from the FIP system was found to be considerably low (around 0.074% to 5.72% of the ROI considered), which implied consistent behavior of the imagery system. Thus, during the lab trials, with controlled luminance conditions (600–601 lux) and a constant distance between the camera and the objects, the FIP system performed well in terms of plain color object detection (see Table 4).
The results from the lab evaluation are shown as bar charts in Figure 10, Figure 11 and Figure 12. The bar charts of three different color ratio filters show the consistent performance of the FIP system’s image sensor compared to the DSLR reference system for the R, G, and B color object detection from a defined ROI with minimal noise.
The performance of the developed system was compared with the DSLR-based system for estimating the R-ratio, G-ratio, and B-ratio using regression analyses. The area detected using the FIP system was found to be strongly correlated with the DSLR imagery-based reference system (FIP = 1.0327 × DSLR; R² = 0.9956; RMSE = 6019.923 pixels; n = 480; p-value < 0.05), which implied that the developed system could explain 99.56% of the variability in the area detected using the DSLR (Figure 13), with a substantial accuracy (CCC = 0.9873). In addition, a 1:1 trend line was generated along with the zero-intercept regression model to visualize the performance of the FIP system against an ideal system. The FIP system performed considerably well for color object detection.
The RMSE metric was used to quantify the discrepancies between the observed area (the pixel area detected using the DSLR reference system) and the predicted area using the three color detection algorithms. The RMSE for the detected pixel area was only 1.25% of the total pixel area considered, which likewise indicated a low error. From the hypothesis testing using Equations (5) and (6), CCC0 was 0.9665 with a tolerable 5% loss of precision, whereas the CCC was 0.9873 with a 95% confidence interval of 0.9852 to 0.9893.

3.2. Testing of the FIP System—Outdoor Environment

The FIP system's output from the outdoor experiment consisted of the number of pixels detected as lettuce plant leaf area by applying the green color detection algorithm. The detected plant leaf areas were compared to the detected areas from the reference images captured by the web camera while maintaining the same experimental setup as the FIP system (Table 5). From the numerical data analyses, the variability in the SD of the 22 objects (lettuce plants) from the FIP system was found to be considerably low (around 0.023% to 0.064% of the ROI considered). The low variability of the FIP imagery system in the outdoor environment implied the same consistent behavior as in the lab environment. Thus, during the outdoor trials, the FIP system performed well in terms of plant leaf area detection.
The results from the outdoor evaluation are shown as a bar chart in Figure 14. The bar chart shows the consistent performance of the FIP system’s image sensor compared to the web camera reference system for the plant leaf area detection from a defined ROI with minimal noise.
The performance of the FIP system was compared to the web camera reference system based on the G-ratio using regression analyses. The area detected by the FIP system was strongly correlated with that of the web camera system (FIP = 0.8868 × WebCamera; R² = 0.9994; RMSE = 9409.591 pixels; n = 220; p-value < 0.05), which showed that the FIP system explained 99.94% of the variability in the area detected using the web camera, with a moderately high accuracy (CCC = 0.9101). Similar to the hypothesis testing for the lab evaluation using Equations (5) and (6), CCC0 was 0.8894 with a tolerable 5% loss of precision for the outdoor evaluation, whereas Lin's CCC = 0.9101 with a 95% confidence interval of 0.8945 to 0.9257. As CCC > CCC0, the test was statistically significant, and the null hypothesis was rejected (Figure 15).
The discrepancies between the observed and predicted areas using the G-ratio algorithm were quantified using the RMSE metric, and a low RMSE was observed for the detected pixel area in the outdoor experiment; it was only 0.7% higher than the error found in the lab experiment. This difference could have been caused by pixel noise in the images captured by the FIP's image sensor and the movement of leaves due to outdoor wind. There may also have been effects of illumination conditions, observation geometry, atmospheric phenomena, and topographic variations on the spectral signatures of the objects [29]. Although the RMSE of the FIP system was slightly higher against the web camera reference system than against the DSLR reference system, the system can still be used to predict leaf area with an empirical correction factor, especially when considering the cost-effectiveness, compactness, and computational simplicity of the web camera compared to the DSLR.

4. Discussion

The proposed FIP system was able to acquire higher-resolution imagery (800 × 600 px) than reported in the previous literature during the lab and outdoor evaluations of the system [11,13,15,17,18]. In addition, the integration of image acquisition and processing on a lightweight FPGA platform for PA crop monitoring purposes responded to the current need for real-time farm management decision support systems [19]. The FIP demonstrated its effectiveness by providing 98.73% accuracy during the lab test and 91.01% accuracy during the outdoor test (Table 6), which is similar to the detection rate achieved by Zhai et al. [17] in a related field of study, license plate detection.
Despite the success of the system, there were still data collection challenges during the experimental phase; for example, the motion effect of the researcher on the experimental setup during continuous monitoring, light reflections along the edges of the objects, and a small effect from the AC light source inside the lab. In the outdoor experiment, the results may have been affected by wind on the plant leaves, the presence of leaf shadows, plants with multiple leaves, ambient sunlight, and clouds.
From the scatter plots shown in the results section, there was a slight under- or overestimation of the object area detection during the experimental trials. The potential reasons for this include the manually adjusted exposure and brightness of the image sensor, the value chosen for the threshold filter, and luminance effects on the ROI. A slight under- or overestimation is a common discrepancy in remote sensing research [30,31]. Despite these factors, this study sought to ensure a consistent experimental setup during the data collection processes. For example, the region of interest was kept stationary for both the FIP and reference data collection. Moreover, the FIP was placed at the same height above the ground surface for all data collection with the help of the custom-built wooden frame. Overall, the FIP system showed great potential in the lab and outdoor environments for real-time object and plant detection using single, lightweight, and computationally effective FPGA hardware.

5. Conclusions

After analyzing the current methods of acquiring agricultural imagery, it was determined that a new strategy and system for real-time crop monitoring using cost-effective FPGA hardware could be a powerful solution to meet current demands in PA. Hence, a cost-effective FIP prototype was developed in this study to support future on-the-spot decisions in agriculture.
The developed FIP system was evaluated under both lab and outdoor environments. The lab evaluation was carried out against a DSLR reference system for estimating three different color ratios with 16 different objects. The area detected using the FIP system was found to be strongly correlated with the DSLR imagery-based reference system (FIP = 1.0327 × DSLR; R² = 0.9956; RMSE = 6019.923 px (1.25% of the total pixel area); n = 480; p-value < 0.05), with a substantial accuracy (CCC = 0.9873). The outdoor evaluation was performed by comparing the developed system with a web camera reference system using the G-ratio algorithm with 22 lettuce plants. Based on the accuracy metrics, the FIP system showed a strong correlation to the web camera system (FIP = 0.8868 × WebCamera; R² = 0.9994; RMSE = 9409.591 px; n = 220; p-value < 0.05), with a moderately high accuracy (CCC = 0.9101).
To address the current limitations of high-resolution imagery for real-time crop monitoring applications in PA, a real-time crop monitoring system was developed that combined a high-resolution, high-speed image acquisition unit and a processing unit on the same FPGA platform, showing great potential for on-the-spot farm management solutions in the field.
The proposed system was able to minimize the imaging limitations in digital agriculture related to computational complexity, image resolution, and deployment time, facilitating real-time, actionable management strategies in the field. This technology can be extended to other crop types, especially vegetable cropping systems, and can assist in the estimation of crop yields. Furthermore, the cost-effectiveness of this technology would be particularly beneficial for small- and medium-sized farms. To reduce human error and achieve more precise real-time data acquisition, future research should include the adaptation of higher-resolution imagery (up to 4K) with the FIP, a wide-dynamic-range global-shutter camera for image acquisition in motion, and the development of an agrobot that integrates the FIP with a real-time kinematic global positioning system and a battery source. The developed system, with a lightweight FPGA nano board, would also be suitable for integration on an unmanned aerial vehicle.

Author Contributions

Conceptualization, Y.K.C.; Data curation, S.S.A.; Formal analysis, S.S.A. and Y.K.C.; Funding acquisition, Y.K.C.; Investigation, S.S.A. and Y.K.C.; Methodology, S.S.A. and Y.K.C.; Project administration, Y.K.C.; Resources, Y.K.C.; Software, S.S.A.; Supervision, Y.K.C.; Validation, S.S.A. and Y.K.C.; Writing—Original draft, S.S.A.; Writing—Review & editing, Y.K.C., T.N.-Q. and B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants Program (RGPIN-2017-05815), the MITACS Accelerate internship program (IT20902), and the USDA National Institute of Food and Agriculture Hatch (3AH777) and Hatch-Multistate (3AR730) programs.

Data Availability Statement

No data are available.

Acknowledgments

The authors would like to acknowledge the administrative and technical support of Travis Esau and Ahmad Al-Mallahi.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. FAO. Sustainable Crop Production Intensification; Food and Agriculture Organization of the United Nations: Rome, Italy, 2021. [Google Scholar]
  2. Statistics Canada. Change in Total Area of Land in Crops; Statistics Canada: Ottawa, ON, Canada, 2018. [Google Scholar]
  3. Statistics Canada. Employee Wages by Occupation; Statistics Canada: Ottawa, ON, Canada, 2021. [Google Scholar]
  4. Statistics Canada. Growing Opportunity through Innovation in Agriculture; Statistics Canada: Ottawa, ON, Canada, 2017. [Google Scholar]
  5. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef]
  6. Valle, S.S. Agriculture 4.0; Food and Agriculture Organization of the United Nations: Rome, Italy, 2020. [Google Scholar]
  7. Schellberg, J.; Hill, M.J.; Gerhards, R.; Rothmund, M.; Braun, M. Precision agriculture on grassland: Applications, perspectives and constraints. Eur. J. Agron. 2008, 29, 59–71. [Google Scholar] [CrossRef]
  8. Saxena, L.; Armstrong, L. A survey of image processing techniques for agriculture. In Proceedings of the Asian Federation for Information Technology in Agriculture; Australian Society of Information and Communication Technologies in Agriculture: Perth, WA, Australia, 2014; pp. 401–413. [Google Scholar]
  9. Vibhute, A.; Bodhe, S.K. Applications of Image Processing in Agriculture: A Survey. Int. J. Comput. Appl. 2012, 52, 34–40. [Google Scholar] [CrossRef]
  10. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346. [Google Scholar] [CrossRef]
  11. Ramirez-Cortes, J.M.; Gomez-Gil, P.; Alarcon-Aquino, V.; Martinez-Carballido, J.; Morales-Flores, E. FPGA-based educational platform for real-time image processing experiments. Comput. Appl. Eng. Educ. 2013, 21, 193–201. [Google Scholar] [CrossRef]
  12. Johnston, C.T.; Gribbon, K.T.; Bailey, D.G. Implementing image processing algorithms on FPGAs. In Proceedings of the Eleventh Electronics New Zealand Conference, ENZCon’04, Palmerston North, New Zealand, 15 November 2004; pp. 118–123. [Google Scholar]
  13. Bannister, R.; Gregg, D.; Wilson, S.; Nisbet, A. FPGA implementation of an image segmentation algorithm using logarithmic arithmetic. In Proceedings of the 48th Midwest Symposium on Circuits and Systems, Cincinnati, OH, USA, 7 August 2005; pp. 810–813. [Google Scholar]
  14. MacLean, W.J. An evaluation of the suitability of FPGAs for embedded vision systems. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)-Workshops, San Diego, CA, USA, 21 September 2005; p. 131. [Google Scholar]
  15. Price, A.; Pyke, J.; Ashiri, D.; Cornall, T. Real time object detection for an unmanned aerial vehicle using an FPGA based vision system. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 2006, ICRA 2006, Orlando, FL, USA, 15 May 2006; pp. 2854–2859. [Google Scholar]
  16. Moshnyaga, V.G.; Hasimoto, K.; Suetsugu, T. FPGA design for user’s presence detection. In Proceedings of the 2008 15th IEEE International Conference on Electronics, Circuits and Systems, St. Julian’s, Malta, 31 August 2008; pp. 1316–1319. [Google Scholar]
  17. Zhai, X.; Bensaali, F.; Ramalingam, S. Real-time license plate localisation on FPGA. In Proceedings of the CVPR 2011 Workshops; IEEE: Golden, CO, USA, 2011; pp. 14–19. [Google Scholar]
  18. Cointault, F.; Journaux, L.; Rabatel, G.; Germain, C.; Ooms, D.; Destain, M.F.; Gorretta, N.; Grenier, G.; Lavialle, O.; Marin, A. Texture, Color and Frequential Proxy-Detection Image Processing for Crop Characterization in a Context of Precision Agriculture; InTech Open: Rijeka, Croatia, 2012. [Google Scholar]
  19. Saddik, A.; Latif, R.; Elhoseny, M.; Ouard, A.E. Real-time evaluation of different indexes in precision agriculture using a heterogeneous embedded system. Sustain. Comput. Inform. Syst. 2021, 30, 100506. [Google Scholar] [CrossRef]
  20. Gonzalez, C.; Mozos, D.; Resano, J.; Plaza, A. FPGA implementation of the N-FINDR algorithm for remotely sensed hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. 2011, 50, 374–388. [Google Scholar] [CrossRef]
  21. El-Medany, W.M.; El-Sabry, M.R. GSM-based remote sensing and control system using FPGA. In Proceedings of the 2008 International Conference on Computer and Communication Engineering, Karur, Tamil Nadu, India, 13 May 2008; pp. 1093–1097. [Google Scholar]
  22. González, C.; Sánchez, S.; Paz, A.; Resano, J.; Mozos, D.; Plaza, A. Use of FPGA or GPU-based architectures for remotely sensed hyperspectral image processing. Integration 2013, 46, 89–103. [Google Scholar] [CrossRef]
  23. Chattha, H.S.; Zaman, Q.U.; Chang, Y.K.; Read, S.; Schumann, A.W.; Brewster, G.R.; Farooque, A.A. Variable rate spreader for real-time spot-application of granular fertilizer in wild blueberry. Comput. Electron. Agric. 2014, 100, 70–78. [Google Scholar] [CrossRef]
  24. Abbadi, N.E.; Saad, L.A. Automatic detection and recognize different shapes in an image. Int. J. Comput. Sci. Issues IJCSI 2013, 10, 162. [Google Scholar]
  25. Das, A.K. Development of an Automated Debris Detection System for Wild Blueberry Harvesters using a Convolutional Neural Network to Improve Food Quality. Master’s Thesis, Dalhousie University, Truro, NS, Canada, 2020. [Google Scholar]
  26. Rehman, T.U.; Zaman, Q.U.; Chang, Y.K.; Schumann, A.W.; Corscadden, K.W.; Esau, T.J. Optimising the parameters influencing performance and weed (goldenrod) identification accuracy of colour co-occurrence matrices. Biosyst. Eng. 2018, 170, 85–95. [Google Scholar] [CrossRef]
  27. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. Effect of directional augmentation using supervised machine learning technologies: A case study of strawberry powdery mildew detection. Biosyst. Eng. 2020, 194, 49–60. [Google Scholar] [CrossRef]
  28. Lin, L. Assay validation using the concordance correlation coefficient. Biometrics 1992, 48, 599–604. [Google Scholar] [CrossRef]
  29. Teillet, P.M. Image correction for radiometric effects in remote sensing. Int. J. Remote Sens. 1986, 7, 1637–1651. [Google Scholar] [CrossRef]
  30. Silván-Cárdenas, J.L.; Wang, L. Sub-pixel confusion–uncertainty matrix for assessing soft classifications. Remote Sens. Environ. 2008, 112, 1081–1095. [Google Scholar] [CrossRef]
  31. Sayer, A.M.; Govaerts, Y.; Kolmonen, P.; Luffarelli, M.; Mielonen, T.; Patadia, F.; Popp, T.; Povey, A.C.; Stebel, K.; Witek, M.L. A review and framework for the evaluation of pixel-level uncertainty estimates in satellite aerosol remote sensing. Atmos. Meas. Tech. 2020, 13, 373–404. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the FPGA-based real-time image processing system.
Figure 2. FPGA configuration chain using the Quartus Prime programmer.
Figure 3. D8M board block diagram.
Figure 4. DE2-115 development board connected with the D8M camera board.
Figure 5. Binary image of the region of interest (ROI) using the color filter and image acquisition hardware.
Figure 6. UART communication channel for real-time data transfer.
Figure 7. Experimental setup of the FIP system for lab evaluation.
Figure 8. Test shapes considered for object detection (RA, T, C, S, D, and O denote rectangle, triangle, circle, square, diamond, and oval shapes; the letters a–r distinguish the shape and color of the objects).
Figure 9. Outdoor data collection setup to validate the FIP system in an outdoor environment.
Figure 10. Comparison between pixels detected as red using the FIP system and DSLR reference imaging system.
Figure 11. Comparison between pixels detected as green using the FIP system and DSLR reference imaging system.
Figure 12. Comparison between pixels detected as blue using the FIP system and DSLR reference imaging system.
Figure 13. Correlation between the ground truth detected area using DSLR and FIP.
Figure 14. Comparison between pixels detected as green (i.e., plant leaf area) using the FIP and web camera imaging systems.
Figure 15. Correlation between the detected area using the FIP system and the web camera.
Table 1. Switch control logic for the desired output.

Index | Switch0 | Switch1 | Switch2 | Switch3 | Filtered Output
1 | 0 | 0 | 0 | 0 | Original Color Image
2 | 1 | 0 | 0 | 1 | Detected Binary Image of Red Object
3 | 0 | 1 | 0 | 1 | Detected Binary Image of Green Object
4 | 0 | 0 | 1 | 1 | Detected Binary Image of Blue Object
Table 2. Object formation with shapes and colors for the performance evaluation of the R-ratio, G-ratio, and B-ratio filters.

Object | Shapes and Colors (per Figure 8)
1 | a
2 | b
3 | c
4 | a, b, c
5 | d
6 | e
7 | f
8 | d, e, f
9 | a, g, m
10 | b, h, n
11 | c, i, o
12 | a, g, m, b, h, n, c, i, o
13 | d, j, p
14 | e, k, q
15 | f, l, r
16 | e, j, r
Table 3. Average of pixels detected from the ROI along with the standard deviation for 16 objects using the three color ratio filters.

Object | Red Ratio: FPGA Data | Red Ratio: DSLR Data | Green Ratio: FPGA Data | Green Ratio: DSLR Data | Blue Ratio: FPGA Data | Blue Ratio: DSLR Data
1 | 134,151.80 ± 834.41 | 130,136.40 ± 585.13 | 129,375.30 ± 117.15 | 128,983.70 ± 416.84 | 125,053.40 ± 224.35 | 128,860.60 ± 522.42
2 | 95,945.29 ± 559.21 | 90,020.60 ± 433.29 | 91,250.84 ± 180.95 | 89,730.70 ± 430.43 | 87,393.63 ± 210.37 | 89,642.60 ± 243.12
3 | 83,379.30 ± 724.86 | 76,736.70 ± 239.39 | 78,565.35 ± 58.75 | 76,243.10 ± 185.45 | 73,506.82 ± 4204.90 | 76,120.40 ± 226.45
4 | 85,048.98 ± 150.66 | 77,680.10 ± 242.57 | 80,049.29 ± 112.62 | 77,401.80 ± 491.16 | 75,687.72 ± 123.49 | 77,395.10 ± 346.25
5 | 88,135.14 ± 544.02 | 80,491.30 ± 316.36 | 83,603.98 ± 257.07 | 80,598.40 ± 213.01 | 79,938.65 ± 408.07 | 81,393.40 ± 466.21
6 | 58,291.06 ± 846.56 | 49,540.60 ± 403.84 | 54,033.82 ± 74.60 | 50,168.00 ± 137.73 | 48,966.72 ± 85.94 | 49,049.80 ± 141.46
7 | 52,463.82 ± 936.82 | 43,343.50 ± 314.96 | 47,350.33 ± 49.76 | 43,074.30 ± 197.56 | 43,262.76 ± 82.77 | 42,958.90 ± 143.41
8 | 177,685.10 ± 456.34 | 173,227.40 ± 594.43 | 174,745.90 ± 640.63 | 173,506.00 ± 705.79 | 169,075.20 ± 1092.86 | 173,244.60 ± 703.39
9 | 33,582.16 ± 664.40 | 23,505.40 ± 153.50 | 28,999.28 ± 206.07 | 23,541.60 ± 112.58 | 24,382.31 ± 182.88 | 23,467.50 ± 131.55
10 | 42,318.15 ± 1218.94 | 29,399.40 ± 285.42 | 35,727.60 ± 69.68 | 29,252.10 ± 104.37 | 30,478.85 ± 296.90 | 29,380.90 ± 193.23
11 | 36,895.99 ± 1985.01 | 24,987.20 ± 642.13 | 30,869.18 ± 342.36 | 24,572.70 ± 120.14 | 25,698.35 ± 321.88 | 24,722.00 ± 92.57
12 | 87,487.72 ± 1109.38 | 77,107.80 ± 471.15 | 81,759.20 ± 940.22 | 76,909.40 ± 356.59 | 76,172.50 ± 311.27 | 77,492.80 ± 352.62
13 | 90,208.14 ± 521.19 | 80,244.90 ± 337.83 | 84,924.05 ± 518.82 | 80,535.10 ± 343.73 | 79,265.16 ± 1394.43 | 81,525.50 ± 522.96
14 | 60,431.49 ± 750.09 | 49,255.30 ± 187.37 | 54,687.74 ± 278.97 | 49,952.10 ± 287.92 | 48,963.39 ± 1543.55 | 48,928.50 ± 198.09
15 | 53,714.00 ± 271.81 | 43,303.30 ± 246.39 | 48,004.31 ± 397.67 | 43,184.10 ± 143.53 | 43,602.31 ± 993.33 | 43,300.30 ± 224.30
16 | 60,697.53 ± 74.67 | 49,413.80 ± 241.26 | 84,488.80 ± 844.14 | 80,210.20 ± 343.71 | 44,202.75 ± 248.94 | 43,158.60 ± 164.38
Table 4. Percentage of deviation for the three color ratio filters.

Ratio Filter | Minimum SD (% w.r.t. the Total ROI) | Maximum SD (% w.r.t. the Total ROI)
Red | 0.123 | 5.38
Green | 0.074 | 1.15
Blue | 0.163 | 5.72
Table 5. Number of pixels detected using the FIP and web camera imaging systems for the ROI, with the SD, for 22 lettuce plants' leaf area using green color detection.

Object | FIP (Pixel Area) | Web Camera (Pixel Area)
1 | 42,722.50 ± 211.91 | 47,486.00 ± 444.48
2 | 49,166.00 ± 161.66 | 55,160.10 ± 157.08
3 | 77,899.00 ± 201.11 | 87,778.30 ± 806.07
4 | 56,169.30 ± 184.08 | 59,048.50 ± 253.32
5 | 59,309.40 ± 244.86 | 66,557.00 ± 130.09
6 | 103,772.50 ± 284.45 | 123,096.60 ± 560.73
7 | 62,601.40 ± 111.62 | 69,735.30 ± 252.44
8 | 55,114.70 ± 185.37 | 62,462.10 ± 136.29
9 | 111,721.70 ± 288.07 | 125,856.40 ± 418.17
10 | 54,093.70 ± 165.00 | 59,904.00 ± 192.54
11 | 109,164.80 ± 251.19 | 124,714.70 ± 523.44
12 | 75,281.00 ± 267.19 | 84,839.10 ± 359.07
13 | 54,974.80 ± 184.18 | 61,213.60 ± 285.60
14 | 73,511.60 ± 310.23 | 81,856.70 ± 304.31
15 | 52,212.20 ± 129.06 | 56,741.80 ± 240.93
16 | 51,257.60 ± 208.61 | 58,214.80 ± 255.57
17 | 75,015.70 ± 236.80 | 82,448.20 ± 368.71
18 | 75,490.70 ± 209.60 | 84,037.70 ± 464.33
19 | 78,750.50 ± 215.44 | 89,323.70 ± 358.14
20 | 73,722.60 ± 222.06 | 84,770.30 ± 259.95
21 | 91,987.70 ± 245.64 | 102,659.20 ± 257.67
22 | 51,590.80 ± 228.49 | 56,940.80 ± 189.22
Table 6. Statistical comparison between the lab and outdoor experiments.

Evaluation Method | SD Min (%) | SD Max (%) | RMSE (%) | CCC (%)
Lab | 0.07 | 5.72 | 1.25 | 98.73
Outdoor | 0.02 | 0.06 | 1.96 | 91.01
