Exploiting the Internet Resources for Autonomous Robots in Agriculture
Abstract
1. Introduction
- Sensors to acquire geolocated biodata of crops and soil, e.g., nitrogen sensors, vision cameras, global navigation satellite systems (GNSS), etc.
- Computers for analyzing those data and running simple algorithms to help farmers make simple decisions (applying or not applying a given process, modifying a process application map, etc.).
- Actuators in charge of executing the decisions (opening/closing valves, altering a trajectory, etc.) for modifying crops. As an actuator, we consider the agricultural tool, also called the agricultural implement, and the vehicle, manually or automatically driven, to move the tool throughout the working field and apply the farming process.
- Detecting static or dynamic objects in their surroundings.
- Detecting row crops for steering purposes.
- Identifying plants and locating their positions for weeding are clear examples of the current use of AI techniques in agricultural robotics [9].
2. Materials and Methods
2.1. System Components
2.1.1. Main Process Loop in PA Autonomous Robots
- Selecting the references for the magnitudes to be controlled, i.e., defining the desired plan.
- Measuring the magnitudes of interest.
- Making decisions based on the measured and desired values of the magnitudes (control strategy).
- Executing the decided actions (a minimal sketch of this loop is given below).
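As an illustration only (not the authors' implementation), the following Python sketch mirrors the four steps of this loop; the sensor, decision, and actuator interfaces are hypothetical placeholders.

```python
# Minimal sketch of the sense-decide-act loop described above.
# The sensor, controller, and actuator interfaces are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Reference:
    """Desired value of the controlled magnitude (e.g., a weed coverage target)."""
    value: float

def measure() -> float:
    """Placeholder: read a geolocated magnitude (e.g., from a crop sensor)."""
    return 0.0

def decide(reference: Reference, measurement: float) -> float:
    """Simple proportional control strategy: act on the deviation from the plan."""
    error = reference.value - measurement
    return max(0.0, error)  # e.g., treatment intensity

def actuate(action: float) -> None:
    """Placeholder: command the implement (e.g., open a valve proportionally)."""
    print(f"Applying action: {action:.2f}")

def main_loop(reference: Reference, iterations: int = 5) -> None:
    for _ in range(iterations):                   # one pass per field position
        measurement = measure()                   # 2. measure the magnitudes of interest
        action = decide(reference, measurement)   # 3. control strategy
        actuate(action)                           # 4. execute the decided action

if __name__ == "__main__":
    main_loop(Reference(value=1.0))               # 1. the desired plan/reference
```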
2.1.2. Agricultural Robot
2.1.3. Perception System
- Guiding vision system: This system aims to detect static and dynamic obstacles in the robot’s path to prevent the robot tracks from stepping on the crops during the robot’s motion. Furthermore, it is also used to detect crop rows in their early growth stage to guide the robot in GNSS-denied areas [8]. The selected perception system consisted of a red–green–blue (RGB) wavelength vision camera and a time-of-flight (ToF) camera attached to the front of the mobile platform using a pan-tilt device, which allows control of the camera angle with respect to the longitudinal axis of the mobile platform, x. Figure 4 illustrates both cameras and their locations onboard the robot.
- Weed–meristem vision system: The system is based on 3D vision cameras to provide the controller with data on crops and weeds. These data are used to carry out the main activity of the tool for which it has been designed: weed management, in this case. For example, the perception system used in this study consists of an AI vision system capable of photographing the ground and discriminating crops from weeds in a first step using deep learning algorithms. In the second step, the meristems of the detected weeds are identified. Figure 3 sketches this procedure.
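For illustration, the sketch below mimics the data flow of this two-step procedure (image → plant regions → candidate meristem points) using a classical excess-green segmentation instead of the deep learning models described above; the threshold and the input image path are assumptions.

```python
# Illustrative stand-in for the two-step weed-meristem pipeline: a classical
# excess-green (ExG) segmentation followed by blob centroids as crude meristem
# proxies. The article uses deep learning models; this sketch only mirrors the
# data flow (image -> plant regions -> candidate meristem points).
import cv2
import numpy as np

def detect_plant_regions(bgr: np.ndarray) -> np.ndarray:
    """Binary vegetation mask from the excess-green index ExG = 2g - r - b."""
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b
    mask = (exg > 0.1).astype(np.uint8) * 255  # threshold is an assumption
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def candidate_meristems(mask: np.ndarray) -> list[tuple[int, int]]:
    """Return blob centroids as placeholder meristem positions."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 50:  # ignore tiny blobs (noise)
            points.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))
    return points

if __name__ == "__main__":
    image = cv2.imread("field_image.jpg")  # hypothetical input picture
    if image is not None:
        print(candidate_meristems(detect_plant_regions(image)))
```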
2.1.4. Agricultural Tools
2.1.5. The Smart Navigation Manager (SNM)
Smart Operation Manager (SoM)
(a) Global Mission Planner
- Map information according to the data models on the Internet;
- Other information provided by third parties, such as weather forecasts;
- Data models to create maps for accessing already known treatment maps (sets of points in the field), which commonly originate from third-party map descriptions (Google Earth; Geographic Information System (GIS) tools; GeoJSON, an open standard format for representing geographical features with nonspatial attributes, as exported by geojson.io).
- Absolute location based on GNSS: This method integrates several controllers for line tracking based on Dubins paths [11].
- Relative location based on RGB and ToF cameras, LIDAR, and IoT sensors: These methods rely on different techniques for navigation in the field and on the farm, such as hybrid topological maps, semantic localization and mapping, and the identification/detection of natural and artificial elements (crops, trees, people, vehicles, etc.) through machine learning techniques.
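As a minimal illustration of line tracking (the article's controllers are based on Dubins paths and are not reproduced here), the following sketch computes a proportional steering command from the cross-track and heading errors with respect to a straight crop-row segment; the gains and positions are arbitrary.

```python
# Minimal line-tracking sketch: steer toward a straight crop-row segment using
# the cross-track error from GNSS positions. The article's planner uses Dubins
# paths and several controllers; this is only an illustrative proportional law.
import math

def cross_track_error(p, a, b):
    """Signed distance of robot position p from the line a->b (all (x, y) in metres)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    return ((py - ay) * dx - (px - ax) * dy) / length

def steering_command(p, heading, a, b, k_e=0.8, k_h=1.5):
    """Proportional steering: correct both cross-track and heading errors."""
    e = cross_track_error(p, a, b)
    line_heading = math.atan2(b[1] - a[1], b[0] - a[0])
    heading_error = math.atan2(math.sin(line_heading - heading),
                               math.cos(line_heading - heading))
    return k_h * heading_error - k_e * e  # rad, saturated later by the low-level controller

if __name__ == "__main__":
    # Robot 0.3 m to the right of a row running along +x, heading slightly off.
    print(steering_command(p=(5.0, -0.3), heading=0.05, a=(0.0, 0.0), b=(50.0, 0.0)))
```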
(b) Global Mission Supervisor
- Receiving alarms from the system components (vehicle, sensors, weeding tool, etc.).
- Detecting faults in real-time.
- Executing diagnosis protocols.
- Collecting all available geo-referred data generated by every module onboard the robot. The data are stored in both the robot and the cloud.
(c) Map Builder
- Select the field in geojson.io, an open-source geographic mapping tool that allows maps and geospatial data to be created, visualized, and shared in a simple and multiformat way.
- Assign essential attributes to comply with FIWARE. These attributes are those based on the farmer’s knowledge. They can include static (i.e., location, type, category) and dynamic (i.e., crop type and status, seeding date, etc.) attributes.
- Export in GeoJSON format. The map obtained is then imported to extract the information required to fill in the FIWARE templates, which include the farm and parcel data models and other elements of a farm, such as buildings and roads (a minimal sketch of this step follows the list).
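The following sketch is offered only as an illustration of the Map Builder step: it reads a polygon exported from geojson.io and fills an AgriParcel-style NGSI v2 entity. The entity identifier and the exact attribute set used in the actual FIWARE templates are assumptions loosely based on the FIWARE Smart Data Models.

```python
# Sketch of the Map Builder step: read a field polygon exported from geojson.io
# and fill an AgriParcel-style NGSI v2 entity. The entity id and the exact
# attribute set used in the article's FIWARE templates are assumptions; they
# loosely follow the FIWARE Smart Data Models.
import json

def geojson_to_agriparcel(geojson_path: str, parcel_id: str, crop_type: str) -> dict:
    with open(geojson_path, "r", encoding="utf-8") as f:
        collection = json.load(f)
    feature = collection["features"][0]           # first drawn polygon
    return {
        "id": parcel_id,                          # e.g. "urn:ngsi-ld:AgriParcel:field01"
        "type": "AgriParcel",
        "location": {"type": "geo:json", "value": feature["geometry"]},
        "category": {"type": "Text", "value": "arable"},      # static attribute
        "cropType": {"type": "Text", "value": crop_type},     # dynamic attribute
        "description": {"type": "Text",
                        "value": feature.get("properties", {}).get("name", "")},
    }

if __name__ == "__main__":
    entity = geojson_to_agriparcel("field.geojson",
                                   "urn:ngsi-ld:AgriParcel:field01", "maize")
    print(json.dumps(entity, indent=2))
    # The entity can then be created in the Orion Context Broker with
    # POST http://<orion-host>:1026/v2/entities (Content-Type: application/json).
```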
(d) IoT System
- The autonomous vehicle: The data and images acquired with IoT sensors onboard the vehicle are used to monitor and evaluate performances and efficiency and to identify the effects of treatments and traffic on surfaces.
- The environment: Data acquired with IoT sensors deployed on the cropland are used to (i) monitor crop development and (ii) collect weather and soil information.
- Robot–IoT set: It consists of two WiFi high-definition cameras installed onboard the autonomous robot (IoT-R1 and IoT-R2 in Figure 3). The cameras are triggered from the cloud or the central controller to obtain pictures at a low frame rate (approximately one frame every 5 s). The pictures are stored in the cloud and are used to monitor the effects of the passage of the autonomous vehicle; therefore, they should include the robot’s tracks.
- Field–IoT set: It consists of the following (see Figure 3):
- Two multispectral cameras (IoT-F1 and IoT-F2) placed at the boundary of cropped areas to obtain hourly pictures of crops.
- A weather station (IoT-F3) to measure precipitation, air temperature (Ta), relative humidity (RH), radiation, and wind.
- Three soil multi-depth probes (IoT-F4) for acquiring moisture (Ts) data and three respiration probes (IoT-F5) to measure CO2 and H2O.
(e) Cloud Computing System
- A data lake repository for storing mission data to be downloaded in batches for post-mission analysis.
- A web interface for post-mission data analysis based on graphical dashboards, georeferenced visualizations, key performance indicators, and indices.
- A container framework for implementing “Decision Support System” functionalities that define missions to be sent to the robot. These functionalities (e.g., the mission planner) can be implemented and launched from the cloud platform.
- A soft real-time web interface for missions. The interface visualizes real-time robot activities and performances or sends high-level commands to the robot (e.g., start, stop, change mission).
Central Manager
- Obstacle detection system. This module acquires visual information from the front of the robot (robot vision system) to detect obstacles based on machine vision techniques.
- Local mission planner and supervisor. The planner computes the robot’s motion in its immediate surroundings. The local mission supervisor oversees the execution of the mission and reports malfunctions to the operator (see Section 2.1.5).
- Guidance system. This system is responsible for steering the mobile platform to follow the trajectory calculated by the planner. It is based on the GNSS if its signal is available. Otherwise, the system uses the information from the robot vision system to extract the crop row positions and follow them without harming the crop.
- Human–machine interface (HMI). This interface allows the operator to do the following:
- Supervise the mission.
- Monitor and control the progress of agricultural tasks.
- Identify and solve operational problems.
- Obtain real-time in-field access in an ergonomic, easy-to-use, and robust way.
- Maintain the real-time safety of the entire system.
2.1.6. Sequence of Actions
- A0: The system is installed in the field. The operator/farmer defines or selects a previously described mission using the HMI and starts the mission.
- A1: The sensors of the perception module (M1) installed onboard the autonomous robot (M2) extract features from the crops, soil, and environment in the area of interest in front of the robot.
- A2: The data acquired in action A1 are sent to the smart operation manager, which determines the consequent instructions for the robot and the agricultural tool.
- A3: The required robot motions and agricultural tool actions are sent to the robot controller, which generates the signals to move the robot to the desired positions.
- A4: The robot controller forwards the commands sent by the smart navigation manager or generates the pertinent signals for the agricultural tool to carry out the treatment.
- A5: The treatment is applied, and the procedure is repeated from action A1 to action A5 until field completion (A6).
- A6: End of mission.
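The sequence above can be summarized as a simple loop; the following sketch is only a structural illustration, with placeholder perception, planning, and actuation functions.

```python
# Minimal state-machine sketch of the A0-A6 sequence above. The perception,
# decision, and actuation calls are hypothetical placeholders; the point is the
# loop structure (A1-A5 repeated until the field is completed, then A6).
def run_mission(waypoints):
    start_mission()                        # A0: operator defines/starts the mission
    for waypoint in waypoints:
        features = perceive(waypoint)      # A1: perception module (M1) extracts features
        commands = plan(features)          # A2: smart operation manager decides actions
        move_robot(commands["motion"])     # A3: robot controller moves the platform
        command_tool(commands["tool"])     # A4: commands forwarded to the agricultural tool
        apply_treatment()                  # A5: treatment applied, loop continues
    end_mission()                          # A6: end of mission

# Placeholder implementations so the sketch runs as-is.
def start_mission(): print("A0: mission started")
def perceive(wp): return {"weeds": [], "position": wp}
def plan(features): return {"motion": features["position"], "tool": "idle"}
def move_robot(target): print(f"A3: moving to {target}")
def command_tool(cmd): print(f"A4: tool command {cmd}")
def apply_treatment(): print("A5: treatment applied")
def end_mission(): print("A6: end of mission")

if __name__ == "__main__":
    run_mission([(0, 0), (0, 5), (0, 10)])
```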
2.2. Integration Methods
2.2.1. Computing Architecture
- The modules (Mi), presented in the previous sections.
- The interconnection between modules, presented in the next section.
- The communication technologies and protocols to configure agricultural robotic systems that integrate IoT and cloud computing technologies.
2.2.2. Interfaces between System Components
Autonomous Robot (M2)/Agricultural Tool (M3) interface
Perception System (M1)/Agricultural Tool (M3)
IoT system/Cloud
2.2.3. Operation Procedure
- Creating the map: The user creates the field map following the procedure described in the MapBuilder module (see Section 2.1.5).
- Creating the mission: The user creates the mission by selecting the mission’s initial point (home garage) and destination field (study site).
- Sending the mission: The user selects the mission to be executed with the HMI (all defined missions are stored in the system) and sends it to the robot using the cloud services (see Section Smart Operation Manager (SoM)).
- Executing the mission: The mission is executed autonomously following the sequence of actions described in Section 2.1.6. The user does not need to act except for when alarms or collision situations are detected and warned of by the robot.
- Applying the treatment: When the robot reaches the crop field during the mission, it sends a command to activate the weeding tool, which works autonomously. The tool is deactivated when the robot performs the turns at the headland of the field and is started again when it re-enters. The implement was designed to work with its own sensory and control systems, only requiring the mobile platform for mobility and information when it must be activated/deactivated.
- Supervising the mission: When the robotic system reaches the crop field, it also sends a command to the IoT sensors, warning that the treatment is in progress. Throughout the operation, the mission supervisor module analyzes all the information collected by the cloud computing system, generated by both the robotic system and the IoT sensors. It evaluates if there is a possible deviation from the trajectory or risk of failure.
- Ending the mission: The mission ends when the robot reaches the last point in the field map computed by the MapBuilder. Optionally, the robot can stay in the field or return to the home garage. During the mission execution, the user can stop, resume, and abort the mission through the HMI.
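As an illustrative example of the “Sending the mission” step above, the sketch below updates a command attribute of a robot entity in the Orion Context Broker through the standard NGSI v2 API; the entity identifier, attribute name, and mission payload are assumptions, not the system’s actual data model.

```python
# Sketch of the "Sending the mission" step: the HMI/cloud side updates a
# command attribute of the robot entity in the Orion Context Broker (NGSI v2).
# The entity id, attribute name, and mission payload are assumptions; the
# endpoint (PATCH /v2/entities/<id>/attrs) is the standard Orion NGSI v2 API.
import requests

ORION = "http://cloud.example.org:1026"              # hypothetical OCB endpoint
ROBOT_ENTITY = "urn:ngsi-ld:AgriRobot:robot01"       # hypothetical robot entity

def send_mission(mission_name: str, field_id: str) -> int:
    payload = {
        "missionCommand": {
            "type": "StructuredValue",
            "value": {"command": "start", "mission": mission_name, "field": field_id},
        }
    }
    r = requests.patch(f"{ORION}/v2/entities/{ROBOT_ENTITY}/attrs",
                       json=payload,
                       headers={"Content-Type": "application/json"},
                       timeout=5)
    return r.status_code   # Orion returns 204 No Content on success

if __name__ == "__main__":
    print(send_mission("weeding_field01", "urn:ngsi-ld:AgriParcel:field01"))
```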
3. Experimental Assessment
3.1. Study Site
3.2. Description of the Test Mission
- Acquiring data from the IoT sensor network.
- Taking pictures of the crop.
- Acquiring data from the guidance system.
- Sending all the acquired information to the cloud.
- Perception system procedure
- Guiding vision system: This experiment was conducted in the treatment stage, where the crop was detected to adjust the errors derived from planning and the lack of precision of the maps. YOLOv4 [20], a real-time object detector based on a one-stage object detection network, was the base model for detecting early-stage growth in maize [8], a wide-row crop. The model was trained using a dataset acquired in an agricultural season before these tests using the same camera system [21]. Moreover, in the case of wheat, which is a narrow-row crop, a different methodology was applied through the use of segmentation models, such as MobileNet, a convolutional neural network for mobile vision applications [22], trained using a dataset acquired in an agricultural season before these tests [23], with the same camera system. The detection of both crops was evaluated with regard to the GNSS positions collected manually for the different crop lines.
- The AI vision system: This system uses data from the installed RGB cameras to enable robust automated plant detection and discrimination. For this purpose, the state-of-the-art object detection algorithm YOLOv7 is used in combination with the NVIDIA DeepStream framework. Tracking the detected plants is performed in parallel by a pretrained DeepSort algorithm [24]. The reliability of the object detection algorithm is evaluated using test datasets with the commonly used metrics “intersection over union” (IoU) and “mean average precision” (mAP). This system works cooperatively with laser scanners and operates as a stand-alone system; its information is not stored in the cloud.
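For reference, the intersection-over-union metric mentioned above can be computed with the following helper, which underlies the reported mAP figures; the example boxes are arbitrary.

```python
# Helper illustrating the evaluation metric mentioned above: intersection over
# union (IoU) between a predicted and a ground-truth bounding box, the quantity
# underlying the mAP figures reported for the detection models.
def iou(box_a, box_b):
    """Boxes are (x_min, y_min, x_max, y_max) in pixels."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    # A detection is typically counted as a true positive when IoU >= 0.5.
    print(iou((100, 100, 200, 200), (120, 110, 210, 220)))  # ~0.57
```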
- Autonomous robot procedure
- Smart Navigation Manager procedure:
- Smart operation manager: The processing time, latency, success rate, response time, and response status based on requests of the mission planner, IoT sensors, and cloud computing services were evaluated using ROS functionalities that provide statistics related to the following:
- The period of messages by all publishers.
- The age of messages.
- The number of dropped messages.
- The traffic volume, measured in real time (a monitoring sketch is given after this list).
- Central manager: The evaluation is similar to that used for the navigation controller.
- Obstacle detection system: YOLOv4, together with a model already trained on the COCO dataset, was used to detect common obstacles in agricultural environments and to perform the evaluation. YOLOv4 is a one-stage object detection model, and COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset.
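The sketch below shows one way the ROS 1 topic statistics listed above (message period, age, dropped messages, and traffic) can be collected; it assumes the /enable_statistics parameter is set and only adds logging on top of the standard rosgraph_msgs/TopicStatistics stream.

```python
#!/usr/bin/env python
# Sketch of how the ROS topic statistics mentioned above can be collected in
# ROS 1: with the /enable_statistics parameter set to true, every subscription
# publishes rosgraph_msgs/TopicStatistics messages on /statistics (message
# period, age, dropped messages, and traffic volume). Only the logging is ours.
import rospy
from rosgraph_msgs.msg import TopicStatistics

def on_stats(msg):
    window = (msg.window_stop - msg.window_start).to_sec() or 1.0
    rospy.loginfo(
        "%s: period=%.1f ms, age=%.1f ms, dropped=%d/%d, traffic=%.1f kB/s",
        msg.topic,
        msg.period_mean.to_sec() * 1000.0,
        msg.stamp_age_mean.to_sec() * 1000.0,
        msg.dropped_msgs,
        msg.delivered_msgs + msg.dropped_msgs,
        msg.traffic / window / 1000.0,
    )

if __name__ == "__main__":
    rospy.init_node("statistics_monitor")
    rospy.Subscriber("/statistics", TopicStatistics, on_stats)
    rospy.spin()
```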
4. System Assessment and Discussion
- Crop images: During the robot’s motion, images are acquired at a rate of 4 frames/s to guide the robot. The RGB images are 2048 × 1536 pixels with a size of 2.2 MB each (see Figure 8 and Figure 9), and the ToF images contain 352 × 264 points (range of 300–5000 mm) (see Figure 10). The images are sent to the guiding and obstacle detection system over Ethernet using ROS (perception–ROS bridge in the perception system and ROS manager in the central manager). A subset of these images is stored in the cloud for further analysis. Using a FIWARE–ROS bridge with the NGSI application programming interface, the system sends up to 4 frames/s.
- Sensor data: IoT devices send the acquired data using 2.4 GHz WiFi with the MQTT protocol and JSON format.
- Traffic information: The ROS functionalities mentioned above revealed that, during a 10 min field experiment, a total of 2,395,692 messages were delivered, with only 0.63% dropped (messages that were not processed before their respective timeout), an average traffic of 10 MB/s, and a maximum traffic of 160 MB at any instant of time. No critical messages (command messages) were lost, demonstrating the robustness of the smart navigation manager. Regarding cloud traffic, the messages sent to the cloud were monitored over a period of approximately 3 h: the number of messages received by the cloud was counted, and the transmission delays between the robot (edge) and the OCB and between the robot and the KAFKA bus (see Figure 3) were measured. During this interval, around four missions were executed, and a total of 14,368 messages were sent to the cloud, mainly containing the robot status and the perception system data. The average delay between the moment a message is sent from the robot and the moment it is received by the OCB was about 250 ms (see Figure 11a). Moreover, the KAFKA overhead, i.e., the time it takes for a message received by the OCB to be forwarded to the KAFKA bus and eventually processed by a KAFKA consumer, was approximately 1.24 ms, demonstrating that the internal communications within the server and the hosted cloud services are robust (see Figure 11b).
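As an illustration of how the robot-to-KAFKA delay could be sampled on the consumer side, the sketch below compares each record’s timestamp with the local consumption time using the kafka-python client; the topic name and broker address are assumptions, and the measurement is only as accurate as the clock synchronization between producer and consumer.

```python
# Sketch of how the OCB-to-KAFKA forwarding delay reported above could be
# sampled on the consumer side: compare each record's timestamp with the local
# consume time. Topic name and broker address are assumptions; uses the
# kafka-python client.
import time
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "robot-status",                      # hypothetical topic fed by the OCB
    bootstrap_servers="cloud.example.org:9092",
    auto_offset_reset="latest",
)

for record in consumer:
    now_ms = time.time() * 1000.0
    delay_ms = now_ms - record.timestamp  # record.timestamp is in milliseconds
    print(f"{record.topic}[{record.partition}] offset={record.offset} "
          f"delay={delay_ms:.1f} ms")
```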
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
1. Ghose, B. Food security and food self-sufficiency in China: From past to 2050. Food Energy Secur. 2014, 3, 86–95.
2. Zhang, N.; Wang, M.; Wang, N. Precision agriculture—A worldwide overview. Comput. Electron. Agric. 2002, 36, 113–132.
3. Stentz, A.; Dima, C.; Wellington, C.; Herman, H.; Stager, D. A System for Semi-Autonomous Tractor Operations. Auton. Robot. 2002, 13, 87–104.
4. Bergerman, M.; Maeta, S.M.; Zhang, J.; Freitas, G.M.; Hamner, B.; Singh, S.; Kantor, G. Robot Farmers: Autonomous Orchard Vehicles Help Tree Fruit Production. IEEE Robot. Autom. Mag. 2015, 22, 54–63.
5. Gonzalez-De-Santos, P.; Ribeiro, A.; Fernandez-Quintanilla, C.; Lopez-Granados, F.; Brandstoetter, M.; Tomic, S.; Pedrazzi, S.; Peruzzi, A.; Pajares, G.; Kaplanis, G.; et al. Fleets of robots for environmentally-safe pest control in agriculture. Precis. Agric. 2017, 18, 574–614.
6. Underwood, J.P.; Calleija, M.; Taylor, Z.; Hung, C.; Nieto, J.; Fitch, R.; Sukkarieh, S. Real-time target detection and steerable spray for vegetable crops. In Proceedings of the International Conference on Robotics and Automation: Robotics in Agriculture Workshop, Seattle, WA, USA, 9–11 May 2015.
7. Kongskilde. New Automated Agricultural Platform—Kongskilde Vibro Crop Robotti. 2017. Available online: http://conpleks.com/robotech/new-automated (accessed on 14 December 2022).
8. Emmi, L.; Herrera-Diaz, J.; Gonzalez-De-Santos, P. Toward Autonomous Mobile Robot Navigation in Early-Stage Crop Growth. In Proceedings of the 19th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2022), Lisbon, Portugal, 14–16 July 2022; pp. 411–418.
9. Bannerjee, G.; Sarkar, U.; Das, S.; Ghosh, I. Artificial Intelligence in Agriculture: A Literature Survey. Int. J. Sci. Res. Comput. Sci. Appl. Manag. Stud. 2018, 7, 3.
10. Osinga, S.A.; Paudel, D.; Mouzakitis, S.A.; Athanasiadis, I.N. Big data in agriculture: Between opportunity and solution. Agric. Syst. 2021, 195, 103298.
11. Yang, D.; Li, D.; Sun, H. 2D Dubins Path in Environments with Obstacle. Math. Probl. Eng. 2013, 2013, 291372.
12. Emmi, L.; Parra, R.; González-de-Santos, P. Digital representation of smart agricultural environments for robot navigation. In Proceedings of the 10th International Conference on ICT in Agriculture, Food & Environment (HAICTA 2022), Athens, Greece, 22–25 September 2022; pp. 1–6.
13. Orion Context Broker. Telefonica. Available online: https://github.com/telefonicaid/fiware-orion (accessed on 22 February 2023).
14. Francia, M.; Gallinucci, E.; Golfarelli, M.; Leoni, A.G.; Rizzi, S.; Santolini, N. Making data platforms smarter with MOSES. Futur. Gener. Comput. Syst. 2021, 125, 299–313.
15. ROS—The Robot Operating System. 2023. Available online: https://www.ros.org/ (accessed on 24 April 2020).
16. ROSLink. 2023. Available online: https://github.com/aniskoubaa/roslink (accessed on 5 January 2023).
17. Koubaa, A.; Alajlan, M.; Qureshi, B. ROSLink: Bridging ROS with the Internet-of-Things for Cloud Robotics. In Robot Operating System (ROS); Koubaa, A., Ed.; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2017; Volume 707.
18. Fiware Community. FIWARE: The Open Source Platform for Our Smart Digital Future. Available online: https://www.fiware.org/ (accessed on 5 January 2023).
19. López-Riquelme, J.; Pavón-Pulido, N.; Navarro-Hellín, H.; Soto-Valles, F.; Torres-Sánchez, R. A software architecture based on FIWARE cloud for Precision Agriculture. Agric. Water Manag. 2017, 183, 123–135.
20. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
21. Herrera-Diaz, J.; Emmi, L.A.; Gonzalez de Santos, P. Maize Dataset. 2022. Available online: https://digital.csic.es/handle/10261/264581 (accessed on 1 April 2023).
22. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
23. Herrera-Diaz, J.; Emmi, L.; Gonzalez de Santos, P. Wheat Dataset. 2022. Available online: https://digital.csic.es/handle/10261/264622 (accessed on 1 April 2023).
24. Wojke, N.; Bewley, A.; Paulus, D. Simple Online and Realtime Tracking with a Deep Association Metric. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017.
| Architecture Component | Solutions/Comments |
|---|---|
| Operating system | ROS (Robot Operating System) |
| IoT–controller bridge | Hypertext Transfer Protocol (HTTP) to FIWARE. Note: FIWARE is used as a communication protocol in the cloud; therefore, it is not necessary to use ROSLink. |
| ROS-based system for FIWARE tools | HTTP protocol to FIWARE. Note: FIROS has several disadvantages when developing new data models to represent the robot, so this enabler is not used to establish communication between the robot and the cloud. |
| Communication with IoT devices | WiFi, serial communication. Note: Since a certain amount of data needs to be transmitted, WiFi would suffice. |
| The Internet | 4G LTE-M modem |
| Devices onboard the mobile platform | CANopen, serial |
| Human–machine interface (HMI) | Synchronous remote procedure call-style communication over a services protocol; asynchronous communications to ensure the safety of the robot. Note: The HMI is used to provide access to SoM services through a web interface. |