Article

Augmented Reality in the Integrative Internet of Things (AR-IoT): Application for Precision Farming

by Pilaiwan Phupattanasilp 1 and Sheau-Ru Tong 2,*
1 Department of Tropical Agriculture and International Cooperation, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan
2 Department of Management Information Systems, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan
* Author to whom correspondence should be addressed.
Sustainability 2019, 11(9), 2658; https://doi.org/10.3390/su11092658
Submission received: 30 March 2019 / Revised: 1 May 2019 / Accepted: 6 May 2019 / Published: 9 May 2019

Abstract

Benefiting from the Internet of Things (IoT), visualization capabilities facilitate the improvement of precision farming, especially in dynamic indoor planting. However, conventional IoT data visualization is usually carried out in offsite and textual environments, i.e., text and numbers, which do not promote a user's sensorial perception and interaction. This paper introduces the use of augmented reality (AR) to support IoT data visualization, called AR-IoT. The AR-IoT system superimposes IoT data directly onto real-world objects and enhances object interaction. As a case study, the system is applied to crop monitoring. A multi-camera setup, a non-destructive and low-cost IoT imaging platform, is connected to the internet and integrated into the system to measure the three-dimensional (3D) coordinates of objects. The relationships among accuracy, object coordinates, augmented information (e.g., virtual objects), and object interaction are investigated. The proposed system shows great potential for integrating IoT data with AR solutions, which will effectively contribute to updating precision agriculture techniques in an environmentally sustainable manner.

1. Introduction

Crops planted by farmers grow in nature, which is dynamic and affected by unpredictable factors such as weather and soil conditions, pest and disease pressure, and changing crop conditions. These factors affect not only the course of planting but also the potential outcomes. Precision farming refers to the combination of information technology and sustainable devices to increase efficiency, decrease managerial costs, and provide the expert knowledge that farmers need for agricultural management.
IoT (Internet of Things) technologies have been adopted in today's precision farming due to their scalable and environmentally friendly capabilities [1]. The use of IoT in crop positioning is critical for identifying crop locations and mitigating visibility problems. Furthermore, updated and visualized IoT information allows farmers to cope with, and even benefit from, these changes. However, traditional methods for visualizing IoT data in agriculture have relegated these processes, from data collection to result display, to an entirely textual and offsite environment. This textual environment deprives human senses of physical characteristics, such as shape, size, or color, and the surrounding physical context is usually not presented. It is neither intuitive nor efficient to examine or interpret IoT data without a sense of shape and the necessary context [2]. In addition, the relevant IoT parameters are normally recorded via paper entry, which increases paper waste and explanation time. These demands motivate the development of interactive IoT data visualization.
Augmented reality (AR) technology provides an excellent option for meeting such sensorial perception and interaction demands [3]. AR supplements reality, which cannot be sufficiently embodied in an object label, by superimposing virtual (computer-generated) objects, such as graphics, text, and sound, over a user's real-world environment. One can imagine how crop positioning facilitated by IoT devices and AR content could improve data visualization and interpretation: a farmer could interact with the target crop and study daily records through virtual content.
With the goal of utilizing AR technologies to support IoT data visualization, in this paper, we propose a novel framework that integrates the IoT into an AR-based environment, called AR-IoT. The IoT part is based on a multi-camera identification approach. It must be able to identify the coordinates of the crop precisely, so that IoT data can be superimposed onto a physical crop or space in real time and rendered as AR virtual content (a virtual cube or virtual infographic). Accordingly, it is designed in such a way that a farmer can visualize the crops from different angles, thus solving visibility problems. The farmer is also able to interact with IoT data directly in the real-world environment. Therefore, AR-IoT would enhance monitoring tasks and help farmers to ensure crop quality more precisely and reduce planting operation costs.
The rest of the paper is organized as follows. Section 2 contains a review of related works. Section 3 elaborates the architecture of the integrated AR-IoT system, as well as the relevant methodologies. In Section 4, the proposed system is implemented in a case study, followed by a discussion in Section 5. Finally, conclusions are drawn in Section 6.

2. Related Work

The IoT is an inter-networking paradigm enabled by various devices (things) to provide intelligent services, including identifying, sensing, networking, processing, and visualization capabilities [4]. IoT technology has become an essential element in agricultural processes due to its ability to provide quantitative information. Such information improves farming management strategies; for example, long-term historical data collected by IoT systems can be used to conduct integrated pest management applications to prevent the damage caused by pests [5]. Yang et al. [6] adopted a cloud framework along with IoT devices to improve farm management systems. Kamilaris et al. [7] proposed the Agri-IoT framework for decision making and event detection, which can be used in medium-to-large farms. Liao et al. [8] developed an IoT-based system for monitoring the growth status of orchids, using an IoT-based wireless imaging platform and an image-processing algorithm to estimate the orchids' leaf area; the analysis of long-term monitoring data influences farmers' cultivation decisions. In other cases, the IoT can be used to create applications in precision agriculture that control crop conditions and actuator management [9]. Cameras in combination with the IoT and sensors have been developed to improve the care of bees and facilitate the job of beekeepers [10]. These works reveal the potential of IoT technologies in agriculture. However, other requirements might be integrated in order to make further progress, such as representing IoT data in a realistic and intuitive way in which virtual objects are integrated with IoT technologies [4,11].
AR superimposes virtual objects onto physical objects [3]. The merging of virtual objects with the real-world environment gives AR the potential to be one of the main visual spaces for supporting immersive data applications [3]. For example, Tatić et al. [12] presented an AR system for occupational safety, in which instructions and checklists are shown as virtual objects on the screen, listing the tasks and instructions for each worker based on their qualifications and responsibilities. Moreover, structural analysis integrated with an AR system was presented in [2]; measurement visualization uses a three-dimensional (3D) mesh model and lucid illustrations that change color according to stress weight. Velázquez et al. [13] conducted a learning unit, “Energy and its transformation”, in which students can analyze wind turbines or change their perspective with small turns or movements of their devices. ElSayed et al. [14] proposed the use of AR to allow consumers to visually interact with information and to compare information associated with multiple physical objects. The prototype was evaluated with shopping analytics tasks, which showed that consumers preferred the prototype to the manual method; in addition, when using the AR approach, tasks were performed more quickly and accurately.
Under these considerations, exponential growth of AR in combination with the IoT can be predicted due to the rise of smart farming. AR in combination with the IoT is rarely used in agriculture, whereas in other fields there is some literature on this type of integrated technology. For example, Rashid et al. [15] presented a system that allows wheelchair users to locate and consult physical items on a shelf by using AR technology; to identify the items, the authors employed an IoT approach based on radio frequency identification (RFID). The authors of [16] applied both AR and virtual reality (VR) as immersive analytical tools for maintenance tasks. Using this system, a user can monitor and decide what to do based on displayed safety information; this approach ensures the safety of workers, as they can make quick decisions with more information. Similarly, a driver assistance system has been explored in order to enhance on-road safety for vehicles [17]. Additional integrations of the IoT with AR technologies can be found in the literature [18].
Undoubtedly, tracking modules can be considered a fundamental part of an IoT and AR system because of their measurement possibilities, such as object position, user position, and orientation. Computer vision with suitable functions and algorithms can be a good solution, because it provides a non-destructive and low-cost approach to inspecting the growth of plants, and a camera is one of the major tracking sensors [19]. A markerless vision-based camera provides an advantage over a marker-based one, because it can identify and track objects, even hidden objects [20], without fiducial markers being placed at every location [3,21]. Moreover, objects with little or no difference in appearance can also be identified and tracked by a markerless vision-based camera; for example, different crops can be distinguished even though they are the same color [22]. In other domains, Lima et al. [23] presented a system that allows users to track a vehicle and identify its parts using markerless techniques, and Bauer et al. [24] used a markerless vision-based camera for anatomy visualization. For some AR and IoT applications, a vision-based tracking technique alone cannot provide a robust tracking solution because of differences in viewing angles [3]. For this reason, Hirschmüller [25] and Hirschmüller et al. [26] introduced multiple windowing aggregation strategies. Moreover, an example of a system for natural feature tracking over wide areas, based on two- and three-view relations, can be found in [27].
AR is an existing approach that provides immersive object and supportive data visualization. However, to the best of our knowledge, no previous study has integrated the visualization of IoT agricultural data with AR solutions to allow a farm manager to interact with and visualize such information in a realistic and interactive way. Hence, our study advances progress toward this goal. Multi-cameras are used to provide different-angle visualization, identify the coordinates of physical objects, and superimpose virtual objects at the physical crops' coordinates. A simple visual object is used to represent each physical crop, and interaction with it allows farmers to identify and explore the crop to gather more information.

3. Materials and Methods

3.1. AR-IoT Concept

The main objective of AR-IoT is to support monitoring tasks using different-angle visualization. A farmer can interact with physical objects, as well as the IoT information virtually attached to them. The proposed AR-IoT system includes two integrated modules: IoT and AR. The first module is based on sensor technologies, which are capable of identifying physical objects and collecting data sources. Meanwhile, the AR module is deployed to provide a 3D visual representation in the physical world.
The overall conceptual diagram of the proposed AR-IoT is shown in Figure 1. To begin with, the left side of the diagram illustrates the connected devices, “things”, which live at the edge of the network, whereas the middle of the diagram represents the storage where data from things are aggregated in real time. Besides the storage, the system contains three stages: the offline preparation stage, the online measuring stage, and the graphic processing stage. First, the offline preparation stage is used to estimate the camera parameters and to ensure that the relative geometry among the cameras is established for all of them; for example, the position and orientation of camera 2 are relative to camera 1, the position and orientation of camera 3 are relative to camera 2, and so on. Next, the online measuring stage provides data relating to the objects (e.g., coordinates). Then, the graphic processing stage elaborates the IoT information and renders it as 3D virtual objects (e.g., a 3D virtual cube or virtual text). Lastly, the right side of the diagram depicts the AR presentation associated with the IoT information. Here, a farmer queries the information associated with physical objects and obtains the IoT virtual content through the device display based on AR-IoT interaction; in this case, the farmer may operate on the storage or on the thing itself. Each part of the diagram is discussed below.

3.2. Things and Communication

In order to use IoT paradigms to communicate and create information structures, the connected devices, or things (such as sensors/actuators and control devices), should be defined. Furthermore, the agricultural subsystems (crop, soil, climate, water, nutrients, and energy) should be associated with them to determine development requirements [6]. The sensors enable acquisition of various data, such as the coordinates of crops, soil moisture content, temperature, water level, nutrients, and luminance. For example, in Figure 2, various devices, such as multi-cameras, are positioned around the farm, which a farm manager can use to visualize the same object from different angles. Thus, the coordinates of crops can be identified to provide virtual visualization. In our case study, IoT-based multi-cameras were deployed to measure coordinates due to their stability and accuracy. Accordingly, a WSN (wireless sensor network) was established to visualize the virtual contents.
In our AR-IoT system, the hierarchy of crop production is composed of regions of the farm (FARM_REGION_VO), the farm manager (FARMER), sensor devices (SENSOR), and crops (PLANT). The farm is divided into several lots (FARM_LOT_VO). The farm manager is assigned names that can be used to access several lots (FARMER_LOT_VO). Things (SENSOR) are composed of control and sensor devices. Each crop (PLANT) is listed by crop type (e.g., PLANT1), as shown in Figure 3.
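To make this hierarchy concrete, the sketch below mirrors it as nested Python records. This is purely illustrative: the class and field names are ours, not the paper's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Sensor:            # SENSOR: a control or sensing device ("thing")
    sensor_id: str
    kind: str            # e.g., "camera", "soil_moisture"

@dataclass
class Plant:             # PLANT: one crop, listed by crop type (e.g., "PLANT1")
    plant_id: str
    crop_type: str
    coordinates: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # filled in by the IoT module

@dataclass
class FarmLot:           # FARM_LOT_VO: one lot inside a farm region
    lot_id: str
    sensors: List[Sensor] = field(default_factory=list)
    plants: List[Plant] = field(default_factory=list)

@dataclass
class FarmRegion:        # FARM_REGION_VO: a region of the farm
    region_id: str
    manager: str         # FARMER assigned to the lots of this region (FARMER_LOT_VO)
    lots: List[FarmLot] = field(default_factory=list)
```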
An example of interactive and sensor data is shown in Figure 4. First, rules for the farm, plant, and farmer are defined. This level attempts to guarantee that a message is received, stored, and retransmitted periodically; in this situation, all sensors and actuators are required. Changes in control processes and new message creation are performed here. A farmer is able to locate the position of a crop and also set the region of the crop; for example, PLANT1 is located in a single arrangement, and PLANT2 is located in a group.

IoT-Based Multi-Camera

The IoT system evaluated in this study is principally based on cameras; thus, we start with the camera model and its parameters. Figure 5 illustrates the pinhole camera model used in this study. The line from the center of the lens that is perpendicular to the image plane is called the optical axis, and the point where the optical axis meets the image plane is called the principal point. The distance between the center of the lens o and the principal point is called the focal length f. If o is the origin, then we can define the camera coordinate frame, where the z-axis represents the optical axis, the x-axis is parallel to the horizontal direction of the image plane, and the y-axis is parallel to the vertical direction of the image plane.
The pinhole camera parameters are represented in a 4 × 3 matrix called the camera matrix C, modeled as
$$C = \begin{bmatrix} R \\ t \end{bmatrix} K$$
where the rotation R and translation t, called the extrinsic parameters, represent a rigid transformation from 3D world coordinates to 3D camera coordinates, and K is the intrinsic matrix representing a projective transformation from 3D camera coordinates into two-dimensional (2D) image coordinates. The camera intrinsic matrix is defined as
$$K = \begin{bmatrix} f_x & 0 & 0 \\ s & f_y & 0 \\ c_x & c_y & 1 \end{bmatrix}$$
where cx and cy define the principal point, fx and fy are the focal lengths of the camera in pixel units along the x and y directions, respectively, and s is the skew parameter. These camera parameters and the additional lens distortion are estimated below.
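As a notational sanity check, the following NumPy sketch builds the 4 × 3 camera matrix C = [R; t]K in the same row-vector convention and projects one world point to pixel coordinates. All numeric values are arbitrary placeholders, not measurements from the paper.

```python
import numpy as np

# Arbitrary placeholder values (not from the paper), row-vector convention as above.
fx, fy, s, cx, cy = 800.0, 800.0, 0.0, 320.0, 240.0
K = np.array([[fx, 0.0, 0.0],
              [s,  fy,  0.0],
              [cx, cy,  1.0]])            # intrinsic matrix K

R = np.eye(3)                             # rotation (extrinsic)
t = np.array([[0.0, 0.0, 1000.0]])        # translation (extrinsic), 1 x 3

C = np.vstack([R, t]) @ K                 # 4 x 3 camera matrix C = [R; t] K

X_world = np.array([100.0, 50.0, 0.0, 1.0])   # homogeneous world point [X Y Z 1]
p_hom = X_world @ C                           # w * [x y 1]
x, y = p_hom[:2] / p_hom[2]                   # 2D pixel coordinates
print(f"projected pixel: ({x:.1f}, {y:.1f})")
```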

3.3. Offline Preparation Stage

The major issue we address is how to superimpose the IoT information associated with each physical object (e.g., a plant or a space) on the views of the cameras. In this regard, the offline preparation stage estimates the camera parameters (i.e., extrinsic and intrinsic) to calculate the relative position and orientation between cameras, as well as to rectify the distorted images caused by lens distortion. To estimate these parameters, camera calibration is performed. The essential procedure of camera calibration is shown in Figure 6. We implemented camera calibration by concentrating on procedures (A), (B), and (C), where (A) is the preparation procedure: preparing the chessboard pattern, setting up the camera, and capturing images of the chessboard pattern. The captured images are then fed into (B). After the calibration, (C) is adopted to evaluate the accuracy of the calibration. Procedure (D) is needed only if the evaluation gives an unacceptably high overall mean reprojection error. The software used for performing the calibration procedure is the MATLAB calibration toolbox [28]; in the following, we assume that the cameras are calibrated (see Figure 7 and Figure 8) and that their images are rectified, as shown in Figure 9.
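The paper performs this step with the MATLAB calibration toolbox [28]. As an open-source alternative only, a minimal chessboard calibration can be sketched with OpenCV as below, assuming hypothetical image paths and a 9 × 6 inner-corner pattern; note that OpenCV reports the intrinsics in the column-vector convention, i.e., the transpose of K as written above.

```python
import glob
import cv2
import numpy as np

# Hypothetical setup: 9 x 6 inner corners, 25 mm squares, images in ./calib/*.png
pattern_size = (9, 6)
square_mm = 25.0
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_mm

obj_points, img_points, image_size = [], [], None
for path in sorted(glob.glob("calib/*.png")):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]                    # (width, height)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, intrinsics, lens distortion, and per-image extrinsics.
rms, K_cv, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("overall mean reprojection error (pixels):", rms)   # ideally below 1 pixel [29]
```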

3.4. Online Measuring Stage

The technique of Section 3.3 is adopted to rectify the images, and then the coordinates of the objects can be estimated. A conceptual block diagram of coordinate estimation for a single object or a group of objects is shown in Figure 10. The original RGB (red, green, blue) frame is converted into a grayscale frame to minimize the effect of lights and shadows. In practice, there are some challenges because a camera observes the objects from a depression angle in order to measure their coordinates; therefore, only the tops of the objects can be extracted to measure their positions [30]. To address this, a sequence of region of interest (ROI) selections is used in this study to narrow down the object regions and eventually help to estimate their coordinates.
The ROI is defined using a rectangular mask (green line)
$$ROI = \begin{bmatrix} x & y & w & h \end{bmatrix}$$
where the rectangular ROI begins at (x, y) and extends to width w and height h. The geometric center of the ROI (POI) can then be found by
$$POI = \left( x + \tfrac{w}{2},\; y + \tfrac{h}{2} \right).$$
In Equation (4), we take POI as the center point of the ROI. POI is then taken as point x in the first camera C1 and point x′ in the other camera C2. Consequently, we use the triangulation method [31] to determine the coordinate of object P at the intersection of the two back-projected rays from these two points, as illustrated in Figure 11.
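To illustrate these two steps together, the sketch below takes the geometric center of an ROI as the POI and triangulates the object coordinate P from the matched points x and x′. The ROIs and projection matrices are hypothetical, and OpenCV's column-vector projection convention is used here rather than the row-vector one above.

```python
import cv2
import numpy as np

def roi_center(roi):
    """Geometric center (POI) of a rectangular ROI [x, y, w, h]."""
    x, y, w, h = roi
    return np.array([x + w / 2.0, y + h / 2.0])

# Hypothetical 3 x 4 projection matrices of cameras C1 and C2 (from calibration),
# in OpenCV's column-vector convention: w * [x y 1]^T = P @ [X Y Z 1]^T.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])   # baseline along x

x1 = roi_center([310, 220, 40, 60])    # POI seen by camera C1 (point x)
x2 = roi_center([280, 221, 40, 60])    # matched POI seen by camera C2 (point x')

# Linear triangulation [31]: intersect the two back-projected rays.
X_hom = cv2.triangulatePoints(P1, P2, x1.reshape(2, 1), x2.reshape(2, 1))
P = (X_hom[:3] / X_hom[3]).ravel()     # coordinate P of the object
print("estimated object coordinate P:", P)
```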

3.5. Graphic Processing Stage

The graphic processing stage elaborates the frames captured from the cameras and superimposes virtual objects at the coordinate P. A simple virtual cube is applied to visualize virtual objects in the AR environment. The cube is defined by polygons, also referred to as “faces”, and each face is defined by a list of vertices that specifies its position (X, Y, Z). For example, consider Figure 12: there is a virtual cube with 6 faces and 8 vertices; the bottom face (gray) is formed by vertices v1, v2, v3, and v4.
Accordingly, we can complete a virtual cube by finding the vertices of the cube around coordinate P:
$$X_i + \begin{bmatrix} \pm h & \pm h & \pm h & \pm h \end{bmatrix}^{T}, \quad Y_i + \begin{bmatrix} \pm h & \pm h & \pm h & \pm h \end{bmatrix}^{T}, \quad Z_i + \begin{bmatrix} \pm h & \pm h & \pm h & \pm h \end{bmatrix}^{T}$$
where h = w/2 and w is the full-width of each face.
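A small sketch of this vertex construction: given coordinate P and the full face width w, the eight cube vertices are obtained by adding every ±h combination to P. The vertex and face numbering here is our own and does not necessarily match v1–v8 in Figure 12.

```python
import itertools
import numpy as np

def cube_vertices(P, w):
    """Eight vertices of an axis-aligned virtual cube of full face width w centred at P."""
    h = w / 2.0
    offsets = np.array(list(itertools.product([-h, h], repeat=3)))   # all +/- h combinations
    return np.asarray(P, dtype=float) + offsets                      # one (X, Y, Z) vertex per row

# Faces as vertex indices into the array above (our numbering, not necessarily v1..v8 of Figure 12).
FACES = [(0, 1, 3, 2), (4, 5, 7, 6), (0, 1, 5, 4), (2, 3, 7, 6), (0, 2, 6, 4), (1, 3, 7, 5)]

verts = cube_vertices(P=(100.0, 50.0, 400.0), w=30.0)    # hypothetical coordinate P and width
```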

3.6. Display and Interaction

A conceptual block diagram of the display and interaction of an object is shown in Figure 13. Using the calibration results, we can project 3D camera coordinates into 2D pixel coordinates p = (x, y, 1), where (x, y) is an ordered pair of real numbers and 1 is an extra coordinate that makes p a homogeneous representation of the same point. This mapping is written as
$$w \begin{pmatrix} x & y & 1 \end{pmatrix} = \begin{pmatrix} X & Y & Z & 1 \end{pmatrix} C$$
where w is the scale factor. Then, it can be written compactly as
$$p = P\,C.$$
After the virtual cube model and projective transformation are established, a virtual cube can be superimposed at the coordinate p (shown as a red circle) of each object. AR-IoT interaction is determined by the closest coordinate: when a farmer clicks, the system selects the nearest p by minimizing
$$\sqrt{(x - x_p)^2 + (y - y_p)^2}$$
where (x, y) is the coordinate p and (xp, yp) is the point currently clicked. A white plus marker is used to interact with the object and explore the AR-IoT information. After the interaction, a visual cube associated with the physical object is superimposed at the coordinate p, while a visual text is placed right next to the coordinate p. The coordinate p and the displayed IoT information change according to the coordinates of the physical objects and the IoT information associated with them.
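The following sketch combines the projection above with the nearest-click rule: each crop's coordinate P is projected to a pixel coordinate p, and a click selects the object minimizing the Euclidean distance. The camera matrix C and the crop coordinates are assumed to come from the earlier stages; all names here are illustrative.

```python
import numpy as np

def project(P_world, C):
    """Project a 3D point with the 4 x 3 camera matrix C (row-vector convention above)."""
    ph = np.append(np.asarray(P_world, dtype=float), 1.0) @ C   # w * [x y 1]
    return ph[:2] / ph[2]                                       # pixel coordinate p

def pick_object(click_xy, object_pixels):
    """Index of the object whose projected coordinate p is closest to the clicked point."""
    d = np.linalg.norm(np.asarray(object_pixels) - np.asarray(click_xy), axis=1)
    return int(np.argmin(d))

# Hypothetical usage, assuming C and crop_coords come from the earlier stages:
#   pixels = [project(P, C) for P in crop_coords]
#   selected = pick_object((412, 305), pixels)   # show that crop's virtual cube and text
```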

3.7. Extension to Multi-Cameras

Suppose we have camera 0, camera 1, …, camera n, arranged in such a way that neighboring cameras can be calibrated as pairs (as shown in Figure 14). We use the center of the lens of camera 0 as the origin of the world coordinate system. Then, in order to obtain R and t for camera i with respect to camera 0, let R(i−1,i) and t(i−1,i) be the rotation matrix and translation vector between two neighboring cameras, from camera i to camera i − 1. The rotation and translation from camera i to camera 0 can then be derived as
$$R_i = \prod_{j=1,\ldots,i} R_{(j-1,\,j)} \quad \text{and} \quad t_i = \sum_{j=1,\ldots,i} t_{(j-1,\,j)}.$$
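One common way to realize this chaining in code is to compose the neighboring-pair extrinsics as rigid transforms, as sketched below (column-vector convention, our own formulation rather than the paper's implementation); when the neighboring rotations are small, the accumulated translation reduces approximately to the simple sum above.

```python
import numpy as np

def chain_extrinsics(pairwise):
    """Compose pairwise extrinsics into camera-0-referenced extrinsics.

    pairwise[i-1] = (R_(i-1,i), t_(i-1,i)) maps camera-i coordinates to camera-(i-1)
    coordinates (column vectors): X_(i-1) = R @ X_i + t.  Returns, for each camera i,
    (R_i, t_i) such that X_0 = R_i @ X_i + t_i.
    """
    R_acc, t_acc = np.eye(3), np.zeros(3)
    chained = []
    for R_pair, t_pair in pairwise:
        t_acc = R_acc @ t_pair + t_acc     # translation accumulates through earlier rotations
        R_acc = R_acc @ R_pair             # rotations simply multiply, as in the product above
        chained.append((R_acc.copy(), t_acc.copy()))
    return chained
```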

3.8. User Study

We conducted a user study to validate the proposed AR-IoT paradigm. The aim of the user study was to evaluate the benefit of immersive AR in enhancing a farmer's understanding of IoT information. We measured error and completion time for a monitoring task, comparing the AR-IoT approach and the manual approach. Farmers were asked to complete monitoring tasks with both the AR-IoT and the manual approach. The virtual representations in this study were based on designs derived from a planting process (see Table 1). Figure 13 depicts the interaction and visualization approaches employed in our study. The completion time was also recorded. Farmers did not have a time limit for the AR-IoT or manual tasks but were asked to finish the tasks as quickly and accurately as possible.
In the AR-IoT approach, the farmers were asked to work within the AR-IoT environment. They were not allowed to check any information about the crop manually. First, they used the AR-IoT interaction to select or deselect a crop and filter it. Next, they recorded the numbers of the crops that had been selected (highlighted with a red frame). Then, they were asked to visualize the crops based on the virtual content to complete their monitoring tasks.
In the manual approach, the farmers were asked to use the real-world environment. They were asked to use daily records (paper) to complete their monitoring tasks. They were also allowed to view the crop onsite.

4. Results

4.1. Completion Time

The study was performed by 10 farmers, whose ages ranged from 33 to 60 years. All of them reported lacking IoT or AR experience. We ran the completion time results through a paired t-test with Ha: μ(Manual − AR-IoT) > 0, and the AR-IoT approach was shown to produce better results (lower times). In other words, completion of the monitoring task with the AR-IoT approach was faster than with the manual approach. The results showed that the AR-IoT approach was faster to a statistically significant degree (M = 0.3470, SD = 0.04270) compared with the manual approach (M = 2.4900, SD = 0.26854): t(9) = 27.717, p < 0.001 (see Table 2). Note that the result shown here is for the planting process during days 19–25; it represents the assumption that the crop was processed at the complex times (once every 3 days).
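For reference, the paired one-sided test reported in Table 2 can be reproduced with SciPy as sketched below. The per-farmer completion times used here are hypothetical placeholders, since the raw data are not listed in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical per-farmer completion times (the paper does not list the raw data).
manual = np.array([2.10, 2.40, 2.60, 2.30, 2.90, 2.50, 2.20, 2.70, 2.60, 2.60])
ar_iot = np.array([0.31, 0.35, 0.40, 0.30, 0.38, 0.33, 0.36, 0.34, 0.39, 0.31])

# One-sided paired t-test, Ha: mean(Manual - AR-IoT) > 0, as reported in Table 2.
t_stat, p_value = stats.ttest_rel(manual, ar_iot, alternative="greater")
print(f"t({len(manual) - 1}) = {t_stat:.3f}, p = {p_value:.4g}")
```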

4.2. Accuracy

The aim of the case study was to evaluate the usefulness of immersive AR for enhancing a user's perception of and interaction with IoT information. Through the study, we measured the error: an error occurs when coordinate P or the visual objects (i.e., cube and text) are located at the wrong position. Each position of the physical crops and the distance between a camera and the physical crops were reconstructed and compared with known values measured with a laser distance meter. The results of these comparisons are presented in Figure 15. The error of coordinate P for each physical crop is as follows: physical crop A, 0.003; B, 0.00; and C, 0.002. Based on coordinate P, the error of the visual cube (A–B) was 0.00. The results obtained for the visual text (A–C) were analogous.
Figure 16 demonstrates how the proposed AR-IoT approach can support a farmer in manipulating IoT data mapped to physical crops through virtual object representations, and how this technology can maximize the potential outcome. Simple virtual objects (e.g., virtual text and a cube), located at the crops' coordinates, are used to represent the IoT data. Figure 16a shows how the virtual text method facilitated the visualization of IoT data; virtual text at the bottom left of the plant's coordinate (red circle) was used for details on demand. Whenever the user clicks/interacts within the physical crop area, they can explore further details of the IoT information, such as the date/growth of the crop and the fertilizers used. These attached virtual texts are highlighted with a semi-transparent (gray) layer. The virtual cube in Figure 16b is used to convey IoT data such as the crop's growth; for example, the color of the virtual cube changes as the crop grows, and a yellow virtual cube means that the physical crop is almost ready for sale, so the farm manager can prepare for the next action (i.e., going to market). This result supports more accurate precision farming techniques to assist with planting and growing a crop; thus, there are fewer recording errors and less wasted paper and time.

5. Discussion

With pre-visualized IoT data, AR solutions are integrated by superimposing virtual objects for visualization and interaction. Thus, the absolute coordinates of a physical object are significant when visual objects are superimposed. For coordinate estimation with a low-cost imaging platform, the camera parameters are first estimated via calibration. It is worth mentioning that the obtained coordinates had both positive and negative deviations at different distances, which means the reconstructed Z coordinate was always either slightly overestimated or underestimated by the cameras, respectively, as shown in Figure 17a,b. The coordinates can be roughly estimated even when the object arrangement is complex (e.g., single, grouped, or co-located). However, this coordinate estimation is not reliable for applications demanding large-scale accuracy. To improve this, a farm can utilize more cameras (as mentioned in Section 3.7). Accordingly, virtual object superimposition can be performed synchronously to ensure that the IoT information is updated at every coordinate. In this regard, we show that the IoT provides not only physical objects but also virtual objects to facilitate the development of services and applications.
Experienced farmers might have been able to understand the plants' condition without spending time reading historical logs, for example, for days 1–7 (sowing and transplanting). However, the quantitative experimental results showed a significant improvement in completion time when using the AR-IoT approach during the complex time (days 19–25), as farmers misunderstood when exactly the fertilizing date occurred. This effect shows the benefit of using AR-IoT for a complex planting process. Moreover, the results showed that the time required to complete a task did not affect the resulting accuracy.

6. Conclusions

AR-IoT is a promising solution for enhancing the visualization of IoT information. This study proposes a framework that integrates IoT data into an AR-based environment, called AR-IoT. Integrating the IoT into an AR-based environment allows both the superimposition of IoT information onto physical objects and the facilitation of the interpretation of such information. AR-IoT is able to acquire IoT information directly from the onsite environment.
A case study was carried out using a crop growing in nature. The relationships among camera calibration, object coordinates, and the accuracy of the visual representation and interaction were investigated. The study showed that using our AR-IoT technology was less error prone and much more promising than traditional visualization methods. Furthermore, this study highlights implications that could help decision-makers, reducing waste and lost time and advancing precision farming into the future.
In this study, we not only evaluated the visual representation and interaction of the AR-IoT concept but also employed it in a real environment setting (i.e., moving from a simulated plantation to an actual plantation). Moreover, the virtual object was represented in a comprehensive visualization, using a color scale to represent the crop parameters (e.g., crop growth or disease damage).
The energy consumption of the embedded devices (e.g., multi-cameras or image processing) is another issue that can be addressed through AR-IoT services. Moreover, AR-IoT can be used for preliminary studies on the feasibility of such technologies (e.g., sensors or complex measurement devices), which are expensive and, in most cases, not suitable for farmers. For this reason, the use of AR-IoT systems in precision farming is a promising target for future research.
Thus, this study proposed a novel method to integrate IoT into an AR-based environment, which has the potential to be applied to monitor agricultural crops in a simple and effective manner.

Author Contributions

P.P. performed the experiments and wrote the manuscript; S.-R.T. conceived of the research and provided important advice in the development of the study and manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to thank the reviewers of this paper for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Popović, T.; Latinović, N.; Pešić, A.; Zečević, Ž.; Krstajić, B.; Djukanović, S. Architecting an IoT-enabled platform for precision agriculture and ecological monitoring: A case study. Comput. Electron. Agric. 2017, 140, 255–265. [Google Scholar] [CrossRef]
  2. Huang, J.M.; Ong, S.K.; Nee, A.Y.C. Real-time finite element structural analysis in augmented reality. Advances in Engineering Software. Adv. Eng. Softw. 2015, 87, 43–56. [Google Scholar] [CrossRef]
  3. Daponte, P.; Vito, L.D.; Picariello, F.; Riccio, M. State of the art and future developments of the Augmented Reality for measurement applications. Measurement 2014, 57, 53–70. [Google Scholar] [CrossRef]
  4. Čolaković, A.; Hadžialić, M. Internet of Things (IoT): A review of enabling technologies, challenges, and open research issues. Comput. Netw. 2018, 144, 17–39. [Google Scholar] [CrossRef]
  5. Chuang, C.L.; Yang, E.C.; Tseng, C.L.; Chen, C.P.; Lien, G.S.; Jiang, J.A. Toward anticipating pest responses to fruit farms: Revealing factors influencing the population dynamics of the Oriental Fruit Fly via automatic field monitoring. Comput. Electron. Agric. 2014, 109, 148–161. [Google Scholar] [CrossRef]
  6. Yang, F.; Wang, K.; Han, Y.; Qiao, Z. A Cloud-Based Digital Farm Management System for Vegetable Production Process Management and Quality Traceability. Sustainability 2018, 10, 4007. [Google Scholar] [CrossRef]
  7. Kamilaris, A.; Gao, F.; Prenafeta-Boldu, F.X.; Ali, M.I. Agri-IoT: A semantic framework for Internet of Things-enabled smart farming applications. In Proceedings of the IEEE 3rd World Forum on Internet of Things (WF-IoT), Reston, VA, USA, 12–14 December 2016. [Google Scholar]
  8. Liao, M.S.; Chen, S.F.; Chou, C.Y.; Chen, H.Y.; Yeh, S.H.; Chang, Y.C.; Jiang, J.A. On precisely relating the growth of Phalaenopsis leaves to greenhouse environmental factors by using an IoT-based monitoring system. Comput. Electron. Agric. 2017, 136, 125–139. [Google Scholar] [CrossRef]
  9. Ferrández-Pastor, F.J.; García-Chamizo, J.M.; Nieto-Hidalgo, M.; Mora-Pascual, J.; Mora-Martínez, J. Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture. Sensors 2016, 16, 1141. [Google Scholar] [CrossRef]
  10. Murphy, F.E.; Magno, M.; O’Leary, L.; Troy, K.; Whelan, P.; Popovici, E.M. Big Brother for Bees (3B)—Energy Neutral Platform for Remote Monitoring of Beehive Imagery and Sound. In Proceedings of the 6th International Workshop on Advances in Sensors and Interfaces (IWASI), Gallipoli, Italy, 18–19 June 2015. [Google Scholar]
  11. Díaz, M.; Martín, C.; Rubio, B. State-of-the-art, challenges, and open issues in the integration of Internet of things and cloud computing. J. Netw. Comput. Appl. 2016, 67, 99–117. [Google Scholar] [CrossRef]
  12. Tatić, D.; Tešić, B. The application of augmented reality technologies for the improvement of occupational safety in an industrial environment. Comput. Ind. 2017, 85, 1–10. [Google Scholar] [CrossRef]
  13. Velázquez, F.; Morales Méndez, G. Augmented Reality and Mobile Devices: A Binominal Methodological Resource for Inclusive Education (SDG 4). An Example in Secondary Education. Sustainability 2018, 10, 3446. [Google Scholar] [CrossRef]
  14. ElSayed, N.A.M.; Thomas, B.H.; Marriott, K.; Piantadosi, J.; Smith, R.T. Situated Analytics: Demonstrating immersive analytical tools with Augmented Reality. J. Vis. Lang. Comput. 2016, 36, 13–23. [Google Scholar] [CrossRef]
  15. Rashid, Z.; Melià-Seguí, J.; Pous, R.; Peig, E. Using Augmented Reality and Internet of Things to improve accessibility of people with motor disabilities in the context of Smart Cities. Future Gener. Comput. Syst. 2017, 76, 248–261. [Google Scholar] [CrossRef]
  16. Alam, M.F.; Katsikas, S.; Beltramello, O.; Hadjiefthymiades, S. Augmented and virtual reality based monitoring and safety system: A prototype IoT platform. J. Netw. Comput. Appl. 2017, 89, 109–119. [Google Scholar] [CrossRef]
  17. Gomes, P.; Olaverri-Monreal, C.; Ferreira, M. Making Vehicles Transparent Through V2V Video Streaming. IEEE Trans. Intell. Transp. Syst. 2012, 13, 930–938. [Google Scholar] [CrossRef]
  18. Gushima, K.; Nakajima, T. A Design Space for Virtuality-Introduced Internet of Things. Future Internet 2017, 9, 60. [Google Scholar] [CrossRef]
  19. Jeone, B.; Yoon, J. Competitive Intelligence Analysis of Augmented Reality Technology Using Patent Information. Sustainability 2017, 9, 497. [Google Scholar] [CrossRef]
  20. Hamuda, E.; Ginley, B.M.; Glavin, M.; Jones, E. Improved image processing-based crop detection using Kalman filtering and the Hungarian algorithm. Comput. Electron. Agric. 2018, 148, 37–44. [Google Scholar] [CrossRef]
  21. Phupattanasilp, P.; Tong, S.R. Application of multiple view geometry for object positioning and inquiry in agricultural augmented reality. In Proceedings of the 2nd International Conference on Agricultural and Biological Sciences, Shanghai, China, 23–26 July 2016. [Google Scholar]
  22. Hamuda, E.; Ginley, B.M.; Glavin, M.; Jones, E. Automatic crop detection under field conditions using the HSV colour space and space and morphological operations. Comput. Electron. Agric. 2017, 133, 97–107. [Google Scholar] [CrossRef]
  23. Lima, J.P.; Roberto, R.; Simões, F.; Almeida, M.; Figueiredo, L.; Teixeira, J.M.; Teichrieb, V. Markerless tracking system for augmented reality in the automotive industry. Expert Syst. Appl. 2017, 82, 100–114. [Google Scholar] [CrossRef]
  24. Bauer, A.; Neog, D.R.; Dicko, A.H.; Pai, D.K.; Faure, F.; Palombi, O.; Troccaz, J. Anatomical augmented reality with 3D commodity tracking and image-space alignment. Comput. Graph. 2017, 69, 140–153. [Google Scholar] [CrossRef]
  25. Hirschmüller, H. Improvements in Real-Time Correlation-Based Stereo Vision. In Proceedings of the IEEE Workshop on Stereo and Multi-Baseline Vision, Kauai, HI, USA, 9–10 December 2001. [Google Scholar]
  26. Hirschmüller, H.; Innocent, P.R.; Garibaldi, J. Real-Time Correlation-Based Stereo Vision with Reduced Border Errors. Int. J. Comput. Vis. 2002, 47, 229–246. [Google Scholar] [CrossRef]
  27. Ku, K.; Chia, K.W.; Cheok, A.D. Real-time camera tracking for marker-less and unprepared augmented reality environments. Image Vis. Comput. 2008, 26, 673–689. [Google Scholar]
  28. MATLAB. Available online: https://www.matlab.com (accessed on 6 June 2017).
  29. Heikkilä, J.; Silvén, O. A four-step camera calibration procedure with implicit image correction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, UT, USA, 17–19 June 1997. [Google Scholar]
  30. Lin, N.; Shi, W. The research on Internet of Things application architecture based on web. In Proceedings of the IEEE Workshop on Advanced Research and Technology in Industry Applications, Ottawa, ON, Canada, 29–30 September 2014. [Google Scholar]
  31. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK, 2004; pp. 262–278. [Google Scholar]
Figure 1. Diagram of the proposed augmented reality and Internet of Things system (AR-IoT). ROI: region of interest.
Figure 2. Things. 3D: three-dimensional.
Figure 3. Hierarchical things information structure.
Figure 4. Example of interactive and sensor data designed by the farm manager.
Figure 5. Pinhole camera model.
Figure 6. Camera calibration procedure.
Figure 7. Mean reprojection error per image, along with the overall mean error. The overall mean reprojection error was 0.15 pixels, which coincided with the theoretical expectation (i.e., less than 1 pixel) introduced in [29].
Figure 8. Extrinsic visualization: (a) a total of 20 chessboard patterns were captured, and (b) the patterns appeared in front of the cameras.
Figure 9. Sample results of image rectification: (a) original images; (b) the images are rectified so that corresponding lines have the same row coordinates.
Figure 10. Conceptual block diagram of the ROI for locating a single plant or a group of plants. RGB: red, green, blue.
Figure 11. Triangulation method determining the coordinate P at the intersection of the two rays back-projected from point correspondences x and x′ (matched points), as proposed in [31].
Figure 12. Coordinates of an object and the virtual cube model.
Figure 13. Conceptual block diagram of display and interaction. The red circles represent the coordinates of an object, while the black lines represent the interaction.
Figure 14. Multi-camera calibration model. Each camera is separately calibrated to get the intrinsic parameters. Then, cameras 0 and 1 are calibrated as a pair to obtain R and t between cameras 0 and 1. Similarly, cameras 1 and 2 are calibrated as a pair to obtain R and t between cameras 1 and 2, and so on up to camera n.
Figure 15. Systematic relative error of coordinates P (X, Y, Z) with laser measurements and virtual objects.
Figure 16. Example of visual representation: (a) virtual text with a semi-transparent layer and (b) virtual cube with color to visualize the IoT information.
Figure 17. Example of negative coordinate values, with (a) object distance and (b) object coordinate with negative coordinate value.
Table 1. Planting process.

Day | Planting Processes
1 | Sowing the seeds
1–45 | Daily watering
7 | Transplanting seedlings into the greenhouse and applying organic fertilizer
10, 13, 16, 19, 22 | Spraying organic hormones
25 | Applying chicken manure
45 | Harvesting
Table 2. Paired samples test.

Approach | n | Mean | S.D. | t | p
Manual | 10 | 2.4900 | 0.26854 | |
AR-IoT | 10 | 0.3470 | 0.04270 | |
Manual − AR-IoT | 10 | 2.14300 | 0.24450 | 27.717 * | <0.001
Note: * p < 0.05.
