Article

Feature Point Registration Model of Farmland Surface and Its Application Based on a Monocular Camera

Yang Li, Dongyan Huang, Jiangtao Qi, Sikai Chen, Huibin Sun, Huili Liu and Honglei Jia

1 Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China
2 College of Biological and Agricultural Engineering, Jilin University, Changchun 130022, China
3 Graduate School of Agriculture, Kyoto University, Kyoto 6068502, Japan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2020, 20(13), 3799; https://doi.org/10.3390/s20133799
Submission received: 12 June 2020 / Revised: 3 July 2020 / Accepted: 6 July 2020 / Published: 7 July 2020
(This article belongs to the Special Issue Camera as a Smart-Sensor (CaaSS))

Abstract

In this study, an image registration algorithm was applied to calculate the rotation angle of objects when matching images. Several commonly used image feature detection algorithms, namely features from accelerated segment test (FAST), speeded up robust features (SURF) and maximally stable extremal regions (MSER), were chosen as feature extraction components. Comparing running time and accuracy, the image registration algorithm based on SURF performed better than the other algorithms. Accurately obtaining the roll angle is one of the key technologies for improving the positioning accuracy and operation quality of agricultural equipment. To acquire the roll angle of agricultural machinery, a roll angle acquisition model based on the image registration algorithm was built. The performance of the model with a monocular camera was then tested in the field. The field test showed that the average error of the roll angle was 0.61°, while the minimum error was 0.08°. The test also indicated that the model could accurately obtain the attitude change trend of agricultural machinery working in irregular farmland. The model described in this paper could provide a foundation for agricultural equipment navigation and autonomous driving.

1. Introduction

With the development of computer technology, smart cameras are now widely used in navigation, positioning, tracking, obstacle avoidance, monitoring, etc. [1,2]. Among these applications, visual measurement is becoming a research hotspot; it uses a camera to capture a static single-frame image or a dynamic image sequence of the target [3,4]. Image processing and analysis technologies are then used to measure the target's structural and motion parameters. Visual measurement has the advantages of non-contact operation, fast dynamic response and high efficiency [5], and it has therefore been widely used in industrial pose measurement. Intelligent agricultural machinery is one of the key technologies of modern agriculture, and its working width has gradually broadened. During field operations, the attitude angles of a wide-amplitude agricultural machine, especially the roll angle, affect the tillage depth and the compressing strength of the machine, thereby impacting the tillage quality [6,7]. An accurate roll angle of the implement therefore makes the control of automatically navigated agricultural vehicles more precise [8], and it helps to warn of a possible roll before climbing or descending, thereby effectively reducing the loss of life and property [9]. There are many ways to obtain the roll angle of an agricultural machine [10,11]. Since cameras are low-cost and provide rich visual information, and with the rapid development of hardware computing capabilities, image processing has become a popular method for acquiring the roll angle.
At present, there are many methods for obtaining the attitude angle of an object based on image processing [12,13]. One type works by extracting geometric features of the scene. For example, Arakawa et al. implemented an attitude estimation system for unmanned aerial vehicles (UAVs) based on the visible horizon in images: the system finds the horizon in the image and then estimates the attitude from it. The horizon is detected using morphological smoothing, a Sobel filter and the Hough transform [12]. Timotheatos et al. proposed a vehicle attitude estimation method based on horizon detection, in which the horizon line is detected with a Canny edge detector and a Hough detector, followed by an optimization step performed by a particle swarm optimization (PSO) algorithm. The roll angle of the vehicle is determined from the slope of the horizon, and the pitch angle is computed from the current and initial horizon positions [14]. Another type of method uses the correspondence between points of the target model and their imaging points to obtain the attitude angle of the object. For example, Wang et al. presented a method for the perspective-n-point (PnP) problem of determining the position and orientation of a calibrated camera from known reference points. The method transforms pose estimation into an optimization problem that only requires solving a seventh-order polynomial and a fourth-order univariate polynomial, respectively, making the process easier to understand and significantly improving the performance [15]. Zhang et al. proposed a method for attitude angle measurement using a single captured image to assist the landing of small-scale fixed-wing UAVs. Its advantage is that the attitude angles are obtained from just one image containing five coded landmarks, which reduces the solution time compared with most methods, which require more than one image [16]. These kinds of methods require manually set landmarks to establish the correspondences. In actual experiments, because of the large number of target points, placing the control points manually or solving the correspondence problem by computer introduces certain errors, and solving the correspondence problem also requires a large amount of computation.
In addition, attitude acquisition based on SLAM (simultaneous localization and mapping) has been a research hotspot in recent years. SLAM usually refers to a robot or a moving rigid body, equipped with a specific sensor, estimating its own motion and building a model (some kind of description) of the surrounding environment without prior information [17]. SLAM-based home service robots are now widely used, and some scholars have also studied autonomous driving and drone navigation based on SLAM [18,19]. However, few studies have applied it to complex field environments. Both the method in this paper and the visual odometry module in SLAM use image feature points for registration. Considering the complexity of the field environment, map building is not carried out in this paper.
Both types of methods need to extract object features, so the feature extraction and matching methods directly affect the applicability and accuracy of the algorithm. Efficient registration of visual images has therefore become an important topic in visual image technology. In this paper, an image registration algorithm is used to obtain the rotation angle of the object, and different feature detection algorithms for image registration are compared: the FAST corner point [20], SURF [21] and MSER [22] detection algorithms. The method overcomes the shortcoming of placing high demands on the shape of the object when calculating the rotation angle. There is no need for camera calibration or manually set marker points, and the rotation angle is obtained at the same time as the images are matched.
Section 2 introduces and details our method. In Section 3, different field feature point registration models are compared. Then, based on a farmland feature point registration model, an attitude angle acquisition model for agricultural machinery is proposed. Section 4 shows the field test results and analyses. Finally, Section 5 draws the conclusions.

2. Theory and Method of Feature Point Registration

2.1. Theory of Image Registration

The core of image registration is finding a feasible mathematical model to explain the relationship between corresponding pixels in two images. Although these images may come from different angles and locations in a scene, or come from different sensors, statuses or times, it is possible to find a suitable way to register images according to different features.
Image registration involves both a spatial transformation and a grayscale transformation between two images. When an image is expressed as a two-dimensional matrix, the relationship between the images to be registered, Pi−1 and Pi, is

$$P_i(x, y) = g\big(P_{i-1}(f(x, y))\big)$$

where Pi−1(x, y) and Pi(x, y) are the grayscale values of the pixel at (x, y), f is a two-dimensional spatial geometric transformation function and g is a linear grayscale transformation function.
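As a purely illustrative sketch of this model (not part of the paper's MATLAB implementation), the following Python/OpenCV snippet synthesizes a target image from an original by applying a small rotation as the geometric transformation f and a linear grayscale mapping as g; the file name and the transformation parameters are arbitrary assumptions.

```python
import cv2
import numpy as np

# Illustration of P_i(x, y) = g(P_{i-1}(f(x, y))): f is a 2D geometric transformation
# (here a pure rotation about the image center) and g is a linear grayscale mapping.
original = cv2.imread("field_original.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file

rows, cols = original.shape
M = cv2.getRotationMatrix2D((cols / 2, rows / 2), 2.0, 1.0)   # f: rotate by 2 degrees
geometric = cv2.warpAffine(original, M, (cols, rows))

a, b = 1.1, -10.0                                             # g(v) = a*v + b
target = np.clip(a * geometric.astype(np.float32) + b, 0, 255).astype(np.uint8)
```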

2.2. Image Registration Procedure

Figure 1 shows the procedure of the image registration algorithm. First, take two consecutive images Pi−1 and Pi; Pi−1 is referred to as the original image and Pi as the target image. Then, convert both images to grayscale and extract feature points from them with a feature extraction algorithm, i.e., pinpoint the feature points on each grayscale image and obtain, for each feature point, its descriptor (including scale and direction information) and its location. Next comes the feature point registration step: set the original image as the reference and match the target image against it using the feature descriptors. After that, eliminate mismatches. Finally, calculate a transformation matrix from the locations of the matched feature point pairs, and derive the rotation angle between the two consecutive images from this matrix.
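The sketch below outlines this procedure in Python with OpenCV. It is only an approximation of the paper's MATLAB pipeline: ORB is used as a stand-in detector (SURF requires the non-free OpenCV contrib build), and a RANSAC-based partial-affine fit stands in for the robust transformation estimate; the rotation angle is then read off the estimated matrix.

```python
import cv2
import numpy as np

def rotation_between(img_prev, img_curr):
    """Estimate the rotation angle (degrees) between two consecutive frames.

    Illustrative only: ORB stands in for SURF, and a RANSAC partial-affine fit
    stands in for the MSAC-based transformation estimate used in the paper.
    """
    g_prev = cv2.cvtColor(img_prev, cv2.COLOR_BGR2GRAY)
    g_curr = cv2.cvtColor(img_curr, cv2.COLOR_BGR2GRAY)

    detector = cv2.ORB_create(nfeatures=1000)
    kp_prev, des_prev = detector.detectAndCompute(g_prev, None)
    kp_curr, des_curr = detector.detectAndCompute(g_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_prev, des_curr)

    pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in matches])
    pts_curr = np.float32([kp_curr[m.trainIdx].pt for m in matches])

    # Robustly estimate a rotation (+ scale and translation) between the frames;
    # the rotation angle is recovered from the 2x2 linear part of the matrix.
    M, inliers = cv2.estimateAffinePartial2D(pts_prev, pts_curr, method=cv2.RANSAC)
    return np.degrees(np.arctan2(M[1, 0], M[0, 0]))
```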
In this method, the Euclidean distance is used to measure the registration between two images. When the luminance distributions of several regions of the image differ only slightly, a feature point in the original image may match multiple feature points in a certain region of the target image, and these multiple points may not be correct registration points. In addition, when a feature point in the original image has no detectable counterpart in the target image, the point found directly by the closest Euclidean distance may be a mismatch. Thus, this paper uses the ratio of the closest distance to the second-closest distance, within a set threshold, to find the most distinctive matches, which reduces the occurrence of the above situations. Assume that there are M1 and M2 feature points in images Pi−1 and Pi, respectively. For any feature point m1 in M1, let m2 and m2* be the two feature points in M2 with the shortest Euclidean distances dij and dij* from m1. If dij ≤ α × dij* (in this paper, α is 0.6), then m1 and m2 are taken as a corresponding registration feature point pair. This rule is used to find all possible registration pairs among the feature points of Pi−1 and Pi.
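A minimal sketch of this ratio test (assuming floating-point descriptors such as SURF's, matched by Euclidean distance) could look as follows; it is illustrative only and not the paper's MATLAB code:

```python
import cv2

def ratio_test_matches(des_prev, des_curr, alpha=0.6):
    """Keep a match only if the closest descriptor distance is at most
    alpha times the second-closest distance (alpha = 0.6 in this paper)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance for float descriptors
    knn = matcher.knnMatch(des_prev, des_curr, k=2)
    good = []
    for pair in knn:
        if len(pair) < 2:
            continue
        best, second = pair
        if best.distance <= alpha * second.distance:
            good.append(best)
    return good
```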

2.3. Method of Image Registration

The feature-based image registration algorithm mainly includes feature extraction, feature description and feature registration. If the ratio of correctly matched feature pairs is greater than a certain proportion, the M-estimator sample consensus (MSAC) algorithm [22] is applied to obtain these pairs, and the parameters of the geometric transformation model between the two images are then calculated. Thus, before image registration can be executed, features must first be extracted.
Feature extraction mainly includes feature detection and feature description, and feature detection algorithms fall into two classes: feature-point-based detection algorithms and feature-area-based detection algorithms. Corner detection is a commonly used feature point detection approach; examples include the Harris corner point, SUSAN corner point, FAST corner point, SURF and the scale-invariant feature transform (SIFT) descriptor [23]. FAST considers 16 pixels on a circle around a candidate pixel: if, for a center pixel p, the values of n consecutive points among the 16 pixels on the circle are all greater or all smaller than the value of p, then the center point is taken as a corner point. SURF is an improved and accelerated version of SIFT. SIFT detects extreme points in scale space to find feature points at different scales and calculates the orientation of the key points. SURF operates on the integral image, in which the value of each pixel is the sum of all elements above and to the left of the corresponding position in the original image; this is what enables the acceleration. Because the FAST corner detection algorithm is fast, and the SURF algorithm is faster than the SIFT algorithm while maintaining high accuracy, these two algorithms were chosen as representatives for comparison.
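To illustrate the integral image idea, the following NumPy sketch (an illustration, not the paper's implementation) computes an integral image and uses it to obtain the sum of any rectangular region in constant time:

```python
import numpy as np

def integral_image(img):
    """Each entry holds the sum of all pixels above and to the left of
    (and including) the corresponding position in the original image."""
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the rectangle with corners (r0, c0) and (r1, c1), inclusive,
    computed in constant time from the integral image ii."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return int(total)
```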
Feature-area-based detection methods find feature points together with their surrounding area, which carries additional information [24]. The MSER method denotes a set of distinguished regions detected in a grayscale image. It adapts well to detecting regions of consistent gray level with distinctive boundaries, as well as structured and textured images. MSER also performs better than other area detection operators when the light intensity changes [25].
Image registration algorithms based on the FAST corner point (hereafter, FAST−M), SURF (hereafter, SURF−M) and MSER (hereafter, MSER−M) were built. To choose the most suitable algorithm for the field feature point image registration model, images from two typical field operations were tested in MATLAB. The software environment was Windows 10, MATLAB R2014a and Microsoft Visual C++ 2015; the processor was an Intel Core i7-8700 CPU running at 3.2 GHz with 16.0 GB RAM.

3. Field Feature Point Registration Model and Application

3.1. Comparison and Analysis of Detection Operators

Considering the working environment of agricultural machinery, images captured by a camera in the field may be affected by scaling, noise, brightness, etc. Therefore, the effects of image scaling, image noise and image brightness on registration accuracy were tested, respectively. According to the communique of the second national land survey data main achievements [26], land with a slope of less than 2° accounts for 57.1% of the total area. Therefore, the actual deflection angle between the original images and the target images was set to 2° in the experiment. The images used for the three tests are displayed in Figure 2. To test the effects of scaling, noise and brightness on registration accuracy, the target image was first obtained by rotating and cropping the original image. Then, three different operations were performed on the image shown in Figure 2b to obtain new target images. Finally, these were registered with the original images, respectively. Each test was run 100 times, and the average program running time and the average angle error were taken as the evaluation indices.

3.1.1. Experiment on the Effect of Scaling on Registration Accuracy

The target images were not affected by brightness or noise changes relative to the original image. Based on the target image in Figure 2b, scale transformations with factors of 0.6, 0.8, 1.2 and 1.4 were performed using the resize function in OpenCV, and the transformed images were then used for the experiments. The registration results for the target image in Figure 2b are shown in Figure 3, Figure 4 and Figure 5, respectively.
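A sketch of how such scaled target images could be generated with OpenCV's resize function is shown below; the file name is a placeholder assumption:

```python
import cv2

# Hypothetical file name for the 2°-rotated target image of Figure 2b.
target = cv2.imread("field_target.jpg")
scaled_targets = {
    s: cv2.resize(target, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    for s in (0.6, 0.8, 1.2, 1.4)
}
```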
The experimental results of FAST−M, SURF−M and MSER−M are shown in Figure 6. According to the curves in Figure 6, the scale change has a certain influence on the accuracy of the MSER−M and FAST−M algorithms, with the greatest influence on the FAST−M algorithm. When the transform scale is 0.6 or 1.4, the FAST−M algorithm cannot complete the registration. Although the accuracies of the SURF−M and MSER−M algorithms are close, the SURF−M algorithm takes less time than the MSER−M algorithm. It can be seen that the SURF−M algorithm is the best choice across the different scales.

3.1.2. Experiment on the Effect of Noise on Registration Accuracy

The target images were not affected by brightness changes relative to the original image. To verify the adaptability of the algorithms to noise variation, salt-and-pepper noise with noise densities of 0.02, 0.04, 0.06 and 0.08 was added to the target image in Figure 2b, respectively, and the noisy images were then registered with the original image. The results of FAST−M, SURF−M and MSER−M are shown in Figure 7.
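The salt-and-pepper corruption could be reproduced with a sketch like the following (an assumed implementation, in which half of the corrupted pixels are set to black and half to white):

```python
import numpy as np

def add_salt_pepper(img, density, seed=0):
    """Corrupt a fraction `density` of pixels, half set to 0 (pepper)
    and half set to 255 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape[:2])
    noisy[mask < density / 2] = 0
    noisy[(mask >= density / 2) & (mask < density)] = 255
    return noisy

# Noise densities used in the test: 0.02, 0.04, 0.06 and 0.08.
```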
As shown in Figure 7, as the noise intensity increases, the accuracies of all three algorithms decrease, with the FAST−M algorithm decreasing the most. In terms of running time, with increasing noise intensity, the time of the FAST−M algorithm grows markedly, while the times of the SURF−M and MSER−M algorithms grow only slightly. The reason is that the FAST corner detection algorithm is more sensitive to noise: when the noise is strengthened, it detects too many noise points, which increases the running time and correspondingly reduces the accuracy.

3.1.3. Experiment on the Effect of Brightness on Registration Accuracy

To verify the adaptability of the algorithms to brightness variation, the target images were altered in brightness relative to the original image. The brightness of the target image in Figure 2b was changed with MATLAB and the result was then registered with the original image. The brightness change was implemented by increasing or decreasing the value of every pixel in the image by a fixed amount; in this study, four offsets were selected: −50, −30, 30 and 50.
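A minimal sketch of this brightness operation (with clipping to the valid 8-bit range, which the paper does not describe explicitly but is the usual choice) is:

```python
import numpy as np

def shift_brightness(img, delta):
    """Add a constant offset to every pixel, clipping to the valid 0-255 range."""
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# Offsets used in the test: -50, -30, 30 and 50.
```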
The results of the detection operators FAST−M, SURF−M and MSER−M are shown in Figure 8. As shown in Figure 8, the change of image brightness has a certain impact on the accuracy of the three algorithms but little effect on the running time. Overall, the SURF−M algorithm takes the least time. Although MSER−M has the highest accuracy, it also takes the most time.

3.1.4. Conclusion and Discussion

With regard to running time, MSER−M was the slowest and FAST−M the fastest. In terms of accuracy, all three algorithms had errors of less than 0.1°. The FAST−M algorithm was more sensitive to noise and scale transformations than the other algorithms. Although the accuracy of MSER−M was the highest in most cases, its running time was the longest. Considering both running time and accuracy, the best registration algorithm was SURF−M. As a result, SURF−M was selected in this study to register feature points on images.

3.2. Farmland Feature Point Registration Model and Procedure

3.2.1. Farmland Feature Point Registration Model

When agricultural machinery is working along a straight line in the field, the environment in front of or behind the machinery changes only slightly within a short time interval. To eliminate the influence of variation in depth, in the actual field test the program collected pictures at short intervals. Because the agricultural machinery works along a straight line at low speed, the depth change over this short period does not have a significant impact on parallax compared with the field environment. In the experiment, a monocular camera was mounted on the agricultural machinery to record field images, which were captured in real time while the machinery was working. The field images were then registered using the SURF−M algorithm, and the rotation angle was calculated from the transformation matrix of the two consecutive images.
In Figure 9, f is the focal length; Lz is the distance between the object and the lens; X1 and Y1 are the height and width of the image plane, respectively; H is the actual size of the object; and h and w are the height and width of the object in the camera image plane coordinate system, respectively.
Referring to the triangle similarity principle:

$$\frac{H}{h} = \frac{L_z}{f}$$
That is, the object in reality is proportional to the object in the image. When the camera rotates around its optical axis, the object in the image plane also rotates. Because the rotation angle of the object in the image plane equals the camera rotation angle, the rotation angle between two consecutive images can be derived, and from it the camera rotation angle can be obtained. When the camera is fixed on the agricultural machinery, the corresponding axes of the two coordinate systems are parallel along the direction of the camera's optical axis. Thus, the rotation angle of the camera is the roll angle of the agricultural machinery.
The process of finding the roll angle from the transformation matrix is as follows. After the matching pairs between the images are obtained, the transformation relationship between the images can be estimated. During the field test, the camera faced downwards at an angle of 15° to the horizontal, so most of the area in the captured image was the field. The feature points on the field were then approximately coplanar, so a homography matrix was used to calculate the rotation matrix. Suppose that, for the coordinates A(x1, y1) of any point on the original image, the corresponding point A′(x2, y2) can be found on the target image after the transformation by the homography matrix H; then, the transformation formula is as follows:
$$\begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix} = H \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix}$$
Then, we can get:
$$x_2 = \frac{x_1 h_{11} + y_1 h_{12} + h_{13}}{x_1 h_{31} + y_1 h_{32} + h_{33}}$$

$$y_2 = \frac{x_1 h_{21} + y_1 h_{22} + h_{23}}{x_1 h_{31} + y_1 h_{32} + h_{33}}$$
It could be written as:
$$x_1 h_{31} x_2 + y_1 h_{32} x_2 + h_{33} x_2 - x_1 h_{11} - y_1 h_{12} - h_{13} = 0$$

$$x_1 h_{31} y_2 + y_1 h_{32} y_2 + h_{33} y_2 - x_1 h_{21} - y_1 h_{22} - h_{23} = 0$$
Writing $h = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33})^{T}$, these two equations can be stacked as:
$$\begin{pmatrix} -x_1 & -y_1 & -1 & 0 & 0 & 0 & x_1 x_2 & y_1 x_2 & x_2 \\ 0 & 0 & 0 & -x_1 & -y_1 & -1 & x_1 y_2 & y_1 y_2 & y_2 \end{pmatrix} h = 0$$
Suppose the homography matrix H is multiplied by a non-zero number a. Because the new matrix aH maps the homogeneous point A to the homogeneous point aA, and aA and A correspond to the same image point, aH and H have the same effect. Therefore, there are only eight unknowns in the homography matrix H. In practice, H is usually scaled by a non-zero factor so that h33 = 1. With eight unknowns, H requires eight equations, which can be provided by four pairs of corresponding points. After the homography matrix is obtained, it is decomposed into the corresponding rotation matrix according to the decomposition method in [27]. Finally, the roll angle is obtained from the rotation matrix.
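The sketch below follows the same route with OpenCV instead of solving the linear system by hand: a RANSAC-estimated homography stands in for the MSAC step, and cv2.decomposeHomographyMat performs the decomposition of [27]. Note that this OpenCV decomposition needs a camera intrinsic matrix K, which is an assumption of the sketch; the paper itself does not calibrate the camera.

```python
import cv2
import numpy as np

def roll_angle_from_matches(pts_prev, pts_curr, K):
    """Sketch: estimate the roll angle (degrees) from matched feature points.

    pts_prev, pts_curr: Nx2 float32 arrays of corresponding points (N >= 4).
    K: assumed 3x3 camera intrinsic matrix required by the OpenCV decomposition.
    """
    # Robust homography estimate; RANSAC plays the role of the MSAC step.
    H, inlier_mask = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)

    # Decompose H into candidate (R, t, n) solutions [27]. A real implementation
    # must select the physically valid candidate; the first one is taken here.
    _, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    R = rotations[0]

    # Rotation about the optical (z) axis extracted from the rotation matrix.
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))
```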

3.2.2. Procedure of Farmland Feature Point Registration

The workflow of the attitude angle acquisition model is displayed in Figure 10. First, at the initial state of operation, load an initial field image P0 as the original image, choose θ0 as the initial angle, and use P0 as the reference for comparison with the images received later. After that, capture an image every period T1 (T1 = 1000 ms) and calculate the rotation angle θi between images Pi−1 and Pi. Then, calculate the relative rotation angle δi = δi−1 + θi of the loaded image with respect to the original image. This is the first loop.
To eliminate the effect of distance variation, a new loop is needed. Before the new loop starts, after a certain time interval T2 from the initial time, capture a new initial image P0′ as the original image together with the angle θ0′ (θ0′ is the relative angle δi of the last image Pi in the previous loop), and use this original image as the template for comparison with the images received later. The new loop then starts: load an image every period T1 (T1 = 1000 ms), calculate the rotation angle θi between images Pi−1 and Pi, and then calculate the relative rotation angle δi = δi−1 + θi of the loaded image with respect to the original image. This is the second loop.
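A rough sketch of these two loops is given below, assuming a rotation_between(prev, curr) function such as the one sketched in Section 2.2; the module name, the camera index and the value of T2 are assumptions, since the paper does not state T2:

```python
import time
import cv2

# rotation_between() is the frame-to-frame angle estimator sketched in Section 2.2;
# "registration" is an assumed module name for that sketch.
from registration import rotation_between

T1 = 1.0      # interval between consecutive frames, 1000 ms
T2 = 60.0     # interval after which a new reference image P0' is taken (assumed value)

cap = cv2.VideoCapture(0)            # assumed camera index
_, prev = cap.read()                 # initial image P0
delta = 0.0                          # relative angle with respect to P0 (theta_0 = 0)
t_ref = time.time()

while True:
    time.sleep(T1)
    ok, curr = cap.read()
    if not ok:
        break
    delta += rotation_between(prev, curr)    # delta_i = delta_{i-1} + theta_i
    prev = curr
    print(f"roll angle relative to the reference image: {delta:.2f} deg")

    if time.time() - t_ref >= T2:
        # Start the next loop: the current frame becomes the new reference P0'
        # and the accumulated angle delta carries over as theta_0'.
        t_ref = time.time()
```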
The hardware required by the system includes a monocular USB camera and a computer, and the running software is MATLAB R2014a. The attitude angle acquisition model of the agricultural machinery is constructed using the SURF−M algorithm, and the system is built with the MATLAB GUI.

4. Field Test and Analyses

To verify the feasibility of the attitude acquisition system, a field test was conducted in Boli County, Heilongjiang Province, China.
The experimental devices included a 2BGD-6 soybean variable fertilization seeder, an 88.2-kW CASE IH PUMA 210 tractor, a monocular camera, a digital level ruler (CHERVON Trading Co., Ltd., Nanjing, China, DEVON 9409.1) and the monitoring terminal running the attitude angle acquisition model. The display accuracy of the digital level ruler is ±0.05°.
During the test, the equipment was installed as shown in Figure 11. The camera faced downwards at an angle of 15° to the horizontal, so most of the area in the captured image was the field. In the dynamic comparison test, the tractor operated along a straight line at 3.6 km/h. When the speed of the tractor was stable, the monitoring test started. The frequency of attitude monitoring was 1 Hz, and a marking point was made every 1 m along the ridge, in the driving direction of the seeder, using a tape measure. At each marking point, the angle of the agricultural machine was recorded as the true value with the digital level ruler. At the same time, an image was collected through the monitoring terminal at each marking point, and the measured value was displayed and saved in real time. The interface of the monitoring system is shown in Figure 12. By comparing the two values, the performance of the image feature point registration algorithm applied to roll angle measurement was evaluated.
As displayed in Table 1 and Figure 13, comparing the angle values calculated by the attitude monitoring algorithm with those measured by the digital level ruler, the maximum, minimum and average absolute errors of the attitude monitoring algorithm were 0.97°, 0.08° and 0.61°, respectively.
The test results reveal that, when the agricultural machinery was working in the field and its attitude changed, the established model could accurately perceive the attitude change trend and obtain the rotation angle of the machinery. However, there was a certain error in the attitude angle obtained by the system. The main cause is that the camera was installed on the agricultural machine: the vibration of the machinery during operation affected the camera, causing a slight deviation of the object in the captured images and hence a certain error in the rotation angle. In this study, since the feature variation between two images taken within a short time was small, the rotation angle between the two images could be obtained, from which the rotation angle of the camera at different times, and then the roll angle of the agricultural machinery, could be acquired. Compared with the SLAM algorithm, the function of the algorithm in this paper is relatively simple, but the algorithm is more concise and its complexity is lower.

5. Conclusions

This paper presents a method for obtaining feature points with a monocular camera. Image registration algorithms based on common image feature detection algorithms, namely the FAST, SURF and MSER feature detection algorithms, were studied. Comparing the running time and accuracy of the three detection algorithms, the results demonstrate that the FAST−M algorithm was more sensitive to noise and scale transformation than the other algorithms. Regarding running time, the MSER−M algorithm took the longest, followed by SURF−M, while the FAST−M algorithm took the shortest. With respect to accuracy, the errors of the three algorithms were all less than 0.1°. In light of these results, the SURF−M algorithm was selected for field feature point registration.
In this study, a model for obtaining the attitude angle based on a monocular camera was built. When tested in the field, the average error of the roll angle was 0.61° and the minimum error was 0.08°. The field experiments indicated that the model could accurately obtain the attitude change trend of agricultural machinery. Because of machine vibration, there was a certain error in the attitude angle obtained by the system. In the future, how to eliminate the influence of vibration on the accuracy of attitude angle acquisition should be studied further.

Author Contributions

Conceptualization, Y.L. and J.Q.; methodology, Y.L., H.J. and D.H.; the experiment and on-site application, Y.L. and H.L.; supervision, J.Q., D.H. and H.J.; writing—original draft preparation, J.Q., Y.L., S.C. and H.S.; and writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This article was supported by the National Natural Science Foundation of China (31971783 and 31401284), the National Key Research and Development Program of China (2016YFD0200700), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2014BAD06B03), and the Jilin Science and Technology Development Plan (20190302116GX), and it was also funded by the China Scholarship Council. The financial support from the above funds and organizations is gratefully acknowledged.

Acknowledgments

This research was supported by Heilongjiang Bonong Xingda Machinery Co., Ltd., and special thanks are given to them.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lemley, J.; Kar, A.; Drimbarean, A.; Corcoran, P. Efficient CNN implementation for eye-gaze estimation on low-power/low-quality consumer imaging systems. arXiv 2018, arXiv:1806.10890.
  2. Lee, S.H.; Yang, C.S. A real time object recognition and counting system for smart industrial camera sensor. IEEE Sens. J. 2017, 17, 2516–2523.
  3. Zhang, Z.; Zhou, Y.; Liu, H.; Zhang, L.; Wang, H. Visual measurement of water level under complex illumination conditions. Sensors 2019, 19, 4141.
  4. Liu, Q.; Chu, B.; Peng, J.; Tang, S. A visual measurement of water content of crude oil based on image grayscale accumulated value difference. Sensors 2019, 19, 2963.
  5. Lin, Y.-T.; Lin, Y.-C.; Han, J.-Y. Automatic water-level detection using single-camera images with varied poses. Measurement 2018, 127, 167–174.
  6. Hu, L.; Yang, W.; He, J.; Zhou, H.; Luo, X.; Zhao, R.; Tang, L.; Du, P. Roll angle estimation using low cost MEMS sensors for paddy field machine. Comput. Electron. Agric. 2019, 158, 183–188.
  7. Altikat, S.; Celik, A. The effects of tillage and intra-row compaction on seedbed properties and red lentil emergence under dry land conditions. Soil Tillage Res. 2011, 114, 1–8.
  8. Shi, G.; Li, X.; Jiang, Z. An improved yaw estimation algorithm for land vehicles using MARG sensors. Sensors 2018, 18, 3251.
  9. Jin, X.; Yin, G.; Chen, N. Advanced estimation techniques for vehicle system dynamic state: A survey. Sensors 2019, 19, 4289.
  10. Kobayashi, K.; Taniwaki, K.; Saito, H.; Seki, M.; Tamaki, K.; Nagasaka, Y. An autonomous rice transplanter guided by global positioning system and inertial measurement unit. J. Field Robot. 2009, 26, 537–548.
  11. Tong, X.; Li, Z.; Han, G.; Liu, N.; Su, Y.; Ning, J.; Yang, F. Adaptive EKF based on HMM recognizer for attitude estimation using MEMS MARG sensors. IEEE Sens. J. 2017, 18, 3299–3310.
  12. Arakawa, M.; Okuyama, Y.; Mie, S.; Abderazek, B.A. Horizontal-based attitude estimation for real-time UAV control. In Proceedings of the Seventeenth International Conference on Computer Applications, Yangon, Myanmar, 27 February–1 March 2019.
  13. Gakne, P.V.; O’Keefe, K. Monocular-based pose estimation using vanishing points for indoor image correction. In Proceedings of the 2017 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sapporo, Japan, 18–21 September 2017.
  14. Timotheatos, S.; Piperakis, S.; Argyros, A.; Trahanias, P. Vision based horizon detection for UAV navigation. In Proceedings of the 27th International Conference on Robotics in Alpe-Adria-Danube Region, Patras, Greece, 6–8 June 2018; pp. 181–189.
  15. Wang, P.; Xu, G.; Cheng, Y.; Yu, Q. A simple, robust and fast method for the perspective-n-point problem. Pattern Recognit. Lett. 2018, 108, 31–37.
  16. Zhang, J.; Ren, L.; Deng, H.; Ma, M.; Zhong, X.; Wen, P. Measurement of unmanned aerial vehicle attitude angles based on a single captured image. Sensors 2018, 18, 2655.
  17. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
  18. Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220.
  19. Von Stumberg, L.; Usenko, V.; Engel, J.; Stückler, J.; Cremers, D. From monocular SLAM to autonomous drone exploration. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–8.
  20. Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443.
  21. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
  22. Torr, P.H.S.; Zisserman, A. MLESAC: A new robust estimator with application to estimating image geometry. Comput. Vis. Image Underst. 2000, 78, 138–156.
  23. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  24. Salahat, E.; Saleh, H.; Sluzek, A.; Al-Qutayri, M.; Mohammad, B.; Ismail, M. A maximally stable extremal regions system-on-chip for real-time visual surveillance. In Proceedings of the IECON 2015—41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 2812–2815.
  25. Zhou, Z.; Shi, Y.; Gao, Z. Wildfire smoke detection based on local extremal region segmentation and surveillance. Fire Saf. J. 2016, 85, 50–58.
  26. Ministry of Land and Resources of the People’s Republic of China. The Communique of the Second National Land Survey Data Main Achievements, 2013. Available online: http://www.mnr.gov.cn/zwgk/zytz/201312/t20131230_1298865.htm (accessed on 29 June 2020).
  27. Malis, E.; Vargas, M. Deeper understanding of the homography decomposition for vision-based control. Res. Rep. 2007, RR-6303, 90.
Figure 1. Procedure of feature-point-based image registration algorithm.
Figure 2. Original images and target images for scale test.
Figure 3. The registration result of FAST−M for target image in Figure 2b.
Figure 4. The registration result of SURF−M for target image in Figure 2b.
Figure 5. The registration result of MSER−M for target image in Figure 2b.
Figure 6. The result of FAST−M, SURF−M and MSER−M. MAE means mean absolute error, while FAST_T, MSER_T and SURF_T mean the running times of the algorithms, respectively.
Figure 7. The result of FAST−M, SURF−M and MSER−M. MAE means mean absolute error, while FAST_T, MSER_T and SURF_T mean the running times of the algorithms, respectively.
Figure 8. The result of FAST−M, SURF−M and MSER−M. MAE means mean absolute error, while FAST_T, MSER_T and SURF_T mean the running times of the algorithms, respectively.
Figure 9. The 2D imaging model of a camera.
Figure 10. Work flow chart of the system.
Figure 11. Test field.
Figure 12. Program operation interface.
Figure 13. The field test result.
Table 1. The field test result.

Serial Number     Actual Value/°    Measurement/°    Absolute Error/°
1                  2.05              2.56             0.51
2                  2.44              2.03             0.41
3                  2.58              2.96             0.38
4                  1.86              2.67             0.81
5                  2.89              3.32             0.43
6                 −1.98             −1.07             0.91
7                 −2.17             −1.34             0.83
8                 −2.89             −2.26             0.63
9                 −3.36             −2.98             0.38
10                −3.86             −3.24             0.62
11                 1.54              0.57             0.97
12                −1.05             −0.18             0.87
13                −2.35             −1.86             0.49
14                −3.81             −3.23             0.58
15                −4.45             −3.67             0.78
16                −2.89             −2.06             0.83
17                −3.45             −4.11             0.66
18                −2.64             −2.56             0.08
19                 1.86              1.23             0.63
20                 2.35              1.67             0.68
21                 3.45              3.86             0.62
22                 2.36              2.98             0.41
23                −1.84             −1.56             0.28
24                 2.17              2.64             0.47
25                 3.94              3.06             0.88
Maximum Error      —                 —                0.97
Minimum Error      —                 —                0.08
Average Error      —                 —                0.61
