Article

An Efficient Seam Elimination Method for UAV Images Based on Wallis Dodging and Gaussian Distance Weight Enhancement

1
Beijing Key Laboratory for Geographic Information Systems and Environment and Resources, Capital Normal University, Beijing 100048, China
2
Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON N2L 2R7, Canada
*
Author to whom correspondence should be addressed.
Sensors 2016, 16(5), 662; https://doi.org/10.3390/s16050662
Submission received: 22 March 2016 / Revised: 29 April 2016 / Accepted: 2 May 2016 / Published: 10 May 2016
(This article belongs to the Special Issue UAV-Based Remote Sensing)

Abstract
The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing meets the increasing demand for low-altitude very high resolution (VHR) image data. However, fast processing of massive UAV data has become an indispensable prerequisite for its application in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the brightness difference between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which distributes the partial dislocation at the seam over the whole overlap region with a smooth transition effect. The method was validated at a study site located in Hanwang (Sichuan, China), an area seriously damaged in the 12 May 2008 Wenchuan Earthquake. A performance comparison between WD-GDWE and five other classical seam elimination algorithms was then conducted in terms of efficiency and effectiveness. Results showed that WD-GDWE is not only efficient, but also achieves satisfactory effectiveness. This method is promising for advancing applications in the UAV industry, especially in emergency situations.

Graphical Abstract

1. Introduction

The development of UAVs meets the current increasing demand for low-altitude very high resolution (VHR) remote sensing data [1,2,3]. Compared with the traditional photogrammetry process, the fast reconstruction of UAV image mosaics is a precondition of its application [4,5]. However, UAV image processing faces challenges of large geometric deformity, small image size, large image numbers, and uneven exposure, which lead to difficulties in seam elimination when mosaicking UAV images [6,7]. The mosaic seams mainly come from two sources: (1) color or brightness differences due to exposure variation; and (2) texture misplacement due to geometric deformity, projection differences caused by tall landscapes, and differences in image capture position [8]. Both types of seams clearly appear on the UAV remote sensing platform; therefore, their effective and efficient removal is essential for the application of UAVs.
At present, the major seam elimination methods are seamline detection and image fusion. The seamline detection method can be considered a way of circumventing the problem of tall landscapes in the images [9], and falls into two categories. The first category searches for the seamline using the variation of gradient magnitude or image texture. Davis [10] proposed an optimal seamline searching method based on Dijkstra’s algorithm, which relies mainly on the calculation of adjacency and distance matrices of high algorithmic complexity [11]. Yuan [12] replaced the Dijkstra algorithm with a greedy algorithm for local optimal path selection; however, the algorithm is still limited by iterative convergence. Kerschner [13] applied the twin-snake operator to automatically select the image seamline, but the operator cannot guarantee a globally optimal solution. Chon [14] eliminated seamlines by dynamic programming stitching, whose computational burden rises exponentially with seamline length [15]. The second category applies ancillary data to detect the seamline. Wan [16] proposed an algorithm based on vector road ancillary data, which is only suitable for a few systems and is significantly limited by the vector data. Zuo [17] applied the greedy snake algorithm with the assistance of a DSM to detect seamlines; the algorithm is fairly complicated and highly dependent on the ancillary data. In conclusion, all these seamline searching algorithms have three limitations when applied to UAV images: (1) they require high geometric accuracy, but UAV remote sensing platforms are rather unstable and have low parameter accuracy, and the onboard camera sensors cannot meet the accuracy requirements because they are not designed for photogrammetry; (2) they are complicated and time-consuming, whereas UAV images are small in size but large in number, which demands high processing efficiency; (3) objects in UAV images do not overlap in a regular manner, so seamlines are difficult to detect, especially in regions with high densities of tall buildings.
In addition to seamline detection, image fusion can also be applied to eliminate mosaic seams [18]. Uyttendaele [19] applied a feathering and interpolating function based on weighted features to reduce the color difference; however, the feathering algorithm tends to produce fuzzy edges when smoothing the exposure difference, and can sometimes lead to a “ghosting” effect. Szeliski [20,21] manually selected at least four pairs of feature points and estimated the variation between images with a function built on the pixel differences of the feature points, which achieved a satisfactory layer fusion effect. However, since the estimation is based on brightness differences, it is highly sensitive to image brightness and is poorly suited to automation [22]. Su [23] proposed an image fusion method based on wavelet multi-scale decomposition: the source images are first decomposed at multiple scales, then the wavelet weight parameters are determined and the images are reconstructed through the inverse wavelet transform. The algorithm is highly complicated, and the wavelet parameters are difficult to determine [24]. Zomet [25] eliminated mosaic seams by analyzing the contrast in smooth stitching areas, but the field smoothing can lead to “ghosting” effects [11,26]. Tian [27] developed a brightness and texture seam elimination (BTSE) method with a smooth transition effect in a one-dimensional direction in the overlap region; a “ghosting” effect tends to appear at the border when the algorithm is applied to UAV images with large geometric deformity.
In conclusion, all these image fusion methods for UAV images have two major limitations: (1) a “ghosting” effect tends to appear due to the uneven exposure and the large geometric deformity of UAV images; (2) they are fairly complicated and require long computation times, which conflicts with the fact that UAV systems require high data processing efficiency to deal with the massive amount of image data.
Therefore, the objective of this study is twofold: firstly, to adjust the brightness difference between the two matched images with the Wallis dodging method; and secondly, to develop a new image fusion algorithm to eliminate the texture seamline based on the First Law of Geography.

2. Study Site and Data

The study site is located in Hanwang (104°09′E to 104°12′E and 31°25′N to 31°28′N) in the northwestern part of the Sichuan Basin (China) and covers an overall area of 54.3 km². It is a city at the foot of mountains with an average elevation of 685 m above sea level and slopes of less than 5°. As an industrial city, it has a sound transportation system and a total population of 53,000, of which the non-agricultural population is 35,000 [28,29]. The major land uses of the study site are woodland, farmland, water, road, and buildings. The UAV image data were acquired on 15 May 2008, after the 12 May 2008 Wenchuan Earthquake. The flight altitude and speed of the UAV platform were 400 m and 50 km/h, respectively. The major parameters of the image sensor equipped on the UAV platform are shown in Table 1. A total of 678 images were acquired with an image resolution of 0.3 m. The average forward overlap is 70% and the average side overlap is 40%.

3. Methodology

3.1. Wallis Dodging

Image processing before image fusion contains two major steps: image matching and image dodging. Image matching aims to find corresponding points, and image dodging eliminates the brightness differences between the two matched images. First, in order to find the corresponding points between two images, an image matching method should be applied. In this study, the Scale-Invariant Feature Transform (SIFT) algorithm was used to match the two images [30,31], which consists of four stages: (1) building the scale space; (2) keypoint localization; (3) removal of unstable keypoints; and (4) keypoint description. It has been proven in many studies [32,33] that SIFT not only performs well under image rotation, scale zoom, and illumination changes, but is also robust to affine transformation and noise. Subsequently, the Random Sample Consensus (RANSAC) method was applied to the points matched by SIFT to remove any mismatched points [34]. Finally, the Wallis dodging algorithm [8,35,36] was employed to adjust the brightness difference between the two matched images before the texture seam elimination step.
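The mismatch rejection step can be illustrated with a minimal RANSAC sketch. This is not the authors' implementation: it assumes a 2-D affine transformation between the matched images (the paper does not state the model used), and the function name `ransac_affine` is ours; in practice a library routine (e.g., from OpenCV) would typically be used.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, thresh=1.0, seed=0):
    """Minimal RANSAC: fit a 2-D affine transform to point pairs
    (src[i] -> dst[i]) and return a boolean inlier mask.
    src, dst: (N, 2) arrays of matched keypoint coordinates."""
    rng = np.random.default_rng(seed)
    n = len(src)
    A_full = np.hstack([src, np.ones((n, 1))])  # (N, 3) rows: [x, y, 1]
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)
        try:
            # Solve [x y 1] @ M = [x' y'] exactly from 3 sampled pairs
            M = np.linalg.solve(A_full[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        err = np.linalg.norm(A_full @ M - dst, axis=1)
        mask = err < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask
```

Each iteration fits an exact affine transform to three random correspondences and keeps the model that agrees with the most points; matches outside the final inlier mask are discarded as SIFT mismatches.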
The principle behind Wallis image dodging is that it adjusts the mean and standard deviation of the target image to the reference image’s level. The Wallis filter is defined by Equation (1):
$I_{ij} = \left(I_{ij}^{2} - \bar{I_2}\right) \times \dfrac{c\,\sigma_{I_1}}{c\,\sigma_{I_2} + (1-c)\,\sigma_{I_1}} + b\,\bar{I_1} + (1-b)\,\bar{I_2}$ (1)
where $I_1$ is the reference image, $I_2$ is the target image, and $I_{ij}$ is the pixel value at row $i$, column $j$ of $I_2$ after image dodging ($I_{ij}^{2}$ denotes the corresponding pixel value before dodging). $\bar{I_1}$, $\bar{I_2}$ and $\sigma_{I_1}$, $\sigma_{I_2}$ are the means and standard deviations of $I_1$ and $I_2$, respectively; $c \in [0,1]$ is an adjustment coefficient for the standard deviation of the image, and $b \in [0,1]$ is an adjustment coefficient for the mean. However, setting these two parameters remains an open question in the existing research, so a parameter setting method was derived in this study. First, the standard deviation of the target image is given by Equation (2):
$\sigma_{I_2} = \sqrt{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(I_{ij}^{2} - \bar{I_2}\right)^2 / (m \times n)}$ (2)
Second, the standard deviation and mean of the target image are adjusted to the reference image’s level, so after image dodging they should be roughly equal to $\sigma_{I_1}$ and $\bar{I_1}$, respectively. Therefore, Equation (3) holds:
$\sigma_{I_1} \approx \sqrt{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(I_{ij} - \bar{I_1}\right)^2 / (m \times n)}$ (3)
Third, both sides of Equation (3) are multiplied by $\sigma_{I_2}/\sigma_{I_1}$:
$\sigma_{I_2} \approx \dfrac{\sigma_{I_2}}{\sigma_{I_1}} \times \sqrt{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(I_{ij} - \bar{I_1}\right)^2 / (m \times n)}$ (4)
Then, combining Equations (2) and (4) gives Equation (5):
$\dfrac{\sigma_{I_2}}{\sigma_{I_1}} \times \left(I_{ij} - \bar{I_1}\right) \approx I_{ij}^{2} - \bar{I_2}$ (5)
Finally, the pixel value of the target image after image dodging is given by Equation (6):
$I_{ij} \approx \dfrac{\sigma_{I_1}}{\sigma_{I_2}} \times \left(I_{ij}^{2} - \bar{I_2}\right) + \bar{I_1}$ (6)
Comparing Equation (1) with Equation (6), it can be seen that Equation (6) is obtained when the parameters b and c are both set to 1 in Equation (1). Therefore, to adjust the mean and standard deviation of the target image to the reference image’s level, Equation (6), i.e., the Wallis filter with b = 1 and c = 1, was used for UAV image dodging.
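As a concrete illustration, Equation (6) amounts to a linear rescaling of gray values, which can be sketched in a few lines of NumPy (our own minimal sketch, not the authors' code; the function name `wallis_dodge` is hypothetical):

```python
import numpy as np

def wallis_dodge(target, reference):
    """Wallis filter with b = 1, c = 1 (Equation (6)): shift and scale
    the target image so its mean and standard deviation match those of
    the reference image. Inputs are float gray-value arrays."""
    t_mean, t_std = target.mean(), target.std()
    r_mean, r_std = reference.mean(), reference.std()
    # I_ij = (sigma_I1 / sigma_I2) * (I2_ij - mean_I2) + mean_I1
    return (r_std / t_std) * (target - t_mean) + r_mean
```

By construction the dodged image has exactly the reference mean and standard deviation, which is what Table 2 measures via the RMSE of those two statistics.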

3.2. GDWE Method

3.2.1. Theoretical Basis

The First Law of Geography, proposed by Waldo Tobler in 1970, states that “all attribute values on a geographic surface are related to each other, but closer values are more strongly related than are more distant ones” [37]. The law is the foundation of the fundamental concepts of spatial autocorrelation and spatial dependence [38], based on which we developed an effective and efficient seam elimination method (GDWE) for UAV images. GDWE is an image fusion algorithm that combines the relevant information from two matched UAV images into a single image in the overlap region. It embraces three major steps: first, the principal point of each image is set as the optimal pixel with the minimum geometric distortion, because the image sensor equipped on a UAV platform is, in general, a non-metric array CCD camera. Second, the weight that each image contributes to a given pixel in the overlap region is determined by the distance between that pixel and the principal point; a two-dimensional Gaussian kernel is employed to describe it. Third, in order to enhance the influence of distance on the weight, an exponent-form adjustment coefficient is introduced and parameterized by a sensitivity analysis.

3.2.2. Seam Elimination

To develop the image fusion algorithm for the overlap region of the matched UAV images, some notation is defined first: O1 and O2 are the principal points of the two matched images; O is an arbitrary point in the overlap region; d1 = |OO1| and d2 = |OO2| are the distances from O to O1 and O2, respectively; and the pixel values of point O in the two matched UAV images are $I_{ij}^{1}$ and $I_{ij}^{2}$. The pixel value of point O after image fusion is $I_{ij}$, defined as $w_1 \times I_{ij}^{1} + w_2 \times I_{ij}^{2}$, where $w_1$ and $w_2$ are the weight contributions of the two UAV images to point O, with $w_1 + w_2 = 1$. Based on the theory mentioned above, the Gaussian kernel shown in Figure 1 was introduced to describe the Gaussian distance weight distribution ($G_{w_1}$), defined by Equation (7):
$G_{w_1} = a \times e^{-\left(|OO_1|/|OO_2|\right)^{2} / 2\sigma^{2}}$ (7)
where a was set to 1 because $G_{w_1}$ should equal 1 when d1 is 0. In order to enhance the influence of the Gaussian distance on the weight, an exponent-form adjustment coefficient (λ) was introduced, giving Equation (8):
$w_1 = e^{-\left(|OO_1|/|OO_2|\right)^{2\lambda} / 2\sigma^{2}}$ (8)
in which $w_1$ was set to 0.5 when d1 equals d2. Applying this constraint to Equation (8) gives Equation (9):
$\sigma = 1/\sqrt{2 \ln 2}$ (9)
Substituting Equation (9) into Equation (8), and noting that $w_2 = 1 - w_1$, the fused pixel value is defined by Equation (10):
$I_{ij} = 0.5^{\left(|OO_1|/|OO_2|\right)^{2\lambda}} \times I_{ij}^{1} + \left(1 - 0.5^{\left(|OO_1|/|OO_2|\right)^{2\lambda}}\right) \times I_{ij}^{2}$ (10)
Finally, taking the Wallis dodging algorithm into consideration, we named our method Wallis dodging and Gaussian distance weight enhancement (WD-GDWE); it is shown in Equation (11):
$I_{ij} = 0.5^{\left(|OO_1|/|OO_2|\right)^{2\lambda}} \times I_{ij}^{1} + \left(1 - 0.5^{\left(|OO_1|/|OO_2|\right)^{2\lambda}}\right) \times \left(\dfrac{\sigma_{I_1}}{\sigma_{I_2}} \times \left(I_{ij}^{2} - \bar{I_2}\right) + \bar{I_1}\right)$ (11)
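The weighting of Equation (10) can be sketched as follows (an illustrative sketch, not the authors' code; the function names are ours, and the default λ = 2.6 anticipates the optimal value reported in Section 4.2):

```python
import numpy as np

def gdwe_weight(d1, d2, lam=2.6):
    """Gaussian distance weight of image 1 at a pixel whose distances
    to the two principal points are d1 and d2 (Equation (10));
    lam is the enhancement exponent."""
    return 0.5 ** ((d1 / d2) ** (2 * lam))

def fuse_pixel(i1, i2, d1, d2, lam=2.6):
    """Blend the two pixel values with complementary weights
    w1 and w2 = 1 - w1."""
    w1 = gdwe_weight(d1, d2, lam)
    return w1 * i1 + (1 - w1) * i2
```

At d1 = d2 the two images contribute equally (w1 = 0.5), and the weight of image 1 rises smoothly toward 1 as the pixel approaches its own principal point; a larger λ sharpens the transition, which is how the enhancement coefficient strengthens the influence of distance on the weight.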

4. Results and Discussion

4.1. Wallis Dodging

To assess the efficiency and effectiveness of WD-GDWE for seam elimination of UAV images, the method was implemented in Visual C++ on a machine with 8 GB of memory and an Intel Xeon 2.5 GHz CPU. UAV images covering five different types of land use (woodland, farmland, water, road, and buildings) from the study site were tested.
Figure 2 shows the results of stacking directly versus stacking after Wallis dodging for two matched UAV images covering the five types of land use; in each figure group, the left image was stacked directly and the right image was stacked after Wallis dodging. Visually, the results indicate that the brightness difference between the two matched images has been effectively balanced by Wallis dodging. The root-mean-square error (RMSE) values of the mean and standard deviation were calculated from the two matched UAV images in the overlap region for direct stacking and Wallis dodging, respectively. For each type of land use, at least 36 pairs of matched images were tested, and the averages of the RMSE values are recorded in Table 2. The results show that the Wallis dodging method can effectively balance the brightness differences between the two matched images: the RMSE of the mean was 0.0 and that of the standard deviation was less than 0.3.

4.2. WD-GDWE Method

To acquire the optimal adjustment coefficient (λ) for the WD-GDWE method, a series of values from zero to five with a step size of 0.2 was tested, and the optimal value of λ was chosen as the one achieving the lowest RMSE between the test images and the reference images. In this study, the optimal value of λ was 2.6. Performance comparisons between WD-GDWE and five other classical seam elimination algorithms were then conducted in terms of efficiency and effectiveness. The five classical methods are: Tian’s BTSE algorithm, Uyttendaele’s feathering algorithm, Su’s wavelet algorithm, Szeliski’s algorithm, and Davis’s Dijkstra algorithm, in which the first four are based on image fusion and the last on seamline detection. Generally, the image quality assessment indicators for seam elimination can be divided into three types [39,40,41,42,43]: (1) amount of information: information entropy, standard deviation, cross entropy, signal-to-noise ratio, and joint entropy [44,45]; (2) image quality: average gradient and wavelet energy ratio [46]; (3) spectral information preserved: RMSE, standard deviation, deviation, and spectral distortion. Taking all three types into consideration, information entropy, average gradient, and RMSE were selected to assess the five seam elimination methods. In addition, processing time was used as an indicator of algorithm efficiency. It should be noted that orthoimages served as the reference images for the RMSE; they were produced from control points recorded manually with the aid of a differential GPS.
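The three selected indicators are straightforward to compute; the sketch below shows one plausible formulation (our own, offered as an assumption: the paper does not give its exact formulas, and discrete gradient conventions vary between implementations):

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram:
    a proxy for the amount of information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """Mean magnitude of local gray-level differences:
    a proxy for image definition (sharpness)."""
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(g, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def rmse(img, ref):
    """Root-mean-square error against a reference orthoimage."""
    return np.sqrt(np.mean((img.astype(float) - ref.astype(float)) ** 2))
```

The λ sweep described above would simply evaluate `rmse` for each candidate λ and keep the minimizer.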
From the perspective of visual effects, Figure 3 shows the performance comparisons of the seam elimination methods, in which Figure 3a is the direct stacking result, Figure 3b is the WD-GDWE result, and Figure 3c–g shows the results of the five other methods, respectively.
Comparing Figure 3a,b, the buildings and roads display obvious mosaic dislocation in the direct stacking result, whereas this has been greatly improved by the WD-GDWE method. The performance comparisons of the five other seam elimination methods shown in Figure 3c–g indicate that: (1) a “ghosting” effect tends to appear with the Feather, Wavelet, Szeliski, and BTSE algorithms; and (2) the visual effects of the Dijkstra algorithm and WD-GDWE are much better than those of the other methods. From the perspective of image quality assessment indicators, the details of the performance comparisons of the six methods are shown in Figure 4. Each of the four indicators is an average calculated from numerous UAV image pairs (at least 36) for each type of land use. Figure 4a,b show that the Dijkstra method gives the most abundant amount of information and the highest definition, followed by the WD-GDWE method. BTSE is worse than WD-GDWE at the border of the fusion image because it only supports smooth transitions in a one-dimensional direction in the overlap region. Since the improvement of WD-GDWE over BTSE is not obvious in Figure 3 from the perspective of visual effects, some experimental results at the border of the fusion images with the two methods were added (Figure 5). The Wavelet and Szeliski algorithms are much worse than the BTSE method, and the Feather algorithm is the worst. Figure 4c shows that the WD-GDWE method preserves more spectral information than the other algorithms. Figure 4d shows that the WD-GDWE, BTSE, Szeliski, and Feather algorithms run quickly, whereas the Dijkstra and Wavelet methods are time-consuming. In summary, the WD-GDWE method is not only efficient, but also satisfactorily effective.

5. Conclusions

In this study, an efficient seam elimination method for UAV images based on Wallis dodging and Gaussian distance weight enhancement was proposed. The method was successfully tested using UAV images acquired after the 12 May 2008 Wenchuan Earthquake. By comparison with five other classical seam elimination methods, the conclusions from this study can be summarized as follows: (1) the WD-GDWE method can effectively adjust the brightness difference between two matched images; (2) the method can successfully eliminate the texture mosaic seams usually caused by geometric deformity, projection differences, and image capture position differences on UAV platforms; (3) the WD-GDWE method is highly efficient and can meet the high processing speed requirements of massive UAV image sets. Such time savings are very important for advancing applications in the UAV industry, especially in emergency situations. The results of this study can be further extended to other fields, such as aerospace remote sensing and computer vision.

Acknowledgments

We acknowledge financial support from the National Natural Science Foundation of China (No.41130744/D0107 and 41171335/D010702). The authors thank Donghai Xie for his valuable comments to the paper. We are also deeply indebted to the reviewers and editors for the thoughtful and constructive comments on improving the manuscript.

Author Contributions

Jinyan Tian carried out the analyses and wrote the manuscript. Xiaojuan Li and Fuzhou Duan were responsible for recruiting the participants and designing the experiments. Junqian Wang and Yang Ou were responsible for data processing. All authors drafted the manuscript and approved the final version.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mesas-Carrascosa, F.J.; Rumbao, I.C.; Berrocal, J.A.; Porras, A.G. Positional quality assessment of orthophotos obtained from sensors onboard multi-rotor UAV platforms. Sensors 2014, 14, 22394–22407. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, Y.; Ou, J.; He, H.; Zhang, X.; Mills, J. Mosaicking of Unmanned Aerial Vehicle Imagery in the Absence of Camera Poses. Remote Sens. 2016, 8, 204. [Google Scholar] [CrossRef]
  3. Karpenko, S.; Konovalenko, I.; Miller, A.; Miller, B.; Nikolaev, D. UAV Control on the Basis of 3D Landmark Bearing-Only Observations. Sensors 2015, 15, 29802–29820. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, Y.C.; Liu, J.G. Evaluation methods for the autonomy of unmanned systems. Chin. Sci. Bull. 2012, 57, 3409–3418. [Google Scholar] [CrossRef]
  5. Gonzalez, L.F.; Montes, G.A.; Puig, E.; Johnson, S.; Mengersen, K.; Gaston, K.J. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation. Sensors 2016, 16, 97. [Google Scholar] [CrossRef] [PubMed]
  6. Zhou, G. Near real-time orthorectification and mosaic of small UAV video flow for time-critical event response. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3. [Google Scholar] [CrossRef]
  7. Wehrhan, M.; Rauneker, P.; Sommer, M. UAV-Based Estimation of Carbon Exports from Heterogeneous Soil Landscapes—A Case Study from the CarboZALF Experimental Area. Sensors 2016, 16, 255. [Google Scholar] [CrossRef] [PubMed]
  8. Sun, M.W.; Zhang, J.Q. Dodging research for digital aerial images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 349–353. [Google Scholar]
  9. Choi, J.; Jung, H.S.; Yun, S.H. An Efficient Mosaic Algorithm Considering Seasonal Variation: Application to KOMPSAT-2 Satellite Images. Sensors 2015, 15, 5649–5665. [Google Scholar] [CrossRef] [PubMed]
  10. Davis, J. Mosaics of scenes with moving objects. In Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 23–25 June 1998.
  11. Philip, S.; Summa, B.; Tierny, J.; Bremer, P.T.; Pascucci, V. Distributed Seams for Gigapixel Panoramas. IEEE Trans. Vis. Comput. Graph. 2015, 21, 350–362. [Google Scholar] [CrossRef] [PubMed]
  12. Yuan, X.X.; Zhong, C. An improvement of minimizing local maximum algorithm on searching seam line for orthoimage mosaicking. Acta Geod. Cartgraph. Sin. 2012, 41, 199–204. [Google Scholar]
  13. Kerschner, M. Seamline detection in colour orthoimage mosaicking by use of twin snakes. ISPRS J. Photogramm. Remote Sens. 2001, 56, 53–64. [Google Scholar] [CrossRef]
  14. Chon, J.; Kim, H.; Lin, C.S. Seam-line determination for image mosaicking: A technique minimizing the maximum local mismatch and the global cost. ISPRS J. Photogramm. Remote Sens. 2010, 65, 86–92. [Google Scholar] [CrossRef]
  15. Mills, S.; McLeod, P. Global seamline networks for orthomosaic generation via local search. ISPRS J. Photogramm. Remote Sens. 2013, 75, 101–111. [Google Scholar] [CrossRef]
  16. Wan, Y.; Wang, D.; Xiao, J.; Lai, X.; Xu, J. Automatic determination of seamlines for aerial image mosaicking based on vector roads alone. ISPRS J. Photogramm. Remote Sens. 2013, 76, 1–10. [Google Scholar] [CrossRef]
  17. Zuo, Z.Q.; Zhang, Z.X.; Zhang, J.Q. Seam line intelligent detection in large urban orthoimage mosaicking. Acta Geod. Cartgraph. Sin. 2011, 40, 84–89. [Google Scholar]
  18. Borra-Serrano, I.; Peña, J.M.; Torres-Sánchez, J.; Mesas-Carrascosa, F.J.; López-Granados, F. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping. Sensors 2015, 15, 19688–19708. [Google Scholar] [CrossRef] [PubMed]
  19. Uyttendaele, M.; Eden, A.; Szeliski, R. Eliminating ghosting and exposure artifacts in image mosaics. In Proceedings of the CVPR 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 11–14 December 2001.
  20. Szeliski, R. Video mosaics for virtual environments. Comput. Graph. Appl. 1996, 16, 22–30. [Google Scholar] [CrossRef]
  21. Szeliski, R.; Shum, H.Y. Creating full view panoramic image mosaics and environment maps. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 3–8 August 1998.
  22. Szeliski, R. Computer Vision: Algorithms and Applications; Springer Science & Business Media: London, UK, 2010. [Google Scholar]
  23. Su, M.S.; Hwang, W.L.; Cheng, K.Y. Analysis on multiresolution mosaic images. IEEE Trans. Image Proc. 2004, 13, 952–959. [Google Scholar] [CrossRef]
  24. Gracias, N.; Mahoor, M.; Negahdaripour, S.; Gleason, A. Fast image blending using watersheds and graph cuts. Image Vis. Comput. 2009, 27, 597–607. [Google Scholar] [CrossRef]
  25. Zomet, A.; Levin, A.; Peleg, S.; Weiss, Y. Seamless image stitching by minimizing false edges. IEEE Trans. Image Proc. 2006, 15, 969–977. [Google Scholar] [CrossRef]
  26. Avidan, S.; Shamir, A. Seam carving for content-aware image resizing. ACM Trans. Gr. 2007. [Google Scholar] [CrossRef]
  27. Duan, F.Z.; Li, X.; Qu, X.; Tian, J.; Wang, L. UAV image seam elimination method based on Wallis and distance weight enhancement. J. Image Graph. 2014, 19, 806–813. [Google Scholar]
  28. Zhang, J.; Deng, W. Multiscale Spatio-Temporal Dynamics of Economic Development in an Interprovincial Boundary Region: Junction Area of Tibetan Plateau, Hengduan Mountain, Yungui Plateau and Sichuan Basin, Southwestern China Case. Sustainability 2016, 8, 215. [Google Scholar] [CrossRef]
  29. Chen, F.; Guo, H.; Ishwaran, N.; Zhou, W.; Yang, W.; Jing, L.; Cheng, F.; Zeng, H. Synthetic aperture radar (SAR) interferometry for assessing Wenchuan earthquake (2008) deforestation in the Sichuan giant panda site. Remote Sens. 2014, 6, 6283–6299. [Google Scholar] [CrossRef]
  30. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999.
  31. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  32. Zhou, R.; Zhong, D.; Han, J. Fingerprint identification using SIFT-based minutia descriptors and improved all descriptor-pair matching. Sensors 2013, 13, 3142–3156. [Google Scholar] [CrossRef] [PubMed]
  33. Lingua, A.; Marenchino, D.; Nex, F. Performance analysis of the SIFT operator for automatic feature extraction and matching in photogrammetric applications. Sensors 2009, 9, 3745–3766. [Google Scholar] [CrossRef] [PubMed]
  34. Civera, J.; Grasa, O.G.; Davison, A.J.; Montiel, J.M.M. 1-Point RANSAC for extended Kalman filtering: Application to real-time structure from motion and visual odometry. J. Field Robot. 2010, 27, 609–631. [Google Scholar] [CrossRef]
  35. Li, D.R.; Wang, M.; Pan, J. Auto-dodging processing and its application for optical remote sensing images. Geom. Inf. Sci. Wuhan Univ. 2006, 6, 183–187. [Google Scholar]
  36. Pham, B.; Pringle, G. Color correction for an image sequence. Comput. Graph. Appl. 1995, 15, 38–42. [Google Scholar] [CrossRef]
  37. Tobler, W.R. A computer movie simulating urban growth in the Detroit region. Econ. Geogr. 1970, 46, 234–240. [Google Scholar] [CrossRef]
  38. Kemp, K. Encyclopedia of Geographic Information Science; SAGE: Thousand Oaks, CA, USA, 2008; pp. 146–147. [Google Scholar]
  39. Efros, A.A.; Freeman, W.T. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001.
  40. Kwatra, V.; Schödl, A.; Essa, I.; Turk, G.; Bobick, A. Graphcut textures: image and video synthesis using graph cuts. ACM Trans. Graph. 2003. [Google Scholar] [CrossRef]
  41. Kuang, D.; Yan, Q.; Nie, Y.; Feng, S.; Li, J. Image seam line method based on the combination of dijkstra algorithm and morphology. SPIE Proc. 2015. [Google Scholar] [CrossRef]
  42. Pan, J.; Wang, M.; Li, D.; Li, J. Automatic generation of seamline network using area Voronoi diagrams with overlap. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1737–1744. [Google Scholar] [CrossRef]
  43. Agarwala, A.; Donteheva, M.; Agarwala, M.; Drucker, M.; Colburn, A.; Curless, B.; Salesin, D.; Cohen, M. Interactive digital photomontage. ACM Trans. Graph. 2004, 23, 294–302. [Google Scholar] [CrossRef]
  44. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef]
  45. Summa, B.; Tierny, J.; Pascucci, V. Panorama weaving: fast and flexible seam processing. ACM Trans. Graph. (TOG) 2012, 31, 83. [Google Scholar] [CrossRef]
  46. Shum, H.Y.; Szeliski, R. Systems and experiment paper: Construction of panoramic image mosaics with global and local alignment. Int. J. Comput. Vis. 2000, 36, 101–130. [Google Scholar] [CrossRef]
Figure 1. An example of a two-dimensional Gaussian distance weight distribution kernel.
Figure 2. The results of Wallis dodging for two matched UAV images of each type of land use, in which (a)–(e) correspond to buildings, woodland, farmland, road, and water, respectively. For example, in the case of (a), the left figure is the direct stacking result of the two matched images, whereas the right figure is the stacking result after Wallis dodging.
Figure 3. The performance comparison of different seam elimination algorithms.
Figure 4. Comparisons of the different seam elimination algorithms. (a) Information entropy, describing the amount of information; (b) average gradient, assessing image quality; (c) RMSE between the five methods and the orthoimages; (d) time consumption of the six methods.
Figure 5. Both (a) and (b) show results at the border of the fusion images; in each, the left image uses the BTSE method and the right image the WD-GDWE method.
Table 1. The parameters of the image sensor.

Items | Parameters
Image Sensor | Ricoh Digital
Pixel Number | 3648 × 2736
Focal Distance | 28 mm
CCD | 1/1.75 inch
Navigation Sensor | GPS
Image Format | JPEG
Table 2. Average RMSE values of the mean (M) and standard deviation (SD) calculated from the matched UAV images for direct stacking and Wallis dodging, respectively, for each type of land use.

Land Use | Method | M | SD
Building | Stacking Directly | 24.5 | 6.5
Building | Wallis Dodging | 0.0 | 0.2
Woodland | Stacking Directly | 23.6 | 6.2
Woodland | Wallis Dodging | 0.0 | 0.1
Farmland | Stacking Directly | 19.8 | 5.7
Farmland | Wallis Dodging | 0.0 | 0.1
Road | Stacking Directly | 17.5 | 3.6
Road | Wallis Dodging | 0.0 | 0.1
Water | Stacking Directly | 36.2 | 9.5
Water | Wallis Dodging | 0.0 | 0.3
