Article

A Dark Target Detection Method Based on the Adjacency Effect: A Case Study on Crack Detection

Li Yu, Yugang Tian and Wei Wu

1 School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2 China Energy Engineering Group Guangdong Electric Power Design Institute Company Limited, Guangzhou 510700, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(12), 2829; https://doi.org/10.3390/s19122829
Submission received: 26 April 2019 / Revised: 14 June 2019 / Accepted: 21 June 2019 / Published: 25 June 2019
(This article belongs to the Special Issue Advance and Applications of RGB Sensors)

Abstract

Dark target detection is important for engineering applications, but existing methods do not consider the imaging environment of dark targets, such as the adjacency effect. The adjacency effect affects the quantitative applications of remote sensing, especially for high contrast images and images of ever-increasing resolution. Further, most studies have focused on how to eliminate the adjacency effect, and there is almost no research on applying it. However, the adjacency effect gives a dark target surrounded by a bright background some unique characteristics. This paper utilizes these characteristics to assist in the detection of dark targets: a low-high threshold detection strategy and an adaptive threshold selection method under the assumption of a Gaussian distribution are designed. Meanwhile, preliminary case experiments are carried out on crack detection for concrete slope protection. The experimental results show that it is feasible to utilize the adjacency effect for dark target detection.

1. Introduction

Dark target detection based on high resolution and high contrast images, such as crack detection and shadow detection, is important for engineering applications. The existing detection methods can be divided into three types. The first is the threshold method [1,2,3,4,5,6]; the OTSU method [7] and the iterative threshold method [8] are common examples. The threshold method is simple but sensitive to noise. The second is classification-based target detection, which makes full use of spectral and texture information and includes traditional methods such as K-means [9] and the support vector machine (SVM) [10]. Meanwhile, machine learning [11,12,13,14] has progressed greatly in recent years and has been introduced into target detection; however, labeled data are costly and time-consuming to obtain. The third type is connected component analysis [15,16,17], such as the percolation model [18,19] and the stroke width transform (SWT) algorithm [20], which mainly utilizes the relationship between the target and its neighboring regions. However, none of these methods considers the imaging environment of a dark target, such as the adjacency effect.
The adjacency effect is also known as cross radiance. It is a physical phenomenon caused by atmospheric crosstalk between fields of different surface reflectance. Even under the assumption that atmospheric interference has been eliminated, the surface-leaving radiance from areas adjacent to the target pixel enhances the signal received at the sensor due to the adjacency effect, causing contrast degradation, blurring of sharp boundaries, reduced resolution, and difficulty in quantitative atmospheric remote sensing [21,22,23]. Further, the adjacency effect is more important for data of higher spatial resolution than the Moderate Resolution Imaging Spectroradiometer (MODIS), with its 250–500 m pixels, and some studies state that the effect can be observed in high spatial resolution (<100 m) imagery [24,25,26,27]. Meanwhile, the effect has the largest impact on high-contrast scenes where bright surfaces, such as land or clouds, are adjacent to dark surfaces such as water [28,29,30,31,32].
Most researchers have focused on how to model and characterize the adjacency effect and have proposed parameterizations of the atmospheric point spread function (PSF), which are then used to correct the effect [33,34,35,36]. The adjacency effect can be described as the convolution between the radiance field and the PSF; if the PSF is known, the effect can be removed by a deconvolution algorithm, so the core problem is to solve for the atmospheric PSF [37,38]. There are two main methods for obtaining the PSF: the first is solving the radiative transfer equation by parameterizing various atmospheric and observational conditions [39,40,41,42,43]; the second is Monte Carlo simulation [44,45,46,47]. Further, there are some statistical methods, such as the bilinear mixing model and dark spectrum fitting, to simulate or eliminate the adjacency effect [48,49,50]. All in all, current research focuses on how to remove the adjacency effect, which poses the question of whether it is possible to utilize it instead. There is almost no research in this area, so this paper tries to utilize the adjacency effect. Combined with the correspondence between the dark target's location and its intensity due to the adjacency effect, a low-high threshold strategy is proposed and applied to a simple high resolution, high contrast crack detection scene, namely the expansion joints on concrete slope protection. Furthermore, the Canny-morphology method and the SWT algorithm are used for comparison with the proposed method.

2. Materials and Methods

2.1. Characteristics of the Adjacency Effect

For high-resolution, high-contrast images in which a dark target is surrounded by a bright background, the reflectance of a target pixel contains a contribution from the scattering of background pixels, namely the adjacency effect. The brighter the background pixel, the more obvious the adjacency effect [51]. The adjacency effect is also related to distance [52]: the farther a background pixel is from the target pixel, the less obvious its contribution. All in all, the adjacency effect depends on the environment [53,54], and there is a correspondence between the dark target's location and its intensity. Two typical high-contrast images, a text image and a crack image, are selected to display this correspondence, as shown in Figure 1. Four parts of them (Figure 1a–d) are selected to show the details, and the intensity (lightness) profiles are obtained along the lines in each part, as shown in Figure 2. Combining Figure 1 and Figure 2, the intensity values of the characters and cracks vary because of the adjacency effect. Further, the lightness profiles show that pixels of characters or cracks in the middle have lower intensity values whereas pixels at the edges have higher intensity values; that is, there is a correspondence between the dark target's location and its intensity value due to the adjacency effect.
Even for satellite images that have not undergone atmospheric correction, i.e., regardless of the atmospheric path thickness, the correspondence between the dark target's location and its intensity value due to the adjacency effect still exists, as shown in Figure 3 and Figure 4. Two images from GF-2 and Worldview-2 without atmospheric correction are selected, and four high-contrast parts of them (Figure 3a–d) are chosen to show the details. The intensity (lightness) profiles in the red band are obtained along the lines in each part, as shown in Figure 4.
Thus, the conceptual map shown in Figure 5 can be drawn.

2.2. Low-High Threshold Detection Strategy

This paper attempts to utilize the features described in Section 2.1 to detect a dark target surrounded by a bright background in a high contrast, high resolution image, for which threshold segmentation is a common approach. Given the rule displayed in Figure 5, when a small threshold is used, only the middle parts of the dark target are detected; as the threshold increases, the edge parts are detected gradually until over-extraction occurs. Therefore, this paper proposes a detection strategy that combines under-extraction and over-extraction. First, a low threshold is used to locate the dark target; the result (denoted R_min) contains little noise and the middle parts of the dark target. Second, a high threshold is used to detect the complete dark target; however, the result (denoted R_max) contains considerable noise. Third, if an intersection occurs between R_min and a separate unit included in R_max, that unit is retained; otherwise it is deleted, until all separate units in R_max are traversed. The concept map is illustrated in Figure 6.
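The strategy can be sketched in a few lines of Python. This is a minimal sketch, not the authors' MATLAB implementation; scipy.ndimage.label stands in for whatever connected-component routine they used, and the two thresholds are assumed given (their adaptive selection is the subject of Section 2.3).

```python
import numpy as np
from scipy import ndimage

def low_high_threshold_detect(image, t_min, t_max):
    """Rough detection combining an under-extracting and an over-extracting
    threshold. `image` is a 2-D intensity array with a dark target on a
    bright background."""
    r_min = image <= t_min  # R_min: middle parts of the dark target, little noise
    r_max = image <= t_max  # R_max: complete dark target plus considerable noise

    # Label the separate units (connected components) of R_max.
    labels, n_units = ndimage.label(r_max)

    # Retain a separate unit only if it intersects R_min; otherwise delete it.
    result = np.zeros_like(r_max)
    for unit_id in range(1, n_units + 1):
        unit = labels == unit_id
        if np.any(unit & r_min):
            result |= unit
    return result
```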
A detection result may contain many parts; each connected part is a separate unit. For example, Figure 7 contains six separate units, identified by rectangular boxes.

2.3. Low-High Threshold Selection

2.3.1. The Characteristic of Gaussian Probability Density Function

The Gaussian probability density function is often used as the distribution hypothesis for the statistical model of an image; therefore, this paper introduces the Gaussian distribution into the selection of the high and low thresholds. The Gaussian probability density function is:
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{1}$$
where μ is the mean and σ is the standard deviation. The first derivative represents the rate of change of f(x) along the increasing direction of x:
$$f'(x) = -\frac{x-\mu}{\sqrt{2\pi}\,\sigma^3}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} \tag{2}$$
Given the derivative test for extrema, f'(x) attains its extrema at the roots of the equation:
$$f''(x) = 0 \tag{3a}$$
namely,
$$f''(x) = -\frac{1}{\sqrt{2\pi}\,\sigma^3}\left(1 - \frac{(x-\mu)^2}{\sigma^2}\right) e^{-\frac{(x-\mu)^2}{2\sigma^2}} = 0 \tag{3b}$$
and the two roots are:
$$x = \mu \pm \sigma \tag{3c}$$
μ and σ only affect the position and width of the curves of f(x) and f'(x); they have no effect on their shapes (a bell-shaped symmetric curve and its derivative). Therefore, the case μ = 0, σ = 1 is used to describe the curve shapes of the Gaussian probability density function f(x) and its first derivative f'(x), as shown in Figure 8.
According to the curve of f'(x) in Figure 8 and the roots given in Equation (3c), f'(x) takes its maximum value at the point x = μ − σ, where f(x) has its maximum growth rate. Further, combining the curves of f(x) and f'(x), f'(x) is monotonically increasing from negative infinity to μ − σ. Combined with the three-sigma rule, three points are selected as follows:
$$f'(\mu-\sigma) = \frac{1}{\sqrt{2\pi}\,\sigma^2}\, e^{-\frac{1}{2}}, \qquad f'(\mu-2\sigma) = \frac{2}{\sqrt{2\pi}\,\sigma^2}\, e^{-2}, \qquad f'(\mu-3\sigma) = \frac{3}{\sqrt{2\pi}\,\sigma^2}\, e^{-\frac{9}{2}} \tag{4}$$
Although the value of f'(x) varies with σ, the ratios between these points are fixed:
$$\frac{f'(\mu-2\sigma)}{f'(\mu-\sigma)} = 2e^{-\frac{3}{2}} = 0.44626 \tag{5a}$$
and
$$\frac{f'(\mu-3\sigma)}{f'(\mu-\sigma)} = 3e^{-4} = 0.05495 \tag{5b}$$
So even if the mean μ and standard deviation σ are unknown, once the fastest-growing point (μ − σ, f'(μ − σ)) is located, the point (μ − 2σ, f'(μ − 2σ)) can be found by searching forward for the point where the ratio of the growth rate to the fastest growth rate is 0.44626; similarly, the point (μ − 3σ, f'(μ − 3σ)) can be found using the ratio 0.05495.
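Since the common σ-dependent factor cancels, the two ratios reduce to the constants above; a one-line numerical check:

```python
import math

# f'(mu - k*sigma) = k / (sqrt(2*pi) * sigma**2) * exp(-k**2 / 2), so the
# factor 1 / (sqrt(2*pi) * sigma**2) cancels in both ratios below.
print(2 * math.exp(-3 / 2))  # 0.44626...: f'(mu - 2*sigma) / f'(mu - sigma)
print(3 * math.exp(-4))      # 0.05495...: f'(mu - 3*sigma) / f'(mu - sigma)
```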
Furthermore, according to the three-sigma rule, if μ − σ is taken as the threshold, the probability of the numerical distribution falling in (μ − σ, +∞) is:
$$1 - (1 - 0.6827)/2 = 0.84135 \tag{6a}$$
If μ − 2σ is taken as the threshold, the probability of the numerical distribution falling in (μ − 2σ, +∞) is:
$$1 - (1 - 0.9545)/2 = 0.97725 \tag{6b}$$
If μ − 3σ is taken as the threshold, the probability of the numerical distribution falling in (μ − 3σ, +∞) is:
$$1 - (1 - 0.9973)/2 = 0.99865 \tag{6c}$$
However, in an actual image the intensity values are discrete and the horizontal coordinate interval of the histogram is 1, so integrals of the change rate over three unit intervals are used in place of the three points of Equation (4):
$$\begin{cases} \displaystyle\int_{\mu-\sigma-1}^{\mu-\sigma} f'(x)\,dx = f(\mu-\sigma) - f(\mu-\sigma-1) = \frac{1}{\sqrt{2\pi}\,\sigma}\left(e^{-\frac{1}{2}} - e^{-\frac{(\sigma+1)^2}{2\sigma^2}}\right) \\ \displaystyle\int_{\mu-2\sigma-1}^{\mu-2\sigma} f'(x)\,dx = f(\mu-2\sigma) - f(\mu-2\sigma-1) = \frac{1}{\sqrt{2\pi}\,\sigma}\left(e^{-2} - e^{-\frac{(2\sigma+1)^2}{2\sigma^2}}\right) \\ \displaystyle\int_{\mu-3\sigma-1}^{\mu-3\sigma} f'(x)\,dx = f(\mu-3\sigma) - f(\mu-3\sigma-1) = \frac{1}{\sqrt{2\pi}\,\sigma}\left(e^{-\frac{9}{2}} - e^{-\frac{(3\sigma+1)^2}{2\sigma^2}}\right) \end{cases} \tag{7}$$
The ratios between them are:
$$\frac{\int_{\mu-2\sigma-1}^{\mu-2\sigma} f'(x)\,dx}{\int_{\mu-\sigma-1}^{\mu-\sigma} f'(x)\,dx} = \frac{e^{-2} - e^{-\frac{(2\sigma+1)^2}{2\sigma^2}}}{e^{-\frac{1}{2}} - e^{-\frac{(\sigma+1)^2}{2\sigma^2}}} \tag{8a}$$
and
$$\frac{\int_{\mu-3\sigma-1}^{\mu-3\sigma} f'(x)\,dx}{\int_{\mu-\sigma-1}^{\mu-\sigma} f'(x)\,dx} = \frac{e^{-\frac{9}{2}} - e^{-\frac{(3\sigma+1)^2}{2\sigma^2}}}{e^{-\frac{1}{2}} - e^{-\frac{(\sigma+1)^2}{2\sigma^2}}} \tag{8b}$$
The overall illumination of an image may affect its mean, but it has little effect on its variance, so in practical applications the ratio can be calculated from the variance.
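Both ratios of Equation (8) depend only on σ; a direct transcription into Python (the function names are this sketch's, not the paper's) is:

```python
import math

def alpha_high_mixing(sigma):
    # Equation (8a): constraint ratio suited to high gray-level mixing
    # between target and background.
    num = math.exp(-2) - math.exp(-(2 * sigma + 1) ** 2 / (2 * sigma ** 2))
    den = math.exp(-0.5) - math.exp(-(sigma + 1) ** 2 / (2 * sigma ** 2))
    return num / den

def alpha_low_mixing(sigma):
    # Equation (8b): constraint ratio suited to low gray-level mixing.
    num = math.exp(-4.5) - math.exp(-(3 * sigma + 1) ** 2 / (2 * sigma ** 2))
    den = math.exp(-0.5) - math.exp(-(sigma + 1) ** 2 / (2 * sigma ** 2))
    return num / den
```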

2.3.2. Low-High Threshold Selection

For a high resolution, high contrast scene in which the dark target is surrounded by a bright background, two assumptions are made in this paper: the intensity values of the background pixels obey a Gaussian distribution, and the proportion of background pixels is much larger than that of target pixels.
The change rate of the histogram can be calculated as:
$$C_i = \frac{Hist_{i+1} - Hist_i}{(i+1) - i} = Hist_{i+1} - Hist_i \tag{9}$$
where Hist_i is the number of pixels whose intensity value is i in the gray-level histogram and C_i is the change rate at intensity value i.
The proportion of the growth rate to the maximum growth rate is used as the constraint rule to obtain the high threshold as follows:
$$T_{max} = MAX(I) \tag{10a}$$
and
$$I \text{ satisfies: } \quad \frac{C_i}{C_{max}} \leq \alpha, \quad i < i_{max} \tag{10b}$$
where T_max is the selected high threshold and MAX(I) is the biggest (namely, the last) element of the candidate array I, which includes all intensity values i that meet Equation (10b). Meanwhile, i_max is the intensity value at which the biggest growth rate is obtained, C_max is the biggest growth rate, and α is the constraint ratio.
The meaning of Equation (10) is to search from the maximum-growth point toward lower intensities along the histogram and find the first point where the proportion of the growth rate to the maximum growth rate falls to no more than α. Combined with the three-sigma rule and the reasoning in Section 2.3.1, α takes the value of Equation (8a) or Equation (8b); the former is suitable for high gray-level mixing between the target and the background, and the latter for low gray-level mixing.
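Assuming an 8-bit single-channel image, the search might look as follows; the function name and the downward search direction are this sketch's reading of Equation (10), not code from the paper:

```python
import numpy as np

def select_high_threshold(image, alpha):
    """Search downward from the fastest-growing histogram point for the first
    intensity whose growth rate falls to alpha times the maximum (Eq. (10))."""
    hist = np.bincount(image.ravel(), minlength=256)  # gray-level histogram
    c = np.diff(hist).astype(float)  # C_i = Hist_{i+1} - Hist_i, Eq. (9)
    i_max = int(np.argmax(c))        # intensity with the biggest growth rate
    c_max = c[i_max]
    for i in range(i_max - 1, -1, -1):  # search toward lower intensities
        if c[i] / c_max <= alpha:
            return i                    # T_max = MAX(I)
    return 0
```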
According to the features described in Section 2.1, the middle pixels of the dark target lie at the front (low-intensity end) of the histogram and the edge pixels of the dark target are distributed behind them. If the threshold T_max detects all the dark target pixels, then the pixels that lie in the middle of the dark target and account for a fraction a of the total number of target pixels can be detected with a threshold satisfying:
$$T_{D_{center}}(a) = CUM^{-1}\left(a \cdot CUM(T_{max})\right) \tag{11a}$$
and
$$a = \frac{N_{D\_center}}{N_D} \times 100\% \tag{11b}$$
where CUM(·) is the cumulative distribution function of the histogram, CUM⁻¹(·) is its inverse, N_D_center is the number of pixels located in the center of the dark target, N_D is the total number of dark target pixels, and T_D_center is the corresponding detection threshold. In this paper, a = 1/3 is recommended for obtaining the low threshold: the correspondence between the dark target's location and its intensity value due to the adjacency effect is only reflected when the target is at least three pixels wide, and for a three-pixel-wide target the middle pixels account for 1/3 of the target pixels. This choice ensures that the middle parts of all targets affected by the adjacency effect are detected; thus, the selected low threshold is:
$$T_{min} = CUM^{-1}\left(\frac{1}{3} \cdot CUM(T_{max})\right) \tag{12}$$
where T_max and T_min are the selected high and low thresholds, respectively.
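The low threshold then follows from the cumulative histogram; in this sketch (under the same 8-bit assumption) np.searchsorted plays the role of CUM⁻¹:

```python
import numpy as np

def select_low_threshold(image, t_max, a=1/3):
    """T_min = CUM^{-1}(a * CUM(T_max)), Equation (12): the smallest intensity
    whose cumulative count reaches a fraction `a` of the count up to T_max."""
    hist = np.bincount(image.ravel(), minlength=256)
    cum = np.cumsum(hist)                     # CUM(i)
    target = a * cum[t_max]
    return int(np.searchsorted(cum, target))  # smallest i with CUM(i) >= target
```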

2.3.3. Spatial Resolution of Data

When the proposed method is applied, there are two requirements on image resolution. First, the resolution should be finer than 250 m, or more conservatively 100 m, because the adjacency effect is more important for data of higher spatial resolution than MODIS with its 250–500 m pixels, and some studies state that the effect can be observed in high spatial resolution (<100 m) imagery. Second, only when the target is at least three pixels wide in the image can the difference brought by the adjacency effect between the edge and middle pixels be reflected, and only then can the proposed adaptive threshold selection method be applied. So the ratio between the shortest width of the dark target surrounded by a bright background and the image resolution should be no less than 3.
Therefore, the spatial resolution of the data should satisfy Equation (13):
$$\begin{cases} SR \leq 100\ \text{m (or } 250\ \text{m)} \\ SR \leq TW_{min}/3 \end{cases} \tag{13}$$
where SR is the spatial resolution of the data and TW_min is the shortest width of the dark target surrounded by a bright background.
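As a worked check of Equation (13), a hypothetical helper (all lengths in the same unit, metres here):

```python
def resolution_ok(sr, tw_min, coarse_limit=100.0):
    """Equation (13): SR must be finer than the adjacency-effect limit
    (100 m, or 250 m by the looser criterion) and must cover the narrowest
    dark target with at least three pixels."""
    return sr <= coarse_limit and sr <= tw_min / 3

# UAV data of Section 3.1: 5 mm ground sampling distance,
# 2 cm minimum expansion-joint width.
print(resolution_ok(0.005, 0.02))  # True, since 0.005 <= 0.02 / 3
```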

3. An Application in Crack (Expansion Joint) Detection

3.1. Data Selection and Introduction

The expansion joint is a kind of artificially cut crack. It is designed to safely absorb the temperature-induced expansion and contraction of concrete, to absorb vibration, or to allow movement due to ground settlement or earthquakes. Expansion joints have strict construction specifications; their design and construction follow the "Technical Specification for Inspection of Concrete Defects by Ultrasonic Method", which stipulates that vertical and horizontal expansion joints be set every 3–5 m with a width of 2–3 cm. The expansion joints are outlined by the red rectangular frame in Figure 9. In summary, expansion joints have relatively uniform width and grayscale, so every expansion joint in an image is affected by the adjacency effect to a similar degree. Therefore, they are chosen as the research data.
A total of 18 UAV high resolution, high contrast images of a concrete slope protection project are used in this study. They are RGB images of 4000 × 3000 pixels with a ground sampling distance of about 5 mm. Given Section 2.3.3 and the 2 cm minimum joint width, the resolution should be no coarser than 2/3 cm, which the UAV images easily satisfy. However, the images are cropped to remove water and trees at the sides of the slope protection before the experiments. The overall concrete slope protection project is shown in Figure 10, and the test areas are outlined by the red rectangular frame.

3.2. Result and Analysis

The data satisfy the assumptions described in Section 2.3.2. The variance of the background, obtained by simple statistics, is 3.495, and there is low gray-level mixing between the target and the background. According to Equation (8b), the constraint ratio is:
$$\frac{e^{-\frac{9}{2}} - e^{-\frac{(3\sigma+1)^2}{2\sigma^2}}}{e^{-\frac{1}{2}} - e^{-\frac{(\sigma+1)^2}{2\sigma^2}}} = 0.346 \tag{14}$$
Therefore, the thresholds for expansion joint detection in a single image are:
$$I_e \text{ satisfies: } \quad \frac{C_i}{C_{max}} \leq 0.346, \quad i < i_{max} \tag{15a}$$
$$T_{e_{max}} = I_e[last] \tag{15b}$$
and
$$T_{e_{min}} = CUM^{-1}\left(\frac{1}{3} \cdot CUM(T_{e_{max}})\right) \tag{15c}$$
where I_e is the candidate array of intensity values i that meet Equation (15a), and T_e_max and T_e_min are the high and low thresholds selected to detect the expansion joints, respectively.
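Putting the earlier sketches together, a hypothetical end-to-end run on one image might look as follows; `gray` is assumed to be a 2-D uint8 array, and the function names come from the sketches in Section 2, not from the paper:

```python
t_e_max = select_high_threshold(gray, alpha=0.346)    # Equations (15a,b)
t_e_min = select_low_threshold(gray, t_e_max, a=1/3)  # Equation (15c)
rough = low_high_threshold_detect(gray, t_e_min, t_e_max)
# Accurate detection then removes residual noise, as described below.
```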
The rough detection results are obtained with T_e_max and T_e_min according to the strategy described in Section 2.2. They consist of expansion joints and some noise that has the same gray-level distribution as the expansion joints; however, there are morphological and geometric differences between the two. So different constraint conditions are set up to remove the noise and achieve accurate detection of the expansion joints.
The morphological characteristic is used first. Expansion joints exhibit linear shapes, whereas the shape of the other noise is close to circular, so the circularity F_c is used as a constraint condition to remove it:
$$F_c = \frac{4\, C_{count}}{\pi\, C_{max}^2} \tag{16}$$
where C_count is the number of pixels in a separate unit of the detection result and C_max is the maximum length of the unit. By this equation, F_c ranges from 0 to 1: it is close to 1 when the separate unit is nearly circular and close to 0 when the unit is linear. After trial-and-error experiments, a separate unit with F_c greater than 0.18 is classified as noise; otherwise it is kept as an expansion joint.
Besides, a geometric characteristic is used to remove further noise. The expansion joint is continuous and has a large area, while most noise has a small area, so an area constraint is applied: after trial-and-error experiments, a separate unit with an area of less than 500 pixels is classified as noise; otherwise it is kept as an expansion joint.
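Both constraints can be applied per separate unit. In the sketch below, the maximum length C_max is approximated by the unit's bounding-box diagonal, since the paper does not state how it is measured:

```python
import numpy as np
from scipy import ndimage

def filter_units(mask, fc_limit=0.18, area_limit=500):
    """Accurate detection: drop units with circularity F_c > 0.18 (Eq. (16))
    or with fewer than 500 pixels; keep the rest as expansion joints."""
    labels, _ = ndimage.label(mask)
    keep = np.zeros_like(mask)
    for k, box in enumerate(ndimage.find_objects(labels), start=1):
        unit = labels[box] == k
        count = int(unit.sum())                # C_count: pixels in the unit
        c_max = np.hypot(*unit.shape)          # bounding-box diagonal ~ C_max
        fc = 4 * count / (np.pi * c_max ** 2)  # circularity F_c, Eq. (16)
        if fc <= fc_limit and count >= area_limit:
            keep[box] |= unit
    return keep
```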
To evaluate the detection capacity of the proposed method, the Canny-morphology method is selected for comparison, and the SWT algorithm is introduced into this field for the first time. Social media event detection is a major direction of visual event analysis [55,56,57,58], and text is a critical semantic visual attribute [59,60]; the SWT algorithm [61,62,63,64] arose as a text detection method. It mainly utilizes the uniform width of strokes, and since expansion joints also have a uniform width, it is selected. Furthermore, manually drawn sketches are used as ground-truth reference data to evaluate the accuracy of the detection methods. The three evaluation indices are:
$$\begin{cases} Recall = \dfrac{Area_{Object\,\cap\,Object_m}}{Area_{Object}} \\[4pt] Precision = \dfrac{Area_{Object\,\cap\,Object_m}}{Area_{Object_m}} \\[4pt] F\text{-}measure = \dfrac{2 \cdot Recall \cdot Precision}{Recall + Precision} \end{cases} \tag{17}$$
where Area_Object_m denotes the area of the manually drawn result, Area_Object denotes the area of the result produced by a detection method, and Area_{Object ∩ Object_m} denotes the area of their intersection. By Equation (17), Recall and Precision range from 0 to 1 (note that the two names are used here with the paper's assignment, which is interchanged relative to the conventional definitions). The F-measure combines Precision and Recall: the higher the F-measure, the more accurate the detection result.
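With both the detection mask and the manually drawn reference as boolean arrays, Equation (17) is straightforward to compute; this sketch follows the paper's naming, with Precision measured against the reference and Recall against the detection:

```python
import numpy as np

def evaluate(detected, reference):
    """Equation (17). `detected` and `reference` are boolean masks of the
    same shape; returns (precision, recall, f_measure)."""
    inter = float(np.logical_and(detected, reference).sum())
    precision = inter / reference.sum()  # completeness w.r.t. the sketch
    recall = inter / detected.sum()      # freedom from noise in the detection
    f_measure = 2 * recall * precision / (recall + precision)
    return precision, recall, f_measure
```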
The comparison results of the three methods are displayed in Table 1 and Figure 11. The accuracy of rough detection and accurate detection differs little, as shown in Figure 11, which indicates that the rough detection itself is effective; accurate detection only further improves the accuracy by adding the constraints above.
The detectability of the three methods is quantitatively assessed using the evaluation indices of Equation (17). The higher the Precision, the higher the completeness of the detection result. Combining Figure 11 and Table 1, the final Precision of all three methods is high, and the proposed method has the highest: the mean Precision of the proposed method, the Canny-morphology method, and the SWT algorithm is 95.76%, 94.09%, and 87.67%, respectively. Meanwhile, the higher the Recall, the less noise is included in the detection results; the Recall of the three methods is much lower than their Precision, with means of 43.69%, 56.79%, and 67.55%, respectively. Many edge portions of the expansion joints are artificially assigned to the background in the reference sketches, so the area of the reference data is smaller than that of the real expansion joints, which lowers the overall Recall. Further, the mean Recall of the proposed method is lower than that of the other two methods because of the interference of dark noise connected to the expansion joints (denoted D-c-E). The D-c-E also reduces the Recall of the Canny-morphology method and the SWT algorithm; however, due to the constraints of the convolution kernel radius and of the stroke width, only a small number of noise pixels are detected by mistake, or a small number of expansion joint pixels are missed, especially for large D-c-E, so the D-c-E has less influence on those two methods than on the proposed method. In the absence of D-c-E, the Recall of the proposed method is higher than that of the comparison methods; for example, for image 8 and image 15 the Recall of the proposed method is 80.90% and 76.50%, respectively, as shown in Table 1.
Four typical partial examples of expansion joint detection are shown in Figure 12.
Figure 12a is a partial image containing D-c-E whose width and intensity value are close to those of the expansion joint. Such interference is difficult for all three methods to remove; the Canny-morphology method is the most affected because the morphological operation connects the discontinuous parts of the D-c-E.
Figure 12b is a partial image containing D-c-E with a large width and area and an intensity value close to the expansion joint. For such noise the proposed method is the most affected, because the D-c-E is detected in its entirety, which makes the Recall drop drastically. Due to the limitation of the convolution kernel radius, only the edge of this noise is detected incorrectly by the Canny-morphology method, so its Recall drops little; and due to the stroke-width constraint, the part of the expansion joint connected to the dark noise is deleted incorrectly by the SWT algorithm, which makes its Recall rise slightly.
Figure 12c is a partial image containing white noise that is connected to the expansion joint and has a width close to it. Such noise is easily removed by the proposed method. However, it is mis-detected by the Canny-morphology method because the edges of the white noise are detected and retained. For the SWT algorithm, the part of the expansion joint connected to the noise is deleted due to the stroke-width constraint, and the residual part of the noise may be removed in the accurate detection stage.
Figure 12d is a partial image containing an expansion joint of uneven width. Both the proposed method and the Canny-morphology method perform well in this situation; however, due to the stroke-width constraint, parts of the expansion joint that are too thin or too thick are deleted incorrectly by the SWT algorithm.
The three methods are implemented in MATLAB R2014a on Windows 7 with an Intel(R) Core(TM) i-7400 processor at 3.00 GHz.
The computational complexity of the three methods is described in terms of space and time. The space complexity of each method is related to the image size. The main time consumption of the proposed method lies in the low-high threshold detection process; however, the algorithm has been optimized with matrix operations and conditional statements. The time consumption of the Canny-morphology method is mainly spent on edge detection, while the traversal calculation of strokes leads to the long running time of the SWT algorithm, which increases with image size.
The running times of the three methods are listed in Table 2; the averages are 1.6090 s, 2.8076 s, and 4.4627 s, respectively. The proposed method therefore performs best, although its running time varies with the number of separate units in an image. The Canny-morphology method has stable performance, with a running time that changes little across images. The SWT algorithm performs worst, and its running time grows with image size.

4. Conclusions and Discussion

In this paper, a dark target detection method based on the adjacency effect is proposed, and a typical simple scene uniformly affected by the adjacency effect is selected for application experiments. Comparison with the Canny-morphology method and the SWT algorithm shows that the proposed method can realize complete detection of expansion joints and that it is feasible to utilize the adjacency effect for dark target detection. Furthermore, because only RGB images are needed, the scope of application is wide. However, although dark noise connected to the dark target is a general problem for all the detection methods, the proposed method is the most affected by it. Besides, the application of the adjacency effect in complex scenarios and the detection performance at different resolutions remain to be explored.

Author Contributions

Data provider, W.W.; data analysis and methodology, L.Y.; writing—original draft preparation, L.Y. and Y.T.

Acknowledgments

The authors are grateful to the editor and to all those who helped during the writing of this paper. The research was supported by the National Key Research and Development Program (Grant No. 2018YFB1004600).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Talab, A.M.A.; Huang, Z.; Xi, F.; Liu, H. Detection crack in image using Otsu method and multiple filtering in image processing techniques. Optik 2016, 127, 1030–1033. [Google Scholar] [CrossRef]
  2. Sim, K.S.; Kho, Y.Y.; Tso, C.P.; Nia, M.E.; Ting, H.Y. A Contrast Stretching Bilateral Closing Top-Hat Otsu Threshold Technique for Crack Detection in Images. Scanning 2013, 35, 75–87. [Google Scholar] [CrossRef] [PubMed]
  3. Qin, J.; Shen, X.; Mei, F.; Fang, Z. An Otsu multi-thresholds segmentation algorithm based on improved ACO. J. Supercomput. 2019, 75, 955–967. [Google Scholar] [CrossRef]
  4. Merzban, M.H.; Elbayoumi, M. Efficient solution of Otsu multilevel image thresholding: A comparative study. Expert Syst. Appl. 2019, 116, 299–309. [Google Scholar] [CrossRef]
  5. He, S.; Schomaker, L. DeepOtsu: Document enhancement and binarization using iterative deep learning. Pattern Recognit. 2019, 91, 379–390. [Google Scholar] [CrossRef] [Green Version]
  6. Hutchinson, T.C.; Chen, Z. Improved Image Analysis for Evaluating Concrete Damage. J. Comput. Civil Eng. 2006, 20, 210–216. [Google Scholar] [CrossRef]
  7. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  8. Perez, A.; Gonzalez, R.C. An Iterative Thresholding Algorithm for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 742–751. [Google Scholar] [CrossRef]
  9. Wagstaff, K. Constrained K-means clustering with background knowledge. In Proceedings of the 18th International Conference on Machine Learning, San Francisco, CA, USA, 28 June–1 July 2001; pp. 577–584. [Google Scholar]
  10. Suykens, J.A.K.; Vandewalle, J. Least Squares Support Vector Machine Classifiers. Neural Process. Lett. 1999, 9, 293–300. [Google Scholar] [CrossRef]
  11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436. [Google Scholar] [CrossRef]
  12. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  13. Gao, Y.; Mosalam, K.M. Deep Transfer Learning for Image-Based Structural Damage Recognition. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 748–768. [Google Scholar] [CrossRef]
  14. Khan, M.; Yousaf, A.; Javed, N.; Nadeem, S.; Khurshid, K. Automatic Target Detection in Satellite Images using Deep Learning. J. Space Technol. 2017, 7. [Google Scholar]
  15. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, Vancouver, BC, Canada, 3–8 December 2001; pp. 849–856. [Google Scholar]
  16. Zhou, L.; Lu, Y.; Tan, C.L. Bangla/English Script Identification Based on Analysis of Connected Component Profiles; Springer: Berlin/Heidelberg, Germany, 2006; pp. 243–254. [Google Scholar]
  17. Comaniciu, D.; Meer, P. Robust analysis of feature spaces: color image segmentation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 17–19 June 1997; pp. 750–755. [Google Scholar]
  18. Grimmett, G. Percolation; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  19. Yamaguchi, T.; Hashimoto, S. Image Processing Based on Percolation Model. IEICE Trans. Inf. Syst. 2006, E89-D, 2044–2052. [Google Scholar] [CrossRef]
  20. Epshtein, B.; Ofek, E.; Wexler, Y. Detecting text in natural scenes with stroke width transform. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2963–2970. [Google Scholar]
  21. Holben, B.; Vermote, E.; Kaufman, Y.J.; Tanre, D.; Kalb, V. Aerosol retrieval over land from AVHRR data-application for atmospheric correction. IEEE Trans. Geosci. Remote Sens. 1992, 30, 212–222. [Google Scholar] [CrossRef]
  22. Otterman, J.; Fraser, R.S. Adjacency effects on imaging by surface reflection and atmospheric scattering: cross radiance to zenith. Appl. Opt. 1979, 18, 2852–2860. [Google Scholar] [CrossRef] [PubMed]
  23. Burazerović, D.; Heylen, R.; Geens, B.; Sterckx, S.; Scheunders, P. Detecting the Adjacency Effect in Hyperspectral Imagery With Spectral Unmixing Techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 1070–1078. [Google Scholar] [CrossRef]
  24. Lyapustin, A.I.; Kaufman, Y.J. Role of adjacency effect in the remote sensing of aerosol. J. Geophys. Res. Atmos. 2001, 106, 11909–11916. [Google Scholar] [CrossRef] [Green Version]
  25. Justice, C.O.; Vermote, E.; Townshend, J.R.G.; Defries, R.; Roy, D.P.; Hall, D.K.; Salomonson, V.V.; Privette, J.L.; Riggs, G.; Strahler, A.; et al. The Moderate Resolution Imaging Spectroradiometer (MODIS): land remote sensing for global change research. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1228–1249. [Google Scholar] [CrossRef] [Green Version]
  26. Richter, R.; Bachmann, M.; Dorigo, W.; Muller, A. Influence of the Adjacency Effect on Ground Reflectance Measurements. IEEE Geosci. Remote Sens. Lett. 2006, 3, 565–569. [Google Scholar] [CrossRef]
  27. Sei, A. Efficient correction of adjacency effects for high-resolution imagery: integral equations, analytic continuation, and Padé approximants. Appl. Opt. 2015, 54, 3748–3758. [Google Scholar] [CrossRef]
  28. Jackett, C. Deconvolving and improving the spatial resolution of satellite data using the Maximum Entropy Method. Ph.D. Thesis, University of Tasmania, Tasmania, Australia, 2013. [Google Scholar]
  29. Borfecchia, F.; Consalvi, N.; Micheli, C.; Carli, F.M.; Cognetti De Martiis, S.; Gnisci, V.; Piermattei, V.; Belmonte, A.; De Cecco, L.; Bonamano, S.; et al. Landsat 8 OLI satellite data for mapping of the Posidonia oceanica and benthic habitats of coastal ecosystems. Int. J. Remote Sens. 2019, 40, 1548–1575. [Google Scholar] [CrossRef]
  30. Dave, J. Effect of Atmospheric Conditions on Remote Sensing of a Surface Nonhomogeneity. Photogramm. Eng. Remote Sens. 1980, 46, 1173–1180. [Google Scholar]
  31. Moses, W.J.; Sterckx, S.; Montes, M.J.; De Keukelaere, L.; Knaeps, E. Chapter 3 - Atmospheric Correction for Inland Waters. In Bio-optical Modeling and Remote Sensing of Inland Waters; Mishra, D.R., Ogashawara, I., Gitelson, A.A., Eds.; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
  32. Frouin, R.; Deschamps, P.-Y.; Steinmetz, F. Environmental Effects in Ocean Color Remote Sensing; SPIE: San Diego, CA, USA, 2009; Volume 7459. [Google Scholar]
  33. Singh, S.M. Estimation of multiple reflection and lowest order adjacency effects on remotely-sensed data. Int. J. Remote Sens. 1988, 9, 1433–1450. [Google Scholar] [CrossRef]
  34. Sterckx, S.; Knaeps, E.; Ruddick, K. Detection and correction of adjacency effects in hyperspectral airborne data of coastal and inland waters: the use of the near infrared similarity spectrum. Int. J. Remote Sens. 2011, 32, 6479–6505. [Google Scholar] [CrossRef]
  35. Feng, L.; Hu, C. Land adjacency effects on MODIS Aqua top-of-atmosphere radiance in the shortwave infrared: Statistical assessment and correction. J. Geophys. Res. Oceans 2017, 122, 4802–4818. [Google Scholar] [CrossRef]
  36. Vermote, E.F.; El Saleous, N.Z.; Justice, C.O. Atmospheric correction of MODIS data in the visible to middle infrared: first results. Remote Sens. Environ. 2002, 83, 97–111. [Google Scholar] [CrossRef]
  37. Semenov, A.A.; Moshkov, A.V.; Pozhidayev, V.N.; Barducci, A.; Marcoionni, P.; Pippi, I. Estimation of Normalized Atmospheric Point Spread Function and Restoration of Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2623–2634. [Google Scholar] [CrossRef]
  38. Duan, S.-B.; Li, Z.-L.; Tang, B.-H.; Wu, H.; Tang, R.; Bi, Y. Atmospheric correction of high-spatial-resolution satellite images with adjacency effects: application to EO-1 ALI data. Int. J. Remote Sens. 2015, 36, 5061–5074. [Google Scholar] [CrossRef]
  39. Diner, D.J.; Martonchik, J.V.; Danielson, E.D.; Bruegge, C.J. Atmospheric Correction of High Resolution Land Surface Images. In Proceedings of the 12th Canadian Symposium on Remote Sensing Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 10–14 July 1989; pp. 860–863. [Google Scholar]
  40. Minomura, M.; Kuze, H.; Takeuchi, N. Adjacency Effect in the Atmospheric Correction of Satellite Remote Sensing Data: Evaluation of the Influence of Aerosol Extinction Profiles. Opt. Rev. 2001, 8, 133–141. [Google Scholar] [CrossRef]
  41. Barducci, A.; Pippi, I. Small-angle atmospheric scattering of the light: outline of a theoretical model for adjacency effects on image remote sensing. Satell. Remote Sens. 1994, 2312. [Google Scholar]
  42. Chami, M.; Lenot, X.; Guillaume, M.; Lafrance, B.; Briottet, X.; Minghelli, A.; Jay, S.; Deville, Y.; Serfaty, V. Analysis and quantification of seabed adjacency effects in the subsurface upward radiance in shallow waters. Opt. Express 2019, 27, A319–A338. [Google Scholar] [CrossRef] [PubMed]
  43. Sharma, A.R.; Badarinath, K.V.S.; Roy, P.S. Corrections for Atmospheric and Adjacency Effects on High Resolution Sensor Data—A Case Study Using IRS-P6 LISS-IV Data. In Proceedings of the XXIst ISPRS Congress, Beijing, China, 3–11 July 2008; pp. 497–502. [Google Scholar]
  44. Reinersman, P.N.; Carder, K.L. Monte Carlo simulation of the atmospheric point-spread function with an application to correction for the adjacency effect. Appl. Opt. 1995, 34, 4453–4471. [Google Scholar] [CrossRef] [PubMed]
  45. Kohei, A. Monte Carlo Ray Tracing Based Adjacency Effect and Nonlinear Mixture Pixel Model for Remote Sensing Satellite Imagery Data Analysis. Int. J. Adv. Res. Artif. Intell. 2013, 2, 56–64. [Google Scholar]
  46. Sei, A. Analysis of adjacency effects for two Lambertian half-spaces. Int. J. Remote Sens. 2007, 28, 1873–1890. [Google Scholar] [CrossRef]
  47. Zimovaya, A.V.; Tarasenkov, M.V.; Belov, V.V. Allowance for polarization in passive space sensing of reflective properties of the Earth's surface. Atmos. Ocean. Opt. 2016, 29, 171–174. [Google Scholar] [CrossRef]
  48. Wang, X.; Zhong, Y.; Zhang, L.; Xu, Y. Blind Spectral Unmixing Considering the Adjacent Effect. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4229–4232. [Google Scholar]
  49. Vanhellemont, Q. Adaptation of the dark spectrum fitting atmospheric correction for aquatic applications of the Landsat and Sentinel-2 archives. Remote Sens. Environ. 2019, 225, 175–192. [Google Scholar] [CrossRef]
  50. Schläpfer, D.; Richter, R. Atmospheric correction of imaging spectroscopy data using shadow-based quantification of aerosol scattering effects. In Proceedings of the 37th EARSeL Symposium, Prague, Czech Republic, 21–24 May 2001; pp. 21–28. [Google Scholar]
  51. Ma, X.S.; Guo, X.Y.; Meng, X.; Yang, Z.; Peng, X.D.; Li, L.G.; Xie, W.M. Simulation and analysis of the adjacency effect in earth-imaging process of the optical remote sensing. Hongwai Yu Haomibo Xuebao 2015, 34, 250–256. [Google Scholar]
  52. Warren, M.A.; Simis, S.G.H.; Martinez-Vicente, V.; Poser, K.; Bresciani, M.; Alikas, K.; Spyrakos, E.; Giardino, C.; Ansper, A. Assessment of atmospheric correction algorithms for the Sentinel-2A MultiSpectral Imager over coastal and inland waters. Remote Sens. Environ. 2019, 225, 267–289. [Google Scholar] [CrossRef]
  53. Bulgarelli, B.; Zibordi, G. On the detectability of adjacency effects in ocean color remote sensing of mid-latitude coastal environments by SeaWiFS, MODIS-A, MERIS, OLCI, OLI and MSI. Remote Sens. Environ. 2018, 209, 423–438. [Google Scholar] [CrossRef]
  54. Bulgarelli, B.; Kiselev, V.; Zibordi, G. Simulation and analysis of adjacency effects in coastal waters: a case study. Appl. Opt. 2014, 53, 1523–1545. [Google Scholar] [CrossRef] [PubMed]
  55. Zhu, J.; Luo, J.; You, Q.; Smith, J.R. Towards Understanding the Effectiveness of Election Related Images in Social Media. In Proceedings of the 2013 IEEE 13th International Conference on Data Mining Workshops, Dallas, TX, USA, 7–10 December 2013; pp. 421–425. [Google Scholar]
  56. Gong, C.; Tao, D.; Chang, X.; Yang, J. Ensemble Teaching for Hybrid Label Propagation. IEEE Trans. Cybern. 2019, 49, 388–402. [Google Scholar] [CrossRef] [PubMed]
  57. Cheng, Z.; Chang, X.; Zhu, L.; Kanjirathinkal, R.C.; Kankanhalli, M. MMALFM: Explainable Recommendation by Leveraging Reviews and Images. ACM Trans. Inf. Syst. 2019, 37, 1–28. [Google Scholar] [CrossRef]
  58. Chang, X.; Ma, Z.; Yang, Y.; Zeng, Z.; Hauptmann, A.G. Bi-Level Semantic Representation Analysis for Multimedia Event Detection. IEEE Trans. Cybern. 2017, 47, 1180–1197. [Google Scholar] [CrossRef]
  59. Ives, D.J.; Hayter, J.H.; Oei, M.; Seltze, D.B. System for social media tag extraction. 2016. [Google Scholar]
  60. Tabassum, A.; Dhondse, S.A. Text Detection Using MSER and Stroke Width Transform. In Proceedings of the 2015 Fifth International Conference on Communication Systems and Network Technologies, Gwalior, India, 4–6 April 2015. [Google Scholar]
  61. Piriyothinkul, B.; Pasupa, K.; Sugimoto, M. Detecting Text in Manga Using Stroke Width Transform. In Proceedings of the 2019 11th International Conference on Knowledge and Smart Technology (KST), Phuket, Thailand, 23–26 January 2019; pp. 142–147. [Google Scholar]
  62. Paul, S.; Saha, S.; Basu, S.; Saha, P.K.; Nasipuri, M. Text localization in camera captured images using fuzzy distance transform based adaptive stroke filter. Multimedia Tools Appl. 2019. [Google Scholar] [CrossRef]
  63. Bosamiya, J.H.; Agrawal, P.; Roy, P.P.; Balasubramanian, R. Script independent scene text segmentation using fast stroke width transform and GrabCut. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 151–155. [Google Scholar]
  64. Mosleh, A.; Bouguila, N.; Ben Hamza, A. Image Text Detection Using a Bandlet-Based Edge Detector and Stroke Width Transform. In Proceedings of the British Machine Vision Conference, Surrey, UK, 3–7 September 2012. [Google Scholar]
Figure 1. Two typical high-contrast images. (a–d) Four example parts of the text image and the crack image.
Figure 2. Intensity (lightness) profiles. (a–d) Profiles along the lines in Figure 1a–d, respectively.
Figure 3. High-contrast scenarios in GF-2 and Worldview-2 images. (a–d) Four example parts of the GF-2 image and the Worldview-2 image.
Figure 4. Intensity (lightness) profiles (only the red band is displayed). (a–d) Profiles along the lines in Figure 3a–d, respectively.
Figure 5. Concept map of the correspondence between the dark target's location and its intensity value.
Figure 6. Concept map of the low-high threshold detection strategy. (a) A scene consisting of noise, dark target, and bright background; (b) detection result using a low threshold; (c) detection result using a high threshold; (d) rough detection result using the low-high threshold detection strategy.
Figure 7. The definition of a separate unit.
Figure 8. The curves of the Gaussian probability density function and its first derivative.
Figure 9. Regular expansion joints on the surface of concrete slope protection.
Figure 10. Overall view of the concrete slope protection project.
Figure 11. Detection accuracy for expansion joints of all images. (a) Precision, (b) Recall, and (c) F-measure curves for the proposed method, the Canny-morphology method, and the SWT algorithm.
Figure 12. (a–d) Four typical partial examples of expansion joint detection. Columns from left to right: original images, manually drawn sketches, rough and accurate detection results using the proposed method, detection results using the Canny-morphology method, and rough and accurate detection results using the SWT algorithm.
Table 1. Detection accuracy for expansion joints of all the images. Precision_P, Precision_C, and Precision_S are the Precision of the proposed method, the Canny-morphology method, and the SWT algorithm, respectively; similarly, Recall_P, Recall_C, and Recall_S are the corresponding Recall values.

Number  Precision_P  Recall_P  Precision_C  Recall_C  Precision_S  Recall_S
1       0.9612       0.4142    0.9426       0.6639    0.8744       0.6932
2       0.9740       0.5282    0.9762       0.6860    0.9147       0.6571
3       0.9667       0.1859    0.9563       0.3704    0.8000       0.6191
4       0.9225       0.4647    0.9676       0.4599    0.8652       0.6125
5       0.9571       0.3081    0.9587       0.4330    0.9347       0.5834
6       0.9787       0.4179    0.9759       0.5560    0.9554       0.5899
7       0.9614       0.4085    0.9504       0.6718    0.9160       0.7274
8       0.9073       0.8090    0.9624       0.7075    0.9253       0.7087
9       0.9777       0.5395    0.9114       0.6656    0.7669       0.6903
10      0.9733       0.4360    0.9249       0.6641    0.8279       0.7424
11      0.9710       0.5619    0.9667       0.6641    0.9627       0.6951
12      0.9434       0.1522    0.9328       0.3403    0.9256       0.6320
13      0.9567       0.2515    0.9575       0.5020    0.8892       0.6682
14      0.9754       0.2203    0.9209       0.4805    0.8444       0.6901
15      0.9471       0.7650    0.9136       0.6535    0.8832       0.7222
16      0.9533       0.6394    0.8754       0.4567    0.7961       0.6927
17      0.9370       0.3587    0.9291       0.5597    0.8715       0.7184
18      0.9732       0.4025    0.9146       0.6874    0.8273       0.7158
Table 2. Running time (RT) for expansion joint detection on all the images using the three methods.

Number  RT-Proposed Method (s)  RT-Canny-Morphology (s)  RT-SWT (s)
1       2.1285                  2.9224                   4.2328
2       1.1376                  2.0462                   3.6519
3       3.8280                  3.4582                   4.9786
4       1.0969                  2.7031                   4.3437
5       1.2345                  2.8442                   4.1065
6       1.2428                  2.6918                   3.9110
7       1.9997                  2.7717                   3.9742
8       0.8414                  2.4180                   4.0313
9       1.6320                  3.0660                   4.8097
10      2.7472                  3.4834                   5.3840
11      1.2736                  2.6746                   5.5774
12      1.2082                  2.8016                   4.2024
13      0.9160                  2.2509                   4.0607
14      1.4074                  2.9806                   4.9223
15      0.8659                  2.9569                   4.4141
16      1.0288                  3.0996                   4.7188
17      1.8465                  2.6030                   4.0301
18      2.5270                  2.7639                   4.9794
