Article

Thermal Infrared Pedestrian Image Segmentation Using Level Set Method

1 School of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(8), 1811; https://doi.org/10.3390/s17081811
Submission received: 7 July 2017 / Revised: 1 August 2017 / Accepted: 3 August 2017 / Published: 6 August 2017

Abstract: The edge-based active contour model has been one of the most influential models in image segmentation, in which the level set method is usually used to minimize the active contour energy function and then find the desired contour. However, for thermal infrared pedestrian images, the traditional level set-based method that uses gradient information as the edge indicator function fails to provide a satisfactory boundary of the target, because of the poorly defined boundaries and the intensity inhomogeneity. We therefore propose a novel level set-based thermal infrared image segmentation method that is able to deal with these problems. Specifically, we first explore the one-bit transform convolution kernel and define a soft mark, with which the target boundary is enhanced. Then we propose a weight function to adaptively adjust the intensity of the infrared image so as to reduce the intensity inhomogeneity. In the level set formulation, these processes adaptively adjust the edge indicator function, so that the evolving curve stops at the target boundary. We conduct experiments on benchmark infrared pedestrian images and compare the introduced method with state-of-the-art approaches to demonstrate its excellent performance.

1. Introduction

Infrared imaging has been applied in many fields, such as industrial inspection, defense and security. Therefore, infrared target detection, recognition and tracking are important topics in infrared image processing, for which infrared image segmentation is one of the fundamental steps. In computer vision and image processing, various methods have been proposed to solve image segmentation problems [1,2,3]. However, due to the particular properties of infrared images, infrared image segmentation is still a challenging problem.
Active contour models have been applied in image segmentation in recent decades because they are able to provide smooth and closed boundary contours as segmentation results. The level set method (LSM) for capturing moving fronts was proposed by Osher and Sethian [4]. In computer vision and image processing, the level set method was introduced independently by Caselles et al. [5,6,7] and Malladi et al. [8] in the context of active contour (or snake) models [9,10] for image segmentation. In level set-based image segmentation methods, the boundary (contour or interface) of the region is represented as the zero level set of a level set function, and thus the moving contour is formulated as the zero level set of the evolving level set function. The level set method has several advantages, such as being able to represent interfaces with complex topology and handling topology changes in a natural way. These advantages have led the level set method to be intensively explored in the image segmentation field.
In general, level set-based image segmentation methods can be classified into two categories, namely, region-based methods and edge-based methods. The region-based methods make use of some region descriptor to control the evolution of the active contour. Based on the model proposed by Mumford and Shah [11] and the assumption of intensity homogeneity, Chan and Vese introduced the two-phase level set framework [12] and the multiphase level set framework [13] for image segmentation (also called the piecewise constant model). Li et al. [14] proposed a region-based image segmentation method that is able to deal with intensity inhomogeneity in image segmentation via a bias field. Zhou et al. [15] presented a region fitting method to improve the Chan and Vese model for infrared image segmentation.
The edge-based methods utilize the edge information [16] for image segmentation, which shows potentially improved performance in various applications, for example, object extraction in aerial imagery [17,18], medical image segmentation [19,20,21,22,23], and infrared image segmentation [24,25]. Li et al. [26,27] and Zhang et al. [28] proposed level set evolution without re-initialization, and both showed the effectiveness of their proposals in medical image segmentation. Meng Li et al. [29] introduced the tensor diffusion level set method to extract infrared target contours from a complex background, in which the structure tensor and its eigenvalues are used to represent the edges of the infrared object. Tan et al. [24] took advantage of background subtraction and the level set-based active contour model for human segmentation in infrared image sequences. Zhao et al. [30] proposed an edge map based on the guide filter and the gradient vector flow (GVF) to segment infrared images.
However, it is well known that edge-based methods suffer from serious boundary leakage problems for images with weak object boundaries [31]. The reason may be that the edge-stop function cannot stop the contour evolution at the poorly defined (weak) boundaries. Therefore, Pratondo [19] proposed a method to construct a robust edge-stop function to improve medical image segmentation performance. In infrared thermal imaging, the device collects the infrared radiation from the objects and the surrounding scene. In some cases, the temperature difference between parts of the target and the surroundings is not remarkable, and (or) the differences in the infrared energy emitted from a target are significant, which may result in weak target boundaries and intensity inhomogeneity in the target region, respectively. The former may cause the boundary leakage problem, while the latter may lead to the level set evolution stopping prematurely, so that the boundary is formed in the interior of the target region. Pratondo's method [19] can be used to solve the boundary leakage problem; however, training samples must be selected to train the classifier that constructs the robust edge-stop function, which is difficult for objects with severely inhomogeneous intensities.
In order to deal with the boundary leakage problem and the intensity inhomogeneity, we propose herein a robust infrared image segmentation method, named intensity adjustment level set evolution (IALSE), which is based on using the level set evolution to extract infrared target contours. The introduced approach constructs a soft mark with a one-bit transform to enlarge the intensity changes around the edges, and then defines a weight function to adaptively adjust the image intensity so that the intensities in the interior of the target approach uniformity. Therefore, the edge-based contour evolution can stop at the desired boundary automatically.
The rest of this paper is organized as follows: in the second section, we give a necessary background of the edge-based level set method and our motivation. In the third section, we introduce our method to address the boundary leakage and the premature stop problems of the level set evolution, and then summarize the whole segmentation method. In Section 4, infrared image segmentation experiments are conducted to demonstrate the effectiveness of the proposed method. Finally, we conclude our work in Section 5.

2. Background and Motivation

2.1. Traditional Level Set Method for Image Segmentation

The level set method was proposed for capturing moving fronts by Osher et al. [4]. Caselles et al. [5,6,7] proved that a particular case of the classical energy of the snakes model is equivalent to finding a geodesic curve in a Riemannian space with a metric derived from the image content, and then introduced the geometric active contours, based on the idea that the contours are represented as the zero level set of an implicit function, called the level set function.
In the context of image segmentation, let $C(t,q): [0,\infty) \times [0,1] \to \mathbb{R}^2$ be a dynamic parametric contour and $I: [0,a] \times [0,b] \to \mathbb{R}^+$ be a given image, in which $t$ is a temporal variable and $q$ is a spatial parameter. In order to deform the initial curve $C_0(q) = C(0,q)$ towards the object boundary, Caselles et al. [5] showed that it should follow the curve evolution equation:
\frac{\partial C(t,q)}{\partial t} = F\,\mathcal{N}
where $\mathcal{N}$ is the unit inward normal vector to the desired boundary (the local optimal curve), and $F$ is the speed function that controls the motion of the curve along its normal direction. Assume that the curve $C$ is a level set of a function $\phi: \mathbb{R}^+ \times \mathbb{R}^2 \to \mathbb{R}$ such that:
\phi(t, x, y) = \text{constant}
This means that the embedding function $\phi$ is an implicit representation of the curve. Now let $\phi$ be a signed distance function ($|\nabla\phi| = 1$), whose value at a point equals the signed distance between that point and its closest point on the zero level set. Caselles et al. [5] demonstrated that if the curve $C$ evolves according to Equation (1), then the embedding function $\phi$ should deform as follows:
\frac{\partial\phi}{\partial t} = F\,|\nabla\phi|
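Equation (3) can be obtained by a standard argument: differentiating the identity $\phi(t, C(t,q)) = \text{constant}$ with respect to $t$, and using the convention that $\phi$ is negative inside the contour, so that $\mathcal{N} = -\nabla\phi/|\nabla\phi|$, gives

\frac{\partial\phi}{\partial t} + \nabla\phi \cdot \frac{\partial C}{\partial t} = 0
\quad\Longrightarrow\quad
\frac{\partial\phi}{\partial t} = -F\,\nabla\phi \cdot \mathcal{N} = F\,|\nabla\phi| .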
The final level set representation of the geodesic active contours is:
\frac{\partial\phi}{\partial t} = g(I)\,(c + k)\,|\nabla\phi| + \nabla\phi \cdot \nabla g(I)
in which $c$ is a constant that improves the convergence speed and allows the detection of non-convex objects, and $k = \operatorname{div}\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)$ is the curvature. The stopping function $g(I)$ stops the evolving curve when it arrives at the object boundary. Usually, $g(I)$ is defined as follows:
g(I) = \frac{1}{1 + |\nabla\hat{I}|^{p}}, \quad p = 1, 2
where $\hat{I}$ is the smoothed version of the image $I$. Generally, the Gaussian kernel $G_\sigma$ ($\sigma$ is the standard deviation) is used to smooth the image as $\hat{I} = G_\sigma * I$. The desired contour for image segmentation is given by the zero level set at the steady state ($\partial\phi/\partial t = 0$).
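As a concrete illustration of Equation (5), the edge indicator can be computed in a few lines. The following is a minimal Python sketch (the paper's own implementation is in MATLAB), assuming a grayscale image stored as a floating-point array:

import numpy as np
from scipy import ndimage

def edge_indicator(image, sigma=2.0, p=2):
    """Edge-stop function g(I) = 1 / (1 + |grad(G_sigma * I)|^p) of Equation (5)."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)   # I_hat = G_sigma * I
    gy, gx = np.gradient(smoothed)                                   # gradient of the smoothed image
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)                            # |grad I_hat|
    return 1.0 / (1.0 + grad_mag ** p)                               # close to 0 on edges, close to 1 in flat regions

The function approaches zero near strong edges, which is what slows and stops the evolving contour there.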

2.2. Distance Regularized Level Set Evolution

It is well known that the contour finally found with the level set-based method is the zero level set of the level set function. However, in traditional level set formulations, the level set function typically develops irregularities during its evolution, which may cause numerical errors and ultimately destroy the stability of the evolution [27]. To maintain the stability of the level set evolution, the traditional remedy is to reinitialize the level set function, which introduces other problems, such as when and how to apply the reinitialization. Therefore, Li et al. [27] proposed the distance regularized level set evolution (DRLSE), which uses a penalty term to keep the level set function as close as possible to a signed distance function (at least in a vicinity of its zero level set).
The general energy formulation with the distance regularization is:
\varepsilon(\phi) = \mu R_p(\phi) + \varepsilon_{\mathrm{ext}}(\phi)
where $\phi$ is the level set function defined on a domain $\Omega$, $\mu > 0$ is a constant, and $\varepsilon_{\mathrm{ext}}(\phi)$ is the energy that depends on the image. $R_p(\phi) = \int_\Omega p(|\nabla\phi|)\,dx$ is the level set regularization term that keeps the level set evolution stable. To maintain the signed distance property $|\nabla\phi| = 1$, Li et al. [27] provided a double-well potential function:
p(s) =
\begin{cases}
\dfrac{1}{(2\pi)^2}\left(1 - \cos(2\pi s)\right), & \text{if } s \le 1 \\[1.5ex]
\dfrac{1}{2}(s-1)^2, & \text{if } s \ge 1
\end{cases}
To demonstrate the performance of DRLSE, Li et al. [27] applied it to the edge-based active contour model for the image segmentation. Finally, the energy functional is defined as follows:
\varepsilon(\phi) = \mu R_p(\phi) + \lambda L_g(\phi) + \alpha A_g(\phi)
where $\lambda > 0$ and $\alpha \in \mathbb{R}$ are constants. $L_g(\phi)$ and $A_g(\phi)$ are defined by:
L_g(\phi) \triangleq \int_\Omega g\,\delta(\phi)\,|\nabla\phi|\,dx
and:
A_g(\phi) \triangleq \int_\Omega g\,H(-\phi)\,dx
The energy $L_g(\phi)$ computes the line integral of the function $g$ along the zero level contour of $\phi$, and the energy $A_g(\phi)$ is a weighted area of the region enclosed by that contour. Here, $g$ is defined as in Equation (5), and $\delta$ and $H$ are the Dirac delta function and the Heaviside function, respectively. The energy functional of Equation (8) can be approximately minimized by solving the following equation:
\frac{\partial\phi}{\partial t} = \mu\,\operatorname{div}\!\big(d_p(|\nabla\phi|)\,\nabla\phi\big) + \lambda\,\delta_\varepsilon(\phi)\,\operatorname{div}\!\left(g\,\frac{\nabla\phi}{|\nabla\phi|}\right) + \alpha\, g\,\delta_\varepsilon(\phi)
with a given initial level set function $\phi(0, x, y)$, where $\delta_\varepsilon(\cdot)$ is an approximate Dirac delta function and $d_p(\cdot)$ is defined as $d_p(s) \triangleq p'(s)/s$. Equation (11) is considered an edge-based geometric active contour model; for details about DRLSE, readers may refer to [27].
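For reference, differentiating the double-well potential of Equation (7) gives the explicit form of the diffusion rate $d_p$ used above (a direct consequence of Equation (7)):

d_p(s) = \frac{p'(s)}{s} =
\begin{cases}
\dfrac{\sin(2\pi s)}{2\pi s}, & \text{if } s \le 1 \\[1.5ex]
1 - \dfrac{1}{s}, & \text{if } s \ge 1
\end{cases}

so the regularization term diffuses $\phi$ forward where $|\nabla\phi| > 1$ and backward where $|\nabla\phi|$ lies between roughly 0.5 and 1, driving $|\nabla\phi|$ toward one of the two minima of $p$.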

2.3. Motivation

Infrared thermal images are generated by collecting the infrared radiation from the scene. In some cases, the temperature difference between parts of the target and the background is not notable, which may cause weak target boundaries. Because the infrared energy emitted from different parts of a target may differ significantly, the intensity in the target region can be inhomogeneous. The former may cause the boundary leakage problem, while the latter may result in the contour evolution stopping prematurely, so that the boundary is formed in the interior of the target region. For example, consider the infrared image shown in Figure 1a, which is segmented via DRLSE with the parameter values set as in [27]. The segmentation results with $\alpha = 2.5$, $\alpha = 2.7$ and $\alpha = 3$ are shown in Figure 1b–d, respectively.
As shown in Figure 1a, the pixel intensity of each region marked with a rectangular curve is obviously inhomogeneous. Therefore, the segmentation result in Figure 1b indicates that the evolving contour stops at a wrong boundary that lies in the interior of the object. In order to reduce the effect of the intensity inhomogeneity, we increase the value of the parameter $\alpha$. It can be seen from Figure 1c,d that the intensity inhomogeneity problem is partially overcome by this approach. However, the contour does not stop at the weak boundaries that lie in the regions circled with the oval-shaped curves in Figure 1a.
To address the boundary leakage problem, Pratondo et al. [19] proposed a robust edge-stop function (ESF) for medical image segmentation, which is constructed from the edge information given by the image gradient and probability scores from a classifier. That is, training samples are selected from the background and object regions of the image to be segmented, a classifier is trained to classify each pixel, and the resulting probability scores are used to construct the ESF. However, for an infrared image like the one shown in Figure 1a, the difference between the pixel intensities of some target regions and those of the background is not remarkable, so it may be difficult to classify them into two categories. Therefore, in this paper, we propose IALSE, which measures the intensity change in the vicinity of the boundary and then reduces the intensity inhomogeneity in the target region to address the boundary leakage and intensity inhomogeneity problems, so that the edge-based contour evolution can be stopped at the desired boundary automatically.

3. Intensity Adjustment Level Set Evolution

3.1. Boundary Enhancement

Generally, the original image is smoothed by a Gaussian convolution kernel to reduce the noise and obtain an effective edge-stop function. However, in order to reduce the smoothing effect on the edges, we use the following filter:
G = \frac{1}{T}\, G_{\sigma,N} \circ K_N
$G_{\sigma,N}$ is an $N \times N$ filter generated by a Gaussian function with standard deviation $\sigma$, and $K_N$ is an $N \times N$ binary filter. For example, for $N = 7$, $K_7$ is defined as:
K_7 =
\begin{bmatrix}
1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}
where "$\circ$" denotes the Hadamard (element-wise) product, and $\frac{1}{T}$ is a normalization factor such that the sum of all elements of the filter $G$ equals one.
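The following is a minimal Python sketch of the modified smoothing filter of Equation (12) (illustrative only; the paper's implementation is in MATLAB, and the binary mask is passed in rather than hard-coded, so that any $K_N$ pattern such as the $K_7$ above can be supplied):

import numpy as np
from scipy import ndimage

def modified_gaussian_filter(binary_mask, sigma):
    """G = (1/T) * G_{sigma,N} o K_N of Equation (12), with 'o' the Hadamard product."""
    n = binary_mask.shape[0]
    ax = np.arange(n) - (n - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    gauss = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))   # N x N Gaussian G_{sigma,N}
    G = gauss * binary_mask                                     # element-wise (Hadamard) product
    return G / G.sum()                                          # 1/T normalization: entries sum to one

def smooth(image, binary_mask, sigma=2.0):
    """I_G: the image smoothed with the modified Gaussian filter G."""
    G = modified_gaussian_filter(binary_mask, sigma)
    return ndimage.convolve(image.astype(float), G, mode='reflect')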
Natarajan et al. [32] introduced a $17 \times 17$ convolution kernel $K_o$ as follows:
K_o(i,j) =
\begin{cases}
1/25, & i, j \in \{0,\ 4,\ 8,\ 12,\ 16\} \\
0, & \text{otherwise}
\end{cases}
from which they constructed the one-bit transform for block-based motion estimation. Erturk [33] proposed a diamond-shaped structured filtering kernel $K_r$, which can be considered a rotated version of the filter $K_o$. Afterwards, Erturk [34] applied this kernel-based one-bit transform to region-of-interest extraction in infrared images. The one-bit image is generated by comparing the original image $I$ with the filtered image $I_{K_r} = I * K_r$ as follows:
B(i,j) =
\begin{cases}
1, & I(i,j) \ge I_{K_r}(i,j) \\
0, & \text{otherwise}
\end{cases}
where $(i,j)$ is the location of each pixel in the image. As shown in Figure 2, Figure 2b is the one-bit image, and Figure 2c is the masked infrared image. It can be seen that the one-bit transform can locate the target. However, some regions in the interior of the target are marked as black regions, which means they do not belong to the target. The major reason is the intensity inhomogeneity in the vicinity of those regions.
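A compact Python sketch of the hard mark of Equation (13) is given below (illustrative only: for simplicity it filters with the axis-aligned kernel $K_o$, whereas Figure 2 is produced with the rotated kernel $K_r$):

import numpy as np
from scipy import ndimage

def kernel_multiband(size, step):
    """Kernel with value 1/25 at the 25 positions whose row and column indices are multiples of `step`."""
    K = np.zeros((size, size))
    idx = np.arange(0, size, step)
    K[np.ix_(idx, idx)] = 1.0 / 25.0
    return K

def one_bit_transform(image):
    K_o = kernel_multiband(17, 4)                            # 17 x 17 kernel of Natarajan et al. [32]
    filtered = ndimage.convolve(image.astype(float), K_o, mode='reflect')
    return (image >= filtered).astype(np.uint8)              # hard mark B of Equation (13)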
Instead of utilizing the hard mark of Equation (13), we introduce a soft mark $M$ whose elements are defined as:
M(i,j) =
\begin{cases}
1, & \text{if } I_G(i,j) \ge I_{GK}(i,j) \\[1ex]
\left(\dfrac{I_G(i,j)}{I_{GK}(i,j)}\right)^{q}, & \text{otherwise}
\end{cases}
$I_G$ is the image smoothed with the modified Gaussian filter of Equation (12), and $I_{GK}(i,j)$ is obtained by filtering the image $I_G$ with the filter kernel $K$, which is defined as follows:
K(i,j) =
\begin{cases}
1/25, & \text{if } i, j \in \{0,\ 2,\ 4,\ 6,\ 8\} \\
0, & \text{otherwise}
\end{cases}
The filter $K$ can be considered a scaled version of the filter kernel $K_o$. The reason that we do not use the two filtering kernels $K_o$ and $K_r$ is that they have a larger spatial support, which means they cover a larger region of the infrared image. As is known in edge detection, if an edge can be found by both a small filter and a large filter, the former may be better than the latter for determining the edge position. Therefore, we introduce the filter $K$ by downsampling (a scale transform in the discrete version) the filter kernel $K_o$ by a factor of 2 around the center. It should be noted that, if we replace $I_G$ and $I_{GK}$ with $I$ and $I_{K_r}$ in Equation (14), respectively, then as $q \to \infty$ the soft mark tends to the hard mark of Equation (13). When $q = 2$, the soft mark (Equation (14)) for the image in Figure 2a is shown in Figure 2d.
The infrared image and its smoothed image are marked with the soft mark as follows:
I_M = I \circ M, \qquad I_{GM} = I_G \circ M
The original image in Figure 2a and its smoothed image, both marked with the soft mark, are shown in Figure 2e,f, respectively. By comparing Figure 2d with Figure 2b, and Figure 2e with Figure 2c, we find that the soft mark does not change the intensity abruptly as the hard mark does, but smoothly weights the intensity in the vicinity of the edge. According to Equation (15), if the value of $M$ is lower (as in the pedestrian boundary regions of Figure 2d), the corresponding intensity of $I_M$ will be lower too. As a result, compared with the original image, the boundary in Figure 2e is clearer. Obviously, a larger $q$ leads to a clearer boundary. However, the boundary is imperfect, and the intensity inhomogeneity becomes worse in the interior of the target region. In contrast, a smaller $q$ may result in an unacceptable boundary enhancement, but prevents the intensity inhomogeneity, which is caused by weighting the intensity, from getting worse. Therefore, a soft mark with an appropriate parameter $q$ ensures that the weighted intensity in the vicinity of the desired boundary is useful for stopping the evolving contour, and makes it possible to adjust the intensity in the interior of the target region to address the intensity inhomogeneity problems. The parameter $q$ is empirically set to 2 in our experiments.
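The boundary enhancement step can be sketched in Python as follows (assumptions: the second branch of Equation (14) is the intensity ratio raised to the power $q$, consistent with the hard-mark limit as $q \to \infty$; a small constant avoids division by zero; `kernel_multiband` and `smooth` are the helper sketches given earlier):

import numpy as np
from scipy import ndimage

def soft_mark(I_G, q=2, eps=1e-6):
    """Soft mark M of Equation (14) computed from the smoothed image I_G."""
    K = kernel_multiband(9, 2)                              # the scaled-down kernel K defined above
    I_GK = ndimage.convolve(I_G.astype(float), K, mode='reflect')
    ratio = I_G / (I_GK + eps)
    return np.where(I_G >= I_GK, 1.0, ratio ** q)           # Equation (14)

def boundary_enhance(I, I_G, q=2):
    """Marked images I_M and I_GM of Equation (15)."""
    M = soft_mark(I_G, q)
    return I * M, I_G * M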

3.2. Intensity Adjustment

If there is intensity inhomogeneity in the original infrared image, the inhomogeneous intensity may lead to a pseudo boundary in the interior of the target region. Furthermore, the soft mark may aggravate the intensity inhomogeneity, because the soft mark values are less than one near the edges in the target region. Therefore, in this section we adjust the intensity to reduce the intensity inhomogeneity. Intuitively, the intensity adjustment should depend on the current intensity value and the image properties. We adjust the marked image $I_{GM}$ as follows:
I_{GMA} = I_{GM} \circ \big(1 + f(I_{GM})\big)
where $f(I_{GM})$ is a weight function that controls the adjustment value; its size is the same as that of $I_{GM}$.
Our approach relies on the assumption that the target region has a hotter (brighter) appearance than most of the background region. Let the maximum and average values of the marked image $I_{GM}$ be $I_{max}$ and $I_{mean}$, respectively. It can be observed from the thermal infrared image that the intensity values of the darker target regions are lower than those of the brighter target regions, but the majority of them are larger than most of the background intensity values. Therefore, we can reasonably assume that the intensity values of the darker target regions are close to or larger than the mean value $I_{mean}$, and assign larger weights to those pixels. Meanwhile, if the intensity value $I_{GM}(i,j)$ is closer to $I_{max}$, a smaller weight value should be assigned. In this paper, we make use of the normalized Gaussian function (Equation (17)) for the pixels whose intensity values belong to $[I_{mean}, I_{max}]$, in which $x_{i,j} = I_{GM}(i,j)/I_{max}$ and $\mu = I_{mean}/I_{max}$. The variance of the normalized Gaussian function is set to 0.2, which guarantees that the adjusted intensity values are not larger than the maximum intensity value $I_{max}$.
Naturally, if the intensity value $I_{GM}(i,j)$ of a pixel is close to zero, it is overwhelmingly probable that the pixel belongs to the background, and the weight $[f(I_{GM})]_{i,j}$ should approach zero. In order to maintain a smooth change of the adjusted intensity values, we also utilize the normalized Gaussian function for the pixels whose intensity values belong to $[0, I_{mean}]$. Thus, the weight function $f(I_{GM})$ is defined as:
[f(I_{GM})]_{i,j} =
\begin{cases}
\dfrac{\exp\!\left(-(x_{i,j}-\mu)^2/0.2\right) - \exp\!\left(-(1-\mu)^2/0.2\right)}{1 - \exp\!\left(-(1-\mu)^2/0.2\right)}, & 1 \ge x_{i,j} \ge \mu \\[2.5ex]
\dfrac{\exp\!\left(-(x_{i,j}-\mu)^2/0.2\right) - \exp\!\left(-\mu^2/0.2\right)}{1 - \exp\!\left(-\mu^2/0.2\right)}, & 0 \le x_{i,j} < \mu
\end{cases}
For example, the maximum and average values of the image shown in Figure 2f are $I_{max} = 255$ and $I_{mean} = 80.6$, respectively. The weight function and the adjustment value for intensity values 0 to 255 are shown in Figure 3, and the intensity-adjusted image is shown in Figure 4a. Comparing Figure 4a with the original image in Figure 2a, it is observed that the intensities of the target region become approximately uniform or change smoothly. The difference images in Figure 4b,c demonstrate the intensity change of each pixel, which shows the desired result: the intensities of the gray regions of the target have been amplified. Meanwhile, Figure 4a shows that the blurry boundary between the target and the background becomes more distinct. We should notice that the intensities of the background have also been enlarged. However, the background is much darker than the target, and we have enhanced the boundary, so the target is still prominent.
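A short Python sketch of the intensity adjustment (Equations (16) and (17)) is given below, assuming intensities in [0, 255] and the fixed variance constant 0.2 from the text:

import numpy as np

def adjust_intensity(I_GM, var=0.2):
    """Adjusted image I_GMA = I_GM * (1 + f(I_GM)) of Equation (16)."""
    I_max = I_GM.max()
    I_mean = I_GM.mean()
    x = I_GM / I_max                                  # normalized intensities x_{i,j}
    mu = I_mean / I_max
    g = np.exp(-(x - mu) ** 2 / var)                  # Gaussian centred at mu
    upper = (g - np.exp(-(1 - mu) ** 2 / var)) / (1 - np.exp(-(1 - mu) ** 2 / var))
    lower = (g - np.exp(-mu ** 2 / var)) / (1 - np.exp(-mu ** 2 / var))
    f = np.where(x >= mu, upper, lower)               # weight function of Equation (17)
    return I_GM * (1.0 + f)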
After we obtain the processed image $I_{GMA}$, the edge indicator function is computed as:
g(I_{GMA}) = \frac{1}{1 + |\nabla I_{GMA}|^2}
By substituting Equation (15) into Equation (16), we have $I_{GMA} = I_G \circ M \circ \big(1 + f(I_G \circ M)\big)$. If we let $P(I_G) = M \circ \big(1 + f(I_G \circ M)\big)$, then:
I_{GMA} = I_G \circ P(I_G)
This means that, according to the image properties, we adaptively adjust $I_G$ and obtain $I_{GMA}$. The resulting edge indicator function $g(I_{GMA})$ will then stop the evolving curve at the desired boundary.

3.3. Level Set Based Image Segmentation

The proposed infrared image segmentation algorithm consists of four steps: image smoothing, boundary enhancement, intensity adjustment, and level set-based image segmentation. The segmentation algorithm is summarized as follows (a high-level sketch in code is given after the summary):
Input: the infrared image   I .
Segmentation:
  • Image Smoothing. Using the filter G (Equation (12)), smooth the image I and obtain the image   I G .
  • Boundary Enhancement. Compute the soft mark M ( i , j ) (Equation (14)) for the image I G . Then the boundary enhanced image I G M is obtained with Equation (15).
  • Intensity Adjustment. Calculate the weight $[f(I_{GM})]_{i,j}$ for each pixel with Equation (17). Adjust the intensity of the image $I_{GM}$ with Equation (16) to obtain $I_{GMA}$.
  • Level Set Based Image Segmentation. Generate the edge-stop function $g(I_{GMA})$ with Equation (18). Equation (11) is applied to carry out the level set evolution and obtain the infrared image segmentation result, in which the termination condition is that the evolving contour does not change for five iterations, or the number of iterations reaches a preset value.
Output: The segmentation result.
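As a rough illustration of how these four steps fit together, the following Python sketch reuses the helper functions sketched in Sections 3.1 and 3.2; the DRLSE evolution of Equation (11) is injected as a callable, since an actual DRLSE implementation is outside the scope of this sketch:

import numpy as np

def ialse_segment(I, binary_mask, drlse_evolve, sigma=2.0, q=2, alpha=2.5, max_iter=1500):
    I_G = smooth(I, binary_mask, sigma)              # step 1: image smoothing with the filter G
    _, I_GM = boundary_enhance(I, I_G, q)            # step 2: boundary enhancement with the soft mark
    I_GMA = adjust_intensity(I_GM)                   # step 3: intensity adjustment
    gy, gx = np.gradient(I_GMA)
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)              # edge-stop function of Equation (18)
    phi = drlse_evolve(g, alpha, max_iter)           # step 4: caller-supplied DRLSE evolution (Equation (11))
    return phi < 0                                   # segmentation mask, assuming phi < 0 inside the contour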

4. Experimental Results and Discussions

In this section, we present the experimental results of the proposed method on the benchmark infrared images. Meanwhile, in order to demonstrate the effectiveness of the proposed algorithm, we also compare it with the state-of-the-art approaches.

4.1. Data Set and Evaluation Measures

In order to fully test the performance of the proposed method, we chose thermal infrared images from different benchmark datasets, namely the OSU Thermal Pedestrian Database (OSUT) [35], the Terravic Motion IR Database (TMID) [36], the Pedestrian Infrared/visible Stereo Video Dataset (PISVD) [37] and the Infrared Action Recognition dataset [38], because the images of the different datasets are captured with different devices under different environments. Pedestrian regions in OSUT are small and the images are noisy. When TMID pedestrians are indoors, filament lamps seriously affect the segmentation, and the intensity of some pedestrian regions is similar to the background. When pedestrians in TMID are outdoors, the intensities of both the background and the pedestrians are inhomogeneous, although the pedestrian intensity is homogeneous in some images. Pedestrians in PISVD are indoors and their intensity is severely inhomogeneous. There is little noise in the Infrared Action Recognition images, but the floor reflects the pedestrians and the pedestrian intensity is also inhomogeneous. The sizes of the images in these datasets are 360 × 240, 320 × 240, 480 × 360 and 293 × 256 pixels, respectively. The ground truths of these images are drawn by an expert.
The accuracy and precision measures for image segmentation evaluation can be broadly classified into distance-based coefficients, region-based coefficients, and statistical analyses of the entire images [39]. In this paper, we adopt five measures to characterize the performance of the proposed infrared image segmentation algorithm: the Dice coefficient, also known as the similarity index (SI), the Jaccard index (JI), the Hausdorff distance, the conformity coefficient, and the area overlap error (AOE) [40].
Let $\Omega_1$ and $\Omega_2$ stand for two sets (if $\Omega_1$ and $\Omega_2$ do not intersect, we still regard them as intersecting, but with an intersection of area zero). The Jaccard index (JI) measures the ratio of the area of the intersection of $\Omega_1$ and $\Omega_2$ to the area of their union:
\mathrm{JI} = \frac{|\Omega_1 \cap \Omega_2|}{|\Omega_1 \cup \Omega_2|}
The Dice coefficient (SI) is calculated as twice the intersection area divided by the sum of the two individual areas:
\mathrm{SI} = \frac{2\,|\Omega_1 \cap \Omega_2|}{|\Omega_1| + |\Omega_2|}
The Hausdorff distance is a distance-based coefficient, defined as:
H(\Omega_1, \Omega_2) = \max\{h(\Omega_1, \Omega_2),\ h(\Omega_2, \Omega_1)\}
where $h(\Omega_1, \Omega_2) = \sup_{a \in \Omega_1} \inf_{b \in \Omega_2} \|a - b\|$ and $\|\cdot\|$ is the chosen norm. It measures the distance between the segmentation contour (or surface in 3D) and the true boundary, and is used when the delineation of the boundary is critical.
The conformity coefficient measures the global similarity and can be expressed in terms of JI or SI as follows:
K_c = \frac{2\,\mathrm{JI} - 1}{\mathrm{JI}} = \frac{3\,\mathrm{SI} - 2}{\mathrm{SI}}
The area overlap error (AOE) between $\Omega_1$ and $\Omega_2$ is defined as:
\mathrm{AOE} = 1 - \frac{|\Omega_1 \cap \Omega_2|}{|\Omega_1| + |\Omega_2| - |\Omega_1 \cap \Omega_2|}
It should be noted that both SI and JI have a minimum score of zero when there is no intersection at all, whereas the conformity coefficient $K_c$ is always smaller than the other two coefficients and has a much wider range of scores $(-\infty, 1]$; we therefore also use the conformity coefficient to evaluate the accuracy and precision of the proposed algorithm. According to Equations (20)–(24), the larger JI, SI and $K_c$ (whose upper bound is 1), or the smaller $H(\Omega_1, \Omega_2)$ and AOE, the better the segmentation result.
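The five measures can be computed from the binary segmentation mask and the ground-truth mask; the following Python sketch is one straightforward implementation (the Hausdorff distance here is taken between the foreground pixel coordinates of the two masks):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def evaluate(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    ji = inter / union                                   # Jaccard index, Equation (20)
    si = 2.0 * inter / (seg.sum() + gt.sum())            # Dice / similarity index, Equation (21)
    h = max(directed_hausdorff(np.argwhere(seg), np.argwhere(gt))[0],
            directed_hausdorff(np.argwhere(gt), np.argwhere(seg))[0])   # Hausdorff distance, Equation (22)
    kc = (2.0 * ji - 1.0) / ji                           # conformity coefficient, Equation (23)
    aoe = 1.0 - inter / (seg.sum() + gt.sum() - inter)   # area overlap error, Equation (24)
    return {'SI': si, 'JI': ji, 'H': h, 'Kc': kc, 'AOE': aoe}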

4.2. Experimental Setting

The infrared image segmentation algorithms were implemented in the MATLAB 2017a environment on a computer with two Intel Xeon E5-2687W v2 3.4 GHz CPUs and 64 GB of RAM. In the experiments with our proposed method, we set the smoothing filter parameters to N = 7 and σ = 2, and the soft mark parameter to q = 2. The parameters of the level set method are set as in [27], except for the parameter α, which is adjusted according to the segmented infrared images.

4.3. Comparisons of Edge Stop Function

As summarized in Section 3, the purpose of IALSE is to generate a more robust edge indicator function so that the evolving curve can stop at the desired boundary. We convert the edge stop functions generated by IALSE, DRLSE and Robust_ESF into grey images that are shown in Figure 5, in which the value of the indicator function is proportional to the intensity value.
As stated in [5] and implied by the energy minimization principle of Equation (11), when the desired boundary exhibits high variations in the edge indicator function, the contour will stop at those positions. It can be seen from Figure 5 that the proposed edge indicator function takes larger and more uniform values on the target regions and varies strongly around the boundary. Therefore, the edge indicator function of this paper is better than those of the two compared methods.

4.4. Method Comparison

In recent years, many kinds of level set or active contour algorithms have been proposed for image segmentation. To demonstrate the effectiveness of the proposed algorithm, we compared our method (IALSE) with other segmentation methods, namely DRLSE by Li et al. [27], Robust_ESF by Pratondo et al. [19], FCMLSM by Li et al. [41], and LSACM by Zhang et al. [42]. In Figure 6, the intensity in the target region of the first image appears approximately uniform to human visual perception, so it can be well segmented with some existing level set-based methods.
We use this image to test whether our processing can improve the segmentation on infrared images with high-quality targets. For the other five images, the intensity is obviously inhomogeneous in the target region, with which we demonstrate that our method can obtain satisfactory segmentation results. The segmentation results of the proposed and compared methods are shown in Figure 7. It can be seen from the first row of Figure 7 that the targets are well segmented by IALSE, FCMLSM and Robust_ESF. However, the FCMLSM and LSACM methods segment some background regions into the target regions. Therefore, our proposed method works well on infrared images with high-quality targets. The second row shows that there is intensity inhomogeneity in the target region, but the target region is distinct from the background; thus the target is well segmented by the first four methods, among which FCMLSM is the best. For the infrared images with intensity inhomogeneity (third to sixth rows), our method is better than the other approaches. As shown in the third row of Figure 7, the proposed method extracts the target very well, whereas, due to the intensity inhomogeneity, the segmentation results of the other methods exclude the foot and leg regions that are covered by the boots. In the sixth row of Figure 7, five target regions are well segmented by our introduced method. The target at the lower left corner is segmented by FCMLSM into two regions that are separated by a region with lower intensity values, and LSACM loses a target region. The Robust_ESF method is designed to solve the poorly defined boundary problem. Therefore, our method is better than the other methods on infrared images with intensity inhomogeneity.

4.5. Quantitative Evaluation

In order to objectively evaluate the performance of the introduced method, we make use of the five measures JI, SI, H, $K_c$ and AOE. All measures are calculated between the segmentation result and the ground truth. The ground truths of the images in Figure 6, drawn by an expert, are shown in Figure 8. The calculated measure values for IALSE and the compared methods are listed in Table 1, in which the best value for each image is marked in bold. It should be noted that if some background regions are also segmented into the target regions, the corresponding measure values are worse than those of the proposed method. To make the evaluation data more intuitive, we also present the data of Table 1 as bar graphs in Figure 9.
Due to the boundary enhancement and intensity adjustment, the thermal infrared images are well segmented with our proposed method. At the same time, a side benefit is that the curve evolution of our method automatically stops at the desired boundary for all experimental images. The number of iterations that the proposed method needs to achieve the segmentation results is listed in Table 2.
We conducted experiments on the infrared images in Figure 6 with different parameter values q = 1, 2, …, 10, in which the parameter α is adjusted from 0.5 to 10 with a step size of 0.2. The best segmentation accuracy for each q, evaluated by SI and JI, is given in Figure 10. It can be seen that it is difficult to find an optimal parameter q for all experimental images. However, the segmentation results with q = 2 are better than those with q = 1. At the same time, Figure 11 shows the average CPU time of the proposed method with different q on the different experimental images. To make the data more reliable, we repeat each experiment 10 times and take the average. We find that, compared with other values, the proposed method with q = 2 is not the slowest, and is even faster than most of the experiments with other values of q. Therefore, we set q = 2 when comparing with other existing methods. One of our aims is to reduce the intensity inhomogeneity.
However, our method is still sensitive to initialization for images with significant intensity inhomogeneity in the target region. As shown in Figure 12, we obtain different segmentation results under different initial curves while the other parameters are kept the same. We can see that the body covered by clothes can be segmented well even though its intensity is similar to the background. Besides, if we want to segment the hands or head, the initial contour has to include part of the hand or head regions. Because the intensities of the hands and head differ from those of the body, an appropriate initialization is necessary to obtain a satisfactory segmentation result.

5. Conclusions

In this work, we have introduced a robust thermal infrared image segmentation method based on the level set formulation, which we call IALSE. The proposed IALSE method increases the contrast between the target regions and the background, and adjusts the intensity of the target region to be more homogeneous in the infrared image. These strategies make the curve evolution stop at the desired boundary automatically. To evaluate the proposed method for infrared image segmentation, we conducted experiments on thermal infrared images chosen from four benchmark datasets, and compared our method with some existing methods. Both the subjective evaluation and the objective measures demonstrate the superior performance of our method.

Acknowledgments

This research is funded by the National Natural Science Foundation of China (NSFC) under grant No. 61371175.

Author Contributions

Yulong Qiao and Ziwei Wei conceived and designed the experiments; Ziwei Wei performed the experiments; Yulong Qiao and Ziwei Wei analyzed the data; Yan Zhao contributed analysis tools; Ziwei Wei wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shaik, J.; Iftekharuddin, K.M. Detection and tracking of targets in infrared images using Bayesian techniques. Opt. Laser Technol. 2009, 41, 832–842. [Google Scholar] [CrossRef]
  2. Liu, J.; Tang, Z.; Cui, Y.; Wu, G. Local Competition-Based Superpixel Segmentation Algorithm in Remote Sensing. Sensors 2017, 17, 1364. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, R.; Zhu, S.; Zhou, Q. A Novel Gradient Vector Flow Snake Model Based on Convex Function for Infrared Image Segmentation. Sensors 2016, 16, 1756. [Google Scholar] [CrossRef] [PubMed]
  4. Osher, S.; Sethian, J.A. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 1988, 79, 12–49. [Google Scholar] [CrossRef]
  5. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic Active Contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
  6. Caselles, V.; Catt, F.; Coll, T.; Dibos, F. A geometric model for active contours in image processing. Numer. Math. 1993, 66, 1–31. [Google Scholar] [CrossRef]
  7. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic active contours. In Proceedings of the Fifth International Conference on Computer Vision, Cambridge, MA, USA, 20–23 June 1995; pp. 694–699. [Google Scholar]
  8. Malladi, R.; Sethian, J.A.; Vemuri, B.C. Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 158–175. [Google Scholar] [CrossRef]
  9. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  10. Xu, C.; Prince, J.L. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 1998, 7, 359–369. [Google Scholar] [PubMed]
  11. Mumford, D.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [Google Scholar] [CrossRef] [Green Version]
  12. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed]
  13. Vese, L.A.; Chan, T.F. A Multiphase Level Set Framework for Image Segmentation Using the Mumford and Shah Model. Int. J. Comput. Vis. 2002, 50, 271–293. [Google Scholar] [CrossRef]
  14. Li, C.; Huang, R.; Ding, Z.; Gatenby, J.C.; Metaxas, D.N.; Gore, J.C. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process. 2011, 20, 2007–2016. [Google Scholar] [PubMed]
  15. Zhou, D.; Zhou, H.; Shao, Y. An improved Chan—Vese model by regional fitting for infrared image segmentation. Infrared Phys. Technol. 2016, 74, 81–88. [Google Scholar] [CrossRef]
  16. Arandjelovic, O. Object Matching Using Boundary Descriptors. In Proceedings of the British Machine Vision Conference 2012, Surrey, UK, 3–7 September 2012; pp. 1–11. [Google Scholar]
  17. Cote, M.; Saeedi, P. Automatic Rooftop Extraction in Nadir Aerial Imagery of Suburban Regions Using Corners and Variational Level Set Evolution. IEEE Trans. Geosci. Remote Sens. 2013, 51, 313–328. [Google Scholar] [CrossRef]
  18. Arandjelović, O.; Pham, D.-S.; Venkatesh, S. Efficient and accurate set-based registration of time-separated aerial images. Pattern Recognit. 2015, 48, 3466–3476. [Google Scholar] [CrossRef]
  19. Pratondo, A.; Chui, C.-K.; Ong, S.-H. Robust Edge-Stop Functions for Edge-Based Active Contour Models in Medical Image Segmentation. IEEE Signal Process. Lett. 2016, 23, 222–226. [Google Scholar] [CrossRef]
  20. Ilunga-Mbuyamba, E.; Cruz-Duarte, J.M.; Avina-Cervantes, J.G.; Correa-Cely, C.R.; Lindner, D.; Chalopin, C. Active contours driven by Cuckoo Search strategy for brain tumour images segmentation. Expert Syst. Appl. 2016, 56, 59–68. [Google Scholar] [CrossRef]
  21. Ngo, T.A.; Lu, Z.; Carneiro, G. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance. Med. Image Anal. 2016, 35, 159–171. [Google Scholar] [CrossRef] [PubMed]
  22. Rousson, M.; Paragios, N.; Deriche, R. Implicit Active Shape Models for 3D Segmentation in MR Imaging. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention – MICCAI 2004, Saint-Malo, France, 26–29 September 2004. [Google Scholar]
  23. Prabha, S.; Suganthi, S.S.; Sujatha, C.M. An approach to analyze the breast tissues in infrared images using nonlinear adaptive level sets and Riesz transform features. Technol. Health Care 2015, 23, 429–442. [Google Scholar] [CrossRef] [PubMed]
  24. Tan, Y.; Guo, Y.; Gao, C. Background subtraction based level sets for human segmentation in thermal infrared surveillance systems. Infrared Phys. Technol. 2013, 61, 230–240. [Google Scholar] [CrossRef]
  25. Akula, A.; Khanna, N.; Ghosh, R.; Kumar, S.; Das, A.; Sardana, H.K. Adaptive contour-based statistical background subtraction method for moving target detection in infrared video sequences. Infrared Phys. Technol. 2014, 63, 103–109. [Google Scholar] [CrossRef]
  26. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Level Set Evolution without Re-Initialization: A New Variational Formulation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 430–436. [Google Scholar]
  27. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254. [Google Scholar] [PubMed]
  28. Zhang, K.; Zhang, L.; Song, H.; Zhang, D. Reinitialization-Free Level Set Evolution via Reaction Diffusion. IEEE Trans. Image Process. 2013, 22, 258–271. [Google Scholar] [CrossRef] [PubMed]
  29. Li, M.; He, C.; Zhan, Y. Tensor diffusion level set method for infrared targets contours extraction. Infrared Phys. Technol. 2012, 55, 19–25. [Google Scholar] [CrossRef]
  30. Zhao, F.; Zhao, J.; Zhao, W.; Qu, F. Guide filter-based gradient vector flow module for infrared image segmentation. Appl. Opt. 2015, 54, 9809–9817. [Google Scholar] [CrossRef] [PubMed]
  31. Estellers, V.; Zosso, D.; Lai, R.; Osher, S.; Thiran, J.P.; Bresson, X. Efficient algorithm for level set method preserving distance function. IEEE Trans. Image Process. 2012, 21, 4722–4734. [Google Scholar] [CrossRef] [PubMed]
  32. Natarajan, B.; Bhaskaran, V.; Konstantinides, K. Low-complexity block-based motion estimation via one-bit transforms. IEEE Trans. Circuits Syst. Video Technol. 1997, 7, 702–706. [Google Scholar] [CrossRef]
  33. Erturk, S. Multiplication-Free One-Bit Transform for Low-Complexity Block-Based Motion Estimation. IEEE Signal Process. Lett. 2007, 14, 109–112. [Google Scholar] [CrossRef]
  34. Erturk, S. Region of Interest Extraction in Infrared Images Using One-Bit Transform. IEEE Signal Process. Lett. 2013, 20, 952–955. [Google Scholar] [CrossRef]
  35. Davis, J.W.; Keck, M.A. A Two-Stage Template Approach to Person Detection in Thermal Imagery. In Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, Breckenridge, CO, USA, 5–7 January 2005; Volume 1, pp. 364–369. [Google Scholar]
  36. Miezianko, R. Terravic Research Infrared Database. IEEE OTCBVS WS Series Bench. Available online: http://vcipl-okstate.org/pbvs/bench/ (accessed on 3 August 2017).
  37. Bilodeau, G.-A.; Torabi, A.; St-Charles, P.-L.; Riahi, D. Thermal—Visible registration of human silhouettes: A similarity measure performance evaluation. Infrared Phys. Technol. 2014, 64, 79–86. [Google Scholar] [CrossRef]
  38. Gao, C.; Du, Y.; Liu, J.; Lv, J.; Yang, L.; Meng, D.; Hauptmann, A.G. InfAR dataset: Infrared action recognition at different times. Neurocomputing 2016, 212, 36–47. [Google Scholar] [CrossRef]
  39. Chang, H.H.; Zhuang, A.H.; Valentino, D.J.; Chu, W.C. Performance measure characterization for evaluating neuroimage segmentation algorithms. Neuroimage 2009, 47, 122–135. [Google Scholar] [CrossRef] [PubMed]
  40. Sofian, H.; Than, J.C.M.; Noor, N.M.; Mohamad, S. Lumen boundary detection in IVUS medical imaging using structured element. In Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication, Beppu, Japan, 5–7 January 2017; pp. 1–7. [Google Scholar]
  41. Li, B.N.; Chui, C.K.; Chang, S.; Ong, S.H. Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation. Comput. Biol. Med. 2011, 41, 1–10. [Google Scholar] [CrossRef] [PubMed]
  42. Zhang, K.; Zhang, L.; Lam, K.M.; Zhang, D. A Level Set Approach to Image Segmentation With Intensity Inhomogeneity. IEEE Trans. Cybern. 2016, 46, 546–557. [Google Scholar] [CrossRef] [PubMed]
Figure 1. DRLSE segmentation results. (a) Original image; (b) α = 2.5 ; (c) α = 2.7 ; (d) α = 3 .
Figure 2. (a) Original image; (b) One-bit image; (c) Original image masked with (b); (d) Soft mark image; (e) Original image marked with (d); (f) Smoothed image marked with (d).
Figure 3. (a) Weight functions for the intensity value 0 to 255; (b) Adjustment value for the intensity value 0 to 255.
Figure 4. (a) Intensity adjusted image $I_{GMA}$; (b) The difference image between $I_{GMA}$ and $I_{GM}$; (c) The difference image between $I_{GMA}$ and $I_G$.
Figure 5. Edge indicator functions are generated with the proposed method (the first column); DRLSE (the second column); Robust_ESF (the third column).
Figure 6. Source thermal infrared images.
Figure 7. Segmentation results. From left to right (a) The proposed method IALSE (rectangle stands for the initial curve); (b) FCMLSM; (c) LSACM; (d) Robust_ESF; (e) DRLSE.
Figure 8. Ground truths.
Figure 9. Bar graph of the quantitatively analyzed segmentation results of the infrared images. (a) SI; (b) JI; (c) H; (d) $K_c$; (e) AOE.
Figure 10. Segmentation accuracy indices with different parameter values q: (a) SI and (b) JI.
Figure 11. Average execution time of proposed method with different q in different images.
Figure 12. Segmentation results with different initial contours (cyan lines stand for the initial curves). (a) Segmentation result with initial contour around the neck. (b) Segmentation result with initial contour around the shoulder. (c) Segmentation result with initial contour around the wrist. (d) Segmentation result with initial contour around the shoulder and wrist.
Table 1. Calculation data of the compared methods and the proposed method.
Method              Measure    A          B          C          D          E          F          Average
IALSE (proposed)    SI         0.9431     0.9476     0.9617     0.9229     0.9457     0.8885     0.9349
                    JI         0.8923     0.9005     0.9263     0.8568     0.8970     0.7993     0.8787
                    H          2.8242     7.0711     3.1623     5.6569     6.7082     4.1231     4.9243
                    K_c        0.8794     0.8895     0.9204     0.8328     0.8852     0.7490     0.8594
                    AOE        0.1077     0.0995     0.0737     0.1432     0.1030     0.2007     0.1213
FCMLSM              SI         0.8490     0.9480     0.7842     0.6201     0.8866     0.7839     0.8120
                    JI         0.7376     0.9012     0.6451     0.4494     0.7962     0.6447     0.6957
                    H          60.0083214.3126.784.31444.29488.769
                    K_c        0.6442     0.8903     0.4498     −0.2252    0.7441     0.4488     0.492
                    AOE        0.2624     0.0988     0.3549     0.5506     0.2038     0.3553     0.3043
LSACM               SI         0.8301     0.9284     0.7618     0.7241     0.9106     0.7061     0.8102
                    JI         0.7096     0.8663     0.6153     0.5676     0.8358     0.5458     0.6901
                    H          62.0973215.783460.108255.362471.725
                    K_c        0.5907     0.8457     0.3747     0.2380     0.8036     0.1677     0.5034
                    AOE        0.2904     0.1337     0.3847     0.4324     0.1642     0.4542     0.3099
Robust_ESF          SI         0.8989     0.9039     0.9036     0.6973     0.9134     0.8215     0.8564
                    JI         0.8164     0.8247     0.8242     0.5353     0.8407     0.6971     0.7564
                    H          3.1623     9.0554     19.417     14.422     12.083     7.2801     10.903
                    K_c        0.7751     0.7874     0.7867     0.1319     0.8105     0.5655     0.6429
                    AOE        0.1854     0.1753     0.1758     0.4647     0.1593     0.3029     0.2439
DRLSE               SI         0.8991     0.8195     0.8288     0.6822     0.7033     0.6869     0.7637
                    JI         0.8166     0.6942     0.7076     0.5176     0.5424     0.5232     0.6265
                    H          5.831018.02835.12821.02473.6821227.479
                    K_c        0.7755     0.5595     0.5868     0.0682     0.1563     0.0885     0.3443
                    AOE        0.1834     0.3058     0.2924     0.4824     0.4576     0.4768     0.3664
Table 2. Number of iterations of the proposed method.
Image                   A     B     C     D     E      F
Number of Iterations    415   800   630   560   1035   350
