Article

Fault Detection Based on Multi-Scale Local Binary Patterns Operator and Improved Teaching-Learning-Based Optimization Algorithm

Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
* Author to whom correspondence should be addressed.
Symmetry 2015, 7(4), 1734-1750; https://doi.org/10.3390/sym7041734
Submission received: 30 July 2015 / Revised: 17 September 2015 / Accepted: 21 September 2015 / Published: 28 September 2015

Abstract
Aiming to effectively recognize train center plate bolt loss faults, this paper presents an improved fault detection method. A multi-scale local binary pattern operator containing the local texture information of different radii is designed to extract more discriminative information. An improved teaching-learning-based optimization algorithm is established to optimize the classification results at the decision level. Two new phases, the worst recombination phase and the cuckoo search phase, are incorporated to improve the diversity of the population and enhance exploration. In the worst recombination phase, the worst solution is updated by a crossover recombination operation to prevent premature convergence. The cuckoo search phase is adopted to escape local optima. Experimental results indicate that the recognition accuracy reaches 98.9%, which demonstrates the effectiveness and reliability of the proposed detection method.

1. Introduction

Fault detection of train center plate bolt loss (TCPBL) is an important part of the trouble of moving freight car detection system (TFDS), which is in high demand by the China Ministry of Railways [1]. TCPBL has a high incidence and may lead to train derailment if not found in time [2]. Trackside cameras photograph the key parts of the moving freight cars, and four workers with more than five years' experience then check all these photos to judge whether faults are present [3]. Our goal is to accomplish the same fault detection automatically by computer, so that the efficiency of train safety inspection is greatly improved. In TFDS, the status of the freight cars and the photography conditions, such as illumination, weather, freight car speed, and camera vibration, vary significantly across inspection stations. The photos can therefore suffer from blur, poor illumination, overexposure, and occlusion. In recent years, only a few studies have been conducted with traditional feature extraction methods, such as the gray averages of the local domain [4], binary maps [5], and Haar-like features [6]. However, these methods tend to fail when factors such as angle, illumination, or occlusion change markedly. Therefore, a robust fault detection approach for the TCPBL problem is investigated here.
The local binary patterns (LBP) histogram is a simple yet efficient operator for extracting the local texture information of images [7]. It was first used to extract features of human face images [8] and has since been widely applied to image recognition problems such as automatic eye localization [9], fragile watermarking [10], fingerprint minutiae matching [11], and scene categorization [12]. Zhao et al. [13] combined the local statistical features of Gabor wavelets with LBP features as the final feature vectors and obtained a good result in person-independent facial expression recognition. Gabor wavelets have been widely used to establish new representations of different scales and orientations in the spatial domain [14,15,16], which is beneficial for extracting image texture features. Nouyed et al. [17] pointed out that features of different Gabor channels contribute differently to the final classification task and reported a weighted voting face recognition method. Gao et al. [18] proposed an adaptively weighted local multi-channel Gabor filters (AWMGF) method, which determined the weights of local Gabor magnitude maps by the contribution of each sub-pattern to the classification. Zhu et al. [19] introduced the adaptive boosting (AdaBoost) algorithm to select the important Gabor features in facial expression recognition. However, the optimal weights of the different channels are difficult to derive in theory, so a powerful optimization method is needed to search for the globally optimal weights in a multidimensional space. Rao et al. developed an effective teaching-learning-based optimization (TLBO) algorithm that performed well on mechanical design problems [20], continuous non-linear large-scale problems [21], and heat exchanger problems [22]. Togan et al. [23] presented a design procedure employing TLBO for the discrete optimization of planar steel frames. Yu et al. [24] applied TLBO to several numerical and engineering optimization problems and showed that TLBO is more powerful than the improved bee algorithm (IBA) [25], the hybrid particle swarm optimization with differential evolution (PSO-DE) [26], the modified differential evolution algorithm (COMDE) [27], the g-best guided artificial bee colony (GABC) [28], and the upgraded artificial bee colony (UABC) [29] algorithms.
Previous studies have proven the efficiency of LBP features and the TLBO algorithm in engineering applications; however, little work has been conducted to investigate their effectiveness on the TCPBL problem. Therefore, an improved fault detection method is proposed to offer a reliable solution to the TCPBL problem under varying photographing conditions. A multi-scale local binary patterns (MLBP) operator is designed to extract the texture features of the new representations of the different Gabor channels. By coding information of different scales together with the texture change trends of local areas, it obtains more efficient texture information. An improved teaching-learning-based optimization (ITLBO) algorithm with a worst recombination phase and a cuckoo search phase is proposed to optimize the weights of the classification results of the different channels. The experimental results verify the global search ability of the ITLBO and the effectiveness of the proposed detection method.
The rest of this article is organized as follows: Section 2 first introduces the structure of the proposed fault detection method and then gives a detailed description of the design of the MLBP operator and the ITLBO algorithm. Two sets of experimental results are given in Section 3: results on benchmark problems, compared with some state-of-the-art methods, evaluate the search capability of the ITLBO, and results on the field image database demonstrate the effectiveness of the proposed detection method. Finally, the conclusions of the present work and future plans are given in Section 4.

2. Fault Detection Based on MLBP and ITLBO

The framework of the proposed fault detection method is presented in Figure 1. The Gabor wavelet transform is used to acquire multi-channel representations at different scales and orientations. Then, the MLBP operator is adopted to extract the local texture features of each channel separately. The results of the support vector machine (SVM) classifiers of the different channels are usually treated equally; however, weighting each channel differently has a positive influence on the recognition task. Therefore, the ITLBO algorithm, with its powerful global-optimization search capability, is established to optimize the weights of the classification results of the different channels. Finally, the detection results are acquired by decision-level weighted fusion.
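The decision-level weighted fusion described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the array shapes, the sign-based labeling, and the toy scores are assumptions.

```python
import numpy as np

def fuse_decisions(channel_scores, weights):
    """Decision-level weighted fusion of per-channel classifier outputs.

    channel_scores: (n_channels, n_samples) array of SVM decision values
    weights:        (n_channels,) weight vector found by the optimizer
    Returns a fused label (+1 fault / -1 normal) per sample.
    """
    fused = weights @ channel_scores          # weighted sum over channels
    return np.where(fused >= 0.0, 1, -1)

# Toy example: 3 Gabor channels, 2 samples
scores = np.array([[ 0.8, -0.2],
                   [-0.1, -0.9],
                   [ 0.5, -0.4]])
w = np.array([0.5, 0.2, 0.3])
print(fuse_decisions(scores, w))  # fused scores 0.53 and -0.40 -> [ 1 -1]
```

In the actual method, `w` would be the 40-dimensional channel weight vector optimized by the ITLBO over all five scales and eight orientations.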
Figure 1. The framework of the proposed fault detection approach.

2.1. Multi-Scale LBP Operator

The LBP operator is a powerful local texture descriptor for image feature extraction, which is defined as:
LBP_{P,R} = \sum_{i=0}^{P-1} s(g_i - g_c)\, 2^i, \qquad s(g_i - g_c) = \begin{cases} 1, & g_i \ge g_c \\ 0, & g_i < g_c \end{cases}

where g_c is the gray value of the center point and g_i (i = 0, …, P−1) are the P neighborhood points on a circle of radius R centered at g_c.
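As a concrete illustration, the LBP code of a single pixel can be computed as below. The neighborhood ordering is an assumption (any fixed order yields a valid code), and for R = 1 a square ring of the 8 adjacent pixels is used in place of interpolated circular sampling.

```python
import numpy as np

def lbp_8_1(img, r, c):
    """LBP_{8,1} code of the pixel at (r, c): compare the 8 neighbours
    with the centre value and pack the signs s(g_i - g_c) into one byte."""
    gc = img[r, c]
    # fixed neighbour order: clockwise starting from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for i, (dr, dc) in enumerate(offs):
        if img[r + dr, c + dc] >= gc:   # s(g_i - g_c) = 1 when g_i >= g_c
            code |= 1 << i
    return code

img = np.array([[10, 20, 30],
                [40, 25,  5],
                [60, 70, 80]])
print(lbp_8_1(img, 1, 1))  # -> 244 (bits set for the 5 neighbours >= 25)
```

A histogram of these codes over an image region forms the LBP feature vector used for classification.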
The LBP operator has been successfully used in image recognition because of its good robustness to changes in illumination, angle, and occlusion. However, it does not perform well under severe illumination changes, and it is too coarse to describe the unique details contained in an image. Thus, an improved multi-scale LBP (MLBP) operator is designed to extract more distinctive information and improve recognition performance. The center point is compared with multiple neighborhood points of different radii, and the neighborhood points in the same radial directions are also compared with each other. In this article, two radii and eight radial directions are selected, similar to two LBP operators of different radii. The structure of the MLBP operator is shown in Figure 2.
Figure 2. The MLBP operator model.
A region of 5 × 5 points is selected, composed of two squares with different radii centered on the same point P. C1_i (i = 0, 1, …, 7) is the neighborhood point on the inner square with the smaller radius R1 = 1, and C2_i (i = 0, 1, …, 7) is the neighborhood point on the outer square with the bigger radius R2 = 2. The pixel values along the eight radial directions are compared with each other as indicated by the thick arrows. The 3 × 8 local difference values of the three arrows in the eight clockwise orientations from the center point compose the whole MLBP operator. Referring to the definition of the original LBP operator, only the important sign information is coded by:
MLBP_{diff} = \sum_{i=0}^{7} s(value_{diff})\, 2^i, \qquad s(value_{diff}) = \begin{cases} 1, & value_{diff} > 0 \\ 0, & \text{otherwise} \end{cases}

where diff ∈ {C1_i − P, C2_i − P, C2_i − C1_i} is the local gray difference value from the tail to the head of each arrow. Here C1_i − P is the small-scale descriptor of the inner square, similar to the LBP_{8,1} operator; C2_i − P is the big-scale descriptor of the outer square, similar to the LBP_{8,2} operator; and C2_i − C1_i is the supplementary descriptor of the texture changes between the inner square and the outer square. By coding these three descriptors into the MLBP features, more comprehensive and distinguishable texture information can be obtained than by using only one of them. Besides, the radii R1 and R2 can be changed to obtain better performance. A complete MLBP model is shown in Figure 3.
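The three descriptors can be sketched for one 5 × 5 patch as below; the clockwise direction ordering is an assumption carried over from the LBP sketch, and the patch indexing is illustrative.

```python
import numpy as np

def mlbp(patch):
    """MLBP codes of a 5x5 patch (a sketch of the operator of Figure 2).

    Three 8-bit codes are built from the signs of the differences
    C1_i - P, C2_i - P and C2_i - C1_i along 8 radial directions,
    with s(d) = 1 when d > 0.
    """
    assert patch.shape == (5, 5)
    P = patch[2, 2]
    # 8 radial directions, clockwise from the top-left corner
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = [0, 0, 0]
    for i, (dr, dc) in enumerate(dirs):
        c1 = patch[2 + dr, 2 + dc]          # inner square, R1 = 1
        c2 = patch[2 + 2 * dr, 2 + 2 * dc]  # outer square, R2 = 2
        for k, diff in enumerate((c1 - P, c2 - P, c2 - c1)):
            if diff > 0:
                codes[k] |= 1 << i
    return codes

# On a monotonically increasing patch all three codes mark the same
# "brighter" half-plane of directions.
print(mlbp(np.arange(25).reshape(5, 5)))  # -> [120, 120, 120]
```

Concatenating the histograms of the three codes over an image gives the full MLBP feature.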
Figure 3. A complete MLBP feature.

2.2. Improved TLBO Algorithm

TLBO is an interactive teaching-learning process-inspired algorithm based on the influence of a teacher on the achievements of learners in a class. Although the TLBO algorithm has a fast convergence speed, it may be trapped in the local optima on some difficult optimization problems.

2.2.1. Basic TLBO

TLBO is a population-based method. Learners in a classroom are considered the population, and the different subjects taken by the learners are considered the different design variables of the optimization problem. The achievements of the learners are analogous to the fitness values of the optimization problem. The learner with the best fitness value in the entire population is considered the teacher of the class. The basic TLBO algorithm can be divided into two parts, the teacher phase and the learner phase, whose workings are explained below.

Teacher Phase

During this phase, the teacher tries to raise the mean achievement of the class toward his or her own knowledge level. However, the teacher can only raise the mean achievement of the class to an extent that depends on the learners' capability. Moreover, the achievements of the learners do not increase equally, as different learners possess different knowledge levels. This aspect is presented in mathematical form as follows:
Assume that at any iteration i (i = 1, 2, …, G), there are D subjects (i.e., design variables) offered to N learners (i.e., the population size). X_{j,k,i} is the knowledge of learner k (k = 1, 2, …, N) in a particular subject j (j = 1, 2, …, D), M_{j,i} is the mean knowledge level of the learners in subject j, and T_{j,i} is the knowledge of the teacher (i.e., the best solution) in subject j. Learner k is updated according to the difference between the current and the new mean knowledge levels of each subject, given by:
Difference\_Mean_{j,k,i} = r\,( M_{new} - T_F\, M_{j,i} )
where r is a uniformly distributed random number within the range [0, 1]. As mentioned earlier, T_{j,i} tries to move the mean knowledge level M_{j,i} toward its own level, so T_{j,i} is designated as the new mean knowledge level, i.e., M_{new} = T_{j,i}. T_F is the teaching factor, which decides how much the mean knowledge level is changed, and its value is given by:
T_F = \mathrm{round}\,[\, 1 + \mathrm{rand}(0,1)\,\{2-1\} \,]
where the value of T_F is randomly either 1 or 2 with equal probability, which means T_F is not an input parameter of the algorithm.
Difference\_Mean_{j,k,i} modifies the current learner k according to the following expression:

X^{new}_{j,k,i} = X_{j,k,i} + Difference\_Mean_{j,k,i}
where X^{new}_{j,k,i} is the updated value of X_{j,k,i} and is accepted if it gives a better fitness value than X_{j,k,i}.
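The teacher phase can be sketched for a minimization problem as follows, with the greedy acceptance described above; the `fitness` callable is an assumed interface.

```python
import numpy as np

def teacher_phase(X, fitness):
    """One TLBO teacher phase (minimisation): every learner moves by
    r * (teacher - T_F * mean), and the move is kept only if it improves.

    X:       (N, D) population of learners
    fitness: callable mapping a D-vector to a scalar to minimise
    """
    N, D = X.shape
    f = np.apply_along_axis(fitness, 1, X)
    teacher = X[np.argmin(f)].copy()          # best learner acts as teacher
    mean = X.mean(axis=0)                     # M_j: per-subject class mean
    for k in range(N):
        TF = np.random.randint(1, 3)          # teaching factor: 1 or 2
        r = np.random.rand(D)
        X_new = X[k] + r * (teacher - TF * mean)
        f_new = fitness(X_new)
        if f_new < f[k]:                      # greedy acceptance
            X[k], f[k] = X_new, f_new
    return X

np.random.seed(0)
sphere = lambda x: float(np.sum(x ** 2))
X = teacher_phase(np.random.uniform(-5, 5, (10, 4)), sphere)
```

Because of the greedy acceptance, the best fitness in the population can never get worse during this phase.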

Learner Phase

During this phase, learners interact with other learners and increase their own knowledge if the other learners have more useful knowledge than themselves. The learner phase is expressed as follows:
At the ith iteration, each learner is compared with another randomly chosen learner. For learner X_{P,i}, a learner X_{Q,i} is randomly selected such that X_{P,i} ≠ X_{Q,i}. If X_{Q,i} is better than X_{P,i}, X_{P,i} is moved toward X_{Q,i}; otherwise, it is moved away from X_{Q,i}:

X^{new}_{j,P,i} = \begin{cases} X_{j,P,i} + r\,( X_{j,P,i} - X_{j,Q,i} ), & \text{if } fitness(X_{P,i}) \text{ better than } fitness(X_{Q,i}) \\ X_{j,P,i} + r\,( X_{j,Q,i} - X_{j,P,i} ), & \text{otherwise} \end{cases}

where r is a random number in the range [0, 1]. X^{new}_{j,P,i} is the updated value of X_{j,P,i} and is accepted if it gives a better fitness value than X_{j,P,i}.
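A sketch of the learner phase for minimization, mirroring the two-branch update above; as before, `fitness` is an assumed interface and acceptance is greedy.

```python
import numpy as np

def learner_phase(X, fitness):
    """One TLBO learner phase (minimisation): each learner P is paired with
    a random partner Q, moves toward Q if Q is better, away otherwise."""
    N, D = X.shape
    f = np.apply_along_axis(fitness, 1, X)
    for p in range(N):
        q = np.random.choice([i for i in range(N) if i != p])
        r = np.random.rand(D)
        if f[p] < f[q]:                       # P is better: move away from Q
            X_new = X[p] + r * (X[p] - X[q])
        else:                                 # Q is better: move toward Q
            X_new = X[p] + r * (X[q] - X[p])
        f_new = fitness(X_new)
        if f_new < f[p]:                      # greedy acceptance
            X[p], f[p] = X_new, f_new
    return X

np.random.seed(2)
sphere = lambda x: float(np.sum(x ** 2))
X = learner_phase(np.random.uniform(-5, 5, (10, 4)), sphere)
```

One iteration of the basic TLBO is simply a teacher phase followed by this learner phase, giving the 2 × G × N function evaluations cited later in Section 2.2.4.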

2.2.2. Improved TLBO Algorithm

Two modifications are incorporated into the basic TLBO algorithm to improve its global search capability and avoid being trapped in the local optima.

Worst Recombination Phase

Rao and Patel introduced the elite TLBO algorithm, in which the worst learner is replaced by the elite learner after the learner phase [30]. If duplicate learners exist after the replacement, one randomly selected subject of the duplicate learner is set to a random value. With this scheme, however, the diversity of the population is gradually lost, and the knowledge of all subjects of the worst learner is discarded during the replacement. Yet the worst learner is not necessarily the worst in every subject; it can still hold knowledge in some subjects that is useful for the further search. Therefore, a new worst recombination phase is proposed to maintain the population diversity and enhance the exploration search. Similar to the crossover operator of differential evolution [31], the worst learner is replaced by a crossover recombination of the current worst learner and another randomly chosen learner; thus, useful knowledge in some subjects of the worst learner is still maintained in the new learner. The worst recombination phase is implemented after the learner phase and is explained as follows:
At the ith iteration, let the learner with the worst fitness value be X_{P,i}. For recombination, another learner X_{Q,i} is randomly selected such that X_{P,i} ≠ X_{Q,i}:

X^{new}_{j,P,i} = \begin{cases} X_{j,P,i}, & r_j \le CP \ \text{or} \ j = j_{rand} \\ X_{j,Q,i}, & \text{otherwise} \end{cases}

where X^{new}_{j,P,i} is the recombination of learners X_{P,i} and X_{Q,i} in subject j; r_j ∈ [0, 1] is a uniformly distributed random number generated for each subject; CP ∈ [0, 1] is the crossover probability constant, which decides the distribution proportion of the two learners over all subjects; and j_{rand} ∈ [1, D] is a randomly selected subject that ensures X^{new}_{P,i} inherits at least one subject of X_{P,i}. X^{new}_{P,i} is accepted if it gives a better fitness value than X_{P,i}.
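A sketch of this phase, mirroring the crossover rule above; `CP = 0.5` is the value used in the experiments of Section 3, and `fitness` is again an assumed interface.

```python
import numpy as np

def worst_recombination(X, fitness, CP=0.5):
    """Worst recombination phase (minimisation): DE-style crossover between
    the current worst learner and a random partner, with greedy acceptance."""
    N, D = X.shape
    f = np.apply_along_axis(fitness, 1, X)
    p = int(np.argmax(f))                     # worst learner
    q = np.random.choice([i for i in range(N) if i != p])
    j_rand = np.random.randint(D)             # guarantees one inherited subject
    keep = np.random.rand(D) <= CP            # which subjects stay from X_P
    keep[j_rand] = True
    X_new = np.where(keep, X[p], X[q])
    if fitness(X_new) < f[p]:                 # greedy acceptance
        X[p] = X_new
    return X

np.random.seed(3)
sphere = lambda x: float(np.sum(x ** 2))
X = worst_recombination(np.random.uniform(-5, 5, (6, 4)), sphere)
```

Only the worst learner can change here, and only for the better, so the phase costs a single extra function evaluation per iteration.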

Cuckoo Search Phase

In the basic TLBO algorithm, learners improve by moving toward solutions with higher fitness values, through the input of their teacher and the interaction with other learners. However, the algorithm lacks an operation for jumping out once it is trapped in a local optimum. Cuckoo search is therefore used to prevent the search from running into a local optimum and to overcome this shortcoming of the basic TLBO. It is inspired by the breeding behavior of cuckoos, which lay their eggs in the nests of other species, and by the Lévy flights of birds [32]. A Lévy flight is a random walk in which the step lengths follow a heavy-tailed distribution and the step directions are isotropic and random, so the population can move in random directions. Here, the randomness of the cuckoo search is used to avoid being trapped in local optima.
The learners of a class are divided into two groups according to their achievements (i.e., fitness values). Then, multilevel knowledge can be taught in accordance with the capability of different groups of learners. The elite learners are selected for gentle teaching-learning process while the other learners become the inputs of cuckoo search phase for severe training.
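The cuckoo search phase can be sketched under standard cuckoo-search conventions [32]. The Mantegna approximation for Lévy steps and the parameters `pa` (abandonment fraction) and `alpha` (step scale) are assumptions, since the text does not list them.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5):
    """One Levy-flight step (Mantegna's algorithm): heavy-tailed step
    lengths with random directions, as used in cuckoo search."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, D)
    v = np.random.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_phase(X, fitness, best, pa=0.25, alpha=0.01):
    """Cuckoo search phase sketch (minimisation): Levy mutation of each nest,
    then a biased random walk replaces roughly a fraction pa of nests."""
    N, D = X.shape
    for k in range(N):
        X_new = X[k] + alpha * levy_step(D) * (X[k] - best)
        if fitness(X_new) < fitness(X[k]):    # keep improving mutations
            X[k] = X_new
    # biased random walk on a random fraction pa of the nests
    mask = np.random.rand(N, 1) < pa
    p1, p2 = np.random.permutation(N), np.random.permutation(N)
    return np.where(mask, X + np.random.rand(N, 1) * (X[p1] - X[p2]), X)

np.random.seed(1)
sphere = lambda x: float(np.sum(x ** 2))
X = np.random.uniform(-5, 5, (8, 3))
best = X[np.argmin(np.apply_along_axis(sphere, 1, X))].copy()
X = cuckoo_phase(X, sphere, best)
```

The heavy-tailed Lévy steps occasionally produce very long jumps, which is precisely the "jumping out" behavior the non-elite group needs.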

2.2.3. Procedure of the ITLBO

With the above design, the flowchart of the proposed ITLBO algorithm is illustrated in Figure 4.
Figure 4. The flowchart of ITLBO.
First, N learners are generated randomly as an initial population P. Then the elite learners with higher achievements are selected as the inputs of the teacher phase, the learner phase, and the worst recombination phase. The other learners become the inputs of the cuckoo search phase, in which all solutions (except the best one) are updated by the Lévy flight mutation and some bad solutions are replaced with new solutions by biased random walks. Since multiple operators, including teaching-learning, crossover recombination, and Lévy mutation, are used, exploration and exploitation can be balanced, and a better global search ability with a faster convergence speed can be obtained.

2.2.4. Time Complexity Analysis

The ITLBO algorithm can be divided into the initialization, the elite learners' selection, the teacher phase, the learner phase, the worst recombination phase, and the cuckoo search phase. The time complexity of each procedure is analyzed below:
The initialization of N learners over D subjects has time complexity O[N × D]. The elite learners' selection, which consists of ranking the learners, costs O[N × log N]. Both the teacher phase and the learner phase have time complexity O[N × D]. In the worst recombination phase, the crossover recombination operation costs O[D]. In the cuckoo search phase, both the Lévy flight mutation and the bad-nest replacement are performed in O[N × D], so the total time complexity of this phase is still O[N × D].
Therefore, the total time complexity of the proposed ITLBO algorithm is O[N × D] + O[N × log N] + O[N × D] + O[N × D] + O[D] + O[N × D], which simplifies to O[N × D] + O[N × log N]. Considering that the population size N is limited to a constant value and that the dimension D of optimization problems is usually much larger than log N, the total time complexity of the ITLBO can be taken as O[N × D]. Taking the iteration number G into account, the final computational complexity of the ITLBO is O[G × N × D], which is not large and is the same as that of the basic TLBO. Besides, the total number of function evaluations in the ITLBO algorithm is still 2 × G × N, also the same as in the basic TLBO.

3. Experiments, Results, and Discussions

3.1. Experiments on Benchmark Functions

Eight standard benchmark functions with different characteristics, shown in Table 1, were considered to evaluate the performance of the proposed ITLBO algorithm. The numerical results of the ITLBO were compared with those of the basic TLBO and other state-of-the-art optimization algorithms: the TLBO algorithm with Lévy mutation in the learner phase (LTLBO) [33], the genetic algorithm with multi-parent crossover (GAMPC) [34], and the self-regulating particle swarm optimization algorithm (SRPSO) [35]. "D" refers to the dimension of the optimization problems, "Range" gives the lower and upper bounds of the search spaces, and "fmin" is the theoretical global minimum for these problems. All algorithms were coded in Matlab R2014a and run on a PC with a 2.6 GHz processor, 4 GB RAM, and Windows 10.
Table 1. Test functions considered in the experiment.
Sphere: f_1(x) = \sum_{i=1}^{D} x_i^2; Range [−100, 100]; f_min = 0
Rosenbrock: f_2(x) = \sum_{i=1}^{D-1} [\,100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2\,]; Range [−2.048, 2.048]; f_min = 0
Schwefel P1.2: f_3(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2; Range [−100, 100]; f_min = 0
Schwefel P2.22: f_4(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|; Range [−10, 10]; f_min = 0
Schwefel P2.21: f_5(x) = \max\{\,|x_i|,\ 1 \le i \le D\,\}; Range [−100, 100]; f_min = 0
Rastrigin: f_6(x) = \sum_{i=1}^{D} (x_i^2 - 10 \cos(2\pi x_i) + 10); Range [−5.12, 5.12]; f_min = 0
Griewank: f_7(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(x_i/\sqrt{i}) + 1; Range [−600, 600]; f_min = 0
Ackley: f_8(x) = 20 - 20 \exp(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}) - \exp(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)) + e; Range [−32.768, 32.768]; f_min = 0
To decrease statistical errors, each function was run for 25 independent runs with random seeds. For a fair comparison, the maximum number of function evaluations of the five algorithms was set to 3000, and all test functions were simulated in 10, 30, and 50 dimensions separately. The population size of each algorithm was set to 20. As the TLBO is a parameter-less algorithm, no further parameters are required for its working. The crossover probability constant CP was set to 0.5 for the ITLBO algorithm. The scaling factor γ = 1 and the shape factor α = 0.8 were set for the learner phase with Lévy mutation of the LTLBO. The parameters of GAMPC were set to m = 10, cr = 100%, ρ = [0.05, 0.1, 0.1]. For the SRPSO, the acceleration rate was set to η = 1 and the initial and final inertia weights to ω_I = 1.05 and ω_F = 0.55.
The comparative results of eight benchmark functions for five algorithms are presented in Table 2 in the form of the mean results (M) and the standard deviation (SD). The notations F, A, and D denote function, algorithm, and dimension, respectively. The boldface values given in the table indicate the best results among these five algorithms.
The results show that the TLBO-based algorithms often obtain better solutions than the GAMPC and SRPSO algorithms, which indicates the strong search capability of TLBO. In particular, the ITLBO obtains all the best results on seven functions in 10, 30, and 50 dimensions. Although the SRPSO acquires the best mean results for the Rosenbrock function (D = 10, 30), the ITLBO outperforms the other four algorithms at the higher dimension (D = 50). The performance of the ITLBO is greatly improved over that of the TLBO and LTLBO, which confirms the effectiveness of the worst recombination phase and the cuckoo search phase. The performance of these algorithms generally decreases as the dimension grows, but the performance of the TLBO-based algorithms on the Griewank and Rastrigin functions shows the opposite trend. Črepinšek et al. [36] pointed out that the fitness-distance correlations of some functions increase with the dimension; for example, for Griewank the fitness-distance correlation is 0.45 at D = 2 but 0.99 at D = 50. When the fitness-distance correlation increases with the dimension, such problems actually become easier for TLBO-based algorithms, which explains why the results on the Griewank and Rastrigin functions are better at higher dimensions.
Table 2. Comparison results of ITLBO and state-of-art algorithms on eight test functions of 10, 30, and 50 dimensions with 3000 function evaluations.
Function | Algorithm | M (D = 10) | SD (D = 10) | M (D = 30) | SD (D = 30) | M (D = 50) | SD (D = 50)
Sphere | GAMPC | 1.61 × 10^−6 | 7.56 × 10^−6 | 1.93 × 10^2 | 3.52 × 10^2 | 1.54 × 10^3 | 8.05 × 10^2
Sphere | SRPSO | 9.95 × 10^−1 | 5.85 × 10^−1 | 4.93 × 10 | 1.33 × 10 | 3.73 × 10^2 | 8.83 × 10
Sphere | TLBO | 7.16 × 10^−13 | 1.05 × 10^−12 | 1.76 × 10^−12 | 2.23 × 10^−12 | 2.49 × 10^−12 | 3.03 × 10^−12
Sphere | LTLBO | 1.75 × 10^−17 | 5.54 × 10^−17 | 4.92 × 10^−16 | 1.02 × 10^−15 | 1.53 × 10^−14 | 5.71 × 10^−14
Sphere | ITLBO | 6.92 × 10^−21 | 2.37 × 10^−20 | 3.74 × 10^−19 | 1.39 × 10^−18 | 7.65 × 10^−18 | 3.32 × 10^−17
Rosenbrock | GAMPC | 5.10 | 1.85 | 4.68 × 10 | 1.72 × 10 | 1.44 × 10^2 | 5.50 × 10
Rosenbrock | SRPSO | 4.20 | 6.71 × 10 | 2.34 × 10 | 3.52 | 4.82 × 10 | 3.86
Rosenbrock | TLBO | 7.47 | 5.54 × 10^−1 | 2.74 × 10 | 7.48 × 10^−1 | 4.71 × 10 | 9.65 × 10^−1
Rosenbrock | LTLBO | 7.66 | 3.91 × 10^−1 | 2.67 × 10 | 6.36 × 10^−1 | 4.64 × 10 | 1.11
Rosenbrock | ITLBO | 6.91 | 4.79 × 10^−1 | 2.58 × 10 | 5.68 × 10^−1 | 4.54 × 10 | 7.93 × 10^−1
Schwefel P1.2 | GAMPC | 1.63 | 5.81 | 2.33 × 10^3 | 1.40 × 10^3 | 1.62 × 10^4 | 5.78 × 10^3
Schwefel P1.2 | SRPSO | 11.49 | 4.91 | 3.13 × 10^3 | 9.46 × 10^2 | 1.31 × 10^4 | 4.87 × 10^3
Schwefel P1.2 | TLBO | 1.07 × 10^−12 | 2.98 × 10^−12 | 5.91 × 10^−12 | 1.01 × 10^−11 | 1.07 × 10^−11 | 1.58 × 10^−11
Schwefel P1.2 | LTLBO | 7.76 × 10^−17 | 1.56 × 10^−16 | 9.39 × 10^−15 | 2.85 × 10^−14 | 4.75 × 10^−13 | 2.29 × 10^−12
Schwefel P1.2 | ITLBO | 4.41 × 10^−19 | 1.65 × 10^−18 | 5.37 × 10^−19 | 1.33 × 10^−18 | 6.62 × 10^−18 | 2.63 × 10^−17
Schwefel P2.22 | GAMPC | 4.00 × 10^−7 | 1.13 × 10^−6 | 5.53 × 10^−1 | 5.58 × 10^−1 | 1.00 × 10 | 4.63
Schwefel P2.22 | SRPSO | 2.20 × 10^−1 | 5.43 × 10^−2 | 3.90 | 1.60 | 1.54 × 10 | 3.79
Schwefel P2.22 | TLBO | 2.61 × 10^−7 | 1.89 × 10^−7 | 7.83 × 10^−7 | 4.99 × 10^−7 | 1.57 × 10^−16 | 1.64 × 10^−6
Schwefel P2.22 | LTLBO | 1.41 × 10^−9 | 2.20 × 10^−9 | 5.57 × 10^−9 | 7.18 × 10^−9 | 1.25 × 10^−8 | 1.64 × 10^−8
Schwefel P2.22 | ITLBO | 2.26 × 10^−11 | 3.85 × 10^−11 | 1.33 × 10^−10 | 1.75 × 10^−10 | 4.01 × 10^−10 | 8.51 × 10^−10
Schwefel P2.21 | GAMPC | 5.64 | 5.83 | 4.11 × 10 | 1.08 × 10 | 5.80 × 10 | 1.06 × 10
Schwefel P2.21 | SRPSO | 9.31 × 10^−1 | 2.30 × 10^−1 | 1.03 × 10 | 1.98 | 1.89 × 10 | 2.98
Schwefel P2.21 | TLBO | 8.16 × 10^−7 | 5.79 × 10^−7 | 1.43 × 10^−6 | 1.62 × 10^−6 | 1.28 × 10^−6 | 1.19 × 10^−6
Schwefel P2.21 | LTLBO | 1.96 × 10^−9 | 2.34 × 10^−9 | 1.03 × 10^−8 | 1.70 × 10^−8 | 2.61 × 10^−8 | 3.74 × 10^−8
Schwefel P2.21 | ITLBO | 1.08 × 10^−10 | 1.24 × 10^−10 | 1.26 × 10^−9 | 2.32 × 10^−9 | 2.15 × 10^−9 | 5.36 × 10^−9
Rastrigin | GAMPC | 8.22 | 6.33 | 8.29 × 10 | 3.74 × 10 | 1.75 × 10^2 | 5.65 × 10
Rastrigin | SRPSO | 1.42 × 10 | 5.35 | 9.16 × 10 | 2.85 × 10 | 1.93 × 10^2 | 2.74 × 10
Rastrigin | TLBO | 1.34 × 10 | 1.53 × 10 | 3.14 × 10 | 7.00 × 10 | 2.62 × 10^−1 | 1.31
Rastrigin | LTLBO | 1.84 | 6.59 | 2.59 × 10^−4 | 8.79 × 10^−4 | 4.95 × 10^−5 | 2.05 × 10^−4
Rastrigin | ITLBO | 5.59 × 10^−1 | 2.79 | 8.04 × 10^−7 | 3.99 × 10^−6 | 4.80 × 10^−12 | 1.23 × 10^−11
Griewank | GAMPC | 1.47 × 10^−1 | 1.77 × 10^−1 | 2.16 | 2.93 | 1.68 × 10 | 1.25 × 10
Griewank | SRPSO | 7.86 × 10^−1 | 1.81 × 10^−1 | 1.49 | 1.17 × 10^−1 | 4.39 | 8.24 × 10^−1
Griewank | TLBO | 3.59 × 10^−2 | 1.17 × 10^−1 | 8.84 × 10^−12 | 1.14 × 10^−11 | 8.08 × 10^−12 | 1.61 × 10^−11
Griewank | LTLBO | 2.49 × 10^−10 | 1.15 × 10^−10 | 1.45 × 10^−12 | 1.92 × 10^−12 | 4.63 × 10^−12 | 1.09 × 10^−11
Griewank | ITLBO | 5.15 × 10^−11 | 2.56 × 10^−10 | 3.10 × 10^−16 | 6.13 × 10^−16 | 2.88 × 10^−16 | 5.66 × 10^−16
Ackley | GAMPC | 9.24 × 10^−2 | 3.19 × 10^−1 | 4.06 | 1.86 | 8.42 | 1.73
Ackley | SRPSO | 9.26 × 10^−2 | 2.96 × 10^−1 | 3.53 | 2.72 × 10^−1 | 5.28 | 6.19 × 10^−1
Ackley | TLBO | 2.18 × 10^−7 | 1.28 × 10^−7 | 3.77 × 10^−7 | 3.18 × 10^−7 | 4.11 × 10^−7 | 2.55 × 10^−7
Ackley | LTLBO | 3.23 × 10^−8 | 7.72 × 10^−8 | 1.76 × 10^−7 | 3.53 × 10^−7 | 2.19 × 10^−7 | 2.25 × 10^−7
Ackley | ITLBO | 1.26 × 10^−9 | 1.40 × 10^−9 | 1.26 × 10^−9 | 9.76 × 10^−10 | 3.34 × 10^−9 | 6.56 × 10^−9
Figure 5 plots the mean function values (in log scale) against the number of function evaluations over 25 runs. The eight benchmark functions in 50 dimensions were selected to compare the convergence speed of the GAMPC, SRPSO, TLBO, LTLBO, and ITLBO algorithms. The plots make clear that the ITLBO converges much faster than the other four algorithms.
Figure 5. Convergence graphs of eight 50-dimensional functions for GAMPC, SRPSO, TLBO, LTLBO and ITLBO: (a) Sphere; (b) Rosenbrock; (c) Schwefel P1.2; (d) Schwefel P2.22; (e) Schwefel P2.21; (f) Rastrigin; (g) Griewank; (h) Ackley.
All the results indicate that the ITLBO has a faster convergence speed and a stronger global search capability than the GAMPC, SRPSO, TLBO, and LTLBO algorithms. With the help of the worst recombination phase and the cuckoo search phase, the ITLBO maintains good population diversity and avoids hovering around local optimum solutions.

3.2. Application on Fault Detection of TCPBL Problem

3.2.1. Experiment Database

This article mainly studies the recognition method for bolt-region images. The experiment database is therefore composed of bolt-region images obtained by segmenting the regions of interest of the field images, with the fault labels checked manually; it contains 500 normal images and 500 fault images, each with a resolution of 32 × 32 pixels. Some samples are shown in Figure 6. Because these images are photographed by high-speed trackside cameras while the trains are moving, they exhibit blur, poor illumination, overexposure, and occlusion.
Figure 6. Some samples in the bolts region images database.

3.2.2. Recognition Results and Discussions

In this article, the Gabor wavelet transform was used at five scales and eight orientations; thus, the dimension of this optimization problem was 40. In the classification process, Leave-One-Out Cross-Validation (LOO-CV) was adopted to reduce the influence of stochastic factors and improve the reliability of the classification results. To evaluate the performance of this approach, several optimization methods combined with the LBP and MLBP operators were compared; the best (B) and mean (M) recognition rates (%) and the standard deviation (SD) over 25 independent runs are given in Table 3.
Table 3. Comparison results of fault detection with different methods.
Weight Optimization Method | LBP B | LBP M | LBP SD | MLBP B | MLBP M | MLBP SD
No Optimization | 94.60 | – | – | 96.30 | – | –
AWMGF | 94.80 | – | – | 96.30 | – | –
AdaBoost | 95.10 | – | – | 96.50 | – | –
GAMPC | 96.60 | 96.13 | 0.38 | 98.20 | 97.03 | 0.27
SRPSO | 96.80 | 96.01 | 0.27 | 98.00 | 97.24 | 0.25
TLBO | 97.00 | 96.36 | 0.23 | 98.20 | 97.57 | 0.31
LTLBO | 97.20 | 96.40 | 0.36 | 98.40 | 97.68 | 0.26
ITLBO | 97.60 | 96.96 | 0.26 | 98.90 | 98.56 | 0.18
The results show that the MLBP operator achieves much higher accuracy than the LBP operator because it retains more distinguishable texture information, including not only the local difference information at two different scales but also the texture changes between the scales. Furthermore, with proper weights assigned to the classification results of the different Gabor channels, the recognition rates increase. Moreover, for this weight optimization problem, the proposed ITLBO algorithm is more efficient than the other methods, namely GAMPC, SRPSO, TLBO, and LTLBO. With the help of the MLBP and ITLBO, the proposed fault detection method is reliable and efficient in practice.

4. Conclusions

In this article, an improved fault detection method based on the MLBP operator and the ITLBO algorithm is successfully applied to the TCPBL problem of TFDS, a new application area of pattern recognition. This study shows that the MLBP operator obtains more comprehensive discrimination information than LBP, since it retains both the local textures and the texture changes of two different scales. With the incorporation of the worst recombination phase and the cuckoo search phase, the ITLBO algorithm solves multi-dimensional parameter optimization problems with a better search capability. Under blur, varying illumination, and occlusion conditions, the proposed fault detection method still correctly identifies 98.9% of the images in the field database, which demonstrates its reliability and superiority. However, some images still cannot be identified properly. In addition, the procedures of the Gabor transform, MLBP extraction, and ITLBO search introduce some computational burden. Future research should focus on improving the performance, reducing the computational cost, and applying the method to other TFDS fault detection problems.

Acknowledgements

The authors are grateful to the Science and Technology Program of Harbin, China for financial support in this study.

Author Contributions

Hongjian Zhang, proposed the concept, performed the experiments and wrote the manuscript. Ping He, contributed to the generation of the primary ideas, designed the experiments and analyzed the data. Xudong Yang, participated in the data collection and algorithm simulation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, Z.X.; Wang, G.Y. A fast potential fault regions locating method used in inspecting freight cars. J. Comput. 2014, 9, 1266–1273. [Google Scholar] [CrossRef]
  2. Zhou, K.; Lu, M.; Wang, G.; Ren, B.Y. Monitoring data transmission technology in 5T system based on message. China Railw. Sci. 2008, 29, 113–118. [Google Scholar]
  3. Liu, Z.H.; Xiao, D.Y.; Chen, Y.M. Displacement fault detection of bearing weight saddle in TFDS based on Hough transform and symmetry validation. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 1404–1408.
  4. Dai, P.; Yang, X.D.; He, P.; Zhang, H.J. Automatic recognition for losing of train bogie center plate screw based on multiple-fuzzy relation tree. In Proceedings of the 2009 International Workshop on Intelligent Systems and Applications, Wuhan, China, 23–24 May 2009; pp. 1–5.
  5. Yang, X.D.; Ye, L.J.; Yuan, J.B. Research of computer vision fault recognition algorithm of center plate bolts of train. In Proceedings of the 2011 International Conference on Instrumentation, Measurement, Computer, Communication and Control, Beijing, China, 21–23 October 2011; pp. 978–981.
  6. Li, N.; Wei, Z.Z.; Cao, Z.P.; Wei, X.G. Automatic fault recognition for losing of train bogie center plate bolt. In Proceedings of the 2012 IEEE 14th International Conference on Communication Technology, Chengdu, China, 9–11 November 2012; pp. 1001–1005.
  7. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041. [Google Scholar] [CrossRef] [PubMed]
  8. Guo, Z.H.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663. [Google Scholar] [PubMed]
  9. Benrachou, D.E.; Santos, F.N.; Boulebtateche, B.; Bensaoula, S. Automatic eye localization; multi-block LBP vs. Pyramidal LBP three-levels image decomposition for eye visual appearance description. Lect. Notes Comput. Sci. 2015, 9117, 718–726. [Google Scholar]
  10. Pinjari, S.A.; Patil, N.N. A modified approach of fragile watermarking using Local Binary Pattern (LBP). In Proceedings of the 2015 International Conference on Pervasive Computing: Advance Communication Technology and Application for Society, Pune, India, 8–10 January 2015; pp. 1–4.
  11. Monika; Kumar, M. A novel fingerprint minutiae matching using LBP. In Proceedings of the 2014 3rd International Conference on Reliability, Infocom Technologies and Optimization: Trends and Future Directions, Noida, India, 8–10 October 2014; pp. 1–4.
  12. Bai, S. Sparse code LBP and SIFT features together for scene categorization. In Proceedings of the 2014 International Conference on Audio, Language and Image Processing, Shanghai, China, 7–9 July 2014; pp. 200–205.
  13. Zhao, Q.Y.; Pan, B.C.; Pan, J.J.; Tang, Y.Y. Facial expression recognition based on fusion of Gabor and LBP features. In Proceedings of the 2008 International Conference on Wavelet Analysis and Pattern Recognition, Hong Kong, China, 30–31 August 2008; pp. 362–367.
  14. Shen, L.; Bai, L.; Fairhurst, M. Gabor wavelets and general discriminant analysis for face identification and verification. Image Vis. Comput. 2007, 25, 553–563. [Google Scholar] [CrossRef]
  15. Wang, S.; Xia, Y.; Liu, Q.; Luo, J.; Zhu, Y.; Feng, D.D. Gabor feature based nonlocal means filter for textured image denoising. J. Vis. Commun. Image Represent. 2012, 23, 1008–1018. [Google Scholar] [CrossRef]
  16. Bissi, L.; Baruffa, G.; Placidi, P.; Ricci, E.; Scorzoni, A.; Valigi, P. Patch based yarn defect detection using Gabor filters. In Proceedings of the 9th IEEE International Instrumentation and Measurement Technology Conference, Graz, Austria, 13–16 May 2012; pp. 240–244.
  17. Nouyed, I.; Amin, M.A.; Poon, B.; Yan, H. Human face recognition using weighted vote of Gabor magnitude filters. In Proceedings of the 7th International Conference on Information Technology and Application, Sydney, Australia, 21–24 November 2011; pp. 36–40.
  18. Gao, T.; Ma, X.; Wang, J.G. Particle occlusion face recognition using adaptively weighted local Gabor filters. J. Comput. Inf. Syst. 2012, 8, 5271–5277. [Google Scholar]
  19. Zhu, J.X.; Su, G.D.; Li, Y.C. Facial expression recognition based on Gabor feature and AdaBoost. J. Optoelectron. Laser 2006, 17, 993–998. [Google Scholar]
  20. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  21. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  22. Rao, R.V.; Patel, V. Multi-objective optimization of heat exchangers using a modified teaching-learning-based optimization algorithm. Appl. Math. Model. 2013, 37, 1147–1162. [Google Scholar] [CrossRef]
  23. Togan, V. Design of planar steel frames using teaching-learning based optimization. Eng. Struct. 2012, 34, 225–232. [Google Scholar] [CrossRef]
  24. Yu, K.J.; Wang, X.; Wang, Z.L. An improved teaching-learning-based optimization algorithm for numerical and engineering optimization problems. J. Intell. Manuf. 2014, 1–13. [Google Scholar] [CrossRef]
  25. Karaboga, D.; Akay, B. Artificial bee colony (ABC), harmony search and bees algorithms on numerical optimization. In Proceeding of IPROMS-2009 on Innovative Production Machines and Systems, Cardiff, UK, 24 March 2009; pp. 1–6.
  26. Liu, H.; Cai, Z.X.; Wang, Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl. Soft Comput. 2010, 10, 629–640. [Google Scholar] [CrossRef]
  27. Mohamed, A.W.; Sabry, H.Z. Constrained optimization based on modified differential evolution algorithm. Inf. Sci. 2012, 194, 171–208. [Google Scholar] [CrossRef]
  28. Li, G.Q.; Niu, P.F.; Xiao, X.J. Development and investigation of efficient artificial bee colony algorithm for numerical function optimization. Appl. Soft Comput. 2012, 12, 320–332. [Google Scholar] [CrossRef]
  29. Brajevic, I.; Tuba, M. An upgrade artificial bee colony algorithm for constrained optimization problems. J. Intell. Manuf. 2013, 24, 729–740. [Google Scholar] [CrossRef]
  30. Rao, R.V.; Patel, V. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 2012, 3, 535–560. [Google Scholar] [CrossRef]
  31. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  32. Yang, X.S.; Deb, S. Engineering optimization by Cuckoo Search. Int. J. Math. Model. Numer. Optim. 2010, 4, 330–343. [Google Scholar]
  33. Ghasemi, M.; Ghavidel, S.; Gitizadeh, M.; Akbari, E.B. An improved teaching-learning-based optimization algorithm using Lévy mutation strategy for non-smooth optimal power flow. Int. J. Electr. Power Energy Syst. 2015, 65, 375–384. [Google Scholar] [CrossRef]
  34. Elsayed, S.M.; Ruhul, A.S.; Daryl, L.E. A new genetic algorithm for solving optimization problems. Eng. Appl. Artif. Intell. 2014, 27, 57–69. [Google Scholar] [CrossRef]
  35. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self-regulating particle swarm optimization algorithm. Inf. Sci. 2015, 294, 182–202. [Google Scholar] [CrossRef]
  36. Črepinšek, M.; Liu, S.H.; Mernik, L. A note on teaching-learning-based optimization algorithm. Inf. Sci. 2012, 212, 79–93. [Google Scholar] [CrossRef]
