Article

A Comprehensive Evaluation of Graph Kernels for Unattributed Graphs

1 State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, Luoyang 471003, China
2 National Innovation Institute of Defense Technology, Academy of Military Science, Beijing 100071, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(12), 984; https://doi.org/10.3390/e20120984
Submission received: 25 September 2018 / Revised: 16 December 2018 / Accepted: 16 December 2018 / Published: 18 December 2018

Abstract
Graph kernels are of vital importance in the field of graph comparison and classification. However, how to compare and evaluate graph kernels, and how to choose an optimal kernel for a practical classification problem, remain open problems. In this paper, a comprehensive evaluation framework of graph kernels is proposed for unattributed graph classification. According to the kernel design methods, the whole graph kernel family can be categorized along five different dimensions, and several representative graph kernels are then chosen from these categories to perform the evaluation. With plenty of real-world and synthetic datasets, the kernels are compared by many criteria such as classification accuracy, F1 score, runtime cost, scalability and applicability. Finally, quantitative conclusions are discussed based on the analyses of the extensive experimental results. The main contribution of this paper is a comprehensive evaluation framework of graph kernels, which is significant for graph-classification applications and future kernel research.

1. Introduction

Graphs are important structures for information representation, in which nodes and edges respectively represent the entities and the relationships in the real world. Graph processing has been widely used in many scientific fields such as image processing [1], biochemical research [2], social networks [3] and natural language processing [4]. Meanwhile, graph comparison plays a core role in data mining and target recognition in these fields. For instance, two molecules with the same chemical properties usually have similar topological structures [5]. Thus, one can predict the properties of an unknown molecule by comparing its topology with that of known molecules.
It has been reported that exact graph comparison is equivalent to sub-graph isomorphism detection, which is a well-known NP-hard problem [6]. Inexact alternatives therefore have to be explored, such as approximate graph edit distance [7], topological descriptors [8] and graph kernels.
The construction of graph kernels has been extensively investigated over the past decades. Through a well-defined kernel function, two samples in the graph space are mapped to a real number that quantifies the similarity between the two graphs. Moreover, extracting the graph features explicitly is unnecessary in this procedure.
However, as stated in [9], an exact graph kernel must detect the dissimilarity between any two non-isomorphic graphs. It therefore has the same time complexity as graph isomorphism detection, a problem that has neither been proven NP-complete nor been solved by any known polynomial-time algorithm. To date, all the existing graph kernels are actually inexact, making distinct tradeoffs among classification accuracy, applicability and runtime performance.
Unfortunately, the problem of how to compare graph kernels systematically remains unsolved in the current literature. According to [10], "There is no theoretical justification on why certain types of kernels are better than the others". Therefore, almost all existing approaches only utilize a few real-world datasets to test the classification performance of the proposed kernels. However, this kind of weak evaluation cannot give effective guidance when an optimal method is needed for a practical graph-classification application.
In order to alleviate this problem, a comprehensive evaluation framework of graph kernels is discussed in this paper. According to the design details of the kernel method, the whole graph kernel family can be grouped along five different dimensions. We choose several popular graph kernels to perform a comprehensive evaluation. Through classification tests on plenty of real-world and synthetic datasets, these graph kernels are compared by many criteria such as classification accuracy, F1 score, runtime cost, scalability and applicability. Finally, quantitative conclusions on the graph kernel groups are discussed based on the analyses of the experimental results.
In this paper, we mainly focus on the classification evaluation for unattributed graphs, whose nodes and edges carry no attributes [11,12]. The reasons are twofold. Firstly, unattributed graphs are more common in the real world because the attributes of entities and relationships are usually difficult and expensive to capture. Secondly, the design of kernel methods for unattributed graphs is more challenging owing to the lack of information. Some studies based on graphs with continuous or high-dimensional attributes [13,14] are not considered in our evaluation.

2. Graph Kernel

2.1. Graph Definition

An unattributed graph $G = (V, E)$ consists of a set of unlabeled vertices and links, where $V = \{v_1, v_2, \dots, v_n\}$ is the vertex set and $E$ is the adjacency matrix: if there is a link between $v_i$ and $v_j$, then $E(v_i, v_j) = 1$; otherwise, $E(v_i, v_j) = 0$.
A graph $\hat{G} = (\hat{V}, \hat{E})$ is an (induced) subgraph of a graph $G = (V, E)$ if and only if $\hat{V} \subseteq V$ and $\forall v_1, v_2 \in \hat{V}$, $\hat{E}(v_1, v_2) = E(v_1, v_2)$. For an unattributed graph, a subgraph is also called a pattern of the graph.
A (sub)graph $G = (V, E)$ is isomorphic to $H = (V', E')$ if there exists at least one bijective function $f: V \to V'$ such that $(v_1, v_2) \in E \Leftrightarrow (f(v_1), f(v_2)) \in E'$. We denote this by $G \simeq H$.
A graph alignment $\Phi_G = (G^{\alpha}, \vartheta)$ is defined by a set of several subgraphs $G^{\alpha}$ ($\alpha = 1, \dots, p$) and a particular order of the nodes $\{v_1^{\alpha}, \dots, v_k^{\alpha}\}$ in each subgraph [15]; this joint order is denoted by $\vartheta$. Thus, $\Phi_G = \Phi_H$ means that the two graphs $G$ and $H$ are aligned.
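To make these definitions concrete, here is a minimal Python sketch (not part of the paper's Matlab code) of an adjacency-matrix graph, induced subgraph extraction, and a brute-force isomorphism test; the permutation search is only feasible for very small graphs.

```python
# A minimal sketch of the definitions above: adjacency-matrix graphs,
# induced subgraphs, and a brute-force isomorphism test for small graphs.
from itertools import permutations

import numpy as np

def induced_subgraph(E: np.ndarray, nodes: list) -> np.ndarray:
    """Return the adjacency matrix of the subgraph induced by `nodes`."""
    return E[np.ix_(nodes, nodes)]

def are_isomorphic(E1: np.ndarray, E2: np.ndarray) -> bool:
    """Check G1 ~ G2 by trying every bijection f: V1 -> V2 (feasible only
    for very small graphs; exact isomorphism testing is hard in general)."""
    n = E1.shape[0]
    if E2.shape[0] != n:
        return False
    for perm in permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]      # permutation matrix of f
        if np.array_equal(P @ E1 @ P.T, E2):
            return True
    return False

# A 4-node path graph and a relabelled copy of it:
E_path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
E_relab = induced_subgraph(E_path, [2, 0, 3, 1])  # permuted node order
print(are_isomorphic(E_path, E_relab))            # True
```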

2.2. Kernel Method

The kernel method is an important concept in machine learning. Through a mapping $\phi$ from the input space $S$ into a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$, the kernel function is defined as:
$$k(x, x') = \langle \phi(x), \phi(x') \rangle_{\mathcal{H}} \qquad (1)$$
where $x, x' \in S$ and $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ is the inner product in the RKHS. An illustration of the kernel method is shown in Figure 1. The most obvious advantage of a kernel is that a problem that is not linearly separable in the input space may become linearly separable after the kernel mapping.
Specifically, the method is called a graph kernel if the input space is the graph space. From Equation (1), it is clear that via a feature extraction method $\phi$, graphs can be mapped into vectors in an RKHS, and the inner product of two such vectors can represent the graph similarity.
Usually, the graph similarity can be computed directly, without an explicit definition of the feature extraction method $\phi$. In practice, it is quite difficult to compute an explicit embedding representation of structured data. Therefore, compared with topological descriptors and other graph comparison methods, graph kernels can achieve an elegant and accurate similarity measurement thanks to this implicit feature extraction. In the field of machine learning, graph kernels bridge the gap between graph-based data and a large group of kernel-based machine learning algorithms, including support vector machines (SVM), kernel regression, and kernel principal component analysis (kPCA) [9].
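As a concrete illustration of Equation (1), the following sketch uses a toy explicit feature map $\phi$ (a vertex-degree histogram, which is not one of the kernels evaluated in this paper) to show that a kernel value is an inner product of feature vectors:

```python
# A minimal sketch of Equation (1) with an explicit (toy) feature map phi.
import numpy as np

def phi(E: np.ndarray, max_degree: int = 10) -> np.ndarray:
    """Toy feature map: histogram of vertex degrees (capped at max_degree)."""
    degrees = np.minimum(E.sum(axis=0).astype(int), max_degree)
    return np.bincount(degrees, minlength=max_degree + 1).astype(float)

def toy_graph_kernel(E1: np.ndarray, E2: np.ndarray) -> float:
    # k(G1, G2) = <phi(G1), phi(G2)>, the inner product in the feature space
    return float(np.dot(phi(E1), phi(E2)))
```

Real graph kernels skip the explicit map and evaluate the inner product implicitly; the sketch only makes the mapping visible.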
According to Mercer's theorem [16], a valid graph kernel must be symmetric and positive semi-definite (p.s.d.):
  • Symmetric. Obviously, for two graphs $G_A$ and $G_B$, $k(G_A, G_B) = k(G_B, G_A)$.
  • p.s.d. For any finite sequence of graphs $g_1, g_2, \dots, g_n$ in a dataset and any choice of real numbers $c_1, c_2, \dots, c_n$, we have $\sum_{i=1}^{n} \sum_{j=1}^{n} k(g_i, g_j) c_i c_j \geq 0$, i.e., all the eigenvalues of the kernel matrix $K_{n \times n} = [k(g_i, g_j)]$ for the dataset are nonnegative.
Note that although the learning ability of valid graph kernels has been proven, no theoretical result demonstrates that an invalid graph kernel cannot support learning algorithms. In fact, there exist discussions of how similarities (i.e., non-p.s.d. kernels) can support learning [17,18]. Therefore, some graph kernels in the literature have not been proven to be p.s.d.
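The Mercer conditions can be checked empirically for a given kernel matrix; a minimal sketch:

```python
# Sketch: verifying symmetry and positive semi-definiteness of a kernel matrix.
import numpy as np

def is_valid_kernel_matrix(K: np.ndarray, tol: float = 1e-10) -> bool:
    symmetric = np.allclose(K, K.T)
    eigvals = np.linalg.eigvalsh((K + K.T) / 2)  # eigenvalues of symmetrised K
    psd = bool(eigvals.min() >= -tol)            # all eigenvalues nonnegative
    return symmetric and psd
```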

2.3. Kernel Groups

In the literature, existing graph kernels are usually designed based on distinct topological analyses of graph structure. According to five different dimensions of the kernel design details, the whole family of graph kernels can be categorized as follows:
1. Framework: R-convolution kernels vs. Information theoretic kernels
In 1999, Haussler proposed R-convolution kernels [19], which relate the similarity of two graphs to the occurrence of identical or similar substructures in them. Following the formal definition in [20], for two graphs $G_1$ and $G_2$, let $\{S_{1;1}, \dots, S_{1;n_1}, \dots, S_{1;N_1}\}$ and $\{S_{2;1}, \dots, S_{2;n_2}, \dots, S_{2;N_2}\}$ be the sub-graph sets of $G_1$ and $G_2$ respectively. A standard R-convolution kernel $K$ for these two graphs is defined as:
$$K(G_1, G_2) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} \delta(S_{1;n_1}, S_{2;n_2}) \qquad (2)$$
where $\delta$ denotes a Dirac kernel:
$$\delta(S_{1;n_1}, S_{2;n_2}) = \begin{cases} 1, & \text{if } S_{1;n_1} \simeq S_{2;n_2} \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$
where $S_{1;n_1} \simeq S_{2;n_2}$ indicates that the substructure $S_{1;n_1}$ is isomorphic (or approximately isomorphic) to $S_{2;n_2}$.
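The following is a minimal sketch of the R-convolution template in Equations (2) and (3). As a stand-in for the isomorphism test, each 3-node induced subgraph is reduced to its sorted degree sequence, which happens to be a complete isomorphism invariant for 3-node graphs; the decomposition choice is purely illustrative and is not one of the kernels evaluated in this paper.

```python
# A toy R-convolution kernel: decompose each graph into all 3-node induced
# subgraphs and count pairs with matching canonical labels (Equations (2)-(3)).
from itertools import combinations

import numpy as np

def substructure_labels(E: np.ndarray, size: int = 3) -> list:
    """Canonical label (sorted degree sequence) of every size-node subgraph."""
    labels = []
    for nodes in combinations(range(E.shape[0]), size):
        sub = E[np.ix_(nodes, nodes)]
        labels.append(tuple(sorted(sub.sum(axis=0))))
    return labels

def r_convolution_kernel(E1: np.ndarray, E2: np.ndarray) -> float:
    """K(G1, G2): sum of the Dirac kernel over all substructure pairs."""
    S1, S2 = substructure_labels(E1), substructure_labels(E2)
    return float(sum(1 for a in S1 for b in S2 if a == b))
```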
So far, most existing kernels [9,10,21,22,23] belong to this group. Intuitively, two similar graphs should share many common substructures. However, the main drawback of R-convolution kernels is that they neglect the relative locations of substructures, because they cannot establish reliable structural correspondences between substructures [21]. Meanwhile, the tottering problem and the complexity of graph decomposition challenge the development of R-convolution kernels [6].
Recently, Bai et al. utilized information-theoretic methods to compute the divergence between the probability distributions of two graphs as the similarity measurement [20,24,25,26,27,28]. By mapping all the data points of the input space onto fitted distributions in a parametric family $S$, a kernel for the distributions can be defined, and this kernel is automatically induced on the original input space. This framework therefore provides an alternative way of defining kernels that maps graphs onto a statistical manifold. In real-world applications, these kernels outperform linear kernels with SVM classifiers. Some of them build a bridge between kernel methods and information theory, and thus have an information-theoretic interpretation; such methods are called information theoretic kernels. However, the high computational complexity of the information entropy is the bottleneck of this group.
2. Graph Pre-processing: Aligned Kernels vs. Unaligned Kernels
Graph alignment locates the mapping relationship between the node sequences of two graphs. It is a pre-processing step applied to the original graphs: after alignment, common substructures are pinned to the same positions in the two graphs. By assigning parts of one object to parts of the other, the most similar parts of the two objects can be found. Finding such a bijection is known as the assignment problem and is well studied in combinatorial optimization. This approach has been successfully applied to graph comparison, e.g., in general graph matching as well as kernel-based classification. In contrast to convolution kernels, assignments establish structural correspondences and thereby alleviate the problem of diagonal dominance, achieving an accurate similarity computation without false positives. Research on optimal assignment kernels was reported in [29], in which each pair of structures is aligned before comparison. However, the similarities derived in this way are not necessarily positive semi-definite (p.s.d.) and thus do not give rise to valid kernels, which severely limits their use in kernel methods [30]. Kriege et al. discussed the conditions under which an optimal assignment kernel is guaranteed to be positive semi-definite [31].
The performance of an aligned kernel depends on the alignment accuracy. During graph alignment, every node/subgraph in a graph can map to only one node/subgraph in the other graph, which leads to information loss. Therefore, unaligned kernels are more common in the literature.
3. Matching Pattern: Global kernels vs. Local kernels
A novel choice of matching pattern is usually the core of designing a new graph kernel. The similarity computation of a kernel mainly depends on the exploration of the matching patterns.
Most existing graph kernels belong to the group of local kernels, which focus on local patterns of the graph structure, such as walks [22], paths [23], sub-trees [21] and limited-size sub-graphs [10]. For example, the random walk kernel embeds a graph into a feature space in which the feature vector consists of the numbers of walks of different lengths; a minimal sketch of this idea follows.
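A minimal sketch, assuming the classic direct-product-graph construction behind walk kernels such as RWK [22]: common walks of the two graphs correspond to walks on their direct product graph, whose adjacency matrix is the Kronecker product of the two adjacency matrices. The decay weight `lam` and the maximum walk length are illustrative parameters, not the paper's settings.

```python
# A fixed-length walk kernel sketch via the direct product graph.
import numpy as np

def walk_kernel(A1: np.ndarray, A2: np.ndarray,
                lam: float = 0.1, max_len: int = 5) -> float:
    Ax = np.kron(A1, A2)                  # adjacency of the direct product graph
    ones = np.ones(Ax.shape[0])
    value, power = 0.0, np.eye(Ax.shape[0])
    for k in range(1, max_len + 1):
        power = power @ Ax                # entries of Ax^k count length-k common walks
        value += (lam ** k) * (ones @ power @ ones)
    return float(value)
```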
A few studies instead explore global characteristics of graphs, such as the Lovász number [32] and the whole probability distribution [11]. Kernels based on these methods are called global kernels. In this group, all the members obtain the similarity between graphs directly from the whole structural information, so graph decomposition is unnecessary.
In the most recent research, hybrid patterns have been utilized to design kernels [33]. Because whole-graph matching is a special case of sub-graph matching, kernels based on graph matching are assigned to the local-pattern group in this paper.
4. Computing Models: Quantum kernels vs. Classical kernels
Quantum computation differs from its classical counterpart. It can store, process and transport information using peculiar properties of quantum mechanics such as superposition and entanglement. These properties yield an exponential increase in the dimensionality of the state space, which is the basis of the quantum speedup.
As the quantum counterpart of the random walk, the quantum walk [34] has become a novel computing model for analyzing graph structures. Owing to quantum parallelism and quantum interference, the quantum amplitudes of quantum walks on a graph can represent more complex structural information. Kernels based on the discrete-time edge-based quantum walk (DTQW) [12,35] or the continuous-time node-based quantum walk (CTQW) [11,21] are called quantum kernels, and the others are called classical kernels. In fact, the quantum walk is the only method used to design quantum kernels in the literature.
Here we use the DTQW as an instance. The discrete-time quantum walk is the quantum counterpart of the classical random walk. We denote the state space of the DTQW as $E$, the edge set of a graph. A general state of the quantum walk is:
$$|\varphi\rangle = \sum_{(u,v) \in E} \alpha_{uv} |uv\rangle \qquad (4)$$
where the quantum amplitudes $\alpha_{uv}$ are complex. Using the Grover diffusion matrix, the entries of the transition matrix $U$ are:
$$U_{(u,v),(w,x)} = \begin{cases} \dfrac{2}{d_x} - \delta_{ux}, & v = w \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$
where $U_{(u,v),(w,x)}$ is the quantum amplitude for the transition $|uv\rangle \to |wx\rangle$, $d_x$ denotes the vertex degree, and $\delta_{ux}$ is the Kronecker delta, i.e., $\delta_{ux} = 1$ if $u = x$ and $\delta_{ux} = 0$ otherwise.
Unlike the random walk, where probability propagates, what propagates during a quantum walk is the quantum amplitude; quantum interference therefore occurs between two crossing walks. In this paper, we only consider quantum computing as a novel computation model and evaluate the quantum kernels by simulating the quantum walk on a classical computer.
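The following sketch builds the DTQW transition matrix of Equation (5) for a small graph and checks its unitarity. One assumption to flag: for the transition $|uv\rangle \to |vx\rangle$ the code uses the degree of the shared vertex $v$ in the Grover coin, the standard choice that keeps $U$ unitary; read the degree subscript in Equation (5) according to the paper's own convention.

```python
# Grover-coin DTQW transition matrix on the directed edges of a graph.
import numpy as np

def grover_dtqw_matrix(A: np.ndarray):
    n = A.shape[0]
    deg = A.sum(axis=0)
    arcs = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
    index = {arc: i for i, arc in enumerate(arcs)}
    U = np.zeros((len(arcs), len(arcs)))
    for (u, v) in arcs:
        for x in range(n):
            if A[v, x]:                          # transition |uv> -> |vx>
                amp = 2.0 / deg[v] - (1.0 if u == x else 0.0)
                U[index[(v, x)], index[(u, v)]] = amp
    return U, arcs

# Sanity check on a triangle graph: U should be unitary (here, orthogonal).
A = np.array([[0,1,1],[1,0,1],[1,1,0]])
U, arcs = grover_dtqw_matrix(A)
print(np.allclose(U @ U.T, np.eye(len(arcs))))   # True
```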
5. Methodologies: Topological kernels vs. Entropy kernels
Graph and sub-graph matching is the key procedure for all the kernels. For most existing kernels, the match mapping is computed via topological analysis; in this kernel group, graph isomorphism or sub-graph isomorphism is the main method. However, pairwise matching makes the kernel computation expensive. Therefore, adding simple pattern constraints (e.g., edges, fixed-length walks, triangles) or constructing a product graph are common ways to locate the matching. The product graph, constructed from two graphs, is usually an auxiliary structure for locating common sub-graphs [22]. However, the product graph will be large if the two graphs are big, which may still lead to unacceptable complexity.
In information theory, divergence measures such as the Jensen-Shannon divergence quantify the difference between two probability distributions. Utilizing the entropy-based divergence between the probability distributions of substructures is a novel trend for locating similar substructures. These methods are therefore called entropy kernels in this paper.
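A minimal sketch of the entropy-kernel idea, with one loud assumption: the distributions compared below are crude degree distributions, whereas the paper's entropy kernels derive distributions from quantum walks.

```python
# An entropy-style similarity: Jensen-Shannon divergence between per-graph
# probability distributions, mapped to a similarity value.
import numpy as np

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    """Jensen-Shannon divergence between two distributions (natural log)."""
    def h(d):
        d = d[d > 0]
        return -np.sum(d * np.log(d))
    return h((p + q) / 2) - (h(p) + h(q)) / 2

def degree_distribution(A: np.ndarray, bins: int = 10) -> np.ndarray:
    deg = np.minimum(A.sum(axis=0).astype(int), bins - 1)
    hist = np.bincount(deg, minlength=bins).astype(float)
    return hist / hist.sum()

def entropy_kernel(A1: np.ndarray, A2: np.ndarray) -> float:
    # JSD lies in [0, ln 2]; map it to a similarity in (0, 1].
    return float(np.exp(-jsd(degree_distribution(A1), degree_distribution(A2))))
```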
Here, 15 popular graph kernels are chosen and listed in Table 1. Every kernel can be grouped according to the above five dimensions. From the groupings of these kernels, we find that:
  • Most of these kernels are R-convolution, unaligned and local-pattern kernels.
  • All the entropy kernels here utilize a quantum walk model to compute the probability distribution.
  • All of the information theory kernels here belong to the group of entropy kernels. Meanwhile, some R-convolution kernels that detect similar substructures via the computation of information entropy also belong to this group.

3. Complexity Analysis

The computation of a graph kernel is laborious and complex: as the graph size and the dataset size increase, the computational cost grows. In this section, we make a qualitative comparison of the runtime costs of the 15 chosen graph kernels through an analysis of their time complexity.
Table 2 shows the time complexities of all 15 mentioned kernels. Suppose that the base dataset has $K$ graphs, and each graph has $N$ (unattributed) nodes and $E$ (undirected, unweighted) edges. All the complexities are expressed in terms of these three parameters.

4. Quantitative Evaluation

In this section, plenty of graph datasets are utilized to perform a quantitative evaluation of graph kernels in terms of several criteria, such as classification accuracy, runtime cost, scalability and applicability. From the aforementioned kernel methods, several representative graph kernels are evaluated in the following tests.

4.1. Datasets

Both real-world datasets and synthetic datasets are used in this paper to evaluate these graph kernels:
The real-world datasets. According to the main scope of graph classification, 31 real-world graph datasets are chosen, including 20 chemical datasets, five image datasets, two social network datasets, three handwriting datasets and one fingerprint dataset. Some of the 31 datasets are multi-class (such as COIL-DEL), while the others are binary-class datasets (see Table A1 in Appendix A). For each object in these datasets, its topological structure is extracted as an unattributed graph, and we try to find the relationship between the natural characteristics of the object and the graph structure. All the datasets can be downloaded from the benchmark website [39].
The synthetic datasets. In order to further evaluate the scalability and the applicability of these kernels, several synthetic datasets are chosen or generated, including random graphs, cospectral graphs, regular graphs and strongly regular graphs.
Table 3 lists the random graph datasets used for the scalability evaluation. For each node and edge level, 100 sample graphs of the same size are randomly generated per class. To test the set-based scalability, a dataset is generated that consists of different numbers of random graphs of the same size; a generation sketch follows.
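A generation sketch, assuming uniform sampling of distinct edges (the exact generator used for Table 3 is not specified in the text):

```python
# Random graphs with a fixed node count and a fixed edge count.
import random
from itertools import combinations

def random_graph(n_nodes: int, n_edges: int) -> set:
    """Sample a simple undirected graph as a set of n_edges distinct edges."""
    all_edges = list(combinations(range(n_nodes), 2))
    return set(random.sample(all_edges, n_edges))

# e.g., one class of the 50-Node dataset: 100 graphs with 50 nodes, 150 edges
dataset = [random_graph(50, 150) for _ in range(100)]
```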
Table 4 collects three kinds of similar graphs. CosGraph includes 5048 pairs of 10-node graphs; the two graphs in each pair have the same graph spectrum and form a cospectral graph pair. RegGraph and SRGraph consist of 31 classes of regular graphs and 11 classes of strongly regular graphs, respectively. Within one class, each graph is regular or strongly regular but not isomorphic to the others.
All the synthetic datasets can be downloaded from our GitHub repository [40].

4.2. Evaluation Criteria

In this paper, our evaluation of graph kernels will mainly focus on several criteria as follows:
  • Accuracy. Accuracy is the most important criterion for comparing the graph kernels in classification. In this paper, C-SVM [41] is utilized to perform a 10-fold cross-validation test. In particular, the 10-fold division is the same for all kernels in every single comparison, and 100 random tests are repeated. We use the average proportion of correctly labelled test samples as the accuracy result. Meanwhile, the F1 score (macro F1) is used to measure the classification ability on multi-class problems. A protocol sketch is given after this list.
  • Runtime. This criterion focuses on the computational cost of a graph kernel on a graph dataset. Because the training procedure belongs to the post-processing, we only consider the runtime cost of computing the kernel matrix.
  • Scalability. To evaluate the runtime cost more clearly, scalability is further used to unveil the behavior of the computational time as the graph size or the number of graphs in the dataset increases.
  • Applicability. Theoretically, a complete graph kernel fits the general graph family, i.e., if graph $G_A$ is not isomorphic to graph $G_B$, then $k(\cdot, G_A) \neq k(\cdot, G_B)$. However, inexact graph kernels may fail to distinguish some similar graphs, especially cospectral graphs and regular graphs. We use the failure rate as the applicability measurement for the graph kernels.
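A minimal sketch of the accuracy protocol: a C-SVM on a precomputed kernel matrix with 10-fold cross-validation. The paper uses LIBSVM [41] in Matlab; scikit-learn's C-SVM interface is used here purely for illustration.

```python
# 10-fold cross-validated accuracy for a precomputed kernel matrix K.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def cv_accuracy(K: np.ndarray, y: np.ndarray, seed: int = 0) -> float:
    folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = []
    for train, test in folds.split(K, y):
        clf = SVC(kernel="precomputed", C=1.0)
        clf.fit(K[np.ix_(train, train)], y[train])   # train-vs-train kernel block
        scores.append(clf.score(K[np.ix_(test, train)], y[test]))
    return float(np.mean(scores))
```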
All the experiments were run in Matlab R2010b on an Intel Xeon E5-1620 CPU with 8 GB of memory. All the runtime tests were executed in a single thread.

4.3. Results

All the experimental results are shown and analyzed in this subsection.

4.3.1. Accuracy Results

For all the tests on real-world datasets, a single label is predicted for the graph under test, indicating which class the graph belongs to. The graph is considered correctly classified when this label equals its true label. The average classification accuracy results are shown in Figure 2; the real-world datasets used in the tests are listed in Appendix A, and the accuracy data are given in Table A2 in Appendix B.
Considering the multi-class cases, macro F1 is used as another accuracy criterion; it is the harmonic mean of the average precision and the average recall over all classes in a dataset. Figure 3 illustrates the macro F1 results. The datasets are the multi-class cases in Appendix A, and the macro F1 results (in percentage) are given in Table A3 in Appendix B.
The results show that most of the kernels achieve accuracies only slightly above 50% and F1 scores only slightly above 0.2. The reasons are threefold: (1) many datasets are multi-class, such as COIL (100 classes) and MSRC (20 classes), and are very difficult to recognize correctly; (2) unattributed graphs contain limited information about real-world objects, because they neglect the attributes of nodes and edges; (3) compared with the novel multi-kernel methods [42], it is more difficult for a single-kernel method to capture the many complex features of objects.
In addition, we apply statistical tests to the accuracy results. Since some of the accuracy results do not satisfy the normality assumption, a non-parametric test, the Kruskal-Wallis test, is used. The resulting p-value is 0.631 (at the 95% confidence level), which means that there is no significant difference between the accuracy results of the different graph kernels. This result is unsurprising: there is no best kernel for all datasets, and precisely for this reason researchers have to design specific graph kernels for a specific problem. Therefore, a comprehensive kernel evaluation framework is quite useful for kernel comparison; a sketch of such a test follows.
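A minimal sketch of this test with toy numbers (not the paper's results), assuming SciPy's implementation of the Kruskal-Wallis H-test:

```python
# Kruskal-Wallis H-test on per-dataset accuracy columns of several kernels.
from scipy import stats

acc_spk = [99.3, 79.9, 60.7, 12.0]   # accuracies of one kernel across datasets
acc_wlk = [98.7, 87.8, 59.3, 12.4]
acc_ghk = [99.3, 83.8, 60.6, 18.1]
h, p = stats.kruskal(acc_spk, acc_wlk, acc_ghk)
print(p)  # p > 0.05 -> no significant difference between the kernels
```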
Although the differences are not significant, there is still a slight advantage for the kernels QJSU, WLK and GHK (which may be due to chance). It can be inferred that some distinguishing ability can be achieved through quantum interference or sub-tree matching. The advantages may be more obvious in particular classification cases. Therefore, for specific cases, the evaluation process should be reproduced, and statistical tests should be performed to fully analyze the performance of the kernels.

4.3.2. Runtime Results

Table 5 shows the runtime costs of the 10 chosen kernels on the real-world datasets. For each kernel, the runtimes on all 31 real-world datasets are computed, and the minimal, maximal and average runtimes are listed. Compared with the analyses in Table 2, the runtime evaluation is approximately consistent with the theoretical time complexity comparison.
Generally, SPK is the fastest method because of the fast computation of shortest paths, while WLK and AGK show outstanding runtime performance as well. However, the quantum kernels cost more than the classical ones, because simulating a quantum walk on a classical computer consumes considerable runtime. Especially in the DTQW case, the runtime cost is nearly the square of that of RWK, which may be owing to the computation of the evolution of the quantum state.
The Kruskal-Wallis test is also applied to the runtime results. At the 95% confidence level, the resulting p-value is effectively 0, which means that there are significant differences between the runtime results of the different graph kernels.

4.3.3. Scalability Results

To further explore the scalability, we generate random graph sets to test the runtime trend as the number of nodes, the number of edges and the set size increase.
• Node-based Scalability
The dataset 200-Edge is designed to test the node-based scalability of the graph kernels. All the graphs in the dataset have 200 edges; the graph density therefore decreases as the graph size increases. Figure 4 shows the runtime comparison for the node-based scalability test. Note that the size range of most of the concerned graphs is not large, so the x-axis does not span orders of magnitude. In Figure 4a, QJSK, DQMK and AGK have the best scalability because their runtime costs remain nearly constant as the graph size increases. These kernels mainly focus on local patterns related to graph edges; QJSK, for example, utilizes the DTQW, which walks among edges. Therefore, when the edge count is fixed, these kernels are nearly unaffected.
However, the other kernels in Figure 4b show poor scalability, especially QJSU, which has the sharpest ascending trend as the graph size increases. The main reason is that the CTQW is a costly procedure.
• Edge-based Scalability
The dataset 50-Node is generated to test the edge-based scalability of the graph kernels. All the graphs have 50 nodes. Unlike the dataset 200-Edge, the graph density increases as the number of edges increases.
Figure 5 shows the runtime comparison for the edge-based scalability test. Most kernels scale well as the graph density increases (see Figure 5a), except AGK, QJSK and DQMK in Figure 5b. RWK locates the walk patterns of the graphs and is therefore sensitive to the graph density. QJSK and DQMK utilize the edge-based discrete-time quantum walk to compute the similarity, which is a high-complexity operation when the graph is dense. Therefore, the runtime costs of these kernels increase significantly as the edges increase (see Figure 5b). This observation is nearly the opposite of the result of the node-based scalability experiment above.
• Set-based Scalability
The dataset 50-Node&150-Edge is designed to test the set-based scalability of the graph kernels. All the graph samples have the same numbers of nodes and edges. Figure 6 shows the runtime comparison for the set-based scalability test. By the formal definition of a kernel, the kernel matrix used in kernel-based classification is pairwise, and its size depends on the number of graphs in the dataset. Therefore, all the kernels cost more runtime as the graph set grows. Compared with the other methods, the increasing runtime trends of QJSK, DQMK and RWK are more significant; for a large dataset, these three kernels cannot finish within an acceptable time.
• Normalized Evaluation
For every kernel, the runtime cost is related to several factors, as listed in Table 2. Let the factor set be $\{x, y, z\}$ and take factor $x$ as an example. In order to make a normalized, standard evaluation of the $x$-based scalability of a kernel method, we fix all the other factors in the dataset and use the following function to compute the normalized scalability:
$$\mathrm{Scalability}_x = \frac{1}{n-1} \sum_{i=1}^{n-1} \frac{\ln T_{x_i,yz} - \ln T_{x_{i+1},yz}}{\ln x_i - \ln x_{i+1}} \qquad (6)$$
where $T_{x,yz}$ denotes the runtime cost on a dataset with the factor values $\{x, y, z\}$.
Ideally, the $x$-based scalability should be evaluated by the derivative $\partial T_x / \partial x$ of the sub-cost $T_x$: the higher the value, the worse the $x$-based scalability. However, for an arbitrary kernel, the sub-cost $T_x$ is difficult to measure directly. As an approximation, we assume that the factors are independent of each other and that the $x$-based asymptotic complexity is about $O(x^k)$; it can then easily be shown that Equation (6) approximates the exponent $k$ (the log-log slope of the runtime).
Note that even if the scalability equals 1, the runtime cost does not necessarily change linearly with the factor. This function should be regarded as an approximate way to measure the scalability quantitatively; a computational sketch follows.
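A computational sketch of Equation (6), estimating the average log-log slope of runtime against one factor:

```python
# Normalized scalability: mean log-log slope of runtime T against factor x.
import numpy as np

def normalized_scalability(x: np.ndarray, T: np.ndarray) -> float:
    """x: factor values (e.g., node counts); T: runtimes with other factors fixed."""
    logx, logT = np.log(x), np.log(T)
    slopes = np.diff(logT) / np.diff(logx)   # pairwise log-log slopes
    return float(np.mean(slopes))

# If T grows like x^2, the scalability estimate is close to 2:
x = np.array([50, 100, 150, 200], dtype=float)
print(normalized_scalability(x, 0.01 * x ** 2))  # ~2.0
```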
The node-based, edge-based and set-based normalized scalabilities of the 10 graph kernels are given in Table 6.

4.3.4. Applicability Results

Similar but non-isomorphic graphs are usually difficult to distinguish via inexact graph comparison methods; consequently, a graph kernel may not be applicable to certain kinds of graphs. In this subsection, the ability to distinguish similar graphs is used to compare the applicability of the graph kernels. Here, similar graphs are graph pairs or graph groups with similar structures.
Table 7 shows the failure rates of the graph kernels in distinguishing the similar graph pairs collected in Table 4, including the cospectral graphs, regular graphs and strongly regular graphs.
The Kruskal-Wallis test at the 95% confidence level yields a p-value of 0.014; therefore, the choice of graph kernel has a significant influence on applicability. RWK is the worst kernel and cannot be used to distinguish these similar graphs. WLK can only locate the differences between cospectral graphs but fails for regular graphs. On the contrary, DQMK, LTK, QJSU and AGK achieve the best distinguishing abilities, even for the strongly regular graphs.
Generally, the quantum kernels show better applicability: slight topological differences are amplified by quantum interference, which yields better distinguishing ability. Therefore, when the sample graphs are similar and difficult to classify, quantum kernels are the better choice.

5. Discussion

According to the evaluations in Section 4, seven criteria are considered for each kernel: classification accuracy, F1 score, runtime cost, node-based scalability, edge-based scalability, set-based scalability and distinguishing ability. The normalized ability value (using ability $X$ of kernel $K$ as an example) is defined as follows:
$$\mathrm{Ability}_X = \frac{|X_K - X_{worst}|}{|X_{best} - X_{worst}|} \qquad (7)$$
where $X_{best}$ and $X_{worst}$ are the ability values of the best and the worst of the 10 mentioned kernels, respectively. A normalization sketch follows.
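A minimal sketch of Equation (7) as a min-max normalization over the kernels' scores on one criterion (for criteria where lower is better, such as runtime, the flag below inverts the direction; this handling is an assumption, not stated in the text):

```python
# Min-max normalization of one criterion: worst kernel -> 0, best kernel -> 1.
import numpy as np

def normalized_ability(scores: np.ndarray, higher_is_better: bool = True) -> np.ndarray:
    best = scores.max() if higher_is_better else scores.min()
    worst = scores.min() if higher_is_better else scores.max()
    return np.abs(scores - worst) / np.abs(best - worst)  # assumes best != worst
```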
According to Section 2.3, all the graph kernels can be grouped according to five different dimensions. Here we focus on the comparison of the graph kernel groups to explore the advantages and disadvantages of all the kernel groups. For each kernel group, we compare the average abilities of all the kernels in this group.
Figure 7 shows the comparison results of the five graph kernel groupings. In the radar charts in Figure 7, the larger the enclosed ability area, the better the graph kernel group; and for each criterion, the larger the value, the stronger the ability.
Through the statistical analyses, we find out that:
  • R-convolution kernels perform better on scalability and runtime cost, while the information theory kernels show better accuracy and applicability. The information theory kernels utilize the divergence between the global probability distributions of two graphs to measure the graph similarity; therefore, compared with the local pattern matching of R-convolution kernels, they achieve better accuracy and applicability.
  • Aligned kernels have stronger applicability and node-based scalability but are weaker than unaligned kernels on the other criteria. Through graph alignment, the vertex mapping is found before the kernel computation, and the slight differences between similar graph pairs can also be located in the alignment procedure. Therefore, after graph alignment, the kernel methods can utilize the vertex mapping directly, which leads to good node-based scalability.
  • Global kernels, quantum kernels and entropy kernels are worse than their counterpart kernel groups on all criteria except the distinguishing ability (applicability). This reveals that good applicability requires more complex computations in the kernel method, as in these kinds of graph kernels.
Overall, the R-convolution, unaligned, local-pattern, classical and topological kernels cover larger ability areas and show more advantages; however, these kinds of kernels lack powerful applicability. The reasons are twofold. Firstly, a complete graph kernel can distinguish all non-isomorphic graph pairs and thus has the best applicability, but computing it is intractable; to achieve powerful applicability, the computational cost must therefore be high. Secondly, distinguishing slight differences leads to poor generalization performance and thus low accuracy.

6. Conclusions

In this paper, a comprehensive evaluation of graph kernels for unattributed graphs is presented. According to five different dimensions of the design details, the existing graph kernels are catalogued, and 10 representative graph kernels are chosen and thoroughly compared on plenty of real-world and synthetic datasets. For each kernel, we focus on seven criteria: classification accuracy, F1 score, runtime cost, node-based scalability, edge-based scalability, set-based scalability and applicability. Through the kernel group comparison, it is found that the R-convolution, unaligned, local-pattern, classical and topological kernels achieve better performance on all the criteria except applicability.
The ten chosen graph kernels may not be enough to represent all the existing graph kernel methods. Therefore, the conclusions in this paper should be regarded as guidelines, useful for choosing an optimal kernel for graph classification or for designing a novel kernel. In future work, more kernels will be included, and graph kernels for attributed graphs will be considered.

Author Contributions

Conceptualization, Y.Z. and L.W. (Liandong Wang); methodology, Y.Z.; software, Y.Z. and L.W. (Lulu Wang); validation, Y.Z., L.W. (Lulu Wang) and L.W. (Liandong Wang); formal analysis, L.W. (Lulu Wang); investigation, Y.Z.; resources, Y.Z. and L.W. (Lulu Wang); data curation, Y.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, L.W. (Lulu Wang).

Funding

This research was funded by the China Postdoctoral Science Foundation, grant number 45649, and the National Natural Science Foundation of China, grant number 61701502.

Acknowledgments

The authors appreciate the kind comments and professional criticisms of the anonymous reviewers. They have greatly enhanced the overall quality of the manuscript and opened numerous perspectives geared toward improving the work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Real-World Datasets

All the real-world datasets are listed in Table A1.
Table A1. The detailed information of the real-world datasets. The dataset density is calculated as the average number of edges divided by the average number of nodes.
Dataset Name | #Set | #Class | Avg. #Nodes | Avg. #Edges | Density | Description
Mutagenicity | 4337 | 2 | 30.32 | 30.77 | 1.01 | Chemical Molecule
PTC_MM | 336 | 2 | 13.97 | 14.32 | 1.03 | Chemical Molecule
PTC_FM | 349 | 2 | 14.11 | 14.48 | 1.03 | Chemical Molecule
PTC_MR | 344 | 2 | 14.29 | 14.69 | 1.03 | Chemical Molecule
PTC_FR | 351 | 2 | 14.56 | 15 | 1.03 | Chemical Molecule
AIDS | 2000 | 2 | 15.69 | 16.2 | 1.03 | Chemical Molecule
DHFR | 467 | 2 | 42.43 | 44.54 | 1.05 | Chemical Molecule
COX2 | 467 | 2 | 41.22 | 43.45 | 1.05 | Chemical Molecule
FRANKENSTEIN | 4337 | 2 | 16.9 | 17.88 | 1.06 | Chemical Molecule
BZR | 405 | 2 | 35.75 | 38.36 | 1.07 | Chemical Molecule
NCI1 | 4110 | 2 | 29.87 | 32.3 | 1.08 | Chemical Molecule
NCI109 | 4127 | 2 | 29.68 | 32.13 | 1.08 | Chemical Molecule
MUTAG | 188 | 2 | 17.93 | 19.79 | 1.10 | Chemical Molecule
PROTEINS | 1113 | 2 | 39.06 | 72.82 | 1.86 | Chemical Molecule
PROTEINS_full | 1113 | 2 | 39.06 | 72.82 | 1.86 | Chemical Molecule
ENZYMES | 600 | 6 | 32.63 | 62.14 | 1.90 | Chemical Molecule
BZR_MD | 306 | 2 | 21.3 | 225.06 | 10.57 | Chemical Molecule
ER_MD | 446 | 2 | 21.33 | 234.85 | 11.01 | Chemical Molecule
DHFR_MD | 393 | 2 | 23.87 | 283.01 | 11.86 | Chemical Molecule
COX2_MD | 303 | 2 | 26.28 | 335.12 | 12.75 | Chemical Molecule
COIL-RAG | 3900 | 100 | 3.01 | 3.02 | 1.00 | Image
MSRC_21C | 209 | 20 | 40.28 | 96.6 | 2.40 | Image
MSRC_9 | 221 | 8 | 40.58 | 97.94 | 2.41 | Image
COIL-DEL | 3900 | 100 | 21.54 | 54.24 | 2.52 | Image
MSRC_21 | 563 | 20 | 77.52 | 198.32 | 2.56 | Image
Letter-low | 2250 | 15 | 4.68 | 3.13 | 0.67 | Handwriting
Letter-high | 2250 | 15 | 4.67 | 4.5 | 0.96 | Handwriting
Letter-med | 2250 | 15 | 4.67 | 4.5 | 0.96 | Handwriting
IMDB-BINARY | 1000 | 2 | 19.77 | 96.53 | 4.88 | Social Network
IMDB-MULTI | 1500 | 3 | 13 | 65.94 | 5.07 | Social Network
Fingerprint | 2800 | 4 | 5.42 | 4.42 | 0.82 | Fingerprint

Appendix B. The Complete Results of Accuracy Test

The detailed results of the accuracy of the classification test are shown in Table A2 and Table A3.
Table A2. The complete results of the classification accuracy. Every item is comprised of the average accuracy (%) and the standard deviation of 100 repeated tests. The bold numbers are the best result among the ten kernels for each dataset.
Datasets | SPK | WLK | AGK | GHK | RWK | QJSU | LTK | ASK | DQMK | QJSK
AIDS | 99.32 ± 0.59 | 98.75 ± 0.87 | 98.78 ± 0.84 | 99.33 ± 0.58 | 79.99 ± 2.67 | 99.73 ± 0.34 | 99.54 ± 0.46 | 96.79 ± 1.24 | 79.99 ± 2.67 | 79.99 ± 2.67
BZR | 79.95 ± 2.31 | 87.75 ± 1.53 | 78.96 ± 2.35 | 83.82 ± 1.86 | 78.61 ± 2.34 | 83.26 ± 2.01 | 79.23 ± 2.26 | 79.29 ± 2.56 | 78.61 ± 2.34 | 78.61 ± 2.34
BZR_MD | 60.71 ± 2.51 | 59.25 ± 2.82 | 48.96 ± 0.98 | 60.56 ± 2.36 | 61.49 ± 1.26 | 62.33 ± 2.00 | 60.59 ± 2.41 | 60.87 ± 2.26 | 59.08 ± 1.15 | 55.41 ± 2.36
COIL-DEL | 12.04 ± 1.60 | 12.43 ± 1.51 | 8.80 ± 1.41 | 18.11 ± 1.75 | 0.94 ± 0.49 | 7.83 ± 1.23 | 7.70 ± 1.16 | 4.46 ± 1.05 | 6.01 ± 1.10 | 0.84 ± 0.08
COIL-RAG | 4.99 ± 1.11 | 5.89 ± 1.14 | 3.34 ± 0.87 | 5.77 ± 1.18 | 0.83 ± 0.05 | 6.16 ± 1.13 | 5.20 ± 1.16 | 2.45 ± 0.76 | 3.76 ± 0.93 | 0.83 ± 0.05
COX2 | 78.15 ± 5.64 | 78.87 ± 5.63 | 78.15 ± 5.64 | 78.65 ± 5.74 | 78.15 ± 5.64 | 78.69 ± 5.94 | 78.17 ± 5.57 | 78.15 ± 5.64 | 78.15 ± 5.64 | 78.08 ± 5.69
COX2_MD | 47.95 ± 1.12 | 47.51 ± 1.86 | 47.95 ± 1.12 | 48.00 ± 1.14 | 47.00 ± 1.83 | 46.80 ± 2.00 | 47.57 ± 1.19 | 47.44 ± 1.08 | 46.77 ± 1.33 | 47.85 ± 1.01
DHFR | 70.09 ± 5.89 | 82.13 ± 4.31 | 61.23 ± 5.48 | 79.77 ± 4.75 | 61.23 ± 5.48 | 79.03 ± 4.09 | 60.91 ± 5.04 | 47.40 ± 4.80 | 76.77 ± 4.96 | 61.23 ± 5.48
DHFR_MD | 68.11 ± 3.61 | 67.07 ± 3.57 | 68.11 ± 3.61 | 66.81 ± 3.61 | 67.55 ± 3.57 | 66.88 ± 3.62 | 67.12 ± 3.66 | 67.72 ± 3.68 | 68.11 ± 3.61 | 68.11 ± 3.61
ER_MD | 59.65 ± 3.50 | 63.14 ± 3.62 | 59.14 ± 3.07 | 62.62 ± 3.26 | 62.50 ± 3.03 | 61.65 ± 3.31 | 62.78 ± 3.01 | 63.19 ± 3.62 | 59.04 ± 3.05 | 59.14 ± 3.07
ENZYMES | 28.70 ± 5.58 | 37.39 ± 6.48 | 26.65 ± 5.78 | 37.34 ± 6.58 | 11.63 ± 3.12 | 31.97 ± 6.21 | 22.89 ± 5.19 | 30.27 ± 5.67 | 28.91 ± 5.71 | 19.56 ± 2.44
Fingerprint | 26.62 ± 2.81 | 27.23 ± 2.93 | 29.75 ± 3.04 | 26.68 ± 2.69 | 24.57 ± 2.75 | 30.16 ± 3.12 | 29.56 ± 3.21 | 24.74 ± 2.62 | 30.73 ± 3.00 | 24.14 ± 2.85
FRANKENSTEIN | 60.46 ± 2.35 | 72.40 ± 1.89 | 59.46 ± 2.29 | 67.33 ± 2.08 | 57.54 ± 2.67 | 66.91 ± 2.22 | 62.07 ± 2.25 | 63.33 ± 2.00 | 63.77 ± 2.39 | 52.51 ± 3.16
IMDB-BINARY | 59.09 ± 5.21 | 72.45 ± 4.33 | 64.92 ± 5.22 | 71.74 ± 4.47 | 67.50 ± 5.14 | 62.10 ± 5.24 | 61.75 ± 5.21 | 63.57 ± 5.03 | 46.40 ± 4.30 | 50.36 ± 5.57
IMDB-MULTI | 40.71 ± 4.63 | 50.96 ± 4.37 | 40.11 ± 4.25 | 50.45 ± 3.61 | 46.20 ± 4.52 | 43.24 ± 4.14 | 45.81 ± 3.72 | 42.81 ± 5.15 | 49.02 ± 4.93 | 43.51 ± 4.15
Letter-high | 28.99 ± 3.02 | 33.87 ± 2.84 | 32.03 ± 3.09 | 34.20 ± 3.03 | 13.25 ± 2.23 | 46.36 ± 3.13 | 30.71 ± 3.05 | 28.20 ± 3.08 | 32.89 ± 3.50 | 26.90 ± 3.03
Letter-low | 32.82 ± 2.87 | 39.30 ± 3.18 | 43.92 ± 3.11 | 37.15 ± 3.31 | 6.86 ± 2.70 | 78.90 ± 2.50 | 33.26 ± 3.03 | 33.03 ± 5.61 | 46.74 ± 3.29 | 27.71 ± 2.09
Letter-med | 30.63 ± 2.73 | 36.36 ± 3.40 | 39.70 ± 3.15 | 35.62 ± 3.27 | 7.16 ± 2.64 | 74.02 ± 2.87 | 30.82 ± 3.16 | 28.66 ± 5.28 | 43.72 ± 3.25 | 27.10 ± 2.17
Mutagenicity | 63.64 ± 2.18 | 78.48 ± 1.86 | 59.62 ± 2.38 | 69.34 ± 2.11 | 55.32 ± 2.39 | 69.07 ± 2.10 | 60.17 ± 2.17 | 62.44 ± 2.04 | 58.71 ± 2.67 | 55.32 ± 2.39
MSRC_9 | 16.53 ± 1.53 | 13.00 ± 2.98 | 8.05 ± 1.10 | 19.29 ± 2.70 | 8.49 ± 1.28 | 15.00 ± 0.37 | 9.82 ± 1.61 | 17.78 ± 3.10 | 14.73 ± 2.02 | 9.38 ± 2.47
MSRC_21 | 12.86 ± 0.68 | 7.52 ± 0.77 | 5.66 ± 1.74 | 11.35 ± 1.83 | 4.12 ± 0.49 | 6.92 ± 1.18 | 5.02 ± 1.87 | 7.12 ± 1.22 | 3.54 ± 1.22 | 4.39 ± 2.10
MSRC_21C | 19.14 ± 2.90 | 11.82 ± 1.40 | 13.24 ± 1.69 | 14.32 ± 2.44 | 12.26 ± 1.14 | 17.12 ± 1.35 | 15.04 ± 1.41 | 15.52 ± 1.25 | 11.28 ± 1.22 | 11.78 ± 1.27
MUTAG | 82.96 ± 3.65 | 81.69 ± 2.31 | 80.84 ± 3.52 | 85.40 ± 2.65 | 77.00 ± 3.41 | 82.67 ± 2.19 | 83.29 ± 2.81 | 84.20 ± 3.46 | 76.42 ± 2.88 | 79.00 ± 3.41
NCI1 | 61.76 ± 2.40 | 81.77 ± 1.79 | 62.48 ± 2.29 | 68.02 ± 2.36 | 57.95 ± 1.46 | 66.25 ± 2.21 | 62.84 ± 2.47 | 64.49 ± 2.55 | 65.18 ± 2.33 | 57.82 ± 1.60
NCI109 | 62.16 ± 2.33 | 82.29 ± 1.89 | 62.41 ± 2.28 | 67.34 ± 2.51 | 59.80 ± 2.11 | 66.10 ± 2.56 | 62.58 ± 2.44 | 63.22 ± 2.56 | 64.88 ± 2.25 | 59.62 ± 2.18
PTC_FM | 59.09 ± 2.23 | 58.64 ± 2.36 | 60.08 ± 2.20 | 59.62 ± 2.11 | 58.75 ± 2.89 | 59.91 ± 2.15 | 61.22 ± 2.06 | 59.39 ± 2.52 | 59.33 ± 1.76 | 58.39 ± 2.00
PTC_FR | 65.43 ± 3.58 | 65.30 ± 4.13 | 65.50 ± 3.64 | 65.31 ± 3.62 | 65.66 ± 3.56 | 64.49 ± 3.62 | 65.51 ± 3.35 | 65.10 ± 3.27 | 65.62 ± 3.57 | 65.49 ± 3.57
PTC_MM | 60.92 ± 2.69 | 61.98 ± 2.01 | 62.05 ± 2.58 | 61.45 ± 2.59 | 61.27 ± 2.64 | 59.37 ± 2.48 | 59.60 ± 2.63 | 60.99 ± 2.49 | 61.09 ± 2.73 | 61.08 ± 2.76
PTC_MR | 56.39 ± 3.51 | 56.68 ± 3.69 | 56.35 ± 4.29 | 55.94 ± 4.34 | 56.05 ± 3.13 | 56.29 ± 4.17 | 56.92 ± 4.14 | 56.22 ± 4.88 | 57.29 ± 4.20 | 55.61 ± 4.23
PROTEINS | 72.50 ± 4.38 | 72.77 ± 4.11 | 71.29 ± 4.26 | 74.13 ± 3.97 | 70.13 ± 4.88 | 71.79 ± 4.03 | 71.29 ± 4.15 | 72.00 ± 4.32 | 66.05 ± 4.61 | 70.13 ± 4.88
PROTEINS_full | 72.36 ± 3.86 | 72.56 ± 4.19 | 70.88 ± 3.95 | 73.81 ± 3.78 | 69.26 ± 4.31 | 71.62 ± 4.26 | 70.88 ± 3.96 | 71.74 ± 4.01 | 65.25 ± 4.47 | 69.26 ± 4.31
Average | 51.44 ± 3.00 | 55.39 ± 2.90 | 50.59 ± 2.94 | 54.49 ± 2.98 | 46.10 ± 2.77 | 55.90 ± 2.83 | 50.64 ± 2.90 | 51.28 ± 3.19 | 50.58 ± 3.00 | 47.07 ± 2.87
Table A3. The complete result of the F1 score test. Every item shows the average score (%) and the standard deviation of 100 repeated tests. The bold numbers are the best result among the ten kernels for each dataset.
Datasets | SPK | WLK | AGK | GHK | RWK | QJSU | LTK | ASK | DQMK | QJSK
COIL-DEL | 12.55 ± 1.39 | 13.07 ± 1.59 | 10.98 ± 1.61 | 7.59 ± 2.21 | 3.23 ± 0.72 | 11.77 ± 1.67 | 5.01 ± 0.71 | 4.74 ± 0.69 | 6.67 ± 1.74 | 1.02 ± 0.1
COIL-RAG | 2.91 ± 0.79 | 4.31 ± 0.72 | 2.3 ± 0.74 | 4.25 ± 1.13 | 1.78 ± 0.77 | 4.63 ± 1.26 | 3.77 ± 0.94 | 2.34 ± 1.14 | 3.40 ± 0.80 | 0.83 ± 0.11
ENZYMES | 30.61 ± 3.38 | 37.35 ± 4.5 | 30.04 ± 7.31 | 37.67 ± 4.87 | 11.76 ± 5.06 | 33.98 ± 7.34 | 22.8 ± 6.26 | 30.7 ± 4.77 | 26.94 ± 5.85 | 5.88 ± 0.67
Fingerprint | 5.78 ± 1.51 | 6.63 ± 0.54 | 6.54 ± 1.59 | 6.26 ± 0.64 | 4.43 ± 1.01 | 7.14 ± 1.1 | 6.37 ± 1.5 | 3.35 ± 1.13 | 7.17 ± 0.78 | 2.78 ± 0.34
IMDB-MULTI | 40.89 ± 3.08 | 51.67 ± 3.09 | 42.5 ± 1.61 | 51.94 ± 3.16 | 48.29 ± 4.23 | 44.1 ± 4.01 | 46.55 ± 3.75 | 46.0 ± 2.57 | 37.76 ± 3.99 | 29.86 ± 4.67
Letter-high | 28.98 ± 3.74 | 31.69 ± 2.01 | 31.73 ± 2.02 | 34.28 ± 2.86 | 11.08 ± 1.83 | 47.86 ± 2.37 | 31.53 ± 3.73 | 27.3 ± 2.45 | 38.03 ± 2.77 | 6.09 ± 1.03
Letter-low | 32.21 ± 3.66 | 35.84 ± 2.96 | 36.17 ± 1.96 | 33.75 ± 1.83 | 11.91 ± 3.05 | 79.32 ± 1.28 | 32.25 ± 2.46 | 31.82 ± 5.79 | 44.17 ± 1.59 | 6.72 ± 0.89
Letter-med | 28.16 ± 2.75 | 32.95 ± 3.75 | 33.67 ± 1.85 | 33.3 ± 2.57 | 12.87 ± 4.46 | 74.1 ± 3.23 | 29.49 ± 4.31 | 26.73 ± 3.13 | 41.3 ± 1.14 | 5.00 ± 0.29
MSRC_9 | 22.13 ± 2.52 | 10.23 ± 1.66 | 18.0 ± 1.85 | 13.68 ± 2.34 | 14.28 ± 1.78 | 9.34 ± 1.12 | 9.40 ± 2.60 | 16.47 ± 3.52 | 12.38 ± 2.56 | 1.78 ± 0.19
MSRC_21 | 14.02 ± 2.77 | 8.47 ± 1.37 | 6.41 ± 1.16 | 10.85 ± 2.53 | 8.15 ± 1.67 | 5.88 ± 0.14 | 3.43 ± 0.91 | 4.26 ± 1.76 | 0.49 ± 0.16 | 0.74 ± 0.10
MSRC_21C | 6.56 ± 0.47 | 2.96 ± 0.29 | 3.81 ± 0.43 | 3.96 ± 0.35 | 1.97 ± 0.2 | 3.94 ± 0.43 | 5.37 ± 0.36 | 5.28 ± 0.31 | 2.11 ± 0.14 | 1.01 ± 0.11
Average | 20.4 ± 2.7 | 21.40 ± 2.0 | 20.2 ± 2.0 | 22.5 ± 2.2 | 11.8 ± 2.2 | 29.3 ± 2.2 | 17.8 ± 2.5 | 18.10 ± 2.80 | 20.0 ± 2.00 | 5.6 ± 0.87

References

1. Ta, V.T.; Lézoray, O.; Elmoataz, A.; Schüpp, S. Graph-based tools for microscopic cellular image segmentation. Pattern Recognit. 2009, 42, 1113–1125.
2. Raymond, J.W.; Willett, P. Maximum common subgraph isomorphism algorithms for the matching of chemical structures. J. Comput.-Aided Mol. Des. 2002, 16, 521–533.
3. Fan, W. Graph pattern matching revised for social network analysis. In Proceedings of the International Conference on Database Theory, Berlin, Germany, 26–29 March 2012; pp. 8–21.
4. Ngomo, A.C.N.; Schumacher, F. BorderFlow: A Local Graph Clustering Algorithm for Natural Language Processing. In Proceedings of the International Conference on Computational Linguistics and Intelligent Text Processing, Mexico City, Mexico, 1–7 March 2009; pp. 547–558.
5. Mahé, P.; Vert, J.P. Graph kernels based on tree patterns for molecules. Mach. Learn. 2009, 75, 3–35.
6. Aziz, F.; Wilson, R.C.; Hancock, E.R. Backtrackless walks on a graph. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 977–989.
7. Neuhaus, M.; Bunke, H. Bridging the Gap between Graph Edit Distance and Kernel Machines; World Scientific: Singapore, 2007.
8. Torresani, L.; Kolmogorov, V.; Rother, C. Feature correspondence via graph matching: Models and global optimization. In Proceedings of the Computer Vision—ECCV, Marseille, France, 12–18 October 2008; pp. 596–609.
9. Shervashidze, N.; Schweitzer, P.; Leeuwen, E.J.; Mehlhorn, K.; Borgwardt, K.M. Weisfeiler-Lehman graph kernels. J. Mach. Learn. Res. 2011, 12, 2539–2561.
10. Shervashidze, N.; Vishwanathan, S.V.; Petri, T.; Mehlhorn, K.; Borgwardt, K. Efficient graphlet kernels for large graph comparison. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, Clearwater Beach, FL, USA, 16–18 April 2009; pp. 488–495.
11. Bai, L.; Rossi, L.; Torsello, A.; Hancock, E.R. A quantum Jensen–Shannon graph kernel for unattributed graphs. Pattern Recognit. 2015, 48, 344–355.
12. Bai, L.; Rossi, L.; Cui, L.; Zhang, Z.; Ren, P.; Bai, X.; Hancock, E. Quantum kernels for unattributed graphs using discrete-time quantum walks. Pattern Recognit. Lett. 2017, 87, 96–103.
13. Orsini, F.; Frasconi, P.; De Raedt, L. Graph invariant kernels. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, 25–31 July 2015; pp. 3756–3762.
14. Morris, C.; Kriege, N.M.; Kersting, K.; Mutzel, P. Faster kernels for graphs with continuous attributes via hashing. In Proceedings of the IEEE 16th International Conference on Data Mining (ICDM 2016), Barcelona, Spain, 12–15 December 2016; pp. 1095–1100.
15. Berg, J.; Lässig, M. Local graph alignment and motif search in biological networks. Proc. Natl. Acad. Sci. USA 2004, 101, 14689–14694.
16. Ferreira, J.C.; Menegatto, V.A. Eigenvalues of integral operators defined by smooth positive definite kernels. Integral Equ. Oper. Theory 2009, 64, 61–81.
17. Balcan, M.F.; Blum, A. On a theory of learning with similarity functions. In Proceedings of the International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 73–80.
18. Chen, Y.; Gupta, M.R.; Recht, B. Learning kernels from indefinite similarities. In Proceedings of the International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 145–152.
19. Haussler, D. Convolution Kernels on Discrete Structures; Technical Report; Department of Computer Science, University of California at Santa Cruz: Santa Cruz, CA, USA, 1999.
20. Bai, L. Information Theoretic Graph Kernels; University of York: York, UK, 2014.
21. Bai, L.; Rossi, L.; Zhang, Z.; Hancock, E. An aligned subtree kernel for weighted graphs. In Proceedings of the International Conference on Machine Learning (ICML 2015), Lille, France, 6–11 July 2015; pp. 30–39.
22. Gärtner, T.; Flach, P.; Wrobel, S. On graph kernels: Hardness results and efficient alternatives. In Proceedings of the 16th Annual Conference on Learning Theory and 7th Kernel Workshop (COLT/Kernel 2003), Washington, DC, USA, 24–27 August 2003; pp. 129–143.
23. Borgwardt, K.M.; Kriegel, H.P. Shortest-path kernels on graphs. In Proceedings of the 5th IEEE International Conference on Data Mining (ICDM 2005), Houston, TX, USA, 27–30 November 2005; pp. 8–16.
24. Rossi, L.; Torsello, A.; Hancock, E.R. Measuring graph similarity through continuous-time quantum walks and the quantum Jensen-Shannon divergence. Phys. Rev. E 2015, 91, 022815.
25. Bai, L.; Rossi, L.; Bunke, H.; Hancock, E.R. Attributed graph kernels using the Jensen-Tsallis q-differences. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Nancy, France, 15–19 September 2014; pp. 99–114.
26. Bai, L.; Hancock, E.R. Graph kernels from the Jensen-Shannon divergence. J. Math. Imaging Vis. 2013, 47, 60–69.
27. Bai, L.; Zhang, Z.; Wang, C.; Bai, X.; Hancock, E.R. A graph kernel based on the Jensen-Shannon representation alignment. In Proceedings of the IJCAI, Buenos Aires, Argentina, 25–31 July 2015; pp. 3322–3328.
28. Rossi, L.; Torsello, A.; Hancock, E.R. Unfolding kernel embeddings of graphs: Enhancing class separation through manifold learning. Pattern Recognit. 2015, 48, 3357–3370.
29. Fröhlich, H.; Wegner, J.K.; Sieker, F.; Zell, A. Optimal assignment kernels for attributed molecular graphs. In Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), Bonn, Germany, 7–11 August 2005; pp. 225–232.
30. Vert, J.P. The optimal assignment kernel is not positive definite. arXiv 2008, arXiv:0801.4061.
31. Kriege, N.M.; Giscard, P.L.; Wilson, R. On valid optimal assignment kernels and applications to graph classification. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016; pp. 1623–1631.
32. Johansson, F.; Jethava, V.; Dubhashi, D.; Bhattacharyya, C. Global graph kernels using geometric embeddings. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Beijing, China, 21–26 June 2014; pp. 21–26.
33. Kondor, R.; Pan, H. The multiscale Laplacian graph kernel. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016; pp. 2990–2998.
34. Childs, A.M. Universal computation by quantum walk. Phys. Rev. Lett. 2009, 102, 180501.
35. Bai, L.; Zhang, Z.; Ren, P.; Rossi, L.; Hancock, E.R. An edge-based matching kernel through discrete-time quantum walks. In Proceedings of the International Conference on Image Analysis and Processing (ICIAP 2015), Genoa, Italy, 7–11 September 2015; pp. 27–38.
36. Feragen, A.; Kasenburg, N.; Petersen, J.; de Bruijne, M.; Borgwardt, K. Scalable kernels for graphs with continuous attributes. In Proceedings of the Advances in Neural Information Processing Systems (NIPS 2013), Lake Tahoe, NV, USA, 5–8 December 2013; pp. 216–224.
37. Costa, F.; De Grave, K. Fast neighborhood subgraph pairwise distance kernel. In Proceedings of the ICML, Haifa, Israel, 21–24 June 2010.
38. Horváth, T.; Gärtner, T.; Wrobel, S. Cyclic pattern kernels for predictive graph mining. In Proceedings of the KDD, Seattle, WA, USA, 22–25 August 2004.
39. The Graph Kernel Benchmarks. Available online: https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets (accessed on 15 June 2018).
40. The Datasets and the Matlab Codes. Available online: https://github.com/YiZhangNUDT/graph_kernel_test (accessed on 10 September 2018).
41. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
42. Liu, X.; Dou, Y.; Yin, J.; Wang, L.; Zhu, E. Multiple kernel k-means clustering with matrix-induced regularization. In Proceedings of the AAAI 2016, Phoenix, AZ, USA, 12–17 February 2016; pp. 1888–1894.
Figure 1. Example of the kernel method.
Figure 2. The classification accuracies of the 10 mentioned kernels for the real-world datasets. The detailed information about this test is shown in Table A2.
Figure 3. The F1 score of the 10 chosen kernels for the multi-class datasets. The detailed result of this test is shown in Table A3.
Figure 4. The runtime comparison of kernels for the node-based scalability test. (a) The runtime costs of QJSK, DQMK and AGK remain stable as the graph size increases. (b) The ascending runtime trends of the other kernels.
Figure 5. The runtime comparison of kernels for the edge-based scalability test. Compared with QJSK, DQMK and AGK shown in (b), the kernels in (a) almost maintain their runtime costs as the graph density increases.
Figure 6. The runtime comparison of kernels for the set-based scalability test. Compared with QJSK, DQMK and RWK shown in (b), the kernels in (a) show slower increasing trends as the graph set grows.
Figure 7. The ability comparison of all five graph kernel groupings on the seven criteria. (a) R-convolution kernels vs. information theory kernels. (b) Aligned kernels vs. unaligned kernels. (c) Local kernels vs. global kernels. (d) Quantum kernels vs. classical kernels. Note that for the 10 chosen kernels, the radar chart of entropy kernels vs. topological kernels is the same as that of quantum kernels vs. classical kernels.
Table 1. The 15 chosen graph kernels and their group memberships.
Kernel Name | Framework | Aligned | Matching Pattern | Computing Model | Methodology
SPK [23] | R-convolution | No | Local (Path) | Classical | Topology
WLK [9] | R-convolution | No | Local (Subtree) | Classical | Topology
AGK [10] | R-convolution | No | Local (Graphlet) | Classical | Topology
GHK [36] | R-convolution | No | Local (Path) | Classical | Topology
RWK [22] | R-convolution | No | Local (Walk) | Classical | Topology
QJSU [11] | Information Theory | No | Global | CTQW | Entropy
LTK [32] | R-convolution | No | Global | Classical | Topology
ASK [21] | R-convolution | Yes | Local (Subtree) | CTQW | Entropy
DQMK [35] | R-convolution | Yes | Local (Edge) | DTQW | Entropy
QJSK [12] | Information Theory | No | Global | DTQW | Entropy
BWK [6] | R-convolution | No | Local (Walk) | Classical | Topology
JTK [25] | Information Theory | No | Global | CTQW | Entropy
NSPDK [37] | R-convolution | No | Local (Subgraph) | Classical | Topology
CPK [38] | R-convolution | No | Local (Cycle) | Classical | Topology
MLGK [33] | Information Theory | No | Mix | Classical | Topology
Table 2. The time complexities of the 15 chosen graph kernels.
Kernel Name | Complexity | P.S.
SPK | $O(KN^4 + K^3)$ | -
WLK | $O(KN^2 + K^2N^2)$ | -
AGK | $O(KN^4 + K^3)$ | e.g., graphlet size is 4
GHK | $O(K^2N^2(E + \log N + N^2))$ | -
RWK | $O(K^2N^3)$ | -
QJSU | $O(K^2N^3)$ | -
LTK | $O(K^2(EN^2 + N^3))$ | Gaussian kernel as the base kernel
ASK | $O(K^2N^4)$ | -
DQMK | $O(K^2NE^3)$ | -
QJSK | $O(K^2NE^3)$ | -
BWK | $O(K^2N^3)$ | -
JTK | $O(K^2N^2 + KN^3)$ | -
NSPDK | $O(K^2N^2E\log E)$ | -
CPK | $O(K(N^2 + E))$ | -
MLGK | $O(K^2N^5)$ | -
Table 3. The random graph datasets for the scalability evaluation (ranges use start:step:end notation).
Dataset Name | #Set | #Nodes | #Edges | #Class
50-Node | 3000 | 50 | 50:50:1500 | 30
200-Edge | 2000 | 20:5:115 | 200 | 20
50-Node&150-Edge | 100:100:1500 | 50 | 150 | 15
Table 4. Three similar-graph datasets. In CosGraph, every cospectral graph pair is used as a test. In RegGraph and SRGraph, pairwise comparisons of the graphs in each class are performed using the kernel methods.
Dataset Name | #Set | #Class | Avg. #Nodes | #Test Pairs
CosGraph | 10,096 | 5048 | 10 | 5048
RegGraph | 6490 | 31 | 16.34 | 885,128
SRGraph | 7303 | 11 | 37.63 | 5,099,490
Table 5. The runtime costs of the graph kernels on the real-world datasets.
Kernel Name | Minimum Runtime | Maximum Runtime | Average Runtime
SPK | 0.19 s | 11.81 s | 2.97 s
WLK | 2.42 s | 2.01 m | 26 s
AGK | 3.09 s | 6.38 m | 1.50 m
GHK | 14.02 s | 4.42 h | 27.93 m
RWK | 22.40 s | 4.27 h | 45.6 m
QJSU | 18.02 s | 3.95 h | 50.3 m
LTK | 67.99 s | 4.86 h | 1.05 h
ASK | 84.82 s | 14.71 h | 2.55 h
DQMK | 85.50 s | 47.17 h | 7.85 h
QJSK | 62.54 s | 70.44 h | 10.6 h
Table 6. The three kinds of normalized scalabilities of the 10 mentioned kernels. The results in red italic font denote the poor scalabilities observed in Figure 4, Figure 5 and Figure 6, while the results in blue bold font are outstanding scalabilities.
Kernel Name | Node-Based Scalability | Edge-Based Scalability | Set-Based Scalability
SPK | 1.96 | −0.41 | 1.04
WLK | 0.96 | −0.02 | 1.14
AGK | −0.58 | 1.96 | 0.98
GHK | 1.60 | 0.09 | 1.69
RWK | 0.68 | 0.61 | 2.06
QJSU | 5.00 | 0.17 | 1.34
LTK | 1.82 | −0.12 | 1.10
ASK | 1.86 | 0.22 | 1.48
DQMK | −0.04 | 2.48 | 2.07
QJSK | −0.04 | 3.08 | 2.32
Table 7. The failure rates of all the kernels in distinguishing the similar graphs.
Kernel Name | Cospectral Graphs | Regular Graphs | Strongly Regular Graphs | Average
SPK | 33.16% | 1.14% | 100% | 44.77%
WLK | 1.66% | 100% | 100% | 67.22%
AGK | 5.96% | 4.87% | 3.82% | 4.88%
GHK | 18.82% | 0.12% | 100% | 39.65%
RWK | 100% | 100% | 100% | 100%
QJSU | 0% | 2.79% | 0.65% | 1.15%
LTK | 0% | 0% | 0% | 0%
ASK | 33.16% | 1.14% | 95.96% | 43.42%
DQMK | 0% | 0% | 0% | 0%
QJSK | 33.16% | 1.14% | 13.60% | 15.97%
