Article

Data Quality Measures and Efficient Evaluation Algorithms for Large-Scale High-Dimensional Data

School of Cybersecurity, Korea University, Seoul 02841, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(2), 472; https://doi.org/10.3390/app11020472
Submission received: 31 October 2020 / Revised: 31 December 2020 / Accepted: 4 January 2021 / Published: 6 January 2021
(This article belongs to the Special Issue Trustworthiness in Mobile Cyber Physical Systems)

Abstract

Machine learning has been proven to be effective in various application areas, such as object and speech recognition on mobile systems. Since a critical key to machine learning success is the availability of large training data, many datasets are being disclosed and published online. From the point of view of a data consumer or manager, measuring data quality is an important first step in the learning process: we need to determine which datasets to use, update, and maintain. However, not many practical ways to measure data quality are available today, especially for large-scale high-dimensional data such as images and videos. This paper proposes two types of data quality measures that capture class separability and in-class variability, two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; we suggest that in-class variability is another important data quality factor. We provide efficient algorithms to compute our quality measures based on random projection and bootstrapping, with statistical benefits on large-scale high-dimensional data. In experiments, we show that our measures are compatible with classical measures on small-scale data and can be computed much more efficiently on large-scale high-dimensional datasets.

1. Introduction

We are witnessing the success of machine learning in various research and application areas, such as vision inspection, energy consumption estimation, and autonomous driving, to name a few. One major contributor to this success is the fact that datasets are continuously accumulated and openly published in several domains. Low data quality is very likely to cause inferior prediction performance of machine learning models, and therefore measuring data quality is an indispensable step in a machine learning process. Especially in real-time and mission-critical cyber-physical systems, defining appropriate data quality measures is critical, since the low generalization performance of a deployed model can result in system malfunction and possibly catastrophic damage to the physical world. Despite its importance, there exist only a few works on measuring data quality, and most of them are hard to evaluate on large-scale high-dimensional data due to their computational complexity.
A popular early work on data quality measures is that of Ho and Basu [1], who proposed 12 quality measures which are simple but powerful enough to address different aspects of data quality. These measures have limitations, however, in that they are difficult to compute for large-scale, high-dimensional, and multi-class datasets. Baumgartner and Somorjai [2] proposed a quality measure designed for high-dimensional biomedical datasets; however, it does not work efficiently on large-scale data. Recently, Branchaud-Charron et al. [3] proposed a quality measure for high-dimensional data using spectral clustering. Although this measure is adequate for large-scale high-dimensional data, it requires an embedding network whose training involves a large amount of computation time.
In this paper, we propose three new quality measures, called $M_{sep}$, $M_{var}$, and $M_{var}^i$, and computation methods that overcome the above limitations. Our approach is inspired by Fisher's linear discriminant analysis (LDA) [4], which is mathematically well-defined for finding a feature subspace that maximizes class separability. Our computation method uses two techniques from statistics, random projection [5] and bootstrapping [6], to compute the measures efficiently for large-scale high-dimensional data.
The contributions of our paper are summarized as follows:
  • We propose three new data quality measures that can be computed directly from a given dataset and can be compared across datasets with different numbers of classes, examples, and features. Although our approach takes ideas from LDA (linear discriminant analysis) and PCA (principal component analysis), these techniques by themselves do not produce single numbers that are comparable across different datasets.
  • We provide efficient algorithms to approximate the suggested data quality measures, making them available for large-scale high-dimensional data.
  • The proposed class separability measure $M_{sep}$ is strongly correlated with the actual classification performance of linear and non-linear classifiers in our experiments.
  • The proposed in-class variability measures $M_{var}$ and $M_{var}^i$ quantify the diversity of data within each class and can be used to analyze redundancy or outlier issues.

2. Related Work

In general, the quality of data can be measured by Kolmogorov complexity, also known as descriptive complexity or algorithmic entropy [7]. However, Kolmogorov complexity is not computable in practice; instead, approximations are used. To our knowledge, there are three main approaches to approximating Kolmogorov complexity: descriptor-based, classifier-based, and graph-based approaches. We describe these three main categories below.

2.1. Descriptor-Based Approaches

Ho and Basu [1] proposed simple but powerful quality measures based on descriptors. They proposed 12 quality measures, namely $F_1$, $F_2$, $F_3$, $L_1$, $L_2$, $L_3$, $N_1$, $N_2$, $N_3$, $N_4$, $T_1$, and $T_2$. The F measures represent the amount of feature overlap. In particular, $F_1$ measures the maximum Fisher's ratio, which represents the maximum discriminant power among individual features. $F_2$ represents the volume of the overlap region of the two class-conditional distributions. $F_3$ captures the ratio of overlapping features using the maximum and minimum feature values. The L measures concern the linear separability of classes: $L_1$ is the minimum error of a linear program (LP), $L_2$ is the error rate of a linear classifier obtained by LP, and $L_3$ is the error rate of a linear classifier after feature interpolation. The N measures represent mixture identifiability, the distinguishability of data points belonging to two different classes: $N_1$ is the ratio of nodes connected to different classes in the minimum spanning tree over all data points, $N_2$ is the ratio of the average intra-class distance to the average inter-class distance, $N_3$ is the leave-one-out error rate of the nearest-neighbor classifier (1NN), and $N_4$ is the error rate of 1NN after feature interpolation. The T measures represent topological characteristics of a dataset: $T_1$ is the number of hyperspheres adjacent to features of other classes, and $T_2$ is the average number of data points per dimension. These quality measures capture various aspects of data quality; however, they are defined for binary classification and are not directly applicable to multi-class problems. Furthermore, measures such as $N_1$, $N_2$, and $N_3$ require a large amount of computation time on large-scale high-dimensional data.
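To make the flavor of these descriptors concrete, the following is a minimal sketch of the $F_1$ measure for a two-class dataset (our own reading of the definition; function and variable names are ours, not from Ho and Basu [1]):

```python
import numpy as np

def f1_max_fisher_ratio(X0, X1):
    """Maximum Fisher's discriminant ratio over individual features.
    X0, X1: arrays of shape (m0, n) and (m1, n) for the two classes."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    var0, var1 = X0.var(axis=0), X1.var(axis=0)
    ratio = (mu0 - mu1) ** 2 / (var0 + var1 + 1e-12)  # per-feature Fisher's ratio
    return ratio.max()

# toy usage with two Gaussian blobs
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
X1 = rng.normal(loc=2.0, scale=1.0, size=(100, 5))
print(f1_max_fisher_ratio(X0, X1))
```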
Baumgartner and Somorjai [2] proposed a quality measure for high-dimensional but small biomedical datasets. They used singular value decomposition (SVD) with time complexity $O(\min(m^2 n, m n^2))$, where m is the number of data points and n is the number of features. Thus, it is computationally demanding to calculate their measures for datasets with large m and n, such as recent image datasets.
There are other descriptor-based approaches, for example, for meta-learning [8,9], classifier recommendation [10], and synthetic data generation [11]. However, these works consider only a small number of data points in a low-dimensional space.

2.2. Graph-Based Approaches

Branchaud-Charron et al. [3] proposed a graph-based quality measure using spectral clustering. First, they compute a probabilistic divergence-based $K \times K$ class similarity matrix S, where K is the number of classes. Then, an adjacency matrix W is computed from S. The quality measure, called the cumulative spectral gradient ($CSG$), is defined as the cumulative sum of the eigenvalue gaps and represents the minimum cutting cost of S. The authors also used a convolutional neural network-based auto-encoder and t-SNE [12,13] to find an embedding that represents the data points (images in their case) well. Although the method is designed for high-dimensional data, it requires training a good embedding network to reach quality performance.
Duin and Pękalska [14] proposed a quality measure based on a dissimilarity matrix of data points. Since calculating the dissimilarity matrix is a time-consuming process, the method is not adequate for large-scale high-dimensional data.

2.3. Classifier-Based Approaches

Li et al. [15] proposed a classifier-based quality measure called the intrinsic dimension, the minimum number of parameters a model needs to reach a desired solution for a given problem; for example, in a neural network, the intrinsic dimension is the minimum number of parameters required to reach the desired prediction performance.
The method has the benefit that it can be applied to many different types of data as long as a trainable classifier is available for the data; however, it often incurs a high computation cost, since the number of classifier parameters has to be changed iteratively during data quality evaluation.
Overall, the existing data quality measures are mostly designed for binary classification in low-dimensional spaces with a small number of data points. Due to their computational complexity, they tend to consume a large amount of time when applied to large-scale high-dimensional data. In addition, the existing measures tend to focus only on the inter-class aspects of data quality. In this paper, we propose new data quality measures suitable for large-scale high-dimensional data that resolve the above-mentioned issues.

3. Methods

In this section, we formally describe our data quality measures. We focus on multi-class classification tasks where each data point is associated with a class label out of c categories ($c \geq 2$). Our measures are created by adapting ideas from Fisher's LDA [4]. Fisher's LDA is a dimensionality reduction technique that finds a projection matrix maximizing the between-class variance while minimizing the within-class variance. Motivated by this idea, we propose two types of data quality measures: class separability $M_{sep}$ and in-class variability $M_{var}$ and $M_{var}^i$. For efficient handling of large-scale high-dimensional data, we also propose techniques that reduce both computation and memory requirements by taking advantage of two statistical methods, bootstrapping [6] and random projection [5].

3.1. Fisher’s LDA

The objective of Fisher’s LDA [4] is to find the feature subspace which maximizes the linear separability of a given dataset. Fisher’s LDA achieves the objective by minimizing the within-class variance and maximizing the between-class variance simultaneously.
To describe the Fisher’s LDA formally, let us consider an input matrix X R m × n where m is the number of data points, n is the input dimension, and x i , j R n is a j-th data point in the i-th class. The within-class scatter matrix S w R n × n is defined as follows:
S w = i = 1 c j = 1 m i ( x i , j x ¯ i ) ( x i , j x ¯ i ) T .
Here, c is the number of classes, m i is the number of data points in the i-th class, and x ¯ i R n is the mean of data points in the i-th class. This formulation can be interpreted as the sum of class-wise scatter matrices. A small determinant of S w indicates that data points in the same class exist densely in a narrow area, which may lead to high class separability.
Next, the between-class scatter matrix $S_b \in \mathbb{R}^{n \times n}$ is defined as follows:
$$S_b = \sum_{i=1}^{c} m_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T,$$
where $\bar{x}$ is the mean of all data points in the given dataset. A large determinant of $S_b$ indicates that the mean vector $\bar{x}_i$ of each class is far from $\bar{x}$, another condition hinting at high class separability.
Using these two matrices, we can describe the objective of Fisher’s LDA as follows:
$$\Phi_{lda} = \arg\max_{\Phi} \frac{|\Phi^T S_b \Phi|}{|\Phi^T S_w \Phi|}. \qquad (1)$$
Here, $\Phi_{lda} \in \mathbb{R}^{n \times d}$ is the projection matrix, where d is the dimension of the feature subspace (in general, we choose $d \ll n$). The column vectors of $\Phi_{lda}$ are the axes of the feature subspace that maximize class separability. The term in the objective function is also known as Fisher's criterion. By projecting X onto these axes, we obtain a d-dimensional projection $X' \in \mathbb{R}^{m \times d}$ of the original data:
$$X' = X \Phi_{lda}.$$
In general, if $S_w$ is an invertible matrix, we can calculate the projection matrix that maximizes the objective of Fisher's LDA by eigenvalue decomposition.
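For concreteness, the following is a minimal sketch of this eigendecomposition route in Python, posing LDA as the generalized symmetric eigenproblem $S_b v = \lambda S_w v$ (function and variable names are ours; the small ridge added to $S_w$ is an assumption for numerical stability, not part of the original formulation):

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(X, y, d):
    """Fisher's LDA projection via the generalized eigenproblem S_b v = lambda S_w v.
    A sketch that assumes S_w is (close to) invertible."""
    classes = np.unique(y)
    n = X.shape[1]
    x_bar = X.mean(axis=0)
    S_w = np.zeros((n, n))
    S_b = np.zeros((n, n))
    for c in classes:
        Xc = X[y == c]
        D = Xc - Xc.mean(axis=0)
        S_w += D.T @ D                                  # within-class scatter
        diff = (Xc.mean(axis=0) - x_bar).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)                # between-class scatter
    # symmetric generalized eigenproblem; eigenvalues come back in ascending order
    _, V = eigh(S_b, S_w + 1e-8 * np.eye(n))
    return V[:, ::-1][:, :d]                            # top-d discriminant directions

# usage: X_proj = X @ lda_projection(X, y, d=2)
```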

3.2. Proposed Data Quality Measures

Motivated by the ideas in Fisher's LDA, we propose two types of new data quality measures: $M_{sep}$ (class separability), and $M_{var}$ and $M_{var}^i$ (in-class variability).

3.2.1. Class Separability

Our first data quality measure tries to capture the class separability of a dataset by combining the within-class variance and the between-class variance, similarly to Fisher's LDA (1), but in a way that is efficient for large-scale high-dimensional data and comparable across datasets.
We start by creating normalized versions of the matrices $S_w$ and $S_b$ in Fisher's LDA (1) so that they are not affected by the different numbers of examples per class ($m_i$ is the number of examples in the i-th class) across different datasets. The normalized versions are denoted by $\hat{S}_w$ and $\hat{S}_b$:
$$\hat{S}_w := \sum_{i=1}^{c} \frac{1}{m_i} \sum_{j=1}^{m_i} (x_{i,j} - \bar{x}_i)(x_{i,j} - \bar{x}_i)^T, \qquad \hat{S}_b := \sum_{i=1}^{c} \frac{m_i}{\sum_{j=1}^{c} m_j} (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T. \qquad (2)$$
Considering the determinants of these $n \times n$ matrices as in the original Fisher's LDA would be too costly for high-dimensional data where n is large, since the time complexity of computing the determinants grows nearly as $n^3$. Instead, we consider the direction of maximum linear separation $v \in \mathbb{R}^n$, the direction that maximizes the ratio of the between-class variance to the within-class variance of the data projected onto it. Using this vector, we define our first data quality measure $M_{sep}$ for class separability as follows:
$$M_{sep} := \max_{v \in \mathbb{R}^n : \|v\| = 1} \frac{|v^T \hat{S}_b v|}{|v^T \hat{S}_w v|}. \qquad (3)$$
This formulation is almost the same as (1) in Fisher's LDA, except that (1) finds the projection matrix $\Phi_{lda}$ that maximizes Fisher's criterion, while in (3) we focus on finding the maximum value of Fisher's criterion itself. Unlike Fisher's criterion, our measure $M_{sep}$ is comparable across datasets with different numbers of classes and examples thanks to the normalization, and can thus be used to check the relative difficulty of linear classification.
Solving (3) directly would be prohibitive for large n, as in the original LDA. If $\hat{S}_w$ is invertible, we can calculate the vector that maximizes $M_{sep}$ using simple linear algebra. To find the vector v that maximizes the objective in (3), we first differentiate it with respect to v to get:
$$(v^T \hat{S}_b v)\, \hat{S}_w v = (v^T \hat{S}_w v)\, \hat{S}_b v.$$
This leads to the following generalized eigenvalue problem:
$$\hat{S}_w^{-1} \hat{S}_b\, v = \lambda v, \qquad (4)$$
where $\lambda = \frac{v^T \hat{S}_b v}{v^T \hat{S}_w v}$ can be thought of as an eigenvalue of the matrix $\hat{S}_w^{-1} \hat{S}_b$. The maximizer is the eigenvector corresponding to the largest eigenvalue of $\hat{S}_w^{-1} \hat{S}_b$, which can be found rather efficiently by the Lanczos algorithm [16]. However, the overall time complexity can still be up to $O(n^3)$, which makes it difficult to compute the optimal vector for high-dimensional data such as images. In Section 3.3, we provide an efficient algorithm to compute $M_{sep}$ using random projection.
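For small datasets, the exact value of $M_{sep}$ can be obtained as the largest generalized eigenvalue of the pair $(\hat{S}_b, \hat{S}_w)$; a minimal sketch is shown below (our own code, assuming $\hat{S}_w$ is close to invertible; the ridge term `eps` is an assumption for numerical stability):

```python
import numpy as np
from scipy.linalg import eigh

def m_sep_exact(X, y, eps=1e-8):
    """Exact M_sep: largest generalized eigenvalue of (S_b_hat, S_w_hat)."""
    classes = np.unique(y)
    m, n = X.shape
    x_bar = X.mean(axis=0)
    S_w_hat = np.zeros((n, n))
    S_b_hat = np.zeros((n, n))
    for c in classes:
        Xc = X[y == c]
        D = Xc - Xc.mean(axis=0)
        S_w_hat += (D.T @ D) / len(Xc)                    # (1/m_i) * class scatter
        diff = (Xc.mean(axis=0) - x_bar).reshape(-1, 1)
        S_b_hat += (len(Xc) / m) * (diff @ diff.T)        # weighted between-class term
    vals = eigh(S_b_hat, S_w_hat + eps * np.eye(n), eigvals_only=True)
    return vals[-1]   # maximum of Fisher's criterion over all unit directions
```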

3.2.2. In-Class Variability

Our second data quality measure gauges the in-class variability. Figure 1 shows a motivating example for considering in-class variability as part of data quality. In the figure, we have two photos of the Bongeunsa temple in Seoul, Korea, taken by the same photographer. The photographer had been asked to take photos of Korean objects from several different angles, and it turned out that quite a few of the photos were taken with only marginal angle differences. Since the data creation was a government-funded project providing data for developing object recognition systems in academia and industry, low data variability was definitely an issue.
Here, we define two types of in-class variability measures: the overall in-class variability of a given dataset, $M_{var}$, and the in-class variability of the i-th class, $M_{var}^i$. First, the overall in-class variability $M_{var}$ tries to capture the minimum variance of the data points projected onto any direction, based on the matrix $\hat{S}_w$ defined in (2):
$$M_{var} := \min_{v \in \mathbb{R}^n : \|v\| = 1} \frac{1}{c \cdot n}\, v^T \hat{S}_w v,$$
where c is the number of classes and n is the data dimension. Unlike the class separability measure, we add the normalization factors c and n, since the value $v^T \hat{S}_w v$ is affected by the number of classes and the data dimension.
Second, the class-wise in-class variability $M_{var}^i$ is based on the sample covariance matrix of each class:
$$\hat{S}_w^i := \frac{1}{m_i} \sum_{j=1}^{m_i} (x_{i,j} - \bar{x}_i)(x_{i,j} - \bar{x}_i)^T,$$
where $m_i$ is the number of data points in the i-th class. The class-wise in-class variability measure $M_{var}^i$ is defined as follows:
$$M_{var}^i := \min_{v \in \mathbb{R}^n : \|v\| = 1} \frac{1}{c \cdot n}\, v^T \hat{S}_w^i v.$$
The normalization factors c and n are required for the same reason as in $M_{var}$. The measure $M_{var}^i$ represents the smallest variance of the data points of a class after projection onto any direction.
In fact, $M_{var}$ and $M_{var}^i$ are equal to the smallest eigenvalues of $\frac{1}{c \cdot n} \hat{S}_w$ and $\frac{1}{c \cdot n} \hat{S}_w^i$, which can be computed, for instance, by applying the Lanczos algorithm [16] to their inverses at an $O(n^3)$ cost, which becomes prohibitive for large data dimensions n. In the next section, we discuss a more efficient way to estimate these values, which can be computed alongside our first data quality measure without significant extra cost.
Using $M_{var}$ and $M_{var}^i$, we can analyze the variety or redundancy in a given dataset. For instance, a very small $M_{var}$ or $M_{var}^i$ indicates a low-diversity issue, where data points in the inspected class are mostly alike. On the other hand, an overly large $M_{var}$ or $M_{var}^i$ may indicate a noise issue in the class, possibly including incorrect labeling. The difference between the two is that $M_{var}$ aggregates the diversity information over all classes, whereas $M_{var}^i$ captures the diversity of a specific class. Since $M_{var}$ aggregates the variability of data points in each class, it can be used to compare in-class variability between datasets. On the other hand, $M_{var}^i$ can be used for within-dataset analysis; for example, a class with a smaller $M_{var}^i$ than the other classes may cause low generalization performance. We discuss more details in Section 4.
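For reference, a minimal sketch of the exact computation (smallest eigenvalue of the scaled within-class scatter matrices; our own code and naming) could look as follows:

```python
import numpy as np
from scipy.linalg import eigvalsh

def m_var_exact(X, y):
    """Overall in-class variability M_var and class-wise M_var^i (a sketch).
    Both are smallest eigenvalues of (1 / (c*n)) times the corresponding scatter."""
    classes = np.unique(y)
    c, n = len(classes), X.shape[1]
    S_w_hat = np.zeros((n, n))
    m_var_i = {}
    for cls in classes:
        Xc = X[y == cls]
        D = Xc - Xc.mean(axis=0)
        S_i = (D.T @ D) / len(Xc)                  # per-class sample covariance
        S_w_hat += S_i
        m_var_i[cls] = eigvalsh(S_i)[0] / (c * n)  # smallest eigenvalue, scaled
    m_var = eigvalsh(S_w_hat)[0] / (c * n)
    return m_var, m_var_i
```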

3.3. Methods for Efficient Computation

One of the key properties required of data quality measures is that they should be computable with a reasonable amount of time and computational resources, since the amount and the dimensionality of data keep increasing as new sensing technologies become available. In this section, we describe how we avoid large time and memory costs when computing our suggested data quality measures.

3.3.1. Random Projection

Random projection [5] is a dimension reduction technique that transforms an n-dimensional vector into a k-dimensional vector ($k \ll n$) while preserving the critical information of the original vector. The idea behind random projection is the Johnson-Lindenstrauss lemma [17]: for any pair of vectors $x, x'$ from a set $X \subset \mathbb{R}^n$ of m vectors and for $\epsilon \in (0, 1)$, there exists a linear mapping $f: \mathbb{R}^n \to \mathbb{R}^k$ such that pairwise distances are almost preserved after projection, in the sense that
$$(1 - \epsilon)\, \|x - x'\|_2^2 \;\leq\; \|f(x) - f(x')\|_2^2 \;\leq\; (1 + \epsilon)\, \|x - x'\|_2^2,$$
where $k > 8 \ln(m) / \epsilon^2$. It is known that when the original dimension n is large, a random projection matrix $P \in \mathbb{R}^{k \times n}$ can serve as the mapping f in the lemma, since random vectors in $\mathbb{R}^n$ tend to be orthogonal to each other as n increases [18].
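As a small illustration of the lemma (our own toy example, not taken from the paper's implementation), a Gaussian random projection approximately preserves pairwise squared distances:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 3072, 500                   # 200 points, CIFAR10-like dimension, target dimension
X = rng.normal(size=(m, n))

P = rng.normal(size=(k, n)) / np.sqrt(k)   # Gaussian random projection matrix
Z = X @ P.T                                # projected data in R^k

d_orig = np.sum((X[0] - X[1]) ** 2)
d_proj = np.sum((Z[0] - Z[1]) ** 2)
print(d_proj / d_orig)                     # close to 1 for sufficiently large k
```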
Motivated by this phenomenon, we use random projection to find a direction that approximately solves (3), instead of computing the eigenvalue decomposition in (4). The idea is that if the number of random vectors is sufficiently large, the maximum value of Fisher's criterion over the random directions approximates the behavior of the true solution.
Furthermore, random projection makes it unnecessary to explicitly store $\hat{S}_w$ and $\hat{S}_b$, since we can compute the denominator and numerator of (3) directly as
$$w^T \hat{S}_w w = \sum_{i=1}^{c} \frac{1}{m_i} \sum_{j=1}^{m_i} w^T (x_{i,j} - \bar{x}_i)(x_{i,j} - \bar{x}_i)^T w,$$
$$w^T \hat{S}_b w = \sum_{i=1}^{c} \frac{m_i}{\sum_{j=1}^{c} m_j} w^T (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^T w,$$
where w is a random unit vector obtained by normalizing a vector whose entries are drawn from $N(0, 1)$. This technique is critical for dealing with high-dimensional data, such as images, in a memory-efficient way. In our experiments, ten random projection vectors were sufficient in most cases to accurately estimate our quality measures.
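A minimal matrix-free sketch of these two quantities, evaluating Fisher's criterion along a single random direction without forming any $n \times n$ matrix (names are ours):

```python
import numpy as np

def fisher_ratio_along(w, X, y):
    """Fisher's criterion along direction w, computed from projections only."""
    w = w / np.linalg.norm(w)
    classes = np.unique(y)
    m = len(X)
    overall_mean = (X @ w).mean()
    within, between = 0.0, 0.0
    for c in classes:
        p = X[y == c] @ w                              # class data projected onto w
        within += p.var()                              # (1/m_i) * sum_j (w^T(x_ij - xbar_i))^2
        between += (len(p) / m) * (p.mean() - overall_mean) ** 2
    return between / (within + 1e-12)

# w = np.random.default_rng(0).normal(size=X.shape[1]); fisher_ratio_along(w, X, y)
```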

3.3.2. Bootstrapping

Bootstrapping [6] is a sampling-based technique that estimates statistics of a population from little data using sampling with replacement. For instance, bootstrapping can be used to estimate the mean and the variance of a statistic from an unknown population. Let $s_i$ be a statistic of interest calculated from a randomly drawn sample of an unknown population. The mean and variance of the statistic can be estimated as follows:
$$\hat{\mu} = \frac{1}{B} \sum_{i=1}^{B} s_i, \qquad \hat{\sigma}^2 = \frac{1}{B} \sum_{i=1}^{B} s_i^2 - \left( \frac{1}{B} \sum_{i=1}^{B} s_i \right)^2,$$
where B is the number of bootstrap samples, $\hat{\mu}$ is the mean estimate of the statistic, and $\hat{\sigma}^2$ is the variability of the estimate. By using a small B, we can reduce the number of data points to be considered at once. We found that B = 100, with each bootstrap sample containing 25% of the given dataset, worked well across all our experiments. The above procedure is summarized in Algorithm 1 (the implementation is available at https://github.com/Hyeongmin-Cho/Efficient-Data-Quality-Measures-for-High-Dimensional-Classification-Data).
Algorithm 1: Algorithm of class separability and in-class variability.
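Since the pseudocode of Algorithm 1 appears as an image in the published layout, we include a minimal Python sketch of our reading of it below, combining bootstrapping with random unit directions (parameter names B, R, and n_v follow Section 4; aggregating bootstrap samples by averaging is our assumption and may differ from the released implementation linked above):

```python
import numpy as np

def quality_measures(X, y, B=100, R=0.25, n_v=10, seed=0):
    """Approximate M_sep and M_var via bootstrapping and random projection (a sketch).
    B: bootstrap repetitions, R: bootstrap sample ratio, n_v: random directions."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    c = len(np.unique(y))
    sep_vals, var_vals = [], []
    for _ in range(B):
        idx = rng.choice(m, size=int(R * m), replace=True)    # bootstrap sample
        Xb, yb = X[idx], y[idx]
        best_sep, best_var = -np.inf, np.inf
        for _ in range(n_v):
            w = rng.normal(size=n)
            w /= np.linalg.norm(w)                            # random unit direction
            overall_mean = (Xb @ w).mean()
            within, between = 0.0, 0.0
            for cls in np.unique(yb):
                p = Xb[yb == cls] @ w
                within += p.var()
                between += (len(p) / len(Xb)) * (p.mean() - overall_mean) ** 2
            best_sep = max(best_sep, between / (within + 1e-12))
            best_var = min(best_var, within / (c * n))
        sep_vals.append(best_sep)
        var_vals.append(best_var)
    return float(np.mean(sep_vals)), float(np.mean(var_vals))
```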

4. Experiment Results

In this section, we show that our method can evaluate the data quality of large-scale high-dimensional datasets efficiently.
To verify how well $M_{sep}$ represents class separability, we calculated the correlation between the accuracy of chosen classifiers and $M_{sep}$. The classifiers used in our experiments are as follows: a perceptron, a multi-layer perceptron with one hidden layer and LeakyReLU (denoted by MLP-1), and a multi-layer perceptron with two hidden layers and LeakyReLU (denoted by MLP-2). To simplify the experiments, we trained the models with the following settings: 30 epochs, a batch size of 100, a learning rate of 0.002, the Adam optimizer, and the cross-entropy loss function. Additionally, we fixed the hyperparameters of Algorithm 1 as B = 100, R = 0.25, and $n_v$ = 10, since there was no significant difference in performance when larger hyperparameter values were used.
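For reference, a PyTorch-style sketch of the training configuration described above (the hidden-layer width of 512 is our assumption; the paper does not state it):

```python
import torch
import torch.nn as nn

def make_mlp2(in_dim, n_classes, hidden=512):
    # MLP-2: two hidden layers with LeakyReLU, as described in the text.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.LeakyReLU(),
        nn.Linear(hidden, hidden), nn.LeakyReLU(),
        nn.Linear(hidden, n_classes),
    )

model = make_mlp2(in_dim=3 * 32 * 32, n_classes=10)
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)
criterion = nn.CrossEntropyLoss()
# train for 30 epochs with a batch size of 100 (data loading omitted)
```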
For comparison with other quality measures, we chose $F_1$, $N_1$, and $N_3$ from Ho and Basu [1] and $CSG$ from Branchaud-Charron et al. [3]. Here, $N_1$, $N_3$, and $CSG$ are known to be highly correlated with the test accuracy of classifiers [3]. $F_1$ is similar to our $M_{sep}$ in its basic idea. The other quality measures suggested in Ho and Basu [1] showed very similar characteristics to $F_1$, $N_1$, $N_3$, and $CSG$ and are therefore not included in the results.

4.1. Datasets

To evaluate how well $M_{sep}$ represents class separability, we used various image datasets that are high-dimensional and popular in mobile applications. We chose ten benchmark image datasets for our experiments: MNIST, notMNIST, CIFAR10, Linnaeus, STL10, SVHN, ImageNet-1, ImageNet-2, ImageNet-3, and ImageNet-4. MNIST [19] consists of ten handwritten digits from 0 to 9; it contains 60,000 training and 10,000 test data points. We sampled 10,000 data points from the training data for model training and quality measurement, and 2500 data points from the test data for assessing model accuracy. The notMNIST [20] dataset is quite similar to MNIST, containing English letters from A to J in various fonts; it has 13,106 training and 5618 test samples, and we sampled it in the same way as MNIST. Linnaeus [21] consists of five classes: berry, bird, dog, flower, and others. Although the dataset is available in various image sizes, we chose 32 × 32 to reduce the computation time of $N_1$, $N_3$, and $CSG$. CIFAR10 [22] is for object recognition with ten general object classes; it consists of 50,000 training and 10,000 test data points, and we sampled it in the same way as MNIST. STL10 [23] is also for object recognition with ten classes and has 96 × 96 images, which we resized to 32 × 32 to reduce the computation time of $N_1$, $N_3$, and $CSG$; the dataset consists of 5000 training and 8000 test data points. We combined these two sets into a single dataset, then sampled 10,000 data points from the combined set for model training and quality measurement, and 2500 data points for assessing prediction model accuracy when necessary. SVHN [24] consists of street-view house number images and contains 73,200 data points; we sampled 10,000 data points for model training and quality measurement and 2500 data points for assessing model accuracy. ImageNet-1, ImageNet-2, ImageNet-3, and ImageNet-4 are subsets of the Tiny ImageNet dataset [25], which contains 200 classes with 500 images each. Each subset consists of ten randomly selected classes of Tiny ImageNet (5000 data points in total); we used 4500 data points for model training and quality measurement and 500 data points for assessing model accuracy.
We summarize the details of the datasets in Table 1. The accuracy values in Table 1 are calculated with the MLP-2 model, since it showed good overall performance compared to the perceptron and MLP-1 models.

4.2. Representation Performance of the Class Separability Measure $M_{sep}$

Here, we show in experiments how well our first quality measure $M_{sep}$ represents class separability, compared to simple but popular classifiers and the existing data quality measures.

4.2.1. Correlation with Classifier Accuracy

To demonstrate how well $M_{sep}$ represents the class separability of given datasets, we compared the absolute values of the Pearson correlation and the Spearman rank correlation between the quality measures ($M_{sep}$, $N_1$, $N_3$, $F_1$, and $CSG$) and the prediction accuracy of three classification models: the perceptron, MLP-1, and MLP-2. Table 2 summarizes the results.
In the case of the perceptron, $M_{sep}$ achieves a Pearson correlation similar to that of $N_1$ and $N_3$, which have the highest correlation with classifier accuracy, while requiring the shortest computation time. Furthermore, $M_{sep}$ and $F_1$ have the highest Spearman rank correlation. This is because $M_{sep}$ and $F_1$ measure linear separability, which is essentially the information captured by a linear classifier, the perceptron in our case. In the case of MLP-1 and MLP-2, $M_{sep}$ also showed a sufficiently high correlation with classification accuracy, although slightly lower in Pearson correlation than in the perceptron case. On the other hand, $CSG$ does not seem to offer noticeable benefits considering its computation time; this is because $CSG$ depends on an embedding network that requires a large amount of training time.
In summary, the results show that our measure $M_{sep}$ can capture the separability of data as well as the existing data quality measures, while reducing computation time significantly.

4.2.2. Correlation with Other Quality Measures

In order to check whether our suggested data quality measure $M_{sep}$ is compatible with the existing ones, and can therefore serve as a faster alternative to them, we computed the Pearson correlation between $F_1$, $N_1$, $N_3$, $CSG$, and $M_{sep}$. The results are summarized in Table 3. Our measure $M_{sep}$ showed a high correlation with all four existing measures, indicating that $M_{sep}$ is able to capture the data quality information represented by $F_1$, $N_1$, $N_3$, and $CSG$.

4.2.3. Computation Time

As mentioned above, our quality measure $M_{sep}$ represents class separability well while being much faster to compute than $F_1$, $N_1$, $N_3$, and $CSG$. Here, we show how the computation time changes with the data dimension and the sample size, in order to show that our suggested data quality measure can be used in many big-data situations.
The computation time according to the data dimension is shown in Figure 2 and Table 4. Over all dimensions, our measure $M_{sep}$ was on average 3.8 times faster than $F_1$, 13.1 times faster than $N_1$, 25.9 times faster than $N_3$, and 17.7 times faster than $CSG$. Since $N_1$, $N_3$, and $CSG$ have to calculate a minimum spanning tree, train a 1NN classifier, and train an embedding network, respectively, it is inevitable that they take a large amount of computation time (see Section 2.1 and Section 2.2 for details). On the other hand, since $M_{sep}$ uses random projection and bootstrapping to avoid the eigenvalue decomposition problem and to deal with big-data situations, its computation time is the shortest in all cases.
Figure 3 and Table 5 show how the computation time changes for various sample sizes. Our measure $M_{sep}$ was on average 2.8 times faster than $F_1$, 47.0 times faster than $N_1$, 94.5 times faster than $N_3$, and 41.6 times faster than $CSG$. $N_1$ and $N_3$ show steeply increasing computation times with respect to the sample size, which is not suitable for large-scale high-dimensional datasets.
All the above results show that our measure $M_{sep}$ is suitable for big-data situations and compatible with other well-accepted data quality measures.

4.2.4. Comparison to Exact Computation

In Section 3.3, we proposed to use random projections and bootstrapping for fast approximation of the solution of (4), which can also be computed exactly as an eigenvalue. Here, we compare the values of $M_{sep}$ obtained by the proposed approximate computation (denoted by "Approx") and by the exact computation (denoted by "Exact") using an eigensolver from the Python scipy package. One caveat is that, since we use only Gaussian random vectors for projection, they may not match the true eigenvectors, and therefore the approximated quantity can differ from the exact value. However, we found that the approximate quantities correlate well with the exact values, as indicated in Table 6, and can therefore be used for fast comparison of the data quality of high-dimensional large-scale datasets.

4.3. Class-Wise In-Class Variability Measure $M_{var}^i$

In fact, many of the existing data quality measures are designed to measure the difficulty of classification for a given dataset. However, we believe that the in-class variability of data must be considered as another important factor of data quality. One example showing the importance and usefulness of our in-class variability measure $M_{var}^i$ concerns the generalization performance of a classifier.
The generalization performance of a learning model is an important consideration, especially in mission-critical AI-augmented systems. There are many possible causes of low generalization, and overfitting is one of them. Although techniques exist to alleviate overfitting, e.g., checking the learning curve, regularization [26,27], and ensemble learning [28], it is critical to check whether there is an issue in the data to begin with that may lead to inductive bias. For example, a very small value of $M_{var}^i$ for a class compared to the others indicates a lack of variability in that class, which can lead to low generalization against, e.g., input noise, background signal, object occlusion, and angle/brightness/contrast differences not seen during training. On the other hand, an overly large $M_{var}^i$ may indicate outliers or even mislabeled data points, likely to incur unwanted inductive bias in training.
To show the importance and usefulness of in-class variability, we created a degraded version of CIFAR10 (denoted by degraded-CIFAR10) by reducing the variability of a specific class. The degraded-CIFAR10 is created by the following procedure. First, we chose an image of the deer class in the training data, then selected the nine images most similar to it in angular distance. Figure 4 shows the ten images selected by this procedure, which have similar backgrounds and shapes. Next, we created 1000 images by sampling with replacement from the ten images while adding random Gaussian noise with zero mean and unit variance, and we replaced the original deer class data with the sampled degraded data.
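A minimal sketch of this degradation procedure (similarity selection, resampling with replacement, and Gaussian noise; our own code, not the authors' script, with cosine similarity following the Figure 4 caption):

```python
import numpy as np

def degrade_class(X_class, n_out=1000, n_similar=9, seed=0):
    """Replace a class by noisy resamples of one image and its most similar neighbors.
    X_class: (m, n) array of flattened images of a single class."""
    rng = np.random.default_rng(seed)
    anchor = X_class[0]                                           # chosen seed image
    sims = (X_class @ anchor) / (
        np.linalg.norm(X_class, axis=1) * np.linalg.norm(anchor) + 1e-12
    )                                                             # cosine similarity
    seeds = X_class[np.argsort(-sims)[:n_similar + 1]]            # anchor + 9 most similar
    idx = rng.choice(len(seeds), size=n_out, replace=True)        # sample with replacement
    noise = rng.normal(0.0, 1.0, size=(n_out, X_class.shape[1]))  # zero mean, unit variance
    return seeds[idx] + noise
```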
Table 7 shows that the value of $M_{var}^i$ is significantly smaller for the degraded deer class than for the other classes; that is, it can capture small in-class variability. In contrast, Table 8 shows that the existing quality measures $F_1$, $N_1$, $N_3$, and $CSG$ may not be enough to reveal the degradation of the dataset. As we can see, all of these quality measures indicate that class separability increased in degraded-CIFAR10 compared to the original CIFAR10; however, the test accuracy of MLP-2 decreased. This is because the reduction in in-class variability is very likely to decrease the generalization performance. Therefore, class separability measures can deliver incorrect information regarding data quality in terms of in-class variability, which can be a critical problem when generating a trustworthy dataset or training a trustworthy model.
As shown above, a small value of $M_{var}^i$ for a specific class indicates that highly similar images exist in the inspected class, which can lead to low generalization performance of classifiers. Suppose we have generated a dataset for an autonomous-driving object classification task. Various quality measures reveal that the dataset has high class separability, and the training accuracy is also high, so one may expect high generalization performance. Unfortunately, the exact opposite can happen: if the variability in a specific class is small, as in the degraded-CIFAR10 example above, high generalization performance cannot be expected. For instance, if a car with colors and shapes never seen in training is given as input to the model, the probability of properly classifying it will be low. This example indicates that in-class variability plays an important role in data quality evaluation.

4.4. Quality Ranking Using $M_{sep}$ and $M_{var}$

As mentioned before, the quality measures $M_{sep}$ and $M_{var}$ can be compared among different datasets. The class separability $M_{sep}$ represents the relative difficulty of linear classification, and the overall in-class variability $M_{var}$ represents the average variability of data points within classes.
Figure 5 shows a data quality comparison plot of the datasets in our experiments. The direction towards the lower-left corner indicates lower class separability and lower in-class variability, and the upper-right direction indicates higher class separability and higher in-class variability. According to the plot, the MNIST and notMNIST datasets show very high linear separability compared to the other datasets, indicating that they might be easier to classify. The SVHN dataset is at the lower-left corner, indicating low linear separability and a possible redundancy issue (this could simply reflect the fact that many SVHN images contain different digits on similar backgrounds). The four ImageNet datasets, Linnaeus, CIFAR10, and STL10 have similar class separability and in-class variability values, which is understandable considering their similar data construction for object recognition.

5. Conclusions

In this paper, we proposed the data quality measures $M_{sep}$, $M_{var}$, and $M_{var}^i$, which are estimated using random projection and bootstrapping and can therefore be applied efficiently to large-scale high-dimensional datasets. We showed that $M_{sep}$ can be used as a good alternative to the existing data quality measures capturing class separability, while reducing their computational overhead significantly. In addition, $M_{var}$ and $M_{var}^i$ measure in-class variability, another important factor for avoiding unwanted inductive bias in trained models.

Author Contributions

Conceptualization, H.C. and S.L.; methodology, H.C. and S.L.; validation, H.C.; writing-original draft preparation, H.C.; writing-review and editing, S.L.; supervision, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(2018R1D1A1B07051383), and also by the MSIT(Ministry of Science and ICT), Korea, under the ITRC(Information Technology Research Center) support program(IITP-2020-0-01749) supervised by the IITP(Institute of Information & Communications Technology Planning & Evaluation).

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ho, T.K.; Basu, M. Complexity measures of supervised classification problems. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 289–300. [Google Scholar]
  2. Baumgartner, R.; Somorjai, R. Data complexity assessment in undersampled classification of high-dimensional biomedical data. Pattern Recognit. Lett. 2006, 27, 1383–1389. [Google Scholar] [CrossRef]
  3. Branchaud-Charron, F.; Achkar, A.; Jodoin, P.M. Spectral metric for dataset complexity assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019. [Google Scholar]
  4. Hancock, J.M.; Zvelebil, M.J.; Cristianini, N. Fisher Discriminant Analysis (Linear Discriminant Analysis). Dict. Bioinform. Comput. Biol. Available online: https://doi.org/10.1002/9780471650126.dob0238.pub2 (accessed on 25 October 2020).
  5. Bingham, E.; Mannila, H. Random projection in dimensionality reduction: Applications to image and text data. In Proceedings of the ACM Special Interest Group on Knowledge Discovery in Data, San Francisco, CA, USA, 26–29 August 2001. [Google Scholar]
  6. Efron, B. Bootstrap Methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  7. Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: Berlin, Germany, 2008. [Google Scholar]
  8. Leyva, E.; González, A.; Pérez, R. A set of complexity measures designed for applying meta-learning to instance selection. IEEE Trans. Knowl. Data Eng. 2015, 27, 354–367. [Google Scholar] [CrossRef]
  9. Sotoca, J.M.; Mollineda, R.A.; Sánchez, J.S. A meta-learning framework for pattern classification by means of data complexity measures. Intel. Artif. 2006, 10, 31–38. [Google Scholar]
  10. Garcia, L.P.F.; Lorena, A.C.; de Souto, M.C.P.; Ho, T.K. Classifier recommendation using data complexity measures. In Proceedings of the International Conference on Pattern Recognition, Beijing, China, 20–24 August 2018. [Google Scholar]
  11. de Melo, V.V.; Lorena, A.C. Using complexity measures to evolve synthetic classification datasets. In Proceedings of the IEEE International Joint Conference on Neural Network, Rio de Janeiro, Brazil, 8–13 July 2018. [Google Scholar]
  12. Masci, J.; Meier, U.; Cireşan, D.; Schmidhuber, J. Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction. In Proceedings of the International Conference on Artificial Neural Networks, Espoo, Finland, 14–17 June 2011. [Google Scholar]
  13. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  14. Duin, R.P.W.; Pękalska, E. Object representation, sample size and dataset complexity. Data Complexity in Pattern Recognition; Springer: Berlin, Germany, 2006. [Google Scholar]
  15. Li, C.; Farkhoor, H.; Liu, R.; Yosinski, J. Measuring the intrinsic dimension of objective landscapes. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  16. Golub, G.H.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1996; pp. 470–507. [Google Scholar]
  17. Dasgupta, S.; Gupta, A. An Elementary Proof of the Johnson-Lindenstrauss Lemma; Technical Report; UC Berkeley: Berkeley, CA, USA, 1999. [Google Scholar]
  18. Kaski, S. Dimensionality reduction by random mapping: Fast similarity computation for clustering. In Proceedings of the IEEE International Joint Conference on Neural Networks, Anchorage, AK, USA, 4–9 May 1998. [Google Scholar]
  19. LeCun, Y.; Cortes, C. MNIST Handwritten Digit Database. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 25 October 2020).
  20. Bulatov, Y. Notmnist Dataset. Technical Report, Google (Books/OCR). 2011. Available online: http://yaroslavvb.blogspot.it/2011/09/notmnist-dataset.html (accessed on 25 October 2020).
  21. Chaladz, G.; Kalatozishvili, L. Linnaeus 5 Dataset for Machine Learning. Available online: http://chaladze.com/l5/ (accessed on 25 October 2020).
  22. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; Technical Report; Computer Science Department, University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  23. Coates, A.; Lee, H.; Ng, A.Y. An analysis of single layer networks in unsupervised feature learning. J. Mach. Learn. Res. 2011, 15, 215–223. [Google Scholar]
  24. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading Digits in Natural Images with Unsupervised Feature Learning. In Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, 16–17 December 2011. [Google Scholar]
  25. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  26. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  27. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  28. Opitz, D.; Maclin, R. Popular Ensemble Methods: An Empirical Study. J. Artif. Intell. Res. 1999, 11, 169–198. [Google Scholar] [CrossRef]
Figure 1. An example of low in-class variability: similar images in the same class. The images are of the Bongeunsa temple in Seoul, Korea. (Source: Korean Type Object Image AI Training Dataset at http://www.aihub.or.kr/aidata/132, National Information Society Agency.)
Figure 2. Data dimension vs. computation time (CIFAR10).
Figure 3. Sample size vs. computation time (CIFAR10).
Figure 4. Ten similar images selected from the deer class of degraded-CIFAR10. Images with high similarity were selected using cosine similarity.
Figure 5. Data quality plot using the two proposed quality measures.
Table 1. Details of the datasets used in our experiments. The accuracy is from MLP-2, and M represents the total number of data points used for training and evaluation.

| Datasets | Accuracy | M | No. Classes | Description |
|---|---|---|---|---|
| MNIST | 92.64% | 12.5 k | 10 | Handwritten digits |
| notMNIST | 89.24% | 12.5 k | 10 | Fonts and glyphs similar to MNIST |
| Linnaeus | 45.50% | 4.8 k | 5 | Botany and animal class images |
| CIFAR10 | 42.84% | 12.5 k | 10 | Object recognition images |
| STL10 | 40.88% | 12.5 k | 10 | Object recognition images |
| SVHN | 45.60% | 12.5 k | 10 | House number images |
| ImageNet-1 | 37.40% | 5 k | 10 | Visual recognition images (Tiny ImageNet) |
| ImageNet-2 | 40.60% | 5 k | 10 | Visual recognition images (Tiny ImageNet) |
| ImageNet-3 | 34.20% | 5 k | 10 | Visual recognition images (Tiny ImageNet) |
| ImageNet-4 | 36.20% | 5 k | 10 | Visual recognition images (Tiny ImageNet) |
Table 2. The absolute Pearson and Spearman rank correlation between the quality measures and the accuracy of three classifiers on the ten image datasets (MNIST, CIFAR10, notMNIST, Linnaeus, STL10, SVHN, ImageNet-1, ImageNet-2, ImageNet-3, and ImageNet-4). The computation time of our method $M_{sep}$ is the fastest.

| Classifier | Quality Measure | Pearson Corr. | Spearman Corr. | Time (s) |
|---|---|---|---|---|
| Perceptron | $F_1$ | 0.9386 | 0.7697 | 1253 |
| | $N_1$ | 0.9889 | 0.7333 | 5104 |
| | $N_3$ | 0.9858 | 0.7333 | 9858 |
| | $CSG$ | 0.9452 | 0.8182 | 23,711 |
| | $M_{sep}$ (ours) | 0.9693 | 0.8061 | 354 |
| MLP-1 | $F_1$ | 0.9039 | 0.3455 | 1253 |
| | $N_1$ | 0.9959 | 0.9758 | 5104 |
| | $N_3$ | 0.9961 | 0.9030 | 9858 |
| | $CSG$ | 0.9295 | 0.5879 | 23,711 |
| | $M_{sep}$ (ours) | 0.9261 | 0.3818 | 354 |
| MLP-2 | $F_1$ | 0.8855 | 0.3455 | 1253 |
| | $N_1$ | 0.9908 | 0.9273 | 5104 |
| | $N_3$ | 0.9912 | 0.8788 | 9858 |
| | $CSG$ | 0.9127 | 0.5879 | 23,711 |
| | $M_{sep}$ (ours) | 0.9117 | 0.4303 | 354 |
Table 3. The absolute Pearson correlation between $M_{sep}$ and other quality measures.

| | $M_{sep}$ (ours) | $F_1$ | $N_1$ | $N_3$ | $CSG$ |
|---|---|---|---|---|---|
| $M_{sep}$ (ours) | 1.0000 | 0.9673 | 0.9322 | 0.9245 | 0.9199 |
| $F_1$ | 0.9673 | 1.0000 | 0.8909 | 0.8879 | 0.8806 |
| $N_1$ | 0.9322 | 0.8909 | 1.0000 | 0.9988 | 0.9400 |
| $N_3$ | 0.9245 | 0.8879 | 0.9988 | 1.0000 | 0.9417 |
| $CSG$ | 0.9199 | 0.8806 | 0.9400 | 0.9417 | 1.0000 |
Table 4. Data dimension vs. computation time (CIFAR10) in detail (values are in seconds; the Speedup column shows the minimum–maximum speedup of $M_{sep}$ relative to the other measures).

| Image Size | Dimension | $F_1$ | $N_1$ | $N_3$ | $CSG$ | $M_{sep}$ (Ours) | Speedup |
|---|---|---|---|---|---|---|---|
| 16 × 16 × 3 | 768 | 49 | 246 | 445 | 1963 | 17 | ×2.9–115.5 |
| 32 × 32 × 3 | 3072 | 203 | 947 | 1850 | 2513 | 57 | ×3.6–44.1 |
| 48 × 48 × 3 | 6912 | 492 | 2144 | 4182 | 3132 | 122 | ×4.0–34.3 |
| 64 × 64 × 3 | 12,288 | 1011 | 3810 | 7466 | 5427 | 303 | ×3.3–24.6 |
| 80 × 80 × 3 | 19,200 | 1783 | 5973 | 11,674 | 6838 | 461 | ×3.9–25.3 |
| 96 × 96 × 3 | 27,648 | 2850 | 8652 | 17,346 | 9550 | 700 | ×4.1–24.8 |
Table 5. Sample size vs. computation time (CIFAR10) in detail (values are in seconds; the Speedup column shows the minimum–maximum speedup of $M_{sep}$ relative to the other measures).

| Sample Size | $F_1$ | $N_1$ | $N_3$ | $CSG$ | $M_{sep}$ (Ours) | Speedup |
|---|---|---|---|---|---|---|
| 10,000 | 205 | 939 | 1904 | 2645 | 56 | ×3.7–47.2 |
| 20,000 | 411 | 3719 | 7797 | 5674 | 136 | ×3.0–57.3 |
| 30,000 | 605 | 8572 | 17,177 | 9065 | 220 | ×2.8–78.1 |
| 40,000 | 825 | 14,896 | 29,813 | 11,843 | 293 | ×2.8–101.8 |
| 50,000 | 1006 | 23,235 | 46,541 | 16,208 | 387 | ×2.6–120.3 |
Table 6. Comparison of Exact and Approx values of $M_{sep}$ and their correlations. Pearson and Spearman denote the Pearson and Spearman rank correlation between the Exact and Approx values.

| Dataset | Exact | Approx |
|---|---|---|
| MNIST | 0.1550 | 0.0535 |
| CIFAR10 | 0.0331 | 0.0173 |
| notMNIST | 0.2123 | 0.0625 |
| Linnaeus | 0.0250 | 0.0118 |
| STL10 | 0.0695 | 0.0207 |
| SVHN | 0.0004 | 0.0010 |
| ImageNet-1 | 0.0062 | 0.0131 |
| ImageNet-2 | 0.0293 | 0.0145 |
| ImageNet-3 | 0.0191 | 0.0125 |
| ImageNet-4 | 0.0092 | 0.0076 |

Pearson correlation: 0.9847; Spearman rank correlation: 0.9152.
Table 7. Our in-class variability measure for the degraded-CIFAR10 dataset. A class with a smaller value than the other classes has lower variability.

| Class | $M_{var}^i$ × 1000 |
|---|---|
| Airplane | 0.1557 |
| Automobile | 0.2069 |
| Bird | 0.1394 |
| Cat | 0.1803 |
| Deer | 0.0123 |
| Dog | 0.1830 |
| Frog | 0.1344 |
| Horse | 0.1775 |
| Ship | 0.1472 |
| Truck | 0.1997 |
Table 8. Quality measures on the degraded-CIFAR10 dataset. The existing quality measures $F_1$, $N_1$, $N_3$, and $CSG$ only capture class separability and fail to capture the degradation. Lower values of $N_1$, $N_3$, and $CSG$ represent higher class separability, whereas lower values of $F_1$ represent lower class separability. The test accuracy is from MLP-2 trained on the original and degraded-CIFAR10, respectively, and tested on the original CIFAR10 test data.

| Data | $F_1$ | $N_1$ | $N_3$ | $CSG$ | Test Accuracy (%) |
|---|---|---|---|---|---|
| Original CIFAR10 | 0.2213 | 0.7909 | 0.7065 | 0.7030 | 42.84 |
| Degraded-CIFAR10 | 0.2698 | 0.7035 | 0.6096 | 0.6049 | 41.28 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Cho, H.; Lee, S. Data Quality Measures and Efficient Evaluation Algorithms for Large-Scale High-Dimensional Data. Appl. Sci. 2021, 11, 472. https://doi.org/10.3390/app11020472

AMA Style

Cho H, Lee S. Data Quality Measures and Efficient Evaluation Algorithms for Large-Scale High-Dimensional Data. Applied Sciences. 2021; 11(2):472. https://doi.org/10.3390/app11020472

Chicago/Turabian Style

Cho, Hyeongmin, and Sangkyun Lee. 2021. "Data Quality Measures and Efficient Evaluation Algorithms for Large-Scale High-Dimensional Data" Applied Sciences 11, no. 2: 472. https://doi.org/10.3390/app11020472

APA Style

Cho, H., & Lee, S. (2021). Data Quality Measures and Efficient Evaluation Algorithms for Large-Scale High-Dimensional Data. Applied Sciences, 11(2), 472. https://doi.org/10.3390/app11020472

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop