Article

Deep Learning Approach for the Detection of Noise Type in Ancient Images

1 Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune 411048, India
2 Department of Information System, College of Applied Sciences, King Khalid University, Muhayel 61913, Saudi Arabia
3 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
4 Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(18), 11786; https://doi.org/10.3390/su141811786
Submission received: 23 July 2022 / Revised: 9 September 2022 / Accepted: 16 September 2022 / Published: 19 September 2022
(This article belongs to the Special Issue Applied Artificial Intelligence for Sustainability)

Abstract
Recent innovations in digital image capture make it easy to record both stationary and moving objects with high-end digital cameras, mobile phones and other handheld devices. However, the captured images often differ from the actual objects: they may be contaminated by dark or grey shades and undesirable black spots. Contamination has various causes, such as atmospheric conditions, limitations of the capturing device and human error. Various image processing mechanisms can clean up a contaminated image so that it matches the original, and these restoration applications primarily require accurate detection of the noise type as input. Filtering techniques, fractional differential gradients and machine learning methods can detect and identify the type of noise; they rely mainly on image content and the spatial-domain information of a given image. With advancements in technology, deep learning (DL) can be trained to mimic human intelligence in recognizing image patterns, audio and text with high accuracy. A deep learning framework enables the correct processing of multiple images for object identification and rapid decision-making without human intervention. Here, a Convolutional Neural Network (CNN) model is implemented to detect and identify the type of noise in a given image. Over multiple internal iterations to optimize the results, the identified noise is classified with 99.25% accuracy by the Proposed System Architecture (PSA), compared with AlexNet, Yolo V5, Yolo V3, RCNN and CNN. The proposed model proved suitable for the classification of mural images on every performance parameter: the precision, accuracy, F1-score and recall of the PSA are 98.50%, 99.25%, 98.50% and 98.50%, respectively. This study contributes to the development of mural art recovery.

1. Introduction

Digital images play a vital role in many fields, such as medical science, astronomy, geography, defense and image animation [1]. In real-life scenarios, captured images of stationary or moving objects differ from the original scene, and this has been experienced even with high-quality capture media. The captured image is affected by unwanted effects such as grey scale, dense shade and improper lighting; this unwanted part of the captured image is termed noise. Noise may cause incorrect results during image analysis. Image restoration addresses this problem by manipulating the image so that it closely matches the original [2]. Detecting and removing noise while preserving content is a requirement of current research in the image processing domain. Different techniques, such as filtering and machine learning, remove noise from digital images; this is also termed denoising. In the denoising process, the initial requirement is to detect and identify the type of noise in the given image; the detected noise type is then the input to the image restoration process. After noise detection (ND) and noise removal, images can be used in many applications, such as disease detection and object detection in astronomy. Image restoration (IR) is chiefly required in fields such as medical science and the keeping of ancient historical documents [3].
In this paper, ancient mural images are used as the image dataset because they depict the cultural heritage and topographical status of a community. They provide historical details such as the culture, lifestyle and natural environment of their time [4], and inform people about changes in ancient images and artistic style. These murals are of great interest to archaeologists and scientists for research [5]. Preserving these ancient images is a most challenging task, as most are very old; some became distorted through natural disasters such as earthquakes, storms and rains [6]. Improper handling of murals can also lead to distortion. These distortions limit the murals' usefulness for research-oriented studies [7]. Degradation, shedding and cracking are common damage issues with ancient artworks, and such distortions can cause researchers to misidentify and misclassify the murals, producing inaccurate results. Restoration of degraded artworks requires skilled artisans, who are difficult to find these days. Therefore, digital image restoration techniques are being advanced to meet the requirements of degraded mural images [8,9].
Restoration of such ancient images will be a massive contribution to historical research and study, helping to permanently store and update mural information. In recent years, researchers have focused on various areas for preserving these historical details. This paper makes its novel contribution in the areas of noise detection (ND) and ancient image restoration (IR) [7,10]. The noise types considered in this research are Poisson noise, speckle noise, Gaussian noise and impulse noise. Gaussian noise follows the normal (Gaussian) probability density function (PDF); it can be generated by conditions such as sensor noise or high temperature at the time of image acquisition, and it may be additive or multiplicative. Poisson noise has a distribution similar to the Gaussian; it arises mainly from the discrete number of photons captured by the sensors and from the discrete nature of electric charge. Speckle noise is a granular noise that exists inherently in an image, caused by interference of the coherent signal; it makes the image more complex and harder to interpret [11,12,13]. A study of noise models is essential for image noise removal and for better accuracy of results [14,15].
The primary research contribution starts with extracting dominant features from the images. Feature extraction is performed using the wavelet transform, selected because it decomposes images to a high degree. The model is then constructed using a convolutional neural network.
A deep learning framework works analogously to the human brain. It repeatedly performs tasks, tweaking and optimizing during each run to improve on the results of traditional techniques [16]. The results are based on continuous learning, unlearning and relearning without human intervention. Deep learning has proven its value in mission-critical applications such as image tracing, semantic segmentation, image recognition and restoration [17].
A recent study demonstrated the detection of COVID-19 from X-ray images using Convolutional Neural Network and Long Short-Term Memory (LSTM) patterns [18]. Among the many kinds of deep learning models, the convolutional neural network is the most commonly used [19].
Ancient mural images pose a major challenge because they are prone to degradation. A PSA based on a multilevel neural network is designed to identify and classify noise. This type of neural network not only gives high reliability but also works well even when the dataset is small. The results of the PSA are compared with the existing AlexNet, Yolo V5, Yolo V3, RCNN and CNN models, showing that the PSA is more robust than the existing approaches. The proposed model proved suitable for the classification of mural images on every performance parameter: the precision, accuracy, F1-score and recall of the PSA are 98.50%, 99.25%, 98.50% and 98.50%, respectively. This study contributes to the development of mural art analysis. Based on the performance results, our model is well suited to identifying and classifying mural image datasets, and there is broad scope for using it in mural restoration. This will significantly support archaeologists studying ancient mural images.
A list of abbreviations and acronyms are given in Table 1.
This paper is organized as follows: Section 2 surveys the literature on existing methods; Section 3 proposes the noise identification flow; Section 4 presents the implementation and analysis; Section 5 gives the results; and Section 6 presents the conclusions.

2. Literature Survey

This section highlights existing methods and analyses strengths and weaknesses of existing methods. The challenges of research are highlighted towards the end of the section.
Qi Wang et al. [20] propose preserving low-frequency image information while removing high-frequency noise. Fractional calculus can effectively reduce noise, and fractional calculus theory is well established in digital image processing. Their study proposes a new fractional differential gradient-based approach for detecting noise locations in images, as well as an enhanced image denoising algorithm based on fractional integration. The proposed model can efficiently remove noise while preserving image textures and edges; its advantages are the simplicity of the approach and its high degree of consistency.
Ying Liufu et al. [21] developed a reformative noise-immune neural network (RNINN) model and compared it to the classical gradient-based recursive neural network model in terms of convergence qualities and noise-resistance capabilities. The RNINN model's efficiency is demonstrated by a numerical simulation experiment and an application to image target detection.
Kushagra Yadav and colleagues [22] propose a two-stage object identification strategy to upgrade prior detection pipelines: the first stage is image denoising and the second is the primary object detection. In the denoising step, the proposed model uses a Residual Dense Network (RDN) to restore improved-quality images from noisy ones; it extracts local characteristics from densely connected convolutional layers while training hierarchical functions for image denoising. DNN-based denoising algorithms were validated using task-based IQ measures in paper [23], where binary signal detection tasks were explicitly considered under SKE/BKS conditions. To assess the impact of denoising on task performance, the ideal observer (IO) and common linear numerical observers are assessed and detection efficiency is estimated.
Studies examining modern denoising methods using objective methodologies are scarce [24]. To restore phase data in the presence of significant speckle decorrelation noise, it is critical to unwrap and de-noise the data. The two most common approaches to dealing with a noisy wrapped phase are to de-noise before unwrapping or to unwrap before de-noising; comparing the resilience and efficiency of these techniques is the goal of that study. Six combinations of the various methodologies are compared, tested on ten simulated phase maps with escalating noise standard deviations. According to the simulation results, windowed Fourier transform filtering before unwrapping with least-squares and iterations yields high accuracy and computation speed when restoring noisy data [25].
Varad A. Pimpalkhute and colleagues [26] note that various image processing applications, such as denoising, compression and video tracking, require accurate prediction of the noise type and strength. There are numerous approaches for estimating the kind and degree of noise in digital photographs; most of them either transform the images or use their spatial-domain information.
To evaluate the level of Gaussian noise in digital images, the authors present a hybrid approach based on the Discrete Wavelet Transform (DWT) and edge information removal. According to Kriengkri Langampol et al. [27], mixed noise occurs when multiple types of noise are present in an image at the same time. Based on the switching bilateral filter, the researchers present the detection switching bilateral filter (DSBF), where Gaussian and impulsive noise are considered as mixed noise. The filter's performance is compared to the NCSR, BM3D and SBF filters. The simulation results demonstrate that the DSBF outperforms the other approaches, providing the highest PSNR, because it employs the best color parameter values for red, green and blue. Furthermore, the proposed technique's complexity is similar to that of the SBF, the original filter in this field.
A two-stage SAID-END denoising algorithm was presented by Amandeep Singh et al. [28] to tackle the performance difficulties of existing denoising algorithms. A study by Danilo Gustavo Gil Sierra [29] proposes the sequential implementation of two bioinspired computational models to reduce impulsive noise and perform edge identification in grayscale images. Piyush Joshi and Surya Prakash [30] propose a new noise-detection-based technique for evaluating image quality.
Many applications rely on training images, such as face recognition and ear recognition. Poor-quality training photos result in inefficient training, which lowers a biometric system's performance. That research proposes a method for estimating the quality of biometric images; the technique requires no reference image, and its efficacy has been demonstrated through experimental study. G. Maragatham et al. [31] describe noise detection and identification using a statistical moment-based approach. The technique uses the discrete cosine transform to process the frequency-domain information of the image, grouped by bands. To determine whether noise exists, and more specifically whether impulsive noise exists, the statistical parameter kurtosis and a threshold on the sum of absolute deviations are used. Noise identification is a key component of any digital image processing system, as it identifies the filters needed to smooth the image for further processing.
For noise identification, an Artificial Neural Network (ANN)-based technique was presented [32]. The proposed method involved isolating the noise samples and extracting their statistical properties, which were then used to identify the noise using a neural network. The results showed that neural networks were more effective at detecting noise. The exact filter can be used to enhance the given images by recognizing the noise [33].
An approach based on neural networks for detecting noise types in noisy images is presented by Karibasappa K.G and colleagues [15]. There are many types of de-noising filters that can be applied to the proposed method. Based on the results of simulation experiments, the technology appears to be able to accurately determine the type of noise.
Hui Ying Khaw [34] proposes a model for recognizing image noise of various forms and levels, including Poisson, speckle, Gaussian and impulse noise, as well as combinations of several noise types.
A CNN with the backpropagation algorithm is used to categorize the noise type. The authors' investigations showed that the suggested noise type recognition model is reliable, with an overall average accuracy of 99.3% when recognizing eight different types of noise. An image is a visual representation of something or someone, while a digital image is a numerical representation of that image; image processing techniques change pixel values to improve the image's quality or to provide different viewpoints for human use [35].
According to Davood Karimi et al. [36], there is an increasing demand for such datasets in medical analysis applications, but the influence of label noise has received insufficient consideration. Label noise has been found to have a considerable impact on the performance of recent deep learning and machine learning studies. In the medical science domain, datasets are often limited, domain expertise is required for labeling, and incorrect predictions may have a direct impact on human health.
In paper [19], the study showed that machine learning achieves better performance on minimal sample datasets, whereas deep learning performs significantly better on larger datasets. Deep learning techniques mimic human intelligence in understanding audio, text patterns and images. A smart deep learning framework, also known as a deep neural network, is compared to the human brain in its object identification and quick decision-making abilities. It is widely used in emerging applications such as driverless (self-driving) cars and Automatic Machine Translation (AMT) [37].
Paper [38] implemented a photo classification system for cloud shapes, using an FDM extractor to detect and extract cloud-like objects from large photos into small images, which are then sent to a multi-channel CNN image classifier. Using this classification system, meteorologists can categorize weather images more quickly, increasing the efficiency of meteorological statistics; weather forecasts can be improved using such comprehensive statistical data.
To reduce the overfitting of deep CNNs, paper [39] proposes a two-stage training method that optimizes the feature boundary. The two-stage method comprises pre-training, anomaly detection and implicit regularization. The implicit regularization training process begins with anomaly detection using the pre-training model, followed by setting a regularization factor. This data-level approach to implicit regularization offers innovative methods for AI, applicable to machine learning algorithms as well.
For automatic modulation classification, paper [40] proposed a spectrum interference-based two-level data augmentation method in DL. As a deep CNN trained with data augmentation shows excellent performance, the proposed approach can be employed directly in cognitive radios if the model can be trained on more signals under different SNRs with various modulation schemes. Deep learning models can be improved by data augmentation, but factors such as model structure and weight initialization may affect the generalization ability of neural networks.
Based on the literature discussed in this section and summary given in Table 2, the challenges of noise types in Ancient Images are:
  • To remove noise from an image, it is necessary to detect the type of noise so that the image content is not hampered while noise pixels are removed.
  • The system must also detect the noise type irrespective of the content of the image.
In response to these challenges, the proposed system fills this gap in the literature. It is tested on different types of images and gives satisfactory results. The Results and Discussion section presents the comparison of the PSA with various algorithms. The proposed system is also evaluated on different processors to find its time complexity, as done by A.S. Ladkat et al. [41].
Table 2. Comparison with other detection applications.

Paper | Type of Detection | Technique Used | Dataset | Accuracy (%)
[9] | Classification of murals | MultiChannel Separable Network model (MCSN) | China Dunhuang murals | 88.16
[16] | Biological images | Deep learning | Wood boards | 93
[18] | COVID-19 | Deep CNN-LSTM | X-ray images | 99.4
[37] | Living or non-living things | VGG16 | ImageNet | 99.8
[38] | Cloud shape | CNN and FDM | 200 actual photos of real scenes | 94
[39] | Image detection | Two-stage training | USPS, ILSVRC2012, MNIST, SVHN, CIFAR10, CIFAR100 | 98
Proposed System Architecture | Noise type | Wavelet transform and CNN | Ancient mural images | 99.25

3. Proposed Noise Identification

The noise identification flow for the proposed system is shown in Figure 1, which gives the overall flow of the experiment. Images from the Indian Heritage Image Retrieval Dataset are processed to serve as reference images. A noisy image is generated by adding different types of noise to the reference image: Poisson, speckle, Gaussian and impulse noise.
These images are then classified by a neural network classifier to obtain the classification accuracies. Furthermore, the difference in classifier accuracy between a reference image and its noisy versions was observed [42].
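The four noise types added to the reference images can be synthesized directly; the following is a minimal NumPy sketch, where parameter values such as the Gaussian standard deviation and the salt-and-pepper ratio are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def add_gaussian(img, sigma=0.05):
    # Additive zero-mean Gaussian noise.
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_poisson(img, scale=255.0):
    # Photon-counting (Poisson) noise: intensity-dependent.
    return np.clip(np.random.poisson(img * scale) / scale, 0.0, 1.0)

def add_speckle(img, sigma=0.1):
    # Multiplicative granular noise inherent to coherent imaging.
    return np.clip(img + img * np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_impulse(img, amount=0.05):
    # Salt-and-pepper: random pixels forced to minimum/maximum intensity.
    noisy = img.copy()
    mask = np.random.rand(*img.shape)
    noisy[mask < amount / 2] = 0.0
    noisy[mask > 1 - amount / 2] = 1.0
    return noisy

reference = np.random.rand(64, 64)  # stand-in for a mural image scaled to [0, 1]
noisy_set = {
    "gaussian": add_gaussian(reference),
    "poisson": add_poisson(reference),
    "speckle": add_speckle(reference),
    "impulse": add_impulse(reference),
}
```

Each noisy variant keeps the reference image's shape and intensity range, so the same feature extraction pipeline can be applied to all of them.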
The PSA is shown in Figure 2. A CNN is used because it gives the best output. The PSA receives the labeled dataset as input in the form of coloured images. The wavelet transform (WT) is used to extract features from the images; it retrieves 1,572,864 features in total, which raises the system's spatial complexity. A dimensionality reduction technique is therefore employed, reducing the number of features to 24,576. An 11 × 11 convolution filter size is chosen: a filter of 11-pixel height and width covers larger spatial regions, which greatly helps improve classification accuracy. The pooling layer reduces the spatial size of the representation collected by the convolutional layer, essentially simplifying and condensing the information obtained. The most used pooling technique is max pooling: the max pooling layer slides a window across the input, taking the maximum value inside the window and disregarding all other values; as with a convolutional layer, the window size (corresponding to the filter size) and stride are specified. A CNN's last layers are usually fully connected layers, whose main responsibility is to classify the features found and retrieved by the series of convolutional and pooling layers. Before feeding the fully connected layers, the feature maps are flattened into a single 1-D vector, which is then sent to the neural network for classification with appropriate ratios of neurons and layers.
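The layer sequence described above (11 × 11 convolution, max pooling, flattening, fully connected classification) can be traced shape-by-shape. The NumPy sketch below assumes an illustrative 128 × 128 single-channel spatial layout for the reduced features (an assumption for demonstration; the paper's 24,576 features are not necessarily arranged this way) and implements only the max pooling step explicitly.

```python
import numpy as np

def conv_output_size(n, k, stride=1):
    # Spatial size of a 'valid' convolution with a k x k filter.
    return (n - k) // stride + 1

def max_pool(x, size=2):
    # Non-overlapping max pooling: keep the maximum of each size x size window.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

side = 128                                   # assumed spatial layout of the reduced features
n_conv = conv_output_size(side, 11)          # 11 x 11 filter, as in the PSA
pooled = max_pool(np.random.rand(n_conv, n_conv))
flat = pooled.reshape(-1)                    # 1-D vector fed to the fully connected layers
```

Tracing the shapes this way makes it easy to size the first fully connected layer before committing to a framework implementation.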

4. Algorithm Steps and Processes

The PSA is implemented in the following steps:
  • Step 1: Acquire ancient images from dataset
The dataset contains mural images from various heritage sites. The ancient image dataset is accepted and then passed through the wavelet transform.
  • Step 2: Decompose image using wavelet transform and extract the features [43].
Feature extraction is a primary technique in digital image processing, executed after segmentation; it is a crucial stage that shapes the final result. It reduces the amount of resources needed to process a large set of data, and refers to identifying and processing certain features of interest within an image. Features are described as functions of measurements: each feature describes some quantifiable property of an object and is calculated in a way that estimates an essential characteristic. The features considered in the PSA are energy, mean, homogeneity, contrast, correlation, variance, skewness, kurtosis and entropy [20,44,45,46].
Energy measures how homogeneous an image is, i.e., whether it has few gray levels:

$$\sum_{i,j} p_{i,j}^2$$

Mean measures the intensity of the pixels in a region as the average value of their brightness:

$$\mu = \sum_{i=0}^{G-1} i \, p(i)$$

Homogeneity measures how close the distribution of elements in the matrix is to its diagonal:

$$\sum_{i,j} \frac{p_{i,j}}{1 + |i - j|}$$

Contrast is the difference between the brightness of objects or regions in a field of view and the brightness of their neighbors:

$$\sum_{i,j} (i - j)^2 \, p_{i,j}$$

Correlation is the degree and type of relationship between adjacent pixels throughout the whole image:

$$\sum_{i,j} \frac{(i - \mu_i)(j - \mu_j) \, p_{i,j}}{\sigma_i \sigma_j}$$

Variance measures how much the intensity of the image varies around the mean:

$$\sigma^2 = \sum_{i=0}^{G-1} (i - \mu)^2 \, p(i)$$

Skewness is zero when the histogram is symmetric about the mean; otherwise it is positive or negative depending on whether the histogram is skewed above or below the mean, making it a measure of asymmetry:

$$\mu_3 = \sigma^{-3} \sum_{i=0}^{G-1} (i - \mu)^3 \, p(i)$$

Kurtosis measures the flatness or peakedness of the histogram relative to a normal distribution:

$$\mu_4 = \sigma^{-4} \sum_{i=0}^{G-1} (i - \mu)^4 \, p(i) - 3$$

Entropy measures the overall disorder or randomness of the image:

$$\mathrm{Entropy} = -\sum_{i=0}^{G-1} p(i) \log_2 p(i)$$
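The first-order statistics among these features (mean, variance, skewness, kurtosis, entropy) are all computed from the normalized gray-level histogram $p(i)$. A minimal NumPy sketch (the 256-level histogram and the random test image are illustrative choices, not the paper's configuration):

```python
import numpy as np

def histogram_features(img, G=256):
    # Normalized gray-level histogram p(i), i = 0 .. G-1.
    p, _ = np.histogram(img, bins=G, range=(0, G))
    p = p / p.sum()
    i = np.arange(G)
    mean = np.sum(i * p)
    var = np.sum((i - mean) ** 2 * p)
    sigma = np.sqrt(var)
    skew = np.sum((i - mean) ** 3 * p) / sigma ** 3
    kurt = np.sum((i - mean) ** 4 * p) / sigma ** 4 - 3
    nz = p[p > 0]                       # drop empty bins to avoid log2(0)
    entropy = -np.sum(nz * np.log2(nz))
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

img = np.random.randint(0, 256, size=(64, 64))
feats = histogram_features(img)
```

The second-order features (energy, homogeneity, contrast, correlation) would instead be computed from a gray-level co-occurrence matrix $p_{i,j}$.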
The low- and high-frequency components are extracted from the given images. Daubechies 5 at decomposition level three is selected. The image $image(x,y)$ is passed through the wavelet transform to obtain the coefficients $W_\varphi(j_0,m,n)$ and $W_\psi^i(j,m,n)$, as illustrated in Equations (10) and (11):

$$W_\varphi(j_0,m,n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} image(x,y) \, \varphi_{j_0,m,n}(x,y)$$

$$W_\psi^i(j,m,n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} image(x,y) \, \psi^i_{j,m,n}(x,y), \qquad i \in \{H,V,D\}$$

where $j_0$ is an arbitrary starting scale, the coefficients $W_\varphi(j_0,m,n)$ define an approximation of $image(x,y)$ at scale $j_0$, and the coefficients $W_\psi^i(j,m,n)$ add detail for scales $j \geq j_0$. Given $W_\varphi$ and $W_\psi^i$, $image(x,y)$ is reconstructed as:

$$image(x,y) = \frac{1}{\sqrt{MN}} \sum_{m} \sum_{n} W_\varphi(j_0,m,n) \, \varphi_{j_0,m,n}(x,y) + \frac{1}{\sqrt{MN}} \sum_{i=H,V,D} \sum_{j=j_0}^{\infty} \sum_{m} \sum_{n} W_\psi^i(j,m,n) \, \psi^i_{j,m,n}(x,y)$$

Equation (12) shows the decomposition of $image(x,y)$, which constitutes the extracted features. The result of the decomposition occupies a large amount of space; to reduce the space complexity, the following mathematical model is used. It reduces the dimensionality of the extracted features while keeping the underlying structure of the data unchanged.
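The approximation and detail sub-bands of Equations (10) and (11) can be illustrated with a one-level 2-D transform. Since the paper's exact pipeline is not listed, the sketch below uses a simple Haar transform implemented directly in NumPy rather than the Daubechies-5 level-3 decomposition; in practice a wavelet library would be used, recursing on the LL band for multi-level decomposition.

```python
import numpy as np

def haar_dwt2(img):
    # One-level 2-D Haar transform: average/difference along rows, then columns.
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    LL = (a[0::2] + a[1::2]) / 2.0            # approximation (W_phi)
    LH = (a[0::2] - a[1::2]) / 2.0            # horizontal detail (W_psi^H)
    HL = (d[0::2] + d[1::2]) / 2.0            # vertical detail (W_psi^V)
    HH = (d[0::2] - d[1::2]) / 2.0            # diagonal detail (W_psi^D)
    return LL, LH, HL, HH

img = np.random.rand(8, 8)
LL, LH, HL, HH = haar_dwt2(img)
# The transform is invertible: e.g., the even-even pixels satisfy
# img[0::2, 0::2] == LL + LH + HL + HH, mirroring Equation (12).
```

Each sub-band is a quarter of the input size, so three levels of recursion on LL give the coarse approximation plus nine detail bands.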
  • Step 3: Dimensional reduction of features
The retrieved decomposed values of the input image are passed through the equations below to obtain reduced-dimensionality features. The first principal direction $w_1$ maximizes the projected variance:

$$w_1 = \arg\max_{\|w\|=1} \sum_i (t_1)_i^2 = \arg\max_{\|w\|=1} \sum_i (x_i \cdot w)^2$$

$$w_1 = \arg\max_{\|w\|=1} \|Xw\|^2 = \arg\max_{\|w\|=1} w^{T} X^{T} X w$$

$$w_1 = \arg\max \frac{w^{T} X^{T} X w}{w^{T} w}$$

Equation (13) obtains the maximum matching of the features with the most suitable mathematical arguments. Equations (14) and (15) give the feature; the $i$-th component is found by subtracting the first $i-1$ components:

$$\hat{X}_i = X - \sum_{s=1}^{i-1} X w_s w_s^{T}$$

$$w_i = \arg\max_{\|w\|=1} \|\hat{X}_i w\|^2 = \arg\max \frac{w^{T} \hat{X}_i^{T} \hat{X}_i w}{w^{T} w}$$

To calculate the covariance of the feature set, the following equation is used:

$$Q(PC_j, PC_k) \propto (X w_j)^{T} (X w_k) = w_j^{T} X^{T} X w_k = w_j^{T} \lambda_k w_k = \lambda_k w_j^{T} w_k$$

The features represented by Equations (17) and (18) are then fed to the neural network to obtain a classification. All these equations serve one central idea: to find the relationships among the features and reduce their dimensionality, leading to lossless data compression.
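Equations (13)–(18) are the standard principal component analysis formulation, which can be computed via an eigendecomposition of the covariance matrix. A compact NumPy sketch with illustrative dimensions (not the paper's 1,572,864 → 24,576 configuration):

```python
import numpy as np

def pca_reduce(X, k):
    # Center the data, then take the top-k eigenvectors of the covariance
    # matrix as the principal directions w_i (Eqs. 13-17).
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]                  # top-k principal directions
    return Xc @ W, W                          # projected scores and directions

np.random.seed(1)
X = np.random.rand(100, 10)
scores, W = pca_reduce(X, 3)
```

As Equation (18) states, distinct principal components are uncorrelated: the covariance matrix of the projected scores is diagonal, with the eigenvalues on the diagonal.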
  • Step 4: Pass the features to Convolutional Neural Network
A neural network is the summation of all the well-connected neurons along with their weights; the network with the best-suited weights is called a trained neural network. Here, $x$ is the input to a neural network layer, $y$ is the output of that layer and $\omega$ denotes the weights of the connected neurons.
For all the $x_{ij}$ expressions in which $\omega_{ab}$ occurs,

$$\frac{\partial E}{\partial \omega_{ab}} = \sum_{i=0}^{N-m} \sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}} \frac{\partial x_{ij}}{\partial \omega_{ab}} = \sum_{i=0}^{N-m} \sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}} \, y^{\ell-1}_{(i+a)(j+b)}$$

Here $\frac{\partial x_{ij}}{\partial \omega_{ab}} = y^{\ell-1}_{(i+a)(j+b)}$, which follows from the forward propagation equation.
To compute the gradient, it is necessary to obtain the values $\frac{\partial E}{\partial x_{ij}}$, where the chain rule gives

$$\frac{\partial E}{\partial x_{ij}} = \frac{\partial E}{\partial y_{ij}} \frac{\partial y_{ij}}{\partial x_{ij}} = \frac{\partial E}{\partial y_{ij}} \, \sigma'(x_{ij})$$

Therefore, the deltas $\frac{\partial E}{\partial x_{ij}}$ at the current layer can be computed simply from the error $\frac{\partial E}{\partial y_{ij}}$ at the current layer using the derivative of the activation function $\sigma(x)$.
With the errors at the current layer known, it is now easy to compute the gradient with respect to the weights used by this convolutional layer. It is then necessary to transmit errors back to the previous layer in addition to computing the weight gradients for the current convolutional layer:

$$\frac{\partial E}{\partial y^{\ell-1}_{ij}} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} \frac{\partial E}{\partial x_{(i-a)(j-b)}} \frac{\partial x_{(i-a)(j-b)}}{\partial y^{\ell-1}_{ij}} = \sum_{a=0}^{m-1} \sum_{b=0}^{m-1} \frac{\partial E}{\partial x_{(i-a)(j-b)}} \, \omega_{ab}$$

This forward and backward movement improves the weights so that the neural network achieves maximum classification accuracy. Pooling operations are then applied to the convolved feature set.
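The weight-gradient formula above can be checked numerically. This NumPy sketch (toy sizes, a toy loss $E = \sum_{ij} x_{ij}^2$, and an identity activation, all illustrative assumptions) implements the forward 'valid' convolution and the gradient $\partial E/\partial \omega_{ab}$ exactly as written, then verifies it against a finite-difference estimate.

```python
import numpy as np

def conv2d_valid(y_prev, w):
    # Forward pass: x_ij = sum_{a,b} w_ab * y_{(i+a)(j+b)}
    m = w.shape[0]
    n = y_prev.shape[0] - m + 1
    x = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x[i, j] = np.sum(w * y_prev[i:i + m, j:j + m])
    return x

def weight_grad(y_prev, dE_dx, m):
    # dE/dw_ab = sum_{i,j} dE/dx_ij * y_{(i+a)(j+b)}
    n = dE_dx.shape[0]
    g = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            g[a, b] = np.sum(dE_dx * y_prev[a:a + n, b:b + n])
    return g

np.random.seed(0)
y_prev, w = np.random.rand(6, 6), np.random.rand(3, 3)
x = conv2d_valid(y_prev, w)
dE_dx = 2 * x                      # gradient of the toy loss E = sum(x^2)
g = weight_grad(y_prev, dE_dx, 3)

# Finite-difference estimate of dE/dw_00 for comparison.
eps = 1e-6
w2 = w.copy()
w2[0, 0] += eps
num = (np.sum(conv2d_valid(y_prev, w2) ** 2) - np.sum(x ** 2)) / eps
```

Agreement between the analytic and finite-difference gradients confirms the indexing in the derivation.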
  • Step 5: Flattening of the pooled features
Next, the procedure involves flattening. The generated 2-dimensional arrays from pooled feature maps are flattened into a single long continuous linear vector. The fully connected layer uses the flattened matrix as an input for noise identification.
  • Step 6: Noise classification
The labeled classes are speckle noise, Poisson noise, Gaussian noise and impulse noise. These four classes are trained and tested using the CNN on the basis of the extracted features, and the system then reports whether the given image is noisy and, if so, which noise type it contains.

5. Results and Discussion

The results of the proposed system, 'noise type detection of ancient images with feature extraction and neural networks', are presented in this section. The experiment uses TensorFlow and Keras for the neural network. The hardware environment consists of an 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80 GHz with 16 GB memory and a 64-bit operating system. The software environment includes the freeware Microsoft Visual Studio Code editor version 1.70.1, used with Python 3.6.3 for coding; OpenCV is used for image processing. The ancient mural images used in this paper are sourced from the Indian Heritage Image Retrieval Dataset. It contains images from various heritage sites in Andhra Pradesh and Karnataka depicting various historical and mythological events. Image resolutions are 10 M pixels and 18 M pixels, with sizes ranging from 300 KB to 15 MB. Sample noisy and non-noisy images are shown in Figure 3.
The metrics employed in the analysis to assess the effectiveness of the model are accuracy, precision, recall and F1-score. The wavelet transform of level 3 and db 5 is selected based on the accuracy of the classifier; the classification results for the different levels are tabulated in Table 3.
Once the algorithm for the feature extraction is finalized, the performance parameters are calculated to check and compare PSA with other existing architectures.
The performance of algorithms has been measured by a number of metrics, each focusing on a different aspect; appropriate metrics must be chosen to evaluate each machine learning problem. To evaluate and compare the algorithms, the common classification metrics are used here: precision, recall [47], F1-score [48], accuracy and the confusion matrix [49]. These metrics are commonly used for performance evaluation in many classification applications [50,51].
To find the performance parameters, the confusion matrix is first computed; accuracy, precision, recall and F1-score are then calculated as given in Equations (22)–(25).
Accuracy = (TP + TN) / (TP + TN + FP + FN)    (22)
Precision = TP / (TP + FP)    (23)
Recall = TP / (TP + FN)    (24)
F1-Score = 2TP / (2TP + FP + FN)    (25)
where TP (true positive) is a positive prediction that is correct, TN (true negative) is a negative prediction that is correct, FP (false positive) is a positive prediction that is wrong, and FN (false negative) is a negative prediction that is wrong.
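Equations (22)–(25) can be computed directly from the confusion-matrix counts. A small Python helper (an illustration, not the authors' code; the function name is ours):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1-score from confusion-matrix
    counts, following Equations (22)-(25)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, recall, f1
```

In the multi-class setting of Table 4, these quantities are computed per class (one-vs-rest) and then averaged.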
The table shows that the minimum accuracy, 89.23%, occurs when the features are extracted using the WT of level 2 with db 1, and the maximum occurs with the WT of level 3 with db 5. Different configurations of the WT were tried and tested with the proposed neural network; level 3 with db 5 gives the maximum accuracy, as shown in Figure 4, and is therefore selected for feature extraction.
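To make the feature-extraction step concrete, here is a self-contained sketch of multi-level wavelet features. A hand-rolled Haar decomposition stands in for the paper's db 5 wavelet (which would normally come from a wavelet library), and the per-subband statistics (mean, standard deviation, energy) are our assumption of a plausible feature vector, not the authors' exact features.

```python
import numpy as np

def haar_step(x):
    """One 2-D Haar decomposition step: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL: approximation
            (a[:, 0::2] - a[:, 1::2]) / 2.0,   # LH: horizontal detail
            (d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL: vertical detail
            (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH: diagonal detail

def wavelet_features(img, levels=3):
    """Mean, standard deviation and energy of each detail subband at each
    decomposition level, collected into one feature vector."""
    feats, ll = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        ll, *details = haar_step(ll)     # recurse on the LL band
        for band in details:
            feats += [band.mean(), band.std(), (band ** 2).mean()]
    return np.array(feats)
```

Noise concentrates in the detail subbands, which is why such statistics discriminate well between noise types: a clean, smooth region yields near-zero detail energy, while Gaussian or impulse contamination inflates it at every level.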
After considering the classification results, the WT of level 3 with db 5 is selected. Using this WT, the system is then tested on the four classes (Figure 5).
The performance parameters of the proposed system are tabulated in Table 4 and its comparative graph is shown in Figure 6. The overall accuracy of the proposed system is 98.93%.

5.1. Comparative Methods

The effectiveness of the PSA model is demonstrated through comparative analysis with AlexNet, Yolo V5, Yolo V3, RCNN and CNN.
Table 5 gives the results of the comparison with the other models, visualized in Figure 7.
The comparative analysis of the PSA model is based on performance metrics such as accuracy, F1-score, recall and precision. On all of these parameters, the proposed system performs better than the abovementioned models.

5.2. Comparative Analysis

Different numbers of hidden layers were checked for accuracy; classification accuracy is observed to be highest with nine hidden layers. Accuracy as a function of the number of hidden layers is given in Table 6: accuracy increases gradually with the number of hidden layers up to the 9th layer, after which it starts decreasing.
Figure 8 gives a diagrammatic representation of noise classification using the PSA. It shows the class clusters and the target points labeled as noise type not detected, correctly detected and incorrectly detected.

5.3. Comparative Discussion

Comparative analysis of the PSA and the other models based on accuracy, precision, recall and F1-score is shown in Figure 9a. For every performance parameter, the proposed system is superior to AlexNet, Yolo V5, Yolo V3, RCNN and CNN, as shown in Figure 9b–f. These performance parameters of all the models are tabulated in Table 7.
The PSA utilizes only 12 KB of memory and 0.0001 s of processing time for a 50 KB image. The time and space complexity of the PSA for other image sizes is shown in Table 8.
The time complexity is also evaluated on different hardware configurations; the time each platform requires to produce results is tabulated in Table 9.
CPUs such as the i5 and i7 show similar times, while the GPU requires considerably less time than any CPU; the GPU therefore gives the better performance.
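The paper does not describe its measurement harness. One plausible sketch, using Python's standard timing and memory-tracing modules (the function name `measure` and the kilobyte conversion are our assumptions; `tracemalloc` only tracks Python-heap allocations, so native-library buffers would need a separate tool):

```python
import time
import tracemalloc

def measure(classify, image):
    """Wall-clock time and peak Python-heap memory for a single call of
    `classify`, where `classify` is any callable taking one image."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = classify(image)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()   # (current, peak) in bytes
    tracemalloc.stop()
    return result, elapsed, peak / 1024.0       # peak converted to KB
```

Calling `measure` on the trained classifier with images of increasing size would yield rows like those of Table 8; repeating on different machines yields Table 9.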

6. Conclusions

For restoring ancient images, it is first necessary to identify the type of noise present. The proposed study produces optimized results for noise identification and for classification of the noise type present in an image. The PSA was tested against various parameters for reliability and accuracy, its time complexity was compared across different hardware configurations, and the proposed system was compared with the existing AlexNet, Yolo V5, Yolo V3, RCNN and CNN models. The PSA accuracy was 99.25%, which is significantly better than the previous systems. Several noise types can coexist in a single image; classifying such mixed noise is left for future work.

Author Contributions

Conceptualization, P.P.; methodology, P.P. and B.A.; writing—original draft preparation, P.P.; validation, S.S.A. and M.R.; formal analysis, B.A. and A.A.; writing—review and editing, M.R. and N.A.; supervision, B.A.; funding acquisition, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Deanship of Scientific Research, Taif University Researchers Supporting Project number (TURSP-2020/302), Taif University, Taif, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data in this research paper will be shared upon request to the corresponding author.

Acknowledgments

The authors would like to thank Taif University Researchers Supporting Project number (TURSP-2020/302), Taif University, Taif, Saudi Arabia for supporting this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Reeves, S.J. Image Restoration: Fundamentals of Image Restoration. In Academic Press Library in Signal Processing; Elsevier: Amsterdam, The Netherlands, 2014; Volume 4, pp. 165–192. [Google Scholar]
  2. Pawar, P.Y.; Ainapure, B.S. Image Restoration using Recent Techniques: A Survey. Des. Eng. 2021, 7, 13050–13062. [Google Scholar]
  3. Maru, M.; Parikh, M.C. Image Restoration Techniques: A Survey. Int. J. Comput. Appl. 2017, 160, 15–19. [Google Scholar]
  4. Cao, J.; Zhang, Z.; Zhao, A.; Cui, H.; Zhang, Q. Ancient mural restoration based on a modified generative adversarial network. Herit. Sci. 2020, 8, 7. [Google Scholar] [CrossRef]
  5. Zeng, Y.; Gong, Y.; Zeng, X. Controllable digital restoration of ancient paintings using convolutional neural network and nearest neighbor. Pattern Recognit. Lett. 2020, 133, 158–164. [Google Scholar] [CrossRef]
  6. Li, H. Restoration method of ancient mural image defect information based on neighborhood filtering. J. Comput. Methods Sci. Eng. 2021, 21, 747–762. [Google Scholar] [CrossRef]
  7. Bayerová, T. Buddhist Wall Paintings at Nako Monastery, North India: Changing of the Technology throughout Centuries. Stud. Conserv. 2018, 63, 171–188. [Google Scholar] [CrossRef]
  8. Mol, V.R.; Maheswari, P.U. The digital reconstruction of degraded ancient temple murals using dynamic mask generation and an extended exemplar-based region-filling algorithm. Herit. Sci. 2021, 9, 137. [Google Scholar] [CrossRef]
  9. Cao, J.; Jia, Y.; Chen, H.; Yan, M.; Chen, Z. Ancient mural classification methods based on a multichannel separable network. Herit. Sci. 2021, 9, 88. [Google Scholar] [CrossRef]
  10. Cao, J.; Li, Y.; Zhang, Q.; Cui, H. Restoration of an ancient temple mural by a local search algorithm of an adaptive sample block. Herit. Sci. 2019, 7, 39. [Google Scholar] [CrossRef]
  11. Boyat, A.K.; Joshi, B.K. A review paper: Noise models in digital image processing. arXiv 2015, arXiv:1505.03489. [Google Scholar] [CrossRef]
  12. Owotogbe, J.S.; Ibiyemi, T.S.; Adu, B.A. A comprehensive review on various types of noise in image processing. Int. J. Sci. Eng. Res. 2019, 10, 388–393. [Google Scholar]
  13. Kaur, S. Noise types and various removal techniques. Int. J. Adv. Res. Electron. Commun. Eng. (IJARECE) 2015, 4, 226–230. [Google Scholar]
  14. Halse, M.M.; Puranik, S.V. A Review Paper: Study of Various Types of Noises in Digital Images. Int. J. Eng. Trends Technol. (IJETT) 2018. [Google Scholar] [CrossRef]
  15. Karibasappa, K.G.; Hiremath, S.; Karibasappa, K. Neural network based noise identification in digital images. Assoc. Comput. Electron. Electr. Eng. Int. J. Netw. Secur. 2011, 2, 28–31. [Google Scholar]
  16. Affonso, C.; Rossi, A.L.D.; Vieira, F.H.A.; de Leon Ferreira, A.C.P. Deep learning for biological image classification. Expert Syst. Appl. 2017, 85, 114–122. [Google Scholar] [CrossRef] [Green Version]
  17. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  18. Islam, Z.; Islam, M.; Asraf, A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef]
  19. Wang, P.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
  20. Wang, Q.; Ma, J.; Yu, S.; Tan, L. Noise Detection and image denoising based on fractional calculus. Chaos Solitons Fractals 2020, 131, 109463. [Google Scholar] [CrossRef]
  21. Liufu, Y.; Jin, L.; Xu, J.; Xiao, X.; Fu, D. Reformative noise-immune neural network for equality-constrained optimization applied to image target detection. IEEE Trans. Emerg. Top. Comput. 2021, 10, 973–984. [Google Scholar] [CrossRef]
  22. Yadav, K.; Mohan, D.; Parihar, A.S. Image Detection in Noisy Images. In Proceedings of the 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 6–8 May 2021; IEEE: New York, NY, USA, 2021; pp. 917–923. [Google Scholar]
  23. Li, K.; Zhou, W.; Li, H.; Anastasio, M.A. Assessing the impact of deep neural network-based image denoising on binary signal detection tasks. IEEE Trans. Med. Imaging 2021, 40, 2295–2305. [Google Scholar] [CrossRef]
  24. Hellassa, S.; Souag-Gamane, D. Improving a stochastic multi-site generation model of daily rainfall using discrete wavelet de-noising: A case study to a semi-arid region. Arab. J. Geosci. 2019, 12, 53. [Google Scholar] [CrossRef]
  25. Xia, H.; Montresor, S.; Picart, P.; Guo, R.; Li, J. Comparative analysis for combination of unwrapping and de-noising of phase data with high speckle decorrelation noise. Opt. Lasers Eng. 2018, 107, 71–77. [Google Scholar] [CrossRef]
  26. Pimpalkhute, V.A.; Page, R.; Kothari, A.; Bhurchandi, K.M.; Kamble, V.M. Digital image noise estimation using DWT coefficients. IEEE Trans. Image Processing 2021, 30, 1962–1972. [Google Scholar] [CrossRef]
  27. Langampol, K.; Wasayangkool, K.; Srisomboon, K.; Lee, W. Applied Switching Bilateral Filter for Color Images under Mixed Noise. In Proceedings of the 2021 9th International Electrical Engineering Congress (iEECON), Pattaya, Thailand, 10–12 March 2021; IEEE: New York, NY, USA, 2021; pp. 424–427. [Google Scholar]
  28. Singh, A.; Sethi, G.; Kalra, G.S. Spatially adaptive image denoising via enhanced noise detection method for grayscale and color images. IEEE Access 2020, 8, 112985–113002. [Google Scholar] [CrossRef]
  29. Gil Sierra, D.G.; Sogamoso, K.V.A.; Cuchango, H.E.E. Integration of an adaptive cellular automaton and a cellular neural network for the impulsive noise suppression and edge detection in digital images. In Proceedings of the 2019 IEEE Colombian Conference on Applications in Computational Intelligence (ColCACI), Barranquilla, Colombia, 5–7 June 2019; IEEE: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  30. Joshi, P.; Prakash, S. Image quality assessment based on noise detection. In Proceedings of the 2014 International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 20–21 February 2014; IEEE: New York, NY, USA, 2014; pp. 755–759. [Google Scholar]
  31. Maragatham, G.; Roomi, S.M.; Vasuki, P. Noise Detection in Images using Moments. Res. J. Appl. Sci. Eng. Technol. 2015, 10, 307–314. [Google Scholar] [CrossRef]
  32. Santhanam, T.; Radhika, S. A Novel Approach to Classify Noises in Images Using Artificial Neural Network 1. 2010. Available online: https://www.researchgate.net/publication/47554429_A_Novel_Approach_to_Classify_Noises_in_Images_Using_Artificial_Neural_Network (accessed on 22 July 2022).
  33. Karibasappa, K.G.; Karibasappa, K. AI based automated identification and estimation of noise in digital images. In Advances in intelligent Informatics; Springer: Cham, Switzerland, 2015; pp. 49–60. [Google Scholar]
  34. Khaw, H.Y.; Soon, F.C.; Chuah, J.H.; Chow, C. Image noise types recognition using convolutional neural network with principal components analysis. IET Image Process. 2017, 11, 1238–1245. [Google Scholar] [CrossRef]
  35. Hiremath, S.; Rani, A.S. A Concise Report on Image Types, Image File Format and Noise Model for Image Preprocessing. 2020. Available online: https://www.irjet.net/archives/V7/i8/IRJET-V7I8852.pdf (accessed on 22 July 2022).
  36. Karimi, D.; Dou, H.; Warfield, S.K.; Gholipour, A. Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis. Med. Image Anal. 2020, 65, 101759. [Google Scholar] [CrossRef]
  37. Tiwari, V.; Pandey, C.; Dwivedi, A.; Yadav, V. Image classification using deep neural network. In Proceedings of the 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 18–19 December 2020; IEEE: New York, NY, USA, 2020; pp. 730–733. [Google Scholar]
  38. Zhao, M.; Chang, C.H.; Xie, W.; Xie, Z.; Hu, J. Cloud shape classification system based on multi-channel CNN and improved FDM. IEEE Access 2020, 8, 44111–44124. [Google Scholar] [CrossRef]
  39. Zheng, Q.; Yang, M.; Yang, J.; Zhang, Q.; Zhang, X. Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process. IEEE Access 2018, 6, 15844–15869. [Google Scholar] [CrossRef]
  40. Zheng, Q.; Zhao, P.; Li, Y.; Wang, H.; Yang, Y. Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification. Neural Comput. Appl. 2021, 33, 7723–7745. [Google Scholar] [CrossRef]
  41. Ladkat, A.S.; Date, A.A.; Inamdar, S.S. Development and comparison of serial and parallel image processing algorithms. In Proceedings of the 2016 International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 26–27 August 2016; pp. 1–4. [Google Scholar] [CrossRef]
  42. Boonprong, S.; Cao, C.; Chen, W.; Ni, X.; Xu, M.; Acharya, B.K. The classification of noise-afflicted remotely sensed data using three machine-learning techniques: Effect of different levels and types of noise on accuracy. ISPRS Int. J. Geo-Inf. 2018, 7, 274. [Google Scholar] [CrossRef]
  43. Subashini, P.; Bharathi, P.T. Automatic noise identification in images using statistical features. Int. J. Comput. Sci. Technol. 2011, 2, 467–471. [Google Scholar]
  44. Masood, S.; Jaffar, M.A.; Hussain, A. Noise Type Identification Using Machine Learning. 2014. Available online: https://pdfs.semanticscholar.org/6049/ee6606315cb6bb8f4db26cd5d1da5e1fb873.pdf (accessed on 22 July 2022).
  45. Krishnamoorthy, T.V.; Reddy, G.U. Noise detection using higher order statistical method for satellite images. Int. J. Electron. Eng. Res. 2017, 9, 29–36. [Google Scholar]
  46. Mutlag, W.K.; Ali, S.K.; Aydam, Z.M.; Taher, B.H. Feature extraction methods: A review. J. Phys. Conf. Ser. 2020, 1591, 012028. [Google Scholar] [CrossRef]
  47. Powers, D.M.W. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
  48. Sasaki, Y. The truth of the F-measure. Teach Tutor Mater 2007, 1, 1–5. [Google Scholar]
  49. Vakili, M.; Ghamsari, M.; Rezaei, M. Performance analysis and comparison of machine and deep learning algorithms for IoT data classification. arXiv 2020, arXiv:2001.09636. [Google Scholar]
  50. AlZoman, R.; Alenazi, M. A comparative study of traffic classification techniques for smart city networks. Sensors 2021, 21, 4677. [Google Scholar] [CrossRef]
  51. Li, W.; Liu, H.; Wang, Y.; Li, Z.; Jia, Y.; Gui, G. Deep Learning-Based Classification Methods for Remote Sensing Images in Urban Built-Up Areas. IEEE Access 2019, 7, 36274–36284. [Google Scholar] [CrossRef]
Figure 1. Noise Identification Flow.
Figure 2. PSA for the classification of noise type in image.
Figure 3. Samples of noisy and non-noisy images.
Figure 4. Accuracy of the classification of the noisy images.
Figure 5. Confusion matrix of the classification of the different noise types using PSA.
Figure 6. Performance parameters for the classification of different classes using PSA.
Figure 7. Comparison of the PSA with other models on the basis of performance parameters.
Figure 8. Diagrammatical representation of noise classification using proposed system architecture.
Figure 9. Performance parameters of the PSA and other models.
Table 1. List of acronyms and abbreviations.
PSA: Proposed System Architecture
DWT: Discrete Wavelet Transform
ND: Noise Detection
IR: Image Restoration
PDF: Probability Density Function
ANN: Artificial Neural Network
RDN: Residual Dense Network
PSNR: Peak Signal-to-Noise Ratio
IO: Ideal Observer
AMT: Automatic Machine Translation
FDM: Frame Difference Method
AI: Artificial Intelligence
WT: Wavelet Transform
RCNN: Region-based Convolutional Neural Network
Table 3. Accuracy of the PSA on the basis of feature extraction.
Wavelet Level | db Value | Accuracy (%)
2 | 1 | 89.23
2 | 2 | 92.34
2 | 3 | 95.23
2 | 4 | 93.12
2 | 5 | 92.59
3 | 1 | 94.93
3 | 2 | 92.34
3 | 3 | 95.23
3 | 4 | 97.23
3 | 5 | 98.93
4 | 1 | 96.34
4 | 2 | 95.23
4 | 3 | 97.52
4 | 4 | 98.29
4 | 5 | 97.42
Table 4. The proposed system’s performance parameters.
Class Name | Accuracy | Precision | Recall | F1-Score
Poisson noise | 98.75% | 97% | 98% | 98%
Speckle noise | 99.50% | 99% | 99% | 100%
Gaussian noise | 99% | 98% | 98% | 97%
Impulse value noise | 99.75% | 100% | 99% | 99%
Table 5. Performance comparison of proposed system with other models.
Algorithm | Accuracy | Precision | Recall | F1-Score
PSA | 99.25% | 98.50% | 98.50% | 98.50%
AlexNet | 97.30% | 96.60% | 95.70% | 97.93%
Yolo V5 | 94.09% | 94.58% | 93.10% | 94%
Yolo V3 | 92.24% | 90.88% | 91.20% | 93.17%
RCNN | 90.03% | 92.08% | 90.39% | 92.13%
CNN | 89.39% | 89.53% | 88.62% | 88.82%
Table 6. Performance parameters of the proposed system based on number of hidden layers.
Number of Hidden Layers | Accuracy (%)
1 | 86.23
2 | 87.43
3 | 89.56
4 | 90.88
5 | 92.66
6 | 94.09
7 | 95.69
8 | 97.29
9 | 98.93
10 | 97.29
11 | 96.23
Table 7. Performance parameters of the proposed PSA, AlexNet, Yolo V5, Yolo V3, RCNN and CNN.
Algorithm | Class Name | Accuracy | Precision | Recall | F1-Score
PSA | Gaussian noise | 99.00% | 98.00% | 98.00% | 98.00%
PSA | Impulse value noise | 99.75% | 100% | 99.00% | 100%
PSA | Poisson noise | 98.75% | 97.00% | 98.00% | 97.00%
PSA | Speckle noise | 99.50% | 99.00% | 99.00% | 99.00%
AlexNet | Gaussian noise | 98.62% | 97.13% | 96.42% | 98.62%
AlexNet | Impulse value noise | 97.41% | 96.30% | 95.23% | 98.17%
AlexNet | Poisson noise | 97.32% | 95.14% | 93.62% | 96.42%
AlexNet | Speckle noise | 96.21% | 97.83% | 97.53% | 98.50%
Yolo V5 | Gaussian noise | 96.43% | 94.72% | 93.84% | 94.24%
Yolo V5 | Impulse value noise | 95.72% | 94.28% | 93.63% | 95.24%
Yolo V5 | Poisson noise | 92.52% | 96.46% | 91.30% | 92.88%
Yolo V5 | Speckle noise | 91.69% | 92.84% | 93.61% | 93.62%
Yolo V3 | Gaussian noise | 95.24% | 92.43% | 94.20% | 93.53%
Yolo V3 | Impulse value noise | 89.63% | 90.75% | 91.64% | 94.45%
Yolo V3 | Poisson noise | 92.48% | 91.30% | 87.53% | 92.12%
Yolo V3 | Speckle noise | 91.59% | 89.03% | 91.41% | 92.60%
RCNN | Gaussian noise | 86.43% | 89.35% | 87.53% | 90.70%
RCNN | Impulse value noise | 89.53% | 92.54% | 90.51% | 91.73%
RCNN | Poisson noise | 93.15% | 94.02% | 92.09% | 92.56%
RCNN | Speckle noise | 91.00% | 92.42% | 91.42% | 93.51%
CNN | Gaussian noise | 87.35% | 86.25% | 84.62% | 85.73%
CNN | Impulse value noise | 88.53% | 87.53% | 86.83% | 87.62%
CNN | Poisson noise | 95.42% | 96.42% | 96.00% | 96.19%
CNN | Speckle noise | 86.24% | 87.92% | 87.03% | 85.72%
Table 8. Noise classification time and space complexity of PSA for different sizes of image.
Image Size | Time (Seconds) | Required Memory for Processing (KB)
50 KB | 0.0001 | 12
100 KB | 0.0001 | 16
200 KB | 0.000294 | 25
500 KB | 0.000784 | 39
750 KB | 0.001862 | 48
1 MB | 0.1274 | 74
5 MB | 1.0388 | 91
10 MB | 2.764 | 128
15 MB | 2.9543 | 381
Table 9. Time required using PSA on different configurations.
CPU/GPU | Processor | RAM | Required Time in Seconds
CPU | i3 | 8 GB | 0.393
CPU | i5 | 8 GB | 0.292
CPU | i7 | 8 GB | 0.286
GPU | Nvidia K80 | 24 GB | 0.003
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
