Article

Meat Texture Image Classification Using the Haar Wavelet Approach and a Gray-Level Co-Occurrence Matrix

by Kiswanto Kiswanto 1,2,*, Hadiyanto Hadiyanto 3 and Eko Sediyono 4
1 Information Systems Doctoral Program, Graduate School, Diponegoro University, Semarang 50241, Indonesia
2 Department of Information Systems, Atma Luhur Institute of Science and Business, Pangkalpinang 33172, Indonesia
3 Department of Chemical Engineering, Faculty of Chemical Engineering, Diponegoro University, Semarang 50275, Indonesia
4 Department of Computer Science, Faculty of Information and Technology, Satya Wacana Christian University, Salatiga 50711, Indonesia
* Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2024, 7(3), 49; https://doi.org/10.3390/asi7030049
Submission received: 29 March 2024 / Revised: 29 May 2024 / Accepted: 30 May 2024 / Published: 12 June 2024

Abstract: This research aims to examine the use of image processing and texture analysis to find a more reliable and efficient solution for identifying and classifying types of meat based on their texture. The method involves feature extraction with the Haar wavelet and the gray-level co-occurrence matrix (GLCM) (at angles of 0°, 45°, 90°, and 135°), supported by contrast, correlation, energy, homogeneity, and entropy metrics. The test results showed that the k-NN algorithm excelled at identifying the texture of fresh (99%), frozen (99%), and rotten (96%) meat, with high accuracy. The GLCM provided good results, especially on texture images of fresh (183.21) and rotten meat (115.79). The Haar wavelet results were lower than those of the k-NN algorithm and GLCM, but this method was still useful for identifying texture images of fresh meat (89.96). The development of this research is expected to significantly increase accuracy and efficiency in identifying and classifying types of meat based on texture, reducing human error and aiding in prompt evaluation.

1. Introduction

Meat is an excellent source of animal protein. As Indonesia’s population increases, there is a higher demand for meat. Unfortunately, many traders deliberately mix pork and beef to gain higher income [1]. There has been a decline in beef sales in Indonesia ahead of Eid al-Fitr, due to rising prices. In anticipation, some traders combine pork and beef, as pork is cheaper, with a color and texture similar to beef. It is difficult for most people to visually differentiate between beef and pork [2]. The National Standardization Agency of Indonesia (2018) states that the color, texture, tenderness, and marbling of fat and meat are the main factors for assessing the quality of beef [3]. Nutrition is crucial in Indonesia for increasing human resources, and beef supplies animal protein to consumers. Healthy meat is bright red, shiny but not pale, elastic but not sticky, and possesses a distinctive aroma [4]. Meat quality is determined by various factors, such as size, texture, color, and aroma. Currently, meat quality is determined by its color and shape. However, this method of identifying meat quality has shortcomings because of the subjectivity and consequent unreliability of human judgment [5].
Since food authenticity is a trending topic, food fraud is a global problem that has attracted attention in recent decades. Beef, buffalo, chicken, duck, goat, sheep, and pork are examples of popular types of meat with high economic value, as they are sources of nutrition and play important roles in culture, religion, and the economy. Both raw and processed meat are often known to be contaminated [6]. People have many concerns about meat, such as religious, legal, economic, and medical issues. To protect customers from fraud and bad marketing practices, several analytical approaches have been proposed to identify meat species in separate or combined samples [7]. Species substitution in meat products is a common issue documented globally. Using low-value meat resources to produce high-value meat products is considered food fraud and is usually aimed at financial gain. This phenomenon has raised financial, health, and religious issues [8].
Meat adulteration, especially for commercial gain, has become widespread and has led to significant threats to public health, besides giving rise to moral and religious offenses. A technology for fast, precise, and reliable detection is essential to efficiently track meat adulteration. Considering the importance and rapid development of meat adulteration detection technology, recent developments in this field need to be thoroughly analyzed and recommendations made for future progress [9]. As beef, buffalo meat, and pork are sources of contamination, deep analyses can help maintain social, religious, economic, and health standards [10]. Selecting texture images of adequate quality is one of the main problems in texture classification [11]. It remains to be ascertained whether the textural characteristics of raw meat are reliable indicators of beef tenderness [12]. Wavelet texture and color characteristics have been used to differentiate the value of turkey and pork ham [13]. Researchers have used practical procedures to evaluate various texture analysis techniques and regression methods for predicting physico-chemical and sensory attributes of different cuts of Iberian pork. Statistical methods like Haralick descriptors, local binary patterns, and fractal features, and frequency descriptors like Gabor or wavelet features, are examples of texture descriptors [14]. Experiments to evaluate pork quality show that the two-dimensional Gabor wavelet transform is more effective for extracting the textural properties of pork [15].
One study suggests the use of Dynamic Weighted Temporal Long Short-Term Memory (DWTLSTM) with a discrete wavelet transform to reduce noise contamination in e-nose signals during beef quality monitoring. The proposed approach performs well, with an average accuracy of 94.83% and an average F-measure of 85.05% in classifying beef quality [16]. Another technique is based on clustering, with a Haar filter used to carefully extract features in the wavelet transform domain. Decomposition is performed at the first, second, and third levels in the wavelet domain, and the resulting coefficients at each level are examined to estimate the freshness of fish samples [17]. Sheep muscle classification has been achieved using a wavelet analysis of near-infrared (NIR) hyperspectral imaging data, with wavelet transformation applied to determine the optimal wavelet features for classifying sheep muscles [18].
To obtain 64-dimensional characteristics of excellent quality, eight image texture features are extracted at each of the final eight wavelengths, using a two-dimensional wavelet transformation [15]. Half-pig carcasses can be accurately depicted using spectral wavelet graphics called spectral weight. Further, spectral weight is applied as a prediction model to weigh different pieces of pork and determine the tissue composition [19]. The use of the DWTLSTM transformation is recommended for preventing e-nose signal contamination when evaluating beef quality [20]. To estimate TVB-N values in cooked beef during storage, one study examines the integration of spectral and image data from visible and near-infrared hyperspectral imaging. Nine ideal wavelengths are selected, using the Sequential Projection Algorithm (SPA) and Variables Without Elimination (VWE). The discrete wavelet transform (DWT) extracts 36 single values as texture features [21].
Another study uses image preprocessing, dataset training, and classification to assess the quality of tuna meat classes according to color space. The three main steps in image preprocessing are cropping, converting RGB images to HSV, and extracting features using wavelets. The findings show that Symlet wavelets produce higher correlation coefficients in feature extraction than Haar wavelets; Symlet wavelets with k-NN classify 65 test image datasets with an accuracy of 81.8%, higher than the 80.3% achieved by Haar wavelets with k-NN [22]. The amount of intramuscular fat (IMFAT) in beef ribeye muscle has been predicted using a multiresolution texture analysis technique based on wavelet transformation. Several features are calculated from 2-D wavelet decompositions of ultrasonic images, using Haar wavelets as the basis function to create a fast wavelet transform [23]. A new technique for local thresholding using weighted detail coefficients in wavelet synthesis has been adopted to generate multi-scale thresholding functions for image segmentation. The fast wavelet technique can implement such a local wavelet-based thresholding method, which adapts to the local environment and size. Researchers used X-ray imaging with this method to detect physical contamination in chicken inspections [24].
These features are extracted carefully, using a Haar filter in the wavelet transform domain. Decomposition at the first, second, and third levels in the wavelet domain was performed, and the resulting coefficients at each level were examined to estimate the freshness of the fish samples [25]. The amount and percentage of IMFAT are the main elements determining beef quality; texture analysis has been used on B-mode ultrasound images of the ribeye muscle of live cows to predict IMFAT, with GLCM applied for multiresolution texture analysis and second-order statistics using the wavelet transform (WT) [26]. Other research distinguishes between beef and pork using digital photos: for texture analysis, color features use a GLCM to calculate the first-order statistical average of color values [27]. There is also a technique to extract the texture of beef, a vital component in beef classification, with texture feature analysis carried out using the frequency-domain discrete wavelet transform (DWT) and GLCM statistics [28].
One study examines the freshness level of chicken meat by analyzing its color and texture, with GLCM as the texture property [29]. Another combines and applies contrast, energy, and homogeneity properties to CNN learning to investigate the capacity of GLCM to identify patterns with significant variance, robustness to geometric distortion, and direct transformation [30]. Other research aims to accurately categorize and differentiate large groups of cows, using GLCM to extract image features [31]. In another study, a Support Vector Machine classifier was used to create a system that identifies the quality of beef based on its color and texture, with statistical techniques and the GLCM method used for feature extraction [32]. Further research aims to determine the grade of beef for human consumption, using the k-NN algorithm to categorize meat photos via a co-occurrence matrix; based on color and texture, this approach can differentiate between various types of meat [33]. To extract texture properties (contrast, correlation, energy, and homogeneity), the GLCM is applied to the first principal-component image of a hyperspectral image, which explains 98.13% of the variance [34].
Other research evaluates the computational precision of classifying tilapia fish using the k-NN algorithm and digital image processing techniques, where customers can use a smartphone visualization to ascertain whether the fish is suitable for consumption [35]. Another study determines the freshness of fish using NB and k-NN classification techniques based on fisheye images; the findings indicate that the k-NN approach is more effective than NB, with mean values of 0.97 for accuracy, precision, recall, specificity, and AUC [36]. A goat meat image classification model has been created using several practical modeling algorithms, such as k-NN, to ensure the authenticity of fresh and cooked food, such as kebab made from goat meat [37].
The use of pre-processed meat marbling image segments with contrast enhancement and illumination normalization has been recommended. Intramuscular fat pixels are described, and attribute scores are determined by learning definitions of the required meat standards; the learning method is an instance-based system that scores segmentation results using the k-NN algorithm [38]. One study not only identifies the type of meat but also, for the first time, differentiates body parts as well as types of meat; the results of the proposed approach are compared with other machine learning algorithms used in previous investigations, including k-NN and deep learning [39]. Pork sausages have been classified using linear and nonlinear techniques, including k-NN [40]. Another paper presents an innovative and reliable biometrics-based method for identifying cows by their tails, where two additional classifiers, fuzzy k-nearest neighbor (Fk-NN) and k-NN, are used to validate the findings produced by the classifiers [41]. By collecting and evaluating olfactory data, electronic noses, a non-destructive detection method, can determine the freshness of meat. Unlike traditional machine learning techniques such as k-NN, pre-trained AlexNet, GoogLeNet, and ResNet models are re-trained in their last three layers [42].
One study also tests how well the MicroNIR device (VIAVI, Santa Rosa, CA, USA), using distance cluster analysis with k-NN and multivariate data analysis, can coordinate the dry sausage fermentation process [43]. With 97.4% accuracy, a k-NN model learns and categorizes ApSnet-segmented image features [44]. The first-derivative spectrum has been used in k-NN classification [45]. A new cow recognition system uses hybrid texture and muzzle pattern features to identify and categorize cow breeds, with k-NN and other classification models used in cow classification [46]. The k-NN technique has also been used to construct such models [47].
Figure 1 shows the classification stages of meat texture images, which the researchers consider quite comprehensive. The following is an explanation of each stage:
  • Data Collection is the first stage, representing collecting texture images of beef, buffalo meat, goat meat, horse meat, and pork, using a digital camera.
  • Sampling is the stage where the researcher selects samples from the images collected to be used as training data.
  • Class Division is the stage where the images are grouped into classes (fresh, frozen, and rotten) based on the texture characteristics of the meat.
  • Preprocessing is the stage before feature extraction, where the images need to be processed to remove noise or make other adjustments to produce better data.
  • Feature Extraction is the stage that involves extracting Haar wavelet and GLCM features as input for the classification process.
  • Dataset is the stage where the processed data and the extracted features are grouped (fresh, frozen, and rotten) into a dataset ready for the classification process.
  • Classification is the stage where a classification model, such as k-NN, is used to classify the images into predetermined classes.
  • Calculate Similarity Distance is a stage carried out in some cases after classification; it may be necessary to determine how similar a classified image is to the images in the same class.
  • Validation is the stage that involves evaluating the performance of the classification model to ensure that the model can classify images correctly and reliably.
  • Confusion Matrix Results is the stage where the confusion matrix is used to display classification performance in more detail, showing how good the model is at classifying images into the correct classes.

2. Materials and Methods

2.1. Data Collection

This research used five types of meat: beef, buffalo meat, goat meat, horse meat, and pork. The meats were cut into 15 × 15 cm pieces by slicing lengthwise or against the grain. Each type of meat had three categories: fresh, frozen, and rotten. The researchers collected 50 pieces for each category of each type of meat (fresh, frozen, and rotten), amounting to 150 sample pieces per type; multiplied by five types of meat, there were 750 different meat sample pieces in total. Texture images of the various types of meat were captured using a digital camera. Perpendicular image acquisition was carried out by adjusting the distance at which the image was taken; the image-taking distances were 10, 20, 30, and 40 cm. Room lighting was provided by light-emitting diode (LED) lamps with wattages of 3, 5, 7, 9, 11, and 13 W, respectively.

2.2. Preprocessing

The texture image characteristics of the meat types were obtained using an image feature extraction approach, with several preprocessing steps applied before feature extraction. GLCM was one of the features used to extract meat texture information. The purpose of image cropping was to remove labels from images with a texture similar to flesh.
Extracting image properties as attributes for the classification process was the final step in image processing. The Haar wavelet feature and GLCM are two examples of these features. The histogram feature consisted of the following five measures: entropy, energy, contrast, homogeneity, and correlation, which were also the five GLCM qualities used in this research. As many as 175 images showed various types of meat textures, with separate images taken of each piece of meat. Each type of meat was sampled fifty times, with 600 photos in the training set and 150 in the test set. The types of meat used were beef, buffalo, goat, horse, and pork, and images were taken at a resolution of 500 × 500 pixels.

2.3. k-Nearest Neighbor (k-NN)

Previous results showed that k-NN had the highest accuracy among the algorithms compared for pattern categorization [48]. The k-NN method was used for data processing [49]. The distance between two points in feature space can be calculated using various distance metrics, such as the Euclidean, Manhattan, or Minkowski distance, of which the first is the most commonly used. The distance between two points $A$ and $B$ in $n$ dimensions is
$$d(A, B) = \sqrt{\sum_{i=1}^{n} (A_i - B_i)^2}$$
where $A_i$ and $B_i$ are the $i$-th components of the two vectors being compared and $n$ is the number of dimensions (features) of each vector.
After finding the $k$ nearest neighbors of the point to be classified, the majority class among those $k$ neighbors is taken as the class prediction. If $k = 1$, the predicted class is simply the class of the single nearest neighbor; if $k > 1$, the prediction is based on the majority class among the $k$ nearest neighbors.
The k-NN computation measures the distance between each data point in the training dataset and the data point to be predicted. The following are the main steps in the k-NN calculation (see the sketch after this list):
  • Select the distance metric used to calculate the proximity between data points; here, the Euclidean distance.
  • Using the Euclidean distance metric, compute the distance from each testing data point to every data point in the training dataset.
  • After calculating the distances, find the k nearest neighbors of the test data point based on the smallest distance values: sort the calculated distances and select the k lowest.
  • Finally, predict the class of the test data point; for a classification task, the prediction is the majority class among the k nearest neighbors.
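The steps above can be condensed into a short sketch. This is a minimal illustration rather than the authors' implementation: the feature vectors and labels below are hypothetical placeholders, and only the Euclidean distance and majority vote described above are assumed.

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, k=3):
    """Predict the class of x_test by majority vote among its
    k nearest training points under the Euclidean distance."""
    # Steps 1-2: Euclidean distance from the test point to every training point
    distances = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    # Step 3: sort the distances and keep the indices of the k smallest
    nearest = np.argsort(distances)[:k]
    # Step 4: majority vote among the labels of the k nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with illustrative 5-dimensional feature vectors
# (standing in for the GLCM/Haar features described in Section 2.2)
X_train = np.array([[0.10, 0.80, 0.50, 0.90, 1.20],
                    [0.90, 0.20, 0.70, 0.30, 2.10],
                    [0.20, 0.70, 0.60, 0.80, 1.10]])
y_train = np.array(["fresh", "rotten", "fresh"])
print(knn_predict(X_train, y_train, np.array([0.15, 0.75, 0.55, 0.85, 1.15])))
```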
Figure 2 below shows the stages of the classification model:

2.4. Wavelet Haar

DWT, which is based on Haar wavelet basis functions, is used for dimensionality reduction of the data. DWT also improves the use of algorithm coefficients and prediction time consumption (here, the wavelet basis function is Haar and the number of transformation layers is 4) [50]. The Haar wavelet is a type of wavelet function used in wavelet analysis. A wavelet function is a mathematical tool for analyzing and representing texture images of various types of meat, or other data, simultaneously in the time and frequency domains. The Hungarian mathematician Alfréd Haar first presented this function in 1910.
The Haar wavelet is one of the easiest wavelets to understand and use due to its discrete and straightforward character. It is a local segment function often used in data compression, image processing, and image analysis because of its ability to quickly identify variations in the texture images of various types of meat. For the Haar wavelet transform, the image is separated into smaller intervals, and the Haar wavelet is then applied to each interval. This procedure enables the imaging of different signal components at various resolutions. The main advantage of Haar wavelets is their ability to convey high-frequency information effectively, so they are often used in applications requiring rapid identification of edges or signal changes. The Haar wavelet transform can be described by mathematical formulas for the approximation and detail coefficients. For a discrete signal $x[n]$ of length $N$, the approximation coefficient $A[k]$ and detail coefficient $D[k]$ at level $k$ are calculated as follows:
A. Approximation Coefficient:
$$A[k] = \frac{x[2k] + x[2k+1]}{2}$$
B. Detail Coefficient:
$$D[k] = \frac{x[2k] - x[2k+1]}{2}$$
This process can be repeated at each transformation level by replacing $x[n]$ with $A[k]$ for the next level. The following are the basic steps to perform the Haar wavelet transform:
  • The texture image of a type of meat is divided into smaller intervals. This can be achieved by convolution operations on adjacent intervals or by averaging two consecutive values (a small numeric sketch follows this list).
  • After separating the intervals, the approximation coefficient (A) and detail coefficient (D) are calculated. The approximation coefficient represents the low-frequency components, i.e., the coarse structure, of the texture image of a type of meat. The detail coefficient represents the high-frequency elements, i.e., information that is finer or changes more quickly.
  • Coefficient values are normalized to suit the needs of a particular use case; adjusting scales or assigning weights can be part of this process.
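As a small numeric sketch of the two formulas above (assuming the averaging convention the text describes, i.e., dividing by 2 rather than the orthonormal Haar factor of the square root of 2):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform as described in the text:
    pairwise averages (approximation) and halved differences (detail)."""
    x = np.asarray(x, dtype=float)
    A = (x[0::2] + x[1::2]) / 2.0  # approximation coefficients A[k]
    D = (x[0::2] - x[1::2]) / 2.0  # detail coefficients D[k]
    return A, D

A, D = haar_step([9, 7, 3, 5])
print(A)  # [8. 4.]  -> pairwise averages
print(D)  # [1. -1.] -> pairwise half-differences
```

Repeating the step on A itself gives the next transformation level, as noted above.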
Table 1 below shows pseudocode for the approximation coefficient (cA) and detail coefficients (cH, cV, and cD) calculated in the Haar wavelet transform:
In the pseudocode above, the researchers started by taking a grayscale image from the input image (if the image is in color). Thereafter, they divided the image into 2 × 2 sub-blocks. Following that, for each sub-block, they calculated the average pixel as the approximation component as well as the horizontal, vertical, and diagonal details as the detail coefficients component. After calculating all the components, the researchers combined these values to form cA, cH, cV, and cD, which, respectively, represented the approximation coefficient and detail coefficients components of the Haar wavelet transform.
In this code, the Haar coefficient is a function to calculate the approximation coefficients and details of the Haar wavelet transform for the input image. This function uses pywt.dwt2 from PyWavelets to perform the Haar wavelet transform. The transformation results were then separated into approximation coefficients (cA) and detail coefficients (cH, cV, and cD), which represent coarse and detailed information from the image, respectively, for the horizontal, vertical, and diagonal dimensions.
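For reference, the pywt.dwt2 call mentioned above can be exercised as follows; the input array here is a random placeholder standing in for one of the grayscale meat texture images:

```python
import numpy as np
import pywt

# Placeholder grayscale "image"; in practice this would be a
# 500 x 500 grayscale meat texture image (Section 2.2).
image = np.random.rand(8, 8)

# Single-level 2-D Haar transform: cA is the coarse approximation,
# cH/cV/cD are the horizontal, vertical, and diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)  # each (4, 4)
```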
Figure 3 shows the Haar wavelet approximation histogram, a visual representation of the energy distribution of an image or data after the Haar wavelet transformation process. The Haar wavelet is a type of wavelet transform used in image analysis and image processing. Researchers can read the Haar wavelet approximation histogram through the following steps:
  • Wavelet Haar involves the decomposition of the original image into basic parts called approximation (coarse features) and detail (fine features).
  • The original image is divided into two equal parts, after which the difference between the two parts is calculated to obtain the detail, while the average of the two parts results in an approximation.
  • A histogram is a visual representation of the distribution of values in the data. The approximation histogram of Haar wavelets shows the frequency of appearance of approximation values after the Haar wavelet transform.
  • Peaks in a histogram appear due to a distinctive structure or pattern in the data. In the context of Haar wavelets, these peaks may appear due to sudden changes in approximation values, which may be indicative of significant changes in the original image.
  • The peaks in a Haar wavelet approximation histogram can provide information about important features in the data, such as sudden changes, the presence or absence of certain patterns, and the level of roughness or smoothness in the image.
Why do many peaks appear? Some causal factors include the following:
(a) If the original image has many sudden changes or small details, there will be many peaks in the approximation histogram of the Haar wavelet.
(b) The resolution of the Haar wavelet transform also influences the number of peaks. The greater the resolution, the finer the details that can be detected, resulting in fewer peaks in the histogram.
(c) Using different scales in the Haar wavelet transform can also affect the number of peaks in the histogram. The more varied the scales used, the more details are captured, resulting in more peaks.
By understanding the Haar wavelet approximation histogram and analyzing its peaks, researchers can gain valuable insights into the structure and characteristics of the image or data they are reviewing.

2.5. Gray Level Co-Occurrence Matrix

Selecting GLCM features is important to improve sample differentiation [51]. GLCM is a texture analysis tool used in digital image processing. It identifies the spatial relationship between the pixel intensities of a grayscale image to extract information about its pattern and texture. GLCM measures the frequency at which pairs of pixels with given intensities appear together at a certain distance and direction. By calculating this matrix and analyzing meat texture images, many statistics can be generated, such as energy, contrast, correlation, homogeneity, and entropy. For GLCM elements $C(i, j)$ with horizontal, vertical, and diagonal directions at a distance of 1, the general formula depends on the definition of the GLCM and the specific directions used. Assuming grayscale images with intensity levels between $0$ and $L-1$, where $L$ is the number of intensity levels, the general formulas for calculating the GLCM elements $C(i, j)$ at a distance of 1 are
$$C(i, j) = \sum_{m=1}^{M-1} \sum_{n=1}^{N} \delta\big(I(m, n) = i \ \text{and} \ I(m+1, n) = j\big)$$
$$C(i, j) = \sum_{m=1}^{M-1} \sum_{n=1}^{N-1} \delta\big(I(m, n) = i \ \text{and} \ I(m+1, n+1) = j\big)$$
where
  • $I(m, n)$ is the intensity of the image pixel at coordinates $(m, n)$.
  • $\delta(\cdot)$ returns 1 if the statement in parentheses is true, and 0 otherwise.
  • M is the number of image rows.
  • N is the number of image columns.
Basic GLCM calculations for horizontal, vertical, and diagonal directions at a distance of 1 from the grayscale image are as follows:
$$\begin{pmatrix} 1 & 1 & 2 & 3 \\ 2 & 2 & 3 & 4 \\ 1 & 1 & 2 & 2 \\ 3 & 4 & 4 & 4 \end{pmatrix}$$
The steps of calculation are as follows:
  • Select horizontal, vertical, and diagonal directions at a distance of 1.
  • Count the pairs of pixels that occur together:
    • Scan the image to identify pairs of pixels with matching intensities.
    • Pairs of pixels that occur together include horizontal (1, 1), (1, 2), (2, 3), (3, 4), (2, 2), (3, 3), (4, 4), (1, 1), (1, 2), (2, 3), (3, 4), (2, 2), (3, 3), (4, 4), vertical (1, 1), (2, 2), (1, 1), (3, 4), (2, 2), (1, 1), (4, 4), (2, 2), (2, 2), (3, 3), (4, 4), (2, 2), (2, 2), (4, 4) and diagonals (1, 1), (1, 1), (2, 2), (3, 3), (2, 2), (3, 3), (4, 4), (1, 1), (1, 1), (2, 2), (3, 3), (2, 2), (3, 3), (4, 4).
  • Create the GLCM:
    • Count the occurrences of pixel pairs and insert them into the GLCM.

Horizontal:
$$\begin{pmatrix} 2 & 3 & 2 & 1 \\ 1 & 2 & 2 & 3 \\ 0 & 1 & 2 & 2 \\ 1 & 0 & 0 & 3 \end{pmatrix}$$

Vertical:
$$\begin{pmatrix} 3 & 1 & 3 & 0 \\ 0 & 4 & 0 & 2 \\ 2 & 1 & 2 & 1 \\ 0 & 2 & 0 & 2 \end{pmatrix}$$

Diagonal:
$$\begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}$$
  • GLCM Normalization:
    Normalize each matrix to obtain a probability distribution.

Horizontal:
$$\begin{pmatrix} 0.1 & 0.15 & 0.1 & 0.05 \\ 0.05 & 0.1 & 0.1 & 0.15 \\ 0 & 0.05 & 0.1 & 0.1 \\ 0.05 & 0 & 0 & 0.15 \end{pmatrix}$$

Vertical:
$$\begin{pmatrix} 0.1875 & 0.0625 & 0.1875 & 0 \\ 0 & 0.25 & 0 & 0.125 \\ 0.125 & 0.0625 & 0.125 & 0.0625 \\ 0 & 0.125 & 0 & 0.125 \end{pmatrix}$$

Diagonal:
$$\begin{pmatrix} 0.375 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.375 & 0 \\ 0 & 0 & 0 & 0.375 \end{pmatrix}$$
Based on the probability distribution matrices and the illustration presented in Figure 4, it is important to prepare fresh, frozen, and rotten meat images in grayscale format before calculating the GLCM. The horizontal GLCM counts the co-occurrence of pairs of gray values horizontally, while the vertical and diagonal GLCMs count gray-value pairs vertically and diagonally. Thereafter, researchers can create a GLCM histogram for each direction, representing the frequency distribution of pairs of gray values. By performing these steps, fresh, frozen, and rotten meat images can be analyzed using GLCM histograms.
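A small sketch of this counting-and-normalizing procedure, applied to the 4 × 4 example image above, might look as follows. The counts here follow the co-occurrence formula directly with a one-sided offset, so the resulting numbers can differ from the worked matrices in the text depending on the counting convention used (e.g., whether symmetric pairs are included):

```python
import numpy as np

# The 4 x 4 example image from the text (gray levels 1..4)
img = np.array([[1, 1, 2, 3],
                [2, 2, 3, 4],
                [1, 1, 2, 2],
                [3, 4, 4, 4]])

def glcm(img, dx, dy, levels=4):
    """Count co-occurring gray-level pairs at offset (dx, dy),
    then normalize the counts into a probability distribution."""
    C = np.zeros((levels, levels))
    rows, cols = img.shape
    for m in range(rows):
        for n in range(cols):
            mm, nn = m + dy, n + dx
            if 0 <= mm < rows and 0 <= nn < cols:
                C[img[m, n] - 1, img[mm, nn] - 1] += 1
    return C / C.sum()

horizontal = glcm(img, dx=1, dy=0)  # neighbor to the right
vertical = glcm(img, dx=0, dy=1)    # neighbor below
diagonal = glcm(img, dx=1, dy=1)    # neighbor down-right
print(horizontal.round(4))
```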
An explanation of how the GLCM describes the relationship between pixels in meat texture images can be found in the researchers' analysis of the GLCM computational stages. For example, let $I$ be a grayscale image of size $M \times N$, with specified distance $d$ and direction $\theta$. The GLCM $G$ of size $K \times K$ (where $K$ is the number of gray levels in the image) is calculated by matching each pair of pixels at the specified distance and direction. The general equation for the GLCM elements is
$$G_{ij}(d, \theta) = \sum_{m=1}^{M} \sum_{n=1}^{N} \delta\big(I_{mn}, i\big) \times \delta\big(I_{m+\delta_x,\, n+\delta_y}, j\big)$$
where
  • $G_{ij}(d, \theta)$ is the GLCM element for the intensity pair $i$ and $j$ at a distance of $d$ and a direction of $\theta$.
  • $\delta$ is the Kronecker delta function, with a value of 1 if its two arguments are equal, and 0 if they differ.
  • $I_{mn}$ is the pixel intensity at the coordinate $(m, n)$ in the image.
  • $\delta_x$ and $\delta_y$ are the shifts in the $x$ and $y$ directions that realize the specified distance and direction.
This process is repeated for each pair of pixels at the specified distance and direction, resulting in a complete GLCM. The GLCM can then be used to extract various texture features, which are useful for further analysis. The different stages of GLCM computation are as follows:
1. Direction and Distance Selection. Determining the direction and distance is important since it affects how the relationship between pixels is measured and, therefore, which texture information is extracted from the image.
2. GLCM Calculation. The GLCM is calculated by counting the occurrences of pairs of pixel intensities at the given distance and direction. The final product is a matrix that displays how frequently certain pairs of pixels appear together.
3. Matrix Normalization. Matrix normalization is applied to determine the probability distribution of the occurrence of pixel pairs, i.e., the probability of each pair occurring over the entire image.
4. Feature Extraction from the Normalized Matrix. After normalization, various texture properties, including energy, contrast, correlation, homogeneity, and entropy, can be extracted from the matrix. Each element offers different details of the texture characteristics of the image.
5. Texture Statistics and Characteristics. Energy measures the extent to which pixel intensities concentrate around certain values. Contrast is the difference in intensity between neighboring pixels. Correlation reflects the degree of linear dependence between pixel intensities at the given distance and direction. Homogeneity is the degree of uniformity of the pixel intensity distribution throughout the image. Entropy measures the degree of uncertainty in the pixel intensity distribution.
Following this procedure will produce a set of attributes to identify or describe a meat texture image more accurately. This method has several benefits in pattern recognition and image analysis, especially in image processing for the identification and classification of meat types.
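In practice, these five statistics need not be computed by hand. The sketch below uses scikit-image's graycomatrix and graycoprops, which is an assumption made for illustration (the paper does not state which implementation was used); entropy is computed manually since graycoprops does not provide it:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder 8-bit grayscale patch standing in for a meat texture image
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# GLCMs at distance 1 and the four angles used in this study
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
glcm = graycomatrix(img, distances=[1], angles=angles,
                    levels=256, symmetric=True, normed=True)

# Each property is averaged over the four angles here for brevity;
# the study may treat each angle separately.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'correlation', 'energy', 'homogeneity')}

# Entropy: -sum p * log2(p) over the nonzero entries of the normalized GLCM
p = glcm[glcm > 0]
features['entropy'] = float(-np.sum(p * np.log2(p)))
print(features)
```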
Figure 5 presents the classification of meat texture images using the k-NN, Haar wavelet, and GLCM approaches, identifying the stages and procedures used to classify texture images of various types of meat. First, color images of the different types of meat are converted to grayscale. Second, feature extraction begins with image decomposition using Haar wavelets, applying smoothing and reduction along rows and columns; this yields four sub-images: LL (approximation) and LH, HL, and HH (detail components). The mean, standard deviation, and trend of each detail component are calculated statistically on the decomposed image, and GLCM normalization produces a normalized GLCM for each decomposed image. Third, GLCM analysis extracts texture features, including energy, contrast, correlation, homogeneity, and entropy. Fourth, the classification procedure sorts texture images of the various types of meat into fresh, frozen, and rotten categories; class or label predictions are made using the Euclidean distance and evaluated with a confusion matrix. Fifth, performance is assessed using evaluation metrics such as accuracy, i.e., how effectively the model predicts the correct class on the test data, where accuracy is how close a prediction is to the actual or expected value. Sixth, the classification results are grouped by image class: fresh, frozen, and rotten are the three categories used to divide the findings across the texture image classes of the different meat varieties. Using this method, the model can learn and identify the patterns or characteristics that differentiate the textures of various types of meat, and meat samples can then be categorized according to their condition.
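Putting the stages of Figure 5 together, a simplified end-to-end sketch might look as follows. This is an illustration under stated assumptions rather than the authors' code: only the LL (approximation) sub-band is used for GLCM features, scikit-learn's k-NN classifier stands in for the classification stage, and the input images are random placeholders:

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

def extract_features(gray_img):
    """Haar-decompose the image, then take GLCM statistics of the
    LL sub-band; a simplified stand-in for the fuller per-sub-band
    statistics described above."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img, 'haar')
    ll = np.uint8(255 * (cA - cA.min()) / (np.ptp(cA) + 1e-9))  # rescale to 8-bit
    glcm = graycomatrix(ll, [1], [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ('contrast', 'correlation', 'energy', 'homogeneity')]
    q = glcm[glcm > 0]
    feats.append(float(-np.sum(q * np.log2(q))))  # entropy
    return feats

# Random placeholders; the real inputs are the grayscale meat photos.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.integers(0, 256, (64, 64))) for _ in range(30)])
y = rng.choice(["fresh", "frozen", "rotten"], size=30)

clf = KNeighborsClassifier(n_neighbors=3).fit(X[:24], y[:24])
pred = clf.predict(X[24:])
print(accuracy_score(y[24:], pred))
print(confusion_matrix(y[24:], pred))
```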

2.6. System Implementation and Testing

1. Number of samples and classification. As many as 175 images showed various types of meat textures. Each type of meat had three categories: fresh, frozen, and rotten. There were 50 texture images for each class of meat and 150 images for each type of meat.
2. Dataset division. A total of 600 images in the training set and 150 images in the testing set formed the two main parts of the dataset. Dividing the dataset this way is important so that the model can be trained and its performance tested objectively.
3. Types of meat tested. Five different types of meat were tested: pork, goat meat, horse meat, buffalo meat, and beef. Each type of meat has three texture levels: fresh, frozen, and rotten.
4. Use of a digital camera. A digital camera, at the same light intensity and distance, took the images that formed the meat texture dataset. This is important to ensure consistent image capture.
5. Image size. The image size is 500 × 500 pixels. This size offers sufficient resolution to allow detailed texture investigation.
6. Example of image acquisition results. Examples of texture image capture results for each class of meat are shown in Table 2. These are important for understanding the range of images tested and used in the categorization process.
7. Distribution of GLCM values. The distribution of GLCM values based on the image histogram of meat types is shown in Table 2. It enables an understanding of the texture characteristics that can be captured and applied to the classification process.
The implementation and testing of the meat type classification system owes its foundation to the experimental setup, which includes a well-divided dataset, consistent use of cameras, and an in-depth examination of the distribution of GLCM values. All these procedures are essential to guarantee the classification model’s dependability and strong generalization on never-before-seen data (test data).

2.7. Classification

The k-NN categorization results are generated based on a combination of characteristics. Table 2 shows images of the results of meat texture experiments in meat class (a) fresh meat, class (b) frozen meat, and class (c) rotten meat, as well as the histogram distribution.
As seen in Table 2, this study collected texture image samples from five types of meat (beef, buffalo, goat, horse, and pork), each of which was available in three states: fresh, frozen, or rotten. The table analysis is explained below. Starting with beef, 150 texture image samples were available, 50 samples each representing the fresh, frozen, and rotten categories; a total of 150 beef samples were used in the analysis. Second, 150 buffalo meat texture images were used, of which 50 samples each were fresh, frozen, and rotten; during the analysis, 150 buffalo meat samples were collected. Goat meat was the third type of meat tested, and 150 samples of goat meat texture images were collected, 50 each of fresh, frozen, and rotten; during the analysis, 150 goat meat samples were examined. There were a total of 150 texture image samples for the fourth meat type, namely horse, 50 each for the fresh, frozen, and rotten categories; during the analysis, 150 horse meat samples were examined. The fifth meat type, pork, totaled 150 texture images, with 50 samples each in the fresh, frozen, and rotten categories; a total of 150 pork samples were used in the analysis. With 150 samples for each type of meat, the total number of samples for the entire study was 750. In the dataset division, there were 600 training data samples and 150 testing data samples, constituting the two parts of the dataset. In this way, researchers can better describe the different types and states of meat by categorizing the data. Dividing the dataset into training and testing subsets further supports a reliable assessment of the models or techniques applied to feature extraction and analysis of meat texture images.

2.8. Histogram

The histogram is calculated by counting the number of pixels in the image at each intensity level. The following is the basic formula for calculating a histogram, assuming the image is monochromatic or has one intensity channel.
Let
$n$ be the number of pixels in the image;
$L$ be the number of possible intensity levels (e.g., for a grayscale image, $L = 256$).
The histogram $H$ can then be calculated using the following formula:
$$H(i) = \frac{n_i}{n}$$
where
$H(i)$ is the frequency of occurrence of pixel intensity $i$;
$n_i$ is the number of pixels with intensity $i$;
$n$ is the total number of pixels in the image.
The result is a normalized value: the sum of all histogram entries is 1. Next, the following formula can be used to obtain the cumulative histogram:
$$CDF(i) = \sum_{j=0}^{i} H(j)$$
where
$CDF(i)$ is the cumulative distribution function for pixel intensity $i$;
$H(j)$ is the histogram value of pixel intensity $j$.
The cumulative distribution function represents the total proportion of pixels with an intensity less than or equal to a given value. Histogram computation is important in several domains, including image processing, color analysis, and image data processing.
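A minimal sketch of both formulas with NumPy (the image here is a random placeholder for a grayscale meat image):

```python
import numpy as np

# Placeholder 8-bit grayscale image
img = np.random.randint(0, 256, (100, 100))

# H(i) = n_i / n: normalized histogram over L = 256 intensity levels
counts = np.bincount(img.ravel(), minlength=256)
H = counts / counts.sum()

# CDF(i) = sum of H(j) for j <= i: cumulative proportion of pixels <= i
CDF = np.cumsum(H)

print(H.sum())   # 1.0 -> histogram entries sum to one
print(CDF[-1])   # 1.0 -> the CDF reaches one at the top intensity
```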

3. Results and Discussion

3.1. Experiment Results

The size of each category of meat class (fresh, frozen, and rotten) was confirmed by analyzing 750 meat texture sample images from five types of meat, viz., goat meat, horse meat, buffalo meat, pork, and beef. The pixels in each meat image were unique: because no scaling was applied when the meat samples were photographed, pixel size varies between images. In particular, for high-resolution imagery, scaling can result in visual distortion, reduced image quality, and increased computational effort [52,53]. Experiments were conducted to ensure that the different pixel sizes for each type and quality of meat were carefully studied, providing consistency and accuracy in the test results and avoiding distortion or conflict due to variations in image quality. Based on pixel intensity and pixel frequency, the sample size and histogram distribution in the class (a) dataset are, on average, larger and more dominant than in the class (b) and class (c) datasets. The frequency and intensity distribution of pixels in an image, together with the histogram distribution, determine the appearance of the image, and this is therefore the main focus of the analysis of the various algorithms.

3.2. Performance Evaluation of Classification Results

Confusion matrices are a valuable tool for assessing the performance of classification models in predicting correct classes. The confusion matrix consists of four main components: False Positive (FP), True Negative (TN), True Positive (TP), and False Negative (FN). The results of texture image classification for several meat varieties in Table 1 indicate the classification performance in the form of a confusion matrix. There are four options based on Table 1 of the confusion matrix: True Positive (TP) is a positive instance classified as positive; False Negative (FN) is a positive instance classified as negative; True Negative (TN) is a negative instance classified as negative; and False Positive (FP) is a negative instance classified as positive.
  • True Positive (TP): The number of positive observations correctly predicted by the model.
  • True Negative (TN): The number of negative observations correctly predicted by the model.
  • False Positive (FP): The number of negative observations incorrectly predicted as positive by the model (Type I error).
  • False Negative (FN): The number of positive observations incorrectly predicted as negative by the model (Type II error).

3.3. Validation

The probability that an image is considered to have good quality and indeed has good quality is called sensitivity. Equation (9) can be used to calculate sensitivity.
$$\text{Sensitivity} = \frac{TP}{TP + FN}$$
Specificity is the probability that an image considered to have low quality indeed has low quality. Equation (10) can be used to calculate specificity.
$$\text{Specificity} = \frac{TN}{TN + FP}$$
By dividing the number of correct classifications by the total number of classifications, as shown in Equation (11), the accuracy of classification can be described.
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
  • True Positive (TP) is the number of positive cases correctly identified as positive.
  • True Negative (TN) is the number of negative cases correctly identified as negative.
  • False Positive (FP) is the number of negative cases incorrectly identified as positive.
  • False Negative (FN) is the number of positive cases incorrectly identified as negative.
These three measures are computed in the short sketch below.
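The sketch computes Equations (9) to (11) from the four confusion-matrix counts; the counts are illustrative, not values from the paper's tables:

```python
def binary_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts, per Equations (9) to (11)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts for one class treated one-vs-rest
print(binary_metrics(tp=48, tn=94, fp=2, fn=4))
# (0.9231, 0.9792, 0.9595)
```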
Illustrations of the histogram distribution and experimental findings of meat texture in class (a) fresh meat, class (b) frozen meat, and class (c) rotten meat are shown in Table 4 below:
This k-NN research was carried out on the Google Colab platform, with a GPU (graphics processing unit) used in each experiment. Two approaches were adopted. In the first, two techniques, GLCM and the Haar wavelet algorithm, were used to extract features to identify meat images. In the second approach, the meat image was classified directly from the extracted meat features (feature extraction) using a k-NN model trained on various image domains. This set of features was put into the k-NN model to see whether the classification performance was sufficient to separate texture image class (a) from classes (b) and (c).

4. Feature Selection Results

Five types of meat images (beef, buffalo meat, goat meat, horse meat, and pork) were used to test the feature selection findings. Each type of meat image was divided into three classes: fresh, frozen, and rotten. A description of each meat is presented in Table 3 using GLCM metrics, including homogeneity, contrast, correlation, energy, and entropy. Importantly, each meat texture image has the same pixel size, namely 500 × 500 [55,56]. Using a uniform pixel size for all meat texture images optimizes computing power: it can lower the computational load and speed up the data analysis process. However, this method has several disadvantages, including a lack of spatial resolution and difficulty in accurately capturing any scale variations between images. The texture of each meat image can still be studied in depth using GLCM metrics, which include contrast, correlation, energy, homogeneity, and entropy. These GLCM measures are helpful when comparing the textural properties of fresh, frozen, and rotten meat from the different types of animals observed. Therefore, despite limitations in spatial resolution, GLCM analysis can still provide valuable insights into meat texture recognition and classification.
Table 5 illustrates how effectively all feature sets differentiate the three classes: (a), (b), and (c). The average percentage of fresh meat images ranges from 51.52% to 183.21%, while for frozen meat images, the range is from 78.25% to 185.75%, and for rotten meat images, the range is from 34.62% to 115.79%.
The image of fresh meat shows that fresh pork has less varied textures than other meats. In contrast, the higher mean for all GLCM measurements indicates that the variation in intensity of fresh goat meat is more significant. Significant variations in the intensity and texture structure of fresh meat images from different meat varieties were demonstrated in this experiment. These differences are useful for categorizing and identifying images and understanding the textural characteristics of different types of meat.
On frozen meat images, frozen pork texture images have the lowest average. It shows that the texture of frozen pork has less variation in intensity than frozen goat meat. In contrast, frozen goat meat shows a more striking difference in intensity. These results provide a summary of the textural characteristics of frozen goat meat. This information is useful for learning more about the differences between various cuts of meat and identifying or classifying frozen beef images.
However, the images of rotten meat show that, in most cases, the textural differences between rotten pork and rotten beef are not that pronounced; the difference in intensity is more significant in rotten meat. This study expands our knowledge of the textural characteristics of rotten beef images. This information can help differentiate between states of meat and identify or classify stages of rotten meat [57,58].
Based on (Table 5), the findings show how well the features can differentiate between the three classes: (a), (b), and (c). For fresh meat, the average variation ranges from 51.52% to 183.21%, indicating that fresh pork has a less varied texture. In frozen meat, pork has the lowest average, implying that the texture of frozen pork has less variation in intensity compared to goat meat. In rotten meat, the difference in texture between pork and beef is not that noticeable, but the difference in intensity is more significant. This information can help classify and identify types of meat and understand the differences between fresh, frozen, and rotten meat.
Based on the conclusions from the analysis in (Table 5), a series of features can differentiate between images of fresh, frozen, and rotten meat. Fresh pork has less variation in texture, whereas fresh goat meat has more significant variation in intensity. In frozen meat, pork has less variation in intensity, while goat meat shows more significant differences in intensity. The difference in texture between rotten pork and beef is not significant, but the difference in intensity is more striking. This information is useful for classifying and identifying types of meat based on texture characteristics and understanding the differences between fresh, frozen, and rotten meat categories.
Table 6 presents the validation results for all aspects of classes (a), (b), and (c) using k-NN classification, in terms of (A) sensitivity, (B) specificity, (C) accuracy, and (D) the Matthews correlation coefficient. For fresh meat images, the percentages range from 97.959% to 100% in (A); 96.078% to 100% in (B); 97% to 99% in (C); and 94.019% to 98.02% in (D). Frozen meat images have percentages ranging from 96.078% to 100% in (A), 97.959% to 100% in (B), 97% to 99% in (C), and 94.019% to 98.02% in (D). For rotten meat images, the percentages are as follows: (A) 92.308% to 96%; (B) 95.833% to 97.917%; (C) 94% to 96%; and (D) 88.07% to 92.074% [59,60]. Findings from the analysis of fresh, frozen, and rotten meat images on (A), (B), (C), and (D) provide essential information about how well the k-NN classification model can categorize meat status based on image texture.
The validation results using k-NN classification for all features of class (a), class (b), and class (c) are presented in (Table 6) below:
Fresh Meat: The model can correctly detect fresh meat images for both categories, as evidenced by (A) having the most significant proportion, dominant in the fresh goat and fresh horse meat texture images, and (B) having varying quality percentages. With the highest-accuracy images for fresh meat texture, the model can correctly differentiate fresh beef from frozen or rotten beef, showing how well it separates fresh meat images from other conditions. (C) The model classifies fresh meat images correctly; the texture images of fresh beef and fresh horse meat are the most accurate, showing how well the model recognizes fresh meat images and differentiates them from other conditions. (D) is most prevalent in the texture images of fresh beef and fresh horse meat, which have the highest percentage, indicating good agreement between the feature model and the actual classes in the fresh meat setting. On the other hand, the (D) percentage for the fresh buffalo meat texture image shows poor performance.
Frozen Meat: The model’s tendency to recognize the image of frozen pork accurately is demonstrated by the model having (A) the highest dominance percentage over the image of frozen buffalo meat texture. On the other hand, the texture image of frozen buffalo meat has the lowest (A) value, indicating problems in identifying this type of meat. The decreasing trend of the model in classifying the image of frozen buffalo meat correctly is demonstrated by (B), which displays the lowest percentage of all images of frozen buffalo meat texture. In contrast, the images of frozen beef and horse meat textures have the most significant percentage (B), indicating that the model performs better in classifying these images accurately. (C) The frozen buffalo meat texture image has the lowest accuracy, while the frozen pork texture image has the best dominant accuracy. This shows that the model classifies frozen pork images more accurately than buffalo meat images. However, this algorithm works well when classifying the image of frozen meat. When comparing the images of frozen goat meat and pork, the frozen beef percentage (D) is the highest. It shows how well and accurately the algorithm can classify the image of frozen goat meat and pork. On the other hand, the percentage (D) in the texture image of fresh buffalo meat shows poor performance.
Rotten Meat: Showing the model’s excellence in identifying the image of rotten goat meat, the (A) highest percentage is more dominant in the image of the texture. However, as the image of rotten buffalo meat has (A) the lowest percentage, it is challenging to distinguish this type of meat. (B) This has the lowest rate for the image of rotten buffalo and horse meat textures, which shows the difficulty of effectively identifying the image of both types of meat. However, the images of rotten beef and pork textures have the highest percentage, showing that the model is more accurate in classifying the image of rotten beef and pork. (C) The texture image of rotten goat, pork, and buffalo meat has the highest accuracy, while the texture image of rotten beef has the lowest. It shows that for the model to classify the image of rotten buffalo meat accurately, it has to be more precise. However, the images of rotten beef, goat meat, and pork can be identified correctly in this experiment. Overall, the system works well in categorizing cases of rotten meat. (D) This shows the images of rotten pork, goat meat, and beef textures. Rotten meat is more common and has the best percentage (D). This model performs well in classifying the image of rotten meat; however, the lowest level of the image is associated with the rotten horse meat texture (D).
Table 7 shows how the three models compare in terms of the performance measurement results of the k-NN algorithm, the Haar wavelet algorithm, and the GLCM for the three classes (fresh, frozen, and rotten), where (A) denotes k-NN, (B) Wavelet Haar, and (C) GLCM. First, for the fresh meat texture image class, the percentages are 97% to 99% in (A); 89.25% to 89.96% in (B); and 51.52% to 183.21% in (C). For frozen meat images, the percentages are 97% to 99% in (A), 87.56% to 88.25% in (B), and 78.25% to 185.75% in (C). For rotten meat images, the percentages range from 94% to 96% in (A), 86.26% to 87.97% in (B), and 34.62% to 115.79% in (C).
The performance measurement comparison of k-nearest neighbor algorithm, Haar Wavelet algorithm, and gray-level co-occurrence matrix can be seen in Table 7 below:
The performance results of algorithms (A), (B), and (C) on fresh, frozen, and rotten meat images provide valuable insights into how well the algorithm models determine the condition of meat based on texture images. Fresh Meat: (A) The k-NN algorithm gives the best results on texture images of fresh goat meat, horse meat, and beef, where it has the most significant standout score (99%). This shows how well the k-NN algorithm performs in classifying fresh meat images. k-NN performs worst on the fresh buffalo meat texture image, with a score of 97%. Meanwhile, (B) the Haar wavelet algorithm performs worse than k-NN, with its lowest value (89.25%) on the fresh pork texture image; on fresh beef texture images, however, it performs well (89.96%). (C) For the fresh pork texture image, the GLCM method gives its lowest value (51.52%), although GLCM data can still be used for classification, and GLCM reaches its largest value on the fresh goat meat texture images (183.21%). Of the three algorithms studied, the k-NN model performs the best in categorizing fresh meat texture images, ahead of Wavelet Haar and GLCM. This shows that the k-NN technique is more suitable for applications that manage fresh meat texture images.
Frozen Meat: (A) Using the k-NN method, the frozen pork texture image gives the most prominent value (99%), while the texture image of frozen buffalo meat yields the lowest result (97%). This shows how well the k-NN algorithm performs in classifying frozen meat images. (B) Compared to k-NN, the Haar wavelet algorithm performs worse, with its lowest value (87.56%) on the frozen beef texture images, although it continues to produce good results; its highest value, 88.25%, is also reached on a frozen beef texture image. (C) Based on the GLCM algorithm, the frozen pork texture image has the lowest value, 78.25%, although the GLCM results can still be used to classify frozen meat images; in this performance test, GLCM reaches its highest value, 185.75, on frozen beef. After three rounds of testing, k-NN maintains its top ranking in frozen meat texture image classification, with Wavelet Haar and GLCM ranking second and third, respectively. This shows that the k-NN technique is more suitable for applications that require frozen meat texture image classification.
Rotten Meat: (A) Using the k-NN method, the rotten buffalo meat texture image has the lowest percentage (94%), while the texture images of rotten beef, goat meat, and pork reach the highest value (96%), showing how effective the k-NN algorithm is at classifying images of rotten meat. (B) The Haar Wavelet algorithm is again outperformed by k-NN, with its lowest rotten-meat score on the rotten horse meat texture image (86.26%); nevertheless, it continues to produce good results. (C) The GLCM approach produces more uncertain results: the rotten pork texture image has the lowest score (34.62), while the rotten beef texture image has the highest (115.79); even so, the GLCM data can still be used to classify rotten meat images. After testing the three algorithms (k-NN, Wavelet Haar, and GLCM), k-NN continues to produce the best classification results for images containing rotten meat textures. This shows that the k-NN approach is more suitable for classifying rotten meat texture images.
The classification accuracy based on meat texture analysis is presented in Table 8 below:

5. Conclusions and Future Work

Fresh Meat: With a very high performance percentage, the k-NN algorithm is best at categorizing fresh meat texture images, especially fresh goat meat, horse meat, and beef. GLCM also performs well on fresh meat texture images, particularly fresh goat meat. Frozen Meat: The k-NN algorithm likewise excels at categorizing frozen pork texture images with a high performance percentage, while in performance testing the GLCM achieves its highest score on frozen goat meat texture images. Rotten Meat: The k-NN algorithm demonstrates its efficacy by standing out once again in categorizing texture images of rotten meat; GLCM can also be used to classify images of rotten meat, albeit with more uncertain results. Based on comprehensive testing, the k-NN technique is generally more suitable for classifying meat texture images, especially in distinguishing fresh, frozen, and rotten meat. The GLCM also produces impressive results in places, while the Haar Wavelet algorithm tends to perform worse than the other two, despite still producing acceptable results.
Future work is expected to develop and integrate the proposed model into a real-time meat freshness monitoring system by combining machine learning and deep learning algorithms, to apply more efficient data processing techniques by optimizing the wavelet decomposition, and to expand and augment the dataset with a wider variety of data.

Author Contributions

Conceptualization, K.K., H.H. and E.S.; Data Curation, K.K.; Methodology, H.H. and K.K.; Software, K.K.; Validation, K.K.; Visualization, K.K.; Writing—original draft, K.K.; Writing—review and editing, H.H.; Project Administration, E.S.; Supervision, H.H. and E.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The Information Systems Doctoral Program Graduate School supported this research, as did Diponegoro University and the Atma Luhur Pangkalpinang Institute of Science and Business.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Aisah, S.A.; Setyaningrum, A.H.; Wardhani, L.K.; Bahaweres, R. Identifying Pork Raw-Meat Based on Color and Texture Extraction Using Support Vector Machine. In Proceedings of the 2020 8th International Conference on Cyber and IT Service Management (CITSM), Pangkal Pinang, Indonesia, 23–24 October 2020. [Google Scholar] [CrossRef]
  2. Kiswanto; Hadiyanto; Sediyono, E. Modification of the Haar Wavelet Algorithm for Texture Identification of Types of Meat Using Machine; Springer Nature: Singapore, 2024. [Google Scholar] [CrossRef]
  3. Hanny Hikmayanti, H.; Madenda, S.; Widiyanto, S. Comparison of Beef Marbling Segmentation by Experts towards Computational Techniques by Using Jaccard, Dice and Cosine. Turk. J. Comput. Math. Educ. 2021, 12, 623–627. [Google Scholar]
  4. Handayani, H.H.; Masruriyah, A.F.N. Determination of beef marbling based on fat percentage for meat quality. Int. J. Psychosoc. Rehabil. 2020, 24, 8394–8401. [Google Scholar]
  5. Adi, K.; Pujiyanto, S.; Nurhayati, O.D.; Pamungkas, A. Beef quality identification using color analysis and k-nearest neighbor classification. In Proceedings of the 2015 4th International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME), Bandung, Indonesia, 2–3 November 2015; pp. 180–184. [Google Scholar] [CrossRef]
  6. Uddin, S.M.K.; Hossain, M.A.M.; Chowdhury, Z.Z.; Johan, M.R.B. Short targeting multiplex PCR assay to detect and discriminate beef, buffalo, chicken, duck, goat, sheep and pork DNA in food products. Food Addit. Contam. Part A 2021, 38, 1273–1288. [Google Scholar] [CrossRef]
  7. Farag, M.R.; Alagawany, M.; Abd El-Hack, M.E.; Tiwari, R.; Dhama, K. Identification of Different Animal Species in Meat and Meat Products: Trends and Advances. Adv. Anim. Vet. Sci. 2015, 3, 334–346. [Google Scholar]
  8. Dalsecco, L.S.; Palhares, R.M.; Oliveira, P.C.; Teixeira, L.V.; Drummond, M.G.; de Oliveira, D.A.A. A Fast and Reliable Real-Time PCR Method for Detection of Ten Animal Species in Meat Products. J. Food Sci. 2018, 83, 258–265. [Google Scholar] [CrossRef]
  9. Li, Y.; Liu, S.; Meng, F.; Liu, D.; Zhang, Y.; Wang, W.; Zhang, J. Comparative review and the recent progress in detection technologies of meat product adulteration. Compr. Rev. Food Sci. Food Saf. 2020, 19, 2256–2296. [Google Scholar] [CrossRef]
  10. Hossain, M.A.M.; Ali, E.; Sultana, S.; Asing; Bonny, S.Q.; Kader, A.; Rahman, M.A. Quantitative Tetraplex Real-Time Polymerase Chain Reaction Assay with TaqMan Probes Discriminates Cattle, Buffalo, and Porcine Materials in Food Chain. J. Agric. Food Chem. 2017, 65, 3975–3985. [Google Scholar] [CrossRef]
  11. Albkosh, F.M.; Hitam, M.S.; Yussof, W.N.J.H.W.; Hamid, A.A.K.A.; Ali, R. Optimization of discrete wavelet transform features using artificial bee colony algorithm for texture image classification. Int. J. Electron. Comput. Eng. 2019, 9, 5253–5262. [Google Scholar] [CrossRef]
  12. Sun, X.; Chen, K.; Maddock-Carlin, K.; Anderson, V.; Lepper, A.; Schwartz, C.; Keller, W.; Ilse, B.; Magolski, J.; Berg, E. Predicting beef tenderness using color and multispectral image texture features. Meat Sci. 2012, 92, 386–393. [Google Scholar] [CrossRef]
  13. Jackman, P.; Sun, D.W.; Allen, P.; Valous, N.A.; Mendoza, F.; Ward, P. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection. Meat Sci. 2010, 84, 711–717. [Google Scholar] [CrossRef]
  14. Ávila, M.; Durán, M.; Caballero, D.; Antequera, T.; Palacios-Pérez, T.; Cernadas, E.; Fernández-Delgado, M. Magnetic Resonance Imaging, texture analysis and regression techniques to non-destructively predict the quality characteristics of meat pieces. Eng. Appl. Artif. Intell. 2019, 82, 110–125. [Google Scholar] [CrossRef]
  15. Zeng, S.; Chen, L.; Jiang, L.; Gao, C. Hyperspectral imaging technique based on Geodesic K-medoids clustering and Gabor wavelets for pork quality evaluation. Int. J. Wavelets Multiresolution Inf. Process. 2017, 15, 1750066. [Google Scholar] [CrossRef]
  16. Wijaya, D.R. Noise filtering framework for electronic nose signals: An application for beef quality monitoring. Comput. Electron. Agric. 2019, 157, 305–321. [Google Scholar] [CrossRef]
  17. Kishore, M.; Issac, A.; Minhas, N.; Sarkar, B. Image processing based method to assess fish quality and freshness. J. Food Eng. 2016, 177, 50–58. [Google Scholar] [CrossRef]
  18. Pu, H.; Xie, A.; Sun, D.W.; Kamruzzaman, M.; Ma, J. Application of Wavelet Analysis to Spectral Data for Categorization of Lamb Muscles. Food Bioprocess Technol. 2014, 8, 1–16. [Google Scholar] [CrossRef]
  19. Masoumi, M.; Marcoux, M.; Maignel, L.; Pomar, C. Weight prediction of pork cuts and tissue composition using spectral graph wavelet. J. Food Eng. 2021, 299, 110501. [Google Scholar] [CrossRef]
  20. Wijaya, D.R.; Sarno, R.; Zulaika, E.; Sabila, S.I. Development of mobile electronic nose for beef quality monitoring. Procedia Comput. Sci. 2017, 124, 728–735. [Google Scholar] [CrossRef]
  21. Song, K.; Wang, S.H.; Yang, D.; Shi, T.Y. Combination of spectral and image information from hyperspectral imaging for the prediction and visualization of the total volatile basic nitrogen content in cooked beef. J. Food Meas. Charact. 2021, 15, 4006–4020. [Google Scholar] [CrossRef]
  22. Sujana, I.G.; Putra, E. Classification of Tuna Meat Grade Quality Based on Color Space Using Wavelet and k-Nearest Neighbor Algorithm. In Proceedings of the 2023 International Conference on Smart-Green Technology in Electrical and Information Systems (ICSGTEIS), Bali, Indonesia, 2–4 November 2023; pp. 2–4. [Google Scholar]
  23. Amin, V.; Wilson, D.; Rouse, G.; Udpa, S. Multiresolutional Texture Analysis for Ultrasound Tissue Characterization. Nondestruct. Test. Eval. 1998, 14, 201–215. [Google Scholar]
  24. Tao, Y. Wavelet-based adaptive thresholding method for image segmentation. Opt. Eng. 2001, 40, 868. [Google Scholar] [CrossRef]
  25. Kim, N.D.; Amin, V.; Wilson, D.; Rouse, G.; Udpa, S. Ultrasound image texture analysis for characterizing intramuscular fat content of live beef cattle. Ultrason. Imaging 1998, 20, 191–205. [Google Scholar] [CrossRef]
  26. Asmara, R.A.; Romario, R.; Batubulan, K.S.; Rohadi, E.; Siradjuddin, I.; Ronilaya, F.; Ariyanto, R.; Rahmad, C.; Rahutomo, F. Classification of pork and beef meat images using extraction of color and texture feature by Grey Level Co-Occurrence Matrix method. IOP Conf. Ser. Mater. Sci. Eng. 2018, 434, 012072. [Google Scholar] [CrossRef]
  27. Widiyanto, S. Texture Feature Extraction Based On GLCM and DWT for Beef Tenderness Classification. In Proceedings of the 2018 Third International Conference on Informatics and Computing (ICIC), Palembang, Indonesia, 17–18 October 2018; pp. 1–4. [Google Scholar]
  28. Asmara, R.A.; Hasanah, Q.; Rahutomo, F.; Rohadi, E.; Siradjuddin, I.; Ronilaya, F.; Handayani, A.N. Chicken meat freshness identification using colors and textures feature. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan, 25–29 June 2018; pp. 93–98. [Google Scholar] [CrossRef]
  29. Santoni, M.M.; Sensuse, D.I.; Arymurthy, A.M.; Fanany, M.I. Cattle Race Classification Using Gray Level Co-occurrence Matrix Convolutional Neural Networks. Procedia Comput. Sci. 2015, 59, 493–502. [Google Scholar] [CrossRef]
  30. Ibrahim, N.M. Muzzle feature extraction based on gray level co-occurrence matrix. Int. J. Vet. Med. 2016, 1, 16–24. [Google Scholar]
  31. Farinda, R.; Firmansyah, Z.; Sulton, C.; Wijaya, I.G.P.S.; Bimantoro, F. Beef Quality Classification based on Texture and Color Features using SVM Beef Quality Classification based on Texture and Color Features using SVM Classifier. J. Telemat. Informatics 2018, 6, 201–213. [Google Scholar] [CrossRef]
  32. Agustin, S.; Dijaya, R. Beef Image Classification using K-Nearest Neighbor Algorithm for Identification Quality and Freshness. J. Phys. Conf. Ser. 2019, 1179, 012184. [Google Scholar] [CrossRef]
  33. Jia, B.; Wang, W.; Yoon, S.C.; Zhuang, H.; Li, Y.F. Using a combination of spectral and textural data to measure water-holding capacity in fresh chicken breast fillets. Appl. Sci. 2018, 8, 343. [Google Scholar] [CrossRef]
  34. Suhadi, S.; Atika, P.D.; Sugiyatno, S.; Panogari, A.; Handayanto, R.T.; Herlawati, H. Mobile-based fish quality detection system using k-nearest neighbors method. In Proceedings of the 2020 Fifth International Conference on Informatics and Computing (ICIC), Gorontalo, Indonesia, 3–4 November 2020. [Google Scholar] [CrossRef]
  35. Yudhana, A.; Umar, R.; Saputra, S. Fish Freshness Identification Using Machine Learning: Performance Comparison of k-NN and Naïve Bayes Classifier. J. Comput. Sci. Eng. 2022, 16, 153–164. [Google Scholar] [CrossRef]
  36. Jiang, H.; Yuan, W.; Ru, Y.; Chen, Q.; Wang, J.; Zhou, H. Feasibility of identifying the authenticity of fresh and cooked mutton kebabs using visible and near-infrared hyperspectral imaging. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 282, 121689. [Google Scholar] [CrossRef]
  37. Barbon, A.P.A.d.C.; Barbon, S.; Campos, G.F.C.; Seixas, J.L.; Peres, L.M.; Mastelini, S.M.; Andreo, N.; Ulrici, A.; Bridi, A.M. Development of a flexible Computer Vision System for marbling classification. Comput. Electron. Agric. 2017, 142, 536–544. [Google Scholar] [CrossRef]
  38. Sabilla, S.I.; Sarno, R.; Triyana, K.; Hayashi, K. Deep learning in a sensor array system based on the distribution of volatile compounds from meat cuts using GC–MS analysis. Sens. Bio-Sens. Res. 2020, 29, 100371. [Google Scholar] [CrossRef]
  39. Matera, J.; Cruz, A.; Raices, R.; Silva, M.; Nogueira, L.; Quitério, S.; Cavalcanti, R.N.; Freitas, M.; Júnior, C.C. Discrimination of Brazilian artisanal and inspected pork sausages: Application of unsupervised, linear and non-linear supervised chemometric methods. Food Res. Int. 2014, 64, 380–386. [Google Scholar] [CrossRef]
  40. Gaber, T.; Tharwat, A.; Hassanien, A.E.; Snasel, V. Biometric cattle identification approach based on Weber’s Local Descriptor and AdaBoost classifier. Comput. Electron. Agric. 2016, 122, 55–66. [Google Scholar] [CrossRef]
  41. Xiong, Y.; Li, Y.; Wang, C.; Shi, H.; Wang, S.; Yong, C.; Gong, Y.; Zhang, W.; Zou, X. Non-Destructive Detection of Chicken Freshness Based on Electronic Nose Technology and Transfer Learning. Agriculture 2023, 13, 496. [Google Scholar] [CrossRef]
  42. González-Mohino, A.; Pérez-Palacios, T.; Antequera, T.; Ruiz-Carrascal, J.; Olegario, L.S.; Grassi, S. Monitoring the processing of dry fermented sausages with a portable NIRS device. Foods 2020, 9, 1294. [Google Scholar] [CrossRef]
  43. Varghese, A.; Jain, S.; Jawahar, M.; Prince, A.A. Auto-pore segmentation of digital microscopic leather images for species identification. Eng. Appl. Artif. Intell. 2023, 126, 107049. [Google Scholar] [CrossRef]
  44. Fernández-Ibáñez, V.; Fearn, T.; Soldado, A.; de la Roza-Delgado, B. Development and validation of near infrared microscopy spectral libraries of ingredients in animal feed as a first step to adopting traceability and authenticity as guarantors of food safety. Food Chem. 2010, 121, 871–877. [Google Scholar] [CrossRef]
  45. Kumar, S.; Singh, S.K.; Singh, R.; Singh, A.K. Animal Biometrics: Techniques and Applications; Springer: Berlin/Heidelberg, Germany, 2018; pp. 1–243. [Google Scholar] [CrossRef]
  46. Chen, T.; Qi, X.; Chen, M.; Chen, B. Gas Chromatography-Ion Mobility Spectrometry Detection of Odor Fingerprint as Markers of Rapeseed Oil Refined Grade. J. Anal. Methods Chem. 2019, 2019, 3163204. [Google Scholar] [CrossRef]
  47. Hasan, N.U.; Ejaz, N.; Ejaz, W.; Kim, H.S. Meat and fish freshness inspection system based on odor sensing. Sensors 2012, 12, 15542–15557. [Google Scholar] [CrossRef] [PubMed]
  48. Shik, A.V.; Sobolev, P.V.; Zubritskaya, Y.V.; Baytler, M.O.; Stepanova, I.A.; Chernyaev, A.P.; Borschegovskaya, P.Y.; Zolotov, S.A.; Doroshenko, I.A.; Podrugina, T.A.; et al. Rapid testing of irradiation dose in beef and potatoes by reaction-based optical sensing technique. J. Food Compos. Anal. 2024, 127, 105946. [Google Scholar] [CrossRef]
  49. Jiang, R.; Shen, J.; Li, X.; Gao, R.; Zhao, Q.; Su, Z. Detection and recognition of veterinary drug residues in beef using hyperspectral discrete wavelet transform and deep learning. Int. J. Agric. Biol. Eng. 2022, 15, 224–232. [Google Scholar] [CrossRef]
  50. Malegori, C.; Franzetti, L.; Guidetti, R.; Casiraghi, E.; Rossi, R. GLCM, an image analysis technique for early detection of biofilm. J. Food Eng. 2016, 185, 48–55. [Google Scholar] [CrossRef]
  51. Parsania, P.S.; Virparia, P.V. A Review: Image Interpolation Techniques for Image Scaling. Int. J. Innov. Res. Comput. Commun. Eng. 2014, 2, 7409–7414. [Google Scholar] [CrossRef]
  52. Fedorov, E.; Utkina, T.; Nechyporenko, O.; Korpan, Y. Development of technique for face detection in image based on binarization, scaling and segmentation methods. East. -Eur. J. Enterp. Technol. 2020, 1, 23–31. [Google Scholar] [CrossRef]
  53. DeCarli, C.; Murphy, D.G.M.; Teichberg, D.; Campbell, G.; Sobering, G.S. Local histogram correction of MRI spatially dependent image pixel intensity nonuniformity. J. Magn. Reson. Imaging 1996, 6, 519–528. [Google Scholar] [CrossRef]
  54. Bae, S.H.; Kim, M. A novel SSIM index for image quality assessment using a new luminance adaptation effect model in pixel intensity domain. In Proceedings of the 2015 Visual Communications and Image Processing (VCIP), Singapore, 13–16 December 2015; pp. 5–8. [Google Scholar] [CrossRef]
  55. Jakubek, J. Data processing and image reconstruction methods for pixel detectors. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2007, 576, 223–234. [Google Scholar] [CrossRef]
  56. Wu, H.; Li, Z. Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling. Sensors 2009, 9, 1768–1793. [Google Scholar] [CrossRef]
  57. Yang, Y.; Wang, W.; Zhuang, H.; Yoon, S.C.; Jiang, H. Fusion of spectra and texture data of hyperspectral imaging for the prediction of the water-holding capacity of fresh chicken breast filets. Appl. Sci. 2018, 8, 640. [Google Scholar] [CrossRef]
  58. Park, B.; Kise, M.; Windham, W.R.; Lawrence, K.C.; Yoon, S.C. Textural analysis of hyperspectral images for improving contaminant detection accuracy. Sens. Instrum. Food Qual. Saf. 2008, 2, 208–214. [Google Scholar] [CrossRef]
  59. Ali, N.; Neagu, D.; Trundle, P. Evaluation of k-nearest neighbour classifier performance for heterogeneous data sets. SN Appl. Sci. 2019, 1, 1559. [Google Scholar] [CrossRef]
  60. Gallego, A.J.; Rico-juan, J.R.; Valero-mas, J.J. Efficient k-nearest neighbor search based on clustering and adaptive k values. Pattern Recognit. 2021, 122, 108356. [Google Scholar] [CrossRef]
  61. Africa, A.M.D.; Alberto, S.T.C.; Tan, T.Y.E. Development of a portable electronic device for the detection and indication of fireworks and firecrackers for security personnel. Indones. J. Electr. Eng. Comput. Sci. 2020, 19, 1194–1203. [Google Scholar] [CrossRef]
  62. Wijaya, D.R.; Sarno, R.; Zulaika, E. DWTLSTM for electronic nose signal processing in beef quality monitoring. Sens. Actuators B Chem. 2021, 326, 128931. [Google Scholar] [CrossRef]
  63. Ayaz, H.; Ahmad, M.; Mazzara, M.; Sohaib, A. Hyperspectral imaging for minced meat classification using nonlinear deep features. Appl. Sci. 2020, 10, 7783. [Google Scholar] [CrossRef]
Figure 1. Classification stages of the meat texture image.
Figure 2. Classification Stage Model.
Figure 3. Haar wavelet approximation on images of fresh meat types and their histograms.
Figure 4. GLCM: (a) horizontal, (b) vertical, and (c) diagonal images of fresh meat types and their histograms.
Figure 5. Classification of texture images of meat types using k-NN, Wavelet Haar, and GLCM approaches.
Table 1. Pseudocode for the approximation (cA) and detail coefficients (cH, cV, and cD) calculated in the Haar wavelet transform.
1. Input: take a grayscale image as the input image.
2. Read: divide the image into 2 × 2 sub-blocks; for each sub-block:
3. Read: calculate the average of the pixels in the sub-block (the approximation component), then calculate the horizontal, vertical, and diagonal details.
4. Horizontal detail = top-left pixel − average.
5. Vertical detail = top-right pixel − average.
6. Diagonal detail = bottom-left pixel − average.
7. Combine all the average values to create the approximation coefficient (cA) component.
8. Combine all the horizontal detail values to create the horizontal detail coefficient (cH) component.
9. Combine all the vertical detail values to create the vertical detail coefficient (cV) component.
10. Combine all the diagonal detail values to create the diagonal detail coefficient (cD) component.
11. Return cA, cH, cV, and cD as the result of the Haar wavelet transform.
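For illustration, the following is a minimal Python sketch of the Table 1 pseudocode, assuming the image is a NumPy grayscale array with even height and width; it is a sketch of the steps as written, not the authors' implementation. Note that the orthonormal Haar filters in standard libraries (e.g., pywt.dwt2(img, 'haar') in PyWavelets) combine all four pixels in each detail band, so their coefficients differ from this block-average variant.

import numpy as np

def haar_decompose(img):
    # Steps 1-2: grayscale input, viewed as 2 x 2 sub-blocks via strided slicing.
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each sub-block
    b = img[0::2, 1::2].astype(float)  # top-right pixel
    c = img[1::2, 0::2].astype(float)  # bottom-left pixel
    d = img[1::2, 1::2].astype(float)  # bottom-right pixel
    cA = (a + b + c + d) / 4.0  # step 3: sub-block average (approximation)
    cH = a - cA                 # step 4: horizontal detail
    cV = b - cA                 # step 5: vertical detail
    cD = c - cA                 # step 6: diagonal detail
    return cA, cH, cV, cD       # steps 7-11: coefficient components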
Table 2. Feature subset.
No. | Type of Meat | Feature Subsets | Sample
1 | Beef | Fresh Beef | 50
1 | Beef | Frozen Beef | 50
1 | Beef | Rotten Beef | 50
2 | Buffalo | Fresh Buffalo Meat | 50
2 | Buffalo | Frozen Buffalo Meat | 50
2 | Buffalo | Rotten Buffalo Meat | 50
3 | Goat | Fresh Goat Meat | 50
3 | Goat | Frozen Goat Meat | 50
3 | Goat | Rotten Goat Meat | 50
4 | Horse | Fresh Horse Meat | 50
4 | Horse | Frozen Horse Meat | 50
4 | Horse | Rotten Horse Meat | 50
5 | Pork | Fresh Pork | 50
5 | Pork | Frozen Pork | 50
5 | Pork | Rotten Pork | 50
Table 3. Confusion matrix [54].
Actual \ Predicted | Positive Prediction | Negative Prediction
Actual Positive | True Positive (TP) | False Negative (FN)
Actual Negative | False Positive (FP) | True Negative (TN)
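The sensitivity, specificity, accuracy, and Matthews correlation coefficient reported in Table 6 follow directly from the four counts in Table 3. Below is a minimal Python sketch of these standard definitions (textbook formulas, not code taken from this study):

def confusion_metrics(tp, fn, fp, tn):
    # Standard binary-classification metrics from the Table 3 counts.
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, accuracy, mcc

For example, the hypothetical counts TP = 50, FN = 1, FP = 0, TN = 49 give approximately 98.039%, 100%, 99%, and 98.02%, matching the first row of Table 6.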
Table 4. Illustration of the histogram distribution and findings of meat texture experiments in class (a) fresh meat, class (b) frozen meat, and class (c) rotten meat. There were 50 samples for each of these classes of meat.
[Sample texture image and corresponding grayscale histogram for classes (a) fresh, (b) frozen, and (c) rotten of each meat type: beef, buffalo, goat, horse, and pork.]
Table 5. Comparison of GLCM feature extraction results for contrast, correlation, energy, homogeneity, and entropy metrics.
Type of Meat | Class | Contrast | Correlation | Energy | Homogeneity | Entropy | Minimum | Maximum | Average
Beef | Fresh Beef | 686.14 | 0.466 | 0.016 | 0.055 | −72.23 | −72.23 | 686.14 | 122.89
Beef | Frozen Beef | 552.35 | 0.454 | 0.014 | 0.063 | −72.15 | −72.15 | 552.35 | 96.15
Beef | Rotten Beef | 651.1 | 0.86 | 0.014 | 0.058 | −73.09 | −73.09 | 651.1 | 115.79
Buffalo | Fresh Buffalo Meat | 656.57 | 0.68 | 0.01 | 0.056 | −72.66 | −72.66 | 656.57 | 116.93
Buffalo | Frozen Buffalo Meat | 551.83 | 0.644 | 0.012 | 0.056 | −73.53 | −73.53 | 551.83 | 95.80
Buffalo | Rotten Buffalo Meat | 583.98 | 0.056 | 0.018 | 0.053 | −72.43 | −72.43 | 583.98 | 102.33
Goat | Fresh Goat Meat | 988.17 | 0.28 | 0.015 | 0.056 | −72.48 | −72.48 | 988.17 | 183.21
Goat | Frozen Goat Meat | 999.097 | 0.474 | 0.012 | 0.05 | −70.90 | −70.90 | 999.097 | 185.75
Goat | Rotten Goat Meat | 545.43 | 0.304 | 0.017 | 0.064 | −71.60 | −71.60 | 545.43 | 94.84
Horse | Fresh Horse Meat | 716.54 | 0.278 | 0.012 | 0.055 | −72.82 | −72.82 | 716.54 | 128.81
Horse | Frozen Horse Meat | 624.06 | 0.488 | 0.013 | 0.06 | −73.25 | −73.25 | 624.06 | 110.28
Horse | Rotten Horse Meat | 458.32 | 0.332 | 0.02 | 0.079 | −72.17 | −72.17 | 458.32 | 77.32
Pork | Fresh Pork | 329.53 | 0.376 | 0.025 | 0.056 | −72.37 | −72.37 | 329.53 | 51.52
Pork | Frozen Pork | 462.78 | 0.392 | 0.024 | 0.051 | −71.99 | −71.99 | 462.78 | 78.25
Pork | Rotten Pork | 244.98 | 0.34 | 0.029 | 0.064 | −72.30 | −72.30 | 244.98 | 34.62
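The five metrics in Table 5 can be computed with standard tooling. The sketch below uses scikit-image (graycomatrix/graycoprops, as spelled in skimage 0.19 and later) with distance 1 and the four angles used in this study (0°, 45°, 90°, and 135°); since graycoprops does not provide entropy, it is computed manually, and the distance, normalization, and entropy formulation here are assumptions rather than the paper's exact settings:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray):
    # gray: 8-bit grayscale image; co-occurrence offsets of distance 1
    # at the four study angles.
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ('contrast', 'correlation', 'energy', 'homogeneity')}
    # Entropy per (distance, angle) pair from the normalized matrix,
    # then averaged over the four angles.
    p = glcm
    log_p = np.log2(p, where=p > 0, out=np.zeros_like(p))
    feats['entropy'] = float(-(p * log_p).sum(axis=(0, 1)).mean())
    return feats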
Table 6. Validation results using k-NN classification for all features of class (a), class (b), and class (c).
Number of Neighbors (k) | Class | Sensitivity | Specificity | Accuracy | Matthews Correlation Coefficient
1 | Fresh Beef | 98.039% | 100% | 99% | 98.02%
1 | Frozen Beef | 96.154% | 100% | 98% | 96.077%
1 | Rotten Beef | 94.231% | 97.917% | 96% | 92.074%
2 | Fresh Buffalo Meat | 97.959% | 96.078% | 97% | 94.019%
2 | Frozen Buffalo Meat | 96.078% | 97.959% | 97% | 94.019%
2 | Rotten Buffalo Meat | 92.308% | 95.833% | 94% | 88.07%
3 | Fresh Goat Meat | 100% | 98.039% | 99% | 98.02%
3 | Frozen Goat Meat | 98% | 98% | 98% | 98%
3 | Rotten Goat Meat | 96% | 96% | 96% | 92%
4 | Fresh Horse Meat | 100% | 98.939% | 99% | 98.02%
4 | Frozen Horse Meat | 96.154% | 100% | 98% | 96.077%
4 | Rotten Horse Meat | 94.118% | 95.918% | 95% | 90.018%
5 | Fresh Pork | 98% | 98% | 98% | 96%
5 | Frozen Pork | 100% | 98.039% | 99% | 98.02%
5 | Rotten Pork | 94.231% | 97.917% | 96% | 92.074%
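The k-NN validation behind Table 6 can be reproduced in outline with scikit-learn. The sketch below assumes a feature matrix X (e.g., the extracted Haar and GLCM features per image) and a label vector y; the study's exact train/test protocol is not restated here, so the 10-fold cross-validation is an assumption:

from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef

def evaluate_knn(X, y, k=1):
    # Evaluate a k-nearest neighbor classifier via cross-validated predictions.
    knn = KNeighborsClassifier(n_neighbors=k)
    y_pred = cross_val_predict(knn, X, y, cv=10)  # assumption: 10-fold CV
    return accuracy_score(y, y_pred), matthews_corrcoef(y, y_pred)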
Table 7. Comparison of performance measurements of the k-nearest neighbor algorithm, Haar Wavelet algorithm, and gray-level co-occurrence matrix.
File | Class | k-NN Accuracy % | Haar Wavelet Accuracy % | GLCM Accuracy %
Beef | Fresh Beef | 99 | 89.96 | 122.89
Beef | Frozen Beef | 98 | 88.25 | 96.15
Beef | Rotten Beef | 96 | 87.97 | 115.79
Buffalo | Fresh Buffalo Meat | 97 | 89.75 | 116.93
Buffalo | Frozen Buffalo Meat | 97 | 87.96 | 95.80
Buffalo | Rotten Buffalo Meat | 94 | 86.88 | 102.33
Goat | Fresh Goat Meat | 99 | 89.47 | 183.21
Goat | Frozen Goat Meat | 98 | 86.73 | 185.75
Goat | Rotten Goat Meat | 96 | 86.79 | 94.84
Horse | Fresh Horse Meat | 99 | 89.85 | 128.81
Horse | Frozen Horse Meat | 98 | 87.56 | 110.28
Horse | Rotten Horse Meat | 95 | 86.26 | 77.32
Pork | Fresh Pork | 98 | 89.25 | 51.52
Pork | Frozen Pork | 99 | 87.67 | 78.25
Pork | Rotten Pork | 96 | 86.36 | 34.62
Table 8. Classification accuracy based on meat texture analysis.
Author | Structure | Texture Analysis Method (Features) | Method | Accuracy (%)
Yudhana, A.; Umar, R.; Saputra, S. [35] | Fish | RGB colors and GLCM features | k-NN | 94%
Africa, A.M.D.; Alberto, S.T.C.; Tan, T.Y.E. [61] | Beef and pork | Skewness, kurtosis, mean, and standard deviation | k-NN | 98.6%
Wijaya, D.R.; Sarno, R.; Zulaika, E. [62] | Beef | Regression results (black: actual, blue: prediction, red: prediction with error) | Discrete Wavelet Transform with Long Short-Term Memory (DWTLSTM) and k-nearest neighbor (k-NN) | 85.05%
Kiswanto; Hadiyanto; Sediyono, E. [2] | Beef, buffalo, goat, horse, and pork | RGB, GLCM, and HSV | Haar wavelet algorithm | 76.72%
Ayaz, H.; Ahmad, M.; Mazzara, M.; Sohaib, A. [63] | Meat | HSI | k-NN | 82%