Article

Machine Learning-Based Classification of Rock Bursts in an Active Coal Mine Dominated by Non-Destructive Tremors

by
Łukasz Wojtecki
1,*,
Mirosława Bukowska
1,
Sebastian Iwaszenko
1 and
Derek B. Apel
2
1
Central Mining Institute—National Research Institute, 1 Gwarków Sqr., 40-166 Katowice, Poland
2
School of Mining and Petroleum Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(12), 5209; https://doi.org/10.3390/app14125209
Submission received: 21 May 2024 / Revised: 10 June 2024 / Accepted: 13 June 2024 / Published: 15 June 2024
(This article belongs to the Section Earth Sciences)

Abstract
Rock bursts are dynamic phenomena in underground openings, causing damage to support and infrastructure, and are one of the main natural hazards in underground coal mines. The prediction of rock bursts is important for improving safety in mine openings. The hazard of rock bursts is correlated with seismic activity, but rock bursts are rare compared to non-destructive tremors. Five machine learning classifiers (multilayer perceptron, adaptive boosting, gradient boosting, K-nearest neighbors, and Gaussian naïve Bayes), along with an ensemble hard-voting classifier composed of them, were used to recognize rock bursts among the dominant non-destructive tremors. Machine learning models were trained and tested on ten sets of randomly selected data obtained from one of the active hard coal mines in the Upper Silesian Coal Basin, Poland. For each of the 627 cases in the database, 15 features representing the geological, geomechanical, mining, and technical conditions in the opening, as well as the tremor energy and correlated peak particle velocity, were determined. The geological and geomechanical parameters of the coal seams and surrounding rocks were aggregated into a single GEO index. The share of rock bursts in the database was only about 8.5%; therefore, the ADASYN balancing method, which addresses imbalanced datasets, was used. The ensemble hard-voting classifier classified rock bursts most effectively, with an average recall of 0.74.

1. Introduction

A rock burst is a catastrophic phenomenon occurring in underground mines and geotechnical openings during which rocks are violently ejected into the excavation space. As a result of this dynamic phenomenon, the underground opening, or part of it, including the support, machines, and devices, is damaged or destroyed. In most underground mines and underground geotechnical structures, rock bursts have a stress origin: the stress level in the rock mass in the vicinity of the opening is so high that the strength of the rocks is exceeded, and they suddenly break. The ability of rocks to accumulate strain energy and release it rapidly is, of course, also essential.
In underground hard coal mines, a rock burst can be defined as a sudden release of strain energy accumulated in the rock mass, correlated with high-energy vibrations of the rock mass, acoustic effects, and shock waves, causing the structural destruction of a coal seam or its roof or floor rocks and, simultaneously, an ejection of rocks into the excavation [1]. The dynamic destruction of an opening in a coal seam, having a stress origin, is usually referred to as a coal bump. However, due to the characteristic geological structure of coal deposits (i.e., thick layers of competent rocks), another type of rock burst, called a stroke rock burst, can also occur in coal mines [1,2]. A stroke rock burst is caused by the dynamic load pulse resulting from the fracture of thick layers of competent rocks, which affects the part of a coal seam close to an excavation [2]; in this case, the coal seam need not be stressed [2]. The dynamic load pulse may also impact a partially stressed sidewall. Such a mixed rock burst is referred to as a stress-stroke rock burst [2].
The occurrence of rock bursts in coal mines is conditioned by many factors, making their prediction very challenging. The ability of the coal seam to accumulate strain energy and disintegrate violently (i.e., the proneness of the coal seam to burst and the stress level in the coal seam) is of great importance [1,2]. The stress level in the coal seam may result from the thickness of the overburden, mining remnants, protective pillars, and geological disturbances [1,2]. Additionally, mining works performed in the same coal seam may significantly determine the risk of rock bursts in subsequent openings [2]; a neighborhood of goafs in the same coal seam or excessive bed cutting may affect the level of rock burst hazard [2]. A floor heave may occur when the seam is thick or weak rocks (e.g., shales) are deposited in its floor. This phenomenon is often associated with a rock burst, especially when the top layer of a thick seam is mined first and there is coal in the floor of the excavation. The presence of thick sandstone layers in the deposit structure is another factor that may contribute to the occurrence of stroke or stress-stroke rock bursts. The proneness of the seam and the surrounding rocks to rock bursts may be considered together. In addition to the factors mentioned above, mining and technical factors may also determine the occurrence of rock bursts. Releasing stress from the coal seam or fracturing strong roof rocks (e.g., by blasting) can significantly reduce the occurrence of rock bursts. Moreover, strengthening the support of openings may make them more resistant to rock bursts; in such cases, a rock burst may not occur, or its damage may be minimized. As presented above, a multitude of factors influences, and can minimize, the occurrence of rock bursts in coal mines.
The risk of rock bursts is one of the basic threats in coal mines in the Upper Silesian Coal Basin (USCB). In mines located in the USCB area, the mining hazard assessment method estimates the potential risk of rock bursts. This method is a part of the complex method for assessing the state of rock burst hazards in coal mines [2]. Factors such as the depth of mining, the thickness of the coal seam, the uniaxial compressive strength of coal, the presence of edges of other seams, geological disturbances, destressing mining, and others are taken into account, and each of them is assigned an appropriate number of points. Finally, the point values of all factors are summed up, and the opening or its part is assigned to one of four degrees of rock burst risk (i.e., none, low, medium, and high) [2].
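The point-based qualification described above can be sketched in a few lines of code. This is only an illustrative sketch: the factor point values and degree thresholds in the example are hypothetical placeholders, as the actual tables are defined by the complex method [2].

```python
# Sketch of the point-based qualification of rock burst risk.
# The point values and degree thresholds below are HYPOTHETICAL
# placeholders; the complex method [2] defines the actual tables.

def rock_burst_risk_degree(factor_points, thresholds=(10, 25, 40)):
    """Sum the factor points and map the total to one of four risk degrees."""
    total = sum(factor_points.values())
    low, medium, high = thresholds
    if total < low:
        return "none"
    if total < medium:
        return "low"
    if total < high:
        return "medium"
    return "high"

# Hypothetical point values for the factors named in the text
example = {
    "mining_depth": 8,
    "seam_thickness": 4,
    "coal_ucs": 6,
    "seam_edges": 10,
    "geological_disturbances": 5,
}
print(rock_burst_risk_degree(example))  # total = 33 -> "medium"
```

The design mirrors the method's structure: each factor contributes points independently, and only the sum decides the final degree.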
In addition, when designing mining works, the state of stress in the rock mass is usually calculated analytically [3,4,5,6]. This allows the amount of stress in the rock mass to be estimated near structures such as edges and pillars in other seams, faults, seam splits, etc. There is also a growing interest in modeling rock and support behavior in correlation with mining activities [7,8]. Although the phenomenon of rock bursts in coal mines has been studied for many years [9,10,11,12], its prediction remains very challenging, as many parameters, and the nonlinear relationships between them, influence it.
Machine learning algorithms are increasingly used in many fields, including mining (e.g., for mining-induced seismicity [13,14,15] and mining subsidence [16,17]) and mining geology [18,19]. This is due to their ability to solve nonlinear problems. Attempts have also been made to apply these algorithms directly to assess the risk of rock bursts in mines extracting various rocks and minerals, including coal mines.

1.1. Current Use of Machine Learning Algorithms for Rock Burst Prediction

The prediction of rock bursts using machine learning algorithms provides an alternative approach adopted by many researchers, which is based on learning by experience. Machine learning algorithm models are trained without the need to determine the cause of rock bursts. Supervised machine learning algorithms have mainly been used to classify stress rock bursts into appropriate intensity levels. This is understandable because rock bursts of this type most often occur during the underground mining of various mineral resources and tunneling. The parameters taken into account mainly concern the stress level and strength parameters of rocks in which the opening is drilled such as the maximum tangential stress, the uniaxial tensile strength, the uniaxial compressive strength, and the elastic energy index. For example, Bai et al. [20] trained an artificial neural network model based on 20 rock bursts that occurred during tunneling. Chen et al. [21] applied artificial neural networks for predicting rock bursts and their intensity, and the total database included 18 cases. Ge and Feng [22] proposed the use of artificial neural networks as weak classifiers in the AdaBoost algorithm to categorize and predict rock bursts and considered 36 rock burst cases. Su et al. [23] applied the k-nearest neighbor algorithm for rock burst prediction at a great depth in South Africa. Su et al. [24] proposed a rock burst intensity prediction model based on Gaussian process (GP). The model was trained on thirteen cases and tested on five cases of rock bursts during geoengineering works in different countries. Gong et al. [25] used a Bayes discriminant analysis method to predict the possibility and classification of rock bursts, and the database included 14 cases. Zhou et al. [26] used the support vector machine (SVM) algorithm for rock burst prediction. Their database contained 132 rock bursts in openings drilled in various rocks (e.g., granite, sandstone, diorite, rhyolite). Jia et al. 
[27] proposed the particle swarm optimization (PSO) algorithm and the general regression neural network (GRNN). The database contained 26 cases. Adoko et al. [28] combined neural networks with fuzzy logic for predicting the intensity of rock bursts, and the dataset consisted of 174 rock bursts. Zhou et al. [29] compared the ability of the following machine learning algorithms to predict rock burst categories: linear discriminant analysis, quadratic discriminant analysis, partial least-squares discriminant analysis, naïve Bayes, k-nearest neighbor, multilayer perceptron neural network, classification tree, support vector machine, random forest, and gradient-boosting machine, and their dataset included 246 rock bursts. Li et al. [30] applied Bayesian networks for rock burst prediction, and their rock burst database contained 135 cases. Shrihani Faradonbeh and Taheri [31] applied genetic algorithm-based emotional neural network (GA-ENN), gene expression programming (GEP), and decision tree-based C4.5 algorithm to predict the occurrence of rock burst in a binary condition and their dataset included 134 rock bursts. Li and Jimenez [32] used the logistic regression classifier for the long-term prediction of rock burst. The database contained 135 cases (i.e., 83 rock bursts and 52 non-rock burst events). Pu et al. [33] applied a decision tree model to predict rock burst potential in kimberlite at an underground diamond mine. The database contained 132 rock bursts including 108 complete samples and 24 incomplete samples. Zheng et al. [34] applied the entropy weight gray relational backpropagation (BP) neural network to predict the intensity grade of rock burst and the database used to train the model contained 20 rock bursts. Ghasemi et al. [35] applied a decision tree algorithm to predict the occurrence and intensity of rock burst and the database contained 174 rock bursts. Ke et al. 
[36] developed naïve Bayes (NB) models and optimized them using four weight optimization methods (i.e., forward, backward, particle swarm optimization, and evolutionary). The database contained 134 rock bursts from different projects (e.g., hydropower plants, underground coal and metalliferous mines, and powerhouse stations). Shukla et al. [37] utilized XGBoost, decision tree, and support vector machine to predict the occurrence of rock burst in various underground projects. A total of 134 rock burst events were gathered together from various published studies. Ullah et al. [38] employed t-distributed stochastic neighbor embedding (t-SNE), K-means clustering, and XGBoost to predict the short-term rock burst risk. A total of 93 rock burst patterns with six influential features from microseismic monitoring events of the Jinping-II hydropower project in China were used. Li et al. [39] combined feature selection (FS), t-distributed stochastic neighbor embedding (t-SNE), and Gaussian mixture model (GMM) clustering for data preprocessing and the dataset contained 344 rock burst cases.

1.2. Current State of Research for Hard Coal Mines

Machine learning algorithms have also been used to predict rock bursts and their intensity specifically in underground coal mines. For example, Sun et al. [40] proposed a fuzzy neural network risk prediction model for rock bursts trained with the improved backpropagation algorithm. There were 10 input parameters representing the conditions in the openings. The model was trained on 16 samples and tested on 10 samples. Shi et al. [41] developed a rock burst prediction model based on the BP neural network. Eight input parameters were used, representing geological and mining conditions and mining technology. The model was trained on seventeen cases and tested on six cases. Wojtecki et al. [42] classified cases of rock bursts and non-destructive tremors in one of the active underground coal mines. The database contained 150 samples, and the percentage of rock bursts was approximately 35%. These samples were characterized by 25 input parameters. Seven parameters described the geological conditions, ten parameters defined the mining conditions, and three parameters described the technical/technological conditions. There were also two seismic parameters and three parameters describing the rock burst hazard. The models of the multilayer perceptron classifier and the decision tree were the most effective. Wojtecki et al. [43] reduced the number of parameters in the rock burst database and replaced the geological and mining parameters with the rock mass bursting tendency index WTG and the anomaly of the vertical stress component in coal seams, respectively. A total of eleven input parameters were used: the thickness of the seams, two seismic parameters, three technical/technological parameters, and three parameters describing the seismic hazard were retained. Again, the decision tree and multilayer perceptron classifier models classified the rock bursts best. Miao et al. [44] proposed a rock burst prediction index system containing 16 indicators. A total of 118 cases were used as sample data.
Among the models compared, the Boruta-random forest model had the best predictive performance [44].
Only about 1% of tremors with an energy that could lead to a rock burst in the opening actually end up in this way [45]. In hard coal mines in the USCB, the number of rock bursts is significantly smaller than the number of strong tremors that can lead to a rock burst. In recent years, the ratio of the number of rock bursts (classified according to Polish mining regulations) to the number of strong tremors (i.e., tremors with an energy greater than or equal to 1 × 10^5 J, i.e., with a local magnitude ML ≥ 1.68) in coal mines in the USCB was as follows: ~0.13% (2018), ~0.32% (2019), ~0.07% (2020), and ~0.13% (2021). It should be noted that in underground coal mines located in the USCB, there are also dynamic phenomena that are less destructive (i.e., they do not cause a loss of functionality of the openings, or they cause a slight loss of their functionality or a slight deterioration in the safety of their use). Such phenomena are not classified as rock bursts in official statistics. However, the number of all dynamic phenomena, regardless of the extent of destruction and damage to underground mine openings (i.e., whether classified as a rock burst or not), is still very small compared to the number of non-destructive strong tremors. The prepared database at least partially reflects this imbalance between the numbers of destructive and non-destructive phenomena in mining openings.
Based on the selected geological, geomechanical, mining and technical parameters as well as the tremor energy and the correlated peak particle velocity (PPV), an attempt was made to classify rock burst cases and non-destructive tremors in one of the active hard coal mines in the USCB. The rock mass proneness to rock bursts was assessed via the GEO system [46]. All geological and geomechanical parameters were included in the GEO index [46].
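The classification setup summarized in the abstract (five base classifiers combined by hard voting) can be sketched with scikit-learn as follows. The synthetic dataset stands in for the real 627-sample mine database (with a comparable ~8.5% positive class), so the resulting recall is illustrative only; the paper does not specify its software stack, and all hyperparameters below are assumptions.

```python
# Minimal sketch of a hard-voting ensemble of the five classifiers named
# in the abstract. Synthetic data replaces the 15-feature mine database.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for the 627-sample, 15-feature database (~8.5% rock bursts)
X, y = make_classification(n_samples=627, n_features=15,
                           weights=[0.915], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("gnb", GaussianNB()),
    ],
    voting="hard",  # majority vote of the five base classifiers
)
ensemble.fit(X_tr, y_tr)
print("recall:", recall_score(y_te, ensemble.predict(X_te)))
```

Hard voting assigns the label chosen by the majority of the base classifiers, so a misprediction by any single model can be outvoted by the others.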

2. Materials and Methods

2.1. Characteristics of the Database

Parameters related to the occurrence of rock bursts in the underground openings of coal mines, or to preventing their occurrence, were selected. The database contained a total of 627 samples, of which 53 were cases of rock bursts (labelled as 1); in the remaining cases (labelled as 0), despite the tremors, the excavation was not damaged or destroyed. All dynamic phenomena that led to any damage to the opening were classified as rock bursts. Thus, the samples labelled as 1 include all cases of damage or destruction of underground infrastructure on a local scale or over a range of several dozen to several hundred meters (e.g., floor uplift, damage to the support (steel arches, mesh, props), and coal ejection into the opening). In this way, the definition of a rock burst was extended; according to Polish mining regulations, only a case of complete loss of functionality of an opening is considered a rock burst.
The samples in the database represent points in underground mining excavations for which the geological, geomechanical, mining, and technical conditions were determined and correlated with the occurrence of a tremor with known energy and the corresponding value of peak particle velocity. These data come from one of the hard coal mines in Poland, in the USCB. The risk of rock bursts in the selected mine remains high due to, for example, the complex geological structure of the deposit (thick layers of sandstone, faults, folds, washouts, etc.), mining remnants (edges of coal seams, pillars), and the proneness of the coals to accumulate strain energy and burst. In the selected mine, coal seams are extracted using a longwall system with caving. Mining is carried out at large depths, even exceeding 1 km. Such large depths determine a high level of primary stress, which also adversely affects the occurrence of rock bursts. The previously mentioned thick sandstone layers are also very unfavorable in the selected mine; the fracturing of these sandstones is a source of strong tremors threatening underground openings. The data collected in the database cover the last 25 years of mining in the selected mine. Over this period, the geological, mining, and technological conditions, rock burst prevention methods, and seismological observations can be considered comparable. For each of the 627 samples in the database, 15 parameters (features) related to the occurrence of rock bursts or their prevention were included.

2.1.1. GEO Index

The potential rock burst hazard is determined while planning new openings, which can be regarded as long-term rock burst prediction. First, the mechanical properties of the coal and/or surrounding rocks (e.g., the uniaxial compressive strength) are considered. Moreover, places where the risk of rock bursts is greater due to the properties of the coal or surrounding rocks, or due to the presence of factors affecting the stress level in the rock mass (e.g., edges of adjacent coal seams), are identified. Assessing the geomechanical properties of the excavated rocks is one of the basic methods of assessing the potential rock burst hazard. Numerous studies in this area have been carried out for many years in the mines of the USCB [46,47,48,49,50,51]. However, most of the indicators for assessing rock burst proneness in coal mines refer mainly to different coal types and the rocks surrounding the coal seams. Only a few of them are used to assess the proneness of the rock mass, with careful consideration of its geological features. Until the 1970s, the natural proneness to rock bursts was assessed in the USCB only qualitatively [46]. Then, indicators of rock burst proneness were developed based on the pre-failure stress–strain characteristics obtained during the compression of a sample. The first studies concerned the strain energy storage index WET [47]. This index was intended for coal. The coal sample is loaded to 81–90% of the breaking stress, unloaded to a value of about 5%, and repeatedly loaded until the sample is destroyed [46]. The WET index is the dimensionless ratio of the elastic deformation energy to the energy irreversibly lost during the sample deformation. For the conditions occurring in coal mines in the USCB, an attempt was made to collectively analyze the factors determining the rock mass proneness to rock bursts and to develop an appropriate assessment scale.
On this basis, a geological and geomechanical system for assessing rock mass proneness to rock bursts, GEO, was developed [46]. This system combines many elements describing the rock mass structure and properties in the Upper Silesian Coal Basin and was proposed to assess the proneness of the entire seam-surrounding rock system to rock bursts [46].
All information on geological conditions in the vicinity of selected points in openings has been involved in the GEO index (e.g., depth, coal seam thickness, and distance of the coal seam from the thick layer of sandstone) [46]. This index also involves information about the geomechanical properties of rock mass and the maximum energy of tremors that may occur with a given rock mass structure. In addition, the GEO index contains information about the rock burst proneness of the entire seam-surrounding rock system. It includes WET index, rock mass number Lg [48], and rock mass bursting proneness index WTG [49]. For each parameter, classes are specified that correspond to the points of the GEO system. Finally, the points associated with each parameter are summed up to give a GEO index value. The minimum, maximum, and average values of the GEO index in the database equaled 44, 134, and 101, respectively. The maximum number of points allowed in the GEO system (i.e., 134) was reached in about 2% of the samples. According to the scale for assessing the natural proneness of a rock mass to rock bursts in the GEO system, for approx. 3.7% of the samples, the rock mass was not prone to rock bursts (GEO index below 68); for approx. 30.8% of the samples, the rock mass was slightly prone to rock burst (GEO index between 68 and 90); and for approx. 65.5% of the samples, the rock mass was strongly prone to rock bursts (GEO index higher than 90). The distribution of the GEO index values, with division into rock bursts (1) and cases without damage and destruction in the openings (0), is shown in Figure 1a. Considering only rock bursts in the dataset, the calculated GEO index ranged from 75 to 134 (mean 109). About 80% of the rock bursts in the dataset occurred in rock mass strongly prone to rock bursts. The remaining approx. 20% of rock bursts occurred in rock mass slightly prone to rock bursts. This may result from extending the rock burst criterion to any excavation damage. 
The GEO system was designed for cases of rock bursts which, according to the mining regulations, meant significant damage to the openings and loss of their functionality. Considering the lack of a formal rock burst damage threshold, the approach adopted here is experimental for Polish mines and offers a new perspective.
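The proneness classes and thresholds quoted above (GEO index below 68, between 68 and 90, and above 90) can be expressed as a simple lookup. The function name and the treatment of the exact boundary values are our assumptions; the text does not state which class a value of exactly 90 falls into.

```python
def geo_proneness_class(geo_index):
    """Map a GEO index value to the rock mass proneness class [46].

    Thresholds follow the scale quoted in the text; the assignment of
    the boundary value 90 to "slightly prone" is an assumption.
    """
    if geo_index < 68:
        return "not prone"
    if geo_index <= 90:
        return "slightly prone"
    return "strongly prone"

print(geo_proneness_class(44))   # database minimum -> "not prone"
print(geo_proneness_class(101))  # database average -> "strongly prone"
```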

2.1.2. Seismic Energy and PPV

This study aimed to determine whether, given the geological, mining, and technical conditions occurring at a selected point of the opening, a tremor of a certain energy, generating a peak particle velocity at this point, leads to a rock burst or not. Rock bursts in coal mines in the USCB are mostly correlated with strong tremors (i.e., with an energy greater than or equal to 1 × 10^5 J). However, there have also been cases of rock bursts after medium-energy tremors (i.e., with an energy of the order of 10^4 J). Low-energy tremors, mainly in the upper energy range of around 10^3 J, may also be associated with damage to the openings; such cases mainly concern drilled headings. The selected excavation points included in the database were associated with tremors of energies from 8 × 10^2 J (ML = 0.58) to 6 × 10^8 J (ML = 3.67). Samples related to strong tremors accounted for approximately 94% of the database. Low- and medium-energy tremors each accounted for approximately 3% of the database.
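The energy-magnitude pairs quoted above are consistent with the relation log10(E) = 1.8 + 1.9 ML, commonly applied to mining tremors in the USCB. The paper itself quotes only the pairs, so treating this formula as the underlying conversion is our assumption.

```python
from math import log10

def local_magnitude(energy_j):
    """ML from seismic energy E [J], assuming log10(E) = 1.8 + 1.9 * ML
    (an energy-magnitude relation commonly used for USCB tremors; the
    text quotes the E/ML pairs but not the formula itself)."""
    return (log10(energy_j) - 1.8) / 1.9

# The three pairs quoted in the text:
print(round(local_magnitude(8e2), 2))  # -> 0.58
print(round(local_magnitude(1e5), 2))  # -> 1.68
print(round(local_magnitude(6e8), 2))  # -> 3.67
```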
Numerous studies have shown the relationship between high values of PPV and the occurrence of rock bursts. Such studies have been conducted in mines in the USCB [52,53,54]. The PPV values for rock bursts in USCB coal mines usually range from about 0.05 m/s to 1 m/s [54]. When mining activities are conducted in areas of concentrated stress, excavation damage may occur at lower values of PPV [54]. There is a high risk to the stability of openings if the PPV value exceeds 0.4 m/s [52]. The PPV value can be measured directly in the openings or calculated using the empirical formula [54]:
log(PPV · R) = 0.66 log M0 − 7.4,     (1)
where PPV is expressed in [m/s], R is the hypocentral distance in [m], and M0 is the scalar seismic moment in [N·m].
The PPV values calculated based on Formula (1) for individual samples in the database ranged from about 0.006 m/s to about 0.77 m/s (mean 0.08 m/s). Considering only rock bursts, the calculated PPV ranged from about 0.043 m/s to about 0.77 m/s (mean 0.15 m/s). The distributions of tremor energies and PPVs, with division into rock bursts (1) and cases without damage and destruction in the openings (0), are illustrated using violin plots in Figure 1b and Figure 1c, respectively. The values of the parameters presented in Figure 1a–c for rock bursts and non-destructive tremors were in a similar range. However, a high value of a given parameter does not mean that a rock burst is certain.
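Formula (1) can be rearranged to compute PPV directly from the scalar seismic moment and the hypocentral distance. The example values of M0 and R below are hypothetical.

```python
from math import log10

def ppv(m0_nm, r_m):
    """Peak particle velocity [m/s] from Formula (1):
    log(PPV * R) = 0.66 * log(M0) - 7.4,
    with M0 in [N*m] and the hypocentral distance R in [m]."""
    return 10 ** (0.66 * log10(m0_nm) - 7.4) / r_m

# Hypothetical tremor: M0 = 1e13 N*m at a hypocentral distance of 300 m
print(round(ppv(1e13, 300), 3))  # -> 0.05 (m/s)
```

The result falls within the 0.006-0.77 m/s range reported for the database, though the inputs here are illustrative only.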

2.1.3. Mining Parameters

A total of nine mining parameters related to the occurrence of rock bursts or to preventing this phenomenon were selected. Six of them can be considered unfavorable, namely: the presence of a mining remnant, the vertical distance between the opening and the mining remnant (in meters), the time since the remnant was left (in years), the neighborhood of goafs in the same coal seam, excessive bed cutting, and other negative factors that are relatively rare in the analyzed mine (e.g., a change of mining direction during longwall mining, drilling the heading across the bedding of a seam, a split or folding of the coal seam, etc.). A mining remnant, left due to an earlier extraction of another coal seam, causes a stress increase in the rock mass [1]. Three values of this parameter were assumed: none (63% of samples), edge of a coal seam (32.5% of samples), and residual coal pillar (4.5% of samples). The presence of additional stress in the coal seam is an important factor determining the occurrence of rock bursts; therefore, the vertical distance of the mining remnant from the excavation is important. According to Polish mining regulations, the occurrence of a remnant in other coal seams must be considered when its vertical distance is between 160 m above and 60 m below the opening. Among the samples in the database, this distance ranged from about 316 m above to about 106 m below the opening (mean 103 m above the opening). The stress increase due to the presence of mining remnants should fade over time; however, it can persist for several decades [1].
The pressure of the rock layers above the boundary of goafs may increase stress in the unmined coal body. An opening drilled in the vicinity of goafs may be influenced by additional stress and thus may have a higher level of rock burst hazard. A neighborhood of goafs was assumed when the distance between the excavation and boundary of goafs did not exceed 50 m. A neighborhood of goafs occurred in about 39% of the samples in the database. Another negative parameter is excessive bed cutting. Drilling several excavations at a short distance from each other causes weakening of the coal body and may lead to a rock burst. Samples with excessive bed cutting (i.e., where at least three excavations were next to each other) accounted for 6.5% of the database.
The next three mining parameters were related to the earlier mining of the adjacent coal seam, which can significantly reduce the rock burst hazard. These parameters include the vertical distance to the extracted adjacent coal seam, the thickness of this seam, and the time since the mining of the adjacent coal seam was undertaken. In this article, such a previously extracted coal seam was referred to as a destressing coal seam, ignoring its vertical distance. According to Polish mining regulations, a coal seam is classified as a lower rock burst hazard degree if a destressing coal seam, deposited not more than 50 m below or 20 m above, is mined with caving.
In the case of a thick coal seam, the destress effect is achieved by mining one of its layers. In general, the destress effect should be greater if the destressing seam is closer and thicker and a shorter time has elapsed since it was excavated. The parameters correlated with the so-called destressing seam were determined for about 71% of the samples in the database. The vertical distance between the excavation and the destressing seam for these samples ranged between 0 m (upper layer of thick coal seam) and about 309 m (mean 75 m). Therefore, the distance criterion of the destressing seam was met by only about 38.6% of these samples. The thickness of the destressing seam was between 1 m and about 4 m (mean 2.5 m), and the time elapsed since destressing mining ranged between 1 year and 34 years (mean 15 years). In the case of the remaining 29% of samples, other seams were certainly mined above them, but their vertical distance and the time that had elapsed since their extraction were large enough to be completely ignored.

2.1.4. Technological Parameters

Three parameters related to the technology of works were also selected (i.e., the type of excavation, the reinforcement of the support, and rock burst prevention). Based on the first of these parameters, the samples were classified as longwall galleries (90.4% of samples), drilled headings (7.4% of samples), and opening-out headings (2.2% of samples). Such an uneven share results from rock bursts occurring mainly in longwall galleries. Advancing the longwall face generates mining pressure in the unmined coal body, so a higher stress level can be expected close to the main gate and tailgate. Drilled headings are also at risk of rock bursts: coal may be ejected from the exposed working face directly into the opening, and the possibilities of protecting against this are limited. Opening-out headings have a transport and ventilation function; these excavations are mostly old and located in protective pillars with high stress levels. Steel arches are the basic support for openings. This support can be reinforced, which may prevent rock bursts or minimize excavation damage. Support reinforcement was applied in 90.9% of the samples in the database. Such a large share results from the fact that the dataset comes from a mine with a high risk of rock bursts, where support reinforcement, mainly by props, is commonly employed. This applies, in particular, to openings with seismic activity and an increased risk of rock bursts. The share of the types of support reinforcement is as follows: props (97.2%), props and bolts together (1.2%), wooden support (0.9%), steel horseheads (0.35%), and props, steel horseheads, and reduced spacing between steel arches together (0.35%). In the selected mine, active rock burst prevention is used on a large scale, based mainly on destress blasting in coal seams and roof rocks. Destress blasting for rock burst prevention was performed for 57.9% of the samples in the database.
In 54.8% of them, long-hole destress blasting in roof rocks was executed together with destress blasting in coal seams; in 40% of these samples, long-hole destress blasting in roof rocks was performed independently; and in 5.2% of them, only destress blasting in coal seams was executed.

2.1.5. Feature Correlation

The fifteen parameters presented above were generally uncorrelated or only slightly correlated with each other. The Spearman correlation coefficient was used as a measure. This coefficient captures any monotonic relationship between two variables, including nonlinear ones, and ranges between −1 and 1. The sign of the Spearman correlation coefficient indicates whether the monotonic relationship is positive or negative. Only a few parameters in the database were strongly positively correlated, for example, the distance from the remnant and the time since it was left, and the distance from the destressing seam and the time that had elapsed since its extraction. The Spearman correlation coefficient in these cases equaled 0.97 and 0.7, respectively. Such a correlation resulted from the fact that shallower seams are usually excavated earlier than those deposited deeper. The energy of the tremor and the associated PPV were moderately positively correlated, with a Spearman correlation coefficient of 0.53. This is understandable because the PPV is proportional to the energy of the tremor; of course, the distance from the tremor focus and the directionality of seismic wave radiation must also be considered. Considering only rock bursts classified as such following the applicable mining regulations, the Spearman correlation coefficient between the GEO index and tremor energy was 0.44.
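As an illustration of the property described above, the following minimal Python sketch (using SciPy's `spearmanr`; the variables are our own toy example, not data from the study) shows that a perfectly monotonic but nonlinear relationship still yields a Spearman coefficient of 1:

```python
import numpy as np
from scipy.stats import spearmanr

# A nonlinear but perfectly monotonic relationship between two variables
x = np.arange(1.0, 21.0)
y = x ** 3

rho, _ = spearmanr(x, y)
# rho equals 1.0: the Spearman coefficient captures any monotonic
# dependence, linear or not, within the range [-1, 1]
```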
The correlation between the fifteen distinguished parameters and the occurrence of rock bursts in the database was also determined. It should be noted that rock bursts accounted for approximately 8.5% of the dataset. In most of the remaining cases, despite high PPV values and the rock mass’s proneness to rock bursts, rock bursts did not occur. This does not prove the lack of usefulness of the analyzed parameters but rather indicates a complicated and nonlinear dependence of the occurrence of rock bursts on them. Strong tremors are often not associated with rock bursts in underground mining openings, even those drilled in rock masses prone to rock bursts. Moreover, in the case of the GEO system, it should be emphasized that it was created based on rock bursts classified according to the Polish mining regulations. The individual classification criteria in the GEO system were adopted for rock bursts understood as dynamic phenomena caused by tremors, as a result of which openings lost their functionality and thus were significantly damaged or destroyed. In this article, even local damage to the excavation was considered a rock burst. In an excavation drilled in a rock mass prone to rock bursts, a rock burst does not have to occur after a strong tremor; moreover, such cases are rare in comparison to cases of non-destructive tremors. Additionally, reinforcement of the support or a destressing effect due to blasting can prevent a rock burst, even in a rock mass prone to rock bursts.
The highest value of the Spearman correlation coefficient was for PPV (0.26), followed by the type of excavation (0.22) and the GEO index (0.12). The Spearman correlation coefficient for the neighborhood of goafs, excessive bed cutting, and reinforcement of the support was 0.11. For the remaining nine parameters, the absolute value of the Spearman correlation coefficient did not exceed 0.1. Negligible negative and positive monotonic relationships were found for five and four features, respectively. A RadViz plot was used to illustrate the cases of rock bursts and the cases without destruction and damage to mining openings against the background of the distinguished features with normalized values (Figure 2). The RadViz plot enables the graphical presentation of multidimensional data. In this way, it is possible to identify existing regularities, detect clusters of similar objects, and indicate unusual observations. The separability of the two classes with respect to the features was visualized in this way.
The dataset was highly imbalanced. Using such data may have resulted in the incorrect classification of cases in the significant minority. To balance the dataset, the adaptive synthetic (ADASYN) sampling algorithm was used, which oversamples the minority class. This algorithm generates synthetic data rather than simply copying the minority samples: synthetic samples with parameters similar to real samples are generated, especially in the vicinity of majority-class samples. In the ADASYN algorithm, the k-nearest neighbors are found for each minority sample, and the majority class’s dominance in its neighborhood is determined. As a result, more synthetic minority samples are generated for neighborhoods dominated by majority-class samples.
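The mechanism described above can be sketched in a few lines of Python. This is a simplified illustration of the ADASYN idea only (the function name and implementation details are our own, not the library implementation used in the study): more synthetic minority samples are generated where majority-class neighbors dominate.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adasyn_oversample(X, y, k=5, seed=0):
    """Simplified ADASYN sketch: oversample the minority class (label 1),
    generating more synthetic samples where majority neighbors dominate."""
    rng = np.random.default_rng(seed)
    X_min = X[y == 1]
    n_needed = int((y == 0).sum() - (y == 1).sum())  # samples to reach balance
    # Share of majority-class samples among each minority sample's k neighbors
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X_min)
    r = np.array([(y[i[1:]] == 0).mean() for i in idx])
    if r.sum() == 0:                      # degenerate case: uniform generation
        r = np.ones(len(X_min))
    r = r / r.sum()
    g = np.rint(r * n_needed).astype(int)  # synthetic count per minority point
    # Interpolate between minority samples and their minority-class neighbors
    k_min = min(k, len(X_min) - 1)
    nn_min = NearestNeighbors(n_neighbors=k_min + 1).fit(X_min)
    _, idx_min = nn_min.kneighbors(X_min)
    synth = []
    for i, gi in enumerate(g):
        for _ in range(gi):
            j = rng.choice(idx_min[i][1:])
            lam = rng.random()
            synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    if synth:
        X = np.vstack([X, np.array(synth)])
        y = np.concatenate([y, np.ones(len(synth), dtype=y.dtype)])
    return X, y
```

Note that only the training portion of the data should ever be balanced in this way; the test data must keep its natural class ratio.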

2.2. Machine Learning Algorithms

Machine learning is a branch of artificial intelligence that demonstrates intelligent behavior by finding relationships in the training data. Machine learning algorithms differ from classical algorithms, which are composed of a specific sequence of instructions. Instead, they create a model based on analyzing the provided training data. The process of creating knowledge about a solution is called learning. The three most frequently mentioned categories of machine learning algorithms are supervised, unsupervised, and reinforcement learning [55,56]. Supervised learning involves identifying a function that predicts future outcomes based on the data containing both the input features and expected outcomes. Predictive models receive instructions on how and what to learn. Supervised machine learning algorithms include, among others, artificial neural networks, adaptive boosting, gradient boosting, extreme gradient boosting, K-nearest neighbors, naïve Bayes, support vector machines, decision trees, and random forests. In unsupervised learning, the training set only consists of the input features and does not include labels (expected outcomes). Unsupervised machine learning algorithms look for patterns based on the provided examples. This group includes K-means, independent component analysis (ICA), and principal component analysis (PCA). Reinforcement learning, a trial-and-error method, occurs in a completely unknown, often dynamic environment. The algorithm uses feedback from its decisions and learns from its experiences by receiving a positive (reward) or negative reinforcement signal (penalty). Q-learning and SARSA are examples of reinforcement learning.
This article demonstrates the use of selected supervised machine learning algorithms, namely an artificial neural network, adaptive boosting, gradient boosting, K-nearest neighbors, and naïve Bayes, to assess potential rock burst hazards in underground openings of an active coal mine. All calculations were performed in the JupyterLab environment [57,58]. All scripts were coded in Python using the machine learning library Scikit-learn (sklearn) [59]. The optimal hyperparameters of each machine learning algorithm were determined using a grid search approach. In the vast majority of cases, the default values proposed in the Scikit-learn library were adopted.
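Hyperparameter tuning of this kind can be sketched with Scikit-learn's `GridSearchCV`. The parameter grid and the synthetic stand-in data below are toy examples of the technique, not the grids used in the study:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the mine dataset (15 features, binary labels)
X, y = make_classification(n_samples=200, n_features=15, random_state=0)

# Exhaustively evaluate each parameter combination with cross-validation
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={"n_neighbors": [3, 5, 7], "p": [1, 2]},
                    cv=5, scoring="recall")  # recall matters for rare events
grid.fit(X, y)
best = grid.best_params_
```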

2.2.1. The Multilayer Perceptron Classifier

The artificial neural network (ANN) is a computational model designed to simulate the network of neurons that make up a human brain. McCulloch and Pitts [60] proposed a mathematical model of a neuron. Hebb [61] described the principles of neural network learning. Subsequently, Rosenblatt [62] created a perceptron, a simple neural network made of independent McCulloch–Pitts neurons. It was innovative to apply the learning process as a method of programming. Minsky and Papert [63] proved that perceptrons could not learn linearly inseparable functions such as XOR, which led to a decreased interest in perceptrons. This situation persisted until Rumelhart et al. [64] proposed the backpropagation algorithm (BP). The use of a hidden layer in the neural network enabled nonlinear calculations. LeCun et al. [65] used neural networks with backpropagation, where layers were constrained to be convolutional.
One of the most commonly used types of unidirectional (feedforward) artificial neural networks is the multilayer perceptron (MLP). The multilayer perceptron classifier (MLPC) can classify datasets that are not linearly separable. Such a neural network was also used for the study presented in this article. The neurons in an MLP network are fully connected between adjacent layers, with no connections between neurons within the same layer. There are three basic types of layers in the MLP network: input, hidden, and output. In the MLP network used, five hidden layers were applied. The more hidden layers a network has, the deeper the dependencies it can find. The connections between neurons are assigned weights established during the network’s training. These weights determine the importance of each variable; inputs with larger weights have a greater impact on the output. All inputs are multiplied by their respective weights and summed. The result is then passed through an activation function, which determines the output value. The role of the activation function is to introduce nonlinearity in the data processing path. Its output value is forwarded to the next layer in the network. Neurons in hidden layers use activation functions such as the rectified linear unit (ReLU) function, the logistic sigmoid function, and the hyperbolic tangent (Tanh) function. The ReLU function outputs the input directly if it is positive; otherwise, it returns zero. It was used for the hidden layers in the constructed MLP network. The most common activation functions in the output layer are linear, logistic, and Softmax. The Softmax function, often used in the output layer of a multilayer perceptron adapted to solve classification problems, was used in the constructed MLP network. The Softmax is an exponential function whose values are normalized so that the sum of the activations for the entire layer equals one.
According to this normalization, the output values can be interpreted as the probabilities of a given input signal belonging to particular classes [66].
The training of MLP networks is possible thanks to the backward error propagation method [64,67]. In this method, the error signal is propagated backward from the output layer, through the hidden layers, toward the input layer. Then, the sum of squares (or another function) of the learning error is minimized using the gradient descent optimization method. The weights of connections between processing elements located in adjacent layers of the network are adjusted in this way. A stochastic gradient-based optimizer, Adam [68], was applied to improve the weights in the trained MLP network. A constant learning rate (step size in updating the weights) of 0.001 was used. The input and output layers comprised fifteen and two neurons, respectively. The hidden layers between them contained fifteen, twelve, nine, seven, and four nodes, respectively.
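The architecture described above can be reproduced in Scikit-learn roughly as follows. Note one caveat: for binary problems, `MLPClassifier` uses a single logistic output neuron rather than the two-neuron Softmax layer described in the paper, so this sketch is only an approximation of the described network, trained here on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 15-feature mine dataset
X, y = make_classification(n_samples=300, n_features=15, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(15, 12, 9, 7, 4),  # five hidden layers
                    activation="relu",          # ReLU in the hidden layers
                    solver="adam",              # stochastic gradient-based optimizer
                    learning_rate_init=0.001,   # constant learning rate
                    max_iter=2000, random_state=0)
mlp.fit(X, y)
# n_layers_ counts input + 5 hidden + output = 7 layers in total
```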

2.2.2. Adaptive and Gradient Boosting Classifiers

Boosting is a general method to increase the effectiveness of any learning algorithm. However, it is often used with decision trees. The decision tree is a machine learning algorithm that divides the dataset into smaller subsets according to their features. The splitting process continues until all the remaining data belong to only one class. The boosting method belongs to ensemble learning, where subsequent classifiers of the same algorithm (e.g., decision tree) are trained, and an ensemble classifier is created based on them. Kearns and Valiant [69] discussed whether a weak learning algorithm, whose efficiency is slightly better than random guessing, could create an effective strong classifier. The boosting method was introduced by Freund and Schapire [70]. There are three main types of boosting algorithms: adaptive boosting (AdaBoost), gradient boosting (GB), and extreme gradient boosting (XGBoost). The first two were selected for use in this article.
The AdaBoost algorithm was proposed by Freund and Schapire [71]. This algorithm initially sets the same weights for all samples in the training dataset. Next, a new classifier is built based on the training dataset, considering the sample weights. This new classifier is added to the ensemble classifier, with a weight corresponding to its accuracy. Finally, the sample weights are updated; correctly classified samples receive a lower weight, and incorrectly classified samples receive a higher weight. In the AdaBoost algorithm, each subsequent classifier tries to classify samples that the previous ones misclassified. The final prediction in the AdaBoost algorithm is based on a majority vote of the weak classifiers’ predictions weighted by their accuracy. The entire process is repeated until the residual error, or the difference between the actual and predicted values, falls below the established threshold. The AdaBoost algorithm used in this article was based on an elementary form of decision tree known as stumps (i.e., the maximum depth of these trees equaled 1). The maximum number of decision trees at which boosting was terminated equaled 50. Stagewise additive modeling using a multiclass exponential loss function, real variant (SAMME.R), was applied during model training. This algorithm uses the class probabilities during training.
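In Scikit-learn, these settings correspond roughly to the sketch below (the default base learner of `AdaBoostClassifier` is already a depth-1 decision stump; the SAMME.R variant was the default in older library versions and was later removed, so it is not set explicitly here). The data are a synthetic stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

# Default base estimator: DecisionTreeClassifier(max_depth=1), i.e., a stump;
# boosting terminates after at most 50 trees
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X, y)
```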
The gradient boosting algorithm was developed by Friedman [72] and, like the AdaBoost algorithm, is based on a sequential training technique. All the weak classifiers (e.g., decision trees) used to build the model influence the final decision regarding the classification of individual samples. However, this algorithm does not increase the weights of incorrectly classified samples. The contribution of each weak classifier to the ensemble classifier is based on the gradient descent algorithm, an iterative optimization algorithm that minimizes the loss function. The subsequent classifiers are maximally correlated with the negative gradient of the loss function associated with the whole ensemble classifier. The GB algorithm used in this article was based on decision trees, and the number of boosting stages was 100. The log loss function was optimized. The maximum depth, which limits the number of nodes in each tree, equaled 3. The learning rate, which determines the share of each tree in the ensemble classifier, equaled 0.1. Such a small learning rate value requires more trees but provides better generalization ability. This regularization method is referred to as shrinkage. The Friedman mean square error (Friedman mse) criterion was applied to measure the quality of a split in decision trees.
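The gradient boosting configuration described above matches Scikit-learn's defaults, so the model can be sketched as follows (synthetic stand-in data again):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

gb = GradientBoostingClassifier(n_estimators=100,        # boosting stages
                                learning_rate=0.1,       # shrinkage
                                max_depth=3,             # limits nodes per tree
                                criterion="friedman_mse",
                                random_state=0)
gb.fit(X, y)  # the default loss optimized is the log loss
```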

2.2.3. K-Nearest Neighbors Classifier

The K-nearest neighbors (KNN) classifier is one of the most widely used classifiers in supervised machine learning. It was introduced by Fix and Hodges [73] and developed by Cover [74]. The KNN classifier belongs to the family of “lazy learning” classifiers, which means that it only stores the training dataset and does not undergo training. All calculations take place when a classification is being made. This classifier assigns a given sample to the class most frequent among its k nearest neighbors, based on the assumption that samples with similar parameters belong to the same class. The KNN algorithm first determines the k nearest neighbors of a given sample based on a distance metric (e.g., Euclidean, Manhattan, Minkowski, or Hamming). Then, a majority vote is taken among the selected k neighbors, and the analyzed sample is assigned to the dominant class in its neighborhood. In this article, the Minkowski distance was used, which is expressed by the formula:
$$d_{\mathrm{Minkowski}} = \left( \sum_{i=1}^{n} \left| x_i - y_i \right|^{p} \right)^{\frac{1}{p}}$$
where xi and yi are the i-th feature values of the two samples for which the distance is calculated, and the value of the hyperparameter p equaled 2 (the Euclidean distance measure). Moreover, k = 5 was adopted as the number of nearest neighbors for classification, and all were weighted equally.
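The Minkowski distance reduces to the Euclidean distance for p = 2, which can be verified directly. The helper function below is our own illustration; the classifier configuration matches the settings stated above:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def minkowski(x, y, p=2):
    """Minkowski distance between two feature vectors."""
    return (np.abs(x - y) ** p).sum() ** (1.0 / p)

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 6.0, 3.0])
d = minkowski(a, b)   # p = 2 -> Euclidean distance = sqrt(9 + 16) = 5.0

# The configuration used in the study: k = 5, Minkowski metric with p = 2,
# all neighbors weighted equally
knn = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=2,
                           weights="uniform")
```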

2.2.4. Gaussian Naïve Bayes Classifier

The naïve Bayes classifier is a probabilistic classifier based on Bayes’ theorem. The probability that a sample belongs to class (label) c given the feature values x1, x2, … xm (i.e., the posterior probability) can be calculated according to the following formula:
$$P(c \mid x_1, x_2, \ldots, x_m) = \frac{P(x_1, x_2, \ldots, x_m \mid c) \, P(c)}{P(x_1, x_2, \ldots, x_m)}$$
where P(x1, x2, … xm|c) is the probability of the feature values x1, x2, … xm given class c (the likelihood), P(c) is the class prior probability (i.e., the ratio of the number of samples of a given class c to the number of all samples in the database), and P(x1, x2, … xm) is the marginal probability (i.e., the probability of the feature values x1, x2, … xm occurring across all possible classes in the database). Since the denominator in Formula (3), i.e., the marginal probability, does not depend on the class, it does not change the classification result. In the naïve Bayes classifier, it is assumed that the features are independent. This assumption is often incorrect, but in practice it does not prevent the preparation of an effective classifier. The assumption of the independence of feature values within a class leads to the simplification:
$$P(x_1, x_2, \ldots, x_m \mid c) = \prod_{k=1}^{m} P(x_k \mid c)$$
A statistical probability distribution can be assumed for each feature in a database. The most commonly assumed types of distributions are normal (Gaussian), multinomial, and Bernoulli. In this article, it was assumed that the features had a Gaussian distribution. Hence, the classifier used was Gaussian naïve Bayes (GNB). The distribution density function was in the form of the normal distribution:
$$P(x_k \mid c) = \frac{1}{\sigma_c \sqrt{2\pi}} \, e^{-\frac{(x_k - \mu_c)^2}{2\sigma_c^2}}$$
where µc is the mean value of the feature k for samples belonging to class c, and σc is the corresponding standard deviation of this feature within class c.
The application of a naïve Bayes classifier amounts to the calculation of the posterior probabilities for each class and finally classifying the sample to the class for which the value of the calculated probability is the highest:
$$c_{NB} = \arg\max_{c \in C} P(c) \prod_{k=1}^{m} P(x_k \mid c)$$
In the applied Gaussian naïve Bayes, the priors were adjusted according to the data.
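A minimal Scikit-learn sketch of the classifier described above, on imbalanced toy data of our own: with `priors=None` (the default), the class prior probabilities P(c) are estimated from the class frequencies in the training data, as noted.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Imbalanced toy data: 90 majority samples, 10 minority samples
X = np.vstack([rng.normal(0.0, 1.0, (90, 3)), rng.normal(3.0, 1.0, (10, 3))])
y = np.array([0] * 90 + [1] * 10)

gnb = GaussianNB().fit(X, y)   # priors adjusted according to the data
# class_prior_ reflects the observed class frequencies: [0.9, 0.1]
```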

2.2.5. Ensemble Classifier

In many cases, the collective answer is better than a single expert’s answer. This phenomenon is known as the wisdom of the crowd, and it can also be used when the predictions of a group of classifiers are combined. Very often, a group of classifiers predicts better than a single one, even the most efficient classifier. Such a group of classifiers is called an ensemble classifier. This type of classifier can be created based on a group of classifiers of one type (e.g., decision trees) for each separate subset of the training set. An example of one of the commonly used ensemble algorithms is a random forest, built from a set of decision trees, in which the final classification is determined by majority voting. Ensemble learning is also used in boosting algorithms, as mentioned earlier.
An ensemble classifier is most effective when the individual component classifiers are as independent as possible. One way to achieve this is to train the component classifiers using different machine learning algorithms. This increases the probability that the component classifiers will not make the same errors, leading to greater efficiency of the ensemble classifier. In this article, after training, the MLP, AdaBoost, GB, KNN, and GNB classifiers were combined into one ensemble classifier. Hard voting was used (i.e., the predictions calculated by the individual classifiers were combined, and the class that received the most votes was chosen).
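The combination described above can be sketched with Scikit-learn's `VotingClassifier`. Hyperparameters are shortened for brevity, and the data are a synthetic stand-in, so this is an outline of the technique rather than the exact study configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(15, 12, 9, 7, 4),
                              max_iter=2000, random_state=0)),
        ("ada", AdaBoostClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("gnb", GaussianNB()),
    ],
    voting="hard",   # each model casts one vote; the majority class wins
)
ensemble.fit(X, y)
```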

3. Results

Using the train–test split function, the dataset was randomly divided into training and test data in an 80% to 20% ratio. The training dataset was pre-processed using a standard scaler algorithm (i.e., the features were standardized by removing their mean and scaling them to unit variance). Then, the ADASYN algorithm was applied to balance the training dataset. The scaling function determined for the training dataset was used to scale the test dataset. In the manner described above, ten independent sets of training and test data were prepared. The sizes of the randomly selected datasets are shown in Table 1. P stands for the number of rock bursts, and N stands for the number of non-destructive tremors. The value in brackets indicates the number of rock bursts after balancing the training dataset. Ten models of each of the mentioned machine learning algorithms (i.e., the MLP, AdaBoost, GB, KNN, and GNB classifiers and the ensemble classifier) were trained and tested on these datasets.
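The data preparation steps can be sketched as follows on a synthetic imbalanced stand-in dataset. The key detail is the order of operations: the scaler is fitted on the training portion only and then reused for the test portion, and balancing is applied to the training data alone:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Imbalanced stand-in for the 627-sample, 15-feature mine dataset
X, y = make_classification(n_samples=627, n_features=15, weights=[0.915],
                           random_state=0)

# 80/20 random split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_tr)   # statistics from the training data only
X_tr_s = scaler.transform(X_tr)       # zero mean, unit variance
X_te_s = scaler.transform(X_te)       # scaled with the training statistics
# ADASYN balancing of (X_tr_s, y_tr) would follow at this point
```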
The use of binary classification (i.e., positive for rock bursts and negative for cases without damage and destruction to the openings) enabled the use of basic model quality measures: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). The TP and TN are the numbers of correctly classified positive and negative cases, respectively. The FP is the number of samples incorrectly classified as positive but actually negative, representing the number of false alarms. In the case of FN, the situation is reversed, and it concerns the underestimation of positive cases. Based on these basic quantities, the model efficiency parameters were calculated: accuracy, recall (sensitivity, true positive rate), precision (positive predictive value), and F1-score. The first of these parameters is the ratio of all correctly classified samples to all samples in the dataset:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FN + FP}$$
The recall is the ratio of TP to all positive cases in the dataset. It can be calculated as follows:
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
Precision describes the degree to which positive predictions can be trusted. It is the ratio of TP to all cases classified by the model as positive:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
F1-score is the harmonic mean of precision and recall, which are assumed to be equally important. F1-score is calculated according to the following formula:
$$\mathrm{F1\text{-}score} = \frac{2 \cdot \mathrm{Recall} \cdot \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} = \frac{2TP}{2TP + FN + FP}$$
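These four measures can be checked on a small hypothetical example of our own; Scikit-learn's metric functions implement the formulas directly:

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)

# Toy predictions: TP = 2, TN = 4, FP = 1, FN = 1
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]

acc = accuracy_score(y_true, y_pred)    # (2 + 4) / 8 = 0.75
rec = recall_score(y_true, y_pred)      # 2 / (2 + 1) = 2/3
prec = precision_score(y_true, y_pred)  # 2 / (2 + 1) = 2/3
f1 = f1_score(y_true, y_pred)           # 2*2 / (2*2 + 1 + 1) = 2/3
```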
The efficiency characteristics of the models of selected classifiers: MLP, AdaBoost, GB, KNN, GNB, and ensemble hard-voting classifier are presented in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, respectively.
For a comparison of the trained and tested models of the machine learning algorithms, the receiver operating characteristic (ROC) curve was also used (Figure 3). The ROC curve is created by plotting the recall (true positive rate) against the false positive rate, and it illustrates the diagnostic ability of a binary classifier as its discrimination threshold is varied. The more accurate the model, the more the ROC curve is shifted toward the upper-left corner. The diagonal represents a random classifier. The area under the ROC curve is an indicator describing the effectiveness of the model. The values of the area under the ROC curves for the models of the selected machine learning algorithms are shown in Table 8.
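For a small hypothetical score vector of our own, the ROC curve and its area can be computed as follows. The area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one:

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

# Hypothetical classifier scores for 5 negative and 3 positive cases
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 1])
scores = np.array([0.10, 0.20, 0.30, 0.35, 0.40, 0.70, 0.80, 0.90])

fpr, tpr, _ = roc_curve(y_true, scores)  # sweep the discrimination threshold
area = auc(fpr, tpr)   # 13 of 15 positive-negative pairs ranked correctly
```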
During classification, there is often a trade-off between discovering all the positive cases in the data and reducing the number of erroneous results; a balance must be sought between these two opposing goals. Models should therefore be characterized by both high recall and high precision. Figure 4 shows the precision–recall curves for all trained and tested models and for all datasets. The more accurate the model, the more the precision–recall curve is shifted toward the upper-right corner. A large area under the curve means both high recall and high precision. In Table 9, the areas under the precision–recall curves are shown.

4. Discussion

The risk of rock bursts is still a significant problem in mines located in the USCB and around the world. The article presents the possibility of classifying rock bursts in a selected hard coal mine using examples from the past. The investigation of rock bursts in coal mines in the USCB shows that they mostly occur in longwall galleries (maingates or tailgates) and drilled headings. However, in recent years, rock bursts have also occurred in the opening-out headings, usually in the protective pillars, especially when the longwall mining was ending in their area.
This article presents the results of the classification of rock bursts and non-destructive tremors in an active underground coal mine using machine learning algorithms. The methodology used allowed for the correct classification of rock bursts in the conditions of numerous non-destructive tremors and in the complex geological and mining conditions of an underground coal mine.
Geological, geomechanical, mining, and technical parameters were taken into account. The first two types of parameters were included in one GEO index, adapted to the conditions prevailing in underground mines in the USCB and a rock burst database collected following the Polish mining regulations. The selected points were then correlated with tremors in their vicinity, and the PPV values at these points were calculated. Such a comprehensive use of the selected parameters made it possible to assess whether a rock burst may occur at a given point in the opening. The presented method could also be applied to assess the potential risk of rock bursts for other openings in underground coal mines.
Efforts were made to at least partially reflect the real ratio of rock bursts to most cases of tremors that did not cause damage to the openings. The database contained 627 samples. The share of rock bursts in the current database was only about 8.5%. As a result, the dataset was highly imbalanced, and cases of tremors without damage to the openings predominated.
The parameters related to the occurrence of rock bursts did not show a linear relationship (i.e., an increase in a given parameter does not necessarily have to be associated with the occurrence of rock bursts in the opening). Many parameters influence the occurrence of a rock burst in a specific configuration, and a large number of them were included. Balancing the dataset with the ADASYN sampling algorithm was necessary for the correct classification of minority rock burst events.
Since the occurrence of rock bursts and the range of damage are not linear, machine learning algorithms were used. Machine learning algorithms including MLP, AdaBoost, GB, KNN, and GNB classifiers are particularly suitable for solving nonlinear issues. The average accuracy for the models of applied algorithms equaled: 0.92 for MLP, 0.9 for GB, 0.87 for AdaBoost, 0.85 for KNN, and 0.77 for GNB. The average recall of the models equaled 0.67 for GNB, 0.63 for GB, 0.62 for MLP, 0.6 for KNN, and 0.56 for AdaBoost. The average precision of these models ranged from 0.24 (for GNB) to 0.54 (for MLP). The average F1-score of these models was between 0.34 (for GNB) and 0.57 (for MLP).
The use of an ensemble hard-voting classifier consisting of models of the five previously mentioned classifiers was proposed. The average accuracy value of the ensemble classifier models was 0.91, comparable to the highest average accuracy of the single classifier (i.e., 0.92 for MLP). Moreover, the average recall value of the ensemble classifier models was the highest (i.e., 0.74). The average precision of the ensemble classifier models equaled 0.51, and the average F1-score of these models equaled 0.6 and was the highest. The number of false alarms accounted for about 3.5% to 10.3% of all cases without damage to the openings (mean 6.9%). The ensemble classifier models had the highest average area under the ROC curve (i.e., 0.83). Additionally, the highest value of this parameter (i.e., 0.92) was in one of the models of this classifier. The area under the ROC curve for the remaining models ranged from 0.72 to 0.78 on average. The mean value of the area under the precision–recall curve was also the highest for the ensemble classifier models (0.64). However, only half of the ensemble classifier models had the highest area under the precision–recall curve. For the other models, it was smaller (3 cases) or equal (2 cases) in comparison with the single algorithm models.
This article proposed a new approach to assessing the potential state of rock burst hazards in openings. It would be necessary to determine the geological, geomechanical, mining, and technical conditions at the points of the selected opening. By making certain assumptions, the trained models could be used to assess the potential risk of rock bursts in planned mine openings. The maximum predicted energy determined in accordance with mining regulations (e.g., by analytical, empirical or statistical methods) could be assumed as the tremor energy. Some assumptions about the location of the tremors’ foci would also be necessary as this affects the calculated PPV. Usually, strong tremors are associated with the fracturing of a thick layer of sandstone deposited above the seam. Moreover, they usually occur close to remnants and edges in other seams, where rock layers are deformed.
An important issue to consider in machine learning is the availability (quantity) and quality of data. The amount of data affects the generalization efficiency of the model. If there is too little data, especially for complex models, overfitting may occur. The study conducted for the dataset used did not show evidence of this phenomenon. Another limitation is the representativeness of the data. The database used contains records from one coal mine, so the resulting models were adapted to local conditions. Applying them without additional training to other mines may prove problematic. However, the possibility that the trained models captured some general patterns cannot be ruled out either. Moreover, it cannot be ruled out that the effectiveness of the models will deteriorate as the number of non-destructive tremors in the database continues to increase (i.e., as the share in the database approaches the actual ratio of non-destructive tremors to rock bursts).
For some rock bursts and non-destructive tremors, the parameter configurations used were very similar. Considering such cases could be the reason for differences in the effectiveness of the trained models. It cannot be ruled out that using a different parameter configuration would result in a better distinction between rock bursts and non-destructive tremors. It should be noted that the GEO index was created on the basis of a database of rock bursts in accordance with mining regulations (i.e., on the basis of dynamic phenomena causing the loss of functionality of mine openings). In the database used for the study, phenomena causing local destruction and damage of the openings were also considered to be rock bursts. An additional problem is the directionality of energy radiation from the source of the tremor, which depends on the focal mechanism. This can have a significant impact on the PPV value in the opening. Studies of the focal mechanism are not conducted as standard practice in coal mines in the USCB.

5. Conclusions

Models of selected machine learning algorithms were trained to classify rock bursts and non-destructive tremors in a selected hard coal mine.
(1) The dominance of strong and non-destructive tremors over rock bursts in the database was considered.
(2) The database corresponding to real conditions required balancing; therefore, the ADASYN algorithm was used.
(3) The GEO index was used as one of the features, describing the proneness of the entire seam–surrounding rock system to rock bursts.
(4) Rock bursts were best classified by an ensemble classifier composed of the MLP, AdaBoost, GB, KNN, and GNB classifiers. On average, almost three-quarters of the rock bursts in the test datasets were correctly classified by the ensemble classifier.
(5) Developing a machine learning-based rock burst classification could help prevent accidents by alerting miners and engineers in advance of potential rock bursts. However, further research in this area should be continued using various machine learning algorithms and their configurations in the ensemble classifier.

Author Contributions

Conceptualization, Ł.W. and M.B.; Methodology, Ł.W.; Software, S.I.; Validation, Ł.W., S.I. and D.B.A.; Formal analysis, Ł.W. and S.I.; Investigation, Ł.W. and S.I.; Resources, Ł.W. and M.B.; Data curation, Ł.W. and M.B.; Writing—original draft preparation, Ł.W., M.B., S.I. and D.B.A.; Writing—review and editing, Ł.W. and D.B.A.; Visualization, Ł.W.; Supervision, D.B.A.; Project administration, Ł.W.; Funding acquisition, Ł.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education, Republic of Poland (Statutory Activity of the Central Mining Institute, Grant No. 11133011).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from the Polish Mining Group and are available from the authors with the permission of the Polish Mining Group.

Acknowledgments

The authors would like to thank the Polish Mining Group for providing the data used for the calculations and discussing the results.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Violin plots representing statistics of the (a) GEO index, (b) tremor energy, and (c) PPV of samples included in the database.
Figure 2. RadViz plot for points representing rock bursts (1) and cases without damage and destruction in the openings (0).
Figure 3. ROC curves for models of selected machine learning algorithms: (a) MLP, (b) AdaBoost, (c) GB, (d) KNN, (e) GNB, (f) ensemble classifier.
Figure 4. Precision–recall curves for models of selected machine learning algorithms: (a) MLP, (b) AdaBoost, (c) GB, (d) KNN, (e) GNB, (f) ensemble classifier.
Table 1. The size of randomly selected datasets (P—positive cases, i.e., rock bursts, with the positive-class size after ADASYN balancing in parentheses; N—negative cases).

| Dataset | Training P | Training N | Test P | Test N |
|---|---|---|---|---|
| 1 | 42 (462) | 459 | 11 | 115 |
| 2 | 38 (457) | 463 | 15 | 111 |
| 3 | 40 (465) | 461 | 13 | 113 |
| 4 | 43 (449) | 458 | 10 | 116 |
| 5 | 44 (465) | 457 | 9 | 117 |
| 6 | 44 (463) | 457 | 9 | 117 |
| 7 | 40 (462) | 461 | 13 | 113 |
| 8 | 43 (462) | 458 | 10 | 116 |
| 9 | 44 (468) | 457 | 9 | 117 |
| 10 | 39 (474) | 462 | 14 | 112 |
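The balanced training counts in parentheses above come from ADASYN oversampling of the minority (rock burst) class. The sketch below illustrates the core idea, interpolating synthetic positives between a minority sample and one of its minority-class nearest neighbours; real ADASYN additionally weights each sample by how many majority-class neighbours surround it, which this simplified version omits. The array sizes mirror dataset 1 (42 rock bursts balanced up to 462); the feature values are random placeholders.

```python
import numpy as np

def oversample_minority(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolation."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority-class neighbours of sample i (excluding itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        lam = rng.random()  # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Stand-in for the 42 training rock bursts of dataset 1 (15 features each)
X_min = np.random.default_rng(0).normal(size=(42, 15))
X_new = oversample_minority(X_min, n_new=462 - 42, rng=0)
balanced = np.vstack([X_min, X_new])  # 462 positive training samples
```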
Table 2. Efficiency of MLP classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 6 | 112 | 3 | 5 | 0.94 | 0.55 | 0.67 | 0.60 |
| 2 | 12 | 106 | 5 | 3 | 0.94 | 0.80 | 0.71 | 0.75 |
| 3 | 6 | 108 | 5 | 7 | 0.91 | 0.46 | 0.55 | 0.50 |
| 4 | 6 | 110 | 6 | 4 | 0.92 | 0.60 | 0.50 | 0.55 |
| 5 | 6 | 109 | 8 | 3 | 0.91 | 0.67 | 0.43 | 0.52 |
| 6 | 6 | 109 | 8 | 3 | 0.91 | 0.67 | 0.43 | 0.52 |
| 7 | 9 | 106 | 7 | 4 | 0.91 | 0.69 | 0.56 | 0.62 |
| 8 | 7 | 112 | 4 | 3 | 0.94 | 0.70 | 0.64 | 0.67 |
| 9 | 5 | 108 | 9 | 4 | 0.90 | 0.56 | 0.36 | 0.43 |
| 10 | 7 | 106 | 6 | 7 | 0.90 | 0.50 | 0.54 | 0.52 |
| mean | | | | | 0.92 | 0.62 | 0.54 | 0.57 |
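The metric columns in Tables 2–7 follow directly from the confusion counts in the same row. A small helper makes this explicit; here it is checked against the first MLP row (TP = 6, TN = 112, FP = 3, FN = 5):

```python
def metrics(tp, tn, fp, fn):
    """Standard binary classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)        # share of actual rock bursts detected
    precision = tp / (tp + fp)     # share of rock burst alarms that were real
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

acc, rec, prec, f1 = metrics(tp=6, tn=112, fp=3, fn=5)
# rounds to 0.94, 0.55, 0.67, 0.60, matching dataset 1 in Table 2
```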
Table 3. Efficiency of AdaBoost classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 110 | 5 | 4 | 0.93 | 0.64 | 0.58 | 0.61 |
| 2 | 8 | 101 | 10 | 7 | 0.87 | 0.53 | 0.44 | 0.48 |
| 3 | 8 | 101 | 12 | 5 | 0.87 | 0.62 | 0.40 | 0.48 |
| 4 | 3 | 109 | 7 | 7 | 0.89 | 0.30 | 0.30 | 0.30 |
| 5 | 5 | 103 | 14 | 4 | 0.86 | 0.56 | 0.26 | 0.36 |
| 6 | 6 | 106 | 11 | 3 | 0.89 | 0.67 | 0.35 | 0.46 |
| 7 | 6 | 103 | 10 | 7 | 0.87 | 0.46 | 0.38 | 0.41 |
| 8 | 7 | 108 | 8 | 3 | 0.91 | 0.70 | 0.47 | 0.56 |
| 9 | 6 | 97 | 20 | 3 | 0.82 | 0.67 | 0.23 | 0.34 |
| 10 | 6 | 100 | 12 | 8 | 0.84 | 0.43 | 0.33 | 0.38 |
| mean | | | | | 0.87 | 0.56 | 0.37 | 0.44 |
Table 4. Efficiency of GB classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 6 | 110 | 5 | 5 | 0.92 | 0.55 | 0.55 | 0.55 |
| 2 | 7 | 104 | 7 | 8 | 0.88 | 0.47 | 0.50 | 0.48 |
| 3 | 9 | 106 | 7 | 4 | 0.91 | 0.69 | 0.56 | 0.62 |
| 4 | 6 | 108 | 8 | 4 | 0.90 | 0.60 | 0.43 | 0.50 |
| 5 | 7 | 107 | 10 | 2 | 0.90 | 0.78 | 0.41 | 0.54 |
| 6 | 7 | 109 | 8 | 2 | 0.92 | 0.78 | 0.47 | 0.58 |
| 7 | 8 | 103 | 10 | 5 | 0.88 | 0.62 | 0.44 | 0.52 |
| 8 | 8 | 110 | 6 | 2 | 0.94 | 0.80 | 0.57 | 0.67 |
| 9 | 7 | 104 | 13 | 2 | 0.88 | 0.78 | 0.35 | 0.48 |
| 10 | 4 | 107 | 5 | 10 | 0.88 | 0.29 | 0.44 | 0.35 |
| mean | | | | | 0.90 | 0.63 | 0.47 | 0.53 |
Table 5. Efficiency of KNN classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 104 | 11 | 8 | 0.85 | 0.27 | 0.21 | 0.24 |
| 2 | 12 | 99 | 12 | 3 | 0.88 | 0.80 | 0.50 | 0.62 |
| 3 | 10 | 91 | 22 | 3 | 0.80 | 0.77 | 0.31 | 0.44 |
| 4 | 5 | 105 | 11 | 5 | 0.87 | 0.50 | 0.31 | 0.38 |
| 5 | 6 | 106 | 11 | 3 | 0.89 | 0.67 | 0.35 | 0.46 |
| 6 | 6 | 96 | 21 | 3 | 0.81 | 0.67 | 0.22 | 0.33 |
| 7 | 11 | 99 | 14 | 2 | 0.87 | 0.85 | 0.44 | 0.58 |
| 8 | 5 | 108 | 8 | 5 | 0.90 | 0.50 | 0.38 | 0.43 |
| 9 | 5 | 100 | 17 | 4 | 0.83 | 0.56 | 0.23 | 0.32 |
| 10 | 6 | 100 | 12 | 8 | 0.84 | 0.43 | 0.33 | 0.38 |
| mean | | | | | 0.85 | 0.60 | 0.33 | 0.42 |
Table 6. Efficiency of GNB classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 82 | 33 | 4 | 0.71 | 0.64 | 0.18 | 0.27 |
| 2 | 12 | 87 | 24 | 3 | 0.79 | 0.80 | 0.33 | 0.47 |
| 3 | 9 | 78 | 35 | 4 | 0.69 | 0.69 | 0.20 | 0.32 |
| 4 | 5 | 96 | 20 | 5 | 0.80 | 0.50 | 0.20 | 0.29 |
| 5 | 6 | 85 | 32 | 3 | 0.72 | 0.67 | 0.16 | 0.26 |
| 6 | 8 | 86 | 31 | 1 | 0.75 | 0.89 | 0.21 | 0.33 |
| 7 | 9 | 87 | 26 | 4 | 0.76 | 0.69 | 0.26 | 0.38 |
| 8 | 7 | 100 | 16 | 3 | 0.85 | 0.70 | 0.30 | 0.42 |
| 9 | 6 | 88 | 29 | 3 | 0.75 | 0.67 | 0.17 | 0.27 |
| 10 | 7 | 99 | 13 | 7 | 0.84 | 0.50 | 0.35 | 0.41 |
| mean | | | | | 0.77 | 0.67 | 0.24 | 0.34 |
Table 7. Efficiency of ensemble classifier models.

| Dataset | TP | TN | FP | FN | Accuracy | Recall | Precision | F1-Score |
|---|---|---|---|---|---|---|---|---|
| 1 | 7 | 109 | 6 | 4 | 0.92 | 0.64 | 0.54 | 0.58 |
| 2 | 13 | 104 | 7 | 2 | 0.93 | 0.87 | 0.65 | 0.74 |
| 3 | 11 | 103 | 10 | 2 | 0.90 | 0.85 | 0.52 | 0.65 |
| 4 | 5 | 109 | 7 | 5 | 0.90 | 0.50 | 0.42 | 0.45 |
| 5 | 7 | 110 | 7 | 2 | 0.93 | 0.78 | 0.50 | 0.61 |
| 6 | 8 | 108 | 9 | 1 | 0.92 | 0.89 | 0.47 | 0.62 |
| 7 | 10 | 103 | 10 | 3 | 0.90 | 0.77 | 0.50 | 0.61 |
| 8 | 8 | 112 | 4 | 2 | 0.95 | 0.80 | 0.67 | 0.73 |
| 9 | 7 | 105 | 12 | 2 | 0.89 | 0.78 | 0.37 | 0.50 |
| 10 | 7 | 105 | 7 | 7 | 0.89 | 0.50 | 0.50 | 0.50 |
| mean | | | | | 0.91 | 0.74 | 0.51 | 0.60 |
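The ensemble predictions behind Table 7 are obtained by majority (hard) voting over the five base classifiers. For a single tremor, the mechanism reduces to counting the class labels cast by the models; the vote values below are invented purely for illustration.

```python
from collections import Counter

def hard_vote(labels):
    """Return the class label cast by the majority of base models."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical votes for one tremor (1 = rock burst, 0 = non-destructive)
votes = {"MLP": 1, "AdaBoost": 0, "GB": 1, "KNN": 1, "GNB": 0}
decision = hard_vote(votes.values())  # three of five flag a rock burst -> 1
```

With an odd number of voters (five here), a tie between the two classes cannot occur.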
Table 8. Area under ROC curves for models of the selected machine learning algorithms.

| Dataset | MLP | AdaBoost | GB | KNN | GNB | Ensemble Classifier |
|---|---|---|---|---|---|---|
| 1 | 0.76 | 0.80 | 0.75 | 0.59 | 0.67 | 0.79 |
| 2 | 0.88 | 0.72 | 0.70 | 0.85 | 0.79 | 0.90 |
| 3 | 0.71 | 0.75 | 0.81 | 0.79 | 0.69 | 0.88 |
| 4 | 0.77 | 0.62 | 0.77 | 0.70 | 0.66 | 0.72 |
| 5 | 0.80 | 0.72 | 0.85 | 0.79 | 0.70 | 0.86 |
| 6 | 0.80 | 0.79 | 0.86 | 0.74 | 0.81 | 0.91 |
| 7 | 0.82 | 0.69 | 0.76 | 0.86 | 0.73 | 0.84 |
| 8 | 0.83 | 0.82 | 0.87 | 0.72 | 0.78 | 0.88 |
| 9 | 0.74 | 0.75 | 0.83 | 0.71 | 0.71 | 0.84 |
| 10 | 0.72 | 0.66 | 0.62 | 0.66 | 0.69 | 0.72 |
| mean | 0.78 | 0.73 | 0.78 | 0.74 | 0.72 | 0.83 |
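The ROC areas above summarize how well each model's continuous output ranks rock bursts above non-destructive tremors across all decision thresholds. With scikit-learn this is a one-line computation; the labels and scores below are toy values, not data from the study.

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 0, 0, 1, 0, 1, 1]                    # 1 = rock burst
y_score = [0.1, 0.2, 0.3, 0.8, 0.5, 0.4, 0.7, 0.9]    # e.g. predict_proba[:, 1]
auc = roc_auc_score(y_true, y_score)
# 13 of the 15 positive-negative pairs are ranked correctly, so auc = 13/15
```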
Table 9. Area under precision–recall curves for models of selected machine learning algorithms.

| Dataset | MLP | AdaBoost | GB | KNN | GNB | Ensemble Classifier |
|---|---|---|---|---|---|---|
| 1 | 0.63 | 0.63 | 0.57 | 0.28 | 0.42 | 0.60 |
| 2 | 0.77 | 0.52 | 0.52 | 0.66 | 0.58 | 0.77 |
| 3 | 0.53 | 0.53 | 0.64 | 0.55 | 0.46 | 0.69 |
| 4 | 0.57 | 0.33 | 0.53 | 0.43 | 0.37 | 0.48 |
| 5 | 0.56 | 0.43 | 0.60 | 0.52 | 0.42 | 0.65 |
| 6 | 0.56 | 0.52 | 0.63 | 0.46 | 0.55 | 0.68 |
| 7 | 0.64 | 0.45 | 0.55 | 0.65 | 0.49 | 0.65 |
| 8 | 0.68 | 0.60 | 0.69 | 0.46 | 0.51 | 0.74 |
| 9 | 0.47 | 0.46 | 0.57 | 0.41 | 0.43 | 0.58 |
| 10 | 0.55 | 0.41 | 0.40 | 0.41 | 0.45 | 0.53 |
| mean | 0.60 | 0.49 | 0.57 | 0.48 | 0.47 | 0.64 |