Review

Application of Machine Learning in the Field of Intraoperative Neurophysiological Monitoring: A Narrative Review

1 Department of Rehabilitation Medicine, Pohang Stroke and Spine Hospital, Pohang 37659, Korea
2 School of Computer Science and Electrical Engineering, Handong Global University, Pohang 37554, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(15), 7943; https://doi.org/10.3390/app12157943
Submission received: 7 July 2022 / Revised: 2 August 2022 / Accepted: 5 August 2022 / Published: 8 August 2022
(This article belongs to the Special Issue Research on Biomedical Signal Processing)

Abstract:
Intraoperative neurophysiological monitoring (IONM) is being applied to a wide range of surgical fields as a diagnostic tool to protect patients from neural injuries that may occur during surgery. However, several contributing factors complicate the interpretation of IONM, and it is labor- and training-intensive. Meanwhile, machine learning (ML)-based medical research has been growing rapidly, and many studies on the clinical application of ML algorithms have been published in recent years. Despite this, the application of ML to IONM remains limited. Major challenges in applying ML to IONM include the presence of non-surgical contributing factors, ambiguity in the definition of false-positive cases, and their inter-rater variability. Nevertheless, we believe that the application of ML enables objective and reliable IONM, while overcoming the aforementioned problems that experts may encounter. Large-scale, standardized studies and technical considerations are required to overcome certain obstacles to the use of ML in IONM in the future.

1. Introduction

Intraoperative neurophysiological monitoring (IONM) is an essential diagnostic tool for the improvement of patient safety via detection of neurological changes during surgery. IONM is currently being applied in various types of surgery, such as open cranial surgery, spinal decompression, head and neck surgery, and peripheral nerve surgery [1,2,3]. The most prominent advantage of IONM is its use to confirm functional integrity in real time during surgery. When a warning sign occurs, an immediate rescue intervention can be performed in the operating room, minimizing neural injuries and enabling rapid postoperative treatment [4].
However, despite its advantages, some limitations exist in interpreting IONM. In particular, several factors complicate the real-time interpretation of multimodal signals during IONM. In interpreting neurophysiological changes related to surgical factors, a multidisciplinary approach between surgeons and physiatrists is essential, and substantial information sharing between them is vital for accurate interpretation [5]. Further, non-surgical factors, such as anesthesia, the general condition of the patient, and mechanical defects, have to be considered simultaneously with surgical factors [6]. Another hurdle in interpreting IONM is that experts may interpret the same results differently [7]. Therefore, the performance and interpretation of IONM require a substantial level of training to minimize inter-rater variability. Lastly, to respond sensitively to neural deterioration that occurs during surgery, the expert must maintain high concentration even during long operations. Consequently, IONM is a complicated, labor-intensive, and expensive process (Figure 1) [8].
The use of machine learning (ML) in medical research and clinical practice is rapidly expanding [9]. In particular, ML is increasingly being used for diagnosis and prognosis [10,11], as well as for disease classification, replacing existing methods [10]. Moreover, ML can execute proxy decision-making at the level of medical experts [12] and can readily and efficiently handle a large number of samples and variables [13]. Artificial intelligence (AI) models have the additional benefit of continuously improving themselves by learning from additional data and by applying more advanced techniques [14]. Although their performance depends on data quality and learning algorithms, in general, ML models have yielded promising results in clinical medicine [10,15].
In this narrative review, we focus on ML and its application in the field of IONM. We first summarize studies in which ML algorithms have been applied to IONM and then present a comprehensive review of ML algorithms. Furthermore, we discuss the limitations that should be considered in the application of ML to IONM. Finally, we conclude by pointing out the scope for future research to enable ML-based technologies to support human experts and cover the shortcomings of IONM.

2. Literature Review

2.1. Search Protocol

We searched articles from 2016 to 2022 with the following search terms: (“machine learning” OR “deep learning” OR “artificial intelligence”) AND (“intraoperative neuromonitoring” OR “intraoperative neurophysiological monitoring”). We searched the literature from the Cochrane Library, MEDLINE, and EMBASE and conducted a hand search of suitable manuscripts. We selected only original articles with human participants. In addition, we only included articles written in English. Finally, we excluded imaging or morphological research and anesthetic research (Figure 2).

2.2. Related Studies

Table 1 summarizes studies on the application of ML algorithms to IONM.
Jamaludin et al. [16] presented k-nearest neighbors (KNN)- and bagged trees-based ML models to predict positive outcomes after lumbar surgeries in 55 patients. A positive outcome was defined as motor improvement after the surgery. They compared ML-based prediction methods with pre-existing criteria (50% transcranial motor evoked potential improvement). In their work, the ML-based method showed relatively higher sensitivity (87.5%) but lower specificity (33.3%), and was overall inferior to the pre-existing criteria for predicting postoperative motor improvement. Consequently, they suggested that their proposed method could be improved through a larger-scale study.
Agaronnik et al. [17] developed a deep learning-based automated detection system for neuromonitoring documentation. They first identified operative reports containing neuromonitoring documentation by rule-based natural language processing. Subsequently, they applied a deep learning-based natural language processing model to identify events indicating a change in status, difficulty in establishing baseline signals, and a stable course, from the reports of 993 patients who underwent spinal surgery. For detection of a change in status, they achieved an area under the receiver operating characteristic curve (AUROC) of 1.0 and an F1 score of 0.80 (discussed further below). Their results suggest that deep learning-based natural language processing models can identify medical documentation of IONM from a large volume of reports in a validated and timely manner.
Kortus et al. [18] predicted electromyography (EMG) signal characteristics during thyroid surgery in 34 patients. They utilized a Bayesian convolutional neural network (CNN) approach to simultaneously classify action potentials and assess signal characteristics. The extended model with two hidden layers with sigmoid activation yielded the best predictive value, with an accuracy of 97.6%. By applying a Bayesian deep learning model, they estimated the uncertainty of the model output, which improved the interpretability of the prediction. They demonstrated that the deep learning model was suited for robust interpretation of electrophysiological signals.
Zha et al. [19] applied a deep learning model to free-running EMG for recurrent laryngeal nerve monitoring during thyroid surgery. They classified the EMG according to morphology and presented a hybrid model that combined a CNN approach with a long short-term memory (LSTM) network. Their proposed CNN-LSTM hybrid model yielded an accuracy of 89.54% and a sensitivity of 94.23%. Their results demonstrated the possibility of reducing inter-rater variability in the reading of free-running EMGs by using deep learning models, reducing the interpretive burden on the expert.
Verdonck et al. [20] presented a model for the interpretation of outliers via train-of-four (TOF) measurements during intraoperative acceleromyographic neuromuscular monitoring. They used a cost-sensitive logistic regression model to analyze 533 TOF measurements from 35 patients. In terms of the predictive power of this model, the AUROC was 0.91 (95% confidence interval: 0.72–0.97) and the F1 score was 0.86 (0.48–0.97). The model thus achieved strong binary classification performance. Their study is important since it showed that the model could analyze TOF measurements to automatically identify outliers during intraoperative acceleromyographic neuromuscular monitoring.
Qiao et al. [21] conducted visual evoked potential (VEP) monitoring in 76 patients who underwent surgical decompression for sellar region tumors. They presented a model that could classify amplitude changes in VEPs during surgery, by combining CNN and recurrent neural network (RNN) algorithms. The target class was divided into three groups: increased VEP amplitude (>25% increase), decreased VEP amplitude (>25% decrease), and no change in VEP amplitude. In this study, the overall accuracy of multiclass classification was 87.4% (84.2–90.1%). The sensitivities for classification of no change in VEP, increasing VEP, and decreasing VEP were 92.6%, 78.9%, and 83.7%, respectively, and their specificities were 80.5%, 93.3%, and 100.0%, respectively.
Somatosensory evoked potential (SEP) is a modality that acts as the framework of intraoperative spinal surgery monitoring [4]. Fan et al. [22] utilized least squares and multi-support vector regression models on 15 patients undergoing spinal surgery to intraoperatively interpret the SEP results. They defined the warning criteria as an amplitude reduction of ≥50% or a latency delay of ≥10%. Target outcomes were classified as successful, false-positive, or trauma cases. Their intelligent decision system lowered the false warning rate compared with their conventional method and enabled more accurate detection of spinal cord trauma. The multi-support vector regression model performed better than the least squares support vector regression model.
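The amplitude/latency warning criteria above reduce to a simple numeric check. The following sketch is illustrative only; the function name and interface are ours, not from the cited study:

```python
def sep_warning(baseline_amp, current_amp, baseline_lat, current_lat):
    """Return True if the common SEP warning criteria are met:
    an amplitude reduction of >=50% or a latency delay of >=10%
    relative to baseline."""
    amp_drop = (baseline_amp - current_amp) / baseline_amp
    lat_delay = (current_lat - baseline_lat) / baseline_lat
    return amp_drop >= 0.50 or lat_delay >= 0.10

# A 55% amplitude drop triggers a warning; a 5% latency delay alone does not.
```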
Table 1. The application of machine learning in the field of intraoperative neurophysiological monitoring.
Jamaludin et al. [16] (2022)
Samples: 55 patients who underwent lumbar surgeries
IONM modality: MEP
Models: KNN and bagged trees
Target outcome: positive outcome (motor improvement)
Results: the proposed method was inferior to the existing criteria (sensitivity: 87.5%; specificity: 33.3%)

Agaronnik et al. [17] (2022)
Samples: 993 patients who underwent spinal surgery
IONM modality: MEP and SEP
Model: deep learning-based natural language processing
Target outcomes and results:
- Change in status: AUROC 1.00; F1 score 0.80
- Difficulty establishing baseline: AUROC 0.97; F1 score 0.80
- Stable course: AUROC 0.91; F1 score 0.93

Kortus et al. [18] (2021)
Samples: 34 patients who underwent thyroid surgery
IONM modality: EMG
Model: Bayesian CNN
Target outcome: classification of action potentials
Results: accuracy 97.6%; precision 97.7%; recall 97.6%

Zha et al. [19] (2021)
Samples: 5 patients who underwent thyroid surgery
IONM modality: free-running EMG
Model: hybrid CNN-LSTM
Target outcome: EMG signal waveforms (quiet, evoked, irritation, burst, injury, and artifact)
Results: the hybrid model could automatically classify the free-running EMG (accuracy 89.54%; sensitivity 94.23%)

Verdonck et al. [20] (2021)
Samples: 533 TOF samples from 35 patients
IONM modality: AMG
Model: cost-sensitive logistic regression
Target outcome: outlier TOF measurement
Results: AMG-based intraoperative detection of TOF outliers displayed increased monitoring consistency (F1 score 0.86; AUROC 0.91)

Qiao et al. [21] (2019)
Samples: 76 cases with sellar region tumor
IONM modality: VEP
Model: CNN and RNN combination
Target outcome: increasing, decreasing, or no change of VEP amplitude
Results: overall accuracy of the CNN/RNN combination vs. the traditional method using single VEP images: 87.4% vs. 83.1%

Fan et al. [22] (2016)
Samples: 10 successful surgeries (158 samples), 4 false-positive cases (72 samples), and 1 trauma case (14 samples)
IONM modality: SEP
Models: LS-SVR and M-SVR
Target outcomes: successful case (no interruption); false-positive case (surgery interrupted by an expert, without spinal cord injury); trauma case (surgery interrupted by an expert, with spinal cord injury)
Results: false-positive rates for NBM vs. LS-SVR vs. M-SVR: 0.304, 0.080, and 0.068, respectively; true warning rates: 0.500, 0.714, and 0.714, respectively
IONM, intraoperative neurophysiological monitoring; MEP, motor evoked potential; KNN, k-nearest neighbors; SEP, somatosensory evoked potential; AUROC, area under the receiver operating characteristic curve; EMG, electromyography; CNN, convolutional neural network; LSTM, long short-term memory; TOF, train-of-four; AMG, acceleromyography; VEP, visual evoked potential; RNN, recurrent neural network; LS, least squares; M, multi; SVR, support vector regression; NBM, nominal baseline method.

3. Overview of Machine Learning

ML is a subfield of AI in which the knowledge to perform a target task is learnt from data. ML can be applied to various tasks, such as regression and classification [23]. An ML system consists of multiple steps as illustrated in Figure 3 (top), some of which can be omitted depending on the type of the target task, the nature of the data, and the properties of the ML model. In IONM, the ML system often takes (serial) evoked potentials and EMG signals as input. The preprocessing step transforms and normalizes the data to make it easier to process in subsequent steps. The feature extraction step converts the data into a vector representation, from which the ML model produces the prediction result. In IONM, the ML model typically discriminates altered signals or predicts postoperative neurologic deficit or postoperative functional gain. Optionally, the postprocessing step refines or reformats the prediction results.
There are various ML models. Linear models are widely used for numerical data analysis owing to their simplicity and interpretability. However, more sophisticated methods, such as neural networks, support vector machines (SVMs), and decision trees (DTs), are also frequently used in medical data analysis [24]. In particular, tree-based ensemble techniques such as random forests and extreme gradient boosting (XG-Boost) have exhibited excellent performances in numerical data analysis. In recent years, deep learning has attracted considerable attention owing to its outstanding performance [25]. Deep learning is based on neural networks and is particularly effective in analyzing complex data, such as images, text, and time-series data [26,27].
The training algorithms adjust the parameters or structure of the model to optimize a learning objective as shown in Figure 3 (bottom). Widely used learning criteria include loss minimization and likelihood maximization. In supervised learning, the model learns from the ground truth labels specified by human experts. The training algorithm minimizes loss, such as cross-entropy or the mean squared error, that reflects the difference between the model output and the ground truth [28]. Further, unsupervised learning is used to learn data distribution or specific tasks, such as clustering and reproduction, without labels [29]. In recent years, self-supervised learning has been widely used for learning feature representations from a large volume of unlabeled data [30]. In self-supervised learning, the model achieves knowledge for multiple tasks through artificially defined tasks whose ground truth can be derived from the data itself, e.g., predicting the next data from the past data and filling in the masked part of the data. Reinforcement learning trains a model that interacts with the environment. The model then performs actions that affect the environment, and the environment rewards the model’s actions. The training algorithm optimizes the model to select the actions that maximize the expected reward.
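The two loss functions mentioned above can be written compactly. A minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average of the squared differences between targets and predictions.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def cross_entropy(y_true, p_pred, eps=1e-12):
    # Binary cross-entropy; y_true in {0, 1}, p_pred a probability in (0, 1).
    # eps clipping avoids log(0) for extreme predictions.
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Perfect predictions give (near-)zero loss in both cases.
```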
Figure 3. A general framework for training and applying ML models. The bottom figure illustrates supervised learning using ground truth labels.
The validation and performance evaluation of ML models is crucial in their application to the field of medicine [28]. For evaluation, the primary metric for classification tasks is accuracy, which is the fraction of correctly classified samples among all test samples. Precision and recall are widely used for detection and identification tasks. Precision is the fraction of retrieved instances that were relevant, while recall is the fraction of relevant instances that were retrieved. The F1-score, defined as the harmonic mean of precision and recall, is used to measure the balanced performance of the model [31].
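All four metrics follow directly from the binary confusion-matrix counts; a minimal sketch (our own helper, not a library API):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 score for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # retrieved that are relevant
    recall = tp / (tp + fn) if tp + fn else 0.0      # relevant that are retrieved
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean
    return accuracy, precision, recall, f1
```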
In general, researchers divide data into three non-overlapping subsets with different purposes: training, validation, and testing. The training set is used only to train the model, and the validation set is used to determine hyperparameters and select a model. The performance of the selected ML model is assessed on the testing set. One potential problem in evaluation is that bias in the selection of the testing set can harm the reliability of the evaluation results [32]. This can be minimized by applying techniques such as k-fold cross-validation. The k-fold cross-validation procedure is as follows [33]:
  • Randomly split the dataset into k groups (e.g., for k = 5, the groups are S1, S2, S3, S4, and S5).
  • Repeat the training and testing k times. At the i-th iteration, use Si for testing and all other groups for training and hyperparameter determination.
  • Average the k evaluation results.
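The steps above can be sketched with scikit-learn's `KFold` (assuming scikit-learn is available; the synthetic dataset and choice of logistic regression are ours, for illustration only):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary labels

# Step 1: randomly split into k = 5 groups.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
# Step 2: train on k-1 groups, test on the held-out group, k times.
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

# Step 3: average the k evaluation results.
mean_accuracy = float(np.mean(scores))
```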
However, despite the rapid development of ML, its adoption in medicine has been slower than that in other areas [34]. In clinical practice, the decision-making process of a human expert is very sophisticated and complex. A model can only learn such a process by being trained on a large number of high-quality samples [9]. The collection and labeling of large-scale medical datasets are labor-intensive and expensive. In addition, medical research involves the use of human data, which is accompanied by ethical considerations. Another hurdle in the application of ML to the medical field is the hesitation of many human experts to accept the results of black-box models [35]. Moreover, deep learning-based image segmentation techniques are essential for the analysis of advanced medical images, such as those generated with computed tomography or magnetic resonance imaging [36]. Therefore, deep learning models can only be applied to medical research if the infrastructure is in place for the accurate and efficient processing of large quantities of image segmentation results [37,38].

4. Representative Machine Learning Models for IONM-Related Research

Previous works on IONM have applied various types of ML models. In this section, we briefly introduce the representative ML models that can be useful to predict or analyze IONM data, including those listed in Table 1.

4.1. Neural Networks

Neural networks have been the major ML models applied to the field of IONM. For example, a Transformer-based natural language processing model has been used to analyze neuromonitoring documents, and CNNs and RNNs have been used to classify action potentials and EMG signal waveforms, respectively. Here, we provide an overview of neural networks.

4.1.1. Artificial Neural Networks

An artificial neural network (ANN) is an ML algorithm imitating the human brain [39]. Neurons in the human brain combine signals (stimuli) from multiple upstream neurons; when the combined stimulus exceeds a threshold, the neuron relays the resulting signal to downstream neurons. ANNs were designed according to this principle. A neural network consists of an input layer that receives data from multiple inputs, an output layer that produces the output results, and one or more hidden layers between them [24]. Each layer comprises multiple neurons and contains learning parameters, such as connection weights w_ij and biases b_j, where i and j are the indices of the input and output neurons, respectively. Given an input vector
x = (x_1, x_2, …, x_n),
each layer computes the output
y = (y_1, y_2, …, y_m)
via a linear combination followed by a nonlinearity; for instance, the output may be calculated as
y_j = f(Σ_i w_ij x_i + b_j),
where f(·) denotes a nonlinear activation function. The behavior of a neural network depends on the values of the connection weights and biases. The learning algorithm determines the optimal values of these parameters for the target task by minimizing the loss function on the training data.
The neural network learns to map an input into the desired output. An ANN composed of multiple layers with nonlinear activation functions can approximate complex maps [27]. However, the training of a large-scale neural network requires a large amount of training data and computational power [40]. In particular, when trained with a limited number of samples, ANNs often suffer from overfitting and are not generalizable to different input data.
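The per-layer equation above maps directly onto a few lines of NumPy; the sigmoid is chosen here as an arbitrary example of the activation f:

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected layer: y_j = f(sum_i w_ij * x_i + b_j),
    with a sigmoid activation f."""
    z = x @ W + b                     # linear combination of inputs
    return 1.0 / (1.0 + np.exp(-z))   # elementwise nonlinearity

x = np.array([1.0, -2.0, 0.5])  # input vector (n = 3)
W = np.zeros((3, 2))            # connection weights w_ij (3 inputs, m = 2 outputs)
b = np.zeros(2)                 # biases b_j
y = dense_layer(x, W, b)        # with all-zero parameters, sigmoid(0) = 0.5
```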

4.1.2. Convolutional Neural Networks

Composed of large numbers of layers, deep neural networks can perform high levels of abstraction and are, therefore, excellent in processing complex data such as images, text, and sounds. CNNs are the most popular deep learning architectures for image processing [41]. CNNs have heterogeneous structures that combine various types of layers, such as those for convolution, pooling, normalization, and self-attention, as well as dense layers, skip connections, and dropouts [36]. Most CNN layers are composed of multi-channel feature maps. The convolutional layers learn position-invariant local features that capture the 2D local patterns of the input image.

4.1.3. Recurrent Neural Networks

RNNs are a class of neural networks that are specialized in the processing of iterative and sequential data. Unlike feedforward networks, RNNs have feedback connections to deliver information from the past to the future [42]. RNNs have been widely used in natural language and speech processing. In medicine, RNNs are frequently used for the analysis of medical data that require continuous signal reading, such as EMG or fluoroscopic images [19,43]. An encoder RNN and a decoder RNN can be combined to learn mappings between sequential data of different lengths [9].
Simple RNNs have the limitation that they cannot learn long-term dependencies from time-series data. LSTM networks [44] and gated recurrent units (GRUs) [45] are advanced RNN architectures that overcome this limitation by applying the gating mechanism. However, all RNNs, including LSTM networks and GRUs, have the drawback that they are hard to parallelize and, therefore, slower than other architectures.

4.1.4. Transformers

Recently, transformers have exhibited higher performance than RNNs in natural language and speech processing [46]. A transformer network consists of a stack of transformer blocks, each of which combines a self-attention sub-layer and a multi-layer perceptron sub-layer. Self-attention is easy to parallelize and refers to a wider context than recurrent connections in RNNs do. RNNs and transformers can be used to process medical textual data (e.g., extracting a specific word from a text-based medical record) [47].
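Scaled dot-product self-attention, the core of the transformer block, can be sketched in NumPy as follows. This is a deliberately simplified single head with Q = K = V = X; real transformers apply learned linear projections to obtain the queries, keys, and values:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence X of shape
    (seq_len, d). Each output row is a softmax-weighted mixture of all
    input rows, so every position attends to the whole context."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X                               # context-mixed outputs

X = np.random.default_rng(0).normal(size=(5, 8))     # 5 tokens, dimension 8
out = self_attention(X)
```

Because the attention weights do not depend on sequential recurrence, all positions can be computed in parallel, which is the efficiency advantage over RNNs mentioned above.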

4.1.5. Bayesian Neural Networks

Most neural networks provide prediction results as fixed values without an indication of uncertainty. Without uncertainty modeling, it is challenging to determine the appropriate level of confidence in the output. This obstacle causes reluctance among medical experts to accept the results of ML models.
Bayesian neural networks provide the posterior distribution of the output value rather than a single deterministic value [48]. In general, the prediction and its uncertainty are provided as the mean and variance of the output distribution, respectively.
Bayesian neural networks model one or both of two types of uncertainties: aleatoric uncertainty, which captures variation inherent in the data, and epistemic uncertainty, which accounts for uncertainty in the model [49]. Epistemic uncertainty is modeled by estimating the posterior distribution of the model parameters, whereas aleatoric uncertainty is modeled by a neural network that estimates the density of the output quantity. Bayesian neural networks can be implemented with various types of neural networks including CNN, RNN, and transformers.
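A lightweight way to obtain the predictive mean and variance described above is to aggregate multiple stochastic forward passes (e.g., from an ensemble or Monte Carlo dropout). An illustrative NumPy sketch with made-up toy outputs, not a full Bayesian network:

```python
import numpy as np

def predictive_mean_variance(predictions):
    """Given stacked outputs from multiple stochastic forward passes
    (shape: n_passes x n_outputs), return the predictive mean and the
    variance across passes as an epistemic-uncertainty estimate."""
    preds = np.asarray(predictions, dtype=float)
    return preds.mean(axis=0), preds.var(axis=0)

# Three hypothetical forward passes for a single output quantity:
passes = [[0.90], [0.80], [1.00]]
mean, var = predictive_mean_variance(passes)
# A larger variance across passes signals lower model confidence.
```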

4.2. Support Vector Machines

SVMs have higher generalizability than other traditional ML models when the training data are limited. When the training samples are represented in vector space, SVMs find the boundary with the maximum margin from the nearest samples in each class [50]. Linear SVMs find linear boundaries, whereas nonlinear SVMs combine a linear SVM and a nonlinear transformation using the "kernel trick" to classify complex nonlinear patterns [51]. SVMs are binary classifiers. However, it is possible to discriminate between multiple classes by combining multiple SVMs in one-vs-one or one-vs-rest approaches.
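The kernel trick is easiest to see on data that no linear boundary separates, such as an XOR-like quadrant pattern (the toy data are our own construction, assuming scikit-learn is available):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # XOR-like labels: not linearly separable

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)    # nonlinear boundary via the kernel trick

linear_acc = linear_svm.score(X, y)      # near chance on this pattern
rbf_acc = rbf_svm.score(X, y)            # fits the quadrant structure well
```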

4.3. Regularized Logistic Regression

Logistic regression is a classification algorithm that extends linear regression by combining it with a logistic function. This method is widely used in medical statistics. However, if the model deals with many input variables with an insufficient number of training samples, logistic regression poses the risk of overfitting. When overfitting, the model yields a low degree of training loss but a substantial degree of validation loss. Overfitting is a common issue with most ML models that generally increases with the capacity of the model [52]. Overfitting can be reduced with the application of regularization techniques. The loss function of regularized logistic regression was designed to reduce not only the prediction error but also the norm of the weight vector. L1 and L2 regularization reduce the L1 and L2 norms of the weight vector, respectively [10,53]. L2 regularization is generally more effective in reducing overfitting, whereas L1 regularization is used more frequently for feature selection. L1 regularization is also known as lasso regularization, and L2 regularization as ridge regularization. Elastic net is a hybrid model that applies both L1 and L2 regularizations [54].
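In scikit-learn, the penalty type is simply a hyperparameter of `LogisticRegression`. The sketch below contrasts L1 and L2 on synthetic data in which most features are irrelevant (the dataset construction and regularization strength are our illustrative choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # only the first two features matter

# L1 (lasso-like) tends to zero out irrelevant coefficients, which makes it
# useful for feature selection; L2 (ridge-like) shrinks coefficients toward
# zero without producing exact sparsity.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
l2 = LogisticRegression(penalty="l2", solver="liblinear", C=0.1).fit(X, y)

l1_nonzero = int(np.sum(np.abs(l1.coef_) > 1e-8))
l2_nonzero = int(np.sum(np.abs(l2.coef_) > 1e-8))
```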

4.4. Random Forests

Ensemble learning is a promising technique to improve the accuracy of ML by combining the results of multiple classification or regression models [10,55]. There are two primary strategies to combine ML models: bagging and boosting. In bagging, the training data are resampled with replacement into overlapping subsets (bootstrap samples), and each model is trained on one of them. The results of the models are combined by averaging or voting. Random forests are ensemble models that combine decision trees by using the bagging strategy. Decision trees are vulnerable to overfitting [28], which is effectively reduced with bagging [56]. In addition, random forests retain the advantage of decision trees in that they do not require data normalization and are relatively easy to interpret.
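A minimal random-forest sketch in scikit-learn (the synthetic data are ours, for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)  # nonlinear synthetic target

# Each of the 100 trees is trained on a bootstrap sample of the data
# (bagging), and their predictions are combined by majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
train_acc = forest.score(X, y)  # note: no feature scaling was required
```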

4.5. Extreme Gradient Boosting

Boosting is another ensemble strategy that builds multiple models sequentially, one at a time. In each step, the new model learns by focusing on the samples that were misclassified by the previous models. XG-Boost is one of the most advanced boosting algorithms, yielding excellent performance with tabular data [55]. It is an extension of the gradient boosting machine (GBM) with improved scalability [57]. GBMs build decision trees via function approximation: in each step, the GBM finds a new decision tree that reduces the bias of the previous trees via a gradient-based method, and XG-Boost additionally exploits a second-order Taylor approximation of the loss. GBMs excel in many tasks but are slow and not scalable [58]. XG-Boost uses multiple techniques to improve the efficiency of the GBM in terms of computational and memory requirements. Consequently, it is roughly 10 times faster than conventional GBMs and scalable to billions of examples in distributed or memory-limited environments. In addition, its accuracy can be improved and overfitting reduced by adjusting the hyperparameters according to the data and environment, such as the computing infrastructure and memory size [59].
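The sequential bias-reduction idea can be sketched with scikit-learn's `GradientBoostingClassifier`, a standard GBM; XG-Boost itself lives in the separate `xgboost` package, so this is a substitute for illustration (synthetic data and hyperparameter values are ours):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # synthetic nonlinear target

# Each of the 50 shallow trees is fitted to reduce the error left by the
# previous ones; learning_rate scales each tree's contribution.
gbm = GradientBoostingClassifier(n_estimators=50, max_depth=2,
                                 learning_rate=0.1,
                                 random_state=0).fit(X, y)
train_acc = gbm.score(X, y)
```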

4.6. Hyperparameters for Each ML Model

Each ML model has a set of hyperparameters to specify the model size, the types of the components, the strength of regularization, and other properties. The performance of the ML model is significantly affected by the hyperparameters [60,61]. Table 2 lists the major hyperparameters of ML models. Some hyperparameters in Table 2 specify training options, such as the batch size, learning rate, maximum number of training iterations, early stopping criteria, and algorithms for initialization and optimization, which are important for training ML models.

5. Limitations

Nonsurgical factors are important confounders in the interpretation of IONM. In particular, IONM modalities are very sensitive to changes brought about by anesthesia-related factors [62]. The anesthetic methodology used, the use of neuromuscular blockade, the patient’s blood pressure and body temperature, and prolonged operation time, among other factors, can substantially affect IONM signals, even in the absence of a neural insult [63]. Cross-disciplinary collaboration is essential in the construction of a model that can consider these various factors simultaneously. In addition, since many variables need to be processed, it is essential that models are trained and validated on high-quality datasets with large numbers of samples that share the same features.
Another point to consider is that false-positive results introduce ambiguity into the interpretation of IONM signals [64]. When a warning signal occurs during surgery, regardless of its reliability (true or false positive), surgeons and anesthesiologists respond by initiating a rescue intervention process [6]. This may be an important confounding factor in the predictive value of ML algorithms. If postoperative neurological deficit is defined as a dependent variable during the construction of an ML model, there may be disagreements in the input to provide to an ML algorithm when a warning signal is issued. Therefore, as demonstrated by Zha et al. [19], morphological classification may be a more realistic alternative to the direct interpretation of evoked potential. In other words, although the ML algorithm reads the signals, the human expert’s intervention remains essential in determining the reliability and cause of a warning sign.
Lastly, inter-rater variability in the interpretation of IONM signals is inevitable when human experts are involved [65]. For example, results will be interpreted differently depending on the definition of the baseline. The presence or absence of a warning sign depends on whether the baseline is static or changes in response to previous waveforms during the surgery [66]. Studies also vary in their definitions of postoperative neurological deficits [67]. This difference in the interpretation of the ground truth can cause high variability between ML algorithms. The interpretation of IONM results may also vary depending on the degree of training of the expert [8].

6. Future Perspectives

To build an ML-based IONM model that can be applied to the clinical field in the future, the following issues should be considered.
IONM is a diagnostic tool that uses multiple modalities, and the interpretation method differs slightly among them. To date, we are aware of only limited studies on the use of ML in the interpretation of MEP and SEP; such models may play a key role in central nervous system (CNS) surgeries. Therefore, additional validation studies of ML models for each modality should be conducted. Furthermore, several studies have demonstrated that, in the interpretation of IONM signals, predictive power can be increased by considering multiple modalities simultaneously rather than a single modality [68,69]. Hence, there is a need for complex models that can utilize various modalities simultaneously.
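As a hypothetical illustration of such multimodal use, the sketch below combines per-modality alarm probabilities (e.g., from separate MEP and SEP classifiers) by weighted late fusion. The modality weights and decision threshold are assumptions for illustration only, not values reported in the cited studies.

```python
def fuse_alarm_probs(probs: dict, weights: dict, threshold: float = 0.5) -> bool:
    """Late fusion: weighted average of per-modality alarm probabilities,
    thresholded into a single fused alarm decision."""
    total = sum(weights[m] for m in probs)
    score = sum(probs[m] * weights[m] for m in probs) / total
    return score >= threshold

# A strong MEP alarm with a quiet SEP channel; MEP is weighted more heavily.
fused = fuse_alarm_probs({"MEP": 0.9, "SEP": 0.3},
                         {"MEP": 0.6, "SEP": 0.4})
```

Late fusion is only one design choice; a joint model trained on concatenated multimodal features could instead learn cross-modality interactions directly, at the cost of needing more training data.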
Moreover, models should be tailored to each surgical method for optimal outcome prediction. For example, evoked-potential warning signs in open cranial surgery differ slightly from those that occur during spinal cord decompression surgery [70]. In addition, warning signs for peripheral nerve surgery are completely different from those that occur in CNS surgery [71].
We believe that ML-based IONM interpretation models will be useful in the prediction of patients’ outcomes by assessing their intraoperative evoked potentials. This topic has been actively studied in the field of cervical decompression surgery [72]. Furthermore, such studies are being conducted for open cranial and peripheral nerve surgeries [71]. If ML models can perform precise sequential interpretation, they will be able to assess a patient’s neural integrity during surgery and predict subsequent clinical recovery.

7. Conclusions

IONM is a valuable tool for improving patient safety and minimizing neurological damage during surgery. However, its interpretation is relatively complicated, expensive, and labor-intensive, and it requires extensive training of human experts. The efficiency and reliability of IONM may be enhanced with the use of ML models. However, standardized, large-scale data collection and technical considerations are required to overcome the limitations of such models and to provide practical support for experts. Furthermore, since IONM comprises many modalities and is used in different types of surgery, ML models should be tailored accordingly for optimal performance; ML models that use multiple modalities should likewise be established. Much research is required to achieve these goals. Through these efforts, ML-based IONM interpretation can become a valuable, applicable technology and can ultimately further guarantee patient safety.

Author Contributions

Conceptualization, D.P.; investigation, D.P. and I.K.; writing—original draft preparation, D.P. and I.K.; writing—review and editing, D.P. and I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Stankovic, P.; Wittlinger, J.; Georgiew, R.; Dominas, N.; Hoch, S.; Wilhelm, T. Continuous intraoperative neuromonitoring (cIONM) in head and neck surgery—A review. HNO 2020, 68, 86–92.
2. Shiban, E.; Meyer, B. Intraoperatives Neuromonitoring in der rekonstruktiven Halswirbelsäulenchirurgie [Intraoperative neuromonitoring in reconstructive cervical spine surgery]. Orthopäde 2018, 47, 526–529.
3. Einarsson, H.B.; Poulsen, F.R.; Derejko, M.; Korshoj, A.R.; Qerama, E.; Pedersen, C.B.; Halle, B.; Nielsen, T.H.; Clausen, A.H.; Korshoj, A.R.; et al. Intraoperative neuromonitoring during brain surgery. Ugeskr Laeger 2021, 183, V09200712.
4. Stecker, M. A review of intraoperative monitoring for spinal surgery. Surg. Neurol. Int. 2012, 3, S174–S187.
5. Tewari, A.; Francis, L.; Samy, R.N.; Kurth, D.C.; Castle, J.; Frye, T.; Mahmoud, M. Intraoperative neurophysiological monitoring team’s communique with anesthesia professionals. J. Anaesthesiol. Clin. Pharmacol. 2018, 34, 84–93.
6. Park, D.; Kim, B.H.; Lee, S.-E.; Jeong, E.; Cho, K.; Park, J.K.; Choi, Y.-J.; Jin, S.; Hong, D.; Kim, M.-C. Usefulness of Intraoperative Neurophysiological Monitoring during the Clipping of Unruptured Intracranial Aneurysm: Diagnostic Efficacy and Detailed Protocol. Front. Surg. 2021, 8, 631053.
7. Gruenbaum, B.F.; Gruenbaum, S.E. Neurophysiological monitoring during neurosurgery: Anesthetic considerations based on outcome evidence. Curr. Opin. Anaesthesiol. 2019, 32, 580–584.
8. Wojtczak, B.; Kaliszewski, K.; Sutkowski, K.; Głód, M.; Barczyński, M. The learning curve for intraoperative neuromonitoring of the recurrent laryngeal nerve in thyroid surgery. Langenbeck’s Arch. Surg. 2016, 402, 701–708.
9. Toh, C.; Brody, J.P. Applications of Machine Learning in Healthcare. In Smart Manufacturing—When Artificial Intelligence Meets the Internet of Things; IntechOpen: London, UK, 2021.
10. Park, D.; Jeong, E.; Kim, H.; Pyun, H.W.; Kim, H.; Choi, Y.-J.; Kim, Y.; Jin, S.; Hong, D.; Lee, D.W.; et al. Machine Learning-Based Three-Month Outcome Prediction in Acute Ischemic Stroke: A Single Cerebrovascular-Specialty Hospital Study in South Korea. Diagnostics 2021, 11, 1909.
11. Kim, J.O.; Jeong, Y.-S.; Kim, J.H.; Lee, J.-W.; Park, D.; Kim, H.-S. Machine Learning-Based Cardiovascular Disease Prediction Model: A Cohort Study on the Korean National Health Insurance Service Health Screening Database. Diagnostics 2021, 11, 943.
12. Yoo, T.K.; Ryu, I.H.; Choi, H.; Kim, J.K.; Lee, I.S.; Kim, J.S.; Lee, G.; Rim, T.H. Explainable Machine Learning Approach as a Tool to Understand Factors Used to Select the Refractive Surgery Technique on the Expert Level. Transl. Vis. Sci. Technol. 2020, 9, 8.
13. Schinkel, M.; Paranjape, K.; Nannan Panday, R.S.; Skyttberg, N.; Nanayakkara, P.W.B. Clinical applications of artificial intelligence in sepsis: A narrative review. Comput. Biol. Med. 2019, 115, 103488.
14. Telikani, A.; Tahmassebi, A.; Banzhaf, W.; Gandomi, A.H. Evolutionary Machine Learning: A Survey. ACM Comput. Surv. 2022, 54, 1–35.
15. Shin, S.; Austin, P.C.; Ross, H.J.; Abdel-Qadir, H.; Freitas, C.; Tomlinson, G.; Chicco, D.; Mahendiran, M.; Lawler, P.R.; Billia, F.; et al. Machine learning vs. conventional statistical models for predicting heart failure readmission and mortality. ESC Heart Fail. 2020, 8, 106–115.
16. Jamaludin, M.R.; Lai, K.W.; Chuah, J.H.; Zaki, M.A.; Hasikin, K.; Abd Razak, N.A.; Dhanalakshmi, S.; Saw, L.B.; Wu, X. Machine Learning Application of Transcranial Motor-Evoked Potential to Predict Positive Functional Outcomes of Patients. Comput. Intell. Neurosci. 2022, 2022, 2801663.
17. Agaronnik, N.D.; Kwok, A.; Schoenfeld, A.J.; Lindvall, C. Natural language processing for automated surveillance of intraoperative neuromonitoring in spine surgery. J. Clin. Neurosci. 2022, 97, 121–126.
18. Kortus, T.; Krüger, T.; Gühring, G.; Lente, K. Automated robust interpretation of intraoperative electrophysiological signals—A bayesian deep learning approach. Curr. Dir. Biomed. Eng. 2021, 7, 69–72.
19. Zha, X.; Wehbe, L.; Sclabassi, R.J.; Mace, Z.; Liang, Y.V.; Yu, A.; Leonardo, J.; Cheng, B.C.; Hillman, T.A.; Chen, D.A.; et al. A Deep Learning Model for Automated Classification of Intraoperative Continuous EMG. IEEE Trans. Med. Robot. Bionics 2021, 3, 44–52.
20. Verdonck, M.; Carvalho, H.; Berghmans, J.; Forget, P.; Poelaert, J. Exploratory Outlier Detection for Acceleromyographic Neuromuscular Monitoring: Machine Learning Approach. J. Med. Internet Res. 2021, 23, e25913.
21. Qiao, N.; Song, M.; Ye, Z.; He, W.; Ma, Z.; Wang, Y.; Zhang, Y.; Shou, X. Deep Learning for Automatically Visual Evoked Potential Classification During Surgical Decompression of Sellar Region Tumors. Transl. Vis. Sci. Technol. 2019, 8, 21.
22. Fan, B.; Li, H.-X.; Hu, Y. An Intelligent Decision System for Intraoperative Somatosensory Evoked Potential Monitoring. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 300–307.
23. Kersting, K. Machine Learning and Artificial Intelligence: Two Fellow Travelers on the Quest for Intelligent Behavior in Machines. Front. Big Data 2018, 1, 6.
24. Chang, M.; Canseco, J.A.; Nicholson, K.J.; Patel, N.; Vaccaro, A.R. The Role of Machine Learning in Spine Surgery: The Future Is Now. Front. Surg. 2020, 7, 54.
25. Zhang, Z. A gentle introduction to artificial neural networks. Ann. Transl. Med. 2016, 4, 370.
26. Bengio, Y.; Courville, A.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
27. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
28. Badillo, S.; Banfai, B.; Birzele, F.; Davydov, I.I.; Hutchinson, L.; Kam-Thong, T.; Siebourg-Polster, J.; Steiert, B.; Zhang, J.D. An Introduction to Machine Learning. Clin. Pharmacol. Ther. 2020, 107, 871–885.
29. Sidey-Gibbons, J.A.M.; Sidey-Gibbons, C.J. Machine learning in medicine: A practical introduction. BMC Med. Res. Methodol. 2019, 19, 64.
30. Putri, W.R.; Liu, S.H.; Aslam, M.S.; Li, Y.H.; Chang, C.C.; Wang, J.C. Self-Supervised Learning Framework toward State-of-the-Art Iris Image Segmentation. Sensors 2022, 22, 2133.
31. Erickson, B.J.; Kitamura, F. Magician’s Corner: 9. Performance Metrics for Machine Learning Models. Radiol. Artif. Intell. 2021, 3, e200126.
32. Dobbin, K.K.; Simon, R.M. Optimally splitting cases for training and testing high dimensional classifiers. BMC Med. Genom. 2011, 4, 31.
33. Korjus, K.; Hebart, M.N.; Vicente, R. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable. PLoS ONE 2016, 11, e0161788.
34. Deo, R.C. Machine Learning in Medicine. Circulation 2015, 132, 1920–1930.
35. Watson, D.S.; Krutzinna, J.; Bruce, I.N.; Griffiths, C.E.M.; McInnes, I.B.; Barnes, M.R.; Floridi, L. Clinical applications of machine learning algorithms: Beyond the black box. BMJ 2019, 364, l886.
36. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019, 32, 582–596.
37. Wang, L.; Chen, K.C.; Gao, Y.; Shi, F.; Liao, S.; Li, G.; Shen, S.G.F.; Yan, J.; Lee, P.K.M.; Chow, B.; et al. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization. Med. Phys. 2014, 41, 043503.
38. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127.
39. Faust, O.; Hagiwara, Y.; Hong, T.J.; Lih, O.S.; Acharya, U.R. Deep learning for healthcare applications based on physiological signals: A review. Comput. Methods Programs Biomed. 2018, 161, 1–13.
40. Michelson, J.D. CORR Insights®: What Are the Applications and Limitations of Artificial Intelligence for Fracture Detection and Classification in Orthopaedic Trauma Imaging? A Systematic Review. Clin. Orthop. Relat. Res. 2019, 477, 2492–2494.
41. Greenspan, H.; van Ginneken, B.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159.
42. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures. Neural Comput. 2019, 31, 1235–1270.
43. Xia, P.; Hu, J.; Peng, Y. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks. Artif. Organs 2018, 42, E67–E77.
44. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
45. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Gated Feedback Recurrent Neural Networks. In Proceedings of the 32nd International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 2067–2075.
46. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 6 December 2017.
47. Demner-Fushman, D.; Chapman, W.W.; McDonald, C.J. What can natural language processing do for clinical decision support? J. Biomed. Inform. 2009, 42, 760–772.
48. Bishop, C.M. Bayesian Neural Networks. J. Braz. Comput. Soc. 1997, 4, 61–68.
49. Kendall, A.; Gal, Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5580–5590.
50. Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567.
51. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
52. Hawkins, D.M. The Problem of Overfitting. J. Chem. Inf. Comput. Sci. 2003, 44, 1–12.
53. Hajipour, F.; Jozani, M.J.; Moussavi, Z. A comparison of regularized logistic regression and random forest machine learning models for daytime diagnosis of obstructive sleep apnea. Med. Biol. Eng. Comput. 2020, 58, 2517–2529.
54. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 301–320.
55. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2019, 14, 241–258.
56. Yang, L.; Wu, H.; Jin, X.; Zheng, P.; Hu, S.; Xu, X.; Yu, W.; Yan, J. Study of cardiovascular disease prediction model based on random forest in eastern China. Sci. Rep. 2020, 10, 5245.
57. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
58. Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobotics 2013, 7, 21.
59. Chen, T.; Guestrin, C. XGBoost. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
60. Ansarullah, S.I.; Mohsin Saif, S.; Abdul Basit Andrabi, S.; Kumhar, S.H.; Kirmani, M.M.; Kumar, D.P. An Intelligent and Reliable Hyperparameter Optimization Machine Learning Model for Early Heart Disease Assessment Using Imperative Risk Attributes. J. Healthc. Eng. 2022, 2022, 9882288.
61. Watanabe, S.; Shimobaba, T.; Kakue, T.; Ito, T. Hyperparameter tuning of optical neural network classifiers for high-order Gaussian beams. Opt. Express 2022, 30, 11079–11089.
62. Ugawa, R.; Takigawa, T.; Shimomiya, H.; Ohnishi, T.; Kurokawa, Y.; Oda, Y.; Shiozaki, Y.; Misawa, H.; Tanaka, M.; Ozaki, T. An evaluation of anesthetic fade in motor evoked potential monitoring in spinal deformity surgeries. J. Orthop. Surg. Res. 2018, 13, 227.
63. Nunes, R.R.; Bersot, C.D.A.; Garritano, J.G. Intraoperative neurophysiological monitoring in neuroanesthesia. Curr. Opin. Anaesthesiol. 2018, 31, 532–538.
64. Chung, J.; Park, W.; Hong, S.H.; Park, J.C.; Ahn, J.S.; Kwun, B.D.; Lee, S.-A.; Kim, S.-H.; Jeon, J.-Y. Intraoperative use of transcranial motor/sensory evoked potential monitoring in the clipping of intracranial aneurysms: Evaluation of false-positive and false-negative cases. J. Neurosurg. 2019, 130, 936–948.
65. Ney, J.P.; van der Goes, D.N.; Nuwer, M.; Emerson, R.; Minahan, R.; Legatt, A.; Galloway, G.; Lopez, J.; Yamada, T.; Ney, J.P.; et al. Evidence-based guideline update: Intraoperative spinal monitoring with somatosensory and transcranial electrical motor evoked potentials: Report of the Therapeutics and Technology Assessment Subcommittee of the American Academy of Neurology and the American Clinical Neurophysiology Society. Neurology 2012, 79, 292–294.
66. Chen, J.H.; Shilian, P.; Cheongsiatmoy, J.; Gonzalez, A.A. Factors Associated With Inadequate Intraoperative Baseline Lower Extremity Somatosensory Evoked Potentials. J. Clin. Neurophysiol. 2018, 35, 426–430.
67. Nasi, D.; Meletti, S.; Tramontano, V.; Pavesi, G. Intraoperative neurophysiological monitoring in aneurysm clipping: Does it make a difference? A systematic review and meta-analysis. Clin. Neurol. Neurosurg. 2020, 196, 105954.
68. Taskiran, E.; Brandmeier, S.; Ozek, E.; Sari, R.; Bolukbasi, F.; Elmaci, I. Multimodal intraoperative neurophysiologic monitoring in the spinal cord surgery. Turk. Neurosurg. 2017, 27, 436–440.
69. Grasso, G.; Landi, A.; Alafaci, C. Multimodal Intraoperative Neuromonitoring in Aneurysm Surgery. World Neurosurg. 2017, 101, 763–765.
70. MacDonald, D.B. Overview on Criteria for MEP Monitoring. J. Clin. Neurophysiol. 2017, 34, 4–11.
71. Park, D.; Kim, D.Y.; Eom, Y.S.; Lee, S.-E.; Chae, S.B. Posterior interosseous nerve syndrome caused by a ganglion cyst and its surgical release with intraoperative neurophysiological monitoring. Medicine 2021, 100, e24702.
72. Akbari, K.K.; Badikillaya, V.; Venkatesan, M.; Hegde, S.K. Do Intraoperative Neurophysiological Changes During Decompressive Surgery for Cervical Myeloradiculopathy Affect Functional Outcome? A Prospective Study. Glob. Spine J. 2020, 12, 366–372.
Figure 1. Schematic illustration of intraoperative neurophysiological monitoring (IONM). A multidisciplinary approach between the surgeon, physiatrist, and anesthesiologist is necessary throughout the process. Several confounding factors, such as surgical, anesthesiologic, and mechanical factors, as well as the patient’s condition and inter-rater variability complicate the interpretation of IONM.
Figure 2. Flow chart of article selection.
Table 2. Important hyperparameters of machine learning models.

ANN: the number of layers; the number of units in each layer; the type of activation function (e.g., ReLU, sigmoid, tanh, softmax, ELU, swish, mish); training hyperparameters (e.g., batch size, learning rate, maximum number of iterations, early stopping criteria, and the algorithms for initialization and optimization).

CNN: all of the hyperparameters of ANNs; the type of each layer (e.g., convolution, max-pooling, batch normalization, dropout, group convolution); the width, height, and number of channels of each layer; kernel size, stride, and padding; the presence of skip connections.

RNN (including LSTM and GRU): all of the hyperparameters of ANNs; unidirectional or bidirectional processing; the size of the cell blocks (LSTM).

Transformers: the number of layers; the total dimension of the hidden features; the number of heads in the multi-head attention; the dimensions of the keys and values; the dimension of the MLP layers; all of the training hyperparameters of ANNs.

SVM: the weight of the soft margin; the type of kernel (e.g., polynomial, Gaussian, RBF) and its parameters (e.g., γ of the RBF kernel); training hyperparameters (e.g., learning rate, number of iterations).

Regularized logistic regression: regularization factors; training hyperparameters (e.g., learning rate, number of iterations).

Random forests: the number of trees; the maximum depth of each tree; the maximum number of leaf nodes; the quality measure of a split (e.g., Gini, entropy, log_loss); the regularization factor.

XGBoost: the number of trees; the maximum depth of each tree; the type of booster and its parameters (e.g., learning rate, gamma, max delta step); regularization factors.

ANN, artificial neural network; ReLU, rectified linear unit; ELU, exponential linear unit; CNN, convolutional neural network; RNN, recurrent neural network; LSTM, long short-term memory; GRU, gated recurrent unit; MLP, multi-layer perceptron; SVM, support vector machine; RBF, radial basis function; XGBoost, extreme gradient boosting.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
