1. Introduction
The early diagnosis of many neurological diseases, such as Alzheimer’s disease (AD), represents a complex challenge for modern neurology [1,2,3]. Early symptoms, which include subtle changes in memory and behavior, are often mistakenly regarded as part of normal aging, making it difficult to identify such conditions in their early stages [4,5,6,7,8].
Diagnosis is often complicated by the absence of a single diagnostic test, requiring a multidisciplinary approach that includes clinical assessments, neuropsychological tests, and brain imaging [9,10]. The situation is further complicated by the overlap of symptoms with those of other neurodegenerative conditions [11,12,13,14,15,16]. Although advances in imaging techniques and biomarker identification have improved the ability to detect brain alterations [17,18], these tools have limitations related to cost, availability, and sometimes invasiveness. The presence of neuropathological features typical of AD in asymptomatic individuals also raises questions about the clinical significance of such findings [18,19].
Early diagnosis of AD is essential for optimal management of the disease, but is hampered by the limited effectiveness of current therapies in slowing its progression [20], underlining the urgent need to develop more effective treatments [21,22,23]. The psychosocial impact of the diagnosis on patients and families requires diagnostic precision, careful communication, and ongoing support, highlighting the importance of improving diagnostic precision and understanding of the pathogenesis of AD [24]. In the context of intense research and clinical efforts, electroencephalography (EEG) emerges as a key tool for the early diagnosis and monitoring of AD, thanks to its ability to detect brain abnormalities typical of the disease [25,26]. Unlike more advanced and invasive diagnostic techniques such as functional magnetic resonance imaging (fMRI) [27,28] and positron emission tomography (PET) [29,30], which involve staying in confined environments or exposure to radioactive substances and present logistical challenges, EEG, obtained via the 10–20 system, stands out for its non-invasiveness, ease of use, and cost-effectiveness, making it particularly suitable for repeated studies and monitoring of patients. This makes it essential in the early diagnosis and monitoring of AD [27,28,29,30,31].
The usefulness of EEG for the diagnosis and monitoring of AD is mainly linked to its ability to detect specific neurophysiological markers that indicate functional brain alterations, such as the slowing of the global electrical activity of the brain, as evidenced by changes in the frequency bands named delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) [32]. The δ band (0.5–4 Hz) signals slow brain activity linked to cortical damage; θ (4–8 Hz) indicates transitions between sleep and wakefulness, suggesting potential dysfunctions; α (8–13 Hz) is associated with resting states and reflects the alteration of brain organization in AD; and β (13–30 Hz) highlights levels of attention and mental activity, which is useful for observing cognitive changes in the patient [32,33]. Finally, the γ rhythm, above 30 Hz, is associated with complex cognitive processes such as object recognition and meaning attribution, and it is mainly detectable in the frontal regions [32,33,34,35,36,37]. Detailed EEG analysis, which includes the observation of specific changes in frequency bands, helps define a neurophysiological profile of AD [38,39,40].
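To make the role of these bands concrete, the following Python sketch estimates the relative power of each classical band from a single EEG channel using Welch's method; the 256 Hz sampling rate, the 2 s analysis window, and the exact band limits are illustrative assumptions consistent with values commonly reported in the reviewed studies, not a prescription from any particular one.

import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz), typical of the studies reviewed
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_powers(eeg_channel, fs=FS):
    """Estimate the relative power of each classical EEG band for one channel."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=fs * 2)  # 2 s windows
    broad = (freqs >= 0.5) & (freqs <= 45)
    total = np.trapz(psd[broad], freqs[broad])
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask]) / total
    return powers

# Example on 10 s of synthetic data dominated by alpha-like activity
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(relative_band_powers(signal))

The kind of slowing described above would appear in such an analysis as a relative increase in δ/θ power accompanied by a decrease in α/β power.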
However, the presence of artifacts in EEG signals, originating from both physiological and external sources, can mask or distort crucial frequency bands of the EEG signal, compromising the clarity and integrity of essential neural information. Therefore, there is a need for advanced techniques to classify and clean EEG signals [41,42,43].
With the advancement of signal processing techniques and the integration of artificial intelligence (AI), the effectiveness of EEG in distinguishing AD from other neurological conditions has improved, offering promising prospects for clinical management [44,45,46]. These advances enable more efficient and accurate classification and artifact removal, overcoming the limitations of conventional methods [47]. AI not only lightens the clinical workload through automation [48,49,50,51] but also enables greater standardization of analysis protocols and the discovery of new neurophysiological markers [52,53,54,55,56,57].
The introduction of methodologies based on fuzzy logic, within the field of AI, represents a significant advance in the removal of artifacts from EEG signals. This approach, inspired by the way human reasoning manages ambiguous or incomplete information, is particularly effective in handling the uncertainty and imprecision typical of biological signals, proving essential for overcoming the challenges posed by the intrinsic variability of EEG data [58,59,60,61,62,63,64,65]. The use of fuzzy techniques is motivated by their effectiveness in managing the complexity of neurological signals, significantly improving the ability to differentiate between relevant brain activity and artifactual distortions [66,67,68,69,70]. Takagi–Sugeno (TS) fuzzy systems prove to be extremely effective tools for removing artifacts from EEG signals, thanks to a mathematical structure in which the output functions are defined by mathematical models rather than by fuzzy sets, as happens in Mamdani systems. This feature gives TS systems not only greater precision and efficiency but also makes analysis easier for clinicians, thanks to their intuitiveness and ease of use. Unlike other, more complex AI techniques, which operate in a “black-box” mode, TS systems allow greater transparency in the decision-making process. A further advantage of TS systems over Mamdani systems is the possibility of structuring the output in network form. This configuration makes TS systems particularly suitable for the application of neural learning algorithms, significantly expanding their potential for modeling and analyzing EEG signals. The ability to integrate neural networks into TS fuzzy systems improves artifact removal and enhances the applicability of these techniques in complex clinical contexts, where the precision and reliability of signal analysis are critical [70].
Despite extensive literature on the use of fuzzy logic techniques, neural networks, and other artificial intelligence techniques to remove artifacts from EEG signals [71,72,73,74], in-depth studies on intuitionistic fuzzy systems (IFS) are lacking, highlighting the need for a review examining this innovative area. IFSs, offering superior management of uncertainties and ambiguities, promise significant improvements in the removal of EEG artifacts. A review focused on IFS could bridge the gap between scientific discoveries and clinical applications, improving accuracy and efficiency in EEG analysis [75,76].
The main contributions of this review are summarized in the following list.
In-depth evaluation of Type-1 Takagi–Sugeno fuzzy systems in the removal of EEG signal artifacts
In an era where neuroscience and technology increasingly intersect, the quality of EEG data emerges as a critical juncture for the success of numerous clinical and research applications. Against this backdrop, the central goal of this review is to rigorously explore and delineate the effectiveness of Type-1 Takagi–Sugeno fuzzy systems, a promising frontier in advanced EEG signal processing. Our investigation specifically focuses on their ability not only to identify but also to effectively eliminate artifacts, including those generated internally, such as involuntary muscle movements, as well as external ones, like electrical interference. Through a detailed analysis of selected case studies, we aim not only to assess the performance of these systems, but also to illustrate how they can be implemented to significantly enhance the integrity of EEG data. With an approach that balances technical rigor and practical applicability, this review endeavors to provide a comprehensive overview that may serve as a springboard for further innovations in the field of biomedical signal processing.
Comparative evaluation of fuzzy system performance versus traditional non-fuzzy methods in EEG signal artifact removal
In an ongoing effort to refine the accuracy of EEG signal analysis techniques, which is essential for both clinical applications and research, the need for a comprehensive comparative analysis between fuzzy systems and traditional non-fuzzy methods emerges. This review aims to critically examine and compare the performance of newly developed fuzzy systems against established techniques such as Independent Component Analysis (ICA), Principal Component Analysis (PCA), and Artifact Subspace Reconstruction (ASR). The goal is to determine which of these approaches is most effective in removing artifacts, thereby offering new insights into improving EEG data quality. Through this systematic comparison, we intend not only to identify the most performant method but also to contribute to setting more robust standards for future EEG signal processing applications, paving the way for more precise research and more reliable clinical interventions.
Integrating fuzzy systems with advanced deep learning approaches to enhance EEG signal analysis
This review explores how the fusion of fuzzy systems with cutting-edge deep learning technologies can enhance EEG signal analysis, particularly in the realms of clinical applications and research. We delve into the synergistic potential of combining fuzzy logic’s robustness in handling uncertainty with the powerful feature extraction capabilities of deep learning models like Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The objective is to assess how these integrated approaches can significantly improve the accuracy and reliability of EEG analysis, thereby driving forward the frontiers of neuroscientific research and clinical diagnostics.
Innovative fuzzy techniques for artifact removal in EEG, with a special focus on intuitionistic fuzzy systems
This review analyzes new fuzzy techniques, particularly those employing intuitionistic fuzzy systems, to tackle the challenges associated with artifact removal in EEG signals. We explore how these advanced fuzzy systems enhance the ability to distinguish between true neural activity and noise, thereby improving the reliability of EEG analyses in clinical and research settings. The focus on intuitionistic fuzzy systems highlights their potential in handling the inherent uncertainties and ambiguities of EEG data, paving the way for more precise and effective artifact mitigation strategies.
In the expansive field of scientific research on EEG signal analysis, the literature is replete with significant contributions, comprising hundreds of publications that explore various innovative methods for artifact removal. For this review, we have meticulously selected a limited number of studies to provide a focused overview showcasing the exceptional performances, especially of fuzzy approaches, in managing artifacts in EEG signals within the international 10–20 system context. This deliberate selection highlights how diverse methodologies, despite operating in vastly different contexts, converge in the efficacy of artifact removal, a crucial aspect for ensuring the accuracy of clinical diagnoses and neuroscientific research. Particularly, fuzzy approaches have proven to be extraordinarily versatile and effective, adapting to different configurations and analytical needs from standard to highly specialized ones. This flexibility suggests that the potential for future development of such methods is extensive and largely untapped. For emerging researchers, this area offers fertile ground for innovation. For instance, one could investigate the integration of advanced fuzzy systems with deep learning technologies to create even more robust and accurate models, or explore the application of these techniques in new domains, such as the study of under-examined neurological conditions or the optimization of BCI (brain–computer interface) systems. Moreover, adapting fuzzy systems to enhance user interfaces could transform these technologies from research tools into practical and accessible solutions for clinicians. Additionally, considering the increasing importance of transparency and interpretability in artificial intelligence models, further studies could focus on enhancing the understandability of fuzzy systems, making them more intuitive and thereby facilitating their adoption in clinical contexts. This review thus aims not only to summarize the current state of affairs but also to stimulate the scientific community, particularly young researchers, to embrace these challenges. It proposes new research directions and expands the possibilities for the application of these promising technologies, making the review interesting and engaging for all types of researchers.
2. Novelty and Contributions
This study, which explores advanced EEG data analysis, particularly for the early diagnosis of AD, clearly demonstrates how the innovative approaches proposed enhance the precision in artifact removal and refine the diagnostic process. The document establishes a robust methodological framework that integrates systems of advanced fuzzy logic and intuitionistic fuzzy systems, elements particularly effective in managing the uncertainties typical of EEG data. These systems, capable of accurately discerning between legitimate neurological activity and noise, significantly elevate diagnostic accuracy.
The integration of such fuzzy systems with deep learning technologies represents a significant advancement, substantially expanding the boundaries of EEG analysis. This synergistic combination facilitates a deeper and more accurate interpretation of neurophysiological markers associated with AD, enhancing the potential for earlier and more precise diagnoses. This review conducts a rigorous comparison between the effectiveness of fuzzy systems and traditional non-fuzzy methods in artifact removal, providing a detailed evaluation that highlights the advantages of integrating advanced computational technologies. Through the analysis of specific case studies and empirical data, this article illuminates the superior capabilities of fuzzy systems in various clinical and research settings, emphasizing their value in enhancing the integrity of EEG data.
It is important to note that the scientific studies analyzed during the review were conducted following rigorous protocols on samples of individuals with well-defined demographic characteristics, using standardized methodologies and instrumentation to ensure the reproducibility and validity of the results. This aspect is crucial, as it ensures that the conclusions are supported by data collected under controlled and comparable conditions, further enhancing the robustness and reliability of the methodological innovations proposed.
These contributions, ranging from technological innovation to investigative methodology, not only strengthen the scientific knowledge base in the field of EEG analysis but also provide more effective and reliable tools for clinical professionals. The fusion of fuzzy logic and deep learning techniques in EEG signal analysis establishes new standards for neuroscientific research and opens new prospects for improving the diagnosis and management of neurodegenerative diseases, promising to guide the field towards future diagnostic innovations.
The remainder of this review unfolds as follows, guiding the reader through a structured exploration of our findings and discussions. Once standardized protocols for advanced clinical practice in the use of EEG have been introduced (Section 3), Section 4 focuses on the identification and elimination of artifacts in EEG signals. Next, Section 5 explores and analyzes the fundamental principles of BSS techniques, and Section 6 then asks how wavelet transforms can improve the analysis of EEG signals. Section 7 introduces the basic concepts of EMD and then leaves room for the examination of PCA (Section 8). The key concepts of adaptive filters and the basic principles of machine learning for the analysis of EEG signals are discussed in Section 9 and Section 10, respectively. Section 11 discusses the integration of different techniques for a hybrid approach to artifact removal, while Section 12 focuses attention on fuzzy techniques, with particular emphasis on the more innovative intuitionistic approaches. Moreover, scientific development and technology transfer are discussed in Section 13. Finally, conclusions and some reflections on future research developments close the review, offering perspectives on how these technologies can be further developed and implemented.
3. EEG Essentials: Unveiling Standard Protocols for Advanced Clinical Practice
Performing an EEG requires compliance with standardized protocols for patient preparation and the technical implementation of the examination [77,78].
3.1. Subject Demographics
This review includes a detailed analysis of the demographic characteristics of subjects involved in studies on Alzheimer’s disease (AD) and healthy control groups. In the studies examined, participants diagnosed with AD were generally recruited from specialized neurology and geriatrics clinics, while control group subjects were selected from volunteers with no history of neurodegenerative diseases. The average age of participants with AD typically ranges from 65 to 80 years, with a common average age around 72 years. In control groups, the average age generally ranges from 60 to 78 years, with a mean age of approximately 70 years. The gender distribution in various studies tends to be balanced, with a slight female predominance (approximately 55% female and 45% male), reflecting the higher incidence of AD in women.
Studies often apply stringent exclusion criteria to maintain sample homogeneity, excluding individuals with significant neurological comorbidities, severe psychiatric disorders, or the use of medications that could affect brain electrical activity. Informed consent is commonly obtained from participants or their legal guardians, ensuring ethical compliance in research.
Demographic characteristics of participants, including age, gender, education level, and socio-economic status, are typically collected through structured questionnaires administered before EEG recording. These data are crucial for analysis, as factors like age and education level can influence EEG characteristics and the progression of AD. Participants’ education levels usually range from primary education to university education, with a prevalence of subjects having completed at least secondary education.
The inclusion of control groups with comparable age and gender is crucial for establishing a reference of normalcy and differentiating AD-specific changes from physiological variations associated with aging. This review highlights how demographic selection in various studies allows for robust comparative analysis, enhancing the validity and generalizability of results related to the use of EEG in the diagnosis and monitoring of AD.
3.2. Patient Preparation
Before the EEG, the patient is informed about the exam and how to prepare, their informed consent is obtained, and the scalp is cleaned to ensure signal quality, avoiding the use of conditioners or similar products. It is essential that the patient presents with clean, dry hair, as any hair product residue can interfere with the conduction of the electrical signal. Furthermore, the patient is advised to avoid caffeine and stimulant or depressant drugs in the hours before the exam, unless approved by the doctor performing the EEG, since these substances can alter cerebral electrical activity.
It is recommended that the patient get enough sleep the night before the exam to reduce the possibility of artifacts due to tiredness or involuntary movement. If the test involves recording sleep, the patient may be asked to reduce sleep slightly the previous night to facilitate falling asleep during the EEG.
3.3. Sampling Rate
The importance of the choice of sampling rate in the recording and analysis of EEG signals is examined, with a particular focus on studies concerning AD. The studies analyzed in this review generally adopt a sampling frequency of 256 Hz. This choice is closely linked to the need to accurately capture the relevant frequency bands of the EEG, which include the δ, θ, α, β, and γ bands already discussed in the introduction.
A sampling rate of 256 Hz is adequate to detect the specific characteristics of brain electrical activity associated with AD, including slowing brain rhythms, and allows for the phenomenon of aliasing to be avoided. This phenomenon, which can introduce distortions into the data when high-frequency components are not sampled correctly, is mitigated through the use of low-pass filters with a cut-off frequency around 128 Hz, compliant with the principles of the Nyquist theorem.
The adoption of a sampling rate of 256 Hz offers a good compromise between temporal resolution and efficient data management, ensuring that the frequency bands critical for the diagnosis of AD are adequately represented. This frequency helps maintain convenient data management, reducing storage requirements and facilitating analysis without compromising the quality of EEG recordings.
For studies requiring more detailed analysis of the microstructures of brain activity, such as short-duration events or rapid oscillations, a higher sampling rate could be considered. However, for standard analysis of AD-related neurophysiological alterations, 256 Hz is considered ideal to ensure accurate representation of relevant frequency bands and practical data management.
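As a minimal illustration of the Nyquist reasoning above, the following sketch (Python/SciPy; the 1024 Hz source rate, the filter order, and the cutoff margin are assumptions chosen only for the example) applies a zero-phase low-pass filter below the new 128 Hz Nyquist limit before decimating a recording to 256 Hz.

import numpy as np
from scipy.signal import butter, filtfilt

def downsample_to_256(x, fs_in=1024, fs_out=256):
    """Anti-alias filter, then decimate an EEG channel to 256 Hz."""
    factor = fs_in // fs_out                             # e.g., 4
    nyq_out = fs_out / 2.0                               # 128 Hz, new Nyquist limit
    b, a = butter(4, (0.9 * nyq_out) / (fs_in / 2.0))    # low-pass safely below 128 Hz
    filtered = filtfilt(b, a, x)                         # zero-phase filtering
    return filtered[::factor]

fs_in = 1024
t = np.arange(0, 4, 1 / fs_in)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
x_256 = downsample_to_256(x)
print(x_256.shape)   # 4 s at 256 Hz = 1024 samples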
3.4. Technical Setup and Recording Protocols
The 10–20 system is anchored on specific anatomical landmarks of the skull, using the nasion, located at the bridge of the nose, and the inion, a prominent bone at the back of the skull, as primary reference points. Electrodes are systematically positioned at intervals of 10% or 20% of the total distance between these landmarks. This strategic placement spans the frontal, central, parietal, occipital, and temporal regions of the scalp, providing comprehensive coverage of the cerebral cortex.
In the studies reviewed, the electrodes are typically silver/silver chloride (Ag/AgCl), chosen for their stability and low impedance. These electrodes are positioned according to the 10–20 system map, which includes standard locations such as Fp1 and Fp2 (frontal polar); F3 and F4 (frontal); C3 and C4 (central); P3 and P4 (parietal); O1 and O2 (occipital); F7 and F8 (frontal lateral); T3 and T4 (temporal); T5 and T6 (temporal lateral); and Fz, Cz, and Pz (midline electrodes). To facilitate accurate and efficient electrode placement, an elastic cap embedded with electrodes aligned according to the 10–20 system is often employed [79,80]. This cap ensures consistent positioning and adjusts to different head sizes, significantly enhancing patient comfort during the procedure.
To optimize electrical conductivity between the scalp and the electrodes, a conductive gel or paste is applied. This practice reduces impedance and enhances signal quality, ensuring that each electrode maintains optimal contact with the scalp. The weak electrical signals generated by neuronal activity are then amplified by a sophisticated amplifier, designed with high input impedance and a broad bandwidth to capture a wide range of EEG frequencies.
The amplified signals are digitized using a data acquisition system that includes an analog-to-digital converter (ADC). This system is complemented by software for detailed signal processing and analysis, with sampling rates, as specified above, typically set at 256 Hz or higher to ensure precise capture of the EEG signal and avoid aliasing. The review notes that EEG recordings are generally conducted in a quiet, dimly lit room to minimize external stimuli and reduce artifacts, with patients comfortably positioned to minimize movement-induced artifacts.
The 10–20 system’s standardization offers numerous benefits, providing a consistent methodology for electrode placement that facilitates data comparison across diverse studies and clinical contexts. It ensures comprehensive cortical coverage, thereby enhancing the diagnostic value of EEG. The system’s design also prioritizes user-friendliness, enabling quick and accurate electrode placement, which is particularly advantageous in both clinical settings and large-scale research studies [81,82,83,84].
3.5. Post-Procedure
Post-EEG, the electrodes are removed, the scalp is cleaned, and a neurologist analyzes the data to detect abnormalities. The results are shared and explained to the patient and/or their family, along with clinical recommendations.
The following section deals with the analysis of EEG signals, which is particularly complex due to artifacts that introduce not only noise but also ambiguity, originating both from physiological sources internal to the patient, such as eye movements, muscular activity, and cardiac rhythms, and from external environmental influences, such as electromagnetic interference and electrode movements.
4. EEG Clarity: Unraveling Artifacts
In the analysis of EEG signals, artifacts not only introduce noise but also introduce elements of ambiguity or fuzziness, significantly complicating the interpretation of the data. This fuzziness derives both from physiological sources internal to the patient (eye movements, muscle activity, and heart rhythms that overlap with the signal frequencies) and from external environmental influences (electromagnetic interference or electrode movements that alter the spectral profile of the signal) [85,86,87].
Table 1 illustrates the different types of artifacts.
Figure 1 displays a spike artifact caused by head shaking and/or a possible loose electrode; only P3 is involved because the patient is resting their head on a pillow.
It is therefore useful to adopt advanced filtering techniques and signal processing algorithms to isolate and remove artifacts while preserving data integrity (i.e., balancing the minimization of ambiguity with the maximization of useful information). These approaches exploit temporal, spatial, and spatio-temporal procedures, allowing the identification and selective removal of artifactual components from the EEG signal while preserving the relevant neurophysiological information. Of note, the integration of AI and machine learning techniques is emerging as a promising way to further improve artifact removal by offering adaptive, predictive models.
Although each method of removing artifacts from EEGs has its peculiarities, none are universally effective on their own. Therefore, to significantly improve the quality of the EEG signal, it is essential to integrate different techniques into a single framework, exploiting the strengths of each to obtain better results and more accurate data interpretation [75,88].
We will begin by exploring a series of EEG artifact removal techniques, recognized today as the gold standard in the field. These methodologies will be subjected to a SWOT analysis to reveal their strengths and weaknesses, thus outlining the context in which they fit within current research and clinical practice. This critical examination will serve as a foundation for introducing and discussing more advanced hybrid approaches. In particular, we will focus on the evolution towards systems that incorporate fuzzy elements, even in an intuitionistic form, to deal more precisely with the intrinsic uncertainty in EEG signals.
Through this path, we will delve into the dynamics and potential of hybrid approaches that blend different technologies and methodologies to overcome the limits of standard techniques. In this context, the fuzzy element emerges as a vital component, offering a sophisticated means to model the ambiguity and uncertainty often present in neurological data. The introduction of systems based on intuitionistic fuzzy logic represents a qualitative leap in the ability to analyze EEG signals, proposing a more accurate and flexible approach in the management of artifacts.
This transition towards more complex and integrated methodologies not only aims to improve the efficiency in removing artifacts but also aims to refine our understanding of brain signals, expanding the possibilities for diagnosis and monitoring of neurological conditions. In this way, the adoption of hybrid approaches with a strong fuzzy component represents a significant advance in neuroscientific research, promising to lead to more precise interpretation of EEG data and more accurate modeling of brain activity.
Below, we present some standard non-fuzzy techniques employed to remove artifacts from EEG signals in the international 10–20 system, highlighting their methods and applications in the field of neurophysiological analysis.
5. Decoding the Intricacies of Blind Source Separation (BSS): A Deep Dive into Core Principles
This technique uses a mathematical model interpreting EEG signals as linear combinations of independent source signals and artifacts [89,90,91,92,93]:

X = AS + N,

where X is the m × n matrix of EEG signals (m, number of electrodes; n, number of samples); A is the m × p unknown mixing matrix (p, number of source signals and artifacts), which represents how the source signals and artifacts combine; and S is the p × n matrix of the source signals, including both the signals of brain interest and the artifacts. Finally, N, of size m × n, represents the measurement noise. The goal is to estimate A and separate S from X, eliminating the artifactual components of S so as to reconstruct clean EEG signals.
The use of a single BSS technique encounters limitations due to the variety of artifacts and the complexity of EEG signals, making universal effectiveness difficult. Challenges include signal variability, assumptions that are not always valid, and sensitivity to setup parameters, all of which can affect the accuracy of artifact removal.
Addressing artifacts in EEG signals requires an integrated approach that combines several BSS techniques and preprocessing methodologies, rather than the use of a single technique. This strategy takes advantage of the complementary advantages of various methods, improving the quality of the EEG signal for more accurate analysis and expanding the practical use of clean EEG data.
5.1. Independent Component Analysis (ICA)
ICA is an advanced BSS technique focused on the isolation of statistically independent components in EEGs, distinguishing itself by its ability to separate signals into subcomponents with less statistical overlap [94,95]. This makes it particularly suitable for removing complex artifacts, such as ocular or cardiac ones, improving the distinction between real brain activity and interference. The accuracy of ICA in identifying independent signal sources facilitates the elimination of artifacts and improves the interpretation of EEG data, making it essential in both neuroscientific research and clinical applications.
Mathematically, ICA derives a demixing matrix, W, such that [96,97]:

Ŝ = WX,

where Ŝ estimates the separated components, that is, the artifact-free EEG sources and the artifacts, and W is optimized to maximize the statistical independence between the components of Ŝ.
ICA calculates W by optimizing a contrast function, such as negentropy, which measures the non-Gaussianity (and therefore independence) of the components. Once W is calculated, the EEG signals are separated from the artifacts by identifying and removing the corresponding components in Ŝ. Choosing which components to remove often requires manual analysis or automatic criteria based on known characteristics of the artifacts, such as frequency, topography, or temporal behavior. After removing artifacts, clean EEG signals are obtained as:

X_clean = A_r S_r,

where S_r contains the components of Ŝ that represent clean brain activity, and A_r is the corresponding reduced mixing matrix.
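The following hedged sketch, built on scikit-learn's FastICA, walks through the full pipeline implied by the equations above: synthetic sources, one of which mimics a slow ocular artifact, are mixed according to X = AS + N, the independent components are estimated, the artifactual one is zeroed, and the channels are reconstructed. The simulated data, the component index flagged as artifactual, and the library choice are all assumptions made for illustration; in practice, the component to discard is selected from its topography, spectrum, or time course.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 4, 1 / fs)

# Synthetic sources: alpha-like rhythm, broadband activity, slow ocular-like artifact
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                0.5 * rng.standard_normal(t.size),
                np.sign(np.sin(2 * np.pi * 0.3 * t))]        # shape (n_samples, p)
A = rng.standard_normal((4, 3))                               # mixing matrix (m x p)
X = sources @ A.T + 0.05 * rng.standard_normal((t.size, 4))   # channels (n_samples, m)

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)          # estimated sources, one column per component

# Assume visual/automatic inspection flags component 2 as the ocular artifact
S_clean = S_hat.copy()
S_clean[:, 2] = 0.0
X_clean = ica.inverse_transform(S_clean)   # artifact-suppressed channels
print(X_clean.shape)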
5.2. Canonical Correlation Analysis (CCA)
This approach considers two sets of variables resulting from different electrode configurations or from EEG measurements under different experimental conditions. The goal is to discover the maximum correlations between these two sets of signals, which can represent, for example, different mental states or responses to different stimuli [98,99].
Let X be the m × n matrix representing the EEG signals during an experimental condition, and Y the q × n matrix representing the EEG signals, possibly from the same set of electrodes or from a different set, during another experimental condition. CCA finds linear combinations u = aᵀX and v = bᵀY, with a and b weight vectors, such that the correlation between u and v is maximized. The canonical correlation, r, between these linear combinations is expressed as:

r = (aᵀ C_XY b) / √((aᵀ C_XX a)(bᵀ C_YY b)),

where C_XY is the covariance matrix between X and Y, C_XX is the covariance matrix of X, and C_YY is the covariance matrix of Y. The objective is to maximize r with respect to a and b, subject to the constraints that the variances of the linear combinations u and v are normalized (i.e., aᵀ C_XX a = 1 and bᵀ C_YY b = 1).
The solution to this problem is obtained through the eigenvalue analysis of the matrices C_XX⁻¹ C_XY C_YY⁻¹ C_YX and C_YY⁻¹ C_YX C_XX⁻¹ C_XY, where the eigenvalues represent the squared canonical correlations r², and the corresponding eigenvectors a and b indicate the weight vectors that maximize these correlations.
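A compact numerical rendering of this eigenvalue formulation is sketched below in Python: the covariance blocks are estimated from two centered multichannel recordings and the leading canonical correlation is extracted. The synthetic data and the small diagonal regularization added to keep the covariance matrices well conditioned are illustrative assumptions, not part of the formal definition.

import numpy as np

def leading_canonical_correlation(X, Y, reg=1e-6):
    """X: (m, n) and Y: (q, n) centered signal matrices; returns r and the weights a."""
    n = X.shape[1]
    Cxx = X @ X.T / (n - 1) + reg * np.eye(X.shape[0])
    Cyy = Y @ Y.T / (n - 1) + reg * np.eye(Y.shape[0])
    Cxy = X @ Y.T / (n - 1)
    # Eigenproblem: Cxx^-1 Cxy Cyy^-1 Cyx a = r^2 a
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    r = np.sqrt(max(vals.real[order[0]], 0.0))
    return r, vecs[:, order[0]].real

rng = np.random.default_rng(1)
shared = rng.standard_normal(1000)                       # activity common to both recordings
X = np.vstack([shared + 0.5 * rng.standard_normal(1000) for _ in range(4)])
Y = np.vstack([shared + 0.5 * rng.standard_normal(1000) for _ in range(3)])
X -= X.mean(axis=1, keepdims=True)
Y -= Y.mean(axis=1, keepdims=True)
print(leading_canonical_correlation(X, Y)[0])            # close to 1 for strongly shared activity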
5.3. Mastering Artifact Removal: A Comparative Analysis of BSS, ICA, and CCA Techniques
Blind Source Separation (BSS), Independent Component Analysis (ICA), and Canonical Correlation Analysis (CCA) stand as pivotal methods in the identification and elimination of EEG artifacts, each bringing its set of strengths, limitations, and challenges to the forefront of neuroscience research. BSS excels in flexibility, adapting seamlessly to various signal types, yet it often struggles with determining the optimal number of sources and handling complex artifacts. ICA is particularly effective against specific artifacts, such as ocular disturbances, although it is constrained by its assumption of statistical independence among sources. CCA shines in studies that correlate EEG signals with external stimuli, yet its reliance on correlated datasets narrows its applicability to particular experimental conditions.
Despite their inherent potentials, the effectiveness of these techniques is intricately linked to the application context, the characteristics of the signals involved, and the precision in their implementation. The strategic integration of these methodologies could pave the way for a more robust and adaptable artifact removal approach, underscoring the need for a multidimensional strategy that transcends the individual limitations of each method while maximizing their collective benefits. This integrative approach not only enhances the understanding of complex EEG data but also propels the field toward more accurate and reliable interpretations of neural activity.
A summary of this SWOT analysis is shown in Table 2.
6. Unlocking the Power of Wavelet Transforms
The wavelet transform decomposes a signal into components at different frequency scales, making it easier to identify and remove artifacts in EEG signals. The continuous wavelet transform (CWT) of a signal x(t) is defined as the convolution of x(t) with a family of wavelet functions ψ_{a,b}(t), which are derived from a parent function ψ(t) through dilation and translation operations. Dilation is controlled by the parameter a, which regulates the scale or frequency of the wavelet, while translation is determined by the parameter b, which regulates the temporal position of the wavelet. The CWT is expressed as [100,101]:

W(a, b) = (1/√|a|) ∫ x(t) ψ*((t − b)/a) dt,

where W(a, b) represents the wavelet coefficient at scale a and position b, and ψ* is the complex conjugate of the wavelet function. This transformation produces a two-dimensional representation of the original signal in terms of scale and time, allowing for detailed analysis of its frequency components over time.
For practical analysis of EEG signals and removal of artifacts, the discrete wavelet transform (DWT), which is a sampled version of the CWT, is most frequently used.
DWT operates through a series of filtering and subsampling operations, exploiting the decomposition of the signal into a combination of wavelet functions, each corresponding to a specific frequency scale and temporal position. This process takes advantage of the ability of wavelets to offer a multi-resolution representation of the signal, isolating specific characteristics at different scales.
Let x[k] be the sampled EEG signal from which to remove the artifacts. The choice of a parent wavelet is crucial, since it determines the properties of the wavelet functions generated for the decomposition. The mother wavelet is a function that satisfies certain conditions, including zero integration (which ensures its oscillating nature) and rapid decay towards zero (which ensures temporal and frequency localization).
In the DWT, x[k] is iteratively processed to separate it into high- and low-frequency components via a high-pass filter (g) and a low-pass filter (h), over n decomposition levels. At each iteration or level, the signal is decomposed as follows:
Detail components d_j, which are obtained by filtering with g and represent high-frequency information or rapid variations in the signal;
Approximation components a_j, which are obtained by filtering with h and contain low-frequency information or the general trend of the signal.
Starting from a_0 = x, the decomposition at level j can be mathematically expressed by

a_j[k] = (a_{j−1} * h)[k],   d_j[k] = (a_{j−1} * g)[k].

After applying the filters, the signal is downsampled by a factor of 2, reducing the size of the signal for the next level of decomposition.
Artifacts, being unwanted components often localized in specific frequency bands, can manifest as high-frequency variations and therefore be captured in the detail components. By identifying these components and removing or modifying them, artifacts can be eliminated from the signal.
After the removal of the detail components corresponding to the artifacts, the reconstruction of the clean EEG signal occurs through the inverse process of the DWT, combining the residual approximation components with the modified or filtered details. The reconstruction involves upsampling the coefficient series, re-filtering them with the synthesis filters associated with h and g, and summing the results to obtain the original signal without the artifacts:

x_clean[k] = A_n[k] + Σ_{j=1..n} D̃_j[k],

where A_n is the contribution reconstructed from the final approximation coefficients and D̃_j are the contributions reconstructed from the modified detail coefficients at each level.
The choice of the mother wavelet is based on the types of artifacts and the characteristics of the signal: more regular wavelets are useful for gradual artifacts, more symmetric wavelets suit signals with symmetric characteristics, and both properties influence the ability to detect transient or extended events. Daubechies wavelets are known for their regularity and approximate symmetry, ideal for highlighting significant details in EEG signals, with the order determining their sensitivity to signal variations. Coiflets, offering greater symmetry, are suitable for transient signals such as EEG artifacts. Symlets, with improved symmetry compared to Daubechies, are effective in removing artifacts while maintaining the characteristics of biological signals.
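As an example of the workflow just described, the following sketch uses the PyWavelets package to decompose a contaminated channel with a Daubechies-4 wavelet, attenuate detail coefficients whose amplitude exceeds a robust threshold, and reconstruct the signal; the wavelet choice, the number of levels, and the threshold rule are assumptions made for illustration rather than a recommended clinical setting.

import numpy as np
import pywt  # PyWavelets

def dwt_artifact_suppression(x, wavelet="db4", level=5, k=3.0):
    """Clip detail coefficients whose amplitude exceeds k robust standard deviations."""
    coeffs = pywt.wavedec(x, wavelet, level=level)        # [a_n, d_n, ..., d_1]
    cleaned = [coeffs[0]]                                  # keep the approximation untouched
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745              # robust scale estimate
        thr = k * sigma
        cleaned.append(np.clip(d, -thr, thr))              # attenuate artifact-dominated coefficients
    return pywt.waverec(cleaned, wavelet)[: len(x)]

fs = 256
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)
contaminated = clean.copy()
contaminated[500:520] += 8.0                               # simulated high-amplitude spike artifact
restored = dwt_artifact_suppression(contaminated)
print(np.abs(restored - clean).max() < np.abs(contaminated - clean).max())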
Decoding Wavelet Approaches: A Comprehensive SWOT Analysis
Wavelet techniques in eliminating artifacts from EEG signals have advantages such as versatility and effectiveness in treating disturbances on various frequency scales, while maintaining important neural information. Challenges include the need for specialized knowledge to select the appropriate wavelet and determine decomposition levels, making the approach less accessible to laypeople. Opportunities lie in the development of new wavelet functions and decomposition algorithms that could improve artifact removal and integration with other signal processing techniques, promising more effective systems. Threats include technological advancement and the increasing complexity of EEG data that may surpass the current capabilities of wavelet techniques, as well as the need for real-time processing that poses computational efficiency challenges.
Table 3 summarizes the information highlighted above.
7. Unveiling the Essentials of Empirical Mode Decomposition (EMD)
EMD is a nonlinear, adaptive technique for signal analysis that decomposes a signal into a finite number of intrinsically oscillating components called Intrinsic Mode Functions (IMFs). This approach is particularly useful for non-stationary and non-linear signals such as EEG, allowing for the removal of artifacts while maintaining the characteristics of the original signal [102,103].
The EMD process for an EEG signal takes place through iterations that progressively extract the IMFs, which satisfy two conditions:
The number of local extrema and the number of zero crossings must differ by at most one throughout the function;
At each point, the average of the local maximum values (upper envelope) and local minimum values (lower envelope) must be zero.
The process begins by identifying all local extrema in x(t). These extrema are then used to construct the upper and lower envelopes by interpolation, typically using cubic splines. The upper envelope, e_max(t), is obtained by interpolating all the local maxima, while the lower envelope, e_min(t), is obtained by interpolating all the local minima of the signal. Once the envelopes are obtained, the mean m(t) of the upper and lower envelopes is calculated:

m(t) = (e_max(t) + e_min(t)) / 2.

The first IMF candidate, h_1(t), is obtained by subtracting m(t) from the original signal:

h_1(t) = x(t) − m(t).

This process is repeated on h_1(t), treating it as the new signal from which to extract the envelopes, until h_1(t) meets the conditions to be considered an IMF. At this point, h_1(t) is definitively accepted as the first IMF, c_1(t), of the original signal. The residue, r_1(t), obtained by subtracting c_1(t) from x(t), becomes the new signal on which to apply the entire process to extract the next IMF:

r_1(t) = x(t) − c_1(t).
This procedure is iterated, and each residual, r_j(t), is treated as the new signal from which to extract the next IMF, until the final residual no longer shows significant oscillation and can be considered a trend or residual noise.
Removing artifacts in EEG signals through EMD involves analyzing the extracted IMFs to identify those that represent artifacts, based on characteristics such as frequency, amplitude, or temporal behavior unrelated to the brain activities of interest. The IMFs identified as artifacts are then excluded, and the clean EEG signal is reconstructed by summing the remaining IMFs:

x_clean(t) = Σ_{j=1..k} c_j(t) + r_K(t),

where the sum runs over the k IMFs deemed free of artifacts and r_K(t) is the final residual after extraction of the last significant IMF.
EMD thus provides a flexible and adaptive method for the analysis of EEG signals, allowing for the decomposition of the signal into components with clear physical meaning and facilitating the removal of artifacts without assuming linearity or stationarity of the signal.
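The sketch below is a deliberately minimal rendering of the sifting procedure described above (cubic-spline envelopes, a fixed number of sifting iterations per IMF, and no special treatment of the signal boundaries); production implementations add proper stopping criteria and end-effect handling, so this should be read as an illustration of the algorithm's structure under those simplifying assumptions.

import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting step: subtract the mean of the upper and lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None                                        # too few extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0

def emd(x, t, max_imfs=6, n_sift=8):
    """Tiny EMD sketch using a fixed number of sifting iterations per IMF."""
    imfs, residue = [], x.copy()
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(n_sift):
            new_h = sift_once(h, t)
            if new_h is None:
                return imfs, residue
            h = new_h
        imfs.append(h)
        residue = residue - h
    return imfs, residue

fs = 256
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
imfs, residue = emd(x, t)
# Reconstruct while discarding the IMFs judged artifactual (none discarded here, shown for form)
x_clean = sum(imfs) + residue
print(len(imfs), np.allclose(x_clean, x))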
EMD Uncovered: A Strategic SWOT Analysis of the EMD
EMD is effective in processing EEG signals due to its ability to adapt to nonlinear and nonstationary data, decomposing it in ways that reflect natural oscillations. This allows one to remove artifacts while preserving neural information, thus offering flexibility in the analysis of time-varying signals. EMD challenges include a high computational burden and the need for manual interpretation of modes, limiting automatic use and variably handling noise and artifacts. The opportunities for EMD lie in the development of more efficient variants and integration with other techniques, potentially revolutionizing artifact identification and removal. However, threats include technological advancement and the increasing complexity of EEG data, as well as the challenge of real-time processing required for clinical applications.
Table 4 summarizes the above details.
8. Principal Component Analysis (PCA): Unpacking the Fundamentals
PCA is a statistical technique used to reduce the dimensionality of data while preserving most of the original variance. Applied to the removal of artifacts in EEG signals, PCA can isolate and eliminate major components that represent noise or artifacts, thereby improving the quality of the EEG signal for further analysis. Let us consider a matrix of EEG signals X of size m × n, where m represents the number of channels (electrodes) and n the number of time points (samples). The goal is to transform X into a new coordinate space that better represents the variance in the data. Initially, the mean of each channel is computed and subtracted from that channel to center the data around zero. This step is critical to ensure that PCA correctly identifies the directions of maximum variance without being influenced by the original location of the data. From the centered matrix X_c, the covariance matrix is obtained as [104,105]

C = (1/(n − 1)) X_c X_cᵀ,

which reflects how variances and covariances are distributed across channels. Subsequently, we proceed with the eigenvalue decomposition of C, finding the eigenvalues, λ_i, and the eigenvectors, v_i. Eigenvalues reflect the amount of variance captured by each principal component (PC), while eigenvectors indicate the directions (or principal components) in the original data space that maximize variance. Mathematically, this step is expressed as

C v_i = λ_i v_i,

where the eigenvalues are typically sorted as λ_1 ≥ λ_2 ≥ … ≥ λ_m. The centered data are then projected onto the principal components as

Y = Vᵀ X_c,

where V is the matrix containing the eigenvectors of C as columns. Y represents the data projected onto the principal components, where each row corresponds to a PC.
To remove artifacts, the main components that correspond to the artifacts are identified by analyzing the characteristics of the PCs, such as the waveform or spectral content. These PCs are therefore excluded from data reconstruction.
Finally, the clean EEG signals are reconstructed using only the PCs deemed free of artifacts. If V_r is the reduced eigenvector matrix that excludes the artifactual PCs, the reconstruction of the clean data X_clean is obtained as

X_clean = V_r Y_r,

where Y_r represents the selected principal components (excluding those of the artifacts).
PCA minimizes the impact of artifacts in EEG signals by preserving essential neural information, using dimensionality reduction to isolate and eliminate unwanted influences, and improving signal quality for future analysis.
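The following NumPy sketch condenses these steps: channel-wise centering, eigendecomposition of the channel covariance matrix, projection onto the sorted principal components, and reconstruction without the components marked as artifactual. The synthetic drift and the assumption that the first principal component captures it are illustrative only.

import numpy as np

def pca_artifact_removal(X, artifact_pcs):
    """X: (m channels, n samples). Reconstruct the data without the listed PCs."""
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                                         # center each channel
    C = Xc @ Xc.T / (X.shape[1] - 1)                      # channel covariance (m x m)
    eigvals, V = np.linalg.eigh(C)                        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    V = V[:, order]                                       # columns sorted by explained variance
    Y = V.T @ Xc                                          # projections onto the PCs
    keep = [i for i in range(V.shape[1]) if i not in set(artifact_pcs)]
    return V[:, keep] @ Y[keep, :] + mean

rng = np.random.default_rng(2)
brain = rng.standard_normal((4, 1000))
drift = 5.0 * np.sin(2 * np.pi * 0.3 * np.arange(1000) / 256)   # large slow drift
X = brain + drift                                          # same drift on every channel
X_clean = pca_artifact_removal(X, artifact_pcs=[0])        # assume PC0 captures the drift
print(X_clean.shape)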
Principal Component Analysis (PCA): A Rigorous SWOT Exploration
PCA is effective in reducing the dimensionality of EEG signals, facilitating the isolation and removal of artifacts by transforming the data into principal components that account for the largest variances. This process, however, has its challenges, including the assumption that artifacts always correspond to the components of greatest variance, risking eliminating important neural signals or retaining artifacts. A significant limitation is also the potential loss of important temporal information. Nonetheless, integrating PCA with other analysis techniques offers new opportunities to improve the accuracy of artifact removal. Future challenges include adapting to the increasing complexity of EEG data and the need for real-time processing for clinical applications, which challenge the effectiveness of PCA due to its computational demands.
Table 5 summarizes the highlights of the SWOT analysis.
9. Fundamental Concepts Underlying Adaptive Filters
The use of adaptive filters in the field of EEG signal processing marks a significant step forward, offering an efficient mechanism to address and remove artifacts while keeping valuable neural information intact [106]. These filters, distinguished by their ability to automatically adapt to signal changes, represent a breakthrough in EEG data cleaning. Among the various technologies available, Least Mean Squares (LMS) and Recursive Least Squares (RLS) filters stand out for their effectiveness and versatility, offering a sophisticated approach to the challenge of isolating and eliminating interference without compromising the original signal. The following description will focus exclusively on these two filters, exploring their characteristics, operating mechanisms, and application contexts. Through this focus, we aim to highlight how the LMS filter, with its simplicity and effectiveness, and the RLS filter, known for its speed and precision, are instrumental in significantly improving the quality of EEG analyses, contributing to the development of neuroscientific research and to the refinement of diagnostic processes in the clinical setting.
9.1. Exploring the Dynamics of Least Mean Squares (LMS) Algorithm
Suppose we have a reference signal, x(t), which represents an estimate of the artifact present in the EEG signal, d(t). The filtered output, y(t), is the result of the convolution of the filter weight vector, w(t), with the reference signal, x(t), represented as [107,108]:

y(t) = Σ_{i=0..M−1} w_i(t) x(t − i),

where M is the number of filter coefficients, and w_i(t) is the i-th weight of the filter at time t. The error, e(t), is therefore the difference

e(t) = d(t) − y(t).

The weight update formula in the LMS algorithm is used to minimize the mean square error between the desired signal and the filter output. Weight updates are given by:

w_i(t + 1) = w_i(t) + 2μ e(t) x(t − i),

where μ is the learning rate, a key parameter that affects the speed of convergence of the algorithm towards optimal error minimization. This parameter must be chosen carefully to ensure that the algorithm converges without becoming unstable.
By iterating over t, LMS dynamically adapts the filter weights based on the error calculated at each step, thus optimizing the weights to minimize the error between the desired EEG signal and the filter output. This adaptive process allows the artifact to be effectively isolated and removed, without the need for exact a priori knowledge of the nature of the artifact.
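A minimal LMS implementation of these update equations is sketched below; the filter length, the learning rate, and the sinusoidal stand-ins for the EEG and the artifact reference are illustrative assumptions.

import numpy as np

def lms_filter(d, x, M=8, mu=0.01):
    """LMS adaptive cancellation: d = contaminated EEG, x = artifact reference."""
    w = np.zeros(M)
    e = np.zeros(len(d))
    for t in range(M, len(d)):
        x_vec = x[t - M + 1:t + 1][::-1]       # current reference sample and M-1 delays
        y = np.dot(w, x_vec)                   # filter output = artifact estimate
        e[t] = d[t] - y                        # error = cleaned EEG sample
        w = w + 2 * mu * e[t] * x_vec          # LMS weight update
    return e, w

fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
artifact_ref = np.sin(2 * np.pi * 1.0 * t)     # e.g., an EOG reference channel
contaminated = eeg + 1.5 * artifact_ref
cleaned, _ = lms_filter(contaminated, artifact_ref)
print(np.mean((cleaned[fs:] - eeg[fs:]) ** 2))  # residual error after adaptation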
9.2. Deciphering Recursive Least Squares (RLS): Advanced Insights
The relationship between the desired signal, d(t), the reference signal, x(t), and the filter output, y(t), can be expressed as [109,110]:

y(t) = wᵀ(t) x(t),

where w(t) is the vector of filter weights at time t, and x(t) = [x(t), x(t − 1), …, x(t − M + 1)]ᵀ is the vector representing the reference signal and its delays up to the (M − 1)-th order, with M indicating the filter order. The error between d(t) and y(t) is given by:

e(t) = d(t) − wᵀ(t) x(t).

The objective of the RLS is to minimize the weighted sum of squared errors up to time t, denoted as

J(t) = Σ_{k=1..t} λ^{t−k} e²(k),

where λ is the forgetting factor, a constant between 0 and 1 that determines how quickly the weight of past errors decreases.
RLS updates the filter weights using the following rule:

w(t) = w(t − 1) + k(t) e(t),

where k(t) is the gain vector of the filter at time t, calculated as:

k(t) = (P(t − 1) x(t)) / (λ + xᵀ(t) P(t − 1) x(t)),

and P(t) is the inverse of the prediction error covariance matrix, updated as follows:

P(t) = (1/λ) [P(t − 1) − k(t) xᵀ(t) P(t − 1)].

Initially, P(0) is typically set as an identity matrix multiplied by a large scalar value, assuming limited knowledge of the initial state of the system.
The RLS algorithm dynamically updates the filter weights to minimize the mean square error, adapting in real time to changes in signals and artifacts. This ability for rapid and precise updates makes RLS particularly effective at removing artifacts from EEG signals, where characteristics can change quickly.
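The corresponding RLS recursion can be sketched as follows; the forgetting factor, the filter order, the large initial value of P(0), and the synthetic reference signal are illustrative assumptions.

import numpy as np

def rls_filter(d, x, M=8, lam=0.99, delta=1e3):
    """RLS adaptive cancellation: d = contaminated EEG, x = artifact reference."""
    w = np.zeros(M)
    P = delta * np.eye(M)                      # large initial inverse correlation matrix
    e = np.zeros(len(d))
    for t in range(M, len(d)):
        x_vec = x[t - M + 1:t + 1][::-1]
        e[t] = d[t] - np.dot(w, x_vec)                        # a priori error
        k = P @ x_vec / (lam + x_vec @ P @ x_vec)             # gain vector
        w = w + k * e[t]                                      # weight update
        P = (P - np.outer(k, x_vec @ P)) / lam                # update inverse matrix
    return e, w

fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)
artifact_ref = np.sin(2 * np.pi * 1.0 * t)
contaminated = eeg + 1.5 * artifact_ref
cleaned, _ = rls_filter(contaminated, artifact_ref)
print(np.mean((cleaned[fs:] - eeg[fs:]) ** 2))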
Table 6 summarizes the strengths and weaknesses of the filters presented here.
9.3. Adaptive Filters: A Comprehensive SWOT Breakdown
LMS and RLS adaptive filters represent advanced methodologies for removing artifacts from EEG signals, each with its specific advantages and limitations. The LMS filter is appreciated for its simplicity and adaptability, offering an effective method to address different noises in EEG data, but faces challenges in parameter optimization and can struggle with nonlinear artifacts. Nonetheless, integration with other techniques promises a more robust approach to artifact removal. Advances in machine learning and artificial intelligence could further improve the capabilities of the LMS filter. On the other hand, the RLS filter stands out for its high-precision and real-time update capability, making it ideal for applications requiring high signal fidelity. However, its computational complexity and the need for accurate calibration of initial parameters represent significant obstacles. The potential for integration with AI and the development of less computationally demanding variants could extend the applicability of RLS. Both filters address the challenge of the increasing complexity of EEG signals and the need for real-time processing, underscoring the importance of continuous innovations to maintain their effectiveness in removing artifacts across a wide range of scenarios.
Table 6 displays the relevant SWOT matrix.
10. Machine Learning Basics: Unlocking AI’s Core Principles
Machine learning approaches for artifact removal in EEG signals leverage the capability of learning algorithms to learn from specific examples to identify and filter out undesired elements from the data [111,112,113]. These methods employ datasets that include both clean EEG signals and signals contaminated by artifacts, allowing models to distinguish useful signal characteristics from artifact characteristics through a supervised learning process. Accurate dataset preparation is crucial here, as each example must be clearly labeled to ensure that the model develops effective predictive capabilities. Integrating fuzzy systems with deep learning techniques can lead to significant improvements in this area. Fuzzy systems, with their ability to handle uncertainty and produce decisions based on rules that can simulate human reasoning, can optimize the classification of signal features in the presence of ambiguous labels or highly variable signals that might confuse conventional deep learning models [114,115,116,117].
A hybrid model that combines the automatic feature extraction capabilities of deep learning with the robustness of fuzzy systems in managing ambiguity can improve the accuracy of artifact removal and offer greater flexibility in dealing with different types of artifacts. Such integration can significantly enhance the quality of EEG signals for subsequent uses, such as clinical monitoring and neuroscientific research [118,119,120,121,122,123].
Validating such hybrid approaches through comparative studies that examine performance against methods using solely machine learning or fuzzy systems can include sensitivity analyses and accuracy evaluations. The goal is to develop robust and reliable methodologies that can be effectively implemented in various biomedical signal analysis contexts, providing more sophisticated and accurate tools for industry professionals [124,125,126,127,128].
A common approach involves the use of neural networks, particularly Convolutional Neural Networks (CNNs), for their ability to automatically extract meaningful features from spatio-temporal data [129,130,131], and Recurrent Neural Networks (RNNs), which are effective in learning temporal dependencies in sequential data [132,133,134].
Here, we denote a dataset of EEG signals by D = {(X_i, y_i)}, i = 1, …, N, where X_i is the m × n signal matrix of the i-th EEG sample, with m channels and n time points, and y_i is the label indicating the presence (1) or absence (0) of artifacts.
10.1. Exploring CNN-Based Approaches: Cutting-Edge Techniques
A CNN processes the input X_i through several layers [129,130,131].
Convolutional Layers
Each convolutional layer transforms the input through a set of convolutional filters, K_j, where each filter has its own parameters and applies a convolution followed by a nonlinear activation (e.g., ReLU, f(z) = max(0, z)). The output of each filter, for a given layer, is given by

Z_j = f(K_j * X + b_j),

where Z_j is the convolutional output, b_j is the bias, and f is the nonlinear activation function.
Pooling Layers
Pooling layers reduce the dimensionality of the output of convolutional layers by applying operations such as max pooling, which selects the maximum value in specific sub-regions of the input:

P_j[u, v] = max_{(p,q) ∈ R(u,v)} Z_j[p, q],

where R(u, v) denotes the sub-region of the input associated with the output location (u, v).
Fully Connected Layers
The last layers of the network are fully connected and map the extracted features to the final outputs representing the model’s predictions:

ŷ = f(W h + b),

where h is the vector of flattened features, and W and b are the respective weights and biases.
The objective function to be minimized, often chosen as the cross-entropy between the predicted and true labels to classify whether the signal segment contains artifacts, can be formulated as

J(W, b) = −(1/N) Σ_{i=1..N} [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)].
The training of the network is carried out by optimizing the parameters W and b so as to minimize J. The updating of the parameters typically occurs via gradient descent or its variants, such as Adam, calculating the gradients of the objective function with respect to the parameters:

W ← W − η ∂J/∂W,   b ← b − η ∂J/∂b,

where η is the learning rate [129,130,131].
After training, a neural network is capable of classifying new EEG signal segments by detecting the presence of artifacts. Based on the model’s predictions, segments believed to contain artifacts can be modified or eliminated. Some advanced models can also generate a “clean” EEG signal as output, introducing methods of removing artifacts through signal reconstruction. Specifically, Convolutional Neural Networks (CNNs) can analyze previously unseen segments of an EEG, assigning each a probability of containing artifacts, using sigmoid or softmax functions to transform the network’s output into those probabilities:
$$p_i = \sigma(z_i) = \frac{1}{1 + e^{-z_i}},$$
where $\sigma$ denotes the sigmoid function and $z_i$ is the output of the last fully connected layer of the CNN for the $i$-th segment.
Based on $p_i$, a decision is made on how to treat each segment of the EEG signal. Typically, a threshold $\tau$ (e.g., $\tau = 0.5$) is established to classify a segment as containing an artifact if $p_i > \tau$. Segments identified as artifacts can be removed, interpolated, or treated with specific artifact removal techniques.
If a segment is classified as containing artifacts, there are several mathematically sound strategies for its removal or treatment.
10.1.1. Model-Based Interpolation and Substitution
Segments containing artifacts can be replaced by interpolating adjacent non-artifactual segments. If $X_{i-1}$ and $X_{i+1}$ are the clean segments before and after an artifactual segment $X_i$, a simple linear interpolation method replaces it with
$$\hat{X}_i = \frac{X_{i-1} + X_{i+1}}{2}.$$
Alternatively, a trained model, such as an RNN or another CNN, can be used to generate an estimate of the clean EEG signal, $\hat{X}_i$, based on the surrounding non-artifactual segments or on contextual characteristics.
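As an illustration of the thresholding and interpolation strategy above, the following NumPy sketch replaces segments flagged as artifactual with the average of their clean neighbours when possible; the threshold $\tau = 0.5$, the array shapes, and the fallback behaviour are assumptions chosen for illustration.

# Sketch of threshold-based artifact handling with linear interpolation.
import numpy as np

def clean_by_interpolation(segments: np.ndarray, probs: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """segments: (num_segments, m, n) EEG segments; probs: artifact probabilities p_i."""
    cleaned = segments.copy()
    is_artifact = probs > tau                   # classify segment i as artifact if p_i > tau
    for i in np.where(is_artifact)[0]:
        prev_i = i - 1 if i > 0 else None
        next_i = i + 1 if i < len(segments) - 1 else None
        if prev_i is not None and next_i is not None \
                and not is_artifact[prev_i] and not is_artifact[next_i]:
            # Replace X_i with the average of the neighbouring clean segments.
            cleaned[i] = 0.5 * (segments[prev_i] + segments[next_i])
        else:
            cleaned[i] = 0.0                    # fallback: zero out (or drop) the segment
    return cleaned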
10.1.2. Continuous Updating and Optimization
The training process of CNNs in the analysis of EEG signals is dynamic and allows for continuous updates (fine-tuning) upon detection of new data or artifacts, ensuring the effectiveness of the model over time and in various conditions. These machine learning methods combine neural network learning with advanced mathematical strategies to effectively deal with signal segments deemed artifactual, offering an advanced framework for removing artifacts from EEG signals.
10.2. RNN-Based Approaches
Approaches using RNNs for analyzing EEG signals benefit from their ability to process sequential data and learn long-term relationships between samples, which is a useful feature for dealing with the dynamic and non-stationary nature of EEG signals.
An RNN processes sequences of data $x_1, x_2, \ldots, x_T$ by updating its hidden state $h_t$ at each time step based on the previous hidden state $h_{t-1}$ and on the current input $x_t$. The basic dynamics of a simple RNN cell can be expressed as [132,133,134]:
$$h_t = \phi\left(W_{hh} h_{t-1} + W_{xh} x_t + b_h\right), \qquad \hat{y}_t = \psi\left(W_{hy} h_t + b_y\right),$$
where $W_{hh}$, $W_{xh}$, and $W_{hy}$ are the weights that connect the previous hidden state to the new hidden state, the input to the hidden state, and the hidden state to the output, respectively. The terms $b_h$ and $b_y$ are the biases for the hidden state and the output, and $\phi$ (with $\psi$ for the output) is an activation function, typically a hyperbolic tangent (tanh) for the hidden state and a sigmoid or softmax for the output, depending on whether the task is a regression or classification.
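A direct NumPy transcription of these update equations is sketched below; the dimensions, the random initialization, and the per-step sigmoid output are illustrative assumptions.

# NumPy sketch of a simple RNN cell applied to an EEG feature sequence.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 19, 32                       # input features per time step, hidden units
W_hh = rng.normal(scale=0.1, size=(d_hidden, d_hidden))
W_xh = rng.normal(scale=0.1, size=(d_hidden, d_in))
W_hy = rng.normal(scale=0.1, size=(1, d_hidden))
b_h, b_y = np.zeros(d_hidden), np.zeros(1)

def rnn_forward(x_seq: np.ndarray) -> np.ndarray:
    """x_seq: (T, d_in) sequence of EEG feature vectors; returns per-step artifact scores."""
    h = np.zeros(d_hidden)
    outputs = []
    for x_t in x_seq:
        h = np.tanh(W_hh @ h + W_xh @ x_t + b_h)            # h_t = tanh(W_hh h_{t-1} + W_xh x_t + b_h)
        y_t = 1.0 / (1.0 + np.exp(-(W_hy @ h + b_y)))       # y_t = sigmoid(W_hy h_t + b_y)
        outputs.append(y_t.item())
    return np.array(outputs)

scores = rnn_forward(rng.normal(size=(100, d_in)))          # 100 time steps of dummy data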
10.3. Applying RNN Techniques to EEG Sequences for Artifact Removal
During the training phase, an RNN is trained on a dataset $\{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$, where $y^{(i)}$ indicates the ground truth, which can be the presence of artifacts or the clean EEG signal corresponding to $x^{(i)}$. The training goal is to minimize a loss function $J$ that measures the discrepancy between the network outputs $\hat{y}^{(i)}$ and the target labels or values $y^{(i)}$. For classification problems, a common choice for $J$ is the cross-entropy, while for regression problems the mean square error (MSE) can be used:
$$J = \frac{1}{N} \sum_{i=1}^{N} L\left(\hat{y}^{(i)}, y^{(i)}\right),$$
where $L$ is the specific loss function (e.g., cross-entropy or MSE).
To optimize the weights of the RNN, one can use a technique called Backpropagation Through Time (BPTT), which extends the concept of backpropagation to handle the temporal dependencies of the data. BPTT computes the gradients of the loss function with respect to the weights of the network by unrolling the network over time and applying the chain rule across the time sequence.
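In practice, BPTT is handled automatically by autograd frameworks. The sketch below assumes PyTorch and an LSTM cell with illustrative sizes, and shows a single training step in which gradients are propagated through the unrolled sequence; the architecture, labels, and hyperparameters are assumptions, not the configuration of any cited study.

# Sketch of RNN training where BPTT is performed by autograd (PyTorch).
import torch
import torch.nn as nn

class ArtifactRNN(nn.Module):
    def __init__(self, n_channels: int = 19, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # per-time-step artifact logit

    def forward(self, x):                     # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)       # (batch, time)

model = ArtifactRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()              # loss L for per-sample artifact classification

x = torch.randn(4, 200, 19)                   # 4 sequences, 200 time steps, 19 channels
y = torch.randint(0, 2, (4, 200)).float()     # per-time-step artifact labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)                   # J averaged over samples and time steps
loss.backward()                               # BPTT: gradients flow through the unrolled sequence
optimizer.step()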
10.4. Artifact Removal
After training, the RNN can be used to process new EEG signals, identifying and filtering artifacts. For signal segments classified as containing artifacts, post-processing techniques may include removal or replacement.
10.5. CNN and RNN Approaches: A Detailed SWOT Analysis
CNNs and RNNs are promising techniques for removing artifacts from EEG signals, each with its strengths and challenges. CNNs excel at automatically learning features from EEG data, offering precise artifact removal due to their ability to adapt to signal complexities. However, they require large volumes of labeled data for training and have high computational demands, posing challenges in real-time applications and in hardware-constrained contexts. Despite this, the evolution of hardware dedicated to AI and the integration with other methodologies promise to overcome these limitations. On the other hand, RNNs, taking advantage of their ability to process sequential data, offer unique advantages in the analysis of EEG signals, thanks to their ability to capture temporal dependencies and preserve the integrity of the original signal. However, they encounter obstacles such as the problem of vanishing or exploding gradients and also require large datasets for training. Despite the challenges, advancements in deep learning and the introduction of advanced architectures such as LSTM and GRU open up new possibilities for the effectiveness of RNNs in removing artifacts. Both technologies, however, must navigate a rapidly evolving environment with the increasing complexity of EEG signals, which presents ongoing challenges in adapting to new types of artifacts and managing increased data complexity.
Table 7 summarizes these remarks.
11. Exploring Hybrid Approaches: Integrating Diverse Techniques
Adopting a hybrid approach to artifact removal is essential, given their considerable diversity and complexity. Integrating different methods within a single hybrid framework not only improves flexibility and adaptability regarding signal variations and analytical needs, but also optimizes the quality of the EEG signal while preserving essential information contents. Given the wide range of possible combinations between the various approaches, this review focuses attention on the most recent hybrid approaches, highlighting the importance of adaptive and versatile strategies to effectively address the challenges presented by EEG signals, both in research and for clinicians.
Table 8 provides a comprehensive synthesis of hybrid approaches.
11.1. Exploring Hybrid Approaches for Internal Artifact Removal: Case Studies
Ref. [16] proposes a classification method based on Recurrent Neural Networks (RNN) to distinguish healthy subjects from those affected by Alzheimer’s disease using EEG data. Through the application of preprocessing techniques such as PCA, and especially robust PCA (RPCA) on corrupted data, the authors achieved an accuracy greater than 97% in the tests. RPCA, in particular, has been shown to improve accuracy by approximately 5% compared to standard PCA, even in the presence of a high rate of data corruption. These results pave the way for the use of this model for the early diagnosis of Alzheimer’s disease and suggest the possibility of extending the approach to various neurodegenerative diseases, emphasizing the effectiveness of advanced signal processing techniques in improving the accuracy of classification based on brain signals.
Ref. [135] describes an advanced method based on spatio-temporal Convolutional Neural Networks and Independent Component Analysis (ICA) for the automatic removal of artifacts from magnetoencephalography (MEG) signals. These artifacts, typically originating from eye blinks, saccades, and cardiac activity, are identified and eliminated without the use of additional electro-oculography (EOG) and electrocardiography (ECG) electrodes, thus simplifying imaging procedures and improving patient comfort. Using this approach, it was possible to achieve high accuracy in artifact classification, with results showing a detection accuracy of 98.95%, a sensitivity of 96.74%, and a specificity of 99.34% on a large sample of 217 subjects, both in resting conditions and during the execution of specific tasks.
This method represents a significant advance in the processing of MEG signals, providing an automatic and highly effective solution for cleaning data from non-neuronal interference. The ability to automatically adapt to acquisition time and the potential elimination of the need for EOG or ECG electrode monitoring promise dramatic improvements in the efficiency and accessibility of MEG imaging for both clinical and research applications.
The study described in [136] extends the use of a robust adaptive noise cancellation scheme for the simultaneous removal of eye blinks, eye movements, amplitude drift, and recording bias. In particular, volume conduction was characterized by estimating the levels of signal propagation through all scalp EEG recording areas due to ocular artifact generators. Each electrode is treated as a separate subsystem to be filtered, assumed uncorrelated and uncoupled. The results show a correlation between the raw and processed signal of more than 95–99.9% in the regions not affected by ocular artifacts and a correlation of 40–70% in the regions dominated by ocular artifacts. The results were compared with Independent Component Analysis (ICA) and artifact subspace reconstruction methods, showing that some local quantities are better handled by the real-time adaptive framework. The integration of Convolutional Neural Networks (CNNs) further improves the discrimination and classification ability of EEG signals. The decoding performance, compared with multi-day experimental data from two subjects, for a total of 19 sessions, with and without raw data filtering, shows a significant increase, supporting the effectiveness of the method for real-time closed-loop BMI applications.
11.2. Hybrid Approaches for External Artifact Removal: Case Study Analysis
In [137], an innovative solution for the removal of external artifacts through the combined use of EMD and CCA is introduced, which demonstrates exceptional effectiveness in eliminating motion artifacts without compromising neural information in the EEG. Its computational efficiency, combined with improved signal cleaning accuracy, positions this approach as a promising method to address one of the most pressing challenges in EEG signal analysis: the reliable elimination of external artifacts.
Ref. [138] introduces an innovative method for external artifact removal and signal classification that combines DWT and ANN. This hybrid approach led to an average increase of 15–16% in classification accuracy, demonstrating the effectiveness of the proposed methodology in improving both the quality of the EEG signal and the reliability of BCI systems. Through the integration of advanced signal processing and machine learning techniques, the work opens up new perspectives for optimizing BCI systems, promising significant improvements in usability and performance, and making them more accessible and reliable for users.
The paper [139] analyzes and proposes methods based on blind source separation (BSS) to eliminate motion artifacts from EEG signals, using combinations of Independent Component Analysis (ICA), canonical correlation analysis (CCA), the discrete wavelet transform (DWT), and the stationary wavelet transform (SWT). Through tests on both clean EEG signals and signals with simulated motion artifacts, the authors demonstrate that the CCA algorithm surpasses ICA in efficiency and speed in removing artifacts, while preserving important neural information. This significantly contributes to improving the accuracy and reliability of EEG-based diagnoses and signals promising directions for future research.
Table 9 summarizes the SWOT analysis.
11.3. Exploring Hybrid Strategies for Internal and External Artifact Removal: Case Studies Insights
Ref. [140] describes an innovative two-step method for removing artifacts from EEG signals, using DWT, CCA, and an anomaly detection algorithm. This approach is effective against various types of artifacts, with particularly good results for ocular ones, while it is less effective for muscular artifacts due to their wide frequency distribution. Removal of power line artifacts improves as the noise intensity increases. The method demonstrates excellent performance at low SNR, maintaining the original EEG information. The analyses confirm the effectiveness of the method, especially through comparisons with other techniques, underlining its potential for different EEG applications, thanks to its ability to adapt to data and experimental conditions.
Ref. [141] discusses the use of the Artifact Subspace Reconstruction (ASR) method to automatically clean EEG signals of artifacts, underlining the importance of this technique for improving data reliability in contexts such as brain–computer interfaces and clinical monitoring. ASR proves effective in removing artifacts caused by eye and muscle movements, among others, while ensuring the preservation of the neural components essential for accurate analyses. Although the paper highlights the potential of ASR in real-time applications and the need for parameter optimization to maximize effectiveness, specific quantitative details illustrating the degree of improvement or efficiency of the method are lacking.
Ref. [142] presents an automated framework based on the use of two-dimensional Convolutional Neural Networks (CNNs) for the recognition and removal of artifacts from EEG signals, represented in the scalp topographies of the Independent Components (ICs). This method stands out for its ability to accurately identify and classify various types of artifacts, including those due to eye movements, muscle and cardiac activity, as well as general electrical interference. The proposed strategy significantly improves the reliability and performance of EEG-based brain–computer interfaces (BCIs), providing a fast, accurate, and scalable system for cleaning EEG signals from both external and internal artifacts. Thanks to its optimized architecture and ease of training, the framework is perfectly suited for use in online BCI contexts, where a fast processing response is crucial. This innovative solution promises a dramatic improvement in the quality of interpretation of brain signals, facilitating the application of BCI technologies in various practical and clinical scenarios.
To confirm the superior performance of CNN techniques compared to LMS and RLS, Ref. [143] evaluated the performance of these methods through the analysis of plots in the time domain and the calculation of metrics such as the root-mean-square error (RMSE) and the improvement of the signal-to-noise ratio (SNR). They found that FCNN predicts the clean EEG signal better than the other two algorithms, especially in the presence of noise, offering a notable improvement in the quality of the de-noised signal. This work highlights the potential of deep learning methods in cleaning EEG signals, suggesting that FCNN may be particularly useful in clinical and research applications requiring high-quality EEG data. As above,
Table 10 summarizes these remarks.
Having reviewed the standard techniques, we will now focus on fuzzy methodologies as mentioned in the introduction, exploring how these advanced approaches improve the removal of artifacts in EEG signals, enriching the analysis with their ability to handle uncertainty and ambiguity of neurological data.
12. Evolving Fuzzy Techniques for EEG Artifact Removal
Fuzzy logic turns out to be a particularly suitable tool for removing artifacts in EEG signals, being capable of effectively dealing with the uncertainty and variability intrinsic to these signals. Through the use of fuzzy sets and membership functions, it enables accurate representation of neural artifacts and features, facilitating a more intuitive and contextualized filtering process. The integration of fuzzy inference rules with EEG field expertise contributes to the creation of highly adaptive and customizable artifact removal systems, thus significantly improving the quality of EEG signals and, consequently, the accuracy of diagnoses and the effectiveness of clinical research.
12.1. Takagi–Sugeno (TS) Fuzzy Techniques for Removing Artifacts in EEG Signals
The use of fuzzy TS systems in removing artifacts from EEG signals greatly improves the analysis, effectively addressing the complexity and ambiguity of the signals. These systems simultaneously evaluate multiple signal characteristics for more complete and precise artifact removal.
In fuzzy TS systems, the output is a function of multiple inputs. Consider a system with $n$ inputs (extracted features) $x_1, x_2, \ldots, x_n$ and one output $y$. Each rule can be expressed as:
$$R_j: \text{IF } x_1 \text{ is } A_{j1} \text{ AND } x_2 \text{ is } A_{j2} \text{ AND } \ldots \text{ AND } x_n \text{ is } A_{jn} \text{ THEN } y_j = f_j(x_1, x_2, \ldots, x_n),$$
where $A_{j1}, A_{j2}, \ldots, A_{jn}$ are fuzzy sets associated with the various inputs, and $f_j$ is a function describing the linear or non-linear relationship between inputs and output. As in [144], each $x_i$ is evaluated through fuzzy membership functions $\mu_{A_{ji}}(x_i)$ to determine its degree of membership in the relevant fuzzy sets.
With fuzzy evaluations of each feature, inference rules are implemented to establish how different degrees of membership influence artifact removal. For each rule $R_j$, the fuzzy output $y_j$ corresponds to the filtering or correction decision based on the combination of the input characteristics.
The system output, which determines the artifact removal action, is obtained by aggregating the outputs of all the rules. The aggregation can be carried out via the weighted-average (centroid) method, where the overall output $y$ is given by the weighted average of the outputs of each rule, weighted by their degrees of truth $w_j$:
$$y = \frac{\sum_{j} w_j \, y_j}{\sum_{j} w_j}, \qquad \text{in which } w_j = \prod_{i=1}^{n} \mu_{A_{ji}}(x_i).$$
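The following sketch illustrates a zero-order TS inference of this kind on two hypothetical EEG features (segment amplitude and relative high-frequency power). The membership functions, rule consequents, and numerical values are assumptions chosen only to show the mechanics of rule firing and weighted-average aggregation.

# Zero-order Takagi-Sugeno sketch with two rules on two illustrative features.
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function mu(x)."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_artifact_score(amplitude: float, hf_power: float) -> float:
    # Rule 1: IF amplitude is HIGH AND hf_power is HIGH THEN y1 = 1.0 (treat as artifact)
    w1 = gauss(amplitude, c=150.0, s=40.0) * gauss(hf_power, c=0.8, s=0.2)
    y1 = 1.0
    # Rule 2: IF amplitude is NORMAL AND hf_power is LOW THEN y2 = 0.0 (leave signal untouched)
    w2 = gauss(amplitude, c=40.0, s=20.0) * gauss(hf_power, c=0.2, s=0.2)
    y2 = 0.0
    # Weighted-average aggregation: y = sum(w_j * y_j) / sum(w_j)
    return (w1 * y1 + w2 * y2) / (w1 + w2 + 1e-12)

print(ts_artifact_score(amplitude=160.0, hf_power=0.9))   # close to 1: flagged as artifact
print(ts_artifact_score(amplitude=35.0, hf_power=0.15))   # close to 0: kept as neural signal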
12.2. Fuzzy TS Systems: SWOT Analysis
Takagi–Sugeno Type-1 fuzzy systems are designed to implement fuzzy modeling without integration of degrees of non-membership, using membership functions and if–then rules that output linear or constant mathematical functions. This structuring allows the systems to operate with less computational complexity, making them efficient for real-time processing of EEG signals where the types of artifacts are well identified and consistent. However, the lack of management of degrees of non-membership limits the system’s ability to adapt to new or variable artifacts that were not included in the initial model. This is especially problematic for external artifacts such as electromagnetic interference or voltage variations, which can introduce highly variable and unpredictable signals. Likewise, internal artifacts such as rapid eye movements or cardiac signal fluctuations may not be correctly identified and removed if they do not fall within the default parameters of the membership functions. Despite these limitations, significant opportunities exist for the evolution of these systems. Integrating with machine learning algorithms to dynamically update membership functions or introducing components that simulate degrees of non-membership could broaden the range of artifacts that systems can effectively handle. This improvement could increase their applicability not only in controlled environments but also in more complex and dynamic clinical and research contexts. The main threat to their long-term effectiveness is the evolution of signal processing technologies, including the rise of intuitionistic approaches that offer more comprehensive and adaptive modeling of uncertainties present in EEG signals. Without continuous upgrades, Takagi–Sugeno Type-1 systems could become less competitive when compared to more advanced solutions capable of better adapting to complex artifact scenarios. For a compact and immediate view of the strengths, weaknesses, opportunities, and threats associated with these systems, we recommend consulting
Table 11.
12.3. Fuzzy Approaches for the Removal of Artifacts: Case Studies
In the context of contemporary research on neurology and EEG signal analysis, a limited number of publications have been selected to outline four essential guidelines on the emerging field of EEG artifact removal using fuzzy TS systems. This focused selection highlights the effectiveness of fuzzy logic-based techniques, integrated with neural networks, in significantly improving the quality of EEG signals. Through these exemplary studies, it is observed how fuzzy approaches not only contribute to the removal of internal and external artifacts but also provide a substantial increase in diagnostic result accuracy, underscoring their importance in the early and accurate diagnosis of neurological disorders. This synthesis of selected research provides a clear picture of the potential and future directions of scientific research in EEG artifact removal using fuzzy TS techniques.
The first method [145] utilizes the TS fuzzy system to address internal artifacts in Alzheimer’s patients, demonstrating how phase synchronization analysis can isolate intrinsic anomalies associated with pathological conditions in the EEG signal. This approach improves model interpretation and enhances classification accuracy, highlighting the importance of precisely modeling EEG signal characteristics in Alzheimer’s diagnosis.
The second study [146] extends the application of fuzzy techniques to the analysis of EEG networks through the Weighted Visibility Graph, providing a distinctive framework of the structural differences in EEG signals between healthy subjects and those afflicted with Alzheimer’s. This approach marks a significant advancement in recognizing the specific network topologies characteristic of neurological disorders, showing how the integration of network-based and fuzzy learning techniques can significantly enhance the effectiveness of EEG signal classification.
The third and fourth methods [147,148], utilizing ANFIS (Adaptive Neuro-Fuzzy Inference System) for EEG signal classification in detecting epileptic seizures and in brain–computer interfacing, respectively, demonstrate the applicability of fuzzy techniques in real-time EEG signal processing. These approaches emphasize the ability of fuzzy techniques to provide accurate analysis and fine modulation of the artifact removal process, maintaining the integrity of the original signal for more precise diagnoses and advanced applications such as device control via BCI. In addition to the reasons already outlined in the introduction of the review, the choice of ANFIS over more sophisticated fuzzy techniques is motivated by the excessive computational complexity of the latter. Although advanced TS techniques can offer a slight improvement in performance, the increase in computational complexity they require is not proportionate to the benefits obtained. This makes ANFIS a preferable choice, as it provides an optimal balance between computational efficiency and performance improvement, effectively adapting to application contexts where computing resources may be limited.
Table 12 summarizes the above information.
12.4. An Important Expansion: Intuitionistic Fuzzy Systems (IFS)
Adopting an intuitionistic approach to fuzzy TS systems in removing artifacts from EEG signals constitutes a notable advance, adding an evaluation dimension that refines uncertainty management and increases flexibility in dealing with ambiguities in EEG signals. This mathematical advancement enriches the understanding of the uncertainty related to each component of the signal, enhancing the effectiveness in distinguishing artifacts from true neural components.
In intuitionistic fuzzy TS systems (ITS), each rule not only evaluates the presence of a characteristic through a degree of membership but also considers a degree of non-membership, thus introducing a bi-valued approach that allows for more sophisticated management of uncertainty. A rule in such a system can be expressed as:
$$R_j: \text{IF } x_1 \text{ is } \tilde{A}_{j1} \text{ AND } \ldots \text{ AND } x_n \text{ is } \tilde{A}_{jn} \text{ THEN } y_j = f_j(x_1, \ldots, x_n),$$
where $\tilde{A}_{j1}, \tilde{A}_{j2}, \ldots, \tilde{A}_{jn}$ are intuitionistic fuzzy sets, $x_1, x_2, \ldots, x_n$ are the characteristics of the EEG signal, and $y$ is the artifact removal action calculated by a function $f$.
Every intuitionistic fuzzy set $\tilde{A}$ is characterized by a membership function $\mu_{\tilde{A}}(x)$, which measures the degree to which $x$ belongs to $\tilde{A}$, and a non-membership function $\nu_{\tilde{A}}(x)$, which measures the degree to which $x$ does not belong to $\tilde{A}$. The uncertainty (hesitation) index $\pi_{\tilde{A}}(x)$ for an element $x$ regarding a fuzzy set $\tilde{A}$ can be defined as:
$$\pi_{\tilde{A}}(x) = 1 - \mu_{\tilde{A}}(x) - \nu_{\tilde{A}}(x), \qquad \text{with } 0 \le \mu_{\tilde{A}}(x) + \nu_{\tilde{A}}(x) \le 1.$$
This index measures the uncertainty or ambiguity in the classification of EEG signals, improving the accuracy of the evaluation in cases of indeterminate presence of artifacts. The intuitionistic approach requires the analysis of signal characteristics related to artifacts, using membership and non-membership functions. Artifact removal decisions are based on the aggregation of these grades and the uncertainty index for each feature.
For each feature $x_i$, we define $\mu_{\tilde{A}_{ji}}(x_i)$ and $\nu_{\tilde{A}_{ji}}(x_i)$ by modulating the shape of the membership and non-membership functions.
The final output $y$, indicating the artifact removal decision, is obtained by aggregating the outputs of all inference rules weighted by their degrees of membership and non-membership. For example,
$$y_{\mu} = \frac{\sum_{j} w_j^{\mu} \, y_j}{\sum_{j} w_j^{\mu}}, \qquad y_{\nu} = \frac{\sum_{j} w_j^{\nu} \, y_j}{\sum_{j} w_j^{\nu}},$$
where $w_j^{\mu}$ and $w_j^{\nu}$ are the rule firing strengths computed from the membership and non-membership degrees, respectively. Finally, the convex combination of $y_{\mu}$ and $y_{\nu}$ produces the final output of the IFS:
$$y = \beta \, y_{\mu} + (1 - \beta) \, y_{\nu},$$
in which $\beta \in [0, 1]$ represents the weight of $y_{\mu}$. Obviously, if $\beta = 1$, the system becomes a classical TS; if $\beta = 0$, only the non-membership component impacts the TS system. In this paper, to equally consider the two components (membership and non-membership), we set $\beta = 1/2$.
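A minimal sketch of this intuitionistic extension is given below for a single illustrative feature. The membership and non-membership functions, the two rule consequents, and the numerical values are assumptions; only the convex combination with $\beta = 0.5$ follows the formulation above.

# Sketch of an intuitionistic TS inference on one illustrative feature (segment amplitude).
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ifs_artifact_score(amplitude: float, beta: float = 0.5):
    """Two zero-order rules on a hypothetical amplitude feature (in microvolts)."""
    # Intuitionistic description of the premises "amplitude is HIGH" / "amplitude is NORMAL".
    mu_high = gauss(amplitude, c=150.0, s=40.0)
    nu_high = min(1.0 - mu_high, gauss(amplitude, c=40.0, s=30.0))   # enforce mu + nu <= 1
    mu_norm = gauss(amplitude, c=40.0, s=30.0)
    nu_norm = min(1.0 - mu_norm, gauss(amplitude, c=150.0, s=40.0))

    pi_high = 1.0 - mu_high - nu_high        # hesitation index of the HIGH premise

    y1, y2 = 1.0, 0.0                        # rule consequents: 1 = remove, 0 = keep

    # Membership-driven TS output y_mu and non-membership-driven output y_nu.
    w1_mu, w2_mu = mu_high, mu_norm
    y_mu = (w1_mu * y1 + w2_mu * y2) / (w1_mu + w2_mu + 1e-12)
    w1_nu, w2_nu = 1.0 - nu_high, 1.0 - nu_norm
    y_nu = (w1_nu * y1 + w2_nu * y2) / (w1_nu + w2_nu + 1e-12)

    # Convex combination; beta = 1 recovers the classical TS output.
    return beta * y_mu + (1.0 - beta) * y_nu, pi_high

score, hesitation = ifs_artifact_score(amplitude=120.0)
print(round(score, 3), round(hesitation, 3))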
12.5. In-Depth SWOT Analysis of IFS
One of the main advantages of intuitionistic fuzzy systems is their superior flexibility and adaptability, which makes them suitable for real-time applications where signals can exhibit rapid changes. This is essential for both the removal of internal artifacts—generated internally by the subject, such as heartbeat or eye movement—and external ones, caused by external sources such as electromagnetic interference or current fluctuations. Thanks to their ability to adapt dynamically, intuitionistic fuzzy systems can reconfigure their parameters to specifically respond to these different types of artifacts, significantly improving removal accuracy. However, these systems also present significant challenges related to their complexity. Design, implementation, and optimization require in-depth knowledge of fuzzy logic and specific characteristics of EEG signals, as well as increased computational capacity to handle the additional membership and non-membership information. This can increase the cost and difficulty of maintenance compared to traditional fuzzy systems. A further opportunity for intuitionistic fuzzy systems emerges from integration with neural techniques, such as deep neural networks. This combination can further improve performance in removing artifacts, especially external ones, by taking advantage of the ability of neural networks to learn complex data representations and adapt to nonlinear characteristics of EEG signals. This hybrid approach could be particularly effective in dealing with artifacts that conventional methods struggle to identify and isolate.
The SWOT analysis is summarized in
Table 13 for quick reference.
12.6. Fuzzy Intuitionistic Approaches for the Removal of Internal/External Artifacts: Case Studies
The paper [149] examines an innovative approach for removing artifacts in EEG signals, employing intuitionistic fuzzy sets. The procedure begins with extracting features from EEG signals using Fourier transformations, which transform the signals from the time domain to the frequency domain. This step is crucial to improve the identification of significant features needed for subsequent classification. Subsequently, PCA analysis is used to reduce the dimensionality of the data, concentrating the most relevant information into a smaller number of variables. The core of the proposed method concerns the fuzzification of this reduced data, followed by its processing through a classifier based on fuzzy decision trees. This classifier benefits significantly from the ability of intuitionistic fuzzy sets to handle uncertainty. The results demonstrate that the proposed approach not only improves the classification accuracy of EEG signals, but also offers greater robustness in handling artifacts that are often present in abundance in EEG data. The ability to effectively reduce the impact of artifacts, while maintaining high classification accuracy, represents a clear advantage over conventional methods, signaling a significant breakthrough in EEG monitoring technology. These innovative aspects highlight the potential of intuitionistic fuzzy sets in improving the diagnosis and treatment of neurological conditions, facilitating more informed and accurate clinical decisions.
The paper [150] discusses an innovative extension of intuitionistic fuzzy sets applied to the classification of EEG signals. The authors introduce a degree of negative hesitation that helps to better manage uncertainty in decision-making situations and pattern classification. The procedure described in the article is based on a construction of degrees of membership, non-membership, and negative hesitation through the analysis of the area of overlap between the projections of the elements and classes in a two-dimensional space. This method allows the uncertainty associated with the elements to be expressed more flexibly and improves the discrimination between different classes of patterns, which is particularly useful in the recognition of complex patterns such as those of EEG signals. The results highlight the effectiveness of the proposed approach (NHFS) in classifying EEG signals, with a significant improvement compared to traditional methods based on intuitionistic fuzzy sets. The ability of the approach to handle data uncertainty and ambiguity has led to greater classification accuracy, demonstrating that this new approach can overcome some limitations of existing methods, particularly in applications where data has a high degree of overlap and complexity.
The paper [151] proposes an innovative approach for the processing of EEG signals, based on the use of an intuitionistic fuzzy strategy in the context of double-center large-margin distribution machines. This methodology stands out for the integration of an intuitionistic fuzzy function that improves computational efficiency through a precise determination of the degree of non-membership of a sample, calculated based on the distance from the centers of two different categories. The peculiarity of the model lies in its ability to optimize the margin distribution, rather than focusing solely on the minimum margin. Using mean and variance of the margin as key parameters, the model aims for more effective generalization, overcoming the limitations of previous methods that do not optimally handle uncertainty and noise. Furthermore, the model incorporates a regularization term to minimize structural risk, further improving robustness against noise and potentially facilitating the removal of artifacts from EEG signals. Through experiments on synthetic datasets and benchmarks, the approach has been shown to significantly outperform existing methods in terms of noise immunity and classification ability. The sensitivity analysis of the parameters confirmed the high accuracy and stability of the model, highlighting its superiority in managing EEG signals, even in the presence of artifacts.
The paper [152] explores a new line of research that revolutionizes the analysis of EEG signals and the removal of artifacts through an integrated approach that uses complex Gaussian fuzzy numbers and multi-source information fusion techniques. The research introduces an innovative methodology where the Box–Cox transformation and the discrete Fourier transformation play key roles. The Box–Cox transformation adapts the data distribution for a more harmonious fit with uncertainty models, thus optimizing the analysis of EEG signals to highlight the underlying dynamics with greater clarity. In parallel, the discrete Fourier transform extracts high-level features by transforming data from the real to complex number range, significantly improving the ability to discriminate between signal and noise. The use of complex Gaussian fuzzy numbers to generate complex basic mass assignments represents a breakthrough in accurate uncertainty modeling and is particularly useful in the presence of anomalous or outlier data, thus facilitating more effective artifact removal. The proposed method elevates uncertainty modeling and stands out for its ability to integrate and harmonize information from different sources. This approach enables more coherent and consensual decision-making in uncertain scenarios, paving the way for new frontiers in biomedical signal analysis and beyond. The proposed research not only expands existing knowledge but also opens new application possibilities, making the processing of EEG signals more robust and reliable for a wide range of clinical and technological applications.
Table 14 provides a concise vision of the information covered above.
13. Discussion Focused on Scientific Development and Technology Transfer
13.1. Advantages of Fuzzy Approaches in Clinical Implications and Research
Adopting fuzzy and intuitionistic systems in EEG data analysis for diagnosing neurodegenerative diseases such as AD improves diagnostic accuracy and paves the way for the development of personalized therapies. The ability of these systems to handle nonlinear and imprecise data is particularly advantageous in clinical settings, where individual variability can complicate accurate diagnosis using traditional methods. Moreover, these systems offer significant potential for the development of automated diagnostic platforms, reducing the workload of medical staff and minimizing human errors.
13.2. Technology Transfer and Commercialization
The commercialization of technologies based on fuzzy logic and intuitionistic systems could revolutionize the market for diagnostic tools, particularly in areas related to early detection of neurological disorders. Technology transfer from research laboratories to businesses can be facilitated through strategic partnerships with biotechnological and pharmaceutical industries. These partnerships could accelerate the development of smart wearable devices that integrate EEG signal analysis for continuous monitoring, offering a new tool in the hands of patients and healthcare professionals.
13.3. Innovation and Societal Impacts
Innovation in nonlinear signal processing methods for emotion recognition through EEG has potentially broad societal impacts: if able to reliably recognize and interpret individual emotional reactions, these tools could find applications in enhancing human–machine interactions, personalized healthcare, and psychological support programs. The application of these technologies in workplace and educational environments could improve productivity and well-being, providing real-time data on individuals’ emotional states [153].
13.4. Sustainable and Responsible Development
Integrating principles of sustainable and responsible development in the advancement of these technologies is essential. This includes ensuring that data handling respects patient privacy and security, a critical aspect for public trust and subsequent technology adoption. Moreover, developing algorithms that require fewer computational resources can help reduce the environmental impact of their large-scale use.
The future of research and development in EEG signal processing using fuzzy and nonlinear systems is promising, with multiple opportunities for clinical and societal transformation. Emphasizing interdisciplinary approaches, responsible innovation, and ethics will be crucial to fully realize the potential of these advanced technologies. Maintaining an open dialogue among developers, users, legislators, and the public will ensure that technological progress proceeds equitably and beneficially for all.
14. Insights and Horizons
In this review, we have provided a qualitative and detailed exploration of techniques for artifact removal from EEG signals, with special emphasis on the new frontiers presented by both standard and intuitionistic fuzzy approaches, particularly in the context of early diagnosis of AD. The analysis begins with a review of non-fuzzy techniques, currently considered the gold standard, which establishes a solid foundation for comparison and subsequently introduces the innovative fuzzy methodologies discussed.
Intuitionistic fuzzy approaches, extensively covered in the review, are distinguished by their ability to effectively manage both internal and external artifacts. While adding an extra layer of uncertainty management, these systems remain compatible with real-time applications, since their computational complexity is comparable to that of standard fuzzy systems. This versatility makes them valuable in both research and clinical contexts, enhancing the quality of EEG signal analysis and, consequently, the precision of diagnostic outcomes.
This review is designed to assist both young researchers, by providing an essential overview of the current research trends in the field of artifact removal, and clinicians, by presenting fuzzy approaches as readable and accessible tools. These methods facilitate easy understanding and upgradeability, thanks to the operators’ ability to interact with applications without necessarily having in-depth technical knowledge, thus promoting better technology transfer and improving human–machine interaction.
Intuitionistic fuzzy approaches are not only underpinned by a robust theory that ensures stability under certain conditions, but are also the subject of in-depth studies in specialist texts that thoroughly examine their theoretical foundations. However, given the breadth of the topic, this review focuses on outlining research directions rather than providing a comprehensive review of the most recent literature.
Regarding future developments, intuitionistic fuzzy approaches open up significant prospects for innovation in EEG signal analysis. The ability of these systems to integrate and learn from the variability of biological data could lead to the creation of even more effective and personalized algorithms. Integrating these systems with other forms of artificial intelligence, such as deep learning, promises to further revolutionize the field, offering models capable not only of precisely identifying and removing artifacts but also dynamically adapting to changes in the signal and environmental conditions. These innovations could dramatically improve the diagnosis and monitoring of neurodegenerative conditions such as AD, resulting in more timely and targeted interventions and, consequently, better patient outcomes.