Article

Integrating Eye Movement, Finger Pressure, and Foot Pressure Information to Build an Intelligent Driving Fatigue Detection System

Information Management Department, National Yunlin University of Science and Technology, Douliu 640, Taiwan
*
Author to whom correspondence should be addressed.
Algorithms 2024, 17(9), 402; https://doi.org/10.3390/a17090402
Submission received: 2 August 2024 / Revised: 1 September 2024 / Accepted: 4 September 2024 / Published: 8 September 2024
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))

Abstract

Fatigued driving is a problem that every driver may face, and traffic accidents caused by drowsy driving often occur without the driver's awareness. A fatigue detection and warning system is generally believed to reduce the occurrence of such incidents. However, because driving habits and styles differ from person to person, building a suitable general-purpose detection system is not easy; a customized intelligent fatigue detection system may be better placed to prevent such accidents. This research therefore integrates the information obtained from three different sensing devices (eye movement, finger pressure, and plantar pressure), chosen for their ability to provide comprehensive and reliable data on a driver's physical and mental state, and uses an autonomous learning architecture to combine these three data types into a customized fatigued driving detection system. This study used a simulated car driving environment and invited subjects to conduct tests on fixed driving routes. First, we demonstrated that the system established in this study can learn and classify different driving clips. Then, we showed that driver fatigue can be judged from a series of driving behaviors, such as lane drifting, sudden braking, and irregular acceleration, rather than from a single momentary behavior. Finally, we tested a hypothesized situation in which drivers experienced three different types of distraction. The results show that the entire system can establish a personal driving profile through autonomous learning and further detect whether fatigued driving abnormalities occur.

1. Introduction

Car driving is indispensable in today's high-speed and highly mobile society, and driving safety is critical. Safe driving must be achieved through constant coordination of the eyes, hands, and feet. A brief lapse in this coordination may be tolerable, but it may also cause unimaginable consequences. Fatigued (or drowsy) driving is one such situation that may occur in daily life. Generally speaking, fatigued driving refers to slowness, blurred vision, weakness, dizziness, hallucinations, or even loss of consciousness caused by the driver's inability to control their mind or body while driving. The usual causes of this phenomenon are the driver's diseases, medications, physiological aging, and lack of sleep. Although everyone is aware of the dangers of fatigued driving, accidents caused by fatigue still occur from time to time. This is because most fatigued driving occurs unexpectedly, when the driver believes there will be no problem. This is especially true of micro-sleep: most people feel that they are awake, yet even a few seconds of sleep can cause an accident.
It is generally believed that the first signs of fatigued driving can be detected before a critical situation occurs. There are quite a few studies on drowsy driving; related surveys can be found in [1,2,3,4,5]. Some scholars have proposed invasive physiological monitoring methods, such as electroencephalogram (EEG) [6], electrocardiogram (ECG) [7], and skin conductivity [8]. Invasive detection generally interferes with the driver, so its overall acceptance is low. Therefore, some non-intrusive proposals have been made. Non-invasive methods use different algorithms to monitor drivers' facial features, eye signals, head movements, hand movements, and other physiological characteristics to infer driver fatigue [9,10,11,12,13,14,15,16]. There have been many improvements in deep learning software and hardware in recent years, and some scholars therefore advocate using these technologies to enhance fatigue judgment from the physiological characteristics of drivers [17,18,19,20].
People generally believe that most fatigued driving causes symptoms such as blurred vision, red eyes, a narrowed field of vision, unconscious nodding, frequent yawning, facial numbness, slow reaction, the inability to concentrate, decreased thinking ability, stiff and slow movements, loss of sense of direction, and erratic speed changes. In response to these phenomena, the method generally used by scholars to explore fatigued driving is face and eye image detection [21,22]. Some scholars have also suggested adding mouth shape changes, head movement, and eye information [23,24]. The above studies emphasize fatigue phenomena displayed by the head. Another group of scholars suggests judging fatigue phenomena such as slow response and stiff, slow movements by observing changes in the position and posture of the driver's hands [25]. All of the above methods start from the perspective of image processing. Image processing is generally affected by environmental factors (such as light and camera angle) and also varies significantly with personal habits (such as wearing glasses or mouth shape). Therefore, another group of scholars emphasized the force exerted by the hand [26,27,28]. Fatigued driving presents many symptoms that vary from person to person and with environmental conditions. While the studies mentioned above emphasize local fatigue detection, this study explores integrating information from three sources: eye movement, hand force, and foot force.
Safe driving must rely on a high degree of coordination among the eyes, hands, and feet, and the muscles, bones, ligaments, and joints that control every movement. The human brain plays a vital role in achieving this goal. Our brain is a highly developed organ, and its coordination with the body's eyes, hands, and feet is an exquisite masterpiece. However, we must emphasize that even a driving control action that appears simple (especially one performed in an "unconscious state") is still highly contingent on the coordination of eyes, hands, and feet. Based on this motivation, if a system that integrates eye movement, finger pressure, and foot pressure sensing information can be designed and controlled through an autonomous learning mechanism (a "brain"), it is generally believed that accidents caused by fatigued driving can be reduced.
We plan to start with user behavior classification and train each cluster separately, which we call customized learning. A customized detection system can be generated once enough driving data have accumulated, which will significantly increase the feasibility of using this customized intelligent system for fatigue detection.
Previously, our team developed an intelligent learning system, the artificial neuromolecular (ANM) system [29]. It is an information-processing architecture that captures biological structure/function relationships. In particular, in addition to information processing between neurons, it also emphasizes information processing within neurons. Because of this, it has rich system dynamics and can be shaped into a particular input/output information processor through evolutionary learning, performing specific functions according to the needs of the problem domain. It has been applied effectively in different fields, such as chopstick robot movement [29], finger motion control [30], and rehabilitation action control [31]. The entire system is implemented as a program on a digital computer.
There are two differences between the ANM system developed in this study and the version from before 2015. The first concerns the processing units: the earlier system was a classification system that analyzed whether a series of time series inputs was correctly classified [29]. The second is that this research adds processing unit functions that can convert a series of sequential inputs into other sequential data [30,31]. The functions of living things in nature result from the high degree of interaction between their component molecular structures, and some scholars [32,33] have further proposed that these functions result from various weak interactions between constituent molecules. Traditional learning algorithm research often ignores these interactive processes because they are too complex and poorly understood. In recent years, research on deep learning has grown rapidly thanks to the acceleration of computer software and hardware, dramatically increasing its range of applications. However, this line of study still follows the Hebbian style of information processing, which suffers from the problem of stagnating at a local optimum. The most significant difference between this study and deep learning research is that we aim to increase the dynamics of the ANM system's processing units, especially by adding weak interaction functions. Through this increase, we hope to give the system's learning curve a smooth, continuously improving shape (learning never stagnates completely), so that when the system is allowed to learn long enough, it can progress toward completely solving the problem.
Collecting data on fatigued driving under real-life driving conditions is very dangerous. Because of this, conducting the tests in a simulation is more appropriate. In this study, we use the City Car Driving (CCD) 1.5.9.2 simulation software for driving tests and data collection. This simulation environment provides permutations and combinations of varying road conditions, weather, and vehicles, allowing us to perform testing and data collection for specific situational settings through the driving simulator in a relatively simple and safe manner.

2. The System

The system will be described in three parts. The first part explains the experimental test bed of this study—the driving simulation environment. The second part describes the sensing system. The third part explains the learning mechanism of the ANM system.

2.1. Car Driving Environment

The CCD system provides different settings (including routes, road conditions, vehicle conditions, weather, and driving modes) that allow us to conduct driving tests in a specific environment. In addition, because it is a simulation system, we can conduct different tests under the same driving environment. Figure 1 shows the driver’s field of vision in front of the vehicle. Figure 2 shows the steering wheel, accelerator pedal, and brake pedal. The steering wheel has a maximum rotation angle of 270 degrees and has an automatic return function.

2.2. Sensing System

2.2.1. Eye Movement Tracking

Four standard eye-tracking methods are currently in use. Electro-oculography is the earliest; it is simple to use but has poor accuracy. The scleral search coil method improves accuracy, but it is invasive. Head-mounted eye tracker technology overcomes the shortcomings of the above two methods, but the user must wear the device on the head correctly and without deviation. Video-based pupil/corneal reflection is currently the most advanced and widely used method, and the Tobii EyeX used in this study is one such device (Figure 3). It uses near-infrared light (800–1500 nm) to track the movement and gaze of the user's eyes. When the user's line of sight moves, the cursor moves to the corresponding position on the computer screen. This plug-and-play USB device records the subject's visual field and eye movements. The device only captures eye gaze data (not the user's face), so there are no privacy concerns. Figure 4 shows part of a user's captured gaze data (two-dimensional X- and Y-axes).

2.2.2. Finger Pressure Sensing and Plantar Pressure Sensing

Eight piezoresistive pressure sensors were used in this study: three on the brake pedal, three on the gas pedal, and two on the steering wheel. On each pedal, the sensors are placed at the front, middle, and rear; on the steering wheel, they are installed at the two positions where drivers usually grip it. Although the brake and gas pedals are each equipped with three sensors, we sum the three values and take the average when collecting data, to accommodate different usage styles. All of the piezoresistive pressure sensors are connected to an Arduino board, which collects the subject's finger and foot pressure data and transmits it to the computer. Figure 5 is a simple schematic diagram before further processing. After testing its accuracy, this study further strengthened the contact between the sensors and these devices.
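For concreteness, the sketch below shows how the host computer might ingest these readings and apply the pedal averaging described above. It assumes, hypothetically, that the Arduino firmware prints one comma-separated line per sample and that the serial port is named COM3; neither detail is specified in the paper.

```python
# Minimal host-side collection sketch (not the authors' code), assuming the
# Arduino prints one line per sample in the order:
# "brake1,brake2,brake3,gas1,gas2,gas3,wheel_left,wheel_right"
import serial  # pyserial


def read_pressure_sample(ser: serial.Serial) -> dict:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    values = [float(v) for v in line.split(",")]
    brake, gas = values[0:3], values[3:6]
    wheel_left, wheel_right = values[6], values[7]
    # The three sensors on each pedal are averaged into one value, as in the paper.
    return {
        "foot-brake": sum(brake) / 3,
        "foot-gas": sum(gas) / 3,
        "hand-left": wheel_left,
        "hand-right": wheel_right,
    }


if __name__ == "__main__":
    # Port name and baud rate are illustrative assumptions.
    with serial.Serial("COM3", 9600, timeout=1) as ser:
        print(read_pressure_sample(ser))
```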

2.3. The ANM System

Generally speaking, computer programs built with symbolic design methods are unsuitable for evolutionary autonomous learning: a slight change to a program may break it entirely. This is because the fitness landscape relating the structure of the program to its function is rugged, full of sharp peaks and valleys, and the feasible paths between peaks are steep, which is unsuitable for evolutionary autonomous learning. With an evolutionary learning method, the final result may be learning stagnation, falling into a so-called local optimum.
In response to the problems faced by symbolic programs, traditional neural networks store (or express) information in the connection relationships between neurons (including connection strengths). When inputs and outputs change, the connections between neurons in the network are adjusted accordingly. In other words, the system's function relies entirely on the different link relationships in the network. Unfortunately, the molecular and chemical message processing inside neurons is completely ignored. In recent years, the information-processing functions of neurons have gradually been discovered. For example, the second messenger (cAMP) may play a role in controlling the firing of neurons in the central nervous system. These theories propose that signals from transmitters and regulators on the cell membrane are converted into second messenger signals; the cAMP then acts on certain proteins (kinases), which control reactor proteins that regulate ion channels or connect microtubules. These proteins directly or indirectly affect the opening of ion channels, and thus the potential or firing of the neuron. Other researchers believe the cytoskeleton plays the role of integrating signals (information) or serving memory functions. The cytoskeleton of a neuron is a multi-molecular network of microtubules, microfilaments, and neurofilaments, together with microtubule-associated proteins (MAPs) that connect these molecules. These MAPs may coordinate other information-processing behaviors within neurons.
In addition to using the relationships between network neurons to express information, the ANM system also adds information processing within neurons. However, a detailed simulation of intraneuronal dynamics would require significant computational cost (computer time), so we model this neural information processing relatively abstractly. Even so, the fitness landscape presented by the structure/function relationships of the neuron's internal dynamics must be rich enough to be suitable for evolutionary learning. We use the adjective "multidimensional bypass" to describe this structure/function landscape. Intuitively, adding extra spatial dimensions increases the chance of saddle points: when the number of constituent elements increases, the interactions between them increase, thereby increasing the opportunities for saddle points. In addition to adding more interacting elements, two features that facilitate evolutionary learning, redundancy and weak interactions, also play a crucial role. In simulating the internal dynamics of neurons, the ANM system incorporates these three factors through evolutionary learning inside the neurons.

2.3.1. Neuromolecular Information Processing

The ANM system assumes that information processing occurs in the cytoskeleton of neurons; we call this neuromolecular information processing. This study used a two-dimensional cellular automaton (CA) to simulate information processing on the cytoskeleton. We call these neurons information-processing (IP) neurons. Figure 6 illustrates the molecular structure of an information-processing neuron, where each grid cell represents a unit molecule of the cytoskeleton. This study assumed three types of molecules (represented by C1, C2, and C3), each responsible for signal transmission but with different transmission characteristics. For example, C1 elements transmit signals the most slowly, but their signals have the strongest influence. In contrast, C3 elements have the weakest signal influence but the fastest transmission speed. The transmission speed and influence of C2 elements lie between those of C1 and C3.
In the cytoskeleton, each component unit can act as a signal input and output site. The input site is called a readin enzyme, whereas the output site is a readout enzyme. The readin enzymes receive signals from outside the neuron and convert them into signals that flow through the molecular structure, while readout enzymes play a role in controlling whether neurons fire. The neuron fires when a specific combination of signals reaches a location with readout enzymes and the total signal kinetic state reaches a certain level. However, this model has some limitations: the readin enzyme can be configured on any element, but the readout enzyme can only be configured on the C1 element. This is based on the hypothesis in this study that only certain combinations of signals will cause neurons to fire.
When a signal from outside the cell reaches the cell membrane, it activates the readin enzyme, which in turn activates the element at that location. Each activated element then affects adjacent elements of the same type, initiating a specific signal flow in the cytoskeleton as described above. For example, as shown in Figure 6, when the readin enzyme at position (2, 2) receives a signal, it activates the C2 element there and generates a signal that moves along the C2 elements from (2, 2) to (8, 2). To keep the signal flow unidirectional, an element that has just transmitted a signal enters a very short refractory ("backlash") period, during which it cannot be activated again until the period ends; this ensures unidirectional transmission.
Signals on different types of components can also affect each other through the MAP between them (of course, these effects are asymmetric). When a signal from one end reaches a place with a MAP, it will affect the kinetic state of different types of elements at the other end through the MAP (or even prompt the other end to generate new signal flows). The neuron triggers when a specific combination of signals reaches a location with a readout enzyme. The firing time of the neuron depends on how the cytoskeleton in the neuron integrates and processes these messages.
The two-dimensional cytoskeleton in the ANM system is arranged in a wrap-around manner, so there are no boundary restrictions on movement within the cytoskeleton. Each basic unit has eight possible movement directions and may form a circular path. Figure 7 shows a schematic diagram of the signal movement path. For example, in Figure 7a, the signal starting at location (3, 3) moves along (2, 3), (1, 3), and (8, 3) and finally stops at (7, 3). In the other example, shown in Figure 7b, the signal starting at location (5, 2) follows (4, 1), (3, 8), (2, 7), and (1, 6) and finally stops at (8, 5).
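The following minimal sketch illustrates the wrap-around signal propagation described above, using modular arithmetic for the toroidal grid. The grid size, random element layout, and one-step refractory period are simplified assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of unidirectional signal flow on an 8x8 wrap-around
# cytoskeleton grid. A signal travels along adjacent elements of its own type
# and stops at a different type or at a refractory site.
import numpy as np

SIZE = 8
rng = np.random.default_rng(0)
grid = rng.integers(1, 4, size=(SIZE, SIZE))  # element types 1..3 (C1..C3)


def propagate(start, direction, steps):
    """Move a signal along same-type elements; wrap around at the edges."""
    x, y = start
    elem_type = grid[x, y]
    refractory = set()          # sites that just fired cannot fire again
    path = [(x, y)]
    for _ in range(steps):
        nx = (x + direction[0]) % SIZE   # wrap-around: no boundary
        ny = (y + direction[1]) % SIZE
        if (nx, ny) in refractory or grid[nx, ny] != elem_type:
            break                # stop at a different element type
        refractory.add((x, y))   # backlash period enforces one-way flow
        x, y = nx, ny
        path.append((x, y))
    return path


# Analogous (0-indexed) to the Figure 7a example: start at (3, 3), move in -x.
print(propagate((3, 3), (-1, 0), steps=5))
```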
Currently, the ANM system has 576 information-processing neurons (called cytoskeletal neurons). Each neuron has a different cytoskeletal structure when the system is initially designed. Figure 8 shows the hierarchical structure of all information-processing neurons, from the population level down to the molecular level. Each neuron has different information-processing capabilities. To allow them to learn autonomously, they are divided into eight competing subnetworks (each with 72 information-processing neurons). The competitive learning approach used in this study gives each subnetwork an information-processing neuron with a very similar cytoskeletal structure (in the following, we refer to information-processing neurons with very similar cytoskeletal structures in different subnetworks as the same bundle of neurons). Given the same input data, the neurons in the same bundle produce very similar (but not exactly equal) output behavior. Furthermore, across the whole group, the structures of the neurons in each subnetwork are also very similar. This structural similarity allows the subnetworks to compete: we first evaluate the performance of each subnetwork, then select the better-performing subnetworks and copy them over the less-performing ones, assuming that slight errors (so-called mutations) occur during copying. This learning process is similar to Darwinian evolution and trains the subnetworks to achieve the intended purpose of this study.
Evolutionary learning uses a Darwinian evolutionary search method, which can be roughly divided into three steps (Figure 9).
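As a rough illustration of this three-step cycle (not the authors' code), the sketch below evaluates all subnetworks, selects the better-performing half, and copies them over the worse half with slight copying errors. The `evaluate` and `mutate` callables stand in for the ANM system's actual fitness measure and structural variation, which are assumptions here.

```python
# Schematic Darwinian cycle over the eight competing subnetworks:
# (1) evaluate fitness, (2) select the better performers, (3) copy them over
# the worse performers with mutation.
import copy


def evolve(subnetworks, evaluate, mutate, generations=100):
    for _ in range(generations):
        # Step 1: evaluate the fitness (e.g., negative loss) of each subnetwork
        scores = [evaluate(net) for net in subnetworks]
        ranked = sorted(range(len(subnetworks)),
                        key=lambda i: scores[i], reverse=True)
        half = len(subnetworks) // 2
        winners, losers = ranked[:half], ranked[half:]
        # Steps 2-3: replicate winners over losers; the mutation (copying
        # error) keeps the search exploring the structure space
        for w, l in zip(winners, losers):
            subnetworks[l] = mutate(copy.deepcopy(subnetworks[w]))
    return subnetworks
```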

2.3.2. Manipulation Network

As mentioned, the learning algorithm used in this study produces something similar to competitive learning by allowing each subnetwork to change its cytoskeletal structure. Changing the cytoskeletal structure gives the ANM system gradual structure/function transformation properties. This feature helps generate paths in multidimensional spaces (please refer to Section 2.3), so the system has a relatively high chance of escaping local optima during search. However, this kind of change is a gradual fine-tuning that improves slowly and takes a long time, making it relatively difficult to train a large group of neurons to complete a specified task. From another perspective, this may not be necessary, because a small group of suitable neurons may be trainable to perform the same task. To deal with this problem, this study uses another type of neuron whose task is to select appropriate information-processing neurons (that is, only the chosen neurons participate in information processing and fitness evaluation). This type of neuron is called a control neuron (CN). This study assumes that control neurons have hierarchical control (selection) functions: a hierarchy of control neurons selects a group of information-processing neurons to achieve the task. This way of selecting neurons is called orchestration. The current implementation uses two layers of control neurons, as shown in Figure 10, to find suitable neurons through the Darwinian variation-selection method.
As mentioned, the current ANM system has 576 information-processing neurons (72 bundles). Each bundle is controlled by a low-level control neuron, so there are 72 low-level control neurons in total. This study utilizes another layer of neurons, the high-level control neurons, to control the firing of the low-level control neurons. Learning of control neurons occurs only between the higher and lower layers. In other words, each high-level control neuron can select different low-level control neurons, and this selection changes during learning, whereas the information-processing neurons controlled by each low-level control neuron do not change. The evolutionary learning step first evaluates the performance of each high-level control neuron and selects the better-performing ones. The better-performing high-level control neurons are then copied over the worse-performing ones, assuming slight errors during copying, so that the copy selects a slightly different set of low-level control neurons than the original (Figure 11).
Evolutionary learning is implemented by alternating learning between the control neurons and the information-processing neurons: the system learns at the control neuron level for a while, then at the information-processing neuron level for another period. This cycle repeats, each level learning in turn, until the system is stopped or the assigned task is completed.
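A minimal sketch of this alternating schedule follows. The ANM-specific evolution steps are stubbed out, and the phase lengths and cycle count are arbitrary assumptions; only the cycling logic reflects the description above.

```python
# Alternating learning schedule: evolve at the control-neuron level, then at
# the information-processing-neuron level, cycling until done.
class ANMSystemStub:
    """Placeholder standing in for the real ANM system."""
    def evolve_control_neurons(self, generations): ...
    def evolve_ip_neurons(self, generations): ...
    def task_completed(self) -> bool:
        return False


def alternating_learning(system, phase_generations=50, max_cycles=20):
    for _ in range(max_cycles):
        # learn at the control-neuron level for a period...
        system.evolve_control_neurons(generations=phase_generations)
        # ...then at the information-processing-neuron level
        system.evolve_ip_neurons(generations=phase_generations)
        if system.task_completed():
            break


alternating_learning(ANMSystemStub())
```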

3. Application Domain

This section first explains how the driving data were collected, then how the collected data were preprocessed, and finally how these data are connected to the ANM system.

3.1. Driving Data Collection

Figure 12 shows our settings for the CCD system. For general roads, we selected a U-shaped route with light traffic, good weather, and an average driving pattern. For the highway, we selected a straight route, collected under 10% (light) and 80% (heavy) traffic density settings. The details of data collection are as follows. We first connected the Tobii eye tracker to the computer, then used Open Broadcaster Software (OBS) 29.0.2 to record the eye tracker's coordinate information from the computer screen, and finally launched the CCD system to confirm that the eye tracker was correctly displayed on the screen. Once all of the equipment and the subject were ready, we further confirmed that the data collection of the eye tracker and the hand and foot pressure sensors was synchronized. Data collection was then officially carried out; the entire process was screen-recorded, with timestamps, until the simulation ended.
The following explains how the data collected through the eye, hand, and foot sensors were synchronized. The hand and foot data were both collected through the Arduino, so these two streams were already synchronized; the main problem is synchronizing the eye movement data with the hand and foot data. The current approach is to locate the screen-recording frame corresponding to the time at which the hand and foot pressure values were obtained. Once the frame is found, the HoughCircles function provided by the open-source computer vision library OpenCV is used to find the coordinates displayed by the eye tracker, completing the integration of the eye, hand, and foot data at that particular time.
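A hedged sketch of this step is shown below: given the screen-recording frame that matches a pressure sample's timestamp, OpenCV's HoughCircles locates the circular gaze cursor. All parameter values are illustrative and would need tuning to the actual cursor size and screen resolution.

```python
# Locate the eye tracker's circular gaze cursor in a screen-recording frame.
import cv2
import numpy as np


def find_gaze_cursor(frame_bgr: np.ndarray):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=5, maxRadius=40,
    )
    if circles is None:
        return None                      # cursor not visible in this frame
    x, y, _r = np.round(circles[0, 0]).astype(int)
    return int(x), int(y)                # gaze coordinates for this timestamp
```

In use, each hand/foot pressure timestamp selects one recorded frame, and the returned (x, y) pair becomes the eye-x/eye-y value for that 0.5 s sample.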
To maintain consistency of data collection, this study invited the same subjects to collect data ten times in the same driving environment, driving the entire route each time. Data on eye movements, brake pressure, finger pressure, and accelerator pressure were collected at intervals of 0.5 s. The subject drove the same route every time, but the traffic conditions varied (due to different traffic lights, pedestrians, and traffic volume). Each drive test took about three to five minutes from beginning to end. In terms of driver behavior, this study assumed two types of driving: everyday driving and distracted driving. The former means the driver is relatively focused, while the latter means the driver has wandering eyes and changes lanes at will.

3.2. Data Preprocessing

In this study, three participants were invited for data collection. All three collected data on general roads, while two also collected data on highways. Taking the first participant as an example, the sliding window method (with a sliding stride of five time series points) divided all of the collected time series data into 567 driving segments. Driving duration varied among participants, resulting in different numbers of final driving segments. Each clip has 100 time series points (approximately 50 s of driving time). The clustering method used in this study is as follows. The first step is to set a threshold. The dynamic time warping (DTW) method is then used to compare two driving clips; when the difference between them is within the threshold, they belong to the same cluster. A new cluster is created when a newly added clip does not belong to any existing cluster. Following this method, this study organized the 567 driving clips into 73 clusters. The setting of this threshold is arbitrary and has no substantive significance: when the value is set too high, fewer clusters are generated (and the differences within each cluster are larger); when set too low, relatively more clusters are generated. When more subject data are available in the future, the choice of this value will have more substantial significance.
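The sketch below illustrates this segmentation and clustering procedure: a sliding window cuts each drive into 100-point clips (stride 5), and a simple DTW distance with a fixed threshold assigns each clip to the first matching cluster. The DTW implementation and the `THRESHOLD`-style parameter are generic illustrations; the paper notes the threshold value itself is arbitrary.

```python
# Sliding-window segmentation and threshold-based DTW clustering sketch.
import numpy as np


def sliding_windows(series: np.ndarray, length=100, stride=5):
    return [series[i:i + length]
            for i in range(0, len(series) - length + 1, stride)]


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-programming DTW over (possibly multi-channel) clips."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.abs(a[i - 1] - b[j - 1]).sum()
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]


def cluster_clips(clips, threshold):
    clusters = []                    # each cluster keeps its first clip as prototype
    for clip in clips:
        for cluster in clusters:
            if dtw_distance(clip, cluster[0]) <= threshold:
                cluster.append(clip)
                break
        else:
            clusters.append([clip])  # no match within threshold: new cluster
    return clusters
```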

3.3. Connecting Driving Data with the I/O Interface of the ANM System

As mentioned above, each driving clip is 100 time series points long, approximately 50 s of driving time. Each time series point has six values: eye movement x- and y-axis data (eye-x and eye-y), left and right finger pressure on the steering wheel (hand-left and hand-right), the average foot pressure on the brake pedal (foot-brake), and the average pressure on the accelerator pedal (foot-gas). This study assumes that during each driving clip, the behavior in the first 25 s affects the behavior in the next 25 s. Accordingly, the eye movements, finger pressure, and foot pressure (brake and accelerator) in the first 25 s are used as the system's input data, and the corresponding values in the next 25 s as the system's expected output data. For each data set, the smaller the difference between the outputs generated by the ANM system and the expected output, the better its learning performance. Figure 13 gives an example of the time series data 25 s before and after a specific time; that is, we hope the ANM system will convert the input data of Figure 13a into the waveform of Figure 13b.
All information-processing neurons of the ANM system are divided into six groups, corresponding to the six categories of data (eye-x, eye-y, hand-left, hand-right, foot-brake, foot-gas) (Figure 14). The firing behavior of each group of information-processing neurons represents the data conversion for a specific output channel. In the current implementation, we use the time difference between two adjacent firing neurons of the same group to describe the degree of data conversion, assuming that the relationship between the time difference and the degree of conversion follows a sigmoid-like waveform (Equation (1)). For a particular output group, the waveforms generated by all firing neurons of that group are superimposed in series to form the output waveform. The loss (Equation (2)) is the sum of the absolute differences between the waveform generated by the ANM system and the expected waveform; the smaller the loss value, the better the system's fitness.
$$\text{Degree of transformation} = \left( \frac{1}{1 + e^{-2t}} - 0.5 \right) \times 2 \times 90 \tag{1}$$

$$\text{Loss} = \sum_{i} \sum_{j=1}^{50} \left| E_{ij} - A_{ij} \right| \tag{2}$$

where $E_{ij}$ and $A_{ij}$ represent the expected trajectory and the trajectory generated by the ANM system, respectively; $i \in \{$eye-x, eye-y, hand-left, hand-right, foot-brake, foot-gas$\}$, and $j$ indexes the 50 time series points of the output.
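Equations (1) and (2) can be implemented directly, as in the sketch below. It assumes, per the text, that each output trajectory has six channels and 50 time series points; the sample arrays are placeholders.

```python
# Direct implementation of Equations (1) and (2). t is the time difference
# between two adjacent firings of neurons in the same group; E and A are the
# expected and generated 6-channel, 50-point output trajectories.
import numpy as np


def degree_of_transformation(t: np.ndarray) -> np.ndarray:
    # Sigmoid-like mapping from firing-time difference to degrees (0..90)
    return (1.0 / (1.0 + np.exp(-2.0 * t)) - 0.5) * 2.0 * 90.0


def loss(E: np.ndarray, A: np.ndarray) -> float:
    # E and A have shape (6, 50): one row per output channel
    return float(np.abs(E - A).sum())


E = np.zeros((6, 50))                                 # placeholder expected output
A = np.random.default_rng(1).normal(size=(6, 50))     # placeholder generated output
print(loss(E, A))
```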

4. Experiments

Six experiments were conducted in this study. The first focuses on the learning ability of the system, exploring whether the ANM system can learn each driving clip. The second examines the relationship between a series of driving clips; if there is some correlation, we can judge whether the driver is fatigued through a series of driving behaviors rather than a single momentary behavior. The third classifies different driving behaviors. The fourth builds on the learning experiment of the first part with adaptive experiments using the learned information, investigating whether an ANM system trained on one person can be applied to different people. The fifth introduces driving segments with varying noise levels to explore whether the ANM system can detect abnormal driver behavior while the vehicle is in motion. The final experiment tests whether distracted (fatigued) driving can be detected.

4.1. Learning Capability

As mentioned earlier, the data collected by the first participant on general roads were divided into 73 clusters, the second participant's data into 62 clusters, and the third participant's data into 67 clusters. For highway driving, the first participant's data were divided into 58 clusters and the second's into 56. This experiment explores whether the ANM system has sufficient learning capability for each data cluster; in other words, whether the ANM system can learn each type of driving behavior, using the driving behavior of the first 25 s to predict the reaction behavior of the next 25 s. If the preliminary results prove that this inference is feasible, we can gradually increase the range of driver behaviors and the system's complexity according to the needs of the problem domain (for example, different weather, traffic flow, and road conditions). In this experiment, we randomly selected 20 of the 73 clusters of the first participant in the general road environment, and nine clusters each from the other two participants in the general road environment. For the highway data, nine clusters were selected from the two participants in the heavy and light traffic environments. From each cluster, one driving clip was randomly selected to test whether the relationship between the first 25 s and the last 25 s of driving behavior could be established. The results showed that learning improved rapidly in the early stages and slowed down later, with the degree of improvement becoming smaller and smaller. Most importantly, however, the system continued to improve even in the later stages of learning.
We terminated the ANM system's learning when improvement became relatively slow. The results (Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7) show that the learning results in each cluster are above 75%. We add that if the system were allowed to continue learning, there would still be room for continuous improvement (i.e., higher accuracy). However, the experiments in this part of the article were suspended at appropriate times so that the different information-processing aspects of the ANM system could be explored more broadly.

4.2. Correlation Analysis before and after Driving Clips

In the first experiment, we randomly selected 20 driving clips for learning. For each clip, this experiment explores whether there is some correlation between the clips immediately before and after it. If so, we can use a continuous sequence of driving clips (rather than a single driving segment) to determine whether the driver is driving fatigued. The testing method is to take the ANM systems trained at length in the previous experiment and test each of them individually on the one to four driving clips immediately preceding and following the clip used during training.
This experiment randomly selected 6 of the 20 systems learned in the first experiment. For each learned system, the correlation between a specific driving clip and its four preceding and four following driving clips is tested. If their loss values do not differ much from each other (compared with the loss value of an untrained system; please refer to Table 1), there is some correlation between adjacent driving clips; in other words, driver behavior changes step by step rather than in leaps. The results (Figure 15) show that driver behavior is highly similar when the gap between two driving clips is relatively small; as the gap increases, so does the difference in driver behavior. Importantly, there is a U-shaped relationship between them. This relationship has two implications. First, it again shows that the ANM system has a gradual transformation capability: the system's performance changes slowly as the input data change. Second, for driving fatigue detection, it means we can verify whether the driver is fatigued through a series of driving behaviors (clips).

4.3. Cluster Analysis

The test data of the second experiment were driving clips immediately before and after a specific driving clip, that is, segments of continuous driving around a certain driving period. This experiment instead uses driving clips from different periods. As mentioned before, during the data collection phase, subjects were invited to drive the same route ten times; the test data of this experiment are similar driving clips drawn from these ten drives at different periods. We interpret the former as driving segments of the same episode, while the latter are entirely separate driving segments. Simply put, the second experiment tested within the same driving period, while this experiment tests across various periods.
This experiment uses the 20 learned systems from the first experiment and finds all similar driving clips from different periods. Table 8 shows the number of similar clips between each driving clip and the clips from other periods. Figure 16 further organizes the data in Table 8. The results show that among the 20 groups, 10 (more than 50%) have loss values between 1022.2 and 1442.2; these values are not much different from the learned values in Table 1. The loss values of the other eight groups are between 1442.2 and 1862.2; these are slightly higher than the learned values in Table 1 but do not differ considerably. From these results, we can roughly say that a driver's behavior is similar across periods to a certain extent. In other words, we can classify, to some extent, from fragments of the driver's behavior.

4.4. Adaptability

4.4.1. Different Participants Driving the General Road Environment

Building on the learning capability experiments, subsequent adaptive experiments explored whether an ANM system trained on one individual could be applied to others, that is, whether different subjects perform similarly in similar driving environments. The approach is to test a system trained on one user's driving data against another user's driving data. This can be seen as assessing how well driving skills or knowledge learned from one participant transfer to another in a particular driving scenario, a type of stress test that this study interprets as adaptive capacity. The trained system was tested using nine driving segments from another participant with similar driving patterns. For each test, the system's loss value was first recorded when encountering the different driver, followed by the outcome after running 500 iterations. The results (Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14) indicate that some driving segments achieved improvement rates of over 75% after just 500 training iterations, while others showed around 40% improvement. Even with relatively brief training, achieving 40% improvement demonstrates the adaptability of the ANM system, and these data suggest there is still room for learning. Moreover, training data from one individual may best suit drivers with similar habits; for example, Participant One and Participant Three share similar driving habits, as do Participants Two and Three. In the future, systems trained on Participants One and Two could be applied to Participant Three.

4.4.2. Different Participants Driving on the Highway

As in the previous experiment, but with the routes changed to highway driving, we selected nine driving segments each from heavy and light traffic and observed the results of running the system for 500 iterations on different drivers' segments. Table 15, Table 16, Table 17 and Table 18 show that the improvement rate on highways is generally lower than on general roads, suggesting that highway driving behavior differs from general road behavior and that traffic conditions also influence driving behavior. We also noted that the initial loss values for highway driving are at least one or two thousand units lower than those for general roads, indicating that highway driving behavior is simpler from the beginning and can be adequately handled by a system trained on general road driving.

4.5. Noise

In this experiment, six driving segments from Participant One driving on general roads were selected, and different levels of noise (1%, 2%, 5%, and 10%) were added to each segment (Equation (3)). The aim was to investigate whether the ANM system can detect abnormal driver behavior while the vehicle is in motion. As shown in Figure 17, adding even 1% noise resulted in a significant increase in loss values, which increased further as the noise level increased. The experimental results indicate that the ANM system can detect abnormal driver behavior while the vehicle is in motion.
$$\sigma_{\text{noise}} = k \times \sigma_{\text{original}} \tag{3}$$

where $\sigma_{\text{original}}$ is the standard deviation of the original signal and $k$ is the noise level (0.01, 0.02, 0.05, or 0.10).
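A minimal sketch of this noise injection follows, assuming zero-mean Gaussian noise scaled per channel according to Equation (3); the sample segment shape and seed values are placeholders.

```python
# Add zero-mean Gaussian noise whose standard deviation is k times that of
# the original signal, channel by channel, before feeding the segment to the
# system.
import numpy as np


def add_noise(segment: np.ndarray, k: float, seed=None) -> np.ndarray:
    rng = np.random.default_rng(seed)
    sigma = segment.std(axis=-1, keepdims=True)   # per-channel sigma_original
    return segment + rng.normal(0.0, k * sigma, size=segment.shape)


segment = np.random.default_rng(2).normal(size=(6, 100))  # placeholder clip
for k in (0.01, 0.02, 0.05, 0.10):
    noisy = add_noise(segment, k, seed=3)
    print(k, float(np.abs(noisy - segment).mean()))
```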

4.6. Fatigued Driving

This study compiled possible symptoms of driver fatigue from the relevant literature, including blurred vision, involuntary nodding, increasing eye-closing frequency, longer eye-closing time, continuous yawning, facial numbness, slow eye reaction, stiff movements, and changing lanes at will. Based on the current settings of this study, this experiment assumes three possible fatigued driving conditions: eyes wandering and closing, changing lanes at will, and slow reaction speed. The first case (eyes wandering and closing) tests eye movement information, while the other two (switching lanes at will and slow reaction speed) mainly explore the effects of abnormal pressure on the hands and feet. In the first case, subjects were asked to deliberately move and close their eyes while driving, to observe the impact of changes in eye movement. The second case (switching lanes at will) was simulated by asking the subjects to switch lanes at will many times while driving, and the third case (slow reaction speed) by asking the subjects to react with slow hand and foot movements. As before, all driving environment settings were the same. The numbers of driving clips collected in the first, second, and third cases were 40, 43, and 63, respectively. Each case was tested separately, and the average value was taken. The average loss obtained in each case is relatively high (please refer to the loss values in Table 19).
We further explored whether these different distraction phenomena can be trained through the ANM system. Similar to the second experiment, we took 6 of the 20 learned systems to conduct this study. For each distraction case, five tests were performed. For each test, we first observed the loss values when the system faced driver distraction and then allowed the system to run for 500 generations to observe the final results (Table 20). After 500 generations of learning, some results were good, while others still left considerable room for improvement; overall, these data suggest there is still room for learning. However, the distraction test is a semi-random variation, so trying to draw concrete conclusions from this approach leaves considerable room for discussion, or even a certain degree of controversy.

5. Discussion

The computer industry has grown significantly in recent years as Moore's Law continues to play out. However, hardware performance is bounded by the functionality established by its original developers, whereas software systems can be used in unlimited ways according to the user's imagination. The two play complementary roles: they must complement each other for the entire computer system to reach an ideal state.
Unfortunately, current software design is geared towards programmable design. Under this premise, making programming more convenient is the goal people usually consider, and structured design has become the usual method. The thinking behind structured programming is to use appropriate symbols to represent the ideas people want to express and to define how these symbols are operated on (so-called algorithms). When an entire system is designed as a programmable structure, it faces a severe problem: when a specific function needs to be changed, even slightly, the program may need to be changed significantly. From another perspective, when an existing system is given minor changes (for example, modification of some initial settings), the entire system may become completely unsuitable. A feasible line of thinking is to build features conducive to malleability into software design. Biological systems are highly self-modifying; the ANM system used in this study captures some of the characteristics of organisms that are conducive to modification and implements them in the design of a software system.
From the perspective of building a customized intelligent fatigued driving detection system, biological-like adaptability is undoubtedly an ideal goal, because the system must be able to meet different needs, such as those of various groups of people. Under this premise, an intelligent system must have rich learning capabilities, be able to conduct long-term continuous problem solving of complex problems, and have a considerable degree of plasticity to adapt to different needs. The difficulty is that everyone's driving habits are entirely different. Therefore, establishing a fatigue detection system suitable for mass popularization is still a long way off, and customized design is an inevitable trend. An intelligent assistance system must find the best answer and adjust at any time according to the user's needs in a self-corrective manner. In addition, the system must have a certain level of noise tolerance to cope with transient changes in user movements while operating in a disturbed environment.
This research integrates information obtained from three sensing devices (eye movement, finger pressure, and plantar pressure) and uses an autonomous learning architecture to build a customized fatigued driving detection system. We then explored its feasibility for fatigued driving detection through different experiments. First, we verified that the ANM system can learn and classify driving clips. Then, we verified that driver fatigue can be judged from a series of driving behaviors rather than a single momentary behavior. Finally, we verified the results under the assumption that drivers experience three different distractions. We would add that the current stage of this research emphasizes functional exploration, that is, the feasibility of establishing an intelligent system that integrates eye, hand, and foot sensing; it still differs from an actual real driving situation.
Current research on intelligent fatigued driving detection follows two directions: fatigue sensing and the intelligent system itself. Regarding sensors, some studies focus on judging the movements of the eyes, hands, and feet, while others focus on analyzing the force of the hands and feet. Whether the method is action discrimination or force analysis, one limitation of research in this area is how to process sensor data from different sources in a timely, or even synchronous, manner; another is how to integrate data from different sources appropriately. In the data synchronization part, this research is still at the stage of functional exploration, so data collection is still limited to manual processing. In the information integration part, however, the ANM system used in this study has an autonomous learning function: during the learning process, it can discover each information source's role in exploring different types of fatigued driving. A further limitation is that operating the entire ANM system requires significant computing time on current computer hardware. The ANM system is constructed as a multi-layered competitive network in which the information processing within each neuron is comparable to the network architecture itself. We use discrete event processing to simulate this multi-layer network architecture, where each change in neural activity is an event. In this way, we can make the system produce different temporal processing dynamics (that is, converting one series of spatio-temporal information into another). Simulating the entire system's dynamics on a sequential processing computer requires a lot of computing resources, and because of this, the information-processing capabilities this study can present are limited to some extent. In the future, when hardware improves considerably, we can increase the dynamics within neurons (for example, by increasing the essential components within the information-processing unit and the relationships between them) or increase the processing methods of the control neurons. Future research could also improve the system by integrating facial expressions and head information, an area in which many scholars have already produced considerable results.
On the other hand, this study is considering further integrating electroencephalograph information (this research team has obtained preliminary experimental results in this regard, but they are not yet mature enough to be published). In terms of algorithms, the system used in this study could additionally use current deep learning technology (long short-term memory) to increase the system's functionality in a Hebbian manner.

6. Conclusions

Fatigued driving is a problem that most people will face, and it usually occurs without conscious awareness or when the driver is not paying attention. If a customized intelligent assistance system can be built around people's sensory systems to assist driving, it is generally believed that it will help people drive more safely. Current development in this area is mainly accomplished by integrating deep learning, image processing, biomedicine, human factors engineering, and other technologies. In addition to driving fatigue, workplace fatigue in high-risk workplaces is a similar problem. Most methods establish intelligent physiological fatigue detection systems based on physiological characteristics such as the face, eyes, mouth, and hand movements. However, everyone's driving behavior differs, and determining how to meet different customized needs is a serious issue. Intelligent systems play an essential bridging role in customization.
In response to the customization issue, the ANM system proposed in this study does more processing of the internal information of neurons than general deep learning technology: the former emphasizes processing information within neurons, while the latter emphasizes processing information between neurons. We emphasize again that the ANM system can also include information processing between neurons. Under this premise, the ANM system can use the internal dynamics of a single neuron to express the information processing between neurons; in other words, a single neuron in the ANM system is enough to handle the information processing that a traditional neural network can represent. Most importantly, we establish the information-processing activities inside neurons by capturing the characteristics of gradual changes in biological structure/function. The experimental results of this study demonstrate its continuous learning ability and sufficient adaptability.
We did indeed use simulations to generate distraction data, and there is undoubtedly still a considerable difference between these data and data generated by a driver's actual fatigue. In other words, the simulation data used in this study cannot accurately reflect actual fatigue behavior. However, collecting fatigued driving data from actual drivers is challenging: it is not only hazardous but also costly. Another nearly insurmountable problem is extracting the so-called "drowsy driving episodes" from a continuous period of driving behavior; judging "drowsy driving episodes" can be subjective and is therefore controversial, all the more so because everyone's driving behavior differs. Regarding this issue, the purpose of this study is not to immediately apply the entire system to fatigued driving detection, but to prove that the learning system used in this study has a self-correcting mechanism for continuous learning. When the whole system matures, we can transplant it to actual driving situations and establish a personalized fatigued driving detection system through long-term personal use by drivers. At the current experimental stage, this study uses a series of simulated scenarios to gradually fill the gap between simulated data and actual data.
This study explores the establishment of a non-intrusive system, which allows us to research different topics without harming or affecting users. Although this study uses a simple information-processing system that integrates eye movement, finger bending/pressure, and plantar pressure sensing, the results prove that in the future, with better eye, hand, and foot sensing equipment, we can build a state-of-the-art, intelligent, customized fatigued driving detection system. It could even be developed into a simple, portable device or combined with a cloud server to calculate and analyze data, significantly increasing the possibility of creating a customized intelligent system.

Author Contributions

Conceptualization, J.-C.C. and Y.-Z.C.; methodology, J.-C.C.; software, J.-C.C.; validation, J.-C.C. and Y.-Z.C.; formal analysis, J.-C.C. and Y.-Z.C.; investigation, J.-C.C. and Y.-Z.C.; resources, J.-C.C.; data curation, J.-C.C. and Y.-Z.C.; writing—original draft preparation, J.-C.C. and Y.-Z.C.; writing—review and editing, J.-C.C.; visualization, J.-C.C.; supervision, J.-C.C.; project administration, J.-C.C.; funding acquisition, J.-C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partly funded by the Taiwan Ministry of Science and Technology (Grant MOST 110-2221-E-224-041-MY3).

Institutional Review Board Statement

This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Human Research Ethics Committee of the National Cheng Kung University (Approval No.: NCKU HREC-E-110-318-2; date: 9 August 2021).

Informed Consent Statement

Written informed consent has been obtained from the participants of the experiments to publish this paper.

Data Availability Statement

The data can be accessed through the following link: https://drive.google.com/drive/folders/1Jlh9OHuIC5JzFMqda642x4fPdqdEYRJZ?usp=drive_link (accessed on 1 August 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The simulation situation ahead of the vehicle.
Figure 2. The simulation system includes the steering wheel, accelerator, and brake pedal.
Figure 3. Tobii EyeX controller.
Figure 4. Part of the user's captured gaze data (two-dimensional X- and Y-axes).
Figure 5. (a) Two piezoresistive pressure sensors were installed on the simulated driving steering wheel. (b) Three piezoresistive pressure sensors were installed on the simulated pedal.
Figure 6. The molecular structure of an information-processing neuron is represented as a two-dimensional grid.
Figure 7. (a) A signal flows in an upward direction. (b) A signal flows in a left-upward direction.
Figure 8. A hierarchical structure diagram of all information-processing neurons (cytoskeletal neurons).
Figure 9. Evolutionary learning process at the level of information-processing neurons. (a) Evaluate the performance of each subnet and select a few subnets with better performance; (b) Copying occurs from better-performing subnets to worse-performing subnets (copying occurs in the same bundle of neurons with a similar cytoskeletal structure); and (c) Change the subnets with poor performance.
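A minimal sketch of the evaluate-copy-vary cycle in Figure 9, assuming subnets are lists of grid-based neurons that can be scored by a loss function; the selection size and mutation rate are illustrative assumptions, not values from this study.

```python
import copy
import random
import numpy as np

def evolve_subnets(subnets, evaluate, n_best=2, mutation_rate=0.05):
    """One learning cycle: (a) evaluate and rank subnets, (b) overwrite poor
    performers with copies of good ones, (c) vary the copies slightly.
    `evaluate` maps a subnet to a loss value (lower is better)."""
    ranked = sorted(subnets, key=evaluate)                    # (a) rank by loss
    for i in range(n_best, len(ranked)):
        donor = copy.deepcopy(random.choice(ranked[:n_best]))  # (b) copy a winner
        for neuron in donor:                                   # (c) vary the copy
            mask = np.random.random(neuron.grid.shape) < mutation_rate
            neuron.grid[mask] = np.random.choice([0.0, 0.5, 1.0],
                                                 size=int(mask.sum()))
        ranked[i] = donor
    return ranked
```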
Figure 10. Two layers of control neurons control information-processing neurons.
Figure 11. Evolutionary learning process at the level of control neurons. (a) Cytoskeletal neurons controlled by each reference neuron are activated sequentially to evaluate their performance. (b) Assume the cytoskeletal neurons controlled by R2 achieve better performance. The pattern of neural activities controlled by R2 is copied to R1. (c) R1 controls a slight variation of the neural grouping controlled by R2, assuming some errors occur during the copy process.
Figure 11. Evolutionary learning process at the level of control neurons. (a) Cytoskeletal neurons controlled by each reference neuron are activated sequentially to evaluate their performance. (b) Assume the cytoskeletal neurons controlled by R2 achieve better performance. The pattern of neural activities controlled by R2 is copied to R1. (c) R1 controls a slight variation of the neural grouping controlled by R2, assuming some errors occur during the copy process.
Algorithms 17 00402 g011
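Similarly, a compact sketch of the reference-neuron cycle in Figure 11: each reference neuron activates its own grouping of cytoskeletal neurons, the best grouping is copied to the others, and copy errors supply the variation. Representing a grouping as a set of neuron ids and the 2% error rate are assumptions for illustration.

```python
import random

def reference_neuron_cycle(groupings, evaluate, copy_error=0.02):
    """groupings: one set of cytoskeletal-neuron ids per reference neuron.
    Returns the groupings after one evaluate-copy-vary cycle."""
    losses = [evaluate(g) for g in groupings]        # activate each in turn
    best = groupings[losses.index(min(losses))]
    # Copy the best grouping to the others, dropping ids with a small error rate.
    return [g if g is best
            else {n for n in best if random.random() > copy_error}
            for g in groupings]
```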
Figure 12. A schematic diagram of the driving environment setting for this study.
Figure 13. (a) An example of timing data 25 s before a specific time. (b) An example of timing data 25 s after a specific time.
Figure 14. Input/Output interface of the ANM system.
Figure 15. The relationship between a specific driving clip and the four driving clips before and after it. On the x-axis, position 5 is the loss value of the specific driving clip, positions 1 to 4 are the loss values of the four preceding clips, and positions 6 to 9 are the loss values of the four following clips.
Figure 16. Analysis of loss data of 20 clusters.
Figure 17. Loss values at different noise levels.
Table 1. Participant A’s performance during the first cycle and at termination while driving in a general road environment.
Table 1. Participant A’s performance during the first cycle and at termination while driving in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement RateRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
133933.0948.075.9%333161.0559.082.3%
264082.4863.378.9%363410.0744.078.2%
393289.0756.077.0%392937.0739.075.0%
4124139.1848.479.5%422933.0975.066.8%
5153979.0956.076.0%453104.0730.076.5%
6184676.1864.581.5%483057.6537.982.4%
7214316.0661.084.7%513039.0290.090.5%
8244759.3772.083.8%542866.2460.383.9%
9275792.0879.084.8%573425.01068.068.8%
10304230.6537.487.3%604132.0957.076.8%
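For reference, the improvement rates reported in Tables 1-20 are consistent with the usual relative loss reduction between the first cycle and termination:

\[ \text{Improvement Rate} = \frac{\text{Loss at Cycle 1} - \text{Loss at Termination}}{\text{Loss at Cycle 1}} \times 100\% \]

For example, cluster 1, run 3 in Table 1 gives (3933.0 - 948.0)/3933.0, which is approximately 75.9%.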
Table 2. Participant B’s performance during the first cycle and at termination while driving in a general road environment.
Table 2. Participant B’s performance during the first cycle and at termination while driving in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
163699.2991.473.2%
2123392.5712.279.0%
3184220.0823.380.5%
4244346.3783.282.0%
5304276.71035.075.8%
6364738.4850.782.0%
7423563.8888.975.1%
8483599.1918.374.5%
9543211.71222.761.9%
Table 3. Participant C’s performance during the first cycle and termination while driving in a general road environment.
Table 3. Participant C’s performance during the first cycle and termination while driving in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
163604.2944.773.8%
2123712.41207.667.5%
3184202.01303.269.0%
4244693.61199.574.4%
5304034.11307.767.6%
6363665.6959.573.8%
7423229.5898.672.2%
8483638.01146.368.5%
9542787.7743.673.3%
Table 4. Participant B’s performance during the first cycle and termination time while driving on highways with a light traffic environment.
Table 4. Participant B’s performance during the first cycle and termination time while driving on highways with a light traffic environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
162822.4710.974.8%
2122977.7800.273.1%
3182830.0605.178.6%
4242927.1620.178.8%
5303057.8563.081.6%
6363016.9618.379.5%
7423032.6592.880.5%
8484410.4743.483.1%
9543716.0879.676.3%
Table 5. Participant B’s performance during the first cycle and termination time while driving on highways in a heavy traffic environment.
Table 5. Participant B’s performance during the first cycle and termination time while driving on highways in a heavy traffic environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
163206.2973.369.6%
2122769.2784.171.7%
3182859.1513.182.1%
4243075.1740.175.9%
5302783.0601.878.4%
6363002.4619.979.4%
7422863.0723.374.7%
8483837.9917.476.1%
963206.2973.369.6%
Table 6. Participant C’s performance during the first cycle and termination time while driving on highways with a light traffic environment.
Table 6. Participant C’s performance during the first cycle and termination time while driving on highways with a light traffic environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
163965.3982.675.2%
2122698.2913.066.2%
3182576.9882.465.8%
4242871.1660.177.0%
5302901.4917.068.4%
6362692.3774.471.2%
7422844.9722.674.6%
8484427.9706.084.1%
9544836.1580.688.0%
Table 7. Participant C’s performance during the first cycle and termination time while driving on highways in a heavy traffic environment.
Table 7. Participant C’s performance during the first cycle and termination time while driving on highways in a heavy traffic environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
162768.5709.274.4%
2122659.0729.572.6%
3182570.7743.371.1%
4243157.3665.078.9%
5302628.6688.173.8%
6362770.8769.772.2%
7423034.5756.875.1%
8483885.2724.881.3%
9542726.7657.375.9%
Table 8. Average loss and number of clips within a cluster.

Cluster | Average Loss | No. of Clips | Cluster | Average Loss | No. of Clips
1 | 2244.1 | 48 | 11 | 1142.3 | 76
2 | 1275.6 | 76 | 12 | 1823.7 | 77
3 | 1467.2 | 58 | 13 | 1076.7 | 76
4 | 1304.3 | 66 | 14 | 1549.5 | 75
5 | 2191.2 | 64 | 15 | 1434.1 | 86
6 | 1401.1 | 83 | 16 | 1365.4 | 108
7 | 1700.0 | 74 | 17 | 1022.2 | 74
8 | 1199.9 | 81 | 18 | 1239.1 | 137
9 | 1550.5 | 54 | 19 | 1719.6 | 70
10 | 1487.5 | 83 | 20 | 1543.9 | 65
Table 9. Participant A’s data were tested with Participant B’s learned system in a general road environment.
Table 9. Participant A’s data were tested with Participant B’s learned system in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
162723.21443.247.0%
2121838.01095.540.4%
3182829.41573.544.4%
4242052.01140.744.4%
5302517.21438.642.8%
6362080.390656.4%
7422131.81069.949.8%
8481755.11149.534.5%
9541222.71219.90.2%
Table 10. Participant A’s data were tested with Participant C’s learned system in a general road environment.
Table 10. Participant A’s data were tested with Participant C’s learned system in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
164517.71721.961.9%
2123799.81039.672.6%
3183854.72119.645.0%
4244077.31144.571.9%
5304263.11559.363.4%
6363620.81175.267.5%
7423855.0116669.8%
8484339.01372.368.4%
9543955.41002.874.6%
Table 11. Participant B’s data were tested with Participant A’s learned system in a general road environment.
Table 11. Participant B’s data were tested with Participant A’s learned system in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
161989.1980.250.7%
2121147.3812.529.2%
3182745.4800.770.8%
4241986.3892.455.1%
5301590.3949.540.3%
6362599.2719.572.3%
7421507.3784.747.9%
8481807.8965.646.6%
9541708.71291.924.4%
Table 12. Participant B’s data were tested with Participant C’s learned system in a general road environment.
Table 12. Participant B’s data were tested with Participant C’s learned system in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
164639.92029.156.3%
2123487.71038.570.2%
3184939.22148.156.5%
4244896.91176.076.0%
5304295.01500.265.1%
6364164.11338.667.9%
7423208.11257.760.8%
8483888.71511.361.1%
9543880.11133.970.8%
Table 13. Participant C’s data was tested using Participant A’s learned system in a general road environment.
Table 13. Participant C’s data was tested using Participant A’s learned system in a general road environment.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
164735.41102.676.7%
2125003.51676.166.5%
3185240.0864.583.5%
4244386.81158.573.6%
5305040.41364.972.9%
6365215.3891.382.9%
7424759.3773.883.7%
8484716.21099.576.7%
9546275.71768.271.8%
Table 14. Participant C’s data were tested on a general road environment with Participant B’s learned system.
Table 14. Participant C’s data were tested on a general road environment with Participant B’s learned system.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
164583.81271.872.3%
2125558.31572.771.7%
3186050.31784.370.5%
4245058.91169.776.9%
5306005.61030.382.8%
6366015.41451.775.9%
7424716.21099.576.7%
8484287.01225.971.4%
9545126.31370.173.3%
Table 15. Participant A’s data in a light traffic environment were tested with Participant B’s learned system.
Table 15. Participant A’s data in a light traffic environment were tested with Participant B’s learned system.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
161337.9543.359.4%
212968.2823.315.0%
318989.4581.441.2%
4241144.8701.338.7%
530762.5603.320.9%
6361541.7783.549.2%
7421182.0936.120.8%
8481626.5684.857.9%
9541656.6684.758.7%
Table 16. Participant A’s data in a heavy traffic environment were tested with Participant B’s learned system.
Table 16. Participant A’s data in a heavy traffic environment were tested with Participant B’s learned system.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
161574.5523.366.8%
212972.3687.729.3%
3181013.6543.746.4%
4241040.0382.063.3%
5301196.3842.329.6%
6361581.0861.145.5%
7421710.51066.337.7%
8482630.3923.464.9%
9541529.3864.243.5%
Table 17. Participant B’s data in a light traffic environment were tested with Participant A’s learned system.
Table 17. Participant B’s data in a light traffic environment were tested with Participant A’s learned system.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
161316.2672.248.9%
2121169.8858.726.6%
3181014.8604.240.5%
4241337.9378.671.7%
5301081.9584.346.0%
636951.2567.640.3%
7421156.4489.257.7%
8481257.6792.237.0%
9541697.7750.655.8%
Table 18. Participant B’s data in a heavy traffic environment were tested with Participant A’s learned system.
Table 18. Participant B’s data in a heavy traffic environment were tested with Participant A’s learned system.
ClusterRunLoss at
Cycle 1
Loss at
Termination
Improvement Rate
161214.6877.527.8%
2121395.9880.836.9%
318674.3455.232.5%
4241461.4839.042.6%
5301541.7531.265.5%
636766.1550.628.1%
742718.9512.828.7%
8481351.0906.132.9%
9541497.5875.841.5%
Table 19. The average loss of each distraction case.

Distraction Case | No. of Clips | Average Loss
Eyes wandering and closing | 40 | 3293.1
Switching lanes at will | 43 | 3452.3
Slow reaction speed | 63 | 3457.7
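The average losses in Table 19 sit well above the per-cluster averages in Table 8, which suggests a simple flagging rule. The sketch below is one hypothetical way to turn per-clip losses into an episode-level alarm; the baseline, margin, and the three-clip persistence requirement are assumptions, chosen to echo this study's emphasis on a series of behaviors rather than a single momentary one.

```python
def flag_distraction_episode(clip_losses, baseline=1450.0, margin=2.0, persist=3):
    """Flag clips whose loss greatly exceeds the learned baseline, and report
    an episode only when `persist` consecutive clips are flagged."""
    flags = [loss > margin * baseline for loss in clip_losses]
    return any(all(flags[i:i + persist]) for i in range(len(flags) - persist + 1))

# Example: losses around the Table 19 averages would trigger an episode.
print(flag_distraction_episode([3293.1, 3452.3, 3457.7, 1200.0]))  # True
```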
Table 20. The improvement rate of loss at cycle 1 and cycle 500. For each distraction case, the three values per cell are Loss at Cycle 1 / Loss at Cycle 500 / Improvement Rate.

Clip | Run | Switching Lanes at Will | Eyes Wandering/Closing | Slow Reaction Speed
1 | 1 | 4232.9 / 795.7 / 81.2% | 3577.9 / 923.7 / 74.2% | 4310.0 / 1231.4 / 71.4%
1 | 2 | 4120.1 / 1020.7 / 75.2% | 3943.0 / 1578.7 / 60.0% | 4777.8 / 826.5 / 82.7%
1 | 3 | 3961.8 / 1549.6 / 60.9% | 3323.1 / 1387.2 / 58.3% | 5024.9 / 1377.2 / 72.6%
1 | 4 | 3657.5 / 1431.5 / 60.9% | 3323.1 / 1387.2 / 58.3% | 4664.9 / 1261.2 / 73.0%
1 | 5 | 4010.4 / 1145.7 / 71.4% | 3099.1 / 1475.3 / 52.4% | 3632.9 / 1009.5 / 72.2%
2 | 1 | 4021.7 / 1001.4 / 75.1% | 3037.9 / 1130.0 / 62.8% | 3091.7 / 1340.9 / 56.6%
2 | 2 | 3919.3 / 1238.5 / 68.4% | 3523.6 / 1492.8 / 57.6% | 4548.3 / 930.1 / 79.6%
2 | 3 | 3331.8 / 1909.3 / 42.7% | 2952.4 / 1450.2 / 50.9% | 3834.2 / 1256.0 / 67.2%
2 | 4 | 2860.8 / 1437.4 / 49.8% | 2439.7 / 1150.1 / 52.9% | 3353.0 / 1513.1 / 54.9%
2 | 5 | 3007.4 / 1585.7 / 47.3% | 2881.3 / 1128.6 / 60.8% | 2617.0 / 991.7 / 62.1%
3 | 1 | 3056.6 / 580.9 / 81.0% | 2468.9 / 864.8 / 65.0% | 3267.7 / 1264.7 / 61.3%
3 | 2 | 3416.3 / 883.8 / 74.1% | 3335.5 / 1423.6 / 57.3% | 4170.0 / 726.5 / 82.6%
3 | 3 | 3640.8 / 1450.1 / 60.2% | 2864.4 / 1564.0 / 45.4% | 4181.6 / 1183.2 / 71.7%
3 | 4 | 2944.6 / 1024.4 / 65.2% | 2791.0 / 1020.3 / 63.4% | 3462.9 / 1455.6 / 58.0%
3 | 5 | 3286.1 / 1068.8 / 67.5% | 2719.3 / 1445.5 / 46.8% | 3142.1 / 1032.5 / 67.1%
4 | 1 | 3225.0 / 462.6 / 85.7% | 3092.1 / 889.9 / 71.2% | 3605.5 / 1228.1 / 65.9%
4 | 2 | 3660.5 / 1122.1 / 69.3% | 3345.9 / 1213.6 / 63.7% | 2969.0 / 965.7 / 67.5%
4 | 3 | 3517.0 / 1383.6 / 60.7% | 3536.9 / 1323.0 / 62.6% | 3791.0 / 1027.1 / 72.9%
4 | 4 | 3467.3 / 1065.5 / 69.3% | 3209.2 / 1093.2 / 65.9% | 3347.2 / 1377.5 / 58.8%
4 | 5 | 3684.7 / 1293.9 / 64.9% | 3186.8 / 1348.3 / 57.7% | 3446.5 / 1125.2 / 67.4%
5 | 1 | 4054.9 / 763.6 / 81.2% | 4384.6 / 1273.8 / 70.9% | 3091.7 / 1340.9 / 56.6%
5 | 2 | 5115.5 / 1367.8 / 73.3% | 4838.1 / 1412.5 / 70.8% | 4548.3 / 930.1 / 79.6%
5 | 3 | 4268.9 / 1324.9 / 69.0% | 4124.7 / 1684.9 / 59.2% | 3834.2 / 1256.0 / 67.2%
5 | 4 | 4150.1 / 1524.5 / 63.3% | 3734.5 / 1173.7 / 68.6% | 3352.0 / 1513.1 / 54.9%
5 | 5 | 4181.0 / 1481.0 / 64.6% | 3445.8 / 1069.2 / 69.0% | 2617.0 / 991.7 / 62.1%
6 | 1 | 2171.6 / 822.2 / 62.1% | 2769.2 / 968.5 / 65.0% | 2939.9 / 1095.4 / 62.7%
6 | 2 | 2616.2 / 1257.0 / 52.0% | 2884.0 / 1458.4 / 49.4% | 3663.4 / 785.0 / 78.6%
6 | 3 | 3182.4 / 1773.5 / 44.3% | 2596.0 / 1515.6 / 41.6% | 4206.4 / 1206.8 / 71.3%
6 | 4 | 2669.5 / 1067.9 / 60.0% | 3018.1 / 1177.5 / 61.0% | 3812.2 / 1082.2 / 71.6%
6 | 5 | 2932.5 / 1128.4 / 61.5% | 2662.9 / 1114.4 / 58.2% | 3126.1 / 816.9 / 73.9%