Article

Multi-level Hierarchical Complex Behavior Monitoring System for Dog Psychological Separation Anxiety Symptoms

1 Department of Computer and Information Science, Sejong Campus, Korea University, Sejong City 30019, Korea
2 Department of Animal Science, University of California, Davis, CA 95616, USA
3 Department of Computer Convergence Software, Sejong Campus, Korea University, Sejong City 30019, Korea
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(4), 1556; https://doi.org/10.3390/s22041556
Submission received: 7 January 2022 / Revised: 15 February 2022 / Accepted: 15 February 2022 / Published: 17 February 2022

Abstract: An increasing number of people own dogs due to the emotional benefits they bring to their owners. However, many owners are forced to leave their dogs at home alone, increasing the risk of psychological disorders such as separation anxiety, which is typically accompanied by complex behavioral symptoms including excessive vocalization and destructive behavior. Hence, this work proposes a multi-level hierarchical system for the early detection of psychological Separation Anxiety (SA) symptoms that automatically monitors home-alone dogs starting from the most fundamental postures, followed by atomic behaviors, and finally separation anxiety-related complex behaviors. Stacked Long Short-Term Memory (LSTM) networks are utilized at the lowest level to recognize postures from time-series wearable-sensor data. The recognized postures are then input into a Complex Event Processing (CEP) engine that relies on knowledge rules employing fuzzy logic (Fuzzy-CEP) to identify atomic behaviors and, above them, complex behaviors. The proposed method is evaluated on data collected from eight dogs recruited based on clinical inclusion criteria. The experimental results show that our system achieves an F1-score of approximately 0.86, proving its efficiency in monitoring separation anxiety symptomatic complex behavior of a home-alone dog.

1. Introduction

The number of dogs raised as pets has increased due to their beneficial impact on owners’ mental health, an effect that is more evident for owners living alone or with fewer family members [1,2,3]. Unfortunately, despite being emotionally attached to their dogs, owners cannot constantly look after them and in some cases are forced to leave them at home alone, increasing the risk of dogs developing psychological disorders such as Separation Anxiety (SA) [4,5]. The latter is considered the most common canine psychiatric disorder, often accompanied by complex behavioral symptoms such as high-frequency destructive behavior, which damages the surrounding environment, e.g., furniture and appliances, and excessive vocalization, which disturbs the neighboring community [6,7,8]. These undesired complex behavioral symptoms are also the primary reasons forcing owners to relinquish their dogs [9,10]. In America alone, nearly 670,000 dogs are euthanized each year, mainly due to behavioral problems related to psychiatric disorders [11]. Therefore, to improve dogs’ welfare and prevent them from developing separation anxiety, it is necessary to observe and monitor abnormal complex behavioral symptoms in advance and treat them successfully [10]. However, since SA is only triggered by the owner’s real or perceived absence [12], direct observation of the dog’s behavioral symptoms can be disruptive. Over the past 20 years, SA symptoms in dogs have been studied using subjective ratings, such as interviews with owners [6,12], or manual behavior recognition from recorded videos [4,6]. Nevertheless, these methods are laborious and inefficient, and they cannot automatically detect the early psychological symptoms of SA.
Spurred by the deficiencies of current methods, this study proposes a novel computer-based approach, instead of manual methods, to automatically monitor a cage-free dog’s early primary SA symptomatic complex behaviors, identified as ‘Excessive destructive behavior’, ‘Excessive exploratory behavior’, and ‘Excessive vocalization’. The current recording-based manual observation scheme first observes the dog’s head and body postures (poses and motions) to identify the atomic behaviors and then aggregates the latter into complex behaviors [6,8,13,14,15]. By summarizing these observation methods, we created a taxonomy of dog activities involving the three levels presented in Table 1. Level-1 activities represent a dog’s body pose or motion at a specific time [16]; a set of head or body postures composes an atomic behavior. For example, the ‘Walking’ behavior comprises a set of ‘Walk’ postures. Accordingly, Level-2 activities represent the dog’s atomic behaviors, i.e., fundamental behaviors [4,17]. Finally, Level-3 activities represent the dog’s abnormal complex behaviors, each aggregated from a set of high-frequency atomic behaviors [4,8]. For instance, the abnormal complex behavior ‘Excessive vocalization’ is aggregated from a set of high-frequency ‘Barking’ atomic behaviors [8,10,18].
Various studies have been proposed to recognize and detect dog activities at different levels, with Table 2 summarizing the most important ones [17,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33]. Early research focused on Level-1 dog posture recognition, providing the initial and essential groundwork for dog behavior recognition. More recent approaches focused on Level-2 dog behavior recognition and started to detect abnormal behaviors related to a dog’s welfare. Despite the variety of proposed methods, only a few recent studies achieved a Level-2 atomic behavior recognition accuracy of approximately 90% [25,27,31]. Moreover, recent studies are unable to detect separation anxiety symptomatic behaviors due to the following limitations:
  • Although some studies included potentially abnormal behaviors relevant to a dog’s well-being, they mainly focused on Level-1 postures or Level-2 abnormal atomic behaviors, e.g., ‘Barking’. Nevertheless, these techniques are insufficient to determine the specific disorder a dog might suffer from. For instance, the atomic behavior of barking can be related to noise phobia and be triggered when the dog hears outside noise; the behavior is only considered a separation anxiety-related abnormal behavior when its frequency is high. Hence, solely recognizing a potentially abnormal atomic behavior cannot directly provide an accurate diagnosis of separation anxiety symptoms.
  • To the best of our knowledge, only one separation anxiety reduction system [34] includes Level-3 separation anxiety-related symptomatic complex behaviors. However, training this system requires the owner’s direct participation, e.g., the owner labels the complex behaviors, such as ‘Destructive behavior’, using a smartphone. Hence, this architecture is unable to monitor complex behavioral symptom scenarios automatically.
  • The feasibility of implementing a dog automatic monitoring system related to psychological separation anxiety symptoms has not been reported yet. Thus, the current research gap increases the challenge of automatically inferring Level-3 complex behaviors from lower levels [35].
To address these limitations, this paper proposes an end-to-end, knowledge-based multi-level hierarchical system that automatically monitors a home-alone, cage-free dog starting from Level-1 (head and body postures), going through Level-2 (atomic behaviors), and reaching Level-3 (separation anxiety-related symptomatic complex behaviors). At the lowest level, we apply stacked Long Short-Term Memory (LSTM) models to recognize the dog’s posture using preprocessed time-series data collected from head and body wearable sensors. The stacked LSTM guarantees effective and stable posture recognition performance on time-series data [36,37]. Then, based on the extracted head and body postures, we design a dog behavior detection algorithm using Complex Event Processing (CEP) with dog behavior knowledge-based pattern rules to identify Level-2 atomic behaviors and Level-3 complex behaviors. The adopted CEP technology models the knowledge hierarchy and automatically detects meaningful complex events [38,39]. However, it is challenging to define ambiguous and uncertain psychological knowledge using CEP rules; for instance, ‘Excessive vocalization’ cannot be quantified to build numerical CEP pattern rules. To overcome the limitations of basic CEP rules, we introduce fuzzy logic, which handles imprecision and effectively represents psychological knowledge [40,41], extending the basic CEP rules for Level-3 symptomatic complex behavior monitoring. To evaluate the proposed system, we develop a prototype with real-world datasets covering eight dogs’ daily routines and separation anxiety scenarios. The dogs were recruited based on clinical separation anxiety inclusion criteria [42].
The remainder of this paper is structured as follows: Section 2 describes the proposed method for automatically monitoring a freely moving dog’s separation anxiety symptomatic complex behavior; Section 3 presents the experimental setup and analyzes the results; finally, Section 4 concludes this paper.

2. Proposed Method

2.1. System Structure

The proposed dog monitoring system architecture is illustrated in Figure 1, comprising five layers: data collection, data preprocessing, dog posture recognition, dog behavior monitoring, and application layer.

2.2. Data Collection Layer and Data Preprocessing Layer

As mentioned in Section 1, Level-1 posture analysis is the first crucial step towards understanding dog behavior and detecting a dog’s symptomatic complex behavior related to separation anxiety [25]. Moreover, the symptomatic complex behaviors are not independent of each other; for example, in most cases, dogs that present the head posture-related complex behavior ‘Excessive vocalization’ simultaneously exhibit the body posture-related complex behavior ‘Excessive destructive behavior’. Therefore, it is necessary to collect head and body postures concurrently when monitoring dog behavior. In this work, the data collection layer relies on wearable devices with tri-axial accelerometers to automatically collect a freely moving dog’s head and body posture raw time-series data. Wearable sensors are convenient for detecting postures and practical for real-world situations, as they do not require well-controlled environments [43]. Furthermore, accelerometers have already proven capable of measuring pose and motion for a wide range of species [25]. Prior work on head and body posture data collection explored various sensor locations on a dog’s body and concluded that back-mounted and neck-mounted devices produced high-quality data for head and body posture estimation [25,44], achieving a recognition accuracy of approximately 90% [25,27,30]. Therefore, we use two wearable devices with tri-axial accelerometers on the dog’s neck and back to collect the dog’s head and body posture raw data, respectively.
The data preprocessing layer dynamically segments, normalizes, and formats the raw time-series data as an input stream to the LSTM-based model, guaranteeing continuous dog posture recognition. Traditionally, sensor-based data segmentation methods use sliding windows to detect an activity’s start and end times [45]. In this work, we set the sliding window size to one second to ensure the system captures each posture’s central part. Moreover, due to varying dog sizes, the accelerometer data have different value ranges, with larger values dominating the LSTM network training and ultimately imposing a natural bias [46]. Therefore, the min–max normalization method is utilized to normalize the time-series data between 0 and 1 [47]. After normalization, the data are converted into an appropriate format and input to the LSTM network as a 3D vector with the shape (Samples) × (Timesteps) × (Features). In this study, ‘Timesteps’ is set to 50, which corresponds to the sequence of data received from the sensor during one second, and the ‘Features’ are the accelerometer’s x-, y-, and z-axes.
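To make this concrete, below is a minimal preprocessing sketch in Python; the function name and the non-overlapping windowing are our own simplifications (the paper uses a one-second sliding window), and only the normalization formula and the (Samples) × (50) × (3) output shape follow the text.

```python
import numpy as np

def preprocess(raw, timesteps=50):
    """Min-max normalize a (N, 3) tri-axial accelerometer stream to [0, 1]
    and reshape it into one-second windows of shape (samples, 50, 3)."""
    mins, maxs = raw.min(axis=0), raw.max(axis=0)
    norm = (raw - mins) / (maxs - mins)          # min-max normalization
    n_windows = len(norm) // timesteps           # 50 samples = 1 s at 50 Hz
    return norm[:n_windows * timesteps].reshape(n_windows, timesteps, 3)
```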

2.3. Dog Posture Recognition Layer

The purpose of this layer is to classify the preprocessed dog motion data into an understandable head or body posture category. These postures are the hierarchy’s basic-level (Level-1) activities used to detect the higher-level atomic and complex behaviors. For posture recognition, earlier methods using models such as SVM relied on hand-crafted features extracted from the input data through fixed mathematical rules [21,22,46]. However, engineering hand-crafted features requires domain knowledge about the specific application [48]. Recently, deep learning techniques have been widely employed in several recognition fields, automating the feature extraction process without requiring domain knowledge [49,50,51]. Among deep learning models, stacked LSTMs are well suited to recognizing sensor time-series data and enhancing model accuracy [36,37,46,52,53]. Therefore, we employ stacked LSTMs as our system’s base analyzer to classify the Level-1 dog postures and motions.
The proposed stacked LSTM network structure for dog posture recognition is illustrated in Figure 2, where the two stacked LSTM networks for dog head and body posture recognition are independent and parallel. Each stacked LSTM network comprises two LSTM layers of 64 units each. The output layer is a softmax layer calculating the probability and classifying the data into one of the Level-1 postures presented in Table 1. The head and body postures are then forwarded to the next layer to be used to detect higher level behaviors.
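A minimal TensorFlow sketch of one such network is shown below; the layer sizes follow the text, while the per-network class counts are read from Table 1 and the helper name is our own.

```python
import tensorflow as tf

def build_posture_model(n_classes):
    """Stacked LSTM: two 64-unit LSTM layers followed by a softmax output."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(50, 3)),             # timesteps x axes
        tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM layer
        tf.keras.layers.LSTM(64),                         # second LSTM layer
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

head_model = build_posture_model(3)  # Up, Down, Bark (Table 1)
body_model = build_posture_model(6)  # Walk, Lie, Sit, Stand, Dig, Jump
```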

2.4. Dog Behavior Monitoring Layer

This layer abstracts a multi-level knowledge-based hierarchical structure from the previous manual recognition methods to effectively recognize Level-2 and Level-3 dog behaviors. Based on this hierarchical structure, we utilize Complex Event Processing (CEP) technology with knowledge rules to automatically detect Level-2 and Level-3 dog behaviors. CEP combined with knowledge-based rules can automatically identify causality patterns and detect meaningful complex events with time relationships. However, while basic CEP rules are sufficient to detect Level-2 atomic behaviors based on Level-1 postures, it is challenging to employ them to express Level-3 psychological separation anxiety-related behavioral information based on specific indicators. For example, it is difficult to quantify the number of ‘Barking’ events needed to detect the abnormal status ‘Excessive vocalization’. Therefore, we introduce a fuzzy logic concept to extend the expressiveness of the CEP rules, constituting a Fuzzy-CEP dog behavior monitoring system.

2.4.1. Abstraction Hierarchy of Dog Separation Anxiety-Related Behaviors

As previously explained, existing manual observation methods of dog behavior established that a set of understandable primitive postures composes an atomic behavior [13,15]. Additionally, a set of temporally coherent atomic behaviors can be aggregated into a dog’s symptomatic complex behavior [8]. Therefore, we exploit the hierarchy concept, an effective method for expressing aggregation or composition relationships between activities [43,54]. Figure 3 illustrates the proposed three-level hierarchy for the primary complex behavioral symptoms relevant to separation anxiety detection. As a reminder, the three levels defined in Table 1 are Level-1 (head and body postures), Level-2 (atomic behaviors), and Level-3 (separation anxiety-related symptomatic complex behaviors). Compositions C1 and C2 are the two types of relationships between Level-1 and Level-2: the C1 relationship denotes that an atomic behavior comprises a set of identical postures during the observation time, while the C2 relationship denotes that an atomic behavior comprises various postures during the observation time. For instance, the atomic behavior ‘Sniffing’ involves the body posture ‘Walk’ and the head posture ‘Head down’. Aggregation (A) is the relationship between Level-2 and Level-3, representing complex behaviors aggregated from related lower-level atomic behaviors that are more frequent than when the owner is at home [13]. For instance, the complex behavior ‘Excessive exploratory behavior’ is aggregated from high-frequency occurrences of the atomic behaviors ‘Walking’ and ‘Sniffing’.

2.4.2. Hierarchy Modeling for Dog Behavior Automatic Detection

The basic approach is to use a rule-based method to automatically monitor a dog’s behavior based on the hierarchy presented in Figure 3 [55,56]. However, rule-based techniques are limited to simple rules and cannot infer a higher-level activity from a set of lower-level activities with time relationships [57,58]. In this work, we exploit complex event processing technology as our primary method to address this issue. CEP can simultaneously and automatically identify meaningful events and generate higher-level events based on relationships, i.e., time and aggregation relationships [59,60]. Furthermore, CEP rules can be extended by custom aggregate functions according to fundamental requirements [61].
Figure 4 shows how the CEP technology models a dog’s behavior based on the hierarchy: (1) the Level-1 head and body postures, calculated by the stacked LSTM networks from the preprocessed sensor data, are input into the event streams of the CEP hierarchy; (2) the Level-2 dog atomic behaviors are detected by the atomic behavior Event Processing Network (EPN), which relies on the extracted postures; (3) the Level-3 complex behaviors are then detected by the complex behavior EPN, which exploits the related atomic behaviors. Each network includes event processing engines and CEP rules; (4) the CEP engine selects the lower-level events that satisfy the CEP pattern rules and generates higher-level events; (5) a CEP rule is defined by events and event constructors expressing the relationships between the events [41]. In Figure 4, colored events represent those that compose a pattern when a match has occurred within the time window frame; events sharing the same color correspond to a detected pattern. In this work, the events are the activities of each layer, modeled as follows. A posture event $P$ is denoted as
$$P = E(id, s, p, t),$$
where $id$ is the dog’s subject ID, $s$ is the sensor ID, $p$ is the dog’s posture class, and $t$ is the posture’s timestamp. An atomic behavior event $A$ is denoted as
$$A = E(id, ab, t_s, t_e), \quad t_s < t_e,$$
where $ab$ is the dog’s atomic behavior class, and $t_s$ and $t_e$ are the behavior event’s starting and ending times; $t_e - t_s$ is the observation time interval. A complex behavior event $C$ is denoted as
$$C = E(id, cb, d, t_s, t_e), \quad t_s < t_e,$$
where $cb$ is the dog’s complex behavior class, and $d$ is the symptom state of the complex behavior.
This work adapts the CEP event constructors for dog behavior detection presented in Table 3. Additionally, we create a fuzzy function appropriate for the dog’s psychological separation anxiety symptomatic complex behavior detection.
The specific dog behavior detection rules are introduced in Table 4, which models the C1 and C2 composition relationships of atomic behaviors in the atomic behavior event processing network. For instance, if the dog maintains the posture ‘Dig’ without change within a two-second observation window, the atomic behavior ‘Digging’ is generated. Likewise, if the body posture ‘Walk’ is detected followed by the head posture ‘Head down’ within two seconds, the atomic behavior network generates the atomic behavior ‘Sniffing’. In the complex behavior network, the CEP engine receives and calculates the total frequency of the complex behavior-related atomic behaviors within an observation interval of 15 s. Then, the CEP system uses fuzzy CEP rules to determine whether the total frequency is abnormal, i.e., whether the behavior happens more frequently than when the owner is at home [8,10,18]. Further details on the fuzzy function are presented in Section 2.4.3.
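To illustrate how such a window-based pattern rule operates, the following Python sketch mimics the C1 rule of Table 4 (a repeated identical posture within a two-second window yields an atomic behavior); the event fields follow the definitions above, but the helper is our illustration, not the Esper rule itself.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class PostureEvent:          # P = E(id, s, p, t) from Section 2.4.2
    dog_id: str
    sensor_id: str
    posture: str
    t: float                 # timestamp in seconds

def detect_c1(stream, posture, behavior, window=2.0, min_repeats=2):
    """Yield (id, ab, ts, te) tuples when `posture` repeats within `window` s."""
    recent = deque()
    for ev in stream:
        if ev.posture != posture:
            recent.clear()               # C1 requires identical postures only
            continue
        recent.append(ev)
        while recent and ev.t - recent[0].t > window:
            recent.popleft()             # drop events outside the time window
        if len(recent) >= min_repeats:
            yield (ev.dog_id, behavior, recent[0].t, ev.t)

# e.g., detect_c1(posture_stream, posture="Dig", behavior="Digging")
```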

2.4.3. Fuzzy Function of Dog Monitoring System

Most knowledge of dogs’ psychological separation anxiety is expressed in natural language by experts [6,8,14,15,18,62]. Therefore, it is challenging to effectively express this ambiguous and uncertain psychological knowledge using basic CEP rules; for instance, the symptomatic complex behavior ‘Excessive vocalization’ cannot be quantified to build pattern rules. To address this issue, we introduce fuzzy logic and expand the existing CEP rules in the complex behavior EPN. As a common approach to imprecise and vague problems, fuzzy logic has a long history in automated clinical diagnosis [63,64]. Moreover, it is easier for experts to map their expertise into fuzzy logic than into sophisticated probabilistic methods [40,41]. Figure 5 illustrates the fuzzy logic function structure of the proposed dog monitoring system, with the main steps described as follows:
1. Fuzzification: the fuzzifier applies the relevant membership functions to transform the crisp variables into fuzzy linguistic variables, whose values are natural language words instead of numerical values. This work utilizes domain knowledge [4,8,18,65], so the input linguistic variable is the frequency ($f$) of each complex behavior (destructive behavior, exploratory behavior, and vocalization). Specifically, $F(f) = \{Seldom, Consistent, Most\}$ is the set of decompositions for the linguistic variable frequency, with each member of $F(f)$ covering a portion of the overall frequency range. For example, in Figure 6a, a frequency of 30% (0.3) of the observation time is classified as 50% ‘Seldom’ and 50% ‘Consistent’. The fuzzifier transforms the crisp frequency input using the trapezoidal and triangular membership functions illustrated in Figure 6a. Similarly, the output linguistic variable is the symptom diagnosis index with two linguistic values, i.e., $\{Normal, Abnormal\}$, with the trapezoidal membership functions illustrated in Figure 6b.
2. Fuzzy rules and inference: based on the domain knowledge, the dog separation anxiety-detection fuzzy matrix is presented in Table 5. For instance, if the ‘Exploratory behavior’ is ‘Consistent’ or ‘Most’, the separation anxiety symptom state is ‘Abnormal’ [10].
3. Defuzzification: this stage utilizes the center of gravity [66], one of the most common defuzzifiers, to obtain the centroid of the shape generated by superimposing the fuzzy rule shapes.
4. Threshold Decision: based on the defuzzification result, a heuristic decision threshold derived from domain knowledge [4,18,65] produces the final binary classification (normal or abnormal behavior). If the result exceeds the threshold, the complex behavior is diagnosed with the abnormal status ‘Excessive’ [67]. Further details on fuzzy logic can be found in [66,67,68]. A minimal code sketch of these four steps follows this list.
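The sketch below walks through the four steps for a single behavior frequency; the membership breakpoints and the output domain are illustrative placeholders rather than the paper’s calibrated values, and `threshold` stands in for the heuristic decision thresholds reported in Section 3.3.4.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function (triangular when b == c)."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (d - x) / (d - c + 1e-9)), 0.0, 1.0)

def diagnose(frequency, threshold=1.0):
    # 1. Fuzzification: membership degrees of the crisp frequency input
    seldom = trap(frequency, -0.1, 0.0, 0.2, 0.4)
    consistent = trap(frequency, 0.2, 0.4, 0.4, 0.6)     # triangular
    most = trap(frequency, 0.4, 0.6, 1.0, 1.1)
    # 2. Inference (Table 5 style): 'Consistent' or 'Most' implies Abnormal
    normal, abnormal = seldom, max(consistent, most)
    # 3. Defuzzification: center of gravity over the output domain [0, 2]
    x = np.linspace(0.0, 2.0, 201)
    clipped = np.maximum(np.minimum(normal, trap(x, -0.1, 0.0, 0.5, 1.0)),
                         np.minimum(abnormal, trap(x, 1.0, 1.5, 2.0, 2.1)))
    centroid = float((x * clipped).sum() / (clipped.sum() + 1e-9))
    # 4. Threshold decision: exceeding the threshold flags 'Excessive'
    return "Abnormal" if centroid > threshold else "Normal"
```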
The overall CEP-based dog behavior detection system is described in Algorithm 1. The input event stream involves the posture events (L1), and the output comprises the separation anxiety-related atomic behaviors (L2) and complex behaviors (L3). Initially, the predefined rules in each event processing network, the linguistic variables, the membership functions, and the fuzzy logic rules are initialized (line 3). The overall system is defined by the event types: L1 posture, L2 atomic behavior, and L3 complex behavior events (lines 4–6); upper case denotes an event type and lower case an event instance. The fuzzy function is defined in lines 8–16. As the CEP system keeps receiving and creating events at different levels, the algorithm searches within the observation time for events satisfying the predefined rules of the different level networks (lines 17–27). Once the events satisfy the rules, the CEP algorithm creates a behavior event and publishes it to the event channel. The system then returns the detected Level-2 (L2) and Level-3 (L3) behaviors (line 28). For completeness, pi, pj−1, and pj are the postures to be detected in the atomic behavior network rules, and L2m and L2k denote Level-2 streams, where each stream contains the same type of atomic behaviors, with m ≠ k.
Algorithm 1. Complex Behavior Detection for SA.
1  Input: L1 = {p1, p2, …, pi, …, pj, …, pn}
2  Output: L2 = {a1, a2, …, ai, …, aj, …, an}, L3 = {c1, c2, …, ci, …, cn}
3  Initialize: pre-defined CEP rules, pre-defined linguistic variables, membership functions, and fuzzy rules
4  Define posture event type = P (id, s, posture, t)
5  Define atomic behavior event type = A (id, atomic behavior, ts, te)
6  Define complex behavior event type = C (id, complex behavior, d, ts, te)
7  // Fuzzy function
8  Function F(frequency)
9      Convert frequency to fuzzy values by membership functions
10     Evaluate the rules in the rule base
11     Combine the results of each rule
12     result = center-of-gravity calculation
13     If result > threshold
14         Then classification result = Abnormal
15         Else classification result = Normal
16     Return classification result
17 // Level-2 dog atomic behavior detection
18 If select * from L1
19     where repeat pi.posture more than two times ∧ win(2 s)
20     Then create ai (id, related atomic behavior, tn, tn+1)
21 If select * from L1
22     where pj−1.posture ≠ pj.posture ∧ win(2 s)
23     Then create aj (id, related atomic behavior, tn, tn+1)
24 // Level-3 dog complex behavior detection
25 If select * from L2
26     where F(C(L2m, L2k)) ∧ win(15 s) = Normal or Abnormal
27     Then create ci (id, related complex behavior, classification result of symptoms, ts, te)
28 Return L2, L3

2.5. Dog Application Layer

In this layer, a web application is designed to report and analyze the monitoring results of separation anxiety-related complex behaviors in dogs. Upon automatically identifying the complex behaviors related to separation anxiety, the system sends the monitoring results to the owner or scientist.

3. Results

3.1. Data Collection and Datasets

The performance of the proposed monitoring system was evaluated using raw activity sensor data and video recordings of eight dogs recruited based on the clinical separation anxiety criteria defined in [42]. Table 6 presents the basic information of the eight dogs.
The data were collected either on the owners’ premises or in the laboratory, as shown in the examples in Figure 7, where a red bounding box helps identify the dog’s location in every example. The data collection procedure was conducted in three phases. (1) Preparation: two lightweight motion sensors (LPMS-B2, size: 39 × 39 × 8 mm, weight: 12 g, sample rate: 50 Hz) were mounted on the dog’s neck and back to collect its head and body tri-axial accelerometer data. At this stage, a camera was set up to record the dog’s activity with video and sound for use during labeling, ensuring the accuracy of the sensor data labels. (2) Synchronization: the sensors were synchronized by connecting them to a computer via Bluetooth Low Energy, and sensor activation ensured uninterrupted video camera recording. (3) Activity recording: each dog was left to move naturally and freely for 5–15 min while its posture and movements were recorded by the sensors and a video camera. The data received from each sensor contain six columns: sensor ID, frame number, timestamp, and the three accelerometer axes (x, y, z). Figure 8 shows examples of raw time-series data of dog postures. The total duration of each activity is shown in Table 7. The dataset does not contain any missing values, and the data were selected and labeled (as ground truth) with the help of animal behavior researchers.

3.2. Implementation

The prototype system was implemented on a computer with an Intel i7 CPU, 64 GB of RAM, and a GTX 1080 Ti GPU, running Windows 10. The system used the OpenMAT software [69] to capture the tri-axial accelerometer signals; Figure 9 depicts a screenshot of OpenMAT. The LSTM-based network was trained using the TensorFlow library [70], and the CEP network was implemented using the Esper library, as it provides a CEP engine and integrated tools for modeling CEP rules [71]. The database-backed web application was designed using the Plotly and Dash Python libraries [72].

3.3. Evaluation

3.3.1. Metrics

The recognition performance of each activity level is evaluated employing the Precision, Recall, and F1-score metrics [73]:
$$\text{Precision} = \frac{TP}{TP + FP}$$
$$\text{Recall} = \frac{TP}{TP + FN}$$
$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
where True Positives (TP) is the number of dog activities that are actually positive and classified as positive, False Positives (FP) is the number of dog activities that are actually negative and classified as positive, and False Negatives (FN) is the number of dog activities that are actually positive and classified as negative.
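For reference, the same metrics can be computed with scikit-learn; the labels below are dummy values for illustration only.

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = ["Walk", "Sit", "Walk", "Bark", "Sit"]   # placeholder ground truth
y_pred = ["Walk", "Sit", "Bark", "Bark", "Walk"]  # placeholder predictions
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"Precision={p:.2f}, Recall={r:.2f}, F1-score={f1:.2f}")
```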

3.3.2. Posture Monitoring Results (Level-1 Classification)

The first experiment was conducted to confirm the effectiveness of the stacked LSTM models proposed in this paper, and the second to compare their performance with other models. The data used in both experiments contained 4500 time-series samples, balanced with exactly 500 samples per class. Training and testing data were divided through 5-fold cross-validation, with a different fold of 900 samples used for testing in each iteration; hence, each iteration uses an 8:2 train/test split, i.e., 3600 and 900 samples, respectively. Both stacked LSTM networks used for head and body posture recognition were trained using cross-entropy loss [46] and the Adam optimizer [74] with decay rates β1 = 0.9 and β2 = 0.999 and a learning rate of 0.0025. The batch size was 25, trained for 50 epochs.
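These hyperparameters map onto the following training sketch, reusing the hypothetical build_posture_model helper from Section 2.3; the specific cross-entropy variant and the fold data are assumptions and placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder fold data with the reported shapes (3600 train / 900 test)
x_train = np.random.rand(3600, 50, 3); y_train = np.random.randint(0, 6, 3600)
x_test = np.random.rand(900, 50, 3);   y_test = np.random.randint(0, 6, 900)

model = build_posture_model(n_classes=6)   # helper sketched in Section 2.3
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0025,
                                       beta_1=0.9, beta_2=0.999),
    loss="sparse_categorical_crossentropy",  # a cross-entropy variant [46]
    metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=25, epochs=50,
          validation_data=(x_test, y_test))
```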
The results of the first experiment, i.e., Level-1 head and body posture classification, show that our proposed method obtained an average F1-score of 0.947 (Table 8), proving that it can accurately identify the dog’s head and body postures. For the ‘Bark’ and ‘Head up’ postures, the model’s accuracy is relatively lower because some barks were not loud enough, making it difficult to differentiate between the two postures. Additionally, high-intensity panting causes the dog’s head and body to move constantly, adding noise to the signal. Similarly, slight movements during body postures such as ‘Stand’ or ‘Dig’ led our model to falsely predict ‘Walk’ or ‘Jump’. Moreover, some transitions between two postures occur during recognition, reducing recognition performance.
In the second experiment, we compared the LSTM approach with two current dog activity classifiers [25,27,31], i.e., Naïve Bayes (NB) and Support Vector Machine (SVM). We employed the same training and testing datasets for all methods with five-fold cross-validation to guarantee a fair and accurate comparison, and statistical features (min, max, mean, standard deviation) were used to train the SVM and NB models. Table 9 shows the performance results using the F1-score and confirms that the stacked LSTM networks outperform current classifiers.

3.3.3. Atomic Behavior Monitoring Results (Level-2 Classification)

Based on the Level-1 detection results, we performed a Level-2 atomic behavior identification experiment considering 1070 dog atomic behavior samples. Window slicing was used for data augmentation of some abnormal atomic behaviors exploited to detect the Level-3 behaviors [75,76]: a slicing window of 100-sample width and 50% overlap was moved backward to augment the ‘Escaping’ behaviors by 17 sequences and the ‘Barking’ behaviors by 10 sequences. A second experiment was then conducted to compare the recognition performance of Level-2 activities.
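A minimal sketch of this window-slicing augmentation follows; the input sequence is a random placeholder, and only the 100-sample width and 50% overlap follow the text.

```python
import numpy as np

def slice_windows(seq, width=100, overlap=0.5):
    """Slide a window of `width` samples with the given overlap over `seq`
    and stack the slices as extra training sequences."""
    step = int(width * (1 - overlap))            # 50-sample stride
    return np.stack([seq[i:i + width]
                     for i in range(0, len(seq) - width + 1, step)])

escape_seq = np.random.randn(950, 3)             # placeholder sensor sequence
augmented = slice_windows(escape_seq)            # shape: (18, 100, 3)
```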
The first experimental results are presented in Table 10, revealing that the proposed system’s average detection accuracy approached 0.915. As summarized in Table 10, most dog atomic behaviors are correctly detected, confirming that the proposed method achieves good performance for Level-2 dog atomic behavior recognition.
In the second experiment, we compared our proposed method with the SVM, Decision Tree (DT), and NB classifiers used in previous studies [24,25,27,31]. As before, statistical features (min, max, mean, and standard deviation) were used to train the SVM, DT, and NB models. As shown in Table 11, the proposed method (stacked LSTM + CEP), which uses the hierarchical structure, achieved better performance. SVM, DT, and NB misclassified some ‘Sniffing’ behaviors because this activity is associated with head posture, and it is relatively hard to distinguish ‘Standing’ from ‘Sniffing’ using only a body sensor. Additionally, high-intensity panting increased the atomic behavior recognition error.

3.3.4. Complex Behavior Monitoring Results (Level-3 Classification)

Two experiments were conducted to confirm the performance of Level-3 dog separation anxiety symptomatic complex behavior detection. The first experiment used 152 destructive behavior samples, 84 vocalization samples, and 231 exploratory behavior samples. Fifty-six destructive behavior sequences were added through data augmentation using a slicing window of 750-sample width and 86.7% overlap; likewise, 47 additional vocalization sequences and 122 exploratory behavior sequences were generated. The heuristic decision threshold was 1.0 for vocalization and 1.5 for destructive and exploratory behaviors. This experiment used the previous Level-2 experiment results as input to the Fuzzy-CEP system to evaluate the performance of Level-3 complex behavior detection.
Table 12 depicts the results of the first experiment in terms of precision, recall, and F1-score. Our approach achieved an F1-score of 0.86 for symptomatic complex behaviors, highlighting that the employed hierarchical structure achieves appealing performance and that the proposed CEP-based monitoring method is promising for detecting signs of separation anxiety in dogs. The performance decline at Level-3 is primarily due to false recognitions propagating from Level-1 to Level-2 activities such as ‘Vocalization Behavior’. Hence, exploiting additional sensors, e.g., a sound sensor, would further enhance our method’s performance.
The second experiment compared our proposed method with the SVM, DT, and Random Forest (RF) classifiers for Level-3 complex behaviors, exploiting the same statistical features (min, max, mean, and standard deviation) to train the SVM, RF, and DT models. As shown in Table 13, the proposed method (stacked LSTM + Fuzzy-CEP), which uses the hierarchical structure combined with two sensors, achieved better performance.

3.3.5. Dog Monitoring System Web Application

We designed a web application to report and analyze the detected dogs’ separation anxiety-related complex behavior. Figure 10 depicts a snapshot of our web application, including a live video stream to check the dogs’ activities, an alarm table, a time scatter chart showing the dog’s normal/abnormal status, and a pie chart analyzing the detected complex behaviors.

4. Conclusions

Owners often leave their dogs at home alone, potentially causing psychological disorders such as separation anxiety, which is frequently accompanied by complex behavioral symptoms like excessive destructive behavior, excessive exploratory behavior, and excessive vocalization. These undesired complex behavioral symptoms are the main reason for the relinquishment of dogs. Thus, we presented a monitoring method based on a multi-level hierarchical system that automatically monitors freely moving home-alone dogs, starting from Level-1 (fundamental head and body postures), going through Level-2 (atomic behaviors), and reaching Level-3 (separation anxiety-related symptomatic complex behaviors). For Level-1, we apply a stacked LSTM model to recognize the dog’s head and body postures using time-series data from wearable sensors. Then, based on the extracted postures, the CEP engine uses dog behavior knowledge-based pattern rules for Level-2 atomic behavior and Level-3 complex behavior identification. To overcome the limitations of basic CEP rules, this work proposes a Fuzzy-CEP, as fuzzy rules can handle the imprecision and vagueness inherent in psychological knowledge. Our experiments evaluated the proposed approach using data collected from eight dogs recruited based on clinical inclusion criteria. The experimental results demonstrate that our system achieves an F1-score of approximately 0.86, affording an appealing dog symptomatic complex behavior monitoring scheme appropriate for real-world environments. Furthermore, the experiments reveal that our approach provides a feasible way to describe complex behaviors related to psychiatric symptoms and helps promote the adoption of artificial intelligence technology in the veterinary field. In future work, we intend to develop a more robust dog behavior monitoring system for separation anxiety symptoms by combining sensor, video, and sound data.

Author Contributions

Conceptualization, H.W., J.L., D.P. and Y.C.; methodology, H.W., J.L. and D.P.; validation, J.L., D.P. and Y.C.; data curation, H.W., O.A. and J.T.; writing—original draft preparation, H.W.; writing—review and editing, O.A., J.T., J.L. and D.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3070835 and NRF-2021R1I1A3049475).

Acknowledgments

The authors wish to acknowledge Anita Oberbauer, Animal Science, University of California, Davis, for her help in interpreting the significance of the results of this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boya, U.O.; Dotson, M.J.; Hyatt, E.M. Dimensions of the dog–human relationship: A segmentation approach. J. Target. Meas. Anal. Mark. 2012, 20, 133–143. [Google Scholar] [CrossRef]
  2. Dotson, M.J.; Hyatt, E.M. Understanding dog–human companionship. J. Bus. Res. 2008, 61, 457–466. [Google Scholar] [CrossRef] [Green Version]
  3. Archer, J. Why do people love their pets? Evol. Hum. Behav. 1997, 18, 237–259. [Google Scholar] [CrossRef]
  4. Rehn, T.; Keeling, L.J. The effect of time left alone at home on dog welfare. Appl. Anim. Behav. Sci. 2011, 129, 129–135. [Google Scholar] [CrossRef]
  5. Norling, A.-Y.; Keeling, L. Owning a dog and working: A telephone survey of dog owners and employers in Sweden. Anthrozoös 2010, 23, 157–171. [Google Scholar] [CrossRef]
  6. Konok, V.; Doka, A.; Miklosi, A. The behavior of the domestic dog (Canis familiaris) during separation from and reunion with the owner: A questionnaire and an experimental study. Appl. Anim. Behav. Sci. 2011, 135, 300–308. [Google Scholar] [CrossRef]
  7. Kobelt, A.; Hemsworth, P.; Barnett, J.; Coleman, G. A survey of dog ownership in suburban Australia—Conditions and behaviour problems. Appl. Anim. Behav. Sci. 2003, 82, 137–148. [Google Scholar] [CrossRef]
  8. Lund, J.D.; Jorgensen, M.C. Behaviour patterns and time course of activity in dogs with separation problems. Appl. Anim. Behav. Sci. 1999, 63, 219–236. [Google Scholar] [CrossRef]
  9. Salman, M.D.; Hutchison, J.; Ruch-Gallie, R.; Kogan, L.; New, J.C., Jr.; Kass, P.H.; Scarlett, J.M. Behavioral reasons for relinquishment of dogs and cats to 12 shelters. J. Appl. Anim. Welf. Sci. 2000, 3, 93–106. [Google Scholar] [CrossRef]
  10. Overall, K.L.; Dunham, A.E.; Frank, D. Frequency of nonspecific clinical signs in dogs with separation anxiety, thunderstorm phobia, and noise phobia, alone or in combination. J. Am. Vet. Med. Assoc. 2001, 219, 467–473. [Google Scholar] [CrossRef] [Green Version]
  11. Dinwoodie, I.R.; Dwyer, B.; Zottola, V.; Gleason, D.; Dodman, N.H. Demographics and comorbidity of behavior problems in dogs. J. Vet. Behav. 2019, 32, 62–71. [Google Scholar] [CrossRef]
  12. Ogata, N. Separation anxiety in dogs: What progress was made in our understanding of the most common behavioral problems in dogs? J. Vet. Behav. Clin. Appl. Res. 2016, 16, 28–35. [Google Scholar] [CrossRef]
  13. Storengen, L.M.; Boge, S.C.K.; Strom, S.J.; Loberg, G.; Lingaas, F. A descriptive study of 215 dogs diagnosed with separation anxiety. Appl. Anim. Behav. Sci. 2014, 159, 82–89. [Google Scholar] [CrossRef]
  14. Scaglia, E.; Cannas, S.; Minero, M.; Frank, D.; Bassi, A.; Palestrini, C. Video analysis of adult dogs when left home alone. J. Vet. Behav.-Clin. Appl. Res. 2013, 8, 412–417. [Google Scholar] [CrossRef]
  15. Parthasarathy, V.; Crowell-Davis, S.L. Relationship between attachment to owners and separation anxiety in pet dogs (Canis lupus familiaris). J. Vet. Behav.-Clin. Appl. Res. 2006, 1, 109–120. [Google Scholar] [CrossRef] [Green Version]
  16. Barnard, S.; Calderara, S.; Pistocchi, S.; Cucchiara, R.; Podaliri-Vulpiani, M.; Messori, S.; Ferri, N. Quick, accurate, smart: 3D computer vision technology helps assessing confined animals’ behaviour. PLoS ONE 2016, 11, e0158748. [Google Scholar] [CrossRef] [Green Version]
  17. Ladha, C.; Hammerla, N.; Hughes, E.; Olivier, P.; Ploetz, T. Dog’s life: Wearable activity recognition for dogs. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Zurich, Switzerland, 8–12 September 2013; pp. 415–418. [Google Scholar]
  18. Protopopova, A.; Kisten, D.; Wynne, C. Evaluating a humane alternative to the bark collar: Automated differential reinforcement of not barking in a home-alone setting. J. Appl. Behav. Anal. 2016, 49, 735–744. [Google Scholar] [CrossRef]
  19. Ribeiro, C.; Ferworn, A.; Denko, M.; Tran, J. Canine pose estimation: A computing for public safety solution. In Proceedings of the 2009 Canadian Conference on Computer and Robot Vision, Kelowna, BC, Canada, 25–27 May 2009; pp. 37–44. [Google Scholar]
  20. Mealin, S.; Domínguez, I.X.; Roberts, D.L. Semi-supervised classification of static canine postures using the Microsoft Kinect. In Proceedings of the 3rd International Conference on Animal-Computer Interaction, Milton Keynes, UK, 15–17 November 2016; p. 16. [Google Scholar]
  21. Winters, M.; Brugarolas, R.; Majikes, J.; Mealin, S.; Yuschak, S.; Sherman, B.L.; Bozkurt, A.; Roberts, D. Knowledge engineering for unsupervised canine posture detection from IMU data. In Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology, Iskandar, Malaysia, 16–19 November 2015; p. 60. [Google Scholar]
  22. Valentin, G.; Alcaidinho, J.; Howard, A.; Jackson, M.M.; Starner, T. Towards a canine-human communication system based on head gestures. In Proceedings of the 12th International Conference on Advances in Computer Entertainment Technology, Iskandar, Malaysia, 16–19 November 2015; p. 65. [Google Scholar]
  23. Weiss, G.M.; Nathan, A.; Kropp, J.; Lockhart, J.W. WagTag: A dog collar accessory for monitoring canine activity levels. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, Zurich, Switzerland, 8–12 September 2013; pp. 405–414. [Google Scholar]
  24. Brugarolas, R.; Loftin, R.T.; Yang, P.; Roberts, D.L.; Sherman, B.; Bozkurt, A. Behavior recognition based on machine learning algorithms for a wireless canine machine interface. In Proceedings of the 2013 IEEE International Conference on Body Sensor Networks, Cambridge, MA, USA, 6–9 May 2013; pp. 1–5. [Google Scholar]
  25. Gerencsér, L.; Vásárhelyi, G.; Nagy, M.; Vicsek, T.; Miklósi, A. Identification of behaviour in freely moving dogs (Canis familiaris) using inertial sensors. PLoS ONE 2013, 8, e77814. [Google Scholar] [CrossRef] [Green Version]
  26. Ahn, J.; Kwon, J.; Nam, H.; Jang, H.-K.; Kim, J.-I. Pet Buddy: A wearable device for canine behavior recognition using a single IMU. In Proceedings of the 2016 International Conference on Big Data and Smart Computing (BigComp), Hong Kong, China, 18–20 January 2016; pp. 419–422. [Google Scholar]
  27. Zhan, X.; Huang, Q.; Zhu, C.; Li, X.; Liu, G. A Real-Time Police Dog Action Recognition System Based on Vision and IMU Sensors. In Proceedings of the 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), London, UK, 6–10 July 2020; pp. 1–2. [Google Scholar]
  28. Kumpulainen, P.; Valldeoriola, A.; Somppi, S.; Törnqvist, H.; Väätäjä, H.; Majaranta, P.; Surakka, V.; Vainio, O.; Kujala, M.V.; Gizatdinova, Y. Dog activity classification with movement sensor placed on the collar. In Proceedings of the 5th International Conference on Animal-Computer Interaction, Atlanta, GA, USA, 4–6 December 2018; pp. 1–6. [Google Scholar]
  29. Kiyohara, T.; Orihara, R.; Sei, Y.; Tahara, Y.; Ohsuga, A. Activity recognition for dogs based on time-series data analysis. In Proceedings of the International Conference on Agents and Artificial Intelligence, Lisbon, Portugal, 10–12 January 2015; pp. 163–184. [Google Scholar]
  30. Griffies, J.D.; Zutty, J.; Sarzen, M.; Soorholtz, S. Wearable sensor shown to specifically quantify pruritic behaviors in dogs. BMC Vet. Res. 2018, 14, 124. [Google Scholar] [CrossRef]
  31. Aich, S.; Chakraborty, S.; Sim, J.-S.; Jang, D.-J.; Kim, H.-C. The Design of an Automated System for the Analysis of the Activity and Emotional Patterns of Dogs with Wearable Sensors Using Machine Learning. Appl. Sci. 2019, 9, 4938. [Google Scholar] [CrossRef] [Green Version]
  32. Mundell, P.; Liu, S.; Guérin, N.A.; Berger, J.M. An automated behavior shaping intervention reduces signs of separation anxiety-related distress in a mixed-breed dog. J. Vet. Behav. 2020, 37, 71–75. [Google Scholar] [CrossRef]
  33. Chambers, R.D.; Yoder, N.C.; Carson, A.B.; Junge, C.; Allen, D.E.; Prescott, L.M.; Bradley, S.; Wymore, G.; Lloyd, K.; Lyle, S. Deep learning classification of canine behavior using a single collar-mounted accelerometer: Real-world validation. Animals 2021, 11, 1549. [Google Scholar] [CrossRef]
  34. Arce-Lopera, C.; Diaz-Cely, J.; García, P.; Morales, M. Technology-Enhanced Training System for Reducing Separation Anxiety in Dogs. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 July 2019; pp. 434–439. [Google Scholar]
  35. Hirskyj-Douglas, I.; Pons, P.; Read, J.C.; Jaen, J. Seven Years after the Manifesto: Literature Review and Research Directions for Technologies in Animal Computer Interaction. Multimodal Technol. Interact. 2018, 2, 30. [Google Scholar] [CrossRef] [Green Version]
  36. Graves, A.; Mohamed, A.-R.; Hinton, G. Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 6645–6649. [Google Scholar]
  37. Hermans, M.; Schrauwen, B. Training and analysing deep recurrent neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 190–198. [Google Scholar]
  38. Cugola, G.; Margara, A. Processing Flows of Information: From Data Stream to Complex Event Processing. ACM Comput. Surv. 2012, 44, 15. [Google Scholar] [CrossRef]
  39. Akbar, A.; Chaudhry, S.S.; Khan, A.; Ali, A.; Rafiq, W. On Complex Event Processing for Internet of Things. In Proceedings of the 2019 IEEE 6th International Conference on Engineering Technologies and Applied Sciences (ICETAS), Kuala Lumpur, Malaysia, 20–21 December 2019; pp. 1–7. [Google Scholar]
  40. Medjahed, H.; Istrate, D.; Boudy, J.; Dorizzi, B. Human activities of daily living recognition using fuzzy logic for elderly home monitoring. In Proceedings of the 2009 IEEE International Conference on Fuzzy Systems, Jeju, Korea, 20–24 August 2009; pp. 2001–2006. [Google Scholar]
  41. Yao, W.; Chu, C.-H.; Li, Z. Leveraging complex event processing for smart hospitals using RFID. J. Netw. Comput. Appl. 2011, 34, 799–810. [Google Scholar] [CrossRef]
  42. Cannas, S.; Frank, D.; Minero, M.; Aspesi, A.; Benedetti, R.; Palestrini, C. Video analysis of dogs suffering from anxiety when left home alone and treated with clomipramine. J. Vet. Behav.-Clin. Appl. Res. 2014, 9, 50–57. [Google Scholar] [CrossRef]
  43. Liu, L.; Peng, Y.; Liu, M.; Huang, Z. Sensor-based human activity recognition system with a multilayered model using time series shapelets. Knowl.-Based Syst. 2015, 90, 138–152. [Google Scholar] [CrossRef]
  44. Brugarolas, R.; Roberts, D.; Sherman, B.; Bozkurt, A. Posture estimation for a canine machine interface based training system. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 4489–4492. [Google Scholar]
  45. Triboan, D.; Chen, L.; Chen, F.; Wang, Z. Semantic segmentation of real-time sensor data stream for complex activity recognition. Pers. Ubiquitous Comput. 2017, 21, 411–425. [Google Scholar] [CrossRef]
  46. Ullah, M.; Ullah, H.; Khan, S.D.; Cheikh, F.A. Stacked lstm network for human activity recognition using smartphone data. In Proceedings of the 2019 8th European Workshop on Visual Information Processing (EUVIP), Roma, Italy, 28–31 October 2019; pp. 175–180. [Google Scholar]
  47. Han, J.; Kamber, M.; Pei, J. Data mining concepts and techniques third edition. Morgan Kaufmann Ser. Data Manag. Syst. 2011, 5, 83–124. [Google Scholar]
  48. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157. [Google Scholar]
  49. Chen, Y.; Zhong, K.; Zhang, J.; Sun, Q.; Zhao, X. LSTM networks for mobile human activity recognition. In Proceedings of the 2016 International Conference on Artificial Intelligence: Technologies and Applications, Bangkok, Thailand, 24–25 January 2016; pp. 50–53. [Google Scholar]
  50. Hong, M.; Ahn, H.; Atif, O.; Lee, J.; Park, D.; Chung, Y. Field-Applicable Pig Anomaly Detection System Using Vocalization for Embedded Board Implementations. Appl. Sci. 2020, 10, 6991. [Google Scholar] [CrossRef]
  51. Choi, Y.; Atif, O.; Lee, J.; Park, D.; Chung, Y. Noise-robust sound-event classification system with texture analysis. Symmetry 2018, 10, 402. [Google Scholar] [CrossRef] [Green Version]
  52. Kim, D.Y.; Lee, S.H.; Jeong, G.M. Stack LSTM-Based User Identification Using Smart Shoes with Accelerometer Data. Sensors 2021, 21, 8129. [Google Scholar] [CrossRef]
  53. Zhang, M.; Guo, J.; Li, X.; Jin, R. Data-driven anomaly detection approach for time-series streaming data. Sensors 2020, 20, 5646. [Google Scholar] [CrossRef]
  54. Smith, J.M.; Smith, D.C. Database abstractions: Aggregation and generalization. ACM Trans. Database Syst. 1977, 2, 105–133. [Google Scholar] [CrossRef]
  55. Nowak-Brzezińska, A.; Wakulicz-Deja, A. Exploration of rule-based knowledge bases: A knowledge engineer’s support. Inf. Sci. 2019, 485, 301–318. [Google Scholar] [CrossRef]
  56. Khanna, G.; Cheng, M.Y.; Varadharajan, P.; Bagchi, S.; Correia, M.P.; Veríssimo, P.J. Automated rule-based diagnosis through a distributed monitor system. IEEE Trans. Dependable Secur. Comput. 2007, 4, 266–279. [Google Scholar] [CrossRef]
  57. Liang, Y.; Lee, J.; Hong, B.; Kim, W. Rule-based Complex Event Processing on Tactical Moving Objects. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Chengdu, China, 7–10 December 2018; pp. 2585–2589. [Google Scholar]
  58. Etzion, O.; Niblett, P. Event Processing in Action; Manning Publications Co.: Shelter Island, NY, USA, 2010; pp. 5–10. [Google Scholar]
  59. Ku, T.; Zhu, Y.L.; Hu, K.Y.; Lv, C.X. A novel pattern for complex event processing in rfid applications. In Enterprise Interoperability III; Springer: Berlin/Heidelberg, Germany, 2008; pp. 595–607. [Google Scholar]
  60. Buchmann, A.; Koldehofe, B. Complex event processing. It-Inf. Technol. 2009, 51, 241. [Google Scholar] [CrossRef]
  61. Stoa, S.; Lindeberg, M.; Goebel, V. Online analysis of myocardial ischemia from medical sensor data streams with esper. In Proceedings of the 2008 First International Symposium on Applied Sciences on Biomedical and Communication Technologies, Aalborg, Denmark, 25–28 October 2008; pp. 1–5. [Google Scholar]
  62. McCrave, E.A. Diagnostic-Criteria for Separation Anxiety in the Dog. Vet. Clin. N. Am.-Small Anim. Pract. 1991, 21, 247–255. [Google Scholar] [CrossRef]
  63. Awotunde, J.B.; Matiluko, O.E.; Fatai, O.W. Medical diagnosis system using fuzzy logic. Afr. J. Comput. ICT 2014, 7, 99–106. [Google Scholar]
  64. Ahmadi, H.; Gholamzadeh, M.; Shahmoradi, L.; Nilashi, M.; Rashvand, P. Diseases diagnosis using fuzzy logic methods: A systematic and meta-analysis review. Comput. Methods Programs Biomed. 2018, 161, 145–172. [Google Scholar] [CrossRef]
  65. Flannigan, G.; Dodman, N.H. Risk factors and behaviors associated with separation anxiety in dogs. J. Am. Vet. Med. Assoc. 2001, 219, 460–466. [Google Scholar] [CrossRef] [Green Version]
  66. Dernoncourt, F. Introduction to fuzzy logic. Mass. Inst. Technol. 2013, 21, 14–15. [Google Scholar]
  67. Pena-Reyes, C.A.; Sipper, M. A fuzzy-genetic approach to breast cancer diagnosis. Artif. Intell. Med. 1999, 17, 131–155. [Google Scholar] [CrossRef]
  68. Mendel, J.M. Fuzzy logic systems for engineering: A tutorial. Proc. IEEE 1995, 83, 345–377. [Google Scholar] [CrossRef] [Green Version]
  69. LP-RESEARCH. Available online: https://lp-research.com (accessed on 6 January 2022).
  70. TensorFlow. Available online: https://www.tensorflow.org (accessed on 30 December 2021).
  71. Esper. Available online: https://www.espertech.com/esper (accessed on 30 December 2021).
  72. Plotly. Available online: https://plotly.com/dash (accessed on 30 December 2021).
  73. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
  74. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  75. Rashid, K.M.; Louis, J. Times-series data augmentation and deep learning for construction equipment activity recognition. Adv. Eng. Inform. 2019, 42, 100944. [Google Scholar] [CrossRef]
  76. Shrestha, A.; Dang, J. Deep learning-based real-time auto classification of smartphone measured bridge vibration data. Sensors 2020, 20, 2710. [Google Scholar] [CrossRef]
Figure 1. Proposed dog monitoring system architecture detects separation anxiety symptomatic complex behaviors and primarily focuses on ‘Excessive destructive behavior’, ‘Excessive exploratory behaviors’, and ‘Excessive vocalization’.
Figure 2. Structure of two parallel stacked Long Short-Term Memory (LSTM) networks for dog head and body posture recognition.
Figure 3. Abstraction hierarchy for dog separation anxiety-related complex behaviors detection.
Figure 4. Complex Event Processing (CEP) hierarchy structure for dog behavior monitoring.
Figure 5. Fuzzy logic function structure of dog monitoring system.
Figure 6. Membership functions: (a) system input is complex behavior frequency (vocalization, exploratory, and destructive) with fuzzy sets {Seldom (red), Consistent (blue), Most (green)}; (b) output is diagnosis index with fuzzy sets {Normal (blue), Abnormal (red)}.
Figure 7. Examples of experimental areas for data collection. (a) Example of experiment conducted in laboratory; (b,c) examples of experiments conducted in owners’ apartments.
Figure 8. Visualized examples of each time-series data type (50 Hz).
Figure 9. Screenshot of OpenMAT software used to capture tri-axial accelerometer signals.
Figure 10. Web application of proposed dog monitoring system.
Table 1. Dog monitoring system hierarchy and activity definitions. (Types: (M)—Motion; (P)—Pose).

| Level | Category | Name (Type) | Description | Related Lower-Level Activity | Observation Time |
|---|---|---|---|---|---|
| Level 1 | Head posture | Up (P) | Head is higher than the shoulders and body. | - | 1 s |
| | | Down (P) | Head is lower than shoulders and body. | - | |
| | | Bark (M) | Bark movement. | - | |
| | Body posture | Walk (M) | Gait motion. | - | |
| | | Lie (P) | Side of the dog is in contact with the ground. | - | |
| | | Sit (P) | Haunches are on the ground, and elbows are not in contact with the environment. | - | |
| | | Stand (P) | All feet are on the ground without moving. | - | |
| | | Dig (M) | Forelegs consecutively or concurrently move with each other. | - | |
| | | Jump (M) | Both of the dog’s forelegs or all legs leave the ground. | - | |
| Level 2 | Atomic behavior | Sniffing | Head downwards and close to the floor, while the dog is walking or standing. | Walk, Stand, Head down | 2 s |
| | | Escaping | Repetitive jumps represent an attempt of escape. | Jump | |
| | | Barking | Repetitive barks. | Bark | |
| | | Walking | Walk for more than 1 s. | Walk | |
| | | Lying | Lie for more than 1 s. | Lie | |
| | | Sitting | Sit for more than 1 s. | Sit | |
| | | Standing | Stand for more than 1 s. | Stand | |
| | | Digging | Dig for more than 1 s. | Dig | |
| Level 3 | Symptomatic complex behavior | Excessive destructive behavior | The dog is digging at a high frequency, possibly attempting to escape from exit points. | Escaping, Digging | 15 s |
| | | Excessive exploratory behavior | The dog is walking around in the house, sniffing at different objects, and nosing at and around the door, with a high frequency. | Walking, Sniffing | |
| | | Excessive vocalization | The dog is repetitively barking, howling, or whining for a long time. | Multiple barking | |
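Read as a specification, Table 1 maps each higher-level activity to its constituent lower-level activities and the observation window in which they must occur. A minimal sketch of that mapping as a Python lookup structure (the representation is illustrative, not the paper's implementation):

```python
# Level-2 and Level-3 definitions from Table 1, expressed as
# (constituent lower-level activities, observation window in seconds).
ATOMIC_BEHAVIORS = {
    "Sniffing":  ({"Walk", "Stand", "Head down"}, 2),
    "Escaping":  ({"Jump"}, 2),
    "Barking":   ({"Bark"}, 2),
    "Walking":   ({"Walk"}, 2),
    "Lying":     ({"Lie"}, 2),
    "Sitting":   ({"Sit"}, 2),
    "Standing":  ({"Stand"}, 2),
    "Digging":   ({"Dig"}, 2),
}
COMPLEX_BEHAVIORS = {
    "Excessive destructive behavior": ({"Escaping", "Digging"}, 15),
    "Excessive exploratory behavior": ({"Walking", "Sniffing"}, 15),
    "Excessive vocalization":         ({"Barking"}, 15),
}
```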
Table 2. Recent automatic recognition research of dog activities (published between 2009 and 2021). (Types: (D)—Disease-related behavior).

| Level | Sensors | Location | Technique | Target | Ref. |
|---|---|---|---|---|---|
| 1 | Accelerometer | Back | Pose estimation algorithm | Body posture | [19] |
| | Camera | Ceiling | Semisupervised approach | Body posture | [20] |
| | Accelerometer | Neck, back | Knowledge engineering approach | Body posture | [21] |
| | Gyroscope | Neck | Rule-based approach | Head posture | [22] |
| 2 | Accelerometer | Neck | Neural Networks (NN), Instance-based learning (IBk), Random Forest (RF) | Atomic behavior | [23] |
| | Accelerometer, gyroscope | Body | Decision Tree (DT), Hidden Markov Model (HMM) | Atomic behavior | [24] |
| | Accelerometer, gyroscope | Back | Support Vector Machine (SVM) | Atomic behavior | [25] |
| | Accelerometer, gyroscope | Neck | Not specified | Atomic behavior | [26] |
| | Camera, accelerometer, angular velocity | Neck, back, thigh, waist | SVM | Atomic behavior | [27] |
| | Accelerometer | Neck | Linear and quadratic discriminant analysis | Atomic behavior | [28] |
| | Accelerometer | Neck | K-Nearest Neighbor (KNN) | Atomic behavior (D) | [17] |
| | Accelerometer | Neck | Dynamic Time Warping (DTW) | Atomic behavior (D) | [29] |
| | Accelerometer | Neck | Rule-based bio-inspired approach | Pruritic behavior (D) | [30] |
| | Accelerometer, gyroscope | Neck, tail | Artificial Neural Network (ANN), Naïve Bayes (NB), RF, SVM, KNN | Atomic behavior and emotion | [31] |
| | Microphone, camera | Not specified | Convolutional Neural Network (CNN) | Reducing separation anxiety (D) | [32] |
| | Accelerometer | Neck | Machine learning (not specified) | Atomic behavior (D) | [33] |
Table 3. Event constructors for dog behavior detection. (The And, Or, and Follow symbols were lost in extraction and are reconstructed here with the standard ∧, ∨, and → notation; the repeat operator is written textually as Repeat( ).)

| Constructor | Symbol | Expression | Meaning |
|---|---|---|---|
| And | ∧ | E1 ∧ E2 | Conjunction of events E1 and E2 |
| Or | ∨ | E1 ∨ E2 | Disjunction of events E1 and E2 |
| Repeat | Repeat( ) | Repeat(E1) | Repetition of E1 events |
| Follow | → | E1 → E2 | E1 occurs followed by E2 |
| Count | C( ) | C(E1) | Calculation of the frequency of E1 |
| Window | Win( ) | Win(t) | Observation time interval t |
| Fuzzy | F( ) | F(E1) | Fuzzy logic calculation of E1 |
Table 4. Event processing pattern rules expression for dog behavior detection.

| EPN | Rule Type | CEP Rule Definition | Example |
|---|---|---|---|
| Atomic Behavior EPN | C1 | In a two-second observation time interval, the state maintains the same posture P without any change. | Digging: Repeat(Dig) Win(2 s) |
| | C2 | In a two-second observation time interval, P1 occurs followed by P2. | Sniffing: (Walk → Head down) Win(2 s) |
| Complex Behavior EPN | A | In a 15-s observation time interval, count the total frequency of a1 and a2 occurrences and calculate the fuzzy function result. | Excessive Exploratory: F(C(Walking ∨ Sniffing) Win(15 s)) = Abnormal |
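The pattern rules in Table 4 can be approximated outside a CEP engine as windowed scans over a time-stamped posture stream. The sketch below illustrates the C1 and C2 semantics in plain Python; the system itself uses Esper [71], so the event representation and timestamps here are assumptions for illustration only.

```python
from collections import deque

def detect_c1(posture_stream, posture, window=2.0):
    """Rule C1 (Table 4): the same posture is maintained for a full
    `window`-second interval, e.g. Repeat(Dig) Win(2 s) -> Digging.
    `posture_stream` yields (timestamp_seconds, posture_label) pairs."""
    start = None
    for t, p in posture_stream:
        if p != posture:
            start = None
            continue
        if start is None:
            start = t
        if t - start >= window:
            yield t          # detection time of the atomic behavior
            start = None     # restart matching after each detection

def detect_c2(posture_stream, first, second, window=2.0):
    """Rule C2 (Table 4): `first` occurs followed by `second` within
    the window, e.g. (Walk -> Head down) Win(2 s) -> Sniffing."""
    recent_first = deque()   # timestamps of recent `first` postures
    for t, p in posture_stream:
        if p == first:
            recent_first.append(t)
        elif p == second:
            while recent_first and t - recent_first[0] > window:
                recent_first.popleft()   # expire out-of-window events
            if recent_first:
                yield t
                recent_first.clear()

stream = [(0.0, "Walk"), (0.5, "Walk"), (1.2, "Head down")]
print(list(detect_c2(iter(stream), "Walk", "Head down")))  # -> [1.2]
```

Rule A then applies C( ) to count the atomic behaviors these generators emit inside a 15 s window and passes the count to the fuzzy function F( ) described next.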
Table 5. Fuzzy matrix for dog psychological separation anxiety symptoms monitoring.

| Diagnosis Index | Seldom | Consistent | Most |
|---|---|---|---|
| Destructive Behavior | Normal | Abnormal | Abnormal |
| Exploratory Behavior | Normal | Abnormal | Abnormal |
| Vocalization | Normal | Abnormal | Abnormal |
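Combining Figure 6 and Table 5, the fuzzy step F( ) works roughly as follows: the 15 s frequency count is fuzzified into Seldom/Consistent/Most memberships, the matrix above maps Seldom to Normal and Consistent or Most to Abnormal, and the stronger aggregate determines the diagnosis index. A minimal sketch, assuming triangular membership functions with made-up breakpoints (the paper's actual parameters are those plotted in Figure 6):

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b (illustrative)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def diagnose(frequency):
    """Fuzzy diagnosis of one complex-behavior frequency (Table 5).
    All membership breakpoints below are assumed for illustration."""
    seldom     = tri(frequency, -1, 0, 4)
    consistent = tri(frequency, 2, 6, 10)
    most       = tri(frequency, 8, 12, 100)
    # Table 5: Seldom -> Normal; Consistent or Most -> Abnormal.
    normal   = seldom
    abnormal = max(consistent, most)
    # Compare rule strengths (a crude stand-in for the centroid
    # defuzzification a full Mamdani system would use).
    return "Abnormal" if abnormal > normal else "Normal"

print(diagnose(1))   # low frequency  -> Normal
print(diagnose(9))   # high frequency -> Abnormal
```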
Table 6. Basic information of subject dogs.

| Serial | Size | Name | Breed | Age (Years) |
|---|---|---|---|---|
| 1 | Small | Kimi | Maltese | 0.5 |
| 2 | Small | Prince | Papillon | 9 |
| 3 | Small | Doudou | Mix | 1 |
| 4 | Small | Tufei | Mix | 1.5 |
| 5 | Small | Lili | Papillon | 7 |
| 6 | Medium | Coco | Mix | 4 |
| 7 | Medium | Puding | Mix | 0.5 |
| 8 | Large | Coffee | Mix | 7 |
Table 7. Total duration of each activity.

| Level | Category | Activity | Total Duration |
|---|---|---|---|
| Level 1 | Head posture | Bark | 10.7 min |
| | | Head down | 18.4 min |
| | | Head up | 33.8 min |
| | Body posture | Dig | 13.3 min |
| | | Jump | 11.5 min |
| | | Lie | 12.0 min |
| | | Sit | 11.0 min |
| | | Stand | 18.9 min |
| | | Walk | 20.4 min |
| Level 2 | Atomic behavior | Sniffing | 10.0 min |
| | | Escaping | 8.5 min |
| | | Barking | 8.4 min |
| | | Walking | 12.3 min |
| | | Lying | 8.0 min |
| | | Sitting | 6.8 min |
| | | Standing | 12.3 min |
| | | Digging | 9.3 min |
| Level 3 | Symptomatic complex behavior | Destructive behavior | 48.5 min |
| | | Exploratory behavior | 72.3 min |
| | | Vocalization | 25.0 min |
Table 8. Overall precision, recall, and F1-score of Level-1 postures (two-layer stacked LSTM).

| Level | Category | Posture | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Level 1 | Head posture | Bark | 0.944 | 0.904 | 0.922 |
| | | Head down | 0.996 | 0.998 | 0.997 |
| | | Head up | 0.914 | 0.946 | 0.929 |
| | Body posture | Dig | 0.894 | 0.889 | 0.889 |
| | | Jump | 0.879 | 0.878 | 0.876 |
| | | Lie | 0.990 | 0.991 | 0.990 |
| | | Sit | 0.988 | 0.994 | 0.992 |
| | | Stand | 0.963 | 0.967 | 0.975 |
| | | Walk | 0.962 | 0.947 | 0.954 |
| | Average | | 0.948 | 0.946 | 0.947 |
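All metrics in Tables 8–13 are the standard precision, recall, and F1-score [73]. For quick reference, a small helper that reproduces the per-class scores from raw counts (the counts in the usage line are hypothetical, not taken from the paper's data):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts, following the definitions in [73]."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one posture class:
p, r, f1 = prf1(tp=94, fp=6, fn=10)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```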
Table 9. Comparison of Level-1 posture identification performance (F1-score).

| Category | Posture | Proposed Method | SVM | NB |
|---|---|---|---|---|
| Head posture | Bark | 0.922 | 0.856 | 0.665 |
| | Head down | 0.997 | 0.853 | 0.719 |
| | Head up | 0.929 | 0.990 | 0.978 |
| Body posture | Dig | 0.889 | 0.969 | 0.935 |
| | Jump | 0.876 | 0.950 | 0.919 |
| | Lie | 0.990 | 0.996 | 0.996 |
| | Sit | 0.992 | 0.678 | 0.644 |
| | Stand | 0.975 | 0.746 | 0.674 |
| | Walk | 0.954 | 0.976 | 0.970 |
| Average | | 0.947 | 0.890 | 0.833 |
Table 10. Overall precision, recall, and F1-score of Level-2 atomic behaviors (stacked LSTM + CEP).

| Level | Category | Num. | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Level 2 | Sniffing | 152 | 0.909 | 0.921 | 0.915 |
| | Escaping | 105 | 0.920 | 0.981 | 0.949 |
| | Barking | 101 | 0.876 | 0.842 | 0.859 |
| | Walking | 220 | 0.980 | 0.891 | 0.933 |
| | Lying | 90 | 0.987 | 0.844 | 0.910 |
| | Sitting | 55 | 0.981 | 0.946 | 0.963 |
| | Standing | 218 | 0.906 | 0.844 | 0.874 |
| | Digging | 129 | 1.000 | 0.822 | 0.902 |
| | Average | | 0.945 | 0.886 | 0.915 |
Table 11. Comparison of Level-2 atomic behaviors identification performance (F1-score).

| Category | Proposed Method | SVM | DT | NB |
|---|---|---|---|---|
| Sniffing | 0.915 | 0.794 | 0.869 | 0.757 |
| Escaping | 0.949 | 0.824 | 0.821 | 0.667 |
| Barking | 0.859 | 0.833 | 0.745 | 0.672 |
| Walking | 0.933 | 0.951 | 0.948 | 0.914 |
| Lying | 0.910 | 0.909 | 0.953 | 0.931 |
| Sitting | 0.963 | 0.672 | 0.931 | 0.657 |
| Standing | 0.874 | 0.721 | 0.926 | 0.564 |
| Digging | 0.902 | 0.907 | 0.917 | 0.919 |
| Average | 0.915 | 0.827 | 0.889 | 0.760 |
Table 12. Overall precision, recall, and F1-score of Level-3 complex behaviors (stacked LSTM + Fuzzy-CEP).

| Level | Category | Class | Num. | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|
| Level 3 | Destructive Behavior | Abnormal | 91 | 0.888 | 0.868 | 0.878 |
| | | Normal | 61 | 0.810 | 0.836 | 0.823 |
| | Exploratory Behavior | Abnormal | 168 | 0.940 | 0.929 | 0.934 |
| | | Normal | 63 | 0.815 | 0.841 | 0.828 |
| | Vocalization Behavior | Abnormal | 54 | 0.891 | 0.907 | 0.899 |
| | | Normal | 30 | 0.828 | 0.800 | 0.814 |
| | Average | | | 0.862 | 0.864 | 0.863 |
Table 13. Comparison of Level-3 complex behaviors identification performance (F1-score).

| Level | Category | Class | Proposed Method | SVM | DT | RF |
|---|---|---|---|---|---|---|
| Level 3 | Destructive Behavior | Abnormal | 0.878 | 0.859 | 0.878 | 0.882 |
| | | Normal | 0.823 | 0.736 | 0.748 | 0.760 |
| | Exploratory Behavior | Abnormal | 0.934 | 0.706 | 0.630 | 0.561 |
| | | Normal | 0.828 | 0.523 | 0.537 | 0.500 |
| | Vocalization Behavior | Abnormal | 0.899 | 0.493 | 0.667 | 0.608 |
| | | Normal | 0.814 | 0.611 | 0.690 | 0.652 |
| | Average | | 0.863 | 0.655 | 0.692 | 0.660 |