Article

Bioinspired Garra Rufa Optimization-Assisted Deep Learning Model for Object Classification on Pedestrian Walkways

1 Department of Financial Information Security, Kookmin University, Seoul 02707, Republic of Korea
2 Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
3 Big Data and Machine Learning Lab, South Ural State University, Chelyabinsk 454080, Russia
4 College of IBS, National University of Science and Technology, MISiS, Moscow 119049, Russia
5 Department of Convergence Science, Kongju National University, Gongju-si 32588, Chungcheongnam-do, Republic of Korea
6 Basic Science Research Institution, Kongju National University, Gongju-si 32588, Chungcheongnam-do, Republic of Korea
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(7), 541; https://doi.org/10.3390/biomimetics8070541
Submission received: 12 September 2023 / Revised: 14 October 2023 / Accepted: 2 November 2023 / Published: 11 November 2023
(This article belongs to the Special Issue Biomimetic Techniques for Optimization Problems in Engineering)

Abstract
Object detection in pedestrian walkways is a crucial area of research that is widely used to improve pedestrian safety. Manually examining and labeling abnormal actions is both challenging and tedious, owing to the broad deployment of video surveillance systems and the large number of videos captured. Thus, an automatic surveillance system that identifies anomalies has become indispensable for computer vision (CV) researchers. Recent advancements in deep learning (DL) algorithms have attracted wide attention for CV processes such as object detection and object classification based on supervised learning, which requires labels. The current research study designs a bioinspired Garra rufa optimization-assisted deep learning model for object classification (BGRODL-OC) on pedestrian walkways. The objective of the BGRODL-OC technique is to recognize the presence of pedestrians and objects in surveillance video. To achieve this goal, the BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. In addition, the BGRODL-OC technique makes use of the GRO algorithm for the hyperparameter tuning process. Finally, object classification is performed via an attention-based long short-term memory (ALSTM) network. A wide range of experimental analyses was conducted to validate the performance of the BGRODL-OC technique, and the experimental values established the superiority of the BGRODL-OC algorithm over other existing approaches.

1. Introduction

Recent technological developments such as computer vision (CV) and surveillance (CCTV) cameras have been utilized to protect pedestrians and support safer walking practices. For this purpose, the characteristics of risk constituents must be identified to protect pedestrians from accidents [1]. Numerous CV techniques have been developed that focus on processes such as activity learning, feature extraction, data acquisition, behavioral learning, and scene learning. The main objective of such techniques is to support operations such as video processing, traffic monitoring, scene identification, multicamera-based challenges and methods, human behavior learning, activity analysis, vehicle prediction and monitoring, and anomaly prediction [2]. The current study focuses on anomaly forecasting, a subfield of behavioral learning from captured visual images. Anomaly prediction methods learn normal behavior through a training process; significant deviations from this standard behavior are defined as "anomalous" [3]. Specific instances of anomalies include cross-walking, the presence of vehicles on footpaths, individuals collapsing while walking, signal avoidance at traffic junctions, vehicles making U-turns at red signals, and the unexpected dispersal of people in a crowd [4].
Pedestrian detection involves the automated identification of the walking persons from the information gathered from video and image sequences as well as the accurate location of the pedestrian region [5]. However, pedestrians can be identified as nonrigid objects in difficult environments, in various positions, under varying light conditions, and with changing levels of occlusion in real road situations. These scenarios increase the difficulty of accurately identifying the pedestrians [6]. With the fast growth of artificial intelligence (AI) technology, pedestrian identification has become a significant research area in CV. Pedestrian identification research approaches have been commonly categorized into two types, namely, conventional and deep learning (DL)-based identification techniques [7].
The DL technique is an advanced domain in the machine learning (ML) field that aims to learn complex models from modest representations. DL algorithms commonly depend on artificial neural networks (ANNs) that contain numerous hidden layers with nonlinear processing components [8]. The term "deep" corresponds to the presence of several hidden layers that are employed to transform the representation of the data. By applying the idea of feature learning, each hidden layer of the neural network maps its input data to a new representation [9]. Each layer thereby attains a higher level of abstraction than the representation in the preceding layer. In DL frameworks, this hierarchy of feature learning at numerous levels is ultimately mapped to the output of the ML technique in a single architecture [10]. Like ML algorithms, DL approaches can be categorized into two classes, namely, unsupervised and supervised learning techniques, comprising deep neural networks (DNNs).
The current research paper outlines the design of the bioinspired Garra rufa optimization-assisted deep learning model for object classification (BGRODL-OC) technique on pedestrian walkways. The BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. Moreover, the BGRODL-OC technique makes use of the GRO algorithm in the hyperparameter tuning process. Finally, the object classification process is performed using the attention-based long short-term memory (ALSTM) network. A wide range of experimental analyses was conducted to validate the superior performance of the BGRODL-OC method. In short, the key contributions of the paper are summarized below.
  • An effective BGRODL-OC technique is developed in this study, comprising GhostNet feature extraction, GRO-based hyperparameter tuning, and ALSTM-based classification for pedestrian walkway detection. To the best of the authors’ knowledge, the BGRODL-OC technique has never been reported in the literature.
  • The GhostNet model is developed to produce a collection of feature vectors. This model is known for its efficiency and effectiveness in deep-learning-based image analysis and in improving the accuracy of object detection.
  • The GRO algorithm is employed for the hyperparameter tuning process, which helps in fine-tuning the model’s parameters to improve its performance in object classification.
  • The ALSTM model is presented for the object classification process, which can capture long-term dependencies in video data. The attention mechanism enhances the model’s ability to focus on relevant information, thus further improving the accuracy.

2. Related Works

Abdullah and Jalal [11] presented a new technique using the DL framework and conditional random field (CRF). In this study, the preprocessing was executed primarily, while the superpixels were produced secondarily, utilizing enhanced watershed transform. Then, the objects were segmented using a CRF. The relevant field was localized by employing the conditional probability while a temporal relationship was applied to find the areas. At last, a DL-based hierarchical network method was exploited for identification and classification of the objects. In [12], the authors proposed the automatic DL-based anomaly detection technology in pedestrian walkways (DLADT-PW) technique for susceptible transport user’s protection. The suggested technique comprised preprocessing as the main phase to be implemented for eliminating the noise and increasing the quality of the image. Similarly, the mask region convolutional neural network (Mask-RCNN) with DenseNet technique was utilized for identifying the operations. Harrou et al. [13] developed an innovative deep hybrid learning approach with a completely-directed attention module. The presented technique increased the modeling ability of the variational autoencoder (VAE) by combining it with the LSTM algorithm and employing a self-attention module at multiple phases of the VAE method.
Al Sulaie [14] introduced a novel golden jackal optimization with DL-based anomaly detection in pedestrian walkways (GJODL-ADPW). In the developed GJODL-ADPW method, the Xception technique was utilized for efficient feature extraction, and the GJO technique was employed for optimal selection of the hyperparameters. Lastly, the bidirectional LSTM (Bi-LSTM) methodology was implemented to detect the anomalies. Jayasuriya [15] suggested a method based on the convolutional neural network (CNN) approach. In this study, localization was performed on a predesigned map, and an extended Kalman filter (EKF) combined the resulting annotations. In addition, an omnidirectional camera was added to enhance the effective field of view (FoV) of the landmark detection method, and an information-theoretic approach was exploited to select a better viewpoint. Alia et al. [16] designed a hybrid DL technique along with a visualization model. This architecture had two key mechanisms: firstly, deep optical flow and wheel visualization were utilized to produce motion data maps; secondly, a false-reduction method and an EfficientNet-B0-based classifier were incorporated.
Kolluri and Das [17] implemented a three-phase technique employing a hybrid metaheuristic optimizer with DL (IPDC-HMODL). Primarily, the IPDC-HMODL approach employed multiple-modality object detectors based on the RetinaNet and YOLO-v5 frameworks. Secondarily, the IPDC-HMODL technique implemented the kernel extreme learning machine (KELM) method for the classification of the pedestrians. Lastly, the hybrid salp swarm optimization (HSSO) method was utilized to optimally adapt the parameters. Alsolai et al. [18] introduced an innovative sine cosine algorithm with DL-based anomaly detection in pedestrian walkways (SCADL-ADPW). This approach employed the VGG-16 framework to produce the feature vectors. In addition, the SCA algorithm was applied for optimal hyperparameter tuning. In this study, the LSTM approach was exploited for anomaly detection.

3. The Proposed Model

In the current manuscript, an automatic object classification technique on pedestrian walkways termed BGRODL-OC method is developed. The objective of the BGRODL-OC algorithm is to recognize the presence of pedestrians and objects in the surveillance video. It encompasses several processes such as the GhostNet feature extractor, the GRO-based hyperparameter selection, and the ALSTM-based classification. Figure 1 illustrates the workflow of the BGRODL-OC algorithm.

3.1. Feature Extraction: GhostNet Model

In this phase, the GhostNet model is applied for the feature extraction process. The fundamental innovation of GhostNet is the Ghost module, which reduces the number of convolution kernels and the amount of computation through cheap linear transformations that generate the redundant feature maps [19]. It replaces a typical convolution with a primary convolution followed by cheap operations. The input is $X \in \mathbb{R}^{C \times H \times W}$, where $H$, $W$, and $C$ denote the height, width, and number of channels. $X$ first passes through a $1 \times 1$ primary convolution:

$$Y' = X * F_{1 \times 1} \tag{1}$$

Here, $F_{1 \times 1}$ refers to the pointwise convolution and $Y' \in \mathbb{R}^{H \times W \times C'_{\mathrm{out}}}$ represents the intrinsic features. Next, a cheap (depthwise) convolution generates further features, which are concatenated with the features produced by the primary convolution:

$$Y = \mathrm{Concat}\left(Y',\; Y' * F_{dp}\right) \tag{2}$$

In Equation (2), $F_{dp}$ refers to the depthwise convolution and $Y \in \mathbb{R}^{H \times W \times C_{\mathrm{out}}}$ indicates the output features.
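As a concrete illustration, the Ghost module of Equations (1) and (2) can be sketched in plain NumPy: a primary pointwise convolution produces the intrinsic features, and a cheap depthwise convolution expands them. The shapes, kernel size, and channel split below are illustrative assumptions, not the exact GhostNet configuration.

```python
import numpy as np

def pointwise_conv(x, w):
    # x: (C, H, W), w: (C_out, C) -> a 1x1 convolution is a channel-mixing matmul
    return np.tensordot(w, x, axes=([1], [0]))

def depthwise_conv3x3(x, k):
    # x: (C, H, W), k: (C, 3, 3), zero padding 1 -> cheap per-channel filtering
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * k[c])
    return out

def ghost_module(x, w_primary, k_cheap):
    # Eq. (1): intrinsic features from the primary 1x1 convolution
    y_prime = pointwise_conv(x, w_primary)
    # Eq. (2): concatenate intrinsic and cheap (depthwise) features
    y_cheap = depthwise_conv3x3(y_prime, k_cheap)
    return np.concatenate([y_prime, y_cheap], axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))       # C=8 input channels (illustrative)
w = rng.standard_normal((4, 8)) * 0.1      # 4 intrinsic output channels
k = rng.standard_normal((4, 3, 3)) * 0.1
y = ghost_module(x, w, k)
print(y.shape)                             # (8, 16, 16): 4 intrinsic + 4 cheap
```

Half of the output channels thus come almost for free, which is the source of GhostNet's reduced computational cost.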
Though the GhostNet model reduces the computational cost, its ability to capture spatial information is also reduced. Thus, the GhostNet-V2 model adds a decoupled fully connected (DFC) attention mechanism, built from FC layers, which has low hardware requirements. It captures the dependencies among long-distance pixels while maintaining a high inference speed. The computation of DFC attention is as follows.
Assume the input feature $Z \in \mathbb{R}^{H \times W \times C}$ is regarded as $HW$ tokens $z_i \in \mathbb{R}^{C}$, i.e., $Z = \{z_{11}, z_{12}, \ldots, z_{HW}\}$. The features are aggregated along the vertical and horizontal directions, respectively, as formulated in the following equations.

$$a'_{hw} = \sum_{h'=1}^{H} F^{H}_{h,h'w} \odot z_{h'w}, \quad h = 1,2,\ldots,H, \; w = 1,2,\ldots,W \tag{3}$$

$$a_{hw} = \sum_{w'=1}^{W} F^{W}_{w,hw'} \odot a'_{hw'}, \quad h = 1,2,\ldots,H, \; w = 1,2,\ldots,W \tag{4}$$

Here, $F^{H}$ and $F^{W}$ denote the transformation weights, $\odot$ indicates elementwise multiplication, and $A = \{a_{11}, a_{12}, \ldots, a_{HW}\}$ represents the generated attention map. The original feature $Z$ is taken as input, and the long-range dependencies along both directions are captured.

Equations (3) and (4) represent DFC attention, which aggregates the pixels along the vertical and horizontal directions of the 2D feature map, respectively. They share the transformation weights and are implemented with convolutions to increase the inference speed, avoiding time-consuming tensor reshaping operations. Two depthwise convolutions, sized $1 \times K_H$ and $K_W \times 1$, are used, independent of the feature map size, so the module adapts to input images of various resolutions.
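Under the same simplifications, DFC attention can be sketched as two successive axis-wise aggregations with shared transformation weights, followed by a sigmoid gate. The dense per-axis matrices `f_h` and `f_w` below stand in for the $1 \times K_H$ and $K_W \times 1$ depthwise convolutions of the actual module; this is an illustrative approximation, not the GhostNet-V2 implementation.

```python
import numpy as np

def dfc_attention(z, f_h, f_w):
    # z: (H, W, C) feature map; f_h: (H, H) and f_w: (W, W) shared weights
    # Vertical aggregation (Eq. 3): mix pixels along the height axis
    a_vert = np.einsum('hk,kwc->hwc', f_h, z)
    # Horizontal aggregation (Eq. 4): mix pixels along the width axis
    a = np.einsum('wk,hkc->hwc', f_w, a_vert)
    # Sigmoid gate produces the attention map A with values in (0, 1)
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
z = rng.standard_normal((6, 5, 3))          # H=6, W=5, C=3 (illustrative)
f_h = rng.standard_normal((6, 6)) * 0.1
f_w = rng.standard_normal((5, 5)) * 0.1
attn = dfc_attention(z, f_h, f_w)
print(attn.shape)                           # (6, 5, 3)
```

Each output position thus depends on an entire row and column of the input, which is how the module captures long-range pixel dependencies cheaply.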

3.2. Hyperparameter Tuning: GRO Algorithm

The current study uses the GRO algorithm to adjust the hyperparameters of the GhostNet architecture. The GRO algorithm is a procedure that employs mathematical rules to identify better solutions for a given problem [20]. The procedure starts by defining an objective function, which is normally connected to an engineering problem. Then, a group of parameters is defined and the constraints are handled to attain the required outcomes. Once these are determined, the optimization procedure begins, employing mathematical models to identify the most efficient and effective parameter values for resolving the problem. The optimization procedure is iterative, i.e., the distribution of resources is repeatedly modified to increase the performance. The GRO technique is performed in three parts: GRO initialization, the leaders’ crossover, and the followers’ crossover.
Procedure 1.
GRO initialization
The basic idea of GRO is to split the particles into several groups; each group has a unique leader representing either a global or a local optimum position. The GRO system also deploys major rules, such as the notion that all fish can act as followers, while the leaders correspond to the connected optimum points of their groups. Follower portions can switch from very weak leaders to stronger leaders who accomplish a better value before the next iteration. It is essential to initially set this maximal follower portion as a percentage. Further, the primary parameters considered are the acceleration coefficients ($c_1$, $c_2$) and the inertia weight ($\omega$). The number of followers per group is

$$\text{followers number} = \frac{\text{total number of particles} - \text{number of groups}}{\text{number of groups}} \tag{5}$$
Procedure 2.
Leaders’ crossover of GRO
The GRO approach contains two leader crossover procedures that need to be considered. The primary one involves the selection of new leaders for all the groups, whereas the secondary one involves the election of a great leader who leads the maximal number of followers. These stages serve as guiding rules that make the approach flexible and are thus a vital element of the method.
Procedure 3.
Followers’ crossover of GRO
There is a high probability of determining the optimal solution in a problem space owing to the flexible motion among the groups. Some optimization approaches are not flexible enough to move from one search region to another, which can cause stagnation in difficult problems. This problem appears because of the presence of numerous parameters and higher-order differential equations in difficult problems. The GRO model deploys a process to search a large problem space by utilizing follower crossovers among the groups. First, an arbitrarily chosen fish in each group changes to the strongest leader. Second, one step is taken in the direction of each leader by evaluating the position ($X$) and velocity ($v$), as given in Equations (6) and (7), respectively.
$$v_i^{z+1} = \omega v_i^{z} + c_1 r_1 \left(p_i^{z} - X_i^{z}\right) + c_2 r_2 \left(G_i^{z} - X_i^{z}\right) \tag{6}$$

$$X_i^{z+1} = X_i^{z} + v_i^{z+1} \tag{7}$$
The fitness function (FF) of every follower and leader in the groups is then recomputed. Equations (8) and (9) define the next phase of the GRO method.
$$\text{moving followers}_i = \mathrm{integer}\left(E \times \mathrm{random}\right) \tag{8}$$

$$\text{followers}_{ij} = \max\left(\text{followers}_{i(j-1)} - \text{moving followers}_i,\; 0\right) \tag{9}$$

Here, $E$ refers to the maximal fraction of moving fish. The pseudocode of the GRO algorithm is given in Algorithm 1.
Algorithm 1: Pseudocode of the GRO algorithm
   Select the primary values (number of particles, number of leaders, FF limits)
   followers number = n / number of leaders
   Compute the FF for the n particles and sort by FF
   While t < max iterations do
      For i = 1 to number of leaders do
         Update the follower particles of leader(i) using the optimizer update rules
      End for
      For i = 2 to number of leaders do
         mobile_fishes(i) = integer(random × E)
         followers(i) = max(0, followers(i) − mobile_fishes(i))
         total mobile_fishes = total mobile_fishes + mobile_fishes(i)
      End for
      followers(1) = followers(1) + total mobile_fishes
      Determine the sub-global best solution for every leader
      Compute the global best solution
   End while
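The follower update of Equations (6) and (7) has the familiar inertia-plus-attraction form. A minimal single-group sketch is given below, using a hypothetical sphere objective and standard coefficient values; the full GRO additionally maintains multiple leaders and the follower migration of Equations (8) and (9).

```python
import numpy as np

rng = np.random.default_rng(2)
omega, c1, c2 = 0.7, 1.5, 1.5      # inertia weight and acceleration coefficients

def gro_step(X, V, P, G):
    # Eqs. (6)-(7): velocity update toward the personal best P and the
    # group leader (best-so-far) G, followed by the position update
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V_new = omega * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)
    return X + V_new, V_new

def sphere(x):
    # Hypothetical test objective (minimize); not from the paper
    return np.sum(x ** 2, axis=1)

X = rng.uniform(-5, 5, (20, 3))    # 20 fish in a 3-D search space
V = np.zeros_like(X)
P = X.copy()                       # personal bests
G = X[np.argmin(sphere(X))].copy() # leader = current best fish
for _ in range(100):
    X, V = gro_step(X, V, P, G)
    better = sphere(X) < sphere(P)
    P[better] = X[better]
    G = P[np.argmin(sphere(P))]
print(float(sphere(G[None])[0]))   # small value near the optimum 0
```

In the hyperparameter-tuning setting, each fish position encodes a candidate hyperparameter vector and the FF of Equations (10) and (11) replaces the sphere objective.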
In the GRO approach, fitness selection is a vital factor. An encoded solution is applied to measure the goodness of a candidate solution. Here, the accuracy (precision) value is the primary criterion used to design the FF.

$$\mathrm{Fitness} = \max(P) \tag{10}$$

$$P = \frac{TP}{TP + FP} \tag{11}$$
Here, T P and F P represent the true and false positive values, respectively.
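The precision-based FF of Equations (10) and (11) reduces to a one-line computation:

```python
def fitness(tp, fp):
    # Eq. (11): precision P = TP / (TP + FP); GRO maximizes this value (Eq. 10)
    return tp / (tp + fp)

print(fitness(90, 10))   # 0.9
```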

3.3. Object Classification: ALSTM Model

The ALSTM model is used for the object classification process in this study. The underlying concept of the LSTM network is to control the data flow through gates and to utilize the memory cells (units) for storing and transferring the data [21,22]. Particularly, the LSTM network includes a memory cell along with input, forget, and output gates.
The input gate defines the amount of data that are fed into the memory units, while the forget gate decides whether to discard the prior memory. Finally, the output gate decides the output of the hidden layer. The memory units are responsible for storing and transmitting long-term information, which is updated and controlled through the gate units. Let $x_t$ be the input data at time $t$, $C_{t-1}$ the memory cell value at time $t-1$, and $h_{t-1}$ the output of the LSTM network at time $t-1$; these three quantities constitute the input information. $C_t$ corresponds to the memory cell value at time $t$ and $h_t$ to the output of the LSTM network at time $t$; these two quantities constitute the output information.
The control functions of the forget, input, and the output gates are as follows.
$$f_i^{(t)} = \sigma\left(b_i^{f} + \sum_j U_{i,j}^{f} x_j^{(t)} + \sum_j W_{i,j}^{f} h_j^{(t-1)}\right) \tag{12}$$

$$g_i^{(t)} = \sigma\left(b_i^{g} + \sum_j U_{i,j}^{g} x_j^{(t)} + \sum_j W_{i,j}^{g} h_j^{(t-1)}\right) \tag{13}$$

$$q_i^{(t)} = \sigma\left(b_i^{o} + \sum_j U_{i,j}^{o} x_j^{(t)} + \sum_j W_{i,j}^{o} h_j^{(t-1)}\right) \tag{14}$$

where $b$, $U$, and $W$ denote the bias, the input weights, and the recurrent weights of each gate (superscripts $f$, $g$, and $o$ for the forget, input, and output gates), respectively.
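A minimal NumPy sketch of one LSTM time step is shown below. The gate computations follow the equations above; the candidate-memory branch and the cell/hidden updates ($C_t = f \odot C_{t-1} + g \odot \tilde{C}$, $h_t = q \odot \tanh(C_t)$) follow the standard LSTM formulation, and all weight shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    # Gate activations as in Eqs. (12)-(14)
    f = sigmoid(W['bf'] + W['Uf'] @ x + W['Wf'] @ h_prev)   # forget gate
    g = sigmoid(W['bg'] + W['Ug'] @ x + W['Wg'] @ h_prev)   # input gate
    q = sigmoid(W['bo'] + W['Uo'] @ x + W['Wo'] @ h_prev)   # output gate
    c_tilde = np.tanh(W['bc'] + W['Uc'] @ x + W['Wc'] @ h_prev)
    c = f * c_prev + g * c_tilde        # memory cell update
    h = q * np.tanh(c)                  # hidden-layer output
    return h, c

rng = np.random.default_rng(3)
d_in, d_h = 4, 3                        # illustrative input/hidden sizes
W = {k: rng.standard_normal((d_h, d_in)) * 0.1 for k in ('Uf', 'Ug', 'Uo', 'Uc')}
W.update({k: rng.standard_normal((d_h, d_h)) * 0.1 for k in ('Wf', 'Wg', 'Wo', 'Wc')})
W.update({k: np.zeros(d_h) for k in ('bf', 'bg', 'bo', 'bc')})
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W)
print(h.shape, c.shape)                 # (3,) (3,)
```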
The attention mechanism is an important component of NN models. Its core principle is to allocate attention weights to dissimilar parts of the input data, thus reducing the role of irrelevant parts. During processing and learning tasks, this allows the model to focus on the crucial data, which eventually improves the performance. The attention weights are used to compute a context vector that captures the most relevant information from the inputs.
The relevant equations are given below.

$$\alpha_i = \frac{e^{s(h_i, h_t)}}{\sum_{j=1}^{N} e^{s(h_j, h_t)}} \tag{15}$$

$$\alpha = \sum_{i=1}^{N} \alpha_i h_i \tag{16}$$

Here, $\alpha_i$ denotes the normalized score of the $i$th feature vector, where a high score indicates great attention, and $s(h_i, h_t)$ is the alignment score of the $i$th input feature in the attention module, i.e., the ratio of the feature vector's score to that of the entire population. The weighted vectors are then summed to obtain the final context vector $\alpha$. Figure 2 defines the architecture of the ALSTM network.
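The attention pooling above can be sketched with a dot-product alignment score, one common (assumed) choice for $s(h_i, h_t)$:

```python
import numpy as np

def attention_pool(H, h_t):
    # Eq. (15): softmax over alignment scores s(h_i, h_t) = h_i . h_t (assumed form)
    scores = H @ h_t
    alpha = np.exp(scores - scores.max())   # subtract max for numerical stability
    alpha /= alpha.sum()
    # Eq. (16): attention-weighted sum of hidden states -> context vector
    return alpha, alpha @ H

H = np.array([[1.0, 0.0],                   # N=3 hidden states, dim 2
              [0.0, 1.0],
              [1.0, 1.0]])
h_t = np.array([1.0, 0.0])                  # final hidden state as the query
alpha, ctx = attention_pool(H, h_t)
print(alpha.sum())                          # 1.0: weights are normalized
```

States aligned with the query receive larger weights, so the context vector emphasizes the most relevant time steps of the LSTM output.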

4. Results and Discussion

The proposed model was simulated using Python 3.6.5 on a PC with an i5-8600K CPU, GeForce GTX 1050 Ti 4 GB GPU, 16 GB RAM, 250 GB SSD, and 1 TB HDD. The parameter settings used for the study were as follows: learning rate 0.01, dropout 0.5, batch size 5, epoch count 50, and ReLU activation. In this section, the performance of the BGRODL-OC technique is evaluated using the UCSD dataset [23], comprising images from surveillance videos. Figure 3 depicts the sample images.
Table 1 shows the comparative accuracy examination outcomes achieved by the BGRODL-OC technique on the test-004 and test-007 datasets [12,24,25]. The results infer that the MDT and FRCNN models attained ineffectual performance, whereas the CBODL-RPD, DLADT-PW, and RS-CNN methodologies exhibited significant performance. However, the BGRODL-OC technique accomplished the maximum performance on all the frames.
Table 2 shows the average accuracy analysis outcomes accomplished by the BGRODL-OC technique and other recent models on the two datasets. Figure 4 portrays the comparative average accuracy results of the BGRODL-OC method and the existing systems on the test-004 dataset. The experimental values denote that the FR-CNN and MDT systems reached the lowest average accuracy values of 87.42% and 85.17%, respectively. The DLADT-PW and RS-CNN techniques achieved moderate performance, with average accuracy values of 98.35% and 97.90%, respectively. Although the CBODL-RPD model attained a considerable accuracy of 99.06%, the BGRODL-OC technique exhibited the maximum performance, with an average accuracy of 99.32%.
Figure 5 shows the comparative average accuracy analysis outcomes of the BGRODL-OC technique and the present methods on the test-007 dataset. The experimental values indicate that the FR-CNN and MDT techniques produced the lowest average accuracy values of 80.51% and 74.61%, respectively. The DLADT-PW and RS-CNN methodologies achieved modest performance, with average accuracy values of 89.50% and 84.78%, respectively. While the CBODL-RPD algorithm attained a significant accuracy of 92.27%, the BGRODL-OC system attained the maximum performance, with an average accuracy of 93.18%.
Table 3 and Figure 6 portray the comparative TPR results of the BGRODL-OC approach on test sequence 004. The results show that the DLADT-PW and MDT models obtained poor performance. Then, the CBODL-RPD technique reported slightly decreased performance. Simultaneously, the RS-CNN and FR-CNN methods accomplished considerable results. However, the BGRODL-OC technique outperformed other models with the maximum TPR values.
Table 4 and Figure 7 portray the comparative TPR analysis outcomes of the BGRODL-OC system on test sequence 007. The observational data denote that the DLADT-PW and MDT algorithms acquired inferior performance. In addition, the CBODL-RPD approach achieved a moderately low performance. Concurrently, the RS-CNN and FR-CNN methodologies attained notable experimental outcomes. However, the BGRODL-OC system outperformed the rest of the techniques with better TPR values.
Table 5 shows the comparative area under the ROC curve (AUC) and computation time (CT) results of the BGRODL-OC technique. Figure 8 shows the comparative AUC scores achieved by the BGRODL-OC technique. The results infer that the DLADT-PW, FR-CNN, RS-CNN, and MDT systems exhibited the worst performance, with the lowest AUC scores of 89.24%, 89.88%, 90.03%, and 89.28%, respectively. Though the CBODL-RPD technique attained a slightly enhanced performance with an AUC score of 96.54%, the BGRODL-OC technique surpassed the compared methods by achieving a maximum AUC score of 97.80%.
Figure 9 shows the comparative CT outcomes achieved by the BGRODL-OC technique and other recent approaches. The results imply that the RS-CNN and MDT algorithms were the least effective, with maximum CT values of 3.19 s and 3.56 s, respectively. The CBODL-RPD, DLADT-PW, and FR-CNN methods accomplished moderate performance, with CT values of 2.90 s, 2.75 s, and 2.90 s, respectively. However, the BGRODL-OC technique achieved an effectual performance with a minimal CT of 1.08 s. These results confirm the enhanced performance of the BGRODL-OC technique.

5. Conclusions

In the current study, an automatic object classification technique for pedestrian walkways termed the BGRODL-OC technique was developed. The objective of the BGRODL-OC technique is to recognize the presence of pedestrians and objects in surveillance video. It encompasses several processes, namely, the GhostNet feature extractor, GRO-based hyperparameter selection, and ALSTM-based classification. To achieve the objective, the BGRODL-OC technique primarily applies the GhostNet feature extractor to produce a set of feature vectors. In addition, the BGRODL-OC technique makes use of the GRO algorithm for hyperparameter tuning. Finally, the object classification is performed using the ALSTM network. A wide range of experimental analyses was conducted to validate the superior performance of the BGRODL-OC algorithm. The experimental values exhibited the better performance of the BGRODL-OC approach over other current techniques, with a maximum AUC score of 97.80%. Future work can enhance the scalability of the BGRODL-OC technique for handling real-time video streams and extend its applicability to different environmental conditions and camera perspectives, so as to further bolster pedestrian safety and object classification accuracy.

Author Contributions

Conceptualization, E.Y.; data curation, K.S. and S.K.; formal analysis, E.Y. and S.K.; funding acquisition, C.S.; investigation, K.S.; methodology, E.Y. and K.S.; project administration, C.S.; resources, C.S.; software, S.K.; supervision, C.S.; validation, K.S. and C.S.; visualization, S.K.; writing—original draft, E.Y.; writing—review and editing, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a research grant from Kongju National University in 2023, and was also supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (Robust AI and Distributed Attack Detection for Edge AI Security) under Grant 2021-0-00511.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to thank Soo-yong Jeong for his contributions to the enhancement and refinement of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abdullah, F.; Abdelhaq, M.; Alsaqour, R.; Alatiyyah, M.H.; Alnowaiser, K.; Alotaibi, S.S.; Park, J. Context-aware crowd tracking and anomaly detection via deep learning and social force model. IEEE Access 2023, 11, 75884–75898. [Google Scholar] [CrossRef]
  2. Elallid, B.B.; Hamdani, S.E.; Benamar, N.; Mrani, N. Deep learning-based modelling of pedestrian perception and decision-making in Refuge Island for autonomous driving. In Computational Intelligence in Recent Communication Networks; Springer International Publishing: Cham, Switzerland, 2022; pp. 135–146. [Google Scholar] [CrossRef]
  3. Wong, P.K.-Y.; Luo, H.; Wang, M.; Leung, P.H.; Cheng, J.C.P. Recognition of pedestrian trajectories and attributes with computer vision and deep learning techniques. Adv. Eng. Inform. 2021, 49, 101356. [Google Scholar] [CrossRef]
Figure 1. Workflow of the BGRODL-OC algorithm.
Figure 2. Architecture of the ALSTM.
Figure 3. Sample images.
Figure 4. Average accuracy analysis results of the BGRODL-OC approach on the test 004 dataset.
Figure 5. Average accuracy analysis outcomes of the BGRODL-OC approach on the test 007 dataset.
Figure 6. Comparative TPR outcomes of the BGRODL-OC technique on test sequence 004.
Figure 7. Comparative TPR outcomes of the BGRODL-OC technique on test sequence 007.
Figure 8. AUC score analysis outcomes of the BGRODL-OC technique and other methods.
Figure 9. CT analysis outcomes of the BGRODL-OC technique and other techniques.
Table 1. Accuracy analysis outcomes of the BGRODL-OC approach on the test 004 and 007 datasets.
Test-004 Accuracy (%)
No. of Frames | BGRODL-OC | CBODL-RPD | DLADT-PW | RS-CNN | FR-CNN | MDT
FR-40  | 97.62 | 98.99 | 98.41 | 97.84 | 86.02 | 86.89
FR-42  | 99.45 | 98.98 | 98.23 | 97.65 | 88.90 | 85.67
FR-46  | 99.50 | 98.85 | 98.24 | 97.66 | 88.26 | 83.42
FR-51  | 99.38 | 99.61 | 99.32 | 97.60 | 85.88 | 86.39
FR-75  | 99.37 | 99.95 | 99.06 | 97.63 | 88.61 | 86.02
FR-106 | 99.90 | 98.96 | 98.07 | 98.27 | 88.92 | 86.99
FR-123 | 99.45 | 98.98 | 98.20 | 97.93 | 86.29 | 82.91
FR-135 | 98.93 | 98.93 | 98.06 | 97.89 | 86.25 | 83.26
FR-136 | 99.60 | 98.93 | 98.01 | 98.19 | 88.96 | 86.73
FR-137 | 99.07 | 98.91 | 98.08 | 98.25 | 87.34 | 86.13
FR-149 | 99.82 | 98.90 | 98.04 | 98.10 | 86.00 | 84.86
FR-158 | 99.85 | 98.99 | 98.20 | 98.04 | 88.51 | 86.59
FR-177 | 98.91 | 98.94 | 98.20 | 97.67 | 86.48 | 84.51
FR-178 | 99.51 | 99.00 | 99.09 | 97.78 | 88.79 | 84.05
FR-180 | 99.40 | 98.98 | 98.07 | 97.95 | 86.05 | 83.20
Test-007 Accuracy (%)
No. of Frames | BGRODL-OC | CBODL-RPD | DLADT-PW | RS-CNN | FR-CNN | MDT
FR-78  | 98.55  | 97.32  | 97.33  | 92.56 | 86.74 | 78.77
FR-91  | 100.64 | 100.91 | 98.01  | 93.97 | 87.70 | 77.38
FR-92  | 100.45 | 100.16 | 100.68 | 95.52 | 88.72 | 67.78
FR-110 | 100.49 | 99.57  | 97.05  | 93.86 | 85.92 | 73.46
FR-113 | 97.60  | 96.11  | 94.90  | 90.33 | 86.89 | 76.72
FR-115 | 89.41  | 88.27  | 85.81  | 85.24 | 84.60 | 73.46
FR-125 | 100.44 | 100.31 | 99.80  | 94.13 | 91.76 | 71.02
FR-142 | 100.06 | 99.44  | 99.49  | 97.12 | 84.40 | 71.34
FR-146 | 87.11  | 85.98  | 86.62  | 82.64 | 77.18 | 81.28
FR-147 | 90.71  | 89.41  | 85.68  | 83.85 | 82.98 | 69.79
FR-148 | 77.78  | 76.25  | 71.01  | 56.62 | 55.37 | 70.05
FR-150 | 96.80  | 95.91  | 89.89  | 85.40 | 84.29 | 73.55
FR-178 | 84.31  | 83.07  | 76.05  | 71.60 | 65.16 | 78.76
FR-179 | 82.43  | 81.51  | 74.34  | 65.90 | 63.16 | 75.64
FR-180 | 90.88  | 89.82  | 85.89  | 83.00 | 82.71 | 80.10
Table 2. Average accuracy analysis results of the BGRODL-OC approach on the test 004 and 007 datasets.
Average Accuracy (%)
Dataset | BGRODL-OC | CBODL-RPD | DLADT-PW | RS-CNN | FR-CNN | MDT
Test-004 | 99.32 | 99.06 | 98.35 | 97.90 | 87.42 | 85.17
Test-007 | 93.18 | 92.27 | 89.50 | 84.78 | 80.51 | 74.61
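The average accuracies in Table 2 are the per-frame accuracies of Table 1 averaged over the listed frames. A minimal sketch, using the BGRODL-OC values transcribed from the Test-004 part of Table 1:

```python
# Per-frame accuracies (%) of BGRODL-OC on test sequence 004 (Table 1).
bgrodl_test004 = [
    97.62, 99.45, 99.50, 99.38, 99.37, 99.90, 99.45, 98.93,
    99.60, 99.07, 99.82, 99.85, 98.91, 99.51, 99.40,
]

# The Table 2 entry is the arithmetic mean of the per-frame values.
average_accuracy = round(sum(bgrodl_test004) / len(bgrodl_test004), 2)
print(average_accuracy)  # 99.32, matching the Test-004 row of Table 2
```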
Table 3. Comparative TPR outcomes of the BGRODL-OC technique and other existing methods on test sequence 004.
True Positive Rate (TPR) (Test Sequence-004)
False Positive Rate (FPR) | BGRODL-OC | CBODL-RPD | DLADT-PW | RS-CNN | FR-CNN | MDT
0.00 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
0.05 | 0.2194 | 0.2156 | 0.1921 | 0.1920 | 0.1547 | 0.1211
0.10 | 0.5234 | 0.3685 | 0.3637 | 0.3164 | 0.2785 | 0.2272
0.15 | 0.6310 | 0.4604 | 0.4927 | 0.4474 | 0.4243 | 0.3380
0.20 | 0.7702 | 0.5961 | 0.6093 | 0.5462 | 0.4972 | 0.5149
0.25 | 0.9508 | 0.7120 | 0.7166 | 0.6726 | 0.6170 | 0.6559
0.30 | 0.9910 | 0.8783 | 0.8579 | 0.8786 | 0.8106 | 0.8008
0.35 | 0.9845 | 0.9373 | 0.9019 | 0.9388 | 0.8790 | 0.8631
0.40 | 0.9845 | 0.9113 | 0.8760 | 0.9113 | 0.8507 | 0.8381
0.45 | 0.9931 | 0.9315 | 0.9340 | 0.9795 | 0.8798 | 0.9278
0.50 | 0.9998 | 0.9592 | 0.9517 | 0.9998 | 0.9142 | 0.9693
0.55 | 0.9998 | 0.9667 | 0.9719 | 0.9998 | 0.9466 | 0.9899
0.60 | 0.9998 | 0.9667 | 0.9869 | 0.9998 | 0.9796 | 0.9950
0.65 | 0.9998 | 0.9667 | 0.9844 | 0.9998 | 0.9781 | 0.9939
0.70 | 0.9998 | 0.9794 | 0.9920 | 0.9998 | 0.9886 | 0.9863
0.75 | 0.9998 | 0.9869 | 0.9931 | 0.9998 | 0.9956 | 0.9941
0.80 | 0.9998 | 0.9895 | 0.9941 | 0.9998 | 0.9995 | 0.9940
0.85 | 0.9998 | 0.9931 | 0.9931 | 0.9998 | 0.9986 | 0.9939
0.90 | 0.9998 | 0.9956 | 0.9941 | 0.9998 | 0.9994 | 0.9958
0.95 | 0.9998 | 0.9956 | 0.9931 | 0.9998 | 0.9999 | 0.9939
1.00 | 0.9998 | 0.9956 | 0.9931 | 0.9997 | 0.9994 | 0.9950
Table 4. Comparative TPR outcomes of the BGRODL-OC technique and other existing methods on test sequence 007.
True Positive Rate (TPR) (Test Sequence-007)
False Positive Rate (FPR) | BGRODL-OC | CBODL-RPD | DLADT-PW | RS-CNN | FR-CNN | MDT
0.00 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
0.05 | 0.6228 | 0.4448 | 0.3607 | 0.2220 | 0.3609 | 0.3034
0.10 | 0.6303 | 0.4703 | 0.4006 | 0.3995 | 0.4312 | 0.3429
0.15 | 0.8952 | 0.7162 | 0.5475 | 0.5133 | 0.6320 | 0.4550
0.20 | 0.9586 | 0.7936 | 0.5991 | 0.7132 | 0.7568 | 0.5312
0.25 | 0.9785 | 0.8235 | 0.8490 | 0.7872 | 0.7784 | 0.6676
0.30 | 0.9560 | 0.8952 | 0.9190 | 0.8625 | 0.8965 | 0.7880
0.35 | 0.9257 | 0.9583 | 0.9499 | 0.9677 | 0.8957 | 0.8363
0.40 | 0.9260 | 0.9653 | 0.9539 | 0.9597 | 0.9317 | 0.9990
0.45 | 0.9436 | 0.8146 | 0.8561 | 0.8397 | 0.8626 | 0.8310
0.50 | 0.9711 | 0.8635 | 0.8791 | 0.8757 | 0.9334 | 0.8591
0.55 | 0.9711 | 0.8818 | 0.8911 | 0.8923 | 0.9366 | 0.8576
0.60 | 0.9711 | 0.9028 | 0.9136 | 0.8935 | 0.9629 | 0.8584
0.65 | 0.9711 | 0.9029 | 0.9225 | 0.9051 | 0.9617 | 0.8771
0.70 | 0.9711 | 0.9077 | 0.9567 | 0.9113 | 0.9625 | 0.9147
0.75 | 0.9711 | 0.9343 | 0.9827 | 0.9264 | 0.9603 | 0.9397
0.80 | 0.9971 | 0.9468 | 0.9822 | 0.9427 | 0.9610 | 0.9407
0.85 | 0.9971 | 0.9464 | 0.9867 | 0.9675 | 0.9771 | 0.9406
0.90 | 0.9971 | 0.9571 | 0.9798 | 0.9908 | 0.9680 | 0.9772
0.95 | 0.9971 | 0.9932 | 0.9691 | 0.9967 | 0.9602 | 0.9701
1.00 | 0.9971 | 0.9940 | 0.9981 | 0.9967 | 0.9857 | 0.9701
Table 5. AUC score and CT analysis outcomes of the BGRODL-OC technique and other algorithms.
Methods | AUC Score (%) | Computational Time (s)
BGRODL-OC | 97.80 | 1.08
CBODL-RPD | 96.54 | 2.90
DLADT-PW  | 89.24 | 2.75
RS-CNN    | 90.03 | 3.19
FR-CNN    | 89.88 | 2.90
MDT       | 89.28 | 3.56
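AUC scores such as those in Table 5 are conventionally obtained by integrating the ROC curve over the false positive rate. A minimal sketch, applying the trapezoidal rule to the BGRODL-OC points of test sequence 004 from Table 3 (note that this single-sequence value need not equal the overall 97.80% reported in Table 5):

```python
# ROC points for BGRODL-OC on test sequence 004, transcribed from Table 3.
fpr = [i * 0.05 for i in range(21)]  # 0.00, 0.05, ..., 1.00
tpr = [0.0000, 0.2194, 0.5234, 0.6310, 0.7702, 0.9508, 0.9910,
       0.9845, 0.9845, 0.9931, 0.9998, 0.9998, 0.9998, 0.9998,
       0.9998, 0.9998, 0.9998, 0.9998, 0.9998, 0.9998, 0.9998]

# Trapezoidal rule: sum the areas of the trapezoids between adjacent points.
auc = sum((fpr[i + 1] - fpr[i]) * (tpr[i] + tpr[i + 1]) / 2
          for i in range(len(fpr) - 1))
print(round(auc, 4))
```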
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yang, E.; Shankar, K.; Kumar, S.; Seo, C. Bioinspired Garra Rufa Optimization-Assisted Deep Learning Model for Object Classification on Pedestrian Walkways. Biomimetics 2023, 8, 541. https://doi.org/10.3390/biomimetics8070541
