A Road Behavior Pattern-Detection Model in Querétaro City Streets by the Use of Shape Descriptors
Abstract
1. Introduction
- Human operators are required to monitor day-to-day activities.
- Human intervention is required to locate the same object if multiple cameras are used.
1.1. Related Work
1.2. Theoretical Background
2. Materials and Methods
2.1. Scenery
2.2. Features Extraction
Algorithm 1. Pseudocode employed for geometric feature extraction.

Input: video image sequence
Output: feature dataset T

set intensity criterion
set radius of the structuring object
set connectivity pixel
while has a sequence do
    get image(i)
    set image(i) to grayscale
    set image(i) to normalize
    set background model
    set pixel intensity criterion
    set morphological process
    [L, n] = get moving objects
    for each object found n do
        xy = get connected components
        arx = calculate the area
        per = calculate the perimeter
        cxy = find the centroid in x, y
        cir = calculate the circularity value
        eul = find Euler's number
        std = calculate the standard deviation in x, y
        [m1, ..., m7] = calculate Hu's invariant moments
        [n, coord, puntos] = detect Harris corners
        save and insert into dataset T
    end for
end while
return T
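As a rough illustration of the per-object loop in Algorithm 1, the sketch below labels connected components in a binary foreground mask and computes a subset of the features (area, perimeter, centroid, circularity) in pure NumPy. The helper names are hypothetical, and circularity is taken here as the simple ratio 4πA/P² rather than the distribution-based measure of [43].

```python
import numpy as np

def label_components(binary):
    """4-connected component labeling via an explicit-stack flood fill."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                count += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and labels[y, x] == 0:
                        labels[y, x] = count
                        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, count

def shape_features(mask):
    """Area, perimeter (boundary-pixel count), centroid, and 4*pi*A/P^2 circularity."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    # interior pixels have all four 4-neighbours inside the object
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    circularity = 4.0 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return area, perimeter, (xs.mean(), ys.mean()), circularity

# two synthetic "moving object" blobs standing in for segmented vehicles
frame = np.zeros((10, 12), dtype=bool)
frame[1:5, 1:6] = True
frame[6:9, 8:11] = True
labels, n = label_components(frame)
rows = [shape_features(labels == k) for k in range(1, n + 1)]
```

The remaining columns of the dataset (Euler number, standard deviation, Hu moments, Harris corners) would extend `shape_features` inside the same per-object loop.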
2.3. Time Series
2.4. Causality Analysis
2.5. External Dependence Model
3. Results
3.1. Scenery
3.2. Features Extraction
3.3. Time Series
3.4. Causality Analysis
3.5. External Dependence Model
- High positive correlations: a high positive value indicates that the vehicle flow is downward (vehicles go down); that is, if there is flow in camera , there will proportionally also be flow in camera .
- High negative correlations: negative values represent a change of flow; a high negative value means the number of vehicles decreases (opposite flow).
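The sign convention in the two cases above can be illustrated with synthetic per-minute vehicle counts; the series and the `pearson` helper below are illustrative assumptions, not the study's data.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# synthetic vehicles-per-minute counts for two monitored regions
upstream = np.array([5, 9, 14, 12, 8, 6, 11, 15])
same_direction = upstream + np.array([1, -1, 0, 1, 0, -1, 1, 0])  # shared flow
opposite = upstream.max() + upstream.min() - upstream             # inverted flow

r_same = pearson(upstream, same_direction)  # high and positive
r_opp = pearson(upstream, opposite)         # high and negative
```

A high `r_same` corresponds to the first bullet (flow in one region implies proportional flow in the other), while the strongly negative `r_opp` corresponds to the second (opposite flow).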
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Vishwakarma, S.; Agrawal, A. A survey on activity recognition and behavior understanding in video surveillance. Vis. Comput. 2013, 29, 983–1009. [Google Scholar] [CrossRef]
- Camero, A.; Alba, E. Smart City and information technology: A review. Cities 2019, 93, 84–94. [Google Scholar] [CrossRef]
- Betanzo-Quezada, E.; Romero-Navarrete, J.A.; Obregón-Biosca, S.A. Researches on urban freight transport in the Mexican city of Queretaro: From central and peri-urban areas. J. Urban Environ. Eng. 2015, 9, 12–21. [Google Scholar] [CrossRef]
- Ochoa-Olán, J.D.J.; Betanzo-Quezada, E.; Romero-Navarrete, J.A. A modeling and micro-simulation approach to estimate the location, number and size of loading/unloading bays: A case study in the city of Querétaro, Mexico. Transp. Res. Interdiscip. Perspect. 2021, 10, 100400. [Google Scholar] [CrossRef]
- Trencher, G. Towards the smart city 2.0: Empirical evidence of using smartness as a tool for tackling social challenges. Technol. Forecast. Soc. Chang. 2019, 142, 117–128. [Google Scholar] [CrossRef]
- Haluza, D.; Jungwirth, D. Artificial Intelligence and Ten Societal Megatrends: An Exploratory Study Using GPT-3. Systems 2023, 11, 120. [Google Scholar] [CrossRef]
- Wang, W.; He, F.; Li, Y.; Tang, S.; Li, X.; Xia, J.; Lv, Z. Data information processing of traffic digital twins in smart cities using edge intelligent federation learning. Inf. Process. Manag. 2023, 60, 18. [Google Scholar] [CrossRef]
- Yang, B.; Lv, Z.; Wang, F. Digital Twins for Intelligent Green Buildings. Buildings 2022, 12, 856. [Google Scholar] [CrossRef]
- Amen, M.A.; Afara, A.; Nia, H.A. Exploring the Link between Street Layout Centrality and Walkability for Sustainable Tourism in Historical Urban Areas. Urban Sci. 2023, 7, 67. [Google Scholar] [CrossRef]
- Husain, A.A.; Maity, T.; Yadav, R.K. Vehicle detection in intelligent transport system under a hazy environment: A survey. IET Image Process. 2020, 14, 1–10. [Google Scholar] [CrossRef]
- Mohammed, A.S.; Amamou, A.; Ayevide, F.K.; Kelouwani, S.; Agbossou, K.; Zioui, N. The Perception System of Intelligent Ground Vehicles in All Weather Conditions: A Systematic Literature Review. Sensors 2020, 20, 6532. [Google Scholar] [CrossRef] [PubMed]
- Su, Y.; Chen, X.; Cang, C.; Li, F.; Rao, P. A Space Target Detection Method Based on Spatial–Temporal Local Registration in Complicated Backgrounds. Remote Sens. 2024, 16, 669. [Google Scholar] [CrossRef]
- Selvi, C.T.; Amudha, J. Automatic Video Surveillance System for Pedestrian Crossing Using Digital Image Processing. Indian J. Sci. Technol. 2019, 12, 1–6. [Google Scholar] [CrossRef]
- Hsia, S.-C.; Wang, S.-H.; Wei, C.-M.; Chang, C.-Y. Intelligent Object Tracking with an Automatic Image Zoom Algorithm for a Camera Sensing Surveillance System. Sensors 2022, 22, 8791. [Google Scholar] [CrossRef]
- Dilek, E.; Dener, M. Computer Vision Applications in Intelligent Transportation Systems: A Survey. Sensors 2023, 23, 2938. [Google Scholar] [CrossRef] [PubMed]
- Shantaiya, S.; Verma, K.; Mehta, K.K. Multiple class image-based vehicle classification using soft computing algorithms. Int. Arab J. Inf. Technol. 2016, 13, 835–841. [Google Scholar]
- Moghadam, K.Y.; Noori, M.; Silik, A.; Altabey, W.A. Damage Detection in Structures by Using Imbalanced Classification Algorithms. Mathematics 2024, 12, 432. [Google Scholar] [CrossRef]
- Gu, S.; Wang, L.; Hao, W.; Du, Y.; Wang, J.; Zhang, W. Online Video Object Segmentation via Boundary-Constrained Low-Rank Sparse Representation. IEEE Access 2019, 7, 53520–53533. [Google Scholar] [CrossRef]
- Wang, Z.; Lv, Y.; Wu, R.; Zhang, Y. Review of GrabCut in Image Processing. Mathematics 2023, 11, 1965. [Google Scholar] [CrossRef]
- Rawassizadeh, R.; Dobbins, C.; Akbari, M.; Pazzani, M. Indexing Multivariate Mobile Data through Spatio-Temporal Event Detection and Clustering. Sensors 2019, 19, 448. [Google Scholar] [CrossRef]
- Zhuo, X.; Fraundorfer, F.; Kurz, F.; Reinartz, P. Automatic Annotation of Airborne Images by Label Propagation Based on a Bayesian-CRF Model. Remote Sens. 2019, 11, 145. [Google Scholar] [CrossRef]
- Zambrano-Martinez, J.L.; Calafate, C.T.; Soler, D.; Cano, J.-C.; Manzoni, P. Modeling and Characterization of Traffic Flows in Urban Environments. Sensors 2018, 18, 2020. [Google Scholar] [CrossRef] [PubMed]
- Asumadu-Sarkodie, S.; Owusu, P.A. The Kenya Case of Multivariate Causality of Carbon Dioxide Emissions. Preprints 2016, 1–28. [Google Scholar] [CrossRef]
- Liang, X.S. Normalized Multivariate Time Series Causality Analysis and Causal Graph Reconstruction. Entropy 2021, 23, 679. [Google Scholar] [CrossRef]
- Siggiridou, E.; Koutlis, C.; Tsimpiris, A.; Kugiumtzis, D. Evaluation of Granger Causality Measures for Constructing Networks from Multivariate Time Series. Entropy 2019, 21, 1080. [Google Scholar] [CrossRef]
- Ahmad, M.S.; Szczepankiewicz, E.I.; Yonghong, D.; Ullah, F.; Ullah, I.; Loopesco, W.E. Does Chinese Foreign Direct Investment (FDI) Stimulate Economic Growth in Pakistan? An Application of the Autoregressive Distributed Lag (ARDL Bounds) Testing Approach. Energies 2022, 15, 2050. [Google Scholar] [CrossRef]
- Barnett, L.; Seth, A. The MVGC Multivariate Granger Causality Toolbox: A New Approach to Granger-causal Inference. J. Neurosci. Methods 2013, 223, 50–68. [Google Scholar] [CrossRef] [PubMed]
- Emonet, R.; Varadarajan, J.; Odobez, J.-M. Multi-camera open space human activity discovery for anomaly detection. In Proceedings of the 2011 8th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Klagenfurt, Austria, 30 August–2 September 2011; pp. 218–223. [Google Scholar]
- Jiménez-Hernández, H.; González-Barbosa, J.-J.; Garcia-Ramírez, T. Detecting Abnormal Vehicular Dynamics at Intersections Based on an Unsupervised Learning Approach and a Stochastic Model. Sensors 2010, 10, 7576–7601. [Google Scholar] [CrossRef] [PubMed]
- Costanzo, A.; Faro, A. Towards an Open and Interoperable Platform for Real Time Decision Making in Intelligent Cities. In Proceedings of the 2012 Eighth International Conference on Signal-Image Technology & Internet-Based Systems (SITIS 2012), Naples, Italy, 28 November–1 December 2012; pp. 571–578. [Google Scholar]
- Wang, X. Intelligent multi-camera video surveillance: A review. Pattern Recognit. Lett. 2013, 34, 3–19. [Google Scholar] [CrossRef]
- Wiener, N. The theory of prediction. Modern Mathematics for the Engineer; McGraw-Hill: New York, NY, USA, 1956; pp. 165–190. [Google Scholar]
- Granger, C.W.J. Investigating Causal Relations by Econometric Models and Cross-spectral Methods. Econometrica 1969, 37, 424–438. [Google Scholar] [CrossRef]
- Deshpande, G.; LaConte, S.; James, G.A.; Peltier, S.; Hu, X. Multivariate Granger Causality Analysis of fMRI Data. Hum. Brain Mapp. 2009, 30, 1361–1373. [Google Scholar] [CrossRef] [PubMed]
- Sossa-Azuela, J.H.; Cuevas-Jiménez, E.B.; Zaldivar-Navarro, D. Alternative Way to Compute the Euler Number of a Binary Image. J. Appl. Res. Technol. 2011, 9, 335–341. [Google Scholar] [CrossRef]
- He, L.; Yao, B.; Zhao, X.; Yang, Y.; Shi, Z.; Kasuya, H.; Chao, Y. A fast algorithm for integrating connected-component labeling and euler number computation. J. Real-Time Image Process. 2018, 15, 709–723. [Google Scholar] [CrossRef]
- Diaz-De-Leon, S.; Sossa-Azuela, J.H. On the computation of the Euler number of a binary object. Pattern Recognit. 1996, 29, 471–476. [Google Scholar] [CrossRef]
- Di Zenzo, S.; Cinque, L.; Levialdi, S. Run-based algorithms for binary image analysis and processing. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 83–89. [Google Scholar]
- Dyer, C.R. Computing the Euler number of an image from its quadtree. Comput. Graph. Image Process. 1980, 13, 270–276. [Google Scholar] [CrossRef]
- Sampallo, G. Reconocimiento de Tipos de Hojas. Intel. Artif. Rev. Iberoam. Intel. Artif. 2003, 7, 55–62. [Google Scholar]
- Cervantes, J.; Taltempa, J.; García-Lamont, F.; Ruiz-Castilla, J.S.; Yee-Rendon, A.; Jalili, L.D. Análisis comparativo de las técnicas utilizadas en un Sistema de Reconocimiento de Hojas de Planta. Rev. Iberoam. Intel. Artif. 2017, 14, 104–114. [Google Scholar]
- Herrera-Navarro, A.M.; Jiménez-Hernández, H.; Terol-Villalobos, I.R. Framework for characterizing circularity based on a probability distribution. Measurement 2013, 46, 4232–4243. [Google Scholar] [CrossRef]
- Herrera-Navarro, A.M.; Hernández, H.J.; Guerrero, F.M.; Terol-Villalobos, I.R.; Peregrina-Barreto, H. A New Measure of Circularity Based on Distribution of the Radius. Comput. Sist. 2013, 17, 515–526. [Google Scholar]
- Hu, M.-K. Visual pattern recognition by moment invariants. IEEE Trans. Inf. Theory 1962, 8, 179–187. [Google Scholar]
- Mei, Y.; Androutsos, D. Affine invariant shape descriptors: The ICA-Fourier descriptor and the PCA-Fourier descriptor. In Proceedings of the 2008 19th International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, 8–11 December 2008; pp. 3614–3617. [Google Scholar]
- Yang, H.; Sengupta, S. Intelligent shape recognition for complex industrial tasks. IEEE Control. Syst. Mag. 1988, 8, 23–30. [Google Scholar] [CrossRef]
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151. [Google Scholar]
- Moravec, H.P. Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover; Stanford University: Stanford, CA, USA, 1980. [Google Scholar]
- Lima, V.; Dellajustina, F.J.; Shimoura, R.O.; Girardi-Schappo, M.; Kamiji, N.L.; Pena, R.F.O.; Roque, A.C. Granger causality in the frequency domain: Derivation and applications. Rev. Bras. Ensino Física 2020, 42, e20200007-10. [Google Scholar] [CrossRef]
- Dubrow, A. Artificial Intelligence and Supercomputers to Help Alleviate Urban Traffic Problems; Texas Advanced Computing Center: Austin, TX, USA, 2017. [Google Scholar]
- Wang, Y.; Jodoin, P.-M.; Porikli, F.; Konrad, J.; Benezeth, Y.; Ishwar, P. CDnet 2014: An Expanded Change Detection Benchmark Dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 387–394. [Google Scholar]
Name | Description | Value |
---|---|---|
Dataset: TACC [50] | Scenery: Lamar and 38th Street. | 36,000 frames. |
Dataset: CDnet [51] | Scenery: Highway. | 1700 frames. |
Dataset: study scenery | Scenery: Av. Paseo de la Constitución, Querétaro, Qro., México. | 1,514,286 frames. |
Lambda server | Workstation with NVIDIA GPU, Ubuntu OS. | RTX 4090, 16,384 CUDA cores. |
Python | Programming language. | Version 3.9.13. |
Dome cameras | Three VIVOTEK PTZ dome cameras. | Model SD9364-EHL. |
Name | Description |
---|---|
Object number | Number of connected objects in the image. |
Area | Number of pixels covered by each object in the image. |
Perimeter | Number of pixels on the boundary of each object. |
Centroid | Geometric center of the object: the point at which the total area of the figure is considered concentrated. |
Standard deviation | Common measure of dispersion, indicating how spread out the pixel coordinates are with respect to the mean. |
Circularity [43] | Percentage of circularity of each object. |
Euler number [46] | Total number of objects in the image minus the number of holes in those connected objects. |
Harris corners [47] | Features used to infer the content of an image; a corner is the intersection of two edges, or a point with two dominant edge directions. |
Hu moments [44] | Set of seven invariant descriptors, built from ordinary, central, and normalized moments, that quantify an object's shape independently of translation, scale, and rotation. |
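As a sketch of the invariance the table attributes to Hu moments [44], the snippet below computes the first two of the seven invariants from normalized central moments in plain NumPy and checks them against a rotated copy of a synthetic shape; the helper name and test shape are illustrative, not the paper's implementation.

```python
import numpy as np

def hu_first_two(img):
    """First two of Hu's seven invariants, from normalized central moments."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalized central moment eta_pq
        return ((x - cx) ** p * (y - cy) ** q * img).sum() / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

shape = np.zeros((20, 20))
shape[4:9, 3:15] = 1.0                  # a 5 x 12 rectangle
h_orig = hu_first_two(shape)
h_rot = hu_first_two(np.rot90(shape))   # same shape, rotated 90 degrees
```

Because the invariants depend only on normalized central moments, `h_orig` and `h_rot` agree despite the rotation, which is what makes them usable as per-object descriptors across frames.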
OBJ | AREA | PERIMETER | X | Y | STDX | STDY | CIR | EULER | HARRIS | MHU1 | MHU2 | MHU3 | MHU4 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 62 | 27.589 | 69.806 | 97.323 | 2.9578 | 1.8087 | 0.80645 | 1 | 2 | 0.20291 | 0.01238 | 0.00286 | 0.01120 |
2 | 37 | 19.815 | 132.57 | 57.1620 | 2.2304 | 1.3645 | 0.86486 | 1 | 2 | 0.18919 | 0.00808 | 0.00068 | 0.00725 |
1 | 63 | 27.986 | 69.825 | 97.667 | 2.9267 | 1.9177 | 0.7619 | 1 | 2 | 0.20912 | 0.01300 | 0.00245 | 0.01360 |
2 | 24 | 15.316 | 132.5 | 57.5 | 1.7446 | 1.1421 | 0.83333 | 1 | 2 | 0.19444 | 0.00525 | 0.00091 | 0.01108 |
… | … | … | … | … | … | … | … | … | … | … | … | … | … |
1 | 62 | 27.589 | 97.323 | 97.323 | 2.9578 | 1.8087 | 0.80645 | 1 | 2 | 0.20291 | 0.01238 | 0.00286 | 0.01120 |
… | … | … | … | … | … | … | … | … | … | … | … | … | … |
1 | 62 | 27.589 | 69.806 | 97.323 | 2.9578 | 1.8087 | 0.80645 | 1 | 2 | 0.20291 | 0.01238 | 0.00286 | 0.01120 |
Value | Description |
---|---|
+0.93 | Positive correlation of 0.93 (blue) between the zero regions of camera and camera : when the zero region of camera shows traffic or sharp increases, the zero region of camera increases proportionally. |
−0.69 | Negative correlation of −0.69 (red) between region one of camera and region zero of camera : an inverse behavior, so heavy traffic in region one of camera coincides with decreased traffic in region zero of camera . |
−0.67 | Negative correlation of −0.67 (red) between region two of camera and region zero of camera : likewise inverse, so heavy traffic in region two of camera coincides with decreased traffic in the zero region of camera . |
−0.43 | Negative correlation of −0.43 (red) between region three of camera and region zero of camera : inverse behavior, inferring that heavy traffic in region three of camera coincides with decreased traffic in the zero region of camera . |
−0.46 | Negative correlation of −0.46 (red) between region four of camera and region zero of camera : inverse behavior, so heavy traffic in region four of camera coincides with decreased traffic in the zero region of camera . |
+0.85 | Positive correlation of 0.85 (blue) between the zero regions of camera and camera : traffic (sharp increases) in the zero region of camera is accompanied proportionally by traffic in the zero region of camera . |
−0.43 | Negative correlation of −0.43 (red) between region one of camera and region zero of camera : inverse behavior, so heavy traffic in region one of camera coincides with decreased traffic in region zero of camera . |
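Beyond pairwise correlation, the causality analysis of Sections 2.4 and 3.4 rests on Granger's idea that a series X causes Y if X's past improves the prediction of Y beyond Y's own past. A minimal bivariate version, on synthetic data rather than the study's camera series, can be sketched as follows (`granger_f` is a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for 'x Granger-causes y' using p lags: compare the residual
    sum of squares of y on its own past (restricted) against y on its own
    past plus x's past (unrestricted)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y) - p
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    ones = np.ones((n, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        res = Y - X @ beta
        return float(res @ res)

    rss_r = rss(np.hstack([ones, lags_y]))           # y's own past only
    rss_u = rss(np.hstack([ones, lags_y, lags_x]))   # plus x's past
    return ((rss_r - rss_u) / p) / (rss_u / (n - 2 * p - 1))

# synthetic flows: y echoes x with a one-step delay
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(x, y)   # large: x's past predicts y
f_yx = granger_f(y, x)   # near 1: y's past does not predict x
```

Comparing `f_xy` against `f_yx` (or against an F-distribution quantile) indicates the direction of influence between the two flows.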
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Trejo-Morales, A.; Jimenez-Hernandez, H. A Road Behavior Pattern-Detection Model in Querétaro City Streets by the Use of Shape Descriptors. Appl. Syst. Innov. 2024, 7, 44. https://doi.org/10.3390/asi7030044