A Comparative Study on Detection and Recognition of Nonuniform License Plates
Abstract
1. Introduction
- We present a comparative study of four algorithms: two address LP detection, while the other two address LP recognition in unconstrained traffic environments.
- We report experiments on a challenging Pakistani traffic dataset with nonuniform LP appearances. The compared algorithms are applied to a variety of vehicles whose LPs vary significantly in appearance. In addition, the computational complexity of these methods is investigated over varying image resolutions, from 1550 × 900 pixels for vehicle images down to 30 × 20 pixels for plate regions.
- Our LP detection and recognition study is useful for beginners and researchers who intend to apply machine learning to object detection and recognition tasks in their own applications.
2. Related Work
3. Methodology
3.1. LP Detection
3.2. LP Recognition
4. Simulation Results
4.1. Dataset Description
4.2. Evaluation Measures and Settings
4.3. LP Detection Analysis
- Both the Faster-RCNN and the E2E methods successfully detect a single LP that appears in the input image. However, as shown in the second image of the first row in Figure 3b, there is a false LP detection around the backlight of the car. Both methods handle LP detection from frontal views through to angled views.
- As shown in the second row of Figure 3, both the Faster-RCNN and the E2E methods successfully detect up to three LPs in the input image. In this case, the LP shooting angle does not affect the accuracy of either method (a minimal detection sketch is given after this list).
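For readers who want to experiment with the detection pool, the following is a minimal, generic Faster-RCNN sketch built on torchvision. It is not the authors' trained detector: the COCO-pretrained weights, score threshold, and file name are illustrative assumptions, and the model would need fine-tuning on LP annotations before it behaves like the method evaluated here.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic torchvision Faster-RCNN backbone with COCO weights; this is NOT the
# authors' released model and would require fine-tuning on LP annotations.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_plates(image_path, score_threshold=0.5):
    """Return bounding boxes and scores above the confidence threshold."""
    image = Image.open(image_path).convert("RGB")
    prediction = model([to_tensor(image)])[0]
    keep = prediction["scores"] >= score_threshold
    return prediction["boxes"][keep], prediction["scores"][keep]

# Every box above the threshold is returned, so frames containing two or three
# plates simply yield additional rows in the boxes tensor.
# boxes, scores = detect_plates("traffic_scene.jpg")  # hypothetical file name
```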
4.4. LP Recognition Analysis
- Both the DNN- and the CA-CenterNet-based methods successfully recognize a single LP that appears in the input image. Both methods identified LPs on vehicles of different makes and orientations, highlighting their versatility in real-world scenarios.
- In Figure 4a, the LPs displaying the characters ICRC 7219 and AGC 43 were accurately recognized by both methods in single-class LP recognition. For multi-class LP recognition, as shown in the second row of Figure 4a, both methods handled nonstandard LP formats and additional text elements, such as the 111 marking on the rear window of the white car in the second-to-last image of that row. We observed that images with embossed LP text were not recognized by either method; one such example is the third car's plate in the last image of the second row of Figure 4, which neither the DNN nor the CA-CenterNet method could process.
- Figure 4 also indicates that both the DNN- and the CA-CenterNet-based methods are barely affected by the LP shooting angle and perform well on multi-style plates with different fonts (a generic character-decoding sketch is given after this list).
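The CA-CenterNet recognizer localizes characters as center points on a heatmap. The sketch below shows only the generic CenterNet-style decoding step, i.e., peak picking on a per-class character heatmap followed by left-to-right ordering. The heatmap itself would come from a trained network, which is not reproduced here, and the alphabet and threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def decode_character_heatmap(heatmap, alphabet, score_threshold=0.3):
    """Turn a CenterNet-style character heatmap into a plate string.

    heatmap:  tensor of shape (num_classes, H, W), one channel per character
              class, as produced by a trained recognizer (not included here).
    alphabet: string mapping class indices to characters, e.g. digits + letters.
    """
    # Keep only local maxima: a location survives if it equals the maximum of
    # its 3 x 3 neighbourhood (the usual CenterNet "pseudo-NMS" trick).
    pooled = F.max_pool2d(heatmap.unsqueeze(0), kernel_size=3, stride=1, padding=1).squeeze(0)
    peaks = (heatmap == pooled) & (heatmap >= score_threshold)

    cls_idx, _, xs = torch.nonzero(peaks, as_tuple=True)

    # Read the surviving characters left to right, the natural order for a
    # single-line plate such as "ICRC 7219".
    order = torch.argsort(xs)
    return "".join(alphabet[c] for c in cls_idx[order].tolist())

# Example with a synthetic heatmap (36 classes: digits 0-9 and letters A-Z):
# text = decode_character_heatmap(model_output, "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")
```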
4.5. Discussion
- As shown in Table 5, when the LP resolution varies from 66 × 36 to 55 × 30 pixels, 100% LP detection accuracy is obtained for up to four LPs per image; in particular, Faster-RCNN handles four LPs in the input image with 100% detection accuracy. At lower LP resolutions, the E2E method degrades more gracefully than Faster-RCNN, yielding 98.90% detection accuracy at 55 × 30 pixels and maintaining 96.00% at 30 × 20 pixels, where Faster-RCNN drops to 94.00%.
- Overall, both detection methods are robust and yield a mean LP detection accuracy of 98.41% across all image resolutions, with the E2E method performing slightly better than Faster-RCNN at a mean detection accuracy of 98.48%, as shown in Table 5.
- For LP recognition with up to three LPs, both the DNN and the CA-CenterNet methods yield 100% accuracy at LP resolutions of 66 × 36 to 56 × 35 pixels. At lower resolutions, down to 30 × 20 pixels, the CA-CenterNet performs slightly better than the DNN-based method on average.
- Overall, the LP recognition pool delivers a mean accuracy of 98.93%, with the DNN and the CA-CenterNet methods yielding mean accuracies of 98.90% and 98.96%, respectively (this averaging is reproduced in the snippet after this list).
- Generally, both the LP detection and recognition pools are robust and yield a combined mean accuracy of 98.67%. The methods were tested in general outdoor environments ranging from sunny to rainy days, with image-capture times spanning dawn to early sunset. Our study therefore concludes that all four algorithms used in the LP detection and recognition pools are reliable, as they yield high detection and recognition accuracy. These algorithms are also scalable: their performance does not drop as the number of LPs per image increases and the image resolution decreases. Furthermore, all four methods are consistent and yield over 98% accuracy on nonuniform, outdoor-captured LP images.
- Our study indicates that most LP detection and recognition systems are designed around vehicle detection under a single LP standard, although several other methods process multi-style LPs from different countries. We therefore recommend that an LP detection and recognition system intended for unconstrained environments use the E2E method to locate LPs and the CA-CenterNet-based method for recognition. If fine-tuned and combined in a single detection-and-recognition pipeline, these methods should yield further encouraging results. Their performance also indicates that they can be used reliably in many real-time applications that require quick and accurate LP detection and recognition. Table 6 shows the complete breakdown of accuracies obtained for the images collected in our dataset; for both the low- and high-density object collections, the compared algorithms perform well, and all instances yield over 98% detection and recognition accuracy.
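The mean accuracies quoted above are simple averages of the per-resolution values in Table 5; the short script below reproduces that arithmetic so the pool-level figures can be verified directly from the table.

```python
# Per-resolution accuracies copied from Table 5 (1 to 6 LPs per image).
detection = {
    "Faster-RCNN":  [100, 100, 100, 100, 96.10, 94.00],
    "E2E":          [100, 100, 100, 98.90, 96.00, 96.00],
}
recognition = {
    "DNN":          [100, 100, 100, 97.90, 97.50, 98.00],
    "CA-CenterNet": [100, 100, 100, 98.40, 97.70, 97.68],
}

def mean(values):
    return sum(values) / len(values)

for name, acc in {**detection, **recognition}.items():
    print(f"{name}: {mean(acc):.2f}%")   # 98.35, 98.48, 98.90, 98.96

det_pool = mean([mean(a) for a in detection.values()])    # ~98.41
rec_pool = mean([mean(a) for a in recognition.values()])   # ~98.93
print(f"Detection pool: {det_pool:.2f}%, recognition pool: {rec_pool:.2f}%, "
      f"combined: {mean([det_pool, rec_pool]):.2f}%")       # combined ~98.67
```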
4.6. Computational Complexity
- Figure 5a shows the processing time at five different image resolutions for both the LP detection and the LP recognition methods. As shown in Figure 5a, the computational time of both pools decreases significantly as the image resolution is reduced. For instance, the mean processing time of the DNN method decreases from 3.91 s at a resolution of 1550 × 900 to 1.78 s at 600 × 400 pixels, and that of the E2E method drops from 3.25 s at 1550 × 900 to 1.5 s at 600 × 400 pixels. This indicates that lower image resolutions require substantially less processing time, which is crucial for real-time applications where speed is essential.
- As shown in Figure 5a, the CA-CenterNet consistently shows the fastest processing times across all resolutions. For instance, it takes 2.5 s to process a 1550 × 900-pixel image, whereas Faster-RCNN, the slowest, takes 4.8 s. Even at the lowest resolution of 600 × 400 pixels, CA-CenterNet processes the image in 1 s compared with Faster-RCNN’s 2.29 s. This makes CA-CenterNet the more efficient choice for tasks where quick detection and recognition are required, particularly at varying image resolutions. Of the four methods shown in Figure 5a, the E2E method ranks second fastest and the DNN method third across the LP detection and recognition tasks.
- As shown in Figure 5b, the LP detection pool takes from 4 s down to nearly 2 s to process image resolutions varying from 1550 × 900 to 600 × 400 pixels, while the LP recognition pool takes from 3 s down to slightly over 1 s at the same test image resolutions. This trend highlights the efficiency gained by working with lower-resolution images, which can significantly reduce the computational burden, particularly in scenarios where processing time is critical. The mean LP detection and recognition time varies from 3.5 s to 1.5 s.
- The execution time statistics presented above highlight the importance of selecting the appropriate algorithm and resolution based on the computational requirements and constraints of the task at hand. We therefore suggest the E2E method when real-time LP detection is desired and the CA-CenterNet-based method for real-time LP recognition. A simple timing harness for such a comparison is sketched below.
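A straightforward way to reproduce the kind of timing comparison shown in Figure 5 is to wall-clock each method at each test resolution. The sketch below assumes a generic `detect` callable for whichever method is being profiled; only the 1550 × 900 and 600 × 400 endpoints are stated in the text, so the intermediate resolutions are placeholders.

```python
import time
from PIL import Image

# Endpoints taken from the text (1550 x 900 down to 600 x 400 pixels); the
# three intermediate sizes are placeholders for the five-point sweep.
RESOLUTIONS = [(1550, 900), (1280, 760), (1024, 640), (800, 500), (600, 400)]

def time_method(detect, image_path, runs=10):
    """Return the mean wall-clock seconds per call at each test resolution."""
    original = Image.open(image_path).convert("RGB")
    timings = {}
    for size in RESOLUTIONS:
        frame = original.resize(size, Image.BILINEAR)
        start = time.perf_counter()
        for _ in range(runs):
            detect(frame)                     # any of the four compared methods
        timings[size] = (time.perf_counter() - start) / runs
    return timings

# Usage (hypothetical callables and file name):
# print(time_method(e2e_detect, "traffic_scene.jpg"))
# print(time_method(ca_centernet_recognize, "traffic_scene.jpg"))
```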
4.7. Further Analysis
4.8. Limitations
- As can be seen in Figure 7a, whenever the LP was broken, the detection algorithms failed to locate the LP area; consequently, when such an LP was fed to the recognition pool, the license number could not be recognized. Broken LPs of this kind are commonly found on highways in a typical Pakistani environment.
- We also observed that the algorithms employed in the LP detection and recognition pools failed to yield satisfactory results whenever the LP area was partially or fully occluded. A severe case of LP occlusion is shown in the second image of the top row in Figure 7b.
- Surprisingly, we also found many vehicles whose LPs contained embossed characters; a few such samples are shown in the top row of Figure 7c. In these cases, neither the LP detection nor the recognition algorithms yielded encouraging results. The top image in Figure 7c is particularly challenging because of the visible shadow, which induces nonuniform illumination and makes it hard for any algorithm to detect and recognize the LP.
- In a Pakistani environment, we also observed many LPs with faded letters and digits, as shown in Figure 7d. In such cases, a few digits were blurred, and in some cases the LP coating paint was eroded along with the screws embedded in it. Since the resolution of these LPs was also very low, they posed a significant challenge for the detection and recognition algorithms. We further observed semi-circular LPs on heavy transport vehicles, with colored painted borders and varied fonts highlighting the vehicle license number; on a few such test images, both the LP detection and recognition algorithms struggled to yield accurate results. These are unique cases and, unfortunately, no LP database exists on which to train the algorithms. Future studies could therefore collect such datasets and develop more robust algorithms that handle the aforementioned issues, gather mixed-style LPs to train and improve the compared methods, and further explore machine and deep learning algorithms.
4.9. Final Remarks
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- He, C.; Wang, D.; Cai, Z.; Zeng, J.; Fu, F. A vehicle matching algorithm by maximizing travel time probability based on automatic license plate recognition data. IEEE Trans. Intell. Transp. Syst. 2024, 25, 9103–9114. [Google Scholar] [CrossRef]
- Song, Y.; Liu, Y.; Lin, Z.; Zhou, J.; Li, D.; Zhou, T.; Leung, M. Learning From AI-Generated Annotations for Medical Image Segmentation. IEEE Trans. Consum. Electron. 2024, 70, 4425–4434. [Google Scholar] [CrossRef]
- Peng, Z.; Gao, Y.; Mu, S.; Xu, S.S. Toward Reliable License Plate Detection in Varied Contexts: Overcoming the Issue of Undersized Plate Annotations. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10689–10701. [Google Scholar] [CrossRef]
- Shashirangana, J.; Padmasiri, H.; Meedeniya, D.; Perera, C. Automated license plate recognition: A survey on methods and techniques. IEEE Access 2020, 9, 11203–11225. [Google Scholar] [CrossRef]
- Sultan, F.; Khan, K.; Shah, Y.A.; Shahzad, M.; Khan, U.; Mahmood, Z. Towards automatic license plate recognition in challenging conditions. Appl. Sci. 2023, 13, 3956. [Google Scholar] [CrossRef]
- Kaur, P.; Kumar, Y.; Ahmed, S.; Alhumam, A.; Singla, R.; Ijaz, M.F. Automatic License Plate Recognition System for Vehicles Using a CNN. Comput. Mater. Contin. 2022, 71, 1639–1653. [Google Scholar]
- Wang, Y.; Bian, Z.P.; Zhou, Y.; Chau, L.P. Rethinking and Designing a High-Performing Automatic License Plate Recognition Approach. IEEE Trans. Intell. Transp. Syst. 2021, 23, 8868–8880. [Google Scholar] [CrossRef]
- Huang, Q.; Cai, Z.; Lan, T. A New Approach for Character Recognition of Multi-Style Vehicle License Plates. IEEE Trans. Multimed. 2020, 23, 3768–3777. [Google Scholar] [CrossRef]
- Khare, V.; Shivakumara, P.; Chan, C.S.; Lu, T.; Meng, L.K.; Woon, H.H.; Blumenstein, M. A Novel Character Segmentation-Reconstruction Approach for License Plate Recognition. Expert Syst. Appl. 2019, 131, 219–239. [Google Scholar] [CrossRef]
- Seo, T.-M.; Kang, D.-J. A Robust Layout-Independent License Plate Detection and Recognition Model Based on Attention Method. IEEE Access 2022, 10, 57427–57436. [Google Scholar] [CrossRef]
- Li, H.; Wang, P.; Shen, C. Toward End-to-End Car License Plate Detection and Recognition with Deep Neural Networks. IEEE Trans. Intell. Transp. Syst. 2018, 20, 1126–1136. [Google Scholar] [CrossRef]
- Hsu, G.-S.; Chen, J.-C.; Chung, Y.-Z. Application-Oriented License Plate Recognition. IEEE Trans. Veh. Technol. 2013, 62, 552–561. [Google Scholar] [CrossRef]
- Gou, C.; Wang, K.; Yao, Y.; Li, Z. Vehicle License Plate Recognition Based on Extremal Regions and Restricted Boltzmann Machines. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1096–1107. [Google Scholar] [CrossRef]
- Chen, C.L.P.; Wang, B. Random-Positioned License Plate Recognition Using Hybrid Broad Learning System and Convolutional Networks. IEEE Trans. Intell. Transp. Syst. 2022, 23, 444–456. [Google Scholar] [CrossRef]
- Ke, X.; Zeng, G.; Guo, W. An Ultra-Fast Automatic License Plate Recognition Approach for Unconstrained Scenarios. IEEE Trans. Intell. Transp. Syst. 2023, 24, 5172–5185. [Google Scholar] [CrossRef]
- Yang, Y.; Li, D.; Duan, Z. Chinese Vehicle License Plate Recognition Using Kernel-Based Extreme Learning Machine with Deep Convolutional Features. IET Intell. Transp. Syst. 2018, 12, 213–219. [Google Scholar] [CrossRef]
- Lee, Y.; Yun, J.; Hong, Y.; Lee, J.; Jeon, M. Accurate License Plate Recognition and Super-Resolution Using Generative Adversarial Networks on Traffic Surveillance Video. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Jeju, Republic of Korea, 24–26 June 2018; pp. 1–4. [Google Scholar]
- Xie, L.; Ahmad, T.; Jin, L.; Liu, Y.; Zhang, S. A New CNN-Based Method for Multi-Directional Car License Plate Detection. IEEE Trans. Intell. Transp. Syst. 2018, 19, 507–517. [Google Scholar] [CrossRef]
- Liang, J.; Chen, G.; Wang, Y.; Qin, H. EGSANet: Edge–Guided Sparse Attention Network for Improving License Plate Detection in the Wild. Appl. Intell. 2022, 52, 4458–4472. [Google Scholar] [CrossRef]
- Lee, Y.; Jeon, J.; Ko, Y.; Jeon, M.; Pedrycz, W. License Plate Detection via Information Maximization. IEEE Trans. Intell. Transp. Syst. 2021, 23, 14908–14921. [Google Scholar] [CrossRef]
- Arif, M.; Umair, M.; Umar, F.; Rana, H.; Zain, L.; Muhammad, H. A Comprehensive Review of Vehicle Detection Techniques Under Varying Moving Cast Shadow Conditions Using Computer Vision and Deep Learning. IEEE Access 2022, 10, 104863–104886. [Google Scholar] [CrossRef]
- Wang, Q.; Lu, X.; Zhang, C.; Yuan, Y.; Li, X. LSV-LP: Large-scale video-based license plate detection and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 752–767. [Google Scholar] [CrossRef] [PubMed]
- Huang, Q.; Cai, Z.; Lan, T. A single neural network for mixed style license plate detection and recognition. IEEE Access 2021, 9, 21777–21785. [Google Scholar] [CrossRef]
- Lee, Y.Y.; Halim, Z.A.; Wahab, M.N.A. License plate detection using convolutional neural network–back to the basic with design of experiments. IEEE Access 2022, 10, 22577–22585. [Google Scholar] [CrossRef]
- Silva, S.M.; Jung, C.R. A flexible approach for automatic license plate recognition in unconstrained scenarios. IEEE Trans. Intell. Transp. Syst. 2021, 23, 5693–5703. [Google Scholar] [CrossRef]
- Jiang, Y.; Jiang, F.; Luo, H.; Lin, H.; Yao, J.; Liu, J.; Ren, J. An efficient and unified recognition method for multiple license plates in unconstrained scenarios. IEEE Trans. Intell. Transp. Syst. 2023, 24, 5376–5389. [Google Scholar] [CrossRef]
- Qin, G.; Yang, S.; Li, S. A vehicle path tracking system with cooperative recognition of license plates and traffic network big data. IEEE Trans. Netw. Sci. Eng. 2021, 9, 1033–1043. [Google Scholar] [CrossRef]
- Khan, M.M.; Ilyas, M.U.; Khan, I.R.; Alshomrani, S.M.; Rahardja, S. License plate recognition methods employing neural networks. IEEE Access 2023, 11, 73613–73646. [Google Scholar] [CrossRef]
- Jia, W.; Xie, M. An efficient license plate detection approach with deep convolutional neural networks in unconstrained scenarios. IEEE Access 2023, 11, 85626–85639. [Google Scholar] [CrossRef]
- Shi, H.; Zhao, D. License plate localization in complex environments based on improved GrabCut algorithm. IEEE Access 2022, 10, 88495–88503. [Google Scholar] [CrossRef]
- Ding, H.; Gao, J.; Yuan, Y.; Wang, Q. An end-to-end contrastive license plate detector. IEEE Trans. Intell. Transp. Syst. 2023, 25, 503–516. [Google Scholar] [CrossRef]
- Mahmood, Z.; Khan, K.; Khan, U.; Adil, S.H.; Ali, S.S.A.; Shahzad, M. Towards automatic license plate detection. Sensors 2022, 22, 1245. [Google Scholar] [CrossRef] [PubMed]
- Qin, S.; Liu, S. Towards end-to-end car license plate location and recognition in unconstrained scenarios. Neural Comput. Appl. 2022, 34, 21551–21566. [Google Scholar] [CrossRef]
- Mahmood, Z.; Muhammad, N.; Bibi, N.; Ali, T. A Review on state-of-the-art Face Recognition Approaches. Fractals Complex Geom. Patterns Scaling Nat. Soc. 2017, 25, 1750025-1–1750025-19. [Google Scholar] [CrossRef]
- Fan, X.; Zhao, W. Improving robustness of license plates automatic recognition in natural scenes. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18845–18854. [Google Scholar] [CrossRef]
- Mustafa, T.; Karabatak, M. Real Time Car Model and Plate Detection System by Using Deep Learning Architectures. IEEE Access 2024, 12, 107616–107630. [Google Scholar] [CrossRef]
Acronym | Meaning |
---|---|
ALPR | Automatic License Plate Recognition |
ALPDR | Auto License Plate Detection and Recognition |
CNN | Convolutional Neural Networks |
CRNet | Character Recognition Network |
CE | Cross-Entropy Loss |
DL | Deep Learning |
DNN | Deep Neural Networks |
DOE | Design Of Experiment |
DPOD-NET | Deformation Planar Object Detection Network |
EGSA | Edge-Guided Sparse Attention |
ITS | Intelligent Transportation Systems |
LPs | License Plates |
LPD | License Plate Detection |
LPR | License Plate Recognition |
LSV-LP | Large-Scale Video-based License Plate |
LSTM | Long Short-Term Memory |
OCR | Optical Character Recognition |
RNN | Recurrent Neural Network |
SAC | Sparse Attention Component |
ViBe | Visual Background Extractor |
YOLO | You Only Look Once |
Ref | Technique | Year | Merits | Datasets Used | Investigations |
---|---|---|---|---|---|
[6] | CNN architecture | 2021 | Real-time operation; simple architecture | | |
[7] | CNN + cascaded classifiers | 2022 | Real-time and accurate identification | | |
[8] | RPN + Faster-RCNN | 2021 | Slightly over 98.50% accuracy | | |
[10] | CenterNet and attention-based networks | 2022 | Simple and lightweight backbone network to extract features | | |
[15] | YOLOv3-tiny | 2023 | Processes 751 FPS | | |
[22] | Large-scale video | 2023 | Scalable and fast | | |
[24] | End-to-end | 2024 | Contrastive LP-based detector | | |
Objects | Class | Low Density | Resolution (Pixels) | High Density | Resolution (Pixels) |
---|---|---|---|---|---|
Classes | Cars | 8457 | 88 × 57~66 × 36 | 655 | 80 × 50~66 × 36 |
| | Vehicles | | 85 × 50~66 × 36 | | 75 × 50~66 × 36 |
| | LTVs | | 85 × 50~66 × 36 | | 45 × 50~66 × 36 |
| | Motorcycles | 4136 | 80 × 50~60 × 40 | 136 | 60 × 50~60 × 40 |
| | Persons | 3025 | 50 × 80~20 × 40 | 112 | 40 × 60~20 × 40 |
Total | | 15,618 | | 903 | |
Parameter | Simulation Environment |
---|---|
Test image | Varied from 1550 × 900 to 600 × 400 pixels |
Optimizer | SGDM |
Learning rate | |
Validation frequency | 50 |
Epochs | 50 |
Batch size | 32 |
L2 regularization | |
Gradient threshold | L2 normalization |
Shuffle | Every epoch |
Momentum | 0.90 |
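The settings above map onto a standard training loop. The following is a minimal PyTorch sketch using a toy classifier and random data so that it runs end to end; the learning rate and L2 regularization (weight decay) values are placeholders because the table leaves them unspecified, and the networks compared in this study are of course far larger.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the actual detection/recognition networks.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 36))
dataset = TensorDataset(torch.randn(256, 3, 64, 64), torch.randint(0, 36, (256,)))
train_loader = DataLoader(dataset, batch_size=32, shuffle=True)  # batch size 32, reshuffled every epoch

criterion = nn.CrossEntropyLoss()
# SGDM with momentum 0.90 as listed above; lr and weight_decay (L2
# regularization) are placeholder values, not taken from the table.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.90, weight_decay=1e-4)

for epoch in range(50):                                          # 50 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        # "Gradient threshold: L2 normalization" -> clip gradients by their global L2 norm.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
# A validation pass every 50 iterations would mirror the "validation frequency: 50"
# setting in the table (omitted here for brevity).
```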
Object Type | Method | No. of Objects | LP Resolution (Pixels) | Accuracy % | Mean Accuracy % | Overall Mean Accuracy % | Observations |
---|---|---|---|---|---|---|---|
LP Detection | Faster-RCNN | 1 | 66 × 36 | 100 | 98.35 | 98.41 | Both LP detection methods are robust. When generally tested in an outdoor environment, they yield a mean accuracy of more than 98% for up to 6 LPs that are visible in an image. In all these cases, the LP resolution varies from 66 × 36 pixels to 30 × 20 pixels. Both these LP algorithms handle varying lighting conditions along with LP angle orientation. |
| | | 2 | 60 × 35 | 100 | | | |
| | | 3 | 56 × 35 | 100 | | | |
| | | 4 | 55 × 30 | 100 | | | |
| | | 5 | 40 × 25 | 96.10 | | | |
| | | 6 | 30 × 20 | 94.00 | | | |
| | E2E | 1 | 66 × 36 | 100 | 98.48 | | |
| | | 2 | 60 × 35 | 100 | | | |
| | | 3 | 56 × 35 | 100 | | | |
| | | 4 | 55 × 30 | 98.90 | | | |
| | | 5 | 40 × 25 | 96.00 | | | |
| | | 6 | 30 × 20 | 96.00 | | | |
LP Recognition | DNN | 1 | 66 × 36 | 100 | 98.90 | 98.93 | When tested in open environments, both the DNN and the CA-CenterNet methods yield nearly 99% accuracy. Up to 3 LPs, the LP recognition accuracy is 100%. Both these methods recognize the LP from frontal to angular orientations. |
| | | 2 | 60 × 35 | 100 | | | |
| | | 3 | 56 × 35 | 100 | | | |
| | | 4 | 55 × 30 | 97.90 | | | |
| | | 5 | 40 × 25 | 97.50 | | | |
| | | 6 | 30 × 20 | 98.00 | | | |
| | CA-CenterNet | 1 | 66 × 36 | 100 | 98.96 | | |
| | | 2 | 60 × 35 | 100 | | | |
| | | 3 | 56 × 35 | 100 | | | |
| | | 4 | 55 × 30 | 98.40 | | | |
| | | 5 | 40 × 25 | 97.70 | | | |
| | | 6 | 30 × 20 | 97.68 | | | |
Methods | Object Type (Low Density) | Total Objects | Resolution | Acc % | Object Type (High Density) | Total Objects | Resolution | Acc % |
---|---|---|---|---|---|---|---|---|
Faster-RCNN | Cars/Vehicles/LTVs | 8457 | As indicated in Table 3 | 98.70 | Cars/Vehicles/LTVs | 8457 | As indicated in Table 3 | 98.36 |
| | Motorcycles | 4136 | | 98.00 | Motorcycles | 4136 | | 98.35 |
| | Persons | 3025 | | NP | Persons | 3025 | | NP |
E2E | Cars/Vehicles/LTVs | 8457 | | 98.80 | Cars/Vehicles/LTVs | 8457 | | 98.50 |
| | Motorcycles | 4136 | | 98.15 | Motorcycles | 4136 | | 98.46 |
| | Persons | 3025 | | NP | Persons | 3025 | | NP |
DNN | Cars/Vehicles/LTVs | 8457 | | 99.20 | Cars/Vehicles/LTVs | 8457 | | 98.90 |
| | Motorcycles | 4136 | | 98.60 | Motorcycles | 4136 | | 98.90 |
| | Persons | 3025 | | NP | Persons | 3025 | | NP |
CA-CenterNet | Cars/Vehicles/LTVs | 8457 | | 99.30 | Cars/Vehicles/LTVs | 8457 | | 99.00 |
| | Motorcycles | 4136 | | 98.70 | Motorcycles | 4136 | | 98.95 |
| | Persons | 3025 | | NP | Persons | 3025 | | NP |