1. Introduction
In the modern automotive industry, intelligent driver assistance systems, including traffic sign recognition systems, are increasingly important for enhancing traffic safety and the driving experience. These systems enable vehicles to recognize and interpret traffic signs, making this information immediately accessible to the driver, thereby improving safety and compliance with traffic regulations. However, testing and developing these systems is challenging because real-world testing is costly and time-consuming, and the replication of various traffic situations is limited [1]. This study aims to create a laboratory environment that enables vehicle-level testing of traffic sign recognition systems in a cost-effective manner. The basic concept involves using an existing two-axis roller dynamometer and the IPG CarMaker v11.0 simulation software. The two-axis roller dynamometer enables realistic simulation of the vehicle’s speed and acceleration, while the IPG CarMaker software facilitates the creation of various road segments and traffic signs in a virtual environment.
IPG CarMaker is advanced simulation software that enables precise modeling of vehicle dynamics and environmental conditions. Through this software, different road segments and traffic scenarios, including various traffic signs, can be simulated. The system can simulate road segments in real time, allowing for the realistic recognition and interpretation of traffic signs appearing before the vehicle’s camera. Moreover, the simulation environment allows the testing of traffic signs from different countries without the need to travel [2,3]. This method saves costs by eliminating long journeys and the time spent changing signs. Simulating different traffic signs in a laboratory setting is quick and simple, enabling rapid development and refinement of the sign recognition system. Additionally, laboratory testing offers an environmentally friendly solution by reducing CO2 emissions: moving the vehicle on the roller dynamometer requires less energy than real-road testing, especially at speeds above 50 km/h. Furthermore, the laboratory environment allows for repeatability in testing processes, which is crucial for evaluating reliability and accuracy [4].
To establish the laboratory environment, the two-axis roller dynamometer and the IPG CarMaker simulation software must first be integrated. The roller dynamometer simulates the vehicle’s real speed and acceleration, while the CarMaker software constructs the road segments and the traffic signs to be examined. The simulation image is projected onto two large TV screens placed in front of the vehicle’s camera, providing a wide-angle display [5]. During laboratory testing, the performance of the traffic sign recognition system is evaluated with various traffic signs. Our goal is for the system to achieve at least 90% accuracy in recognizing signs in the simulated environment. Additionally, we assess the energy and cost savings between simulation and real-world testing [6,7]. Upon successful application of the test environment, we plan further developments, including the simulation and testing of traffic sign recognition systems from different countries based on related standards. Future research will focus on enhancing the simulation environment and better integrating real-time data to provide an even more accurate and reliable testing environment [8,9].
The continuous development and proliferation of intelligent driver assistance systems in the modern automotive industry necessitate the creation of effective and reliable testing methods. Testing traffic sign recognition systems in a laboratory environment offers significant advantages, including cost-effectiveness, environmentally friendly solutions, and repeatability of testing processes. The proposed system allows for faster and more efficient development and refinement of vehicle traffic sign recognition systems, ultimately contributing to increased traffic safety and improved driving experience.
2. Materials and Methods
For the laboratory examination of camera-based traffic sign recognition systems, a display is primarily needed to provide information to the camera. For the normal operation of the sign recognition system, the vehicle must be in motion, which can be easily achieved on a chassis dynamometer (Energotest TMP-700 4WD). This way, the vehicle does not need to be disassembled, and its speed sensors do not need to be manipulated. The tests were conducted in the laboratory of the Széchenyi István University Zalaegerszeg Innovation Park. A two-axis performance measuring roller bench assisted the vehicle in keeping its wheels rotating. For roller bench measurement, the vehicle was secured by fastening it at the front to the control arms and at the rear to the towing points. The test vehicle was primarily in two-wheel-drive mode, with the non-driven axle wheels set to the same speed using the roller bench’s electric motor. This ensured that the vehicle’s wheel speed sensors received identical values, preventing the stability control system from activating during the measurement.
The test vehicle was a Lexus RX450h hybrid passenger car equipped with radar and ultrasonic sensors, in addition to the camera, to implement driver assistance functions. The vehicle’s brake system could be activated based on signals from the radar and ultrasonic sensors. Therefore, these sensors had to be disabled to prevent the vehicle from initiating emergency braking while traveling at a high speed on the roller bench. Foam material was applied in front of the ultrasonic sensors to absorb the signals and prevent the vehicle from attempting to brake. Radar-measured values could also trigger braking, but no shielding was necessary for this. Once the vehicle started but the sensors did not detect movement, the vehicle system considered the radar-measured values faulty, generating an error code and automatically deactivating the emergency braking function along with the following distance maintenance system. After testing, this error code had to be cleared, and the vehicle needed to be driven on the road to restore the system.
The test vehicle was also equipped with an aftermarket camera primarily used for developing autonomous driving functions. This allowed us to observe the differences between what the vehicle sees on the road and what it sees on the display placed in front of it in the lab. However, the recognition of traffic signs was carried out by the vehicle’s own system; we had no influence on its operation. The retrofitted camera was used only to record what the camera sees when the vehicle recognizes the signs. Traffic sign recognition was tested in two ways: first, by placing a single TV (LG 75″ 75UN71003LC 4K UHD Smart TV) in front of the vehicle, and second, by placing two TVs side by side to better cover the camera’s field of view. In both cases, the same simulation image ran on the TV displays, and the signs recognized by the vehicle system were compared. The simulation image depicted a simple straight road segment with speed limit signs (30, 50, 60, 70, 90 km/h) placed on the right side of the road, followed finally by a national speed limit sign. The displays were placed as close to the vehicle as possible to maximize the coverage of the camera’s view. The system setup is shown in Figure 1.
The simulation was implemented in the IPG CarMaker software, where the speed of the simulated vehicle had to match the speed of the roller bench. The real-time transmission of speed data was achieved through MATLAB R2020a, as IPG CarMaker supports this software. The roller bench speed data could not be directly imported into Simulink because they travel on a CAN communication network. The CAN data had to be converted into a form that could be transmitted via USB, which was achieved using a microcontroller and its associated CAN data processing module. The roller bench network transmits speed data at 421,052 bit/s, necessitating a custom program to convert the data. The data stream of the IPG CarMaker simulation is constructed in a Simulink file, allowing the roller bench’s measured speed to be channeled into it. In the simulation, vehicle speed is not an input but an output value; therefore, the speed could not be introduced directly. Instead, the roller bench speed had to be compared with the simulated vehicle’s speed, and based on this comparison, the throttle and brake pedals of the simulated vehicle were controlled. The principal structure of the data flow is shown in Figure 2.
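The speed-comparison step described above can be sketched as a simple proportional pedal controller. The actual implementation runs inside Simulink; the function name, gain, and dead band below are illustrative assumptions, not the authors’ tuned values.

```python
def speed_follower(v_bench_kmh, v_sim_kmh, kp=0.05, dead_band=0.5):
    """Map the roller bench vs. simulated speed difference to
    throttle and brake pedal positions in the range 0..1.

    Hypothetical sketch: a proportional term with a small dead
    band so the pedals do not chatter around zero error.
    """
    error = v_bench_kmh - v_sim_kmh
    if abs(error) <= dead_band:
        # Speeds agree closely enough: release both pedals.
        return 0.0, 0.0
    if error > 0:
        # Simulated vehicle is too slow: apply throttle.
        return min(1.0, kp * error), 0.0
    # Simulated vehicle is too fast: apply brake.
    return 0.0, min(1.0, kp * -error)
```

In the real setup this comparison closes the loop each simulation step, so the virtual vehicle tracks the measured roller bench speed.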
3. Results
Access to the built-in camera’s image of the test vehicle was not possible; therefore, conclusions had to be drawn from the aftermarket camera’s image. The recognized sign appears on the vehicle’s dashboard, indicating whether the traffic sign was successfully recognized. By recording video footage, it was possible to determine the position and size of the signs in the camera’s field of view. Images were extracted from the footage, each consisting of two overlaid camera frames: the first captures the moment the sign came into view, and the second the final moment it was visible. These two frames were overlaid at 50% transparency, and the top and bottom points of the sign were connected with drawn red lines. This method traces the path of the sign within the camera’s field of view from the center to the right edge. During measurements on the roller bench, the sign’s image could not traverse the entire camera field of view due to the size of the TV. However, by extending the red lines, it could be determined that the sign appeared in nearly the same area for as long as it was visible.
Figure 3 shows the visual results of the road and the two laboratory measurements.
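The 50% overlay used for this analysis amounts to a per-pixel average of the first and last frames. A minimal sketch with NumPy, assuming both frames are already extracted as same-size arrays (the function name is ours, not part of the original workflow):

```python
import numpy as np

def overlay_frames(first, last, alpha=0.5):
    """Blend two equally sized frames with the given transparency.

    With alpha = 0.5 this reproduces the 50% overlay used to trace
    a sign's path between its first and last appearance. Blending
    is done in float to avoid uint8 overflow, then cast back.
    """
    blended = alpha * first.astype(np.float64) \
        + (1.0 - alpha) * last.astype(np.float64)
    return blended.astype(np.uint8)
```

The red path lines would then be drawn on the blended image between the sign’s corner points in the two source frames.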
The road test was carried out for comparability; in this case, the environment could completely cover the camera image. In the simulated road section, six signs were placed: the first five indicated speed limits and the last one indicated the end of the speed limit. When only one TV was placed in front of the test vehicle, the system recognized all the signs. However, when two TVs were used, covering a larger portion of the camera’s field of view, there was one difference. The vehicle did not recognize the first sign indicating a 30 km/h speed limit. The tests were repeated multiple times, consistently yielding the same results.
Based on the tests, the manipulation of the sign recognition camera was successful. Since the tests were conducted with a commercially available vehicle, it is assumed that the system works on the road and can recognize the specified types of signs. The examinations indicate that projecting a digital image in front of the vehicle enables the sign recognition system to function just as it would on an actual road. The experiment yielded an unexpected result: the vehicle did not recognize one of the signs on the larger display, whereas it did on the smaller display. The test’s success was confirmed by the vehicle running on the roller bench with the simulation speed matching the vehicle speed. During the investigation, using the aftermarket camera, it was determined that the signs projected in front of the vehicle in the lab appeared in the same region as they would in reality. Despite the signs moving within a significantly smaller area of the projected image, the vehicle’s sign recognition system was able to correctly detect the signs.
4. Conclusions
The aim of the study was to implement the first step in testing camera-based traffic sign recognition systems in passenger vehicles by creating an environment where the camera system can function as it would on public roads. To achieve this, the vehicle was placed on a chassis dynamometer to drive its wheels at appropriate speeds. The roller dynamometer’s speed was successfully integrated into the simulation software through several stages, providing an image to the vehicle’s camera via TVs placed in front of it, thereby matching the simulation speed with the vehicle’s actual speed. The operation of the traffic sign recognition system was examined in two scenarios: one with a single display and another with two displays placed in front of the vehicle. The traffic signs on the screens moved through the same region of the camera image as they would in reality. In both cases, however, the displayed area covered a smaller field than the field of view of the aftermarket camera. In the first scenario, where a single TV was placed in front of the vehicle, the system recognized all the signs. When two TVs were used, the system failed to recognize the first sign (30 km/h speed limit), though it recognized the others. This discrepancy persisted across multiple test iterations, with no explanation found for this difference. The tests demonstrated the feasibility of creating an environment where the camera-based traffic sign recognition system can operate as it would in real-world conditions. This approach facilitates cost-effective testing of future camera systems, eliminating the need for expensive on-road or test track examinations.
Author Contributions
Conceptualization, R.P. and M.J.; methodology, R.P. and M.J.; software, D.J.; validation, R.P., M.J. and D.J.; investigation, R.P., M.J. and D.J.; data curation, R.P. and D.J.; writing—original draft preparation, R.P. and M.J.; writing—review and editing, R.P. and M.J. All authors have read and agreed to the published version of the manuscript.
Funding
The research and the APC were financed by Széchenyi István University.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Hasanujjaman, M.; Chowdhury, M.Z.; Jang, Y.M. Sensor fusion in autonomous vehicle with traffic surveillance camera system: Detection, localization, and AI networking. Sensors 2023, 23, 3335.
2. Gulino, M.S.; Fiorentino, A.; Vangi, D. Prospective and retrospective performance assessment of Advanced Driver Assistance Systems in imminent collision scenarios: The CMI-Vr approach. Eur. Transp. Res. Rev. 2022, 14, 3.
3. Hong, C.J.; Aparow, V.R. System configuration of human-in-the-loop simulation for level 3 autonomous vehicle using IPG CarMaker. In Proceedings of the 2021 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), Bali, Indonesia, 25–27 November 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 215–221.
4. Park, C.; Chung, S.; Lee, H. Vehicle-in-the-loop in global coordinates for advanced driver assistance system. Appl. Sci. 2020, 10, 2645.
5. Mihalič, F.; Truntič, M.; Hren, A. Hardware-in-the-loop simulations: A historical overview of engineering challenges. Electronics 2022, 11, 2462.
6. Triki, N.; Karray, M.; Ksantini, M. A comprehensive survey and analysis of traffic sign recognition systems with hardware implementation. IEEE Access 2024, 12, 144069–144081.
7. Jomnonkwao, S.; Champahom, T.; Ratanavaraha, V. Methodologies for determining the service quality of the intercity rail service based on users’ perceptions and expectations in Thailand. Sustainability 2020, 12, 4259.
8. Meng, Z.; Zhao, S.; Chen, H.; Hu, M.; Tang, Y.; Song, Y. The vehicle testing based on digital twins theory for autonomous vehicles. IEEE J. Radio Freq. Identif. 2022, 6, 710–714.
9. Schallauer, D.; Soteropoulos, A.; Cornet, H.; Klar, W.; Fürdös, A. Regulatory frameworks for testing automated vehicles: Comparative analysis of national regulations and key aspects for a sustainable implementation. In Sustainable Automated and Connected Transport; Emerald Publishing Limited: Leeds, UK, 2024; pp. 101–117.