Article

Methodology of Functional and Technical Evaluation of Cooperative Intelligent Transport Systems and Its Practical Application

Faculty of Transportation Sciences, Czech Technical University in Prague, 110 00 Prague, Czech Republic
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(20), 9700; https://doi.org/10.3390/app11209700
Submission received: 9 September 2021 / Revised: 13 October 2021 / Accepted: 13 October 2021 / Published: 18 October 2021
(This article belongs to the Special Issue Intelligent Mobility in Smart Cities)

Abstract

In the area of smart cities, great emphasis is placed on many different fields, such as energy, information systems, and transportation. All of these should simplify everyday life thanks to smart technologies. In the transportation field, the main issues are safety, traffic efficiency, and the environment. A further condition is the successful acceptance of any new technology by its users. Cooperative systems prove to be a suitable solution for these issues, especially in urban areas. Today, pilot implementations of cooperative systems are being carried out in European countries. However, before they are put into full operation, they need to be tested, evaluated, and assessed. This article focuses on the latter two points, i.e., the evaluation and assessment of cooperative systems. For this purpose, a methodology was created that describes the procedure chosen for the evaluation and assessment of cooperative systems in the Czech Republic, and its use is demonstrated by example. The methodology covers three main areas: functional evaluation, user acceptance, and impact assessment. For user acceptance, the main source was questionnaires; impact assessment relied on measured data; and functional evaluation was based on discussions with the drivers evaluating the cooperative systems, the measured data, and expert observations. All collected and measured data were then processed, and selected results of the evaluation of the chosen service are presented at the end of this article.

1. Introduction

Today, Cooperative Intelligent Transport Systems (C-ITS) are becoming one of the main new technologies in transport. Their main goal is to increase traffic safety, improve traffic flow, and reduce the negative impact on the environment through the acquisition of targeted real-time traffic information and a rapid response to it.
C-ITS are based on the exchange of data between two vehicles via OBU (On-board unit) and between vehicles (OBU) and other elements such as RSU (Road-side unit), BO (Back-office), or mobile phones.
The basic principle of C-ITS is to send up-to-date information of various kinds such as a warning or a temporary event via C-ITS messages sufficiently in advance. These warnings can be divided according to the type of information they provide. A typical example might be a road works warning, a traffic jam ahead warning, or a weather condition warning. These events are generally called use cases. More information about C-ITS and its use cases can be found in [1,2,3].
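To make the notion of a use case more concrete, the following sketch shows one possible way such an event warning could be represented for evaluation purposes. It is only an illustration in Python; the field names and values are assumptions and do not follow the actual C-ITS message definitions.

```python
from dataclasses import dataclass
from enum import Enum

class UseCase(Enum):
    ROAD_WORKS_WARNING = "RWW"
    TRAFFIC_JAM_AHEAD = "TJA"
    WEATHER_CONDITION_WARNING = "WCW"
    PUBLIC_TRANSPORT_VEHICLE_CROSSING = "PTVC"

@dataclass
class CItsWarning:
    """Simplified, illustrative representation of one C-ITS event notification."""
    use_case: UseCase
    latitude: float            # event position (WGS-84)
    longitude: float
    relevance_radius_m: float  # zone in which the warning is presented to drivers
    valid_from: float          # UNIX timestamps delimiting event validity
    valid_to: float

# Example: a hypothetical tram-crossing warning with a 300 m relevance zone.
warning = CItsWarning(UseCase.PUBLIC_TRANSPORT_VEHICLE_CROSSING,
                      49.8209, 18.2625, 300.0, 1_631_000_000, 1_631_000_300)
print(warning)
```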
In terms of this article, it is important that certain use-cases are defined in C-ITS; the C-ITS user is informed about them in advance and can then decide how to react to the event. Before C-ITS is put into full operation, it must not only be tested but also evaluated and assessed.
One of the largest pilot projects dealing with the individual aspects of C-ITS in Europe is called C-Roads. C-Roads is a joint initiative of member states, road operators, and other partners involved in the testing and implementation of C-ITS, with a focus on interoperability and harmonization at the European level. C-Roads addresses all fundamental issues related to C-ITS, including evaluation and assessment, which are the main subject of this article.
At the national level, C-ITS can be deployed and tested at selected pilot sites. Furthermore, there can be an evaluation and assessment of these systems in the form of selected use-cases.
The evaluation process can start only after field testing has been successfully completed. The evaluation therefore no longer focuses on compliance with standards and specifications; instead, it is performed to determine the benefits of C-ITS.
This article deals with the methodology of C-ITS evaluation and assessment in the Czech Republic and demonstrates its use in practice in the form of the performed evaluations.
The presented methodology and example results have the advantage of working with real data from an existing C-ITS system in the field with test drivers, not just simulation or data from test circuits. Furthermore, the concept of two runs (with and without C-ITS technology) proved to be useful to compare these two approaches. The inclusion of additional sensors to evaluate the runs is also beneficial.

State of the Art

The essential document dealing with evaluation procedures is the FESTA Handbook [4]. It is a guideline document intended to gather knowledge from experts, stakeholders, workshops, and seminars about Field Operational Tests (FOT) and to create a common methodology. FOTs apply evaluation and assessment methods to driver support systems, testing newly developed or implemented systems to determine their real-world impact and benefits. The evaluation process is based on the so-called V diagram, which describes the individual steps recommended within an FOT.
The evaluation framework used in the InterCor project is based on the above-mentioned document [5]. This framework defines “what” needs to be evaluated. It defines the artifacts for input and output for evaluation, such as the research questions and hypotheses to be tested and answered, and the required key performance indicators and measurements from pilot data. In this framework, it is also stated that it is necessary to adjust the recommendations provided by the FESTA handbook according to the evaluated use-cases.
Another project dealing with C-ITS and thus its evaluation is the Nordicway project [6], which includes Finland, Sweden, and Denmark. The evaluation here was addressed in several different areas such as service ecosystem, user acceptance, or quality of service. For the methodology used in Nordicway 2, some steps defined in FESTA were used and adapted for the needs of evaluation.
The next project to be mentioned is called COOPERS [7,8]. Its impact assessment was first carried out on simulators, where it is possible to test risky situations. The impact assessment focused on speed profiles, acceleration, lane-changing behavior, and car-following gaps. The drivers also filled in post-drive questionnaires aimed at the usefulness and readiness of the information provided by C-ITS.
There are also other projects dealing with cooperative systems, including their evaluation, such as DRIVE C2X [9] and SCOOP [10]. For the SCOOP project, for example, the evaluation focuses not only on vehicle drivers but also on road operators, while the DRIVE C2X project focuses on another area of journey quality evaluation.
Another possibility of evaluation is the use of modeling tools. This approach is used, for example, in Italy [11] in the evaluation of platooning and its effects on safety or the environment.
In addition to the above-mentioned documents and articles, research is also moving towards modification and expansion of the FESTA handbook, e.g., [12,13].
In Australia, they are also working on evaluation based on cooperative awareness messages to assess safety impact [14].
Article [15] deals with a two-stage study comprising a structured interview with guidelines for assessing acceptance factors and an online study for their evaluation. The study also covers functions that cooperative driver assistance systems could provide in the future, such as braking or lane changes. An interesting conclusion reflects the different opinions of car and truck drivers on C-ITS.
Article [16] examines user acceptance of an automatic overtaking system based on cooperative systems. The user acceptance is focused on usefulness, ease of use, enjoyment, safety, etc.
Article [17] is focused on transport factors that affect cooperative systems. These are solved and measured using simulation tools. Challenges and difficulties are also mentioned here, such as choosing the right indicators or defining proper scenarios for determining these factors.
The next article [18] is aimed at the effect of HMI on driver perception. It compares two basic HMI modes: one-stage, i.e., informing about only one event (the one with the highest priority), and three-stage, i.e., informing about several events. Testing took place on a driving simulator. The bottom line is that drivers would prefer the three-stage warning system.
Another study [19] deals with the design of an interface to the infotainment systems in a vehicle. It contains an extensive study of articles dealing with this issue, focusing on the positives and negatives of these systems in order to mitigate the negatives.
Last but not least is the Evaluation and Assessment Plan [20], provided by the European working group within C-Roads and created by representatives of each C-Roads member state. This document provides recommendations and advice on how to evaluate and assess C-ITS correctly and covers user acceptance and impact assessment in the following areas: safety, traffic efficiency, and the environment. It is assumed that this plan will be further extended to other areas based on ongoing evaluations.

2. Materials and Methods

2.1. Evaluation Methodology

Each Member State within C-Roads approaches evaluation and assessment differently due to different specificities including the different types of use-cases or conditions that can be created on the pilot site in order to perform the evaluation.
The evaluation methodology described below deals with these specifics and is in accordance with both the FESTA handbook [4] and the evaluation and assessment plan of C-Roads [20].
The methodology, which covers the whole evaluation process, consists of:
  • Evaluation preparation,
  • Pilot site evaluation,
  • Evaluation and assessment.
In the Czech Republic, the part of evaluation and assessment covers only the following areas:
  • User acceptance,
  • Impact assessment,
  • Functional evaluation.
All the previously mentioned points are analyzed step by step below and finally illustrated by an example.

2.1.1. Evaluation Preparation

The preparation of the evaluation requires several points:
  • cooperation with all relevant partners in the project on:
    selection of an area for evaluation within the pilot site,
    selection of specific use-case(s),
    evaluation date,
    preparation of evaluation route,
    specific requirements related to the service concerned,
    provision of drivers to independently assess C-ITS,
  • preparation of specific questionnaires related to the evaluated services,
  • provision and preparation of a special “evaluation” vehicle equipped with HMI and other sensors for future assessment.
Drivers recruited to carry out the evaluation routes should be diverse in age, sex, mileage covered, driving experience, etc. This makes the evaluation assessment more relevant.

2.1.2. Pilot Site Evaluation

This phase takes place after all points from the evaluation preparation part have been agreed upon and deployed. Appropriate events should be arranged at the pilot site or on the evaluation route. There is always an effort to create the event (represented by a use case in terms of C-ITS) in such a way that its location is not affected by other circumstances and that it is really taking place in the given area. Unsuitable circumstances could significantly distort the results of the evaluation.
The typical course at the pilot site for the evaluation driver is as follows:
  • meeting of drivers at the agreed time and place,
  • filling in the relevant pre-ride questionnaires,
  • first evaluation ride (with C-ITS off),
  • second evaluation ride (with C-ITS on),
  • filling in the relevant post-ride questionnaires.
More information about the questionnaires can be found in Section 2.1.3. User Acceptance in this paper.
The aim of the evaluation is to identify the benefits resulting from the use of C-ITS services, but also their shortcomings, in order to gain knowledge for future improvements. For this reason, the driver should ideally travel the evaluation route twice. During the first ride, he has no C-ITS elements, in particular no HMI, to inform him about the event. During the second evaluation ride, these systems are switched on and the driver is informed of the event in advance via C-ITS. The goal of this procedure is to compare drivers' reactions to the traffic events (use cases) when using C-ITS services with the baseline behavior without any service, as sketched below. Impact assessment addresses these issues in detail in Section 2.1.3, and an example makes up part of Section 3.1.3.
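As a minimal sketch of how the two rides of each driver might be paired for the later with/without comparison, the following Python fragment groups recorded runs by driver. The `Run` structure and its field names are hypothetical and only stand in for the actual logged data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Run:
    driver_id: str
    c_its_on: bool
    speeds_kmh: List[float]   # speed samples recorded inside the evaluated zone

def paired_mean_speeds(runs: List[Run]):
    """Return per-driver (without C-ITS, with C-ITS) mean speeds for paired analysis."""
    by_driver = {}
    for run in runs:
        mean_speed = sum(run.speeds_kmh) / len(run.speeds_kmh)
        by_driver.setdefault(run.driver_id, {})[run.c_its_on] = mean_speed
    # keep only drivers who completed both rides, in a fixed (off, on) order
    return [(d[False], d[True]) for d in by_driver.values() if False in d and True in d]
```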
Evaluation results can be affected by many factors, for example weather changes (rain, strong wind, direct sunlight), lighting conditions (day/night), situation clarity (visibility at the intersection, objects blocking the view), the car used, traffic restrictions (speed, priority), traffic flow (low/high), and road topology (urban/extra-urban). These factors can influence driver behavior and lead to misleading results. For this reason, it is recommended that the evaluation be performed under the same conditions on the same test circuit for all drivers. In this way, the influence of random factors can be reduced and the true impact of the tested system on the driver can be determined more precisely.

2.1.3. Evaluation and Assessment

The last phase is the evaluation and assessment of all obtained data in three evaluation areas: user acceptance, safety impact, and functional evaluation.

User Acceptance

Four basic questionnaires were prepared for user acceptance (in accordance with WG3):
  • Driver’s profile,
  • General questions regarding C-ITS,
  • Questionnaires related to a specific use-case:
    Pre-ride,
    Post-ride.
For most questions in the driver’s profile questionnaire, the driver ticked off the appropriate option. The questions are from the following areas:
  • Type of driver,
  • Sex,
  • Age (in ranges),
  • Education,
  • Number of citizens in hometown (in range),
  • Type of a driving license,
  • The length of driver’s license ownership,
  • Number of driven kilometers annually,
  • Frequency of driving vehicle,
  • The number of (caused and uncaused) traffic accidents, penalty points,
  • Current sources of traffic information,
  • Preferred traffic information based on C-ITS.
General questions related to C-ITS focused mainly on two areas: willingness to pay for these services (regularly or once) and the opinion of drivers on the HMI and its distraction while driving.
Specific pre-ride questionnaires related to a given use-case do not always have the same questions; however, they focus on the driver's perception in the following areas:
  • Safety,
  • Overview of a situation,
  • Comfort of driving.
Specific post-ride questionnaires cover all the previous areas and additionally:
  • Usefulness of the service,
  • Satisfaction with the information obtained.
Samples of the results are shown in Section 3.1.3 User Acceptance.

Impact Assessment

The impact assessment for the different use-cases in C-Roads follows the recommendations and general guidelines in the Evaluation and Assessment Plan [20] proposed by WG3 within the C-Roads platform. These specifications were designed to unify the process of evaluation and assessment so that the results of the individual member states at their pilot sites are as transparent and comprehensible as possible for all C-ITS stakeholders. Furthermore, the plan gives general recommendations at the level of individual use-cases but leaves the method of implementation to the individual states.
In the Evaluation and assessment plan [20], the main three impact areas covering the expected benefits and impact of the C-ITS services are in line with national and European policies to increase safety and reduce the environmental impact of transportation. Each area of impact brings a slightly different view of the evaluation of changes in driver behavior and is linked to subsequent analysis and individual statistical methods that use the captured data.
Within the proposed methodology, the safety impact area was selected as the main part. For each use case, an analytical method was chosen based on the nature of the evaluated use case and the ability of the recording equipment to capture the difference in driver behavior with C-ITS as opposed to driving without C-ITS. The main comparison of the individual passages for all use-cases was the comparison of speeds and accelerations. To determine the significance of the difference between the two mean values (situations with and without C-ITS), a two-sided paired T-test was used when comparing speed and acceleration. This test evaluates whether the difference in the mean values of the two samples could be due to chance. Levene's test was performed on the same speed and acceleration comparisons to determine whether there was a statistical difference between the variances with and without C-ITS.
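The statistical tests themselves can be carried out with standard library routines. The following sketch, using SciPy, shows a two-sided paired T-test and Levene's test on per-driver mean speeds; the numbers are made up for illustration and do not come from the evaluation.

```python
import numpy as np
from scipy import stats

# Hypothetical per-driver mean speeds (km/h) inside the evaluated zone.
speed_without = np.array([17.2, 16.5, 18.0, 15.9, 16.8, 17.4, 16.1])
speed_with    = np.array([15.1, 14.3, 16.0, 14.8, 14.9, 15.5, 13.9])

# Two-sided paired T-test: is the difference in mean speed due to chance?
t_stat, p_mean = stats.ttest_rel(speed_without, speed_with)

# Levene's test: do the variances differ between the two conditions?
w_stat, p_var = stats.levene(speed_without, speed_with)

alpha = 0.05
print(f"paired t-test p = {p_mean:.4f} -> "
      f"{'significant' if p_mean < alpha else 'not significant'} difference in means")
print(f"Levene's test p = {p_var:.4f} -> "
      f"{'significant' if p_var < alpha else 'not significant'} difference in variances")
```

The same calls apply unchanged to the acceleration samples.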

Functional Evaluation

Functional evaluation in this methodology covers aspects of evaluation that are based on the experience of evaluators, evaluation drivers, and other stakeholders. It also deals with some problematic properties of use-cases. Functional evaluation is therefore divided into three areas:
  • lessons learned,
  • HMI,
  • quality of service.
Lessons learned are primarily formulated from the user's point of view, i.e., what the driver lacked in the use-case, what he would like to improve, etc. This area also covers improvements to the use-case in terms of implementation.
In the HMI section, improvements to the C-ITS message presentation are addressed.
Quality of service then focuses on information related to the message parameters, e.g., whether the communication range was adequate, whether the relevance zone was set correctly, etc.
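As an example of a quality-of-service check of the kind described above, the sketch below tests whether a warning was first received at an adequate distance before the event. The positions and the 300 m requirement are illustrative assumptions, not values from the pilot.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def range_adequate(event_pos, first_reception_pos, required_range_m=300.0):
    """True if the warning was first received at least `required_range_m` from the event."""
    distance = haversine_m(*event_pos, *first_reception_pos)
    return distance >= required_range_m

# Hypothetical check: the warning was first received roughly 215 m before the crossing,
# so an assumed 300 m requirement would not be met.
print(range_adequate((49.8209, 18.2625), (49.8228, 18.2631)))
```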

3. Results

3.1. Example Application of the Methodology

The previous section gave a brief theoretical description of the evaluation methodology. In this section, we describe the application of the methodology using a specific example, including a presentation of the results from this evaluation. Public transport vehicle crossing (PTVC) was chosen as an exemplary C-ITS use-case for presenting the results of the C-Roads CZ evaluation at the pilot sites.
The goal of the PTVC use-case is to notify road users in advance about trams (or other public transport vehicles) crossing the road on the expected route. This application is useful especially in problematic locations, where trams cross the roadway without the aid of traffic lights.

3.1.1. Initial Conditions

For evaluation purposes, we used a test vehicle equipped with an OBU Commsignia ITS OB4, a GPS data logger, an OBD2 CAN bus logging device, and an HMI. As part of the evaluation process, we also used the C-ITS SIM tool, which is described in more detail in article [21]. This SIM tool was the primary source for capturing the communication of the C-ITS units and for recording the data of the individual drivers' rides. A GPS logger (Canmore GP-102+) and a CAN bus data logger (CANedge1) were used as additional recording devices, mainly as a backup in case of failure of the main data source and for other experimental purposes.
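When the backup loggers are used, their records have to be aligned in time with the primary C-ITS SIM log. A minimal sketch of such an alignment by nearest timestamp is shown below; the log formats are simplified assumptions, as the real devices produce their own proprietary formats.

```python
import bisect

def nearest_backup_sample(primary_ts, backup):
    """Return the backup sample closest in time to a primary-log timestamp.

    `backup` is a list of (timestamp_s, value) tuples sorted by timestamp,
    e.g. speeds decoded from the CAN logger or positions from the GPS logger.
    """
    times = [t for t, _ in backup]
    i = bisect.bisect_left(times, primary_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(backup)]
    return min((backup[j] for j in candidates), key=lambda s: abs(s[0] - primary_ts))

# Hypothetical cross-check: compare a C-ITS SIM speed sample with the CAN backup.
can_speeds = [(100.0, 16.4), (100.5, 16.1), (101.0, 15.8)]   # (time [s], speed [km/h])
print(nearest_backup_sample(100.7, can_speeds))               # -> (100.5, 16.1)
```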

3.1.2. Evaluation Preparation + Pilot Site Evaluation

A timetable was set in advance for each test location and potential respondents were approached to participate in the evaluation. The goal was always to obtain a minimum of 15 diverse respondents for the evaluation of each use-case. Numbers varied for different locations as well as use-cases. The time for evaluation was always determined based on local conditions so that the evaluation would not be affected by local traffic problems or abnormalities.
The presented example of the PTVC use-case was implemented in the cities of Ostrava and Pilsen. Testing had already been successfully performed at these pilot sites.
The first task was to choose the location where the evaluation would take place. A tram crossing in the city of Ostrava, where the arriving vehicle has no possibility to see the approaching tram from a distance, was chosen for the evaluation.
The evaluation route was discussed with the relevant partners (DPO, the public transport operator in Ostrava) and is shown in Figure 1. The meeting point for evaluation drivers was point 1. Here, drivers filled out the driver's profile questionnaire and the pre-ride questionnaire. The driver then went on an evaluation drive to point 2, passing point 3, which marks the crossing with a tram. This passage was done without C-ITS. The evaluators at the site (point 3) had an overview of the approaching trams and, thanks to this, the driver was always instructed by the co-driver to start from point 2 so as to meet the tram in the crossing area. The main advantage of this use-case is that when the driver's visibility of the tram is reduced, as in this situation, the driver has information about the arrival of the tram before he sees it. His ride then continued back to point 1, where he filled out the post-ride questionnaire, ending his participation in the evaluation. The whole process was then repeated with the next driver for as long as respondents were available and it was technically and time-wise feasible to continue the evaluation. The length of the route for one participant was approximately 3 km and the journey time was approximately 13 min, including the waiting time for the arrival of the tram.
The evaluation took place at the same tram crossing for all drivers on two consecutive days, always from 9 a.m. to 1 p.m. As the evaluation was held on sunny summer days, the lighting conditions were comparable during all evaluation runs. Speed limits and traffic restrictions were the same on both days of testing. The tram crossing is located on a side street connecting to the main street, so traffic was constant and minimal during the whole day. The test vehicle used by all evaluated drivers was a Ford C-Max, shown in Figure 2 at the mentioned tram crossing.

3.1.3. Evaluation Assessment

The assessment results are divided according to the evaluation areas into user acceptance, safety impact, and functional evaluation.

User Acceptance

User acceptance results are presented using different types of graphs. The answers to most questions from the driver's profile questionnaires are shown in pie or bar charts. The diversity of drivers in terms of age, mileage covered, or the size of the city in which the driver lives can be seen in Figure 3, Figure 4 and Figure 5.
In addition, drivers were also asked what traffic information they would like to receive while driving. The results are shown in Figure 6: drivers mainly want to be informed about traffic jams, but also about specific situations. This is very positive in terms of C-ITS, as C-ITS is focused on exactly this type of information. On the other hand, drivers are not very interested in lane information.
In the other types of questionnaires, the answers to the questions were given on the following scale:
  • strongly disagree,
  • disagree,
  • neutral attitude,
  • agree,
  • strongly agree.
Table 1 shows the answers obtained before the ride. Drivers had an overall neutral attitude towards the HMI; however, this value had a relatively large variance. This is because the drivers are diverse and not every driver could imagine what a ride with an HMI would look like. The drivers also agreed (or strongly agreed) that they want to be informed in advance when a tram is approaching the crossing. They also think that safety and the overview of the situation near the crossing will increase.
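The descriptive statistics reported in Tables 1 and 2 (mean, median, mode, variance, standard deviation) can be obtained directly from the coded answers, as in the following sketch; the answer values are invented for illustration.

```python
import statistics

# Hypothetical pre-ride answers coded 1 (strongly disagree) .. 5 (strongly agree).
hmi_distraction = [2, 2, 4, 3, 5, 2, 4, 3, 2, 2, 5, 3, 4]

summary = {
    "mean": statistics.mean(hmi_distraction),
    "median": statistics.median(hmi_distraction),
    "mode": statistics.mode(hmi_distraction),
    "variance": statistics.variance(hmi_distraction),   # sample variance
    "std_dev": statistics.stdev(hmi_distraction),
}
print({k: round(v, 2) for k, v in summary.items()})
```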
The post-ride results regarding the registration of the information differ; see Figure 7. Here we can also see the different perceptions of drivers in terms of message registration, which again stems from the diversity of drivers: for some, the message was displayed too late and for some too early.
The post-ride answers are listed in Table 2. Unfortunately, the reported HMI distraction worsened compared to the pre-ride results. The most common value is no longer 2 (disagree) but 4 (agree). It clearly follows that the HMI design must focus on parameters such as type, position, and size. On the positive side, the perceived usefulness of the PTVC use-case is high (more than 50% strongly agree). Drivers also agree (to a slightly lesser extent) that the information about the tram approaching the crossing will increase safety.

Impact Assessment

For the public transport vehicle crossing, two main key performance indicators (KPIs) were chosen to assess the difference in driver behavior with and without the C-ITS warning. The two KPIs were selected according to the capabilities of the recording equipment and the data obtained during the evaluation of the PTVC: the first is the speed of the vehicle and the second is its acceleration. The data captured by the C-ITS evaluation logger were separated from the entire captured log according to whether they belonged to the zone significant for evaluating driver behavior.
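A minimal sketch of this zone-based separation and KPI extraction is given below; the crossing coordinates, zone radius, and sample fields are assumptions standing in for the real log content.

```python
import math

def dist_m(lat1, lon1, lat2, lon2):
    """Approximate planar distance in metres (adequate for a few hundred metres)."""
    k = 111_320.0  # metres per degree of latitude
    dx = (lon2 - lon1) * k * math.cos(math.radians(lat1))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

CROSSING = (49.8209, 18.2625)   # hypothetical position of the tram crossing
ZONE_RADIUS_M = 150.0           # assumed extent of the evaluated zone

def in_evaluation_zone(sample):
    """sample: dict with 'lat', 'lon', 'speed_kmh', 'accel_ms2' from the C-ITS log."""
    return dist_m(sample["lat"], sample["lon"], *CROSSING) <= ZONE_RADIUS_M

def zone_kpis(samples):
    zone = [s for s in samples if in_evaluation_zone(s)]
    speeds = [s["speed_kmh"] for s in zone]
    accels = [s["accel_ms2"] for s in zone]
    return {"mean_speed_kmh": sum(speeds) / len(speeds),
            "mean_accel_ms2": sum(accels) / len(accels)}
```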
The average driver speed is shown in the box plot in Figure 8, which examines how the actual speed changed with the C-ITS warning about the approaching public transport vehicle. Comparing the two passages, we can observe a speed reduction (approx. 4 km/h) during the passage with the C-ITS unit.
This trend can also be observed in Table 3, which lists the mean, maximum, and minimum speeds. When driving with a C-ITS unit, drivers also generally had a more uniform speed, which is visible in the smaller standard deviation (lower by 1.65 km/h with C-ITS) and the smaller range (lower by 1.17 km/h). Drivers also tended to have a lower average maximum speed (by about 2.36 km/h) and a lower average minimum speed (by about 1.91 km/h), measured over all vehicles.
When performing a T-test of the statistical significance of the difference between the mean values at the 95% confidence level, the resulting p-value is equal to 4.23 × 10⁻⁹, and we therefore consider the mean speeds to be statistically significantly different. When performing Levene's test on the speeds of both rides, the p-value was 3.43 × 10⁻⁸, so we can say that the probability that the difference in variances is caused by chance is minimal.
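For illustration, the box plot and the Table 3-style summary can be produced as sketched below with NumPy and Matplotlib; the synthetic samples only mimic the rough magnitudes reported above and are not the measured data.

```python
import numpy as np
import matplotlib.pyplot as plt

def speed_summary(speeds_kmh):
    """Summary in the style of Table 3 (all values in km/h)."""
    s = np.asarray(speeds_kmh)
    return {"mean": s.mean(), "max": s.max(), "min": s.min(),
            "range": s.max() - s.min(), "std": s.std(ddof=1)}

# Synthetic per-sample speeds pooled over all drivers for the two conditions.
rng = np.random.default_rng(42)
without_cits = rng.normal(16.7, 9.4, 500).clip(min=0)
with_cits = rng.normal(14.8, 7.7, 500).clip(min=0)

print(speed_summary(without_cits))
print(speed_summary(with_cits))

plt.boxplot([without_cits, with_cits])
plt.xticks([1, 2], ["Without C-ITS", "With C-ITS"])
plt.ylabel("Speed [km/h]")
plt.title("Speed in the evaluated zone")
plt.show()
```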
The second KPI selected for comparison was the acceleration, which is shown in Figure 9 and Table 4. The acceleration comparison does not show large differences with the use of a C-ITS unit; in both cases, the acceleration is centered around zero. The maximum and minimum acceleration values do not indicate significant braking or acceleration. When using a C-ITS unit, greater uniformity of driver acceleration can be seen in the range (1.35 m/s² smaller with C-ITS) and the standard deviation (0.17 m/s² smaller with C-ITS), as well as less pronounced average braking than without a C-ITS device. Sharp braking, defined as a deceleration exceeding 5 m/s² during the evaluation rides, did not occur in either of the two passes.
When performing a T-test of the statistical significance of the difference between the mean values at the 95% confidence level, the resulting p-value is equal to 0.13, and the difference in mean acceleration can therefore be caused by a random phenomenon. On the other hand, when applying Levene's test to the accelerations of both rides, the p-value was 2.16 × 10⁻⁵, so we can say that the probability that the difference in variances is caused by chance is very small.
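The sharp-braking criterion mentioned above can be checked directly on the recorded speed trace. The sketch below flags samples whose deceleration exceeds 5 m/s²; the sampling interval and the example values are assumptions.

```python
def sharp_braking_events(speeds_kmh, dt_s=0.1, threshold_ms2=5.0):
    """Return indices where deceleration exceeds the sharp-braking threshold.

    speeds_kmh: equally spaced speed samples; dt_s: sampling interval in seconds.
    """
    events = []
    for i in range(1, len(speeds_kmh)):
        accel = (speeds_kmh[i] - speeds_kmh[i - 1]) / 3.6 / dt_s   # km/h -> m/s
        if accel < -threshold_ms2:
            events.append(i)
    return events

# Example: a drop from 29.5 km/h to 27.5 km/h within 0.1 s is about 5.6 m/s² of deceleration.
print(sharp_braking_events([30.0, 29.5, 27.5, 27.0]))   # -> [2]
```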

Functional Evaluation

The results of the functional evaluation are divided into three parts according to the methodology.
1. Lessons learned:
The evaluation was carried out at a place where the tram emerges from a dense forest. Because of that, it sometimes happened that the C-ITS message was not received in time. One solution could be to retransmit the message via an RSU directly at the crossing.
There are also a few recommendations filled in by the drivers:
  • to inform the driver of the remaining time until the tram crosses,
  • to inform the driver of the tram speed.
2. Quality of service:
As mentioned in Lessons learned, the main issue was the late reception of the information about the crossing, caused by insufficient event coverage. This issue negatively affects other important technical parameters such as latency and accuracy.
3. HMI:
For illustration, the warning for the public transport vehicle crossing is depicted in Figure 10. Drivers were generally satisfied with the information from the HMI, while some drivers would welcome a larger pictogram for this use-case.
Figure 10 also shows the distance to the event, which is 22 m. While the vehicle is waiting for the tram to pass, the word “Now” is displayed instead of the distance. Some drivers would simply omit this word.
The general comments regarding the HMI also show that one of the problems that needs to be addressed in the future is the size and position of the HMI used to inform the driver.

4. Discussion

The main objective of the C-ITS evaluation is to assess whether notifying the driver of an incident on the road ahead improves traffic safety, driving comfort, and readiness for the approaching event. A questionnaire survey was conducted to gain insight into the drivers' views on C-ITS, the HMI, and the overall warning system.
This can be seen in the example of the PTVC use-case, its perceived usefulness, and its implementation. Before the ride, the drivers were slightly inclined to believe that they would be distracted by the warning, and after the ride a few more drivers shared this view. As the end user of C-ITS is the car driver, this should be seen as a good opportunity to obtain data and feedback from surveys on possible improvements and shortcomings of C-ITS and its impact on drivers.
The main result of the Impact Assessment analysis was a slight increase in caution when driving with C-ITS with an average reduction in driver speed, together with greater uniformity of measured speed and acceleration. It also turned out that the drivers braked less but also accelerated less using the C-ITS equipment. A big challenge is the effort to increase the accuracy of the measured data by filtering out external influences affecting the driver during the evaluation. As it is not possible to create the exact same driving conditions for all drivers, such as the impact of other traffic on drivers or the weather, it is necessary to take such factors into account in the analysis and reduce their effect on the overall conclusion by a larger number of evaluated passes. If an effort is made to balance the conditions for all evaluated drivers as much as possible, for example, by road closure or closed-circuit testing, there is a possibility that we expose drivers to an inauthentic environment in which they will not behave as they would in a real event.
Functional evaluation provides important findings obtained during the evaluation, which correlate significantly with the drivers' opinion of the evaluated system. In this way, late incoming messages and insufficiently large warning pictograms were detected. One of the important outputs is the subsequent implementation of these findings, from both functional evaluation and user acceptance, in C-ITS and its HMI. For the subsequent expansion and improvement of the methodology, the possibility of using more vehicles, and thus a larger number of tested drivers for the same use-case at the same place and time, is required. This gives a better opportunity to capture a true picture of the C-ITS system for a wide audience, which allows better subsequent implementation of corrective measures in the HMI and the entire C-ITS system.

5. Conclusions

The main contribution of this article is the creation of an evaluation methodology, including the specifics of the Czech Republic, that is in line with the Evaluation and Assessment Plan and the FESTA handbook.
The methodology is divided into three parts: Evaluation Preparation, Pilot Site Evaluation, and Evaluation and Assessment. Evaluation and Assessment is further divided into three parts: User Acceptance, Impact Assessment, and Functional Evaluation. Each part of the methodology is described and explained within the article and illustrated by an example of one specific use-case.
User acceptance showed the drivers' positive view of the tested PTVC use-case, together with its usefulness and implementation. The display of the HMI and the distraction of drivers caused by the displayed warning proved to be a significant factor in the evaluation of C-ITS, as well as a challenge for future studies. Impact assessment indicated a positive impact on driver behavior, with reduced speed and more frequent slowing down when using C-ITS. To better understand and validate driver behavior with C-ITS systems, future studies should focus on streamlining the evaluation process for larger numbers of drivers.
Reference evaluation tests with specific responses to the event were conducted according to the proposed methodology and subsequently evaluated. As the results of the reference tests corresponded to the specified parameters, we consider the methodology to be valid.
Generally speaking, the evaluation should not be underestimated in terms of newly developed systems. If we focus on the C-ITS area we can see that the conclusions from the example are essential for the further development of C-ITS.

Author Contributions

Conceptualization, M.V.; Testing and methodology, M.V., M.Š. and M.M.; data processing, M.M.; project administration and management, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union (CEF) within the C-ROADS project, grant number C-Roads_CZ_2015-CZ-TM-0188-M.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Two types of data were used in this article: questionnaire survey data and logs recorded during the evaluation. The data from the questionnaire survey are covered by the GDPR Act and therefore it is not possible to publish them. The data that were used to evaluate the impact assessment are available from the authors on official request.

Acknowledgments

The published outputs were supported by volunteers who joined the testing procedure and shared their personal data for the evaluation. The testing was also performed with the support of project partners (e.g., support of the back-office and control of the infrastructure equipment).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

BO: Back Office
CAN: Controller Area Network
CEF: Connecting Europe Facility
C-ITS: Cooperative Intelligent Transport Systems
C-ITS SIM: Cooperative Intelligent Transport Systems Simulator
COOPERS: Co-operative Networks for Intelligent Road Safety
DPO: Public Transport in Ostrava
FESTA: Field opErational teSt supporT Action
FOT: Field Operational Tests
GDPR: General Data Protection Regulation
GPS: Global Positioning System
HMI: Human Machine Interface
KPI: Key Performance Indicator
OBD2: On-Board Diagnostics II
OBU: On-Board Unit
PTVC: Public Transport Vehicle Crossing
RSU: Road Side Unit
SCOOP: Pilot project for the deployment of C-ITS
WG: Work Group

References

  1. Srotyr, M.; Zelinka, T.; Lokaj, Z. Pilot applications of cooperative systems. In Proceedings of the 2016 Smart Cities Symposium Prague (SCSP), Prague, Czech Republic, 26–27 May 2016.
  2. Srotyr, M.; Zelinka, T.; Lokaj, Z. Hybrid communication solution for C-ITS and its evaluation. In Proceedings of the 2017 Smart Cities Symposium Prague (SCSP), Prague, Czech Republic, 25–26 May 2017.
  3. C-Roads Platform. Harmonized C-ITS Specifications for Europe—Release 1.7. 2020. Available online: https://www.c-roads.eu/platform/about/news/News/entry/show/release-17-of-c-roads-harmonized-c-its-specifications.html (accessed on 28 August 2021).
  4. FESTA Handbook Version 7, December 2018. Available online: https://www.connectedautomateddriving.eu/wp-content/uploads/2019/01/FESTA-Handbook-Version-7.pdf (accessed on 28 August 2021).
  5. Crockford, G.; Netten, B.; Wadsworth, P. Establishing a common approach to evaluating the InterCor C-ITS pilot project. In Proceedings of the 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), Porto, Portugal, 3–6 June 2018; pp. 1–2.
  6. Innamaa, S.; Koskinen, S.; Kauvo, K. NordicWay Evaluation Outcome Report: Finland. 2017. Available online: https://uploads-ssl.webflow.com/5c487d8f7febe4125879c2d8/5c5c02add707426eba903956_NordicWay%20Evaluation%20Outcome%20Report%20M_13%20(secured).pdf (accessed on 28 August 2021).
  7. Farah, H.; Koutsopoulos, H.N.; Saifuzzaman, M.; Kölbl, R.; Fuchs, S.; Bankosegger, D. Evaluation of the effect of cooperative infrastructure-to-vehicle systems on driver behavior. Transp. Res. Part C Emerg. Technol. 2012, 21, 42–56.
  8. Böhm, M.; Fuchs, S.; Pfliegl, R.; Kölbl, R. Driver behavior and user acceptance of cooperative systems based on infrastructure-to-vehicle communication. Transp. Res. Rec. 2009, 2129, 136–144.
  9. Malone, K.; Innamaa, S.; Hogema, J. Impact assessment of cooperative systems in the DRIVE C2X project. In Proceedings of the 22nd World Congress on Intelligent Transport Systems, Bordeaux, France, 1 January 2015; ERTICO-ITS Europe: Brussels, Belgium, 2015.
  10. Esposito, M.C. Scoop@idf: Implementation of cooperative systems for a road operator. Transp. Res. Procedia 2016, 14, 4582–4591.
  11. Agriesti, S.; Gandini, P.; Marchionni, G.; Paglino, V.; Ponti, M.; Studer, L. Evaluation approach for a combined implementation of day 1 C-ITS and truck platooning. In Proceedings of the 2018 IEEE 87th Vehicular Technology Conference (VTC Spring), Porto, Portugal, 3–6 June 2018; pp. 1–6.
  12. Barnard, Y.; Innamaa, S.; Koskinen, S.; Gellerman, H.; Svanberg, E.; Chen, H. Methodology for field operational tests of automated vehicles. Transp. Res. Procedia 2016, 14, 2188–2196.
  13. Festag, A.; Le, L.; Goleva, M. Field operational tests for cooperative systems: A tussle between research, standardization and deployment. In Proceedings of the Eighth ACM International Workshop on Vehicular Inter-Networking, Las Vegas, USA, 23 September 2011; pp. 73–78.
  14. Elhenawy, M.; Bond, A.; Rakotonirainy, A. C-ITS safety evaluation methodology based on cooperative awareness messages. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2471–2477.
  15. Fank, J.; Knies, C.; Diermeyer, F.; Prasch, L.; Reinhardt, J.; Bengler, K. Factors for user acceptance of cooperative assistance systems. A two-step study assessing cooperative driving. In Proceedings of the 8th Tagung der Fahrerassistenz, München, Germany, 22–23 November 2017.
  16. Kauer, M.; Franz, B.; Schreiber, M.; Bruder, R.; Geyer, S. User acceptance of cooperative maneuver-based driving—A summary of three studies. Work 2012, 41 (Suppl. S1), 4258–4264.
  17. Liu, F.; Pueboobpaphan, R.; van Arem, B. Assessment of traffic impact on future cooperative driving systems: Challenges and considerations. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops, St. Petersburg, Russia, 12–14 October 2009; pp. 1–5.
  18. Jizba, T. Influence of HMI ergonomy on drivers in cooperative systems area. Acta Polytech. CTU Proc. 2017, 12, 42–49.
  19. Agudelo, A.F.; Bambague, D.F.; Collazos, C.A.; Luna-García, H.; Fardoun, H. Design guide for interfaces of automotive infotainment systems based on value sensitive design: A systematic review of the literature. In Proceedings of the VI Iberoamerican Conference of Computer Human Interaction (HCI 2020), Arequipa, Perú, 16–18 September 2020.
  20. C-Roads Platform. Evaluation & Assessment Plan. 2019. Available online: https://www.c-roads.eu/fileadmin/user_upload/media/Dokumente/C-Roads_WG3_Evaluation_and_Assessment_Plan_version_June19_adopted_by_Countries_Final.pdf (accessed on 28 August 2021).
  21. Lokaj, Z.; Srotyr, M.; Vanis, M.; Broz, J.; Mlada, M. C-ITS SIM as a tool for V2X communication and its validity assessment. In Proceedings of the Smart Cities Symposium Prague (SCSP), Prague, Czech Republic, 27–28 May 2021; IEEE Press: New York, NY, USA, 2021. ISBN 978-1-6654-3033-3.
Figure 1. Evaluation route within the Public transport vehicle crossing use-case.
Figure 2. The test vehicle on the tram crossing under evaluation ride (point 3 on the map above).
Figure 3. Representation of drivers in terms of age.
Figure 4. Representation of drivers in terms of mileage per year.
Figure 5. Representation of drivers in terms of the size of the driver’s city.
Figure 6. Types of information that drivers would welcome while driving.
Figure 7. Information registration.
Figure 8. Box plot of speed for all tested vehicles.
Figure 9. Box plot of acceleration for all tested vehicles.
Figure 10. HMI of Public transport vehicle crossing use-case.
Table 1. The results from pre-ride questionnaires, including general questions and questions specific to the Public Transport Vehicle Crossing use-case.

Pre-Ride Questions Related to | Mean Value | Median Value | Mode | Variance | Standard Deviation
HMI distraction | 3.31 | 3 | 2 | 1.23 | 1.11
Information about trams approaching the crossing | 4.69 | 5 | 5 | 0.23 | 0.48
Safety increase | 4.62 | 5 | 5 | 0.26 | 0.51
Overview of the situation | 4.31 | 4 | 4 | 0.40 | 0.63
Table 2. The results from post-ride questionnaires, including general questions and questions specific to the Public Transport Vehicle Crossing use-case.

Post-Ride Questions Related to | Mean Value | Median Value | Mode | Variance | Standard Deviation
HMI distraction | 3.23 | 3 | 4 | 1.03 | 1.01
Usefulness | 4.31 | 5 | 5 | 0.90 | 0.95
Safety increase | 3.85 | 4 | 4 | 0.97 | 0.99
Table 3. Speed statistics for all tested vehicles.

 | Mean Speed [km/h] | Mean Max Speed [km/h] | Mean Min Speed [km/h] | Speed Range [km/h] | Standard Deviation [km/h]
Without C-ITS | 16.71 | 30.26 | 5.70 | 33.43 | 9.36
With C-ITS | 14.76 | 27.90 | 3.79 | 32.26 | 7.71
Table 4. Acceleration statistics for all tested vehicles.

 | Mean Acceleration [m/s²] | Mean Max Acceleration [m/s²] | Mean Min Acceleration [m/s²] | Acceleration Range [m/s²] | Standard Deviation [m/s²]
Without C-ITS | 0.000406 | 1.15 | −1.73 | 4.65 | 0.69
With C-ITS | 0.059209 | 1.17 | −1.05 | 3.30 | 0.52
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


