Article

Do Crash Barriers and Fences Have an Impact on Wildlife–Vehicle Collisions?—An Artificial Intelligence and GIS-Based Analysis

by Raphaela Pagany 1,2,* and Wolfgang Dorner 1
1 Institute for Applied Informatics, Technologie Campus Freyung, Deggendorf Institute of Technology, Grafenauer Straße 22, 94078 Freyung, Germany
2 Interfaculty Department of Geoinformatics Z_GIS, University of Salzburg, Schillerstraße 50, 5020 Salzburg, Austria
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2019, 8(2), 66; https://doi.org/10.3390/ijgi8020066
Submission received: 15 December 2018 / Revised: 23 January 2019 / Accepted: 27 January 2019 / Published: 30 January 2019
(This article belongs to the Special Issue GIS for Safety & Security Management)

Abstract: Wildlife–vehicle collisions (WVCs) cause significant road mortality of wildlife and have led to the installation of protective measures along streets. Until now, it has been difficult to determine the impact of roadside infrastructure that might act as a barrier for animals. The main deficits are the lack of geodata for roadside infrastructure and of georeferenced accident records covering larger areas. We analyzed 113 km of the road network of the district of Freyung-Grafenau, Germany, and 1571 WVCs, examining correlations between the occurrence of WVCs, the presence or absence of roadside infrastructure, particularly crash barriers and fences, and the relevance of the blocking effect for individual species. To obtain infrastructure data at a larger scale, we analyzed 5596 road inspection images with a neural network for barrier recognition and used a GIS to build a complete spatial inventory. These data were combined with the WVC data in the GIS to evaluate the infrastructure's impact on accidents. The results show that crash barriers have an effect on WVCs, as collision numbers are lower on roads with crash barriers; smaller animals in particular have a lower collision share. A risk reduction at fenced sections could not be proven, as fences are present along only 3% of the analyzed roads; the fence dataset in particular must therefore be validated with a larger sample. Nevertheless, these preliminary results indicate that the combination of artificial intelligence and GIS may be used to analyze and better allocate protective barriers, or be applied to alternative measures, such as dynamic WVC risk warnings.

1. Introduction

Heterogeneous parameters of the road environment contribute, to a greater or lesser extent, to the occurrence of wildlife–vehicle collisions (WVCs). Land use, road structure, as well as traffic volume and speed are parameters already shown to have a significant impact on WVCs [1,2,3]. One aspect that has been researched only marginally is road-accompanying infrastructure. This raises the question of to what extent roadside infrastructure, such as crash barriers and fences, affects the local occurrence of WVCs. Analyses in this context have been restricted in the past by the availability of WVC data on the one hand, and of georeferenced, digital documentation of roadside infrastructure on the other.
For road documentation, camera-based inspections are made on a regular basis for most roads in Germany, using a car as a carrier for up to four cameras (for the front, rear, left, and right side of the driving direction). While camera data is georeferenced, we posed the question whether computer vision, in combination with GIS-based approaches, would be able to derive a georeferenced infrastructure inventory from these images. This information could provide knowledge about different risk types in the road environment, e.g., factors influencing the risk for wildlife crossings.
Relying on experience with the automatic extraction of road signs from images (e.g., [4,5]) using artificial intelligence and machine learning (e.g., [6,7]), we based our research on the following hypotheses:
(1)
Crash barriers and fences can be identified in a series of inspection images by an artificial neural network (ANN).
(2)
Using GIS, a georeferenced inventory of fences and barriers can be built.
(3)
Barriers act as a preventive measure by influencing animals’ behavior and, hence, reduce accidents.
In this context, we want to especially consider the effect on different species. We restricted the study to crash barriers and fences, which we will refer to as barriers in the following text.
At this preliminary research stage, the objective was to test individual steps, which could later be combined into an automated process chain, from updating the inventory on the basis of camera data through machine learning to GIS-based documentation and analysis. As the majority of German building authorities only hold individual road construction plans that include crash barriers and fences, but no comprehensive spatial documentation of barriers, we propose a machine learning and GIS-driven approach for automatic identification and georeferencing using existing photo material from official road inspections. The project is driven by the idea that a better understanding of animals’ behavior and the resulting accidents could help to better plan and place protective measures. In addition, better warnings for car drivers, e.g., through mobile applications, may be a potential opportunity to increase road safety with this combined GIS and machine learning approach.
In the first part of this paper, we introduce the state of the art in GIS-based WVC analysis, considering especially the aspect of infrastructural effects. Additionally, we analyze potential approaches for integrating an ANN for feature detection with GIS to build an infrastructure inventory. Based on the WVC records and the images from camera-based road inspections, we developed a methodology for using barriers, derived by computer vision and artificial intelligence, in geospatial modelling. Afterwards, the relation between the occurred WVCs and the presence or absence of both barrier types is evaluated statistically. After presenting our results, including the species-specific effect, we draw conclusions and propose the next steps to include the abovementioned results in a larger framework for WVC analysis and prediction.

2. State of the Art of GIS-Based WVC Analysis

Thus far, GIS methods have been utilized by several studies on WVCs to compare accident occurrence with environmental, infrastructural, and traffic parameters (e.g., [8]). In order to contribute to road safety efforts, these GIS studies analyzed WVC development and changes in landscape structure [9], the placement of warning signs [10], the delineation of wildlife movement [11], or the impact of landscape on WVCs [12]. Even though some studies ascertained that infrastructure has an impact on WVCs [1,13], the number of studies dealing with specific infrastructure types is limited, and those available are spatially restricted and have not been able to substantially quantify the effect of different structures, such as barriers and fences, on WVCs [2,14,15,16]. To the best of the authors’ knowledge, no research on the impact of crash barriers has so far been undertaken at such a scale [14,15].
For fences, Seiler found that “Traffic volume, vehicle speed and the occurrence of fences were dominant factors determining MVC risks” [13] (MVC—moose–vehicle collision). While fences reduce the risk of WVCs, they also seem to shift part of the risk to the end of the fence [13] and do not exclude accidents inside the fenced area. As deer, for example, circumvent the fence [17], the collision risk increases at the fence endings due to a higher concentration of crossings. Colino-Rabanal et al. identified in their data that “roadkill was proportionally higher along fenced highways than on similar major roads that lacked fences” [18], but they mainly see low traffic volumes along these unfenced roads as the reason for their low numbers of WVCs.
So far, restricted data availability has prevented studies from drawing conclusions about the connection between WVCs and infrastructure. Now, camera-based road inspections provide a large amount of data from which inventories can be derived, offering data for the aforementioned limited research efforts. Higuera de Frutos and Castro, for example, suggested using smartphone cameras and apps for video inspections [19], an alternative to the more expensive terrestrial laser scanning [20]. In addition, process chains to collect and evaluate image data using GIS-based handling of camera material and frames have been recommended [21]. The extraction of infrastructure data can be achieved using computer vision, which has already been recommended for camera-based road inspections, but has mainly been applied to pavement analysis and road sign detection [22]. The predominant application example is road sign detection [7,23,24], with a vast number of approaches ranging from classical computer vision [7] via support vector machines [24] to neural networks [6]. The use of artificial intelligence and neural networks has especially contributed to this domain. While transfer learning improves the ability to rely on existing networks and focus the learning on specific aspects of a type of image [25], image databases, such as ImageNet [26,27], and competitions [27,28] contribute to making approaches transferable and systems comparable, and provide a solid basis for transfer learning.

3. Material and Method

During the past eight years, the Bavarian police recorded all reported WVCs, georeferenced the accident locations, and documented the species of the animal involved in each accident. We used an excerpt of this dataset for the district of Freyung-Grafenau, a rural accident hotspot in the Bavarian Forest, Germany (Figure 1). The dataset contains 1571 accidents at secondary roads within the period 2010–2017. As many WVCs occur on rural roads of the secondary street class, we chose a dataset covering all three secondary roads B 12, B 85, and B 533 (national roads—Bundesstrassen) for the analysis. To identify the relevant roadside infrastructure, we used 5596 images recorded by camera-based inspections along these roads, provided by official road inspections of the Bavarian Ministry of the Interior, for Building and Transport, from the year 2015. While the WVC data covers a period of eight years, the barrier information is, due to the photo material, only a snapshot of the situation in 2015. However, we assume that changes in this type of infrastructure are minor and do not have a significant impact on the analysis.
For the investigation, we used information about animal- and human-protecting barriers (crash barriers and fences) derived from an automated image classification with the TensorFlow framework and a GIS analysis (see Figure 2 for the concept and Figure 3 for the process chain). First, we manually classified all images of the camera inspection. The basic ANN was constructed using the Inception V3 architecture. For the training, a transfer learning approach was used, because the set of available data was too small to train a network from scratch. A network pre-trained on the ImageNet dataset [27] was taken as the starting point. Its output layers were removed, and the network was retrained using a set of roughly 800 inspection images.
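A minimal sketch of this transfer-learning step, written with the current TensorFlow/Keras API rather than the retraining tooling available at the time of the study, might look as follows; the directory layout, class labels, and hyperparameters are illustrative assumptions.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)  # input size expected by Inception V3

# Hypothetical directory layout: inspection_images/<class>/*.jpg with the two
# classes "barrier" and "no_barrier" from the manual pre-classification.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "inspection_images", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "inspection_images", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

# Inception V3 pre-trained on ImageNet, without its original output layers.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # transfer learning: keep the pre-trained weights fixed

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # barrier vs. no barrier
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```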
Input data to the Inception V3 model are images with a size of 299 × 299 pixels; all input data was automatically scaled by the TensorFlow framework. Monitoring of the training process showed that 6000 training steps were sufficient, and further training steps did not increase the classification quality. Two separate ANNs were used to classify fences and crash barriers. For fence detection, a total of 398 manually pre-classified images showing fences was used, taken from the left camera (211 images) and the right camera (187 images), compared with 418 images without fences. For crash barriers, we used 835 manually pre-classified images, with 455 images showing a barrier and 380 without a barrier. Eighty percent of the images were used for training, 10% for validation, and 10% for testing of the ANN. We shuffled the training data to use left and right images alternately.
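Assuming the pre-classified images are available as plain file lists, the shuffling and 80/10/10 split could be sketched as follows; the file names and the helper function are hypothetical, only the counts mirror the fence dataset described above.

```python
import random

def split_80_10_10(files, seed=42):
    """Shuffle a file list and split it into 80% training, 10% validation, 10% test."""
    files = files[:]
    random.Random(seed).shuffle(files)  # mixes left- and right-camera images
    n_train = int(0.8 * len(files))
    n_val = int(0.1 * len(files))
    return files[:n_train], files[n_train:n_train + n_val], files[n_train + n_val:]

# Hypothetical file lists for the fence classifier (211 left + 187 right fence
# images versus 418 images without fences, as described above).
fence_images = [f"fence_left_{i}.jpg" for i in range(211)] + \
               [f"fence_right_{i}.jpg" for i in range(187)]
no_fence_images = [f"no_fence_{i}.jpg" for i in range(418)]

# Split each class separately so that the 80/10/10 ratio holds per class.
fence_train, fence_val, fence_test = split_80_10_10(fence_images)
none_train, none_val, none_test = split_80_10_10(no_fence_images)
```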
For the first training of the ANN, we used camera data from the left and the right side of the car. The cameras were directed towards the front right and the front left of the driving direction. As the car was always driving on the right side of the street, the distance between the camera and the border of the street, where fences are mainly located, was a minimum of 2 to 2.5 m on the right side (shoulder of the street plus embankment and drainage system), but a minimum of 5.5 to 6 m on the left side (left lane of roughly 3.5 m plus embankment and drainage system). Consequently, the images, especially those displaying fences, show the objects at different resolutions, depending on the distance to the camera. We therefore later adapted the methodology by using only images from the right-side camera, with a higher resolution of the depicted objects, for training, which resulted in a significantly higher classification quality, as shown in Section 4.
For the further GIS-based analysis, the image classification results are imported into ArcGIS (Desktop ArcMap) in csv format. The result for each image (identified by ID and x and y coordinates) is separated into four values: fence present (1) on the left road side, (2) on the right road side; crash barrier present (3) on the left road side, (4) on the right road side (see Figure 2).
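The study performed this import in ArcGIS; purely as an illustration, an equivalent step with open-source tools could look like the sketch below, where the CSV column names and the coordinate reference system are assumptions.

```python
import pandas as pd
import geopandas as gpd

# Assumed CSV columns: image_id, x, y, fence_left, fence_right,
# barrier_left, barrier_right (0 = absent, 1 = present per image).
df = pd.read_csv("classification_results.csv")

points = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df["x"], df["y"]),
    crs="EPSG:25832",  # assumed CRS (ETRS89 / UTM zone 32N)
)

# Image points classified as showing a crash barrier on the right road side.
barrier_right = points[points["barrier_right"] == 1]
barrier_right.to_file("barrier_right_points.gpkg", driver="GPKG")
```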
The distance between two image points, each providing information about the absence or presence of a fence and a crash barrier, is approximately 20 m. To obtain linear information about the length of both barrier types, features with the attribute crash barrier = 1 (“available”) and fence = 1 are selected separately and converted into lines. Feature points, or rather image locations, classified as non-barrier locations are not considered further. The coordinates of each image point refer to the original recording point, not to the next few meters visible in the image, where the barrier may actually be identified. Nevertheless, the approach is a simple approximation of where a barrier starts and ends, with a minor uncertainty about the exact start and end points.
Lines with distances smaller than 10 m between two single image points were eliminated. By testing and comparing with the point information, we found that 10 m was the minimum for a reasonable barrier line, deleting artefacts and very short lines resulting from single wrongly classified images (see the case in Figure 6, zoom-in map). Only barriers supported by more than two points (longer than 10 m) were thus retained. We calculated the Euclidean distance between the points as an approximation of the road network, which is very similar to the road line because of the small spacing of the image points (see also Figure 6). If the distance between two image points was larger than 80 m, the line was also eliminated, as we assumed that there is a gap in the barrier between the two points, or that two discontinuous points had been connected by the point-to-line conversion. As no other study was available for comparison, this threshold is an assumption, chosen to approximate the real situation and obtain a realistic barrier line. Then, the individual files for left- and right-side barriers were merged, and a new file for the remaining road sections without barriers was calculated using the erase function (including a 10 m buffer for lines not lying exactly on the road line due to inaccurate GPS point data). Finally, the numbers of WVCs that occurred on roads with and without barrier lines were summed up separately.
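The point-to-line step with the 10 m minimum length and the 80 m gap threshold can be sketched as follows. This is a simplified re-implementation with geopandas/shapely, not the ArcGIS workflow used in the study, and it assumes the classified points of one road side are already ordered along the road.

```python
import geopandas as gpd
from shapely.geometry import LineString, Point

MAX_GAP = 80.0  # metres; larger gaps are treated as a break in the barrier
MIN_LEN = 10.0  # metres; shorter lines are treated as single misclassified images

points = gpd.read_file("barrier_right_points.gpkg")  # assumed ordered along the road
coords = [(p.x, p.y) for p in points.geometry]

# Group consecutive image points into segments, breaking where the gap exceeds 80 m.
segments, current = [], [coords[0]]
for prev, cur in zip(coords, coords[1:]):
    if Point(prev).distance(Point(cur)) > MAX_GAP:
        segments.append(current)
        current = [cur]
    else:
        current.append(cur)
segments.append(current)

# Keep only segments with at least two points and an overall length above 10 m.
lines = [LineString(seg) for seg in segments if len(seg) >= 2]
lines = [line for line in lines if line.length >= MIN_LEN]

gpd.GeoDataFrame(geometry=lines, crs=points.crs).to_file(
    "barrier_right_lines.gpkg", driver="GPKG")
```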

4. Results

To better discuss the individual parts of the research, we split the presentation of the results into three segments: first, presenting the underlying analysis of the WVC data; second, looking at the GIS-based results of the barrier impact on WVCs; and third, critically reflecting on the ANN and its contribution to providing the basic infrastructure data.

4.1. WVC Statistics

In the district of Freyung-Grafenau, a total of 1571 WVCs occurred on secondary roads in the years 2010–2017. The increase of accidents over the past eight years correlates with the increase of this accident type in Bavaria and Germany (Figure 4).
The WVC distribution by species for the three secondary road sections shows that the vast majority of accidents is caused by deer, followed by rabbits and foxes (Table 1).

4.2. GIS Results

Sixty-four percent of the road segments are accompanied by crash barriers, and nearly three percent of the roads are protected by fences, as Figure 5 shows for the whole research area (a) and for one section (b). In Table 2 and Table 3, we compare the dependencies between WVCs, barriers, and fences for the manual and the ANN-based classification. Comparing WVCs with sections protected by crash barriers yields a nearly equal split of the total of 1571 accidents between sections with barriers (776) and sections without barriers (795). Due to the high classification quality, the manual and the automatic classification provided similar results. When the lengths of the sections with and without barriers are taken into account, the number of WVCs per length is 2.11 times higher in sections without crash barriers than in sections with crash barriers (3.30 times higher using the ANN classification) or, the other way around, the street section on which one WVC occurs is, on average, 2.11 times longer where crash barriers are present (WVC proportion to street length).
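For illustration, the factors of 2.11 and 3.30 follow directly from the WVC counts and section lengths reported in Table 2:

```python
# WVC counts and section lengths from Table 2 (manual check).
wvc_with, km_with = 720, 72.489
wvc_without, km_without = 851, 40.514

density_with = wvc_with / km_with            # ~9.9 WVCs per km with crash barriers
density_without = wvc_without / km_without   # ~21.0 WVCs per km without crash barriers

print(round(density_without / density_with, 2))  # 2.11

# With the ANN-classified sections (776 WVCs on 86.257 km with barriers,
# 795 WVCs on 26.746 km without), the same calculation yields ~3.30.
```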
The distribution of WVCs shows that barriers have an impact on accidents. Crash barriers in particular seem to significantly influence the accident situation for all animals, and especially for certain species (see also Table 4). The impact of fences on WVCs needs to be discussed, because the current data would support the hypothesis that fences increase the risk of WVCs by a factor of 3.08 or, equivalently, that areas without fences have only 0.32 times the risk of accidents. However, out of a total of 113 km, we identified less than 3 km of fenced road segments.
The hypothesis that animals circumvent the fence and that the number of WVCs increases at the end of a fenced section is supported by the literature [17] and would help to better understand this statistical result. For short fenced sections, inaccuracies in the fence positions as well as in the accident positions may also support this explanation, because some of the accidents marked as lying inside a fenced road section may actually have been located near the end of the fence and could thus be explained by circumvention.
The effect of crash barriers and fences also depends on the species. Crash barriers seem to have an even stronger retaining effect on badgers and foxes; but also for the predominant deer population in the area, street segments without crash barriers have a roughly two-times higher (2.16) probability of accidents compared to segments with crash barriers (Table 4). The numbers for rabbits indicate that especially smaller animals, such as rabbits and “others”, seem not to be retained by fences at all. The two-times higher probability for wild birds might reflect an indirect effect via other species, namely the birds’ behavior of searching for carcasses on the road.
We did not distinguish between different fence types (game fence, non-permanent pasture fence, smaller barriers, and garden fences). Different types might retain some species and may have no effect on others. Also, the length of fenced segments should be further analyzed as circumventing may play a role for wildlife crossings and for the WVC risk. For further research, even larger datasets and a better classification of fences will be necessary.
The large dataset of WVCs is the result of a long monitoring period (eight years), while the barrier data only represent a temporal snapshot from 2015. As a consequence, additional temporal snapshots of the infrastructure would be necessary to analyze changes in the constructions.
Figure 6 (for crash barriers) and Figure 7 (for fences) show the resulting maps with the georeferenced barrier inventory. The comparison of the automated classification with the manually checked road sections shows a high correspondence for crash barrier identification, and highlights the remaining need for improvement in the case of fence digitization. The small zoom-in map in Figure 6 illustrates the advantage of combining the ANN and GIS, as the share of correctly identified road sections is increased by the GIS approach. However, it also shows that the accuracy of the automatic classification still needs to be improved.

4.3. ANN Results

The auto-classification by the ANN resulted in a 92% correct classification rate for crash barriers and a 63% rate for fences (Table 5). The low classification quality for fences in particular may result from the fragile structure of fences in combination with the differing resolution of images from the two cameras. We later analyzed this difference in resolution by first training the ANN with images from both sides and, second, with images only from the right side, and used each trained ANN to classify images from both sides of the road.
The results (Table 6) show that training data stemming from both cameras results in a significantly larger number of misclassifications. Training the ANN only with camera data from the right side resulted in nearly 8% more correct positives (the ANN identified fences where fences are on the image) as well as 12% more correct negatives (the ANN identified no fence where no fence is present). This may indicate that for slim structures, such as fences, resolution is a key aspect for training, which will then also yield positive classification results for images with a lower resolution of the object.
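The relative improvements reported in Table 6 are simple ratios between the two training runs; for the right-side camera, for example:

```python
# Correctly classified right-side camera images (counts from Table 6).
correct_fence_both, correct_fence_right_only = 142, 153
correct_nofence_both, correct_nofence_right_only = 3507, 3943

change_fence = (correct_fence_right_only - correct_fence_both) / correct_fence_both
change_nofence = (correct_nofence_right_only - correct_nofence_both) / correct_nofence_both

print(f"{change_fence:.2%}")    # 7.75%  -> nearly 8% more correct positives
print(f"{change_nofence:.2%}")  # 12.43% -> about 12% more correct negatives
```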
The classification results for crash barriers are significantly better than those for fences. Crash barriers are, in general, located closer to the road and present larger and more distinctive structures. Fences are often located behind crash barriers or even far from the street, which means that fences fall below a resolution that can be detected by the ANN, or can be mixed up with slim and elongated bushes and small trees, such as birches or hazel.
In summary, a pre-trained ANN can be used through transfer learning to identify crash barriers and fences as roadside infrastructure, and GIS techniques can be used to build up an inventory and to analyze the occurrence of, and cause–effect relationships between, infrastructure and WVCs. Image resolution and the size of the dataset are essential factors for the analysis.

5. Conclusions

In this study, we analyzed the impact of barriers (crash barriers and fences) on the risk of WVCs. We followed the hypotheses that (1) crash barriers and fences can be identified in a series of camera-based road inspection images by an ANN and that (2) a georeferenced inventory of fences and barriers can be built using GIS. Based on these data, we found a preliminary answer to the question of to what extent barriers act as a preventive measure by influencing animals’ behavior, thereby reducing accidents and increasing safety. While a barrier effect of crash barriers is visible, fence impacts must be researched more precisely in the next step. Using the ANN to identify barriers in the images already provided, in a first attempt, adequate results, sufficient to delineate barriers in a GIS. The GIS was able not only to build the polylines of these structures but also to compensate for errors of the ANN by restricting the results to a spatially reasonable dataset.
Due to these new possibilities to derive the location and extent of road-accompanying barriers from camera images, we were able to compare, in a preliminary study, 113 km of road network with WVCs from an eight-year period. We found that crash barriers have a strong effect on wildlife and, thus, reduce the risk of WVCs by one-half. For our test site, fences seem not to affect animals, but this requires further testing with a larger region and an increased number of roadside fences. However, the crash barrier information can already be considered in a new type of WVC warning system for car drivers to better predict areas at risk, or to improve planning of protection measures.
From a methodological perspective, the combination of artificial intelligence with GIS provides a new concept for studying a topic where quantitative research has so far not been possible due to a lack of data. The study still leaves some questions open and requires further work to improve the quality of the results as well as the accuracy of the geodata with regard to completeness and precision of location. The classification of fences needs to be developed further, as different fence types have differing visual appearances and effects on wildlife. There is a need to distinguish between game fences, protection fences, garden fences, and other possible subtypes. For this, new training material and also a larger test region will be necessary to increase the total available length of fences and sections for the different fence types. Additionally, the resolution of the inspection images might contribute to a better detection rate of the ANN. We applied a standard resolution of 299 × 299 pixels in order to use an existing trained network for transfer learning. Using the full available image resolution and training a dedicated neural network would increase the effort of manually classifying sufficient data but might also improve barrier—and especially fence—recognition.
From a data perspective, three aspects should be considered in the future: (1) Infrastructure developments over time. The length of the available time series (eight years for WVC) will require consideration of the development of the built environment at and along a street in the future. This will increase complexity from a GIS perspective, to take into account spatiotemporal data. The problem is, again, data availability due to missing documentation of these structures. While accidents are documented on a daily basis, changes of infrastructure might be only considered depending on inspection intervals. (2) Other types of barriers, such as noise protection barriers, dams, and hill intersections with steep slopes, walls, or gullies were not considered in this study, but might also impact animals’ behavior and movement patterns and, hence, WVC occurrence and risk. These types of structures are also undocumented and can only partially be derived from photos. Mixed approaches, combining inspection images and laser scanning data, might provide the necessary information. Then, the presented methodology might be applied to these infrastructure types. (3) While past studies only considered a small number of WVC-influencing factors, the current study and its applicability to alternative barrier types indicates that a significant number of parameters needs to be considered to explain WVC development. Besides road infrastructure and traffic as WVC-influencing factors, a better understanding of habitats and wildlife behavior will also be necessary.
Approaches from artificial intelligence and big data might be used to better process these heterogeneous datasets and consider a larger set of potentially influencing parameters. In the future, results might be used for more effective warning of car drivers, substituting warning signs by more precise spatial and temporal and, hence, highly dynamic warnings. To increase road safety in general, this approach might also be applied to other types of accidents.

Author Contributions

Conceptualization, R.P. and W.D.; methodology, R.P., W.D.; validation/formal analysis/investigation, R.P.; writing, W.D. and R.P.; visualization, R.P.; supervision, W.D.

Funding

This research was funded by the GERMAN FEDERAL MINISTRY OF TRANSPORT AND DIGITAL INFRASTRUCTURE (BMVI) as part of the mFund project “WilDa—Dynamic Wildlife–vehicle collision warning, using heterogeneous traffic, accident and environmental data as well as big data concepts” grant number 19F2014A.

Acknowledgments

Accident data were provided by the Bavarian Ministry of the Interior, for Sport and Integration and the Bavarian Police; photo material was provided by the Bavarian State Ministry of Housing, Building and Transport.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Seiler, A. Predicting locations of moose-vehicle collisions in Sweden. J. Appl. Ecol. 2005, 42, 371–382.
2. Malo, J.E.; Suárez, F.; Díez, A. Can we mitigate animal–vehicle accidents using predictive models? J. Appl. Ecol. 2004, 41, 701–710.
3. Gunson, K.E.; Mountrakis, G.; Quackenbush, L.J. Spatial wildlife-vehicle collision models: A review of current work and its application to transportation mitigation projects. J. Environ. Manag. 2011, 92, 1074–1082.
4. Fang, C.-Y.; Chen, S.-W.; Fuh, C.-S. Road-sign detection and tracking. IEEE Trans. Veh. Technol. 2003, 52, 1329–1341.
5. Khan, J.F.; Bhuiyan, S.M.A.; Adhami, R.R. Image Segmentation and Shape Analysis for Road-Sign Detection. IEEE Trans. Intell. Transp. Syst. 2011, 12, 83–96.
6. Kellmeyer, D.L.; Zwahlen, H.T. Detection of highway warning signs in natural video images using color image processing and neural networks. In Proceedings of the 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 28 June–2 July 1994; Volume 7, pp. 4226–4231.
7. Ouerhani, Y.; Elbouz, M.; Alfalou, A.; Kaddah, W.; Desthieux, M. Road sign identification and geolocation using JTC and VIAPIX module. Proc. SPIE 2018, 10649, 106490J.
8. Pagany, R.; Dorner, W. Spatiotemporal analysis for wildlife-vehicle-collisions based on accident statistics of the county Straubing-Bogen in Lower Bavaria. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B8, 739–745.
9. Keken, Z.; Kušta, T.; Langer, P.; Skaloš, J. Landscape structural changes between 1950 and 2012 and their role in wildlife–vehicle collisions in the Czech Republic. Land Use Policy 2016, 59, 543–556.
10. Krisp, J.M.; Durot, S. Segmentation of lines based on point densities—An optimisation of wildlife warning sign placement in southern Finland. Accid. Anal. Prev. 2007, 39, 38–46.
11. Nelson, T.; Long, J.; Laberee, K.; Stewart, B. A time geographic approach for delineating areas of sustained wildlife use. Ann. GIS 2015, 21, 81–90.
12. Nielsen, C.K.; Anderson, R.G.; Grund, M.D. Landscape Influences on Deer-Vehicle Accident Areas in an Urban Environment. J. Wildl. Manag. 2003, 67, 46–51.
13. Oskinis, V.; Ignatavicius, G.; Vilutiene, V. An evaluation of wildlife-vehicle collision pattern and associated mitigation strategies in Lithuania. Environ. Eng. Manag. J. 2013, 12, 2323–2330.
14. Clevenger, A.P.; Kociolek, A.V. Potential impacts of highway median barriers on wildlife: State of the practice and gap analysis. Environ. Manag. 2013, 52, 1299–1312.
15. Kociolek, A.; Clevenger, A.P. Highway Median Impacts on Wildlife Movement and Mortality. In Proceedings of the 2007 International Conference on Ecology & Transportation “Bridging the Gaps, Naturally” (ICOET 2007), Little Rock, AR, USA, 20–25 May 2007.
16. Boarman, W.I.; Sazaki, M.; Jennings, W.B. The Effect of Roads, Barrier Fences, and Culverts on Desert Tortoise Populations in California. In Proceedings of the Conservation, Restoration, and Management of Tortoises and Turtles—An International Conference, New York, NY, USA, 11–16 July 1993; pp. 54–58.
17. Gulsby, W.D.; Stull, D.W.; Gallagher, G.R.; Osborn, D.A.; Warren, R.J.; Miller, K.V.; Tannenbaum, L.V. Movements and home ranges of white-tailed deer in response to roadside fences. Wildl. Soc. Bull. 2011, 35, 282–290.
18. Colino-Rabanal, V.J.; Lizana, M.; Peris, S.J. Factors influencing wolf Canis lupus roadkills in Northwest Spain. Eur. J. Wildl. Res. 2011, 57, 399–409.
19. Higuera de Frutos, S.; Castro, M. Using smartphones as a very low-cost tool for road inventories. Transp. Res. Part C Emerg. Technol. 2014, 38, 136–145.
20. Gong, J.; Zhou, H.; Gordon, C.; Jalayer, M. Mobile Terrestrial Laser Scanning for Highway Inventory Data Collection. Comput. Civ. Eng. 2012.
21. Khan, G.; Santiago-Chaparro, K.R.; Chiturri, M.; Noyce, D.A. Development of Data Collection and Integration Framework for Road Inventory Data. Transp. Res. Rec. 2010, 2160, 29–39.
22. Varadharajan, S.; Jose, S.; Sharma, K.; Wander, L.; Mertz, C. Vision for road inspection. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014; pp. 115–122.
23. de la Escalera, A.; Moreno, L.E.; Salichs, M.A.; Armingol, J.M. Road traffic sign detection and classification. IEEE Trans. Ind. Electron. 1997, 44, 848–859.
24. Greenhalgh, J.; Mirmehdi, M. Real-Time Detection and Recognition of Road Traffic Signs. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1498–1506.
25. Lagunas, M.; Garces, E. Transfer Learning for Illustration Classification. arXiv 2018, arXiv:1806.02682.
26. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
27. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
28. Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German traffic sign detection benchmark. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8.
Figure 1. Wildlife–vehicle collision (WVC) density map of the research area.
Figure 2. Concept of the intersecting process for barriers and WVC.
Figure 3. The artificial neural network (ANN) and GIS-based process to build a georeferenced barrier inventory along roads.
Figure 4. Annual distribution of WVCs (2010–2017).
Figure 5. Spatial distribution of barriers in the research area (a) and in a subsection (b).
Figure 6. Map of classified crash barriers (black line) in comparison to reality (manual check, orange line).
Figure 7. Map of classified fences (black line) in comparison to reality (manual check, turquoise line).
Table 1. The annual distribution of WVCs per species.

Species | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017
Rabbit | 8 | 22 | 20 | 15 | 20 | 11 | 20 | 18
Deer | 57 | 151 | 172 | 158 | 136 | 158 | 156 | 182
Other | 2 | 3 | 3 | 1 | 15 | 8 | 10 | 15
Wild Boar | 2 | 0 | 6 | 5 | 1 | 5 | 2 | 9
Fox | 3 | 13 | 13 | 12 | 21 | 17 | 16 | 20
Badger | 1 | 7 | 3 | 4 | 2 | 6 | 9 | 11
Wild Bird | 2 | 3 | 3 | 1 | 1 | 7 | 2 | 3
Table 2. Results of WVC numbers at roads with and without crash barriers for the road segments identified by ANN in comparison to the manual classification.

Crash Barriers | Total Number of WVCs | Neural Network: With Barrier | Neural Network: Without Barrier | Manual Check: With Barrier | Manual Check: Without Barrier
Number of WVCs | 1571 | 776 | 795 | 720 | 851
(%) | (100%) | (49.40%) | (50.60%) | (45.83%) | (54.17%)
Street length (km) | – | 86.257 | 26.746 | 72.489 | 40.514
Street length per WVC (m) | – | 111.16 | 33.64 | 100.68 | 47.61
WVC proportion to street length | – | 3.30 | 0.30 | 2.11 | 0.47
Table 3. Results of WVC numbers at roads with and without fences for the road segments identified by ANN in comparison to the manual classification.

Fences | Total Number of WVCs | Neural Network: With Fence | Neural Network: Without Fence | Manual Check: With Fence | Manual Check: Without Fence
Number of WVCs | 1571 | 211 | 1360 | 109 | 1462
(%) | (100%) | (13.43%) | (86.57%) | (6.94%) | (93.06%)
Street length (km) | – | 5.429 | 107.574 | 2.668 | 110.335
Street length per WVC (m) | – | 25.73 | 79.10 | 24.48 | 75.47
WVC proportion to street length | – | 0.33 | 3.07 | 0.32 | 3.08
Table 4. WVC per species proportional to street length for road segments with crash barriers and fences.

Species | WVC Proportion to Street Length: Crash Barrier | WVC Proportion to Street Length: Fence
Rabbit | 2.57 | 0.25
Deer | 2.16 | 0.34
Other | 1.61 | 0.21
Wild Boar | 2.68 | 0.34
Fox | 3.35 | 0.32
Badger | 4.13 | 0.50
Wild Bird | 2.15 | 0.51
All species | 2.11 | 0.32
Table 5. Quality of auto-classification of images by the ANN.

Classification | Barrier Left | Barrier Right | Barrier Total | Fence Left | Fence Right | Fence Total
False | 7.72% | 7.70% | 7.71% | 55.25% | 19.50% | 37.37%
True | 92.28% | 92.30% | 92.29% | 44.75% | 80.50% | 62.63%
Table 6. Comparison of automatic image recognition using the neural network trained with images of both sides and of only the right side, and the relative change.

Camera | Classification | Training on Both Sides: Fence | Training on Both Sides: No Fence | Training on Right Side Only: Fence | Training on Right Side Only: No Fence | Relative Change: Fence | Relative Change: No Fence
Right | Correct | 142 | 3507 | 153 | 3943 | 7.75% | 12.43%
Right | False | 1837 | 105 | 1398 | 97 | −23.90% | −7.62%
Left | Correct | 147 | 2064 | 166 | 2699 | 12.93% | 30.77%
Left | False | 3343 | 37 | 2695 | 31 | −19.38% | −16.22%
