Improved Dynamic Obstacle Mapping (iDOMap)
Abstract
1. Introduction
2. Proposal
2.1. Evaluation of Movement Detection
Laser Reflectivity
2.2. Occlusion Handling
- When a dynamic obstacle is occluded.
- When a static obstacle is occluded.
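To make the dynamic case concrete: while a dynamic obstacle is hidden behind another object, its track can be coasted on its last estimated velocity instead of being dropped, so the estimate is still available when the obstacle reappears (Section 3.1 shows this behavior during an occlusion). The snippet below is only an illustrative sketch under a constant-velocity assumption; the `Track` type and its field names are hypothetical, not iDOMap's data structures.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Hypothetical track state in the robot frame (m, m/s)."""
    x: float
    y: float
    vx: float
    vy: float
    occluded: bool = False

def propagate(track: Track, dt: float) -> Track:
    """Constant-velocity prediction step. While the obstacle is visible,
    the filter would correct this prediction with LIDAR measurements;
    during an occlusion the track simply coasts on its last velocity."""
    track.x += track.vx * dt
    track.y += track.vy * dt
    return track
```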
2.2.1. Dynamic Occlusion Detector
Algorithm 1: Dynamic obstacle detector algorithm.
2.2.2. Static Occlusion Detector
Algorithm 2: Static obstacle detector algorithm.
2.3. Tracking and Filtering
3. Test Bed and Results
3.1. Velocity Estimation with iDOMap
- Simulated scenario 1—Two dynamic obstacles crossing: In this scenario there are two dynamic obstacles (simulated persons) crossing in perpendicular paths in front of the robot at 0.25 m/s and 0.3 m/s. The paths followed by the obstacles are shown in Figure 10a. This scenario shows the performance of the proposal when obstacles cross the path of the robot at low velocities.
- Real scenario 1—Two dynamic obstacles crossing: In this scenario there are two people crossing perpendicularly to the robot; obstacle 1 (which starts on the left) then occludes obstacle 2, as shown in Figure 11b. Figure 11 shows images at three time steps during the experiment, while the robot detects and tracks the paths followed by the dynamic obstacles (Figure 12b). In addition, an occupancy grid map of this scenario during the experiment is shown in Figure 12a. In this occupancy grid it can be seen that the cells occupied by the static and dynamic obstacles have a high occupancy probability (the darker the cell, the higher the probability). Figure 13 shows the velocities detected on each axis for both obstacles, where it can be seen that the velocities of obstacle 2 are kept during the occlusion (marked as a purple dashed-line box). Table 2 shows the velocity detection errors for this scenario, where $\mu$ is the average error and $\sigma$ the standard deviation of the following parameters: $v_x$ and $v_y$ are the obstacle velocities along the $x$ and $y$ axes of the robot, respectively, $|v|$ is the velocity vector magnitude and $\theta$ the vector angle. (A sketch of how these statistics can be computed is given after this list.)
- Real scenario 2—Two dynamic obstacles in diagonal paths: In this scenario, there are two people crossing in diagonal paths, as shown in Figure 14. Each path crosses the other, as shown in Figure 15b. Figure 15a shows an example of the occupancy grid computed during this experiment. The detection of obstacle 2 worsens between seconds 5 and 6 (marked as a purple dashed-line box in Figure 16) due to the low number of laser impacts. This situation arises when the obstacle (a person in this scenario) is oriented laterally to the laser. Table 3 shows the average velocity errors ($\mu$) and their standard deviations ($\sigma$) for each axis.
- Real scenario 3—Person pushing a trolley in a diagonal path: As stated previously, robots and people can coexist in the same environment, where people may be pushing objects such as baby carriages or shopping carts. In order to test our proposal with this kind of dynamic obstacle, a person pushing a trolley performed two diagonal paths, one forward and one backward. The scenario can be seen in Figure 17. Figure 18a shows the detected paths followed by the person; the red path shows the forward path and the yellow one the backward path. Figure 18b shows the velocity detection on each axis. Both paths (forward and backward) have been detected with errors of around 0.1 m/s, as shown in Table 4.
- Real scenario 4—Person walking randomly: In this scenario a participant walks in different directions inside the monitored area, making sharp changes of direction. With this experiment we demonstrate that the Particle Filter (PF) is able to deal with the real movement of a person that cannot be followed by an Extended Kalman Filter (EKF), as can be seen in Figure 19. This scenario was selected to test the performance under several direction changes within a short period of time. Figure 20a shows the path detected by our proposal: despite the frequent and rapid changes of direction that the obstacle describes, the system detects the velocity on each axis with an error of less than 0.1 m/s, as shown in Table 5. The tracked position (shown in red in Figure 20b) also follows the Ground Truth (shown in blue) closely. (A minimal Particle Filter sketch is given after this list.)
- Real scenario 5—A robot crossing in a diagonal path: Continuing with the idea of testing our proposal with different participants, in this scenario a robot crosses in a diagonal path at a constant linear velocity of 0.5 m/s and at a constant angle with respect to the robot running iDOMap. The Ground Truth in this case was obtained by taking into account the initial and final static positions, because the obstacle maintains each position for 5 s; a hand-labeled centroid was then computed from the LIDAR measurements during the obstacle's static periods. (A centroid sketch is given after this list.) Figure 21 shows the magnitude and angle ($\theta$) of the detected velocity vector in order to clarify the comparison with the Ground Truth. Table 6 shows the velocity detection errors, where it can be seen that the error in the velocity magnitude is lower than 0.04 m/s and the error in orientation is about $6^{\circ}$. The velocity magnitude percentage errors are the highest experimental errors because, when the Ground-Truth velocities are very low (as the obstacle starts and stops moving), the relative error is very high. If obstacles were only considered dynamic when their velocities exceed a threshold (0.15 m/s), these percentage errors would drop substantially.
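As a concrete reading of the error metrics reported in Tables 2–6, the following sketch computes $\mu$ and $\sigma$ of the $v_x$, $v_y$, $|v|$ and $\theta$ errors from time-synchronized detection and Ground-Truth samples. This is our illustration, not the paper's evaluation code; the function name and array layout are assumptions.

```python
import numpy as np

def velocity_error_stats(v_det, v_gt):
    """Mean (mu) and standard deviation (sigma) of the vx, vy, |v| and
    theta errors between detected and Ground-Truth velocities.

    v_det, v_gt: (N, 2) arrays of time-synchronized velocities (m/s)
    along the robot's x and y axes.
    """
    v_det, v_gt = np.asarray(v_det, float), np.asarray(v_gt, float)
    err = np.abs(v_det - v_gt)                       # per-axis errors (m/s)
    mag = np.abs(np.linalg.norm(v_det, axis=1)
                 - np.linalg.norm(v_gt, axis=1))     # |v| error (m/s)
    dth = np.degrees(np.arctan2(v_det[:, 1], v_det[:, 0])
                     - np.arctan2(v_gt[:, 1], v_gt[:, 0]))
    ang = np.abs((dth + 180.0) % 360.0 - 180.0)      # theta error (deg), wrapped
    return {name: (vals.mean(), vals.std())
            for name, vals in [("vx", err[:, 0]), ("vy", err[:, 1]),
                               ("|v|", mag), ("theta", ang)]}
```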
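Real scenario 4 relies on the PF's ability to follow sharp direction changes. Below is a minimal, generic particle-filter step under a random-walk velocity model; the noise parameters and resampling criterion are illustrative choices of ours, not the filter actually tuned in iDOMap.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, dt, q_v=0.5, r=0.1):
    """One predict/update/resample cycle of a particle filter.

    particles: (N, 4) array of [x, y, vx, vy] hypotheses (m, m/s).
    weights:   (N,) normalized particle weights.
    z:         measured obstacle centroid [x, y] from LIDAR clustering.
    q_v:       velocity diffusion; a generous value lets the filter follow
               the sharp direction changes a single-Gaussian EKF cannot.
    r:         measurement noise standard deviation (m).
    """
    n = len(weights)
    # Predict: constant velocity plus random-walk noise on the velocity.
    particles[:, :2] += particles[:, 2:] * dt
    particles[:, 2:] += rng.normal(0.0, q_v * dt, size=(n, 2))
    # Update: reweight by the Gaussian likelihood of the measurement.
    d2 = ((particles[:, :2] - np.asarray(z, float)) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / r ** 2)
    weights /= weights.sum() + 1e-300        # guard against underflow
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < 0.5 * n:
        u = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Particles would be initialized around the first detected centroid, e.g. with zero velocity and a broad position spread.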
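For real scenario 5, the hand-labeled Ground Truth reduces to averaging the LIDAR impacts on the obstacle while it stands still. A minimal sketch, assuming the impacts are already segmented to the obstacle (the function and argument names are ours):

```python
import numpy as np

def static_centroid(ranges, angles):
    """Centroid (m) of LIDAR impacts on a motionless obstacle.

    ranges, angles: 1-D arrays of laser ranges (m) and beam angles (rad),
    pre-segmented so that every impact belongs to the obstacle. Averaging
    over the ~5 s static period suppresses range noise.
    """
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    return np.array([x.mean(), y.mean()])
```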
3.2. Testing the Improvement of Obstacle-Avoidance Algorithms Using iDOMap
3.2.1. Simulated Experiments
- Scenario 1—Two obstacles crossing: In this scenario, the final goal is in front of the robot and two objects cross in paths perpendicular to the robot at 0.4 m/s. Figure 22 shows the starting position and orientation of the obstacles, together with the path followed by the robot in each case. This scenario simulates a risky environment in which the robot could cross the trajectories of the moving obstacles. The paths followed (Figure 22) show that, without knowledge of the obstacle velocities, both algorithms try to overcome the obstacles by crossing in front of them; the obstacles block the possible paths to the goal, making the robot travel parallel to the first obstacle until it is far enough away to head to the goal. On the other hand, knowing the estimated velocities of the obstacles, both avoidance algorithms head to the goal passing behind the obstacles in a safer maneuver. These two maneuvers are also more efficient, since they are shorter in distance and time, as shown in Table 7 (a toy prediction sketch is given after this list).
- Scenario 2—Moving obstacle approaching the robot: This scenario is a more complex environment that mixes static and dynamic obstacles (Figure 23). Two static obstacles (blue squares) have been placed: a corridor is formed between the big obstacle (to the left of the robot) and the obstacle in front of it, and another obstacle placed in the corner forms a second, wider corridor. In addition, one moving obstacle (red square) approaches the robot on a collision path through the first corridor; that is, a corridor that will be blocked by a moving obstacle has been simulated. This scenario was selected to deal with the risky situation in which the robot could be trapped in a small space because an obstacle is moving towards it (local minimum). Figure 23 shows the static obstacles (blue squares) and the starting position and orientation of the dynamic obstacle (red square). It also shows how, without knowledge of the obstacle velocity, the DCVM tries to enter the narrow corridor and later changes direction when the obstacle blocks its path. Even worse is the case of DW4DO: without a prediction module it is not prepared to react to dynamic environments, computing the shortest path at every time step and producing looping trajectories. On the other hand, with our proposal, both algorithms reach the goal knowing that the narrow corridor will be blocked, and avoid that path. The paths followed with iDOMap are safer and more efficient than without it, since they are shorter in time and distance, as can be seen in Table 7.
- Scenario 3—Lane changing: In this scenario moving obstacles travel in the same direction as the robot, simulating a lane change on a road (see Figure 24). The obstacle nearer the robot travels at 0.75 m/s and the farther one at 0.4 m/s. This scenario is similar to the previous one in terms of risk, since the robot could cross the trajectories of the dynamic obstacles. When only the raw LIDAR measurements are taken into account, the DCVM spends a long time trying to overcome the obstacles while travelling parallel to them; this situation occurs when the velocities of the obstacle and robot are similar, and it lasts until the obstacle is far enough away to be avoided. The case of DW4DO without our proposed iDOMap is similar. However, when the algorithms have the velocity estimates, both of them begin to separate from the nearer obstacle (because it is safer) and head to the goal crossing behind both obstacles in a safer and smoother path, as shown by the time and distance travelled for these cases in Table 7.
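The long parallel stretch in Scenario 3 follows directly from the small relative speed between robot and obstacle. With illustrative numbers of our own (not measured in the experiment), clearing a 2 m gap at a 0.05 m/s relative speed takes

$$ t_{\text{overtake}} = \frac{\Delta d}{\lvert v_{\text{robot}} - v_{\text{obstacle}}\rvert} = \frac{2\,\mathrm{m}}{0.05\,\mathrm{m/s}} = 40\,\mathrm{s}, $$

which is why both planners only escape the parallel-travel pattern once the obstacle is far enough away.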
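More generally, the safer pass-behind maneuvers in Scenarios 1–3 fall out of simple forward prediction: with an estimated obstacle velocity, the planner can project both trajectories and check the closest approach. The following is a toy constant-velocity illustration of that idea, not the DCVM or DW4DO internals; all names and the clearance logic are ours.

```python
import numpy as np

def closest_approach(p_obs, v_obs, p_rob, v_rob):
    """Time and miss distance of the closest approach between two
    constant-velocity 2-D trajectories (time clamped to the future)."""
    rel_p = np.asarray(p_obs, float) - np.asarray(p_rob, float)
    rel_v = np.asarray(v_obs, float) - np.asarray(v_rob, float)
    # Minimize |rel_p + rel_v * t| over t >= 0.
    t = max(-float(rel_p @ rel_v) / max(float(rel_v @ rel_v), 1e-9), 0.0)
    return t, float(np.linalg.norm(rel_p + rel_v * t))

# Example: obstacle at (3, 2) m moving down at 0.4 m/s, robot moving
# right at 0.4 m/s. The predicted miss distance (~0.7 m) is tight, so a
# velocity-aware planner would choose to pass behind the obstacle.
t_star, miss = closest_approach([3.0, 2.0], [0.0, -0.4], [0.0, 0.0], [0.4, 0.0])
```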
3.2.2. Real Experiments
- Scenario 1—A person overtaking the robot: In this scenario (see the image sequence in Figure 26), the robot has to reach a goal 8 m ahead of its initial position (red dot in Figure 27) while a person (marked as a red square in Figure 27) overtakes the robot on the left. The paths followed by the robot for each combination of perception and obstacle-avoidance algorithm are shown in Figure 27. Figure 28 shows the detected dynamic obstacle in each case, where the grey line represents the robot path. The colors of the obstacle path and the robot path match (from red to yellow) during the period of time in which our method detects the obstacle, in order to show them in a synchronized way. Figure 28 has been limited to 8 m for clarity, although the obstacle was detected continuously. We evaluated the four cases in the same way so that they can be compared. When the estimated velocity is available, the DCVM moves the robot further from its path than in the case without velocity estimation (Figure 28a), increasing safety. Figure 28c,d shows a different behavior for DW4DO: without velocity estimation (Figure 28c), when the obstacle appears in the robot map close to the robot, the robot suddenly tries to move away from it; with iDOMap (Figure 28d), the obstacle velocities are estimated, the algorithm determines that the overtaking dynamic obstacle does not influence its trajectory, and it keeps its direction, increasing the energy efficiency of the path through fewer velocity changes and a shorter travelled path.
- Scenario 2—Moving obstacle approaching the robot: This scenario was selected to test a risky situation in which the robot could be trapped in a small space, between the wall on the left of Figure 29a and the auxiliary wall located in the middle of Figure 29a, because an obstacle is moving towards the robot (local minimum). Figure 30 shows the detected dynamic obstacle in each case, where the grey line represents the robot path. The colors of the obstacle path and the robot path match (from red to yellow) during the period of time in which our proposal detects the obstacle, showing in a synchronized way the pose of the robot when the obstacle is detected at each step. Figure 30 has been limited to the maneuvering area for clarity; for that reason the first detection of the obstacle falls outside the figure. Figure 31 shows the behavior of both algorithms without velocity estimation and with our proposal. In the case of the DCVM without velocities, the robot tries to pass through the corridor, and when the obstacle “appears” (Figure 29b) it is too late to avoid it; the robot then stops and spins (position 3, −1.5 m approx.), which is a risky situation. That spinning maneuver, together with the obstacle being too close to the robot, makes the detection inaccurate at this point (Figure 30a). When iDOMap is combined with the DCVM (blue path), the robot avoids the obstacle to the left (Figure 29c), reducing its velocity and then continuing to the goal. The case of DW4DO without our proposal is similar to the DCVM, since the situation is not avoidable in either case. On the contrary, with DW4DO and iDOMap, the algorithm is able to predict that the obstacle will block the passage through the corridor, avoiding this situation (Figure 29d) and reaching the goal along a longer but safer path (pink path).
- Scenario 3—Lane changing: In this scenario (see Figure 32), the robot has to reach a goal 7 m ahead and 4 m to the right of its initial position (red dot in Figure 33); during the maneuver a person (marked as a red square in Figure 33) overtakes the robot along a parallel path on the right. The paths followed by the robot for each combination of perception and obstacle-avoidance algorithm are shown in Figure 33. Figure 34 shows the detected dynamic obstacle in each case, where the grey line represents the robot path. The colors of the obstacle path and the robot path match (from red to yellow) during the period of time in which our system detects the obstacle, showing where the robot and the obstacle were at a particular point in time. Figure 34 has been limited to 8 m to improve clarity, although the obstacle was detected continuously. We evaluated the four cases in the same way. Figure 33 shows that both obstacle-avoidance algorithms move away from the obstacle, DCVM to the right and DW4DO to the left. Both methods produce safer maneuvers with smoother velocity profiles when using iDOMap.
4. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
Table 1. Velocity detection errors ($\mu$, $\sigma$) for simulated scenario 1 (two dynamic obstacles crossing).

| | | $v_x$ (m/s) | $v_y$ (m/s) |
|---|---|---|---|
| Obs 1 | $\mu$ | 0.051 | 0.038 |
| | $\sigma$ | 0.045 | 0.049 |
| Obs 2 | $\mu$ | 0.045 | 0.021 |
| | $\sigma$ | 0.047 | 0.030 |
Table 2. Velocity detection errors for real scenario 1 (two dynamic obstacles crossing); percentage errors in parentheses.

| | | $v_x$ (m/s) | $v_y$ (m/s) | $\lvert v\rvert$ (m/s) | $\theta$ ($^{\circ}$) |
|---|---|---|---|---|---|
| Obs 1 | $\mu$ | 0.011 | 0.153 | 0.152 (13.12%) | 0.71 |
| | $\sigma$ | 0.036 | 0.127 | 0.127 (10.70%) | 2.19 |
| Obs 2 | $\mu$ | 0.0137 | 0.092 | 0.092 (9.75%) | 0.79 |
| | $\sigma$ | 0.035 | 0.075 | 0.075 (8.12%) | 2.05 |
Table 3. Velocity detection errors for real scenario 2 (two dynamic obstacles in diagonal paths); percentage errors in parentheses.

| | | $v_x$ (m/s) | $v_y$ (m/s) | $\lvert v\rvert$ (m/s) | $\theta$ ($^{\circ}$) |
|---|---|---|---|---|---|
| Obs 1 | $\mu$ | 0.056 | 0.073 | 0.078 (12.19%) | 7.44 |
| | $\sigma$ | 0.041 | 0.045 | 0.063 (17.88%) | 10.57 |
| Obs 2 | $\mu$ | 0.144 | 0.098 | 0.170 (17.86%) | 5.10 |
| | $\sigma$ | 0.130 | 0.072 | 0.141 (16.21%) | 5.87 |
Table 4. Velocity detection errors for real scenario 3 (person pushing a trolley); percentage errors in parentheses.

| | | $v_x$ (m/s) | $v_y$ (m/s) | $\lvert v\rvert$ (m/s) | $\theta$ ($^{\circ}$) |
|---|---|---|---|---|---|
| Obs 1 | $\mu$ | 0.090 | 0.102 | 0.104 (21.01%) | 13.26 |
| | $\sigma$ | 0.077 | 0.083 | 0.097 (75.3%) | 20.82 |
Table 5. Velocity detection errors for real scenario 4 (person walking randomly); percentage errors in parentheses.

| | | $v_x$ (m/s) | $v_y$ (m/s) | $\lvert v\rvert$ (m/s) | $\theta$ ($^{\circ}$) |
|---|---|---|---|---|---|
| Obs 1 | $\mu$ | 0.111 | 0.100 | 0.144 (14.31%) | 11.58 |
| | $\sigma$ | 0.090 | 0.095 | 0.117 (13.34%) | 21.98 |
Table 6. Velocity detection errors for real scenario 5 (a robot crossing in a diagonal path); percentage errors in parentheses.

| | | $v_x$ (m/s) | $v_y$ (m/s) | $\lvert v\rvert$ (m/s) | $\theta$ ($^{\circ}$) |
|---|---|---|---|---|---|
| Obs 1 | $\mu$ | 0.037 | 0.033 | 0.031 (30.65%) | 6.10 |
| | $\sigma$ | 0.039 | 0.033 | 0.038 (229.40%) | 5.81 |
Table 7. Distance travelled d (m) and time t (s) for each algorithm in the simulated experiments.

| Experiment | Algorithm | d (m) | t (s) | | | | |
|---|---|---|---|---|---|---|---|
| Scenario 1: Two obstacles crossing | DCVM | – | – | – | – | – | – |
| | DCVM + iDOMap | 7.90 | 20.28 | 0.38 | 0.009 | 7.93 | 12.88 |
| | DW4DO | 12.18 | 47.91 | 0.25 | 0.16 | 15.09 | 20.27 |
| | DW4DO + iDOMap | 8.18 | 21.85 | 0.37 | 0.04 | 6.93 | 11.00 |
| Scenario 2: Approaching obstacle | DCVM | 11.83 | 30.22 | 0.39 | 0.001 | 14.15 | 16.75 |
| | DCVM + iDOMap | 11.06 | 28.30 | 0.39 | 0.002 | 0.59 | 12.18 |
| | DW4DO | – | – | – | – | – | – |
| | DW4DO + iDOMap | 11.84 | 32.47 | 0.35 | 0.081 | 7.41 | 10.79 |
| Scenario 3: Lane changing | DCVM | 22.549 | 57.699 | 0.39 | 0.001 | 16.11 | 19.38 |
| | DCVM + iDOMap | 8.43 | 22.90 | 0.36 | 0.084 | 8.97 | 15.00 |
| | DW4DO | 8.76 | 22.94 | 0.37 | 0.049 | 9.41 | 13.54 |
| | DW4DO + iDOMap | 8.42 | 22.787 | 0.36 | 0.056 | 10.70 | 13.52 |
Table 8. Distance travelled d (m) and time t (s) for each algorithm in the real experiments.

| Experiment | Algorithm | d (m) | t (s) | | | | |
|---|---|---|---|---|---|---|---|
| Scenario 1: Person overtaking | DCVM | 7.61 | 19.30 | 0.39 | 0.043 | 0.97 | 2.25 |
| | DCVM + iDOMap | 7.63 | 19.34 | 0.39 | 0.045 | 1.67 | 4.55 |
| | DW4DO | 7.62 | 19.64 | 0.38 | 0.046 | 2.47 | 3.92 |
| | DW4DO + iDOMap | 7.52 | 18.96 | 0.39 | 0.043 | 0.71 | 1.53 |
| Scenario 2: Approaching person | DCVM | 6.15 | 18.52 | 0.33 | 0.134 | 9.06 | 14.33 |
| | DCVM + iDOMap | 6.63 | 17.40 | 0.38 | 0.062 | 9.75 | 14.12 |
| | DW4DO | 6.18 | 18.29 | 0.33 | 0.127 | 7.29 | 8.30 |
| | DW4DO + iDOMap | 10.71 | 28.20 | 0.37 | 0.054 | 10.95 | 11.90 |
| Scenario 3: Lane changing | DCVM | 7.68 | 21.55 | 0.35 | 0.120 | 4.99 | 7.98 |
| | DCVM + iDOMap | 7.80 | 21.99 | 0.34 | 0.122 | 5.10 | 8.56 |
| | DW4DO | 7.99 | 20.96 | 0.37 | 0.053 | 6.81 | 10.73 |
| | DW4DO + iDOMap | 7.96 | 20.89 | 0.37 | 0.057 | 6.22 | 9.18 |