Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure
Abstract
1. Introduction
- A multi-sensor fusion approach that uses vision, thermal, and wireless sensors to generate a probabilistic 2D victim-location map.
- A multi-objective utility function that uses the fused 2D victim-location map for the viewpoint-evaluation task. The evaluation process targets three objectives: exploration, victim detection, and traveled distance.
- An adaptive grid sampling algorithm (AGSA) that resolves the local-minimum issue occurring in the regular grid sampling approach (RGSA).
2. Related Work
2.1. Victim Detection
2.2. Mapping
2.3. Exploration
2.3.1. Viewpoint Sampling
2.3.2. Viewpoint Evaluation
2.3.3. Termination Conditions
3. Proposed Approach
3.1. Multi-Sensor Approach for Victim Localization and Mapping
3.1.1. Vision-Based Victim Localization
- Victim Detection: Victim detection aims to find a human in a received RGB image. In this work, the Single-Shot Multibox Detector (SSD) [21] architecture was adopted as the classifier and trained on the Visual Object Classes challenge (VOC2007) dataset [39]. The SSD structure is depicted in Figure 3. The input to the detection module is a 2D RGB image, and the output is a box overlapping the detected human together with a probability reflecting the detection confidence.
- Victim Mapping: Based on the analysis of the SSD detector, the training set imposes a minimum size, in pixels, on the detected box, which corresponds to a known physical size in meters. To estimate the distance between the detected human and the camera [40], the ratio of the box to the image along the y-axis is computed; this ratio, together with the tangent of the half-angle under which the object is seen (derived from the vertical field of view), yields the distance d to the camera. The victim is located appropriately only if its separation from the camera lies within the camera's depth operational range.
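The box-ratio distance estimate described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the assumed person height of 1.7 m, and the example field-of-view values are my own placeholders.

```python
import math

def estimate_distance(box_height_px, image_height_px, vfov_deg, person_height_m=1.7):
    """Estimate the camera-to-person distance from the detected box height.

    The box-to-image ratio r along the y-axis is combined with the tangent
    of the half-angle of the vertical field of view (pinhole model):
        d = person_height / (2 * r * tan(vfov / 2))
    """
    r = box_height_px / image_height_px        # ratio of box to image along y
    half_angle = math.radians(vfov_deg) / 2.0  # half-angle under which the object is seen
    return person_height_m / (2.0 * r * math.tan(half_angle))
```

A detection filling half of a 480-pixel-tall image with a 60° vertical FOV would place the person roughly 3 m away, which falls inside a typical RGB-D depth range of 0.5–5.0 m.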
Algorithm 1 Vision-Based Victim Localization Scheme

Input: RGB image and current camera pose.
1: Run the SSD detector to obtain a matrix of detected box coordinates and a vector of corresponding human-detection probabilities.
2: For each detected box: if its probability exceeds the detection threshold, sort the candidate detections in ascending order, project the detection point P onto the x-y plane, and append it to the list of victim locations.
3: For all map cells and all victim locations, update the cell's victim probability, yielding the vision-based victim map.
3.1.2. Thermal-Based Victim Localization
- Victim Detection: Thermal detection locates the human heat signature, which appears as infrared radiation, a part of the electromagnetic (EM) spectrum. This approach is useful when the human cannot be detected in RGB images, especially in dark lighting conditions. The method adopted here is a simple blob detector that locates blobs in a thermal image within the normal human temperature range. The procedure for thermal detection is presented in Figure 4 and is composed of three stages. In the first stage, the thermal image is converted into a mono-image. Then, a lower and an upper temperature threshold are applied to the mono-image to obtain a binary image. After that, contouring is performed to extract the regions that represent the human thermal location. The blob detector was implemented using OpenCV, where the minimum blob area is set to 20 pixels and the minimum adjacent distance between blobs is set to 25 pixels. A human face detection example is shown in Figure 4. A full human body can also be detected with the thermal detector by relaxing the choice of thresholds.
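The threshold-and-contour stages can be sketched with a minimal NumPy stand-in for the OpenCV blob detector used in the paper (a BFS over connected pixels replaces `cv2` contouring; the function name and temperature values are illustrative):

```python
import numpy as np
from collections import deque

def detect_thermal_blobs(thermal, t_low, t_high, min_area=20):
    """Threshold a thermal image to the human temperature band and
    extract connected blobs as bounding boxes (x_min, y_min, x_max, y_max)."""
    binary = (thermal >= t_low) & (thermal <= t_high)  # binary image stage
    visited = np.zeros_like(binary, dtype=bool)
    blobs = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                # BFS over 4-connected pixels = one contour region
                q = deque([(sy, sx)])
                visited[sy, sx] = True
                pixels = []
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if len(pixels) >= min_area:  # discard blobs below the minimum area
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    blobs.append((min(xs), min(ys), max(xs), max(ys)))
    return blobs
```

For example, a warm 6 × 6 patch at around 36.5 °C in an otherwise cold frame yields a single blob, while a 3 × 3 patch is rejected by the 20-pixel minimum-area rule.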
- Victim Mapping: After a box is resolved on the human thermal image, the center of the box is calculated in the image frame as the midpoint of the box corners. Since the thermal camera provides no depth information, a ray is cast from the camera through this center to update the map, as detailed in Algorithm 2.
Algorithm 2 Thermal-Based Victim Localization Scheme

Input: thermal image and current camera pose.
1: For every pixel of the thermal image, set the corresponding binary-image pixel to 1 if the pixel temperature lies between the lower and upper thresholds, and to 0 otherwise.
2: Extract contours from the binary image to obtain a matrix of detected box coordinates.
3: If any box was detected, then for every box c generate a 2D line (ray) that passes through the center of box c and terminates at the end of the sensing range, and append it to the list of rays.
4: For all map cells and all rays, update the cell's victim probability, yielding the thermal-based victim map.
3.1.3. Wireless-Based Victim Localization
- Victim Detection: A victim can be detected wirelessly by monitoring a signal transmitted from the victim's phone. In this configuration, the victim's phone is treated as a transmitter, while the wireless receiver is placed on the robot. If no obstacle lies between the transmitter and the receiver, the free-space propagation model predicts the received signal strength (RSS) in the line of sight (LOS) [41]; the RSS equation is derived from the Friis transmission formula [42]. Under the free-space model, the average received signal decreases logarithmically with the distance between transmitter and receiver, as it does in all real environments. The generalized path-loss model is therefore obtained by replacing the free-space path loss with a path-loss exponent n that depends on the environment, known as the log-distance path-loss model [41]:

PL(d) = PL(d₀) + 10 n log₁₀(d / d₀) + X_σ,

where d₀ is a reference distance and X_σ is the log-normal (zero-mean Gaussian) noise term. Due to this noise, the path loss differs even at the same location and distance d. To obtain a relatively stable reading, the RSS is recorded for K samples and then averaged to get the measured value. The victim is declared wirelessly identified if the estimated distance is less than a specific distance threshold; the RSS reading is high when the transmitter is close to the receiver, leading to a lower noise variance. Hence, under the log-normal model, it is sufficient to state that the victim is found when the measured distance satisfies this threshold condition. The same idea is used in the weighted least-squares approach, where high weights are given to small distances, leading to a lower RMSE in the localized node compared to the conventional least-squares approach, which assumes a constant noise variance across all measured distances [44].
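The averaging and model-inversion steps above can be sketched as follows. The reference path loss `pl0_dbm`, reference distance `d0`, and exponent `n` are environment-dependent assumptions for illustration, not values from the paper:

```python
def estimate_distance_from_rss(rss_samples_dbm, pl0_dbm=-40.0, d0=1.0, n=2.5):
    """Average K RSS samples and invert the log-distance path-loss model:
        RSS(d) = RSS(d0) - 10 n log10(d / d0)
    Averaging reduces the variance of the log-normal noise term."""
    rss_avg = sum(rss_samples_dbm) / len(rss_samples_dbm)
    return d0 * 10 ** ((pl0_dbm - rss_avg) / (10.0 * n))

def victim_detected(rss_samples_dbm, d_threshold=5.0, **model):
    """Declare a wireless detection only when the estimated distance is
    below the threshold, where RSS noise variance is low."""
    return estimate_distance_from_rss(rss_samples_dbm, **model) < d_threshold
```

With these assumed parameters, samples averaging −40 dBm map to the 1 m reference distance (a detection), while samples averaging −65 dBm map to about 10 m and are rejected by the 5 m threshold.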
- Victim Mapping: When RSS is used to locate an unknown node, a single measured distance is not sufficient because the unknown node can lie anywhere on a circle with radius equal to that distance. This is solved using trilateration, as shown in Figure 6. In 2D space, trilateration requires at least three distance measurements from anchor nodes; the location of the unknown node is the intersection of the three circles, as shown in Figure 6 [45]. Trilateration can fail when all the measured distances are large (and thus have high noise variance), which can yield a falsely located position. To alleviate this problem, an occupancy grid is used when updating the map. Each measured distance is compared against the distance threshold; if this criterion is met, the measurement is trusted and the map is updated with victim probability within a circle of radius equal to the measured distance. Otherwise, the measured distance is discarded. The updating process is shown in Figure 7.
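The circle-intersection step can be written as a standard least-squares trilateration, a common sketch of the technique (the function name is illustrative; the paper's exact solver is not specified):

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares trilateration in 2D.

    Each anchor i with measured distance d_i defines a circle
    (x - x_i)^2 + (y - y_i)^2 = d_i^2. Subtracting the last circle
    equation from the others linearizes the system into A p = b,
    which is solved for the unknown position p = (x, y)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[-1] - anchors[:-1])
    b = (d[:-1] ** 2 - d[-1] ** 2
         - anchors[:-1, 0] ** 2 + xn ** 2
         - anchors[:-1, 1] ** 2 + yn ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With three anchors at (0, 0), (4, 0), and (0, 4) and exact distances to the point (1, 1), the solver recovers (1, 1); with more than three anchors, the same formulation averages out measurement noise.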
3.1.4. Multi-Sensor Occupancy Grid Map Merging
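One plausible realization of the multi-sensor merge, consistent with the merged-map weights listed in the parameters table (vision 0.65, thermal 0.2, wireless 0.15), is a per-cell weighted average of the three victim-probability grids. This is a sketch under that assumption, not a statement of the paper's exact fusion rule:

```python
import numpy as np

def merge_victim_maps(vision, thermal, wireless, w=(0.65, 0.20, 0.15)):
    """Fuse per-sensor victim-probability grids into one merged map by a
    weighted average. All grids are assumed to share the same resolution
    and extent (resampling, if needed, happens before this step)."""
    return w[0] * vision + w[1] * thermal + w[2] * wireless
```

A cell where only the vision layer reports probability 1.0 yields a merged probability of 0.65, reflecting the dominant weight given to the vision sensor.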
3.2. Exploration
4. Experimental Results
4.1. Vehicle Model and Environment
4.2. Tests Scenario and Parameters
4.3. Regular Grid Sampling Approach (RGSA) Results
4.4. Adaptive Grid Sampling Approach (AGSA) Results
4.5. Discussion of Results
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Naidoo, Y.; Stopforth, R.; Bright, G. Development of an UAV for search & rescue applications. In Proceedings of the IEEE Africon’11, Livingstone, Zambia, 13–15 September 2011; pp. 1–6.
- Waharte, S.; Trigoni, N. Supporting Search and Rescue Operations with UAVs. In Proceedings of the 2010 International Conference on Emerging Security Technologies, Canterbury, UK, 6–7 September 2010; pp. 142–147.
- Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the sky: Leveraging UAVs for disaster management. IEEE Pervasive Comput. 2017, 16, 24–32.
- Kohlbrecher, S.; Meyer, J.; Graber, T.; Petersen, K.; Klingauf, U.; von Stryk, O. Hector Open Source Modules for Autonomous Mapping and Navigation with Rescue Robots. In RoboCup 2013: Robot World Cup XVII; Behnke, S., Veloso, M., Visser, A., Xiong, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 624–631.
- Milas, A.S.; Cracknell, A.P.; Warner, T.A. Drones—The third generation source of remote sensing data. Int. J. Remote Sens. 2018, 39, 7125–7137.
- Malihi, S.; Valadan Zoej, M.; Hahn, M. Large-Scale Accurate Reconstruction of Buildings Employing Point Clouds Generated from UAV Imagery. Remote Sens. 2018, 10, 1148.
- Balsa-Barreiro, J.; Fritsch, D. Generation of visually aesthetic and detailed 3D models of historical cities by using laser scanning and digital photogrammetry. Digit. Appl. Archaeol. Cult. Herit. 2018, 8, 57–64.
- Wang, Y.; Tian, F.; Huang, Y.; Wang, J.; Wei, C. Monitoring coal fires in Datong coalfield using multi-source remote sensing data. Trans. Nonferrous Met. Soc. China 2015, 25, 3421–3428.
- Kinect Sensor for Xbox 360 Components. Available online: https://support.xbox.com/ar-AE/xbox-360/accessories/kinect-sensor-components (accessed on 13 November 2019).
- FLIR DUO PRO R 640 13 mm Dual Sensor. Available online: https://www.oemcameras.com/flir-duo-pro-r-640-13mm.htm (accessed on 13 November 2019).
- Hahn, R.; Lang, D.; Selich, M.; Paulus, D. Heat mapping for improved victim detection. In Proceedings of the 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, Kyoto, Japan, 1–5 November 2011; pp. 116–121.
- González-Banos, H.H.; Latombe, J.C. Navigation strategies for exploring indoor environments. Int. J. Robot. Res. 2002, 21, 829–848.
- Cieslewski, T.; Kaufmann, E.; Scaramuzza, D. Rapid Exploration with Multi-Rotors: A Frontier Selection Method for High Speed Flight. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, 24–28 September 2017.
- Bircher, A.; Kamel, M.; Alexis, K.; Oleynikova, H.; Siegwart, R. Receding Horizon “Next-Best-View” Planner for 3D Exploration. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1462–1468.
- Portmann, J.; Lynen, S.; Chli, M.; Siegwart, R. People detection and tracking from aerial thermal views. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1794–1800.
- Riaz, I.; Piao, J.; Shin, H. Human detection by using centrist features for thermal images. In Proceedings of the IADIS International Conference Computer Graphics, Visualization, Computer Vision and Image Processing, Prague, Czech Republic, 22–24 July 2013.
- Uzun, Y.; Balcılar, M.; Mahmoodi, K.; Davletov, F.; Amasyalı, M.F.; Yavuz, S. Usage of HoG (histograms of oriented gradients) features for victim detection at disaster areas. In Proceedings of the 2013 8th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 28–30 November 2013; pp. 535–538.
- Sulistijono, I.A.; Risnumawan, A. From concrete to abstract: Multilayer neural networks for disaster victims detection. In Proceedings of the 2016 International Electronics Symposium (IES), Denpasar, Indonesia, 29–30 September 2016; pp. 93–98.
- Abduldayem, A.; Gan, D.; Seneviratne, L.D.; Taha, T. 3D Reconstruction of Complex Structures with Online Profiling and Adaptive Viewpoint Sampling. In Proceedings of the International Micro Air Vehicle Conference and Flight Competition, Toulouse, France, 18–21 September 2017; pp. 278–285.
- Xia, D.X.; Li, S.Z. Rotation angle recovery for rotation invariant detector in lying pose human body detection. J. Eng. 2015, 2015, 160–163.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Lecture Notes in Computer Science; Springer International Publishing: New York, NY, USA, 2016; pp. 21–37.
- Zhang, L.; Lin, L.; Liang, X.; He, K. Is Faster R-CNN Doing Well for Pedestrian Detection? arXiv 2016, arXiv:1607.07032.
- Louie, W.Y.G.; Nejat, G. A victim identification methodology for rescue robots operating in cluttered USAR environments. Adv. Robot. 2013, 27, 373–384.
- Hadi, H.S.; Rosbi, M.; Sheikh, U.U.; Amin, S.H.M. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–5.
- Hu, Z.; Ai, H.; Ren, H.; Zhang, Y. Fast human detection in RGB-D images based on color-depth joint feature learning. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 266–270.
- Utasi, Á.; Benedek, C. A Bayesian approach on people localization in multicamera systems. IEEE Trans. Circuits Syst. Video Technol. 2012, 23, 105–115.
- Cho, H.; Seo, Y.W.; Kumar, B.V.K.V.; Rajkumar, R.R. A multi-sensor fusion system for moving object detection and tracking in urban driving environments. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 1836–1843.
- Bongard, J. Probabilistic Robotics. Sebastian Thrun, Wolfram Burgard, and Dieter Fox. (2005, MIT Press.) 647 pages. Artif. Life 2008, 14, 227–229.
- Yamauchi, B. A frontier-based approach for autonomous exploration. In Proceedings of the 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA’97), Monterey, CA, USA, 10–11 July 1997; pp. 146–151.
- Heng, L.; Gotovos, A.; Krause, A.; Pollefeys, M. Efficient visual exploration and coverage with a micro aerial vehicle in unknown environments. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1071–1078.
- Paul, G.; Webb, S.; Liu, D.; Dissanayake, G. Autonomous robot manipulator-based exploration and mapping system for bridge maintenance. Robot. Auton. Syst. 2011, 59, 543–554.
- Al khawaldah, M.; Nuchter, A. Enhanced frontier-based exploration for indoor environment with multiple robots. Adv. Robot. 2015, 29.
- Karaman, S.; Frazzoli, E. Sampling-based algorithms for optimal motion planning. Int. J. Robot. Res. 2011, 30, 846–894.
- Verbiest, K.; Berrabah, S.A.; Colon, E. Autonomous Frontier Based Exploration for Mobile Robots. In Intelligent Robotics and Applications; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; pp. 3–13.
- Delmerico, J.; Isler, S.; Sabzevari, R.; Scaramuzza, D. A comparison of volumetric information gain metrics for active 3D object reconstruction. Auton. Robots 2018, 42, 197–208.
- Kaufman, E.; Lee, T.; Ai, Z. Autonomous exploration by expected information gain from probabilistic occupancy grid mapping. In Proceedings of the 2016 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), San Francisco, CA, USA, 13–16 December 2016; pp. 246–251.
- Batista, N.C.; Pereira, G.A.S. A Probabilistic Approach for Fusing People Detectors. J. Control Autom. Electr. Syst. 2015, 26, 616–629.
- Isler, S.; Sabzevari, R.; Delmerico, J.; Scaramuzza, D. An Information Gain Formulation for Active Volumetric 3D Reconstruction. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 3477–3484.
- Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. Available online: http://host.robots.ox.ac.uk/pascal/VOC/voc2007/workshop/index.html (accessed on 13 November 2019).
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2004.
- Pozar, D.M. Microwave and Rf Design of Wireless Systems, 1st ed.; Wiley: Hoboken, NJ, USA, 2000.
- Kulaib, A.R. Efficient and Accurate Techniques for Cooperative Localization in Wireless Sensor Networks. Ph.D. Thesis, Khalifa University, Abu Dhabi, UAE, 2014.
- Tran-Xuan, C.; Vu, V.H.; Koo, I. Calibration mechanism for RSS based localization method in wireless sensor networks. In Proceedings of the 11th International Conference on Advanced Communication Technology (ICACT 2009), Phoenix Park, Korea, 15–18 February 2009; Volume 1, pp. 560–563.
- Tarrío, P.; Bernardos, A.M.; Casar, J.R. Weighted least squares techniques for improved received signal strength based localization. Sensors 2011, 11, 8569–8592.
- Patwari, N.; Ash, J.; Kyperountas, S.; Hero, A.; Moses, R.; Correal, N. Locating the nodes: Cooperative localization in wireless sensor networks. IEEE Signal Process. Mag. 2005, 22, 54–69.
- Zeng, W.; Church, R.L. Finding shortest paths on real road networks: The case for A*. Int. J. Geogr. Inf. Sci. 2009, 23, 531–543.
- Koubâa, A. Robot Operating System (ROS): The Complete Reference (Volume 1), 1st ed.; Springer International Publishing: Heidelberg, Germany, 2016.
- Fankhauser, P.; Hutter, M. A universal grid map library: Implementation and use case for rough terrain navigation. In Robot Operating System (ROS); Koubaa, A., Ed.; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2016; pp. 99–120.
- Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An efficient probabilistic 3D mapping framework based on octrees. Auton. Robots 2013, 34, 189–206.
- Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
- Goian, A. Open Source Implementation of the Proposed Algorithm as a ROS Package. Available online: https://github.com/kucars/victim_localization (accessed on 13 November 2019).
- Ashour, R. Indoor Environment Exploration Using Adaptive Grid Sampling Approach, 2019. Available online: https://youtu.be/4-kwdYb4L-g (accessed on 13 November 2019).
- Ashour, R. Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure, 2019. Available online: https://youtu.be/91vIhmp8vYI (accessed on 13 November 2019).
Specs | RGB-D | Thermal | Specs | Wireless |
---|---|---|---|---|
HFOV | 80° | 60° | Type | isotropic |
VFOV | 60° | 43° | No. of samples | 150 |
Resolution | 640 × 480 px | 160 × 120 px | Frequency | Hz |
Range | 0.5–5.0 m | N/A | SNR | 5 dB |
Position y | −0.05 m | −0.09 m | Position | 0.05 m |
Pitch | 15° | 15° | - | - |
Regular Grid Sampling Parameters | Value |
---|---|
x displacement between samples | 0.8 |
y displacement between samples | 0.8 |
yaw step-size | |
Straight-line Collision Check Box size | 0.8 |
Adaptive Grid Sampling Parameters | Value |
---|---|
Adaptive scale factor base a | 2 |
Entropy change per cell | 0.1 m |
grid resolution | 0.3 m |
circular distance to goal | 0.3 m |
circular distance to robot in case of no-path found | 0.5 m |
Vision Map Parameters | Value | Thermal Map Parameters | Value | |
---|---|---|---|---|
map resolution | 0.5 | map resolution | 0.2 | |
0.8 | 0.7 | |||
0.1 | 0.3 | |||
0.2 | 0.3 | |||
0.9 | 0.7 | |||
SSD probability threshold | 0.8 | |||
minimum cluster size | 10 points | minimum cluster size | 40 pixels | |
minimum distance between clusters | 0.01 m | minimum distance between clusters | 30 pixels | |
Wireless Map Parameters | Value | Merged Map Parameters | Value | |
map resolution | 0.5 | map resolution | 0.5 | |
0.7 | vision map weight | 0.65 | ||
0.3 | thermal map weight | 0.2 | | |
0.3 | wireless map weight | 0.15 | | |
0.7 | ||||
wireless distance threshold | 5 m |
Scenario | VF | It | D (m) | TER |
---|---|---|---|---|
-vision | No | 120 | 101.1 | 368.9 |
-vision | Yes | 81 | 72.4 | 382.2 |
-vision | Yes | 77 | 68.0 | 440.9 |
-thermal | Yes | 100 | 88.1 | 2494.4 |
-thermal | Yes | 104 | 91.71 | 2067.2 |
-thermal | Yes | 98 | 83.46 | 2395.0 |
-wireless | No | 120 | 75.59 | 732 |
-wireless | No | 120 | 85.17 | 472.1 |
-wireless | No | 120 | 76.20 | 824.2 |
-merged | No | 120 | 98.46 | 2810 |
-merged | No | 120 | 17.24 | 587 |
-merged | Yes | 73 | 64.34 | 2344.3 |
Scenario | VF | It | D (m) | TER |
---|---|---|---|---|
-vision | No | 84 | 78.4 | 645.52 |
-vision | Yes | 81 | 72.4 | 718.06 |
-vision | Yes | 80 | 68.4 | 671.1 |
-thermal | Yes | 89 | 88.7 | 4608.2 |
-thermal | Yes | 104 | 91.7 | 4932.7 |
-thermal | Yes | 86 | 75.6 | 4752.1 |
-wireless | No | 120 | 223 | 490.8 |
-wireless | No | 120 | 115.4 | 245.8 |
-wireless | No | 120 | 223 | 490.8 |
-merged | Yes | 76 | 73.6 | 4459.4 |
-merged | No | 120 | 96.8 | 4959.8 |
-merged | Yes | 66 | 61.6 | 4772.7 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Goian, A.; Ashour, R.; Ahmad, U.; Taha, T.; Almoosa, N.; Seneviratne, L. Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure. Remote Sens. 2019, 11, 2704. https://doi.org/10.3390/rs11222704