Auditing Flood Vulnerability Geo-Intelligence Workflow for Biases
Abstract
1. Introduction
2. Geo-Intelligence Workflows in DRRM
2.1. Case Study: Flood Vulnerability Geo-Intelligence Workflow
2.1.1. Workflow Datasets and Vulnerability Indicators
2.1.2. Data Processing
2.1.3. Flood Damage Curves
2.1.4. DRRM Implementation
2.2. Case Study Ethical Concerns
3. Methodology
3.1. Types of Biases
3.2. Auditing
4. Model Cards and Datasheets
5. Results
5.1. Representation Bias
5.2. Aggregation Bias
6. Discussion
6.1. Data Biases vs. Model Biases
6.2. Overall Implications of the Biases
6.3. Strategies for Reducing Biases in Geo-Intelligence Workflows
7. Limitations
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Model Cards
Appendix B. Datasheets
References
Indicator | Minimum Value | Maximum Value |
---|---|---|
Age | 1 [20–30 yrs] | 7 [80–90 yrs] |
Health (fit for work?) | 0 (yes) | 1 (no) |
Level of education | 1 (none) | 4 (training college) |
Household size | 1 | 13 |
Wealth | 1 (Poor) | 3 (Poorest) |
Building Type | Physical Vulnerability Score | Social Vulnerability Score | Social Vulnerability Index | Integrated Vulnerability Score
---|---|---|---|---
Permanent | 1 | 1 | 1 | 0.067
Permanent | 1 | 2 | 2 | 0.133
Permanent | 1 | 3 | 3 | 0.200
Permanent | 1 | 4 | 4 | 0.266
Permanent | 1 | 5 | 5 | 0.333
Semi-permanent | 2 | 1 | 6 | 0.400
Semi-permanent | 2 | 2 | 7 | 0.466
Semi-permanent | 2 | 3 | 8 | 0.536
Semi-permanent | 2 | 4 | 9 | 0.603
Semi-permanent | 2 | 5 | 10 | 0.666
Traditional | 3 | 1 | 11 | 0.737
Traditional | 3 | 2 | 12 | 0.800
Traditional | 3 | 3 | 13 | 0.866
Traditional | 3 | 4 | 14 | 0.933
Traditional | 3 | 5 | 15 | 1.000
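The pattern in the table above suggests the social vulnerability index is derived from the two scores and the integrated score is that index normalized by its maximum of 15. A minimal sketch of that rule (function and variable names are ours, not from the paper; a few table entries, e.g. 0.536 and 0.737, deviate slightly from it, so treat it as an approximation):

```python
def integrated_vulnerability(physical: int, social: int) -> float:
    """Combine a physical vulnerability score (1-3) and a social
    vulnerability score (1-5) into a single value on [0, 1],
    assuming index = (physical - 1) * 5 + social, normalized by 15."""
    index = (physical - 1) * 5 + social
    return round(index / 15, 3)

# A permanent building (physical score 1) with social score 1
# gives 1/15 ≈ 0.067, matching the first table row.
```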
Wall Material | Precision | Recall | F1-Score | Overall Accuracy | Kappa
---|---|---|---|---|---
Bricks | 0.99 | 0.66 | 0.80 | 0.84 | 0.68
Concrete | 0.77 | 0.99 | 0.87 | |
Roof Material | Precision | Recall | F1-Score | Overall Accuracy | Kappa
---|---|---|---|---|---
Thatch | 0.67 | 0.49 | 0.57 | 0.81 | 0.51
Iron sheet | 0.85 | 0.92 | 0.88 | |
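The per-class metrics reported above follow the standard definitions (overall accuracy and Cohen's kappa apply to the whole confusion matrix, which is why they appear once per table). A small sketch computing them from a 2×2 confusion matrix; the counts in the example are illustrative, not the ones behind the tables:

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F1, overall accuracy, and Cohen's kappa
    for a two-class confusion matrix (positive class first)."""
    n = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_chance = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return precision, recall, f1, accuracy, kappa

# Illustrative matrix: 40 TP, 10 FP, 10 FN, 40 TN.
p, r, f1, acc, kappa = binary_metrics(40, 10, 10, 40)
```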
CNN Classification Classes | Household Survey Data Classes
---|---
Bricks | Bricks (59.34%)
Concrete | Bricks and mud (14.2%)
 | Bamboo (5.79%)
 | Mud (3.72%)
 | Other combinations (16.95%)
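The mismatch in the table above can be quantified as the share of surveyed wall materials that fall outside the CNN's training classes. A rough sketch using the survey percentages; treating only the "Bricks" survey class as covered by the CNN is our assumption for illustration:

```python
# Survey shares of wall materials (percent), from the table above.
survey = {
    "Bricks": 59.34,
    "Bricks and mud": 14.2,
    "Bamboo": 5.79,
    "Mud": 3.72,
    "Other combinations": 16.95,
}

# Survey classes assumed to map onto the CNN's training classes.
covered_classes = {"Bricks"}

# Share of households whose wall material the CNN never saw in training.
uncovered = sum(share for cls, share in survey.items() if cls not in covered_classes)
print(f"Share of households outside the CNN's classes: {uncovered:.2f}%")
```

Under this reading, roughly 40% of surveyed households have a facade material the classifier was never trained on, which is the representation bias the audit flags.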
Build Type | Roof Type | Wall Material
---|---|---
Permanent | Iron-sheet | Brick, Concrete
Semi-permanent | Thatch | Brick
Semi-permanent | Thatch | Concrete
Semi-permanent | Thatch | Brick and mud/bamboo/grass
Semi-permanent | Iron-sheet | Brick and mud/bamboo/grass
Traditional | Thatch | Mud, Bamboo, Grass
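The roof/wall combinations above amount to a lookup from observed materials to build type. A minimal sketch of that mapping (the set memberships are our reading of the table; combinations not listed there default to "Traditional" here):

```python
def build_type(roof: str, wall: str) -> str:
    """Classify a building from roof and wall materials,
    following the combinations in the table above."""
    durable_walls = {"Brick", "Concrete"}
    mixed_walls = {"Brick and mud", "Brick and bamboo", "Brick and grass"}
    if roof == "Iron-sheet" and wall in durable_walls:
        return "Permanent"
    if wall in durable_walls or wall in mixed_walls:
        return "Semi-permanent"  # e.g. a thatch roof on brick walls
    return "Traditional"  # thatch roof with mud, bamboo, or grass walls
```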
Data Collection Stage | Data Processing | Link to Damage Curves | Implementation
---|---|---|---
No representation or aggregation biases were found at this stage in our assessment. | Representation bias: (1) the performance of the OBIA models causes representation bias against thatched-roof buildings downstream in the workflow; (2) the CNN training data contained only brick and concrete facades, excluding other facade materials. | Aggregation bias occurs in the categorization of house typologies: the semi-permanent group lumps together many build-type variations, which obscures the flood damage trend for each individual build type. | Biases from the previous steps are reinforced, since the workflow only caters to households living in permanent and semi-permanent buildings; households living in traditional buildings remain invisible.
© 2024 by the authors. Published by MDPI on behalf of the International Society for Photogrammetry and Remote Sensing. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Masinde, B.K.; Gevaert, C.M.; Nagenborg, M.H.; van den Homberg, M.J.C.; Margutti, J.; Gortzak, I.; Zevenbergen, J.A. Auditing Flood Vulnerability Geo-Intelligence Workflow for Biases. ISPRS Int. J. Geo-Inf. 2024, 13, 419. https://doi.org/10.3390/ijgi13120419