1. Introduction
The United Nations Office for Disaster Risk Reduction (UNDRR) has reported significant increases in the frequency and intensity of natural disasters over the past few decades [1]. This trend severely threatens human life, infrastructure, and ecosystems, with wide-ranging socio-economic impacts [2]. Contributing factors include climate change, urbanization, deforestation, and unsustainable land use practices [3,4], which worsen the vulnerability of landscapes and populations [5,6]. For example, Swain et al. (2020) showed that, under high-warming scenarios, climate change has produced a 20% increase in the magnitude and an over 200% increase in the frequency of 100-year precipitation events, resulting in a 30–127% increase in population exposure [7]. Quintero et al. (2018) likewise emphasized that climate change has altered the frequency and intensity of extreme precipitation events, leading to more frequent and severe floods, especially in vulnerable regions [8]. Urbanization increases impervious surfaces, reduces natural water absorption, and concentrates populations in flood-prone areas: in Jakarta, Indonesia, rapid urban expansion and inadequate infrastructure have increased flooding risk, affecting over 1.5 million residents annually [9]. Similarly, deforestation has been linked to heightened landslide risk; a study in the Amazon basin found that regions with over 20% deforestation experienced a 45% increase in landslide occurrences compared with less-disturbed areas [10]. Moreover, land abandonment resulting from poor land management practices further destabilizes slopes, increasing vulnerability to landslides: Persichillo et al. (2017) reported that landslide incidents in the internal regions of Italy have doubled in the last decade as a consequence of these practices [11].
This change can be addressed on two fronts. First, developing effective mitigation and adaptation strategies is imperative, which requires a sound understanding of the mechanisms behind natural disasters [12,13]. Second, the technological tools available today enable better management of static and dynamic territorial data, supporting more effective emergency management and the planning of protective measures [14,15].
Geographic Information Systems (GISs) have revolutionized the way spatial data are collected, analyzed, and visualized. Traditionally, GIS applications were confined to desktop environments, which limited access and collaboration owing to costly licenses [16]. More recently, online technologies have enabled the creation of Web-GIS applications. These tools bring the capabilities of traditional GISs to a wider audience, often without any licensing cost, by leveraging the internet to offer real-time access to spatial data and analysis tools [17]. This breakthrough has made GISs more accessible, enabling a broader spectrum of users, from professionals to laypersons, to interact with geospatial data without specialist software or hardware.
The development of Web-GIS applications has advanced significantly in recent years [14]. Modern platforms, such as ArcGIS Online, Google Earth Engine, QGIS Web, and OpenLayers, offer comprehensive toolsets for mapping, data visualization, and spatial analysis, each with distinct strengths and weaknesses. ArcGIS Online excels at creating interactive maps that include complex simulation results; it provides robust analytics for sophisticated geospatial analyses and user-friendly features for real-time data processing. Its broad user base makes it a popular choice, especially for organizations that need to manage a wide range of complex geospatial information [18]. At the same time, it requires costly licenses, which may deter smaller projects or research initiatives. Google Earth Engine, in contrast, is renowned for handling massive datasets and performing cloud-based analysis, facilitating planetary-scale environmental monitoring through satellite imagery and advanced processing algorithms [19]. While it offers significant advantages in scalability and access to historical satellite data, users may face a steeper learning curve, particularly when customizing analyses for specific research questions. QGIS Web extends the functionality of the QGIS desktop application, enabling users to publish their GIS projects online. It is an open-source tool with features comparable to ArcGIS Online, and its free availability makes it popular across many research fields [20]; however, it may lack some of the advanced features and user support found in commercial platforms such as ArcGIS. OpenLayers, meanwhile, is a flexible JavaScript library for integrating and visualizing geographic data. It supports various geospatial data formats and web services, such as the WMS (Web Map Service) and the WFS (Web Feature Service), and is typically used to build or extend Geographic Information Systems, though it may require more technical expertise to implement effectively [21]. Each of these platforms supports APIs (Application Programming Interfaces) for customization and integration with other web services, enabling users to tailor applications to specific needs. However, these systems also present challenges, such as varying levels of user-friendliness and performance in handling large datasets. The uniqueness of the system presented in this study therefore lies in its targeted approach to disseminating results for different natural hazards with direct reference to the actual territory. This effort aims to bridge existing gaps in accessibility and knowledge sharing within the community, ultimately enhancing public awareness of and engagement with territorial risks.
Advances in user interface design have improved the usability of Web-GIS applications [22,23]. Interactive maps, responsive design, and intuitive controls have made these applications more accessible to non-technical users. Additionally, the integration of Web-GIS with technologies such as the Internet of Things (IoT), remote sensing, and artificial intelligence has expanded its potential applications, from environmental monitoring to urban planning and disaster management [24]. The integration of real-time data acquisition technologies with Web-GIS applications and Building Information Models (BIMs) is advancing the development of territorial-scale Digital Twins (DTs). Digital Twins support continuous interaction and the exchange of extensive datasets, enabling advanced simulations that enhance disaster resilience. By providing decision-makers with actionable insights into natural hazards, Digital Twins support the development of proactive mitigation strategies [25,26]. Furthermore, they facilitate collaboration among stakeholders by ensuring coordinated responses to evolving disaster scenarios. However, challenges remain, particularly the limited accessibility of dynamic data and the integration of GIS and BIM models. These obstacles continue to hinder the creation of comprehensive Digital Twins of territories, despite ongoing efforts in the scientific literature to address them [14]. Nonetheless, GISs still provide the reference graphical database that can be exploited for this purpose.
Web-GIS applications offer numerous advantages over traditional desktop GISs. A primary benefit is accessibility: users can reach geospatial data and tools from any location with an internet connection, facilitating collaboration and work planning. This is especially valuable when stakeholders are geographically dispersed or when immediate access to spatial information is essential for managing an emergency [27]. Web-GIS applications also scale well: by using cloud infrastructure, they can handle large volumes of data and perform complex analyses that would be challenging on desktop machines [28].
In addition, Web-GIS applications significantly improve the sharing of real-time data. By directly integrating IoT sensors or processing data in real time, they ensure that users always have access to the latest information, enabling shared decision-making and more informed, timely choices in emergency management and activity planning [29].
In this paper, we describe a Web-GIS application developed explicitly for hydrogeological risk management within the NODES—Nord Ovest Digitale e Sostenibile project, funded under the National Recovery and Resilience Plan (NRRP) [30]. This project aims to promote digitalization and sustainability in the Northwest region of Italy, including the creation of advanced tools for analyzing and managing environmental risks. The Web-GIS application integrates geospatial and dynamic data, providing an interactive platform for assessing and planning prevention and response measures for hydrogeological disasters. By combining the strengths of GISs with the accessibility and scalability of web technologies, it aims to improve risk management and support informed decision-making for greater territorial resilience. Our research highlights the evolution of Web-GIS applications, which represent a significant advancement in geospatial technology capable of addressing contemporary spatial challenges and enabling novel solutions across multiple domains.
2. Materials and Methods
2.1. Study Area—Cervo Valley
The Cervo Valley, located in the province of Biella in Northwest Italy, was chosen for this case study. The region is characterized by a striking contrast between its mountainous terrain and plains, the result of its diverse topography. The area is prone to frequent flooding and landslides, which have significant impacts on its socio-economic fabric: between 2000 and 2020, floods and landslides in the region caused over EUR 50 million in damage to infrastructure, displaced thousands of residents, and disrupted local economies. Historical records, which have become more consistent since the early 1900s, indicate that these natural hazards have resulted in over 100 fatalities, 500 injuries, and the destruction of critical infrastructure such as roads and bridges [31]. Furthermore, the economic impact is worsened by the long-term disruption of agricultural activities and tourism, both vital sectors in the area.
The valley extends from the source of the Cervo Stream in Piedicavallo to the city of Biella. It borders the Aosta Valley Region, Vercelli Province, and Turin Province, covering approximately 100 km². A distinct division between the Upper and Lower Cervo Valley characterizes the valley: the Upper Valley features rugged terrain with a sparse population and small villages, while the Lower Valley is more densely populated, with significant residential and industrial development. The Cervo Stream, central to the valley’s hydrology, flows through a drainage basin of approximately 9943 km² and extends about 64 km [32]. The stream experiences a highly variable flow regime with significant peak discharge rates: for a 100-year return period, peak flows at the Silit cross-section can reach around 900 m³/s [33], while studies indicate flows of up to 1047 m³/s for a 200-year return period at the Passobreve hydrometric station [34]. In addition to the Cervo Stream, the valley’s hydrographic network includes 17 tributaries, the most significant being the Mologna, Irogna, and Rio Valdescola.
Geologically, the valley was shaped approximately 30 million years ago by an extensive solidification of magma, which produced vast bodies of volcanic rocks, granites, and metamorphic volcanic rocks, primarily syenites [35].
2.2. Data Collection
Creating web maps requires multiple raster and vector datasets representing the territory of interest as well as the areas potentially at risk from landslides and flooding events. For our case study, we obtained the initial data from two primary national websites:
Geoportale Piemonte: This online platform, provided by Regione Piemonte, allows users to download and visualize updated spatial data related to infrastructure, land use, and cartography [36].
Arpa Piemonte: This online platform is managed by the regional agency for environmental protection in Piedmont, which is responsible for collecting environmental data, including atmospheric, hydrological, and meteorological monitoring [37].
The quality of the data used is fundamental, as it directly influences the accuracy of the results. Geoportale Piemonte and ARPA Piemonte are highly reliable sources, verified at both national and regional levels, and both portals adhere to rigorous quality standards that ensure the validity and reliability of the information. However, the accuracy of research results depends heavily on the timeliness and granularity of the data: temporal or spatial gaps may introduce errors or uncertainties into the models used for hydrogeological risk analysis, sometimes requiring integration with additional data or the use of interpolation techniques to compensate for deficiencies [38].
To accurately recreate the surface under consideration, two distinct Digital Terrain Models (DTMs) were acquired from Geoportale Piemonte. These DTMs were created using Light Detection and Ranging (LIDAR) technology, adhering to the Level 4 uniform standard method. The first DTM, with a 25 m resolution, is used to create the Piedmont Web Map, providing a general overview of the hydrogeological situation of the entire region. The second DTM, with a 5 m resolution (Figure 1), is used to develop the Cervo Valley Web Map, enabling precise real-time hydrogeological risk analysis. This distinction was necessary to manage data efficiently by limiting file sizes.
Subsequently, various vector datasets were obtained from Geoportale Piemonte. The historical data, covering 1952 to 1966, were derived from the Military Geographic Institute (IGM) at a scale of 1:100,000, while data on streets, buildings, green areas, the hydrographic network, and municipal boundaries were accessed through the Reference Spatial Database (BDTRE) via the Geo-Topographic Database on Geoportale Piemonte; these data were most recently updated on 28 February 2023. Disaster framework data were also downloaded. This dataset is based on the ‘Piano per l’Assetto Idrogeologico’ (PAI)—Annex 2 (Atlas of Hydraulic and Hydrogeological Risks), a strategic plan for hydrogeological planning that addresses hydraulic and hydrogeological hazards [39]. The acquisition and georeferencing of data on the spatial distribution of current and historical disaster processes enable the creation of representative cartography depicting the distribution of instability phenomena across the entire basin territory, including landslide areas, flood-prone zones, debris cones, and avalanche zones throughout the Piedmont Region. As detailed in the PAI document itself, flood return periods for the main watercourses, in both the plain and mountain valley sections, are set at 20, 100, 200, and 500 years. For landslide events, hazard analysis is conducted using cartographic zoning at a scale of 1:25,000, as detailed in the ‘Cartographic Delineation of Landslide Areas’, which defines land use limitations for regulatory purposes. The plan was approved on 24 May 2001 and has since been updated through urban planning instruments (PRG—General Regulatory Plans).
The ARPA Piemonte website was used to obtain precipitation data from two locations: Piedicavallo, in the Upper Valley, and Biella, at the valley’s terminus, as highlighted in Figure 1. These locations correspond to the positions of pluviometer sensors within the valley, providing critical data for hydrological analysis. The data were collected using API calls, a mechanism for sharing information between different software systems, in this case between the ARPA Piemonte website and the web maps, governed by a set of definitions and protocols.
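The request/parse pattern behind such API calls can be sketched in a few lines. The endpoint, query parameters, and JSON field names below are illustrative placeholders, not the actual ARPA Piemonte API, which should be consulted for the real paths and schema:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: the real ARPA Piemonte API path and parameters differ.
BASE_URL = "https://example-arpa-piemonte.it/api/v1/measurements"

def build_request_url(station_id, date_from, date_to):
    """Compose the query URL for one pluviometer station (illustrative only)."""
    return (f"{BASE_URL}?station={station_id}"
            f"&from={date_from}&to={date_to}&param=precipitation")

def parse_precipitation(payload_text):
    """Extract (timestamp, millimetres) pairs from a JSON response body."""
    payload = json.loads(payload_text)
    return [(rec["time"], float(rec["value_mm"])) for rec in payload["data"]]

def fetch_precipitation(station_id, date_from, date_to):
    """Perform the HTTP GET and parse the body (requires network access)."""
    with urlopen(build_request_url(station_id, date_from, date_to)) as resp:
        return parse_precipitation(resp.read().decode("utf-8"))
```

Decoupling URL construction, fetching, and parsing keeps the parser testable offline and makes it easy to swap the placeholder endpoint for the real one.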
2.3. Data Processing
The objective of this research is to move towards a simplified Digital Twin for managing data related to hydrogeological risk. In this initial phase, a Web-GIS application was developed, enabling visualization of and interaction with both static and dynamic data.
QGIS was selected for map creation thanks to its high flexibility and interactive user interface. Additionally, custom Python scripts were developed to generate both static and dynamic map layers and to evaluate the efficiency and effectiveness of the QGIS tools employed for data management. As widely described in the literature and confirmed during this research project, QGIS and Python integrate seamlessly, offering a robust combination for large-scale geospatial analysis. On one hand, GIS methodologies, such as those supported by QGIS, have been developed explicitly to handle large-scale spatial data and complex geospatial processes; on the other, Python is versatile enough to operate effectively on both large datasets and smaller-scale analyses, depending on the data and the specific requirements of each script. This integration enables researchers and professionals to run powerful spatial analyses efficiently, enhancing productivity and scalability. QGIS 3.36.3 and Python 3.9 therefore offer a highly effective combination across a wide range of scales and complexities.
The architecture proposed in this paper manages data across two distinct levels. As described in Figure 2, the first level focuses on the selection, organization, and management of files to build a GIS-organized database; this process is carried out using QGIS 3.36.3. The second level involves deploying this well-structured database onto a web server, making it accessible to multiple clients, using dedicated QGIS plugins in combination with Python scripts. This two-tiered approach ensures efficient data handling, from initial organization to broad accessibility.
The first level aims to create a dedicated workspace for the developers of future Web-GIS applications, where all necessary data and files can be collected, modified, and organized. These data include vector information (roads, rivers, and infrastructures of interest) and raster data (elevation maps, flood area maps, etc.). With QGIS, it is possible to perform complex spatial analyses, such as overlaying layers, calculating distances, or identifying areas at risk. Additionally, QGIS offers advanced tools for symbolization and layer customization, making it possible to represent data in a visually intuitive way. After the data are processed and symbolized, a QGIS project is created to represent the final maps, complete with labels, styles, and informational pop-ups. This project can be viewed and modified within QGIS, providing full control over the appearance and content of the map.
For the Cervo Valley case study, two different web maps were created. The first is based on a Digital Terrain Model (DTM) with a 25 m resolution covering the entire Piedmont Region; it provides a broad overview of the hydrogeological landscape across Piedmont, incorporating general information such as the hydrographic network and landslide distribution. The second map, which serves as the primary tool, was developed using a high-resolution 5 m DTM and focuses exclusively on the Cervo Valley area. In this map, all previously mentioned data have been imported and meticulously processed to enable real-time hydrogeological risk analysis. To enhance manageability and optimize file size, sub-maps of Cervo Valley were created to facilitate more focused analyses. Moreover, to avoid truncating the model at the watershed boundary of Cervo Valley, the surrounding areas were included in these maps to improve visualization. For the northern area bordering the Aosta Valley, an additional DTM was needed, as this region belongs to a different administrative area. This supplementary raster, sourced from the Aosta Valley Geoportal [40], enhances the overall results. Since these rasters are used solely for completeness and do not affect the static or dynamic data, they have a 25 m resolution to minimize file size.
The same raster and shapefiles used in the 2D map were also utilized to generate a 3D map based on the 5 m DTM. This 3D map greatly enhances terrain visualization, especially for non-technical audiences. However, the imported shapefiles lacked the necessary elevation data for accurate 3D visualization. To address this, a specialized Python script was employed to integrate the required elevation information.
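The paper's elevation-integration script is not reproduced here, but its core operation can be sketched: sampling the DTM grid at a feature's (x, y) coordinates so that the value can be written to an elevation attribute. The grid layout and function below are a minimal, assumed stand-in (nearest-neighbour lookup on a north-up raster), not the project's actual code:

```python
import math

def sample_elevation(dem, origin_x, origin_y, cell_size, x, y):
    """Nearest-neighbour DTM lookup. `dem` is a row-major grid whose top-left
    corner is at (origin_x, origin_y); rows increase southward (north-up)."""
    col = math.floor((x - origin_x) / cell_size)
    row = math.floor((origin_y - y) / cell_size)
    if 0 <= row < len(dem) and 0 <= col < len(dem[0]):
        return dem[row][col]
    return None  # point falls outside the raster extent

# Toy 3x3 DTM with 5 m cells, top-left corner at (0, 15):
dem = [[100, 101, 102],
       [ 98,  99, 100],
       [ 95,  96,  97]]
z = sample_elevation(dem, 0, 15, 5, x=7.5, y=7.5)  # -> 99 (row 1, col 1)
```

In practice, each 2D shapefile vertex would be run through such a lookup and rewritten with its sampled z value before loading the layer into the 3D map.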
2.3.1. Data Management: Geometric Tools
This phase focuses on reprocessing the files collected during the data collection stage to optimize and streamline the Cervo Valley model. The files are modified using QGIS tools, which are detailed in this section.
Union. Available data include various types of streets (such as vehicular areas, secondary mixed-traffic roads, pedestrian zones, and transportation infrastructures), buildings (including general structures, sports facilities, minor constructions, roofing elements, industrial sites, and monuments), as well as hydrographic networks (encompassing both lakes and rivers). These layers are available for all municipalities within Cervo Valley, resulting in a substantial number of layers in the maps. To improve user experience and reduce the number of layers, the union tool has been utilized. This process involves selecting two different layers in the command window and executing the union command.
Intersection. This tool has been utilized to identify buildings and streets at risk from flooding, landslides, and debris flow cones according to the PAI disaster framework. The command enables the creation of a new layer based on the intersection of two specifically selected layers. To execute the process, two overlapping layers must be selected in the command window: a layer containing a class of infrastructure (such as roads, public buildings, etc.) is overlaid with a layer representing a specific environmental risk (such as flood areas, landslide-prone areas, etc.). The result is a new layer that highlights only the infrastructure exposed to the particular risk under consideration. To improve map usability, different colors have been applied to represent the various risks, making it easier to distinguish between them.
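QGIS performs these overlay operations internally; as a self-contained illustration of what an intersection computes, the following pure-Python sketch clips a toy building footprint against a flood polygon using the Sutherland–Hodgman algorithm. This simplified version handles convex clip polygons only, whereas the real PAI layers are arbitrary polygons:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip a subject polygon against a CONVEX clip
    polygon. Polygons are lists of (x, y) vertices in counter-clockwise order."""
    def inside(p, a, b):
        # True if p lies on or left of the directed edge a->b (CCW convex clip).
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        dx1, dy1 = q[0]-p[0], q[1]-p[1]
        dx2, dy2 = b[0]-a[0], b[1]-a[1]
        t = ((a[0]-p[0])*dy2 - (a[1]-p[1])*dx2) / (dx1*dy2 - dy1*dx2)
        return (p[0] + t*dx1, p[1] + t*dy1)
    output = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

def polygon_area(poly):
    """Shoelace area, used to quantify the exposed part of a footprint."""
    n = len(poly)
    s = sum(poly[i][0]*poly[(i+1) % n][1] - poly[(i+1) % n][0]*poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

# A building footprint partially inside a (toy) flood polygon:
building = [(0, 0), (4, 0), (4, 2), (0, 2)]
flood    = [(2, -1), (6, -1), (6, 3), (2, 3)]
exposed  = clip_polygon(building, flood)  # portion of the building in the flood zone
```

Running this on each footprint and keeping only non-empty results mirrors, in miniature, how an intersection layer of at-risk buildings is built.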
Clip. This command is essential for clipping the 5 m precision Digital Terrain Model (DTM) to focus specifically on the Cervo Valley region and to reduce the file size. Clipping in this context involves extracting a specific portion of the DTM that corresponds to the area of interest—in this case, the Cervo Valley—while removing the surrounding areas that are not relevant. By inputting the entire DTM and manually defining the area to be clipped, the process efficiently isolates the required region. This results in a more manageable file size and enhances the efficiency of data processing.
Difference. To accurately represent the PAI disaster framework in Cervo Valley, the difference tool has been applied. This tool operates by taking the perimeter of each municipality within Cervo Valley and overlaying it with the PAI disaster framework. Executing the command removes areas outside the overlapping regions. This command is necessary since QGIS cannot directly clip or overlay raster files with vector files. Therefore, it is not feasible to clip the PAI disaster framework layer to align with the raster file of the entire Cervo Valley. Conversely, using the difference tool with the municipality perimeter for this operation is the most practical approach. An alternative method could involve defining borders through watersheds, which is particularly useful when the municipality boundaries do not precisely match the watershed boundaries.
MMQGIS plugin. Since the union tool merges layers only two at a time, the MMQGIS plugin was used to reduce the time required for this process. This plugin focuses on the basic operations needed to unify different data. It features an intuitive user interface and can be used to create or modify animations, combine layers (as in this case), create Voronoi diagrams, import/export CSV files, and produce other additional elements. Here, the combine tool is adopted: it requires selecting multiple layers and defining the name of the new shapefile obtained by combining them, after which the result is automatically imported into QGIS. With this plugin, the features are maintained and properly organized; all the features present in the different layers are collected in the new one, and in case of similar names, the plugin automatically enumerates the features, ensuring no information is lost. Using this plugin, all municipalities’ streets and buildings subject to floods, debris flow cones, and landslides were merged into three single layers.
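The enumeration behaviour described above, renaming same-named attributes so nothing is overwritten, can be illustrated with a small sketch. The function and the `name_1`, `name_2` scheme are assumptions for illustration; the actual plugin's naming rules may differ:

```python
def merge_field_names(layer_fields):
    """Combine the attribute-field lists of several layers into one schema,
    enumerating duplicates (name, name_1, name_2, ...) so no field is lost.
    Illustrative only; not the MMQGIS plugin's actual implementation."""
    merged, seen = [], {}
    for fields in layer_fields:
        for name in fields:
            if name not in seen:
                seen[name] = 0
                merged.append(name)
            else:
                seen[name] += 1
                merged.append(f"{name}_{seen[name]}")
    return merged

streets_a = ["id", "name", "width"]
streets_b = ["id", "name", "surface"]
schema = merge_field_names([streets_a, streets_b])
# -> ['id', 'name', 'width', 'id_1', 'name_1', 'surface']
```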
2.3.2. GRASS Process Algorithm for Drainage Area Calculation
A key parameter for flood management and land cover planning is the drainage area. In the scientific literature, particularly within hydrology, a “drainage area” (also referred to as a “watershed” or “catchment area”) is defined as the geographical region where all surface water converges to a single outlet [41]. This area encompasses all land where water accumulates and flows towards a common point. Given its significance in flood risk assessment and prevention, the drainage area is an essential component of our flood risk model for Cervo Valley: including this parameter enables a more accurate analysis of potential flood impacts and supports better-informed decision-making for flood management strategies. To calculate the drainage area for each selected point in the model, we employed the Geographic Resource Analysis Support System (GRASS) algorithm [42]. This tool, integrated with QGIS, enables users to generate thematic maps from remote sensing data, as suggested by Lacaze et al. [43]. In the model described here, the GRASS algorithm was specifically used to calculate the drainage area at key outlet points and to add a corresponding layer displaying these values, which can then be visualized in the model by clicking on the designated locations. Key locations include Piedicavallo and Biella, strategically selected to assess the variation in the drainage basin from the beginning to the end of the valley. Additionally, significant sites such as the Saint Giovanni Andorno Sanctuary in the Campiglia Cervo municipality and the Maurizio Sella wool mill in the Biella municipality were included to evaluate the impact on important cultural and industrial landmarks. The process of computing drainage areas using GRASS involves the following key steps, as also shown in Figure 3:
Filled Digital Terrain Model (DTM): Remove any sinks in the DTM using the r.fill.dir command. With the DTM raster file as the input parameter, this command generates a depression-less terrain model, which is then saved in GeoTIFF format. This step is essential for accurate drainage area analysis, as described by S.K. Jenson and J.O. Domingue [44].
Flow Accumulation and Direction: Calculate the flow accumulation and flow direction using the r.watershed tool. This step determines how water flows across the terrain and identifies potential river channels by applying a threshold flow accumulation value. Users must specify the minimum size of the exterior watershed basin and select the single flow direction (D8) option. The minimum size parameter sets a threshold for watersheds, considering only those with at least this number of contributing cells [45]. For this analysis, a threshold of 500 cells is applied, which is deemed suitable for a preliminary drainage area assessment. According to GIS OpenCourseWare [46], this step would normally be repeated for different threshold values until the best match is obtained with the rivers on a reference map or satellite image. With the single flow direction (D8) option, the tool calculates the slope to each of the 8 surrounding cells for every raster cell and assigns the flow direction to the cell with the highest slope; if the highest slope is flat and occurs in multiple directions, the tool selects an alternative direction based on the flow directions of adjacent cells [44]. Zero values indicate depression areas, while negative values indicate surface runoff exiting the current geographic region. It is therefore necessary to convert these negative values to absolute values:
D_abs(i,j) = |D(i,j)|,
where D(i,j) is the flow direction value of the cell in row i and column j. The new absolute flow direction file is then saved as a TIFF file.
River Channel Extraction: At this stage, each cell in the Cervo Valley study area is associated with a flow accumulation value, although not all cells are part of a river channel. Stream channels are defined using a flow accumulation threshold that best represents river development in the area—500 cells in this case. The r.stream.extract command is used for this purpose and can be applied to both raster and vector formats. Consequently, streams are initiated wherever the flow accumulation meets or exceeds the defined threshold.
Outlet Point Referencing: To precisely delineate drainage areas, the r.water.outlet tool is employed. By selecting specific outlet points on the map, the tool automatically calculates and saves the corresponding drainage basins as a TIFF file.
Conversion to Shapefile: Finally, it is necessary to convert the drainage area from a raster (TIFF) to a vector (shapefile) format using the r.to.vect command. This conversion enables more detailed analysis and visualization of the watershed’s boundaries.
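The D8 rule used in the flow-direction step can be sketched in pure Python. This is a toy stand-in for what r.watershed computes, without its tie-breaking, diagonal-distance bookkeeping for accumulation, or depression handling; each cell simply drains toward its steepest-descent neighbour:

```python
import math

def d8_flow_direction(dem):
    """Return, for each interior cell of a row-major elevation grid, the
    (drow, dcol) offset of the steepest-descent neighbour, or None for pits."""
    rows, cols = len(dem), len(dem[0])
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    directions = {}
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_slope = None, 0.0
            for dr, dc in neighbours:
                dist = math.hypot(dr, dc)  # 1 or sqrt(2) cell widths
                slope = (dem[r][c] - dem[r + dr][c + dc]) / dist
                if slope > best_slope:
                    best, best_slope = (dr, dc), slope
            directions[(r, c)] = best  # None => pit, why sink filling is needed
    return directions

# A tiny slope draining toward the bottom-right corner:
dem = [[9, 8, 7],
       [8, 6, 4],
       [7, 4, 1]]
flow = d8_flow_direction(dem)  # {(1, 1): (1, 1)} -- the centre drains south-east
```

The `None` case makes concrete why the r.fill.dir step precedes this one: an unfilled pit has no downhill neighbour and would break the downstream routing.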
After completing these steps, the drainage areas for the selected points of interest are determined, providing crucial data for flood risk assessments in Cervo Valley. The drainage areas for the aforementioned outlet points are as follows:
Piedicavallo municipality: 18.076 km².
Saint Giovanni Andorno Sanctuary: 0.370 km².
Biella municipality: 99.047 km².
Ex-wool mill Maurizio Sella: 124.713 km².
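Since r.water.outlet delivers each basin as a raster, the reported areas follow directly from the count of contributing cells times the cell footprint. The cell count below is back-computed from the Piedicavallo figure for illustration, not taken from the actual output:

```python
def cells_to_km2(n_cells, cell_size_m):
    """Basin area implied by the number of cells in a delineated drainage raster."""
    return n_cells * cell_size_m ** 2 / 1e6

# Illustrative back-computation: on the 5 m DTM, Piedicavallo's 18.076 km2
# basin corresponds to about 723,040 contributing 5 m cells.
n_cells = round(18.076 * 1e6 / 5 ** 2)
area_km2 = cells_to_km2(n_cells, 5)
```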
To highlight the capabilities of the GRASS algorithm, a comparative analysis was carried out. Among the various tools available, three prominent open-source GIS platforms stand out: GRASS GIS, SAGA GIS, and TauDEM. Each offers unique strengths, catering to different analytical needs and user preferences.
SAGA GIS (System for Automated Geoscientific Analyses) is designed to facilitate spatial data analysis and offers a range of geoprocessing capabilities. Its modular structure and extensive library of geoscientific tools enable efficient analysis of geospatial data, and it is particularly recognized for its user-friendly interface, making it accessible to users with varying levels of expertise [47]. TauDEM (Terrain Analysis Using Digital Elevation Models) specializes in hydrological analysis and watershed delineation, providing advanced algorithms for modeling flow direction and accumulation, which are essential for hydrological assessments [48]. According to Mattivi et al. [49], GRASS GIS excels in advanced modeling capabilities and scripting flexibility, making it suitable for complex analyses that require custom workflows, although its steeper learning curve may pose challenges for novice users. In contrast, SAGA GIS is efficient at handling large datasets but may lack some of the advanced modeling features of GRASS GIS, while TauDEM is powerful for hydrological assessments but more specialized and less versatile for general GIS tasks. Together, these tools provide a comprehensive toolkit for researchers and practitioners in geospatial analysis and hydrology, enabling a wide array of applications and analyses.
2.3.3. Data Analysis: Python Scripts
In this phase of local dataset processing, the tools provided by GIS software do not always fully address all data reprocessing needs. Specifically, custom checks often require the use of external tools. For this reason, alongside the previously described QGIS framework, several dedicated Python scripts were integrated to perform detailed operations. This approach is replicable for any additional requirements. Python scripts can either be directly integrated into QGIS or used in conjunction with it to generate files in formats that can be imported into QGIS. Below, we provide exemplary descriptions of three scripts and their functionalities:
A preliminary analysis was conducted to compare the two DTMs (5 m and 25 m) to understand their differences, including raster dimensions, pixel size, data type, and data volume per cell, as illustrated in
Figure 4. This comparison was performed using Python commands with the
rasterio library.
Rasterio facilitates the opening, reading, and writing of raster files in various formats and provides access to raster metadata. It was used to efficiently assess and compare the characteristics of the TIFF files.
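The comparison can be sketched with a small helper function; in practice each metadata dictionary would come from rasterio (e.g., `rasterio.open(path).meta`, with the pixel size from `dataset.res`), but plain dictionaries with illustrative values are used here to keep the example self-contained:

```python
def compare_rasters(meta_a, meta_b):
    """Return the metadata fields that differ between two rasters."""
    keys = set(meta_a) | set(meta_b)
    return {k: (meta_a.get(k), meta_b.get(k)) for k in keys
            if meta_a.get(k) != meta_b.get(k)}

# Illustrative values only; the real figures are read from the DTM TIFF headers.
dtm_5m = {"width": 5000, "height": 4000, "pixel_size": 5.0, "dtype": "float32"}
dtm_25m = {"width": 1000, "height": 800, "pixel_size": 25.0, "dtype": "float32"}

diff = compare_rasters(dtm_5m, dtm_25m)
```

Only the differing fields (dimensions and pixel size in this toy case) are returned, which mirrors the comparison of raster dimensions, pixel size, data type, and data volume per cell described above.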
Another significant application of Python scripts lies in their capacity to replicate and customize existing GIS operations. A specific script was developed to compute the slope percentage of the DTM 5m raster file, resulting in the generation of distinct shapefiles that can be imported into QGIS as layers, each representing various categories of terrain slopes. Slope classes were defined based on established literature regarding landslides, as noted by Colombo [
50], which describes recurrent landslide types in the Piedmont region.
For the Cervo Valley case study, five distinct layers were defined based on the prevalent landslide types in the area. These layers are categorized as follows: slow drips (0–15% slope), quick drips (15–30%), rotational/translational slips (30–35%), complex terrain movements (35–70%), and rockfalls (70–100%).
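A minimal sketch of this classification logic, using the slope ranges reported in the Results (slow drips 0–15%, quick drips 15–30%, rotational/translational slips 30–35%, complex terrain movements 35–70%, and rockfalls above 70%); in the actual script, a function like this would be applied per cell of the slope raster before vectorizing each class into its own shapefile:

```python
# Slope classes (percent) for Cervo Valley, after Colombo [50].
SLOPE_CLASSES = [
    (0, 15, "slow drips"),
    (15, 30, "quick drips"),
    (30, 35, "rotational/translational slips"),
    (35, 70, "complex terrain movements"),
    (70, 100, "rockfalls"),
]

def classify_slope(slope_percent):
    """Map a slope value (percent) to its landslide-type class."""
    if slope_percent < 0:
        return None
    for lower, upper, label in SLOPE_CLASSES[:-1]:
        if lower <= slope_percent < upper:
            return label
    return SLOPE_CLASSES[-1][2]  # 70% and above: rockfall-prone
```

Values above 100% (near-vertical faces) are kept in the rockfall class here, a simplifying assumption of this sketch.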
The 2D maps of Cervo Valley, provided as shapefiles, lack elevation information, which is essential for creating a 3D map. A Python script was developed to read all the 2D shapefiles and integrate elevation data from the DTM 5 m raster file. The first function samples raster data along the geometries of the 2D shapefiles, extracting and returning the corresponding raster values. The second function uses this sampling function to extract raster data based on the geometries of the various shapefiles and calculates the average altitude for each sampled geometry. These average altitude values are then assigned as new attributes to the respective shapefiles.
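The two functions can be sketched as follows; in the real script the cell indices come from intersecting each shapefile geometry with the DTM 5 m raster (e.g., via rasterio's `sample`), whereas here a toy array and explicit indices stand in for that step:

```python
import numpy as np

def sample_raster(dtm, cells):
    """Return the raster values at the given (row, col) cell indices."""
    return [dtm[r, c] for r, c in cells]

def mean_elevation(dtm, geometry_cells):
    """Average altitude of the cells covered by one 2D geometry."""
    return float(np.mean(sample_raster(dtm, geometry_cells)))

# Toy 5 m DTM (elevations in metres); real data are read from the DTM TIFF.
dtm = np.array([[400.0, 410.0],
                [420.0, 430.0]])

avg = mean_elevation(dtm, [(0, 0), (0, 1), (1, 0), (1, 1)])
```

The computed average would then be written back to the shapefile as a new elevation attribute, as described above.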
2.4. Web Publication
When a QGIS project is complete and ready for web publication, there are dedicated plugins available that facilitate the conversion of QGIS maps into a web-friendly format. These tools typically convert the project into a set of HTML, CSS, and JavaScript files that can be viewed directly in a web browser. The procedure involves taking the geospatial data and map elements created in QGIS and making them interactive and accessible online. The HTML files handle the structure of the web page, CSS manages the styling and appearance, while JavaScript is responsible for the interactive logic, such as dynamically loading data, navigating the map, zooming, and interacting with map layers. This approach enables the maps to be distributed on standard web servers, making them accessible without the need for specialized GIS software. Additionally, the final output is highly customizable, enabling the adaptation of the user interface and map navigation experience to the specific requirements of the project, such as addressing hydrogeological risk.
Specifically, two distinct web applications were developed to cater to different visualization needs. The first application focuses on 2D map visualization and was built using the
qgis2web plugin. This plugin efficiently converts QGIS projects into web maps, leveraging powerful JavaScript libraries like Leaflet and OpenLayers. Leaflet, an open-source JavaScript library, is specifically designed for creating lightweight, mobile-friendly interactive maps [
51], while OpenLayers provides a more robust set of features for handling complex map data and operations [
52]. The 2D maps generated are fully interactive, enabling users to pan, zoom, and interact with various map layers, offering an intuitive way to explore the geospatial data. The second application was designed for 3D map visualization, utilizing the
qgis2threejs plugin. This plugin extends the capabilities of QGIS by transforming 2D geospatial data into 3D visualizations, which can be viewed and interacted with directly in a web browser. The 3D maps are rendered using WebGL, a JavaScript API for rendering high-performance interactive 3D graphics within web browsers. This approach enables users to explore terrain models, buildings, and other geospatial features in three dimensions, providing a more immersive experience.
The files generated in this way can be uploaded to a standard web server, eliminating the need for a specialized geospatial server, which simplifies the publication process. At this point, the map becomes accessible to anyone with a link to the website.
2.4.1. Web Map Export for Online Use: qgis2web and qgis2threejs Plugins
Once all the static data have been incorporated into the map, it is ready for export. The QGIS plugin qgis2web is employed for this purpose for the 2D map. This plugin facilitates the export of the QGIS project to a Leaflet web map, effectively replicating the layers, extent, and styles used in the project without requiring any server-side software.
Each layer used in the QGIS project contains specific attributes, some of which are crucial for understanding the static data, while others provide additional information that may not be relevant to the final goals of the case study. For instance, in the hydrographic network, essential attributes include municipality names, river basin catchment areas, and river names. In contrast, less relevant attributes might include the date of data acquisition, update dates, and the institution responsible for the data. Before applying the qgis2web plugin, it is important to carefully manage the project layers. Attributes deemed unnecessary should have their corresponding widget set to hidden. Conversely, for essential attributes, the widget should be configured to display modified text, and attribute aliases can be created to enhance clarity and relevance.
After defining the necessary attributes, the plugin dialogue window can be opened. In this window, select the Leaflet library for export and activate the relevant layers. For layers with specific attribute widgets defined, a pop-up will be generated, becoming visible when the data are accessed by the user. On the appearance settings page, select the address search tool, attribute filter, user geolocation, and the option to highlight layers when the mouse hovers over them. For the attribute filter, choosing the municipality name enables direct zooming to a specific municipality after the map has been exported and the filter is applied. This procedure is consistently applied to both the Cervo Valley map and the Piedmont map. Similarly, the
qgis2threejs plugin is used to generate the 3D web map. In this case, however, after defining all static layers, it is essential to include the previously created shapefile containing the elevation data (see
Section 2.3.3). Once this is complete, the plugin can be launched, and the process proceeds similarly to the steps taken for the 2D map export.
The
qgis2threejs plugin enables the visualization of both raster and vector data in 3D within web browsers. It facilitates the creation of various 3D objects and the generation of files suitable for web publishing. Additionally, it enables the saving of 3D models in the glTF format, which is compatible with 3D computer graphics (3DCG) and 3D printing [
53]. In this case, the attributes of each layer are not directly visible within the 3D environment; instead, the properties of each layer must be carefully configured to achieve optimal graphic results.
2.4.2. Dynamic Data Management: ARPA Piemonte API Calls and Pop-Up Generation
The integration of dynamic data provides valuable insights and predictions regarding hydrogeological risks, ultimately aiding the development and execution of an effective territorial management plan. To support this, real-time precipitation data are displayed through interactive pop-ups on both the Piedmont and Cervo Valley web maps. These pop-ups give users access to current rainfall information, enabling informed decision-making and timely interventions in the face of potential environmental risks.
This functionality is enabled through a Python script that utilizes Application Programming Interface (API) calls. APIs are mechanisms that enable two software systems to communicate by using a set of definitions and protocols [
54]. Among the various types of APIs available, this case study employs the REST API category. REST, which stands for Representational State Transfer, defines a series of functions—such as GET, PUT, and DELETE—that allow clients to access server data.
Each time the user clicks on either the Piedicavallo or the Biella pluviometer location, the GET function is used. This means that the data are updated instantaneously whenever the user checks, provided that the information published by ARPA is kept current and accurately reflects the latest measurements.
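The server-side shape of such a GET request can be sketched with the Python standard library. The base URL below is a placeholder, not the actual ARPA Piemonte address, and only the `fk_id_punto_misura_meteo` parameter name is taken from this case study:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholder endpoint; the real one is documented on the ARPA Piemonte website.
BASE_URL = "https://example-arpa-endpoint/api/pluviometers"

def build_request_url(station_id):
    """Compose the GET request URL for one pluviometer."""
    query = urlencode({"fk_id_punto_misura_meteo": station_id})
    return f"{BASE_URL}?{query}"

def fetch_precipitation(station_id):
    """Issue the GET request and decode the JSON payload (requires network access)."""
    with urlopen(build_request_url(station_id)) as response:
        return json.load(response)
```

In the deployed application, the equivalent request is issued client-side with the JavaScript Fetch API, as described in the following paragraphs.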
ARPA Piemonte serves as the primary source for data acquisition in Cervo Valley. The website directly provides API access to its data. For this case study, the focus is on the precipitation measurements recorded by the two pluviometers located within the valley. The first pluviometer is located in Piedicavallo, at the head of the Upper Valley, while the second is in Biella, at the mouth of Cervo Valley. Below are their respective characteristics:
To enable seamless data retrieval, Cross-Origin Resource Sharing (CORS) is leveraged. CORS enables resources to be shared across different origins, facilitating communication between client applications and third-party APIs hosted on different domains. This is particularly useful when client-side scripts request data from external APIs. However, the Same Origin Policy (SOP), applied primarily in web browsers to mitigate security risks, restricts scripts from using data that do not originate from the same server. SOP ensures that JavaScript and CSS files cannot load data from potentially harmful external sources without client consent, maintaining a secure interaction environment.
In this case study, since the operation takes place in a web browser, the CORS issue arises. The ARPA Piemonte SOP does not recognize the HTML file of the exported map because its server differs from the map’s hosting site. Consequently, the request is blocked, and the necessary data cannot be retrieved. To address this issue, an intermediate step was introduced. Unlike web browsers, Python operates outside the constraints of CORS, enabling it to interact with external servers without these limitations. A Python script is employed to make the API call to the ARPA Piemonte website. In the same script, the Flask library [
55] is used to create a virtual server, effectively bypassing the CORS restrictions and the Same Origin Policy (SOP). This setup establishes a seamless connection between the Python virtual server and two JavaScript files, which are responsible for generating the pop-up graph on the map. The first JavaScript file uses the third-party Chart.js library [
56] to create the pop-up graph. This library is automatically placed in the JS folder created by the QGIS plugin during map export. The second JavaScript file is responsible for making the API call from the client side to the virtual Python server. It employs the fetch method with a GET request to retrieve necessary information using pluviometer parameters, such as the unique identifier for each pluviometer (
fk_id_punto_misura_meteo) and the pluviometer’s location data. The Fetch API provides a standardized approach to making asynchronous network requests [
57], while the browser uses the GET verb to obtain data from the server [
58].
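A minimal sketch of this Flask proxy pattern follows; the route path is illustrative, and the ARPA Piemonte call is stubbed out with placeholder data (in production it would be a real HTTP request made with urllib or a similar library):

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_from_arpa(station_id):
    """Stand-in for the real ARPA Piemonte API call.

    Python runs outside the browser's CORS constraints, so the upstream
    request succeeds here even though the browser's would be blocked.
    The returned values are placeholders.
    """
    return {"station": station_id, "precipitation_mm": []}

@app.route("/pluviometer/<station_id>")
def pluviometer(station_id):
    # Re-serve the upstream data with a permissive CORS header so the
    # JavaScript embedded in the exported map can fetch it.
    response = jsonify(fetch_from_arpa(station_id))
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response
```

The map's client-side JavaScript then fetches from this local server instead of contacting ARPA Piemonte directly, which is what bypasses the SOP restriction.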
Once all the necessary preparation files are ready, the HTML script representing the web map is modified to integrate the required functionality. These modifications include the implementation of custom functions that enable the generation of pop-ups on the map. These pop-ups are triggered when a user interacts with specific elements on the map, such as the location markers for pluviometers, providing dynamic and interactive data visualization. The updated script seamlessly ties together the API data, Python server, and JavaScript libraries to enhance the user experience with real-time information. Subsequently, the JavaScript file is embedded within the HTML script of the map, enabling the dynamic information to be displayed seamlessly.
3. Results
In this section, we present the results obtained from the Web-GIS application, highlighting its strengths and limitations. Both the 2D and 3D web maps are discussed together, as the creation process is comparable and the outcomes are similarly aligned. Further insights are given in
Appendix A.
Of particular interest are the intersections of the PAI disaster framework with the roads and buildings from the BDTRE geo-topographic database, as well as the real-time precipitation data, displayed via pop-ups. Additionally, the maps include drainage areas at various outlet points, terrain slope variations, and other key topographical features that provide a comprehensive understanding of the region’s hydrogeological dynamics. These capabilities make web maps a valuable tool for both analysis and decision-making in managing hydrogeological risks.
The data presented in the maps can be divided into two layer categories: layers related to relevant socio-economic infrastructures, including streets, public and private buildings, the river network, and green areas (
Figure 5), and layers related to the natural hazards considered, such as floods, landslides, and debris flow cones.
The PAI disaster framework includes detailed representations of the flood-prone areas, with return periods of 20, 50, 100, 200, and 500 years, as well as the zones susceptible to landslides and debris flow cones. Utilizing the QGIS intersection tool, the infrastructure layers—comprising roads and buildings—were overlaid onto the corresponding natural hazard layers. This method enabled the identification and visualization of the infrastructure elements that are specifically at risk from each hazard type. To improve the clarity and usability of the maps, distinct color coding was applied: yellow for streets and buildings vulnerable to flooding, red for those susceptible to landslides, and light blue for areas at risk from debris flow cones (
Figure 6). This color-coded scheme not only facilitates the quick identification of at-risk infrastructure but also aids in prioritizing the risk management and mitigation efforts across different hazard scenarios.
Slope analysis is a critical parameter, alongside geological factors, for the identification of landslide-prone areas. Drawing from the established classifications in the literature [
50], we devised a five-tier slope classification to more accurately represent the distribution of the different landslide types within Cervo Valley. The categories include slow drips (0–15%), quick drips (15–30%), rotational/translational slips (30–35%), complex terrain movements (35–70%), and rockfalls (70–100%). Each category corresponds to specific landslide dynamics and the associated risks. The first slope range (0–15%) represents areas of minimal risk, predominantly impacting streets and buildings with minor instability. As the slope gradient increases, the potential for more severe landslide activity, such as complex movements and rockfalls, also rises. However, the actual landslide risk is further influenced by factors such as soil composition and rock structure [
59]. This classification provides a valuable framework for assessing landslide risk on medium to large scales, but, for a more precise and accurate risk evaluation, it is essential to incorporate field observations and expert judgment by a specialized geotechnical professional. To address this objective, we developed a custom Python script instead of relying on the conventional QGIS tools. This approach capitalizes on Python’s flexibility, enabling a more adaptable and replicable classification process. For instance, as shown in
Figure 7, which displays the area around the Saint Giovanni Andorno Sanctuary, the script was utilized to classify the slopes within the 70–100% range, indicative of rockfall-prone areas. This specific region features a nearly vertical rock face that needs to be monitored as it is prone to collapse and is located near a historic building. However, the 5 m resolution of the Digital Terrain Model proved to be inadequate for accurately identifying this and other rock faces as they fall below the DTM’s resolution threshold. This limitation underscores the need for higher-resolution data or supplementary methods to enhance the detection of smaller geological features and improve the risk assessment accuracy.
The drainage area was computed using GRASS GIS tools, as detailed in
Section 2.3.2. By interacting with the calculated basin locations, users can visualize the associated drainage areas on screen. However, a restriction of the current framework becomes apparent here. Specifically, the model does not support the calculation of drainage areas for every individual point despite the operation being conceptually straightforward. This constraint is due to the model’s inability to integrate algorithms capable of performing such repetitive and computationally intensive tasks efficiently. Instead, the drainage areas are updated manually by the project manager, which necessitates the creation of a finite number of points for which the drainage areas are calculated. Consequently, while the current approach enables the analysis of discrete basins, it does not facilitate comprehensive drainage area calculations across the entire model, potentially impacting the granularity of the spatial hydrological assessments.
Another challenge encountered during the process was the conversion of 2D maps to 3D web maps. Specifically, the 2D map layers lacked elevation data, which are essential for constructing accurate 3D representations. This issue was addressed using a custom Python script that integrates elevation data from a 5 m DTM raster into the 2D shapefiles by generating an elevation mask (see
Section 2.3.3). The process involves overlaying the shapefiles onto the raster data, sampling the raster along the geometry of each shapefile, and calculating the average elevation within the sampled raster geometry. This computed elevation is then assigned as a new attribute to the shapefile. To validate the accuracy of the updated shapefiles, a verification procedure was carried out by rendering a small subset of Cervo Valley (Zumaglia municipality) in 3D using Python. As shown in
Figure 8, minor alignment discrepancies are evident between the shapefiles and the raster data. These discrepancies arise from the script’s method of integrating elevation data, where the average elevation is computed based on the sampled raster cells and may not perfectly align with the finer details of the raster data. Despite these minor inconsistencies, they do not significantly affect the final model, which operates within the precision constraints of the 5 m DTM resolution. Therefore, for DTMs with a resolution of 5 m or coarser, no specific improvements are necessary. However, if finer-resolution DTMs are utilized, adjustments to the Python script may be warranted. In such cases, instead of calculating the average elevation from sampled raster cells, it might be more appropriate to assess the elevation of each raster cell within the sampled raster geometry. This approach ensures a more accurate representation of the terrain’s variability.
As mentioned, the framework integrates real-time precipitation data visualization, thereby supporting the consideration of hydrogeological aspects and the informed planning of activities and interventions. This objective is achieved through pop-ups that can be selected in correspondence with the rain gauges present in the valley. This operation can be replicated for various types of sensors, establishing a real-time data network that is easily accessible and facilitated by the 3D view. A critical prerequisite for this process is the availability of a usable and shareable database containing the sensor data. Unfortunately, this still represents a significant limitation for two reasons: firstly, the uneven distribution of instrumentation across different areas impedes the full scalability of this feature. Secondly, even in areas where sensors are deployed, access to the data is often restricted and may involve subscription fees, which hinders the broad dissemination and accessibility of real-time data. Thus, while the integration of real-time data is a powerful tool for hydrogeological assessment, its effectiveness is constrained by the current limitations in sensor coverage and data accessibility.
The procedure for extracting data from the ARPA Piemonte website [
34] encountered a challenge due to Cross-Origin Resource Sharing (CORS) restrictions. Specifically, the ARPA Piemonte server does not recognize the exported map’s HTML file, resulting in blocked data requests. To overcome this issue, a Python script has been implemented to circumvent the CORS limitations by establishing a virtual server using Flask [
55]. This server acts as a proxy, facilitating the retrieval and display of real-time data in pop-ups by enabling the JavaScript code embedded in the map’s HTML to interact with the data sources without being obstructed.
Figure 9 provides two examples of the information displayed within the pop-ups. The first example shows the precipitation gauge in Biella within a 2D view, while the second illustrates the Piedicavallo gauge in a 3D view. The pop-ups automatically present the precipitation values recorded over the past three days. These data are always updated to reflect the most recent information provided by ARPA as the API call to retrieve them is executed instantly whenever the user clicks on the gauge point. Additionally, users can access graphical representations of daily or cumulative precipitation for a customizable time period. This setup ensures that users have access to up-to-date and historical precipitation data, enhancing the map’s functionality for real-time monitoring and analysis.
To summarize the computational effort involved in generating the various models, a radar chart is presented in
Figure 10. It illustrates the comparative performance of the resolution models (2D 5 m, 2D 25 m, 3D 5 m, and 3D 25 m) across critical metrics such as vertex density, number of faces, file size, and rendering time. The 3D 5 m model shows the highest vertex density and number of faces, demonstrating its capacity for detailed spatial analysis and high-quality visualization. However, this increased complexity is accompanied by a larger file size and extended rendering time, which may limit its applicability in real-time scenarios. Conversely, the 2D 25 m model, which offers lower detail over the same area, provides improved rendering efficiency and a reduced file size, making it a more viable option for rapid assessments and broader accessibility. These findings underscore the inherent trade-offs between model detail and computational performance, enabling users to make informed decisions regarding the selection of the most suitable model based on their specific requirements and constraints in geospatial analysis.
In the context of hydrogeological risks, the issue of high-resolution data is crucial. A resolution of 25 m is inadequate for accurately identifying the considered risks because it lacks the spatial detail necessary to capture the subtle variations in the terrain and hydrological features. For instance, a 25 m resolution may smooth over the critical topographic changes that significantly influence the water flow patterns, slope stability, and the likelihood of landslides or flooding events. Consequently, this resolution limits the visualization of risk maps calculated with higher-precision data, relying instead on a lighter model that may not accurately reflect the true risk landscape. On the other hand, a 5 m resolution provides a level of detail that enables significant insights for hydrogeological hazards. This precision enables a more accurate representation of the terrain, enhancing the model’s ability to identify potential risk areas. However, increasing the resolution beyond this threshold faces two major challenges. First, obtaining Digital Terrain Models (DTMs) with resolutions finer than 5 m is problematic as there are currently no accessible datasets that meet the required accuracy for our study area. This issue could potentially be addressed through dedicated surveys that collect point clouds at much finer resolutions, on the order of a few centimeters [
61,
62]. Second, the computational costs associated with higher-resolution models remain prohibitively high for seamless user interaction. The current hardware and software capabilities often fall short in managing such models over large areas without incurring significant lag, which could impede the real-time analysis and decision-making processes in hydrogeological risk management. Thus, while high-resolution data are crucial for precise risk assessments, the practical limitations in data availability and computational capacity must be carefully considered.
4. Discussion
The proposed approach aims to serve as a Digital Twin at a territorial scale, integrating both static and dynamic data that have been suitably pre-filtered. Designed to be simplified, quickly producible, and easily accessible online for a broad audience, this framework aspires to make complex data more digestible. However, it is important to acknowledge that this proposal does not fully realize the potential of a true Digital Twin at a territorial scale. The current methodologies, such as those examined by Batty et al. (2012) [
63], delve into the application of Digital Twins in urban planning contexts, while Kritzinger et al. (2018) discuss the integration of IoT technologies with Digital Twins for large-scale infrastructure management [
64]. These studies provide valuable insights into the capabilities and complexities associated with implementing Digital Twins in expansive areas. At the same time, they highlight the existing limitations, especially for DTs not focused on a single infrastructure but considering a territorial scale.
A key component of these implementations is the integration of Internet of Things (IoT) technologies. IoT refers to a network of interconnected devices—such as sensors, cameras, and smart meters—that continuously collect and exchange data over the internet [
65]. These devices enable real-time monitoring of physical systems and environments, providing crucial data streams that are essential for Digital Twin models, particularly in large-scale, challenging, and remote areas. The implementation of 5G networks plays a fundamental role in enhancing the efficiency of IoT systems [
66]. Indeed, 5G is the fifth generation of mobile network technology, characterized by ultra-low latency (often below 1 millisecond) and extremely high data transmission rates, reaching up to 10 Gbps. This enables the near-instantaneous transfer of large datasets from IoT devices to central processing systems. The low latency and high bandwidth of 5G networks facilitate real-time data processing, ensuring that Digital Twins can function with high fidelity. Moreover, the capacity of 5G to support massive-machine-type communication (mMTC) is particularly relevant for IoT, enabling the connection of thousands of devices per square kilometer, a critical feature for scalable Digital Twin applications. The use of these technologies, however, involves complex and expensive instrumentation. Therefore, for the purpose of this research, information made public by the national service was used.
Nevertheless, even the most sophisticated Digital Twin architectures face significant limitations when addressing the challenges of large-scale territorial management. One of the primary issues is the difficulty in processing and integrating vast and diverse datasets. Schrotter and Leclercq highlight the challenges of data integration and scalability in Digital Twin implementations for smart cities, noting that discrepancies in data formats, collection methodologies, and quality can lead to inefficiencies and inaccuracies [
67]. Moreover, scalability remains a pressing concern. As the size and complexity of the data increase, the performance of Digital Twin systems may degrade, leading to significant lag and inhibiting timely decision-making processes. Tahmasebinia et al. (2023) indicate that the computational resources required for high-resolution data processing can be prohibitive, further complicating the use of territorial Digital Twins [
68].
For these reasons, the exponential increase in the available data necessitates the development of advanced filtering, compression, and selection systems for integration into the Digital Twin. These systems must be optimized not only to ensure operational efficiency but also to adapt to the technical expertise of the end user. For instance, expert users may benefit from raw and detailed data, while less-specialized users may require aggregated or simplified information. An adaptive approach, capable of modulating the information flow based on the context and the user, thus becomes essential to ensure the efficient use of the network resources and the usability of the Digital Twin on a large scale.
Studies indicate that data management and filtering within Digital Twins have been explored in various fields. For example, Liu et al. (2022) discuss the role of Digital Twin-enabled collaborative data management systems for industries like metal additive manufacturing, highlighting the importance of data integration for operational efficiency [
69]. Additionally, Barricelli et al. (2019) provide a detailed survey on Digital Twins, focusing on the need for scalable data management systems that can handle the complexity and heterogeneity of real-time data [
26]. Despite this progress, the literature on adaptive systems for Digital Twins remains sparse. This emerging field holds vast potential for enhancing the scalability and effectiveness of Digital Twins in large-scale systems, as discussed by Dasgupta et al. (2021), who developed adaptive traffic control systems using Digital Twins to improve efficiency under varying traffic conditions [
70]. Therefore, further research is needed to fully explore and address these challenges, particularly in terms of personalized data delivery and real-time adjustments.
In contrast to the proposed method—QGIS in conjunction with a plugin that directly converts local geospatial data into interactive web maps for straightforward online sharing—traditional Web-GIS applications often involve a more complex infrastructure. These systems typically rely on a geospatial server to manage, process, and serve geospatial data, which are then accessed through a web server that provides end users with interactive mapping capabilities [
71,
72]. The combination of QGIS with the qgis2web/qgis2threejs plugins offers notable advantages in terms of simplicity and accessibility. By enabling the direct export of QGIS projects to web-friendly formats, this method bypasses the need for a dedicated geospatial server. This streamlined workflow reduces both the technical complexity and the associated costs, making advanced geospatial visualization accessible to users and organizations without extensive IT resources [
73]. Conversely, Web-GIS applications that utilize a geospatial server provide a more scalable solution for handling large datasets and supporting dynamic data interactions. These systems are designed to manage complex geospatial queries and offer robust performance for large-scale applications [
74]. However, the infrastructure for such systems can be more resource-intensive and require significant setup and maintenance [
75].
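To make the architectural contrast concrete, the sketch below approximates what a qgis2web-style static export reduces to: a single self-contained HTML page that loads Leaflet from a CDN and embeds the layer as inline GeoJSON, so no geospatial server is involved. The layer content, file name, and map centre are illustrative, not actual plugin output.

```python
import json

# Minimal sketch of a static web-map export: one self-contained HTML
# file with Leaflet loaded from a CDN and the layer embedded as inline
# GeoJSON. The layer and file name are illustrative examples only.

landslide_layer = {  # hypothetical layer, normally exported from QGIS
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "properties": {"name": "Landslide area A"},
        "geometry": {"type": "Point", "coordinates": [9.19, 45.46]},
    }],
}

PAGE = """<!DOCTYPE html>
<html><head>
<link rel="stylesheet" href="https://unpkg.com/leaflet/dist/leaflet.css"/>
<script src="https://unpkg.com/leaflet/dist/leaflet.js"></script>
</head><body>
<div id="map" style="height:100vh"></div>
<script>
  var map = L.map('map').setView([45.46, 9.19], 12);
  L.tileLayer('https://tile.openstreetmap.org/{{z}}/{{x}}/{{y}}.png').addTo(map);
  L.geoJSON({geojson}).addTo(map);  // the layer travels inside the page
</script>
</body></html>"""

html = PAGE.format(geojson=json.dumps(landslide_layer))
with open("web_map.html", "w", encoding="utf-8") as fh:
    fh.write(html)  # sharing this one file requires no geospatial server
```

Opening `web_map.html` in a browser renders the embedded layer over OpenStreetMap tiles, which is roughly the kind of artifact that qgis2web publishes for an entire QGIS project.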
From a user experience perspective, both approaches have their merits. The QGIS-based qgis2web/qgis2threejs method excels in rapid deployment and ease of use, with interactive features embedded directly in the web map. It is particularly effective for straightforward visualization tasks and smaller-scale projects where immediate accessibility is the priority. Its most significant disadvantage is that real-time updates are available only for dynamic data, such as precipitation. All the static data—and, consequently, all the maps that are downloaded and filtered at the beginning of the process, such as those depicting landslide or flood areas—must instead be manually updated locally in QGIS by the project manager. On the other hand, Web-GIS applications backed by a geospatial server are better suited for scenarios involving complex spatial queries and real-time data integration, providing a more comprehensive platform for advanced geospatial analysis [
76].
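The two update paths just described can be sketched as follows; the layer registry, the `fetch_precipitation` placeholder, and the date are hypothetical, but they capture the distinction between dynamic layers refreshed at view time and static layers that keep the state of their last manual export from QGIS.

```python
# Sketch of the two update paths discussed above. Dynamic layers are
# refreshed automatically whenever the map is viewed; static layers are
# served as last exported and must be re-published manually from QGIS.
# All names and values here are hypothetical.

def fetch_precipitation():
    # Placeholder for a call to a live weather service
    return {"layer": "precipitation", "values_mm": [0.4, 1.1, 0.0]}

layers = {
    "precipitation": {"kind": "dynamic", "source": fetch_precipitation},
    "landslides": {"kind": "static", "last_manual_update": "2024-05-01"},
}

def refresh(name):
    layer = layers[name]
    if layer["kind"] == "dynamic":
        return layer["source"]()              # pulled live on every view
    return {"layer": name,                    # unchanged until re-exported
            "last_manual_update": layer["last_manual_update"]}
```

In this model, only the project manager's manual re-export advances a static layer, which is exactly the maintenance burden noted above for landslide and flood maps.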
Web-GIS maps that move toward Digital Twins at territorial scales have thus already been explored in the literature in various forms. For example, there are Digital Twins dedicated to the study of large-scale climate change [
77] and Web-GIS applications for early warnings for various natural hazards [
78,
79]. However, these models require a complex infrastructure for continuous data analysis and updates, and the simulations they incorporate make them difficult for non-technical users to share and understand. Unlike these studies, the framework proposed here does not rely heavily on simulations: simulations are performed only on static data before publication, while dynamic data are filtered for ease of visualization and reading but are not post-processed. The unique aspects of the proposed framework lie in its ease of sharing and use, which are crucial for widespread dissemination.
5. Conclusions
QGIS, together with the qgis2web/qgis2threejs plugins, provides an efficient and intuitive workflow for turning complex geospatial data into interactive web maps. This combination pairs the robust data-processing capabilities of QGIS with user-friendly web-mapping functionality, facilitating a smooth transition from local data analysis to global accessibility. The plugins simplify the publication of geospatial data, enabling users to convert detailed datasets into interactive maps that are easy to share and view online. By removing the need for advanced server configurations, this approach lowers the technical barriers to publishing geospatial content, enabling a wider audience, including those without specialized IT infrastructure, to share and explore these resources and making complex spatial data more accessible.
Research has demonstrated that interactive web maps significantly improve user engagement and data interpretation compared to static maps [
80]. The interactive elements provided by
qgis2web/qgis2threejs—such as zoom, pan, and attribute pop-ups—enable users to explore data dynamically, gaining deeper insights and a wider understanding of the spatial phenomena represented. This interactivity is particularly valuable in applications such as urban planning, environmental management, and disaster response, where users need to interact with data to make informed decisions [
81]. Furthermore, web-based map solutions support real-time data updates and collaborative analysis. As highlighted by recent studies, incorporating real-time data into web maps keeps the displayed information current and actionable [
82]. This capability is crucial in fields such as emergency management, where timely and accurate spatial information can significantly impact decision-making. Beyond these practical benefits, the accessibility of web maps contributes to the broader goal of open data and transparency. By providing geospatial data in a format that is easily shareable and understandable, QGIS and
qgis2web/qgis2threejs support the principles of open science and data democratization, as emphasized by Jiang et al. [
83]. This openness fosters greater public engagement and collaboration, empowering communities and stakeholders to contribute to and benefit from geospatial research and applications.
In light of this, the proposed Web-GIS application framework offers a notable advance in simplifying the dissemination of geospatial information while preserving the powerful capabilities of traditional GIS platforms. By streamlining the release of spatial data, the framework enables users to create, interact with, and share geospatial maps without the complexity typically associated with GIS technologies. One key advantage is its ability to pre-process both static and dynamic data into a more manageable form, enabling quicker updates and reducing the computational load required to visualize large datasets, so that even non-expert users can access and interact with geospatial information intuitively. Another major benefit is the reduction in the technical barriers typically associated with GIS software: the web-based nature of the platform lets users access sophisticated geospatial data and functionality from any device with internet connectivity. This accessibility is paired with improved data filtering and aggregation, ensuring that large datasets are efficiently processed and delivered in real time while maintaining high precision for critical applications. The proposed system therefore fosters transparency and helps stakeholders—from urban planners to environmental scientists—make more informed, data-driven decisions.
Ultimately, this research aims to improve environmental sustainability in its many forms, with clear knock-on benefits for economic and social sustainability. Raising awareness among the population about the daily risks they face, on the one hand, and enabling better planning of the necessary interventions, on the other, can reduce the fatalities and damage caused by extreme natural events such as landslides and floods. Sustainability means, first and foremost, being adaptable to new challenges, and the best way to be adaptable is to be aware. This article moves in that direction.