Article

An HBIM Methodology for the Accurate and Georeferenced Reconstruction of Urban Contexts Surveyed by UAV: The Case of the Castle of Charles V

by Anna Sanseverino 1,2,3, Barbara Messina 1,*, Marco Limongiello 1 and Caterina Gabriella Guida 1

1 Department of Civil Engineering, University of Salerno, 84084 Fisciano, Italy
2 E.T.S.I. de Caminos, Canales y Puertos, University of Castilla-La Mancha, 13001 Ciudad Real, Spain
3 Department of Civil Engineering and Architecture, University of Pavia, 27100 Pavia, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3688; https://doi.org/10.3390/rs14153688
Submission received: 30 May 2022 / Revised: 11 July 2022 / Accepted: 25 July 2022 / Published: 1 August 2022
(This article belongs to the Special Issue Application of GIS, BIM and Linked Digitisations in Urban Heritage)

Abstract
The potential of the UAV survey as a basis for generating the context mesh is illustrated through experiments on the case study of the Crotone Fortress. A systematic general methodology is proposed, together with two procedural workflows for importing the triangulated model into the Autodesk Revit environment, while maintaining its real geographical coordinates, through Dynamo Visual Programming Language (VPL) scripts. First, the texturisation of the urban-context mesh was tested, using the full-size photogrammetric orthoimage as a Revit material; then, the reproduction of discretised detailed areas of the urban context was addressed. These areas were imported via Dynamo by reading the coordinates of the vertices of every face of the triangulated model and associating with each face the corresponding real colorimetric data. Starting from the georeferenced context of the photogrammetric mesh, nine federated BIM models were produced: the general context models, the detailed models and the architectural model of the fortress.

1. Introduction

1.1. What Is Scan-to-BIM?

In recent years, the digitisation of the built heritage and the related registration processes of the surrounding environment have made significant progress and can now quickly reach a large number of users via multiple devices [1]. Over the past decades, despite the growing interest in Building Information Modelling (BIM) as one of the most relevant emerging technologies in the architecture, engineering and construction (AEC) sectors [2], the application of the BIM methodology to the built heritage still poses some unaddressed challenges, such as interoperability, big data and the lack of automated processes: to provide an efficient interface between software and physical data, it is therefore imperative to create flexible and adaptive data collection systems [3]. Capturing a physical site or space through scan data in order to develop an intelligent 3D model in BIM software is known as “Scan-to-BIM” [4,5]. New advanced sensing technologies make it possible to address these challenges by gathering the semantic information needed to produce an accurate and detailed 3D model. However, when compared to new buildings, existing assets require the acquisition of additional information for a correct assessment of their current state: models need to be enriched with more than just geometric data, including historical information, degradation or deformation analyses and information on performed or to-be-performed maintenance. All these data are crucial for the maintenance and preservation of the building itself. Ideally, the whole data set coming from a three-dimensional survey is indispensable for the Scan-to-BIM modelling of the built heritage; in reality, however, it is rarely possible to use the complete raw information. In this sense, researchers are working on the implementation of Artificial Intelligence (AI) for the semantic subdivision of 3D point clouds. Working with a classified point cloud makes it possible to speed up the analysis of the architecture, maintenance operations and conservation plans, leading to a semantically enriched hierarchy that can be preparatory to subsequent applications such as the reconstruction of 3D models (CAD or BIM) [6].
The association of heterogeneous information with 3D data by means of automated segmentation and classification methods can help to characterise, describe and better interpret the object under study. For point clouds, the term semantic segmentation (or simply classification) means grouping similar data into subsets (called segments) whose characteristics/features make it possible to distinguish different parts and identify them in classes [7].

1.2. UAS Photogrammetric Survey

Unmanned Aerial Systems (UASs), known under various names and acronyms, such as Unmanned Aerial Vehicles (UAVs)—although the latter term technically refers only to the aerial vehicle, whereas the system comprises both the vehicle and the sensor mounted on it—Remotely Piloted Aerial Systems or simply drones [8], are aircraft without a pilot on board that are being continuously miniaturised and have become widely accessible for commercial use [9,10,11]. In recent years, thanks to technological developments, Remotely Controlled Aerial Vehicles have been increasingly used in support of geophysical surveys, enabling the generation of reliable 3D models [12,13,14]. UAS-based data collection is becoming increasingly cost-effective due to improved precision and accuracy and the ability to cover large areas inaccessible by land, with shorter flights and faster acquisition planning [15]. In particular, aerial photogrammetry from UAS has been used extensively in archaeology and cultural heritage for the documentation and 3D mapping of sites, thanks to innovative low-cost systems and high-resolution digital cameras [16,17], enabling the construction of 3D models with photorealistic textures [18].
For the purposes of the present discussion, the acronym UAV was chosen when referring to aero-photogrammetric surveying, as it is the most common terminology found in the literature.

1.3. Integrated 3D Survey Database

With regard to the survey and representation of historical assets, laser scanning is the most promising tool and is widely used for Scan-to-BIM applications due to its high accuracy and speed, proving to be extremely suitable for the acquisition of complex geometries. Photogrammetry, on the other hand, produces better detail at the graphic/photographic level, i.e., texture, but requires more processing time and also produces less “dense” results. LiDAR and photogrammetry can complement each other [19] and are receiving considerable attention in the development of remote sensing technology. Numerous studies have attempted to use multi-sensor data in various applications, such as the generation of 3D building models by integrating terrestrial and aerial data [20,21]. Despite the advantages, heterogeneous point cloud data also pose some challenges.
Several studies formulate validation criteria for point cloud and semantic segmentation in relation to BIM [22,23]. There is no shortage of more holistic validation works in which point cloud data quality criteria are established for Scan-to-BIM and Scan-vs-BIM, with the Level of Accuracy (LOA) and Level of Development (LOD) being defined [24,25]. Apart from accuracy, they determine parameters for the completeness and density of the point cloud needed to model various building elements. For accuracy, researchers directly report deviations on reference datasets or refer to international specifications such as the LOA and LOD [22,26,27].

1.4. State of the Art: Experimental Applications of Scan-to-BIM and Mesh-to-BIM

Given the richness of the data made available by integrated 3D survey databases, numerous experimental applications have been proposed in an effort to preserve as much survey data as possible in the transition to BIM modelling. Although no unified method has yet been identified, the most common practice for a Scan-to-BIM process consists of manual modelling, involving the insertion of ad hoc created intelligent objects whose parameters are adapted to the specific characteristics of the study object. To facilitate this process, many researchers have opted for custom parametric modelling of objects, based on point clouds imported via plug-ins within the Autodesk Revit family editor [28]. Nevertheless, there are several studies concerning the semi-automated generation of NURBS surfaces and their transformation into “masses” capable of accommodating photogrammetric textures applied as decals in Revit [26,29,30], as well as plug-ins developed via Autodesk Revit’s Application Programming Interface (API), such as GreenSpider, which was created to recognise points from the surveyed data and interpolate them to generate curves and surfaces [31]. The built heritage typically has complex (non-uniform, thus difficult to parametrise) geometries that turn its digitisation through conventional methods into an imprecise and time-consuming process. As technology has advanced, researchers have developed automated approaches for BIM reconstruction [2]. However, the efficient transformation of remote sensing data into intelligent parametric as-built models is currently an unsolved challenge [4], still requiring manual verification to increase its efficiency in a complex environment. Indeed, even though the modelling/conversion effort required for creating semantic BIMs from unstructured survey data is high, and the difficulties connected to accurately representing the variety of complex and irregular objects occurring in existing buildings and the lack of standards for their representation are notable, the manual modelling and parametrising of existing architectural elements is still the most accurate way to interpret them. This is a common practice that aims to develop a library of reusable parametric objects for an efficient implementation of the Historic Building Information Modelling (HBIM) methodology [32]. HBIM is a renowned solution whereby interactive parametric objects representing architectural elements are constructed from historic data, and these elements (including detail behind the scan surface) are accurately mapped onto integrated survey data; point cloud segmentation and orthoimage integration are two of the most suitable approaches for this purpose [33], moving in the direction of artificial intelligence algorithm implementation [34].
Therefore, a fully automated process for extracting semantics from raw data into BIM still poses a major issue worth investigating. Notably, there is still a lack of direct connection between the rich, geometrically accurate graphical data captured on-site and the discrete synthesis that even an as-built model is capable of storing and reproducing. A BIM model often remains too much of an abstraction of the real world, so its practical use as a support tool for restoration and conservation purposes becomes rather ineffective.
The most recent research, primarily addressed to restoration interventions and aimed at identifying areas affected by degradation phenomena classified according to shared protocols, is based on the projection of photogrammetric orthophotos onto BIM objects that, although geometrically accurate, are not parameterised. Other techniques used to reproduce “real” textures while preserving access to intelligent objects, with the purpose of dissemination and preservation of cultural heritage, involve “decal types” [27] or the creation of textured surface materials through “diffuse maps” derived from photogrammetric orthoimages. An attempt in this respect, aimed at preserving access to smart objects, was made by Ferreyra et al. in developing an application optimised for real-time visualisation [35]. The present research investigates a methodological proposal for linking UAV survey data, via full-size orthoimages used as “textures”, to the sub-components of a built asset, such as a façade, and its sub-parts. UAV data can form the basis for the creation of an interactive image database for the metric reconstruction of 3D geometry; the goal is to establish an actual link between the collected data and BIM models, thus improving model productivity. Many studies also concern the semi-automated generation of NURBS, e.g., in Rhinoceros, where the surveyed three-dimensional model is first transformed into a solid and then exported to a BIM environment via a VPL script developed in Grasshopper [36,37].
Other applications involve reverse modelling procedures, i.e., the conversion of numerical models produced from point clouds into mesh surfaces, generating polygonal connections that allow a more fluid and qualitatively reliable 3D model of an entire territory to be further optimised within environments such as 3D GIS [38]. New methodologies for replicating the unique complex details typical of the built heritage involve the manual, user-supervised cutting of the photogrammetric model; the resulting portions of the 3D model (OBJ) are then imported directly into a BIM environment (such as the ACCA software Edificius [39]) and positioned correctly in space, using the point cloud as a guideline, to be subsequently exported as IFC objects and imported into a software environment, such as Autodesk Revit, for semantic enhancement. In this case, the limitation of the proposed methodology relates to the need to employ multiple pieces of software and the consequently time-consuming nature of the process [40].

1.5. How to Bridge the Gap Identified from a Thorough Analysis of the State of the Art

It then appears clear that, in applying the BIM approach to Cultural Heritage, one of the main difficulties is that it is not always possible to identify standard constructive rules for architectural elements. Much depends on the complexity of the architecture, its details, the goal of the BIM and the relevance of the architecture, without forgetting the corresponding economic effort [41]. Thus, although the long-term aim of BIM modelling is to standardise the elements as much as possible, when the object to be modelled is typically unique, as in the case of the urban context, the aim becomes to propose a methodology for the standardisation of the process in order to reproduce the object as authentically as possible. As a matter of fact, the analysis of the state of the art reveals a lack of reliable protocols/systems for the realisation in a BIM environment of those elements of the built heritage characterised by a relevant historical, cultural and economic value and by a recognisable unicity.
Therefore, a systematic methodology for the Scan-to-BIM approach is proposed, which aims at standardising the whole process while also establishing two innovative procedural practices to reproduce, at different scales of detail, operability and modifiability, textured photogrammetric meshes of the urban context as well as detailed areas of interest within a BIM environment. The aim is to help fill the gap between the level of detail a survey can achieve (specifically a photogrammetric survey, in terms of colorimetric data) and what can be reproduced in a BIM environment for distinctive, if not unique, features such as the urban context or detailed elements, such as damaged areas, laying the foundations of a metaphorical eco-systemic monitoring environment for the management of the built heritage.

1.6. The Paper Structure

From here on, the paper is organised as follows. The second section deals with the proposed methodology in a top-down approach: from the proposed systematisation of Scan-to-BIM modelling, organised in five sequential steps (in a nutshell 3DS, GEO, FSC, ARC, LOI), to the innovative procedural workflows developed for the purpose, and some final considerations on the achievable Level of Information (LOI). In the third section, the case study, i.e., the Crotonian “Castle of Charles V”, is presented together with the integrated survey conducted and the specific HBIM modelling carried out. The fourth and fifth sections concern the discussion of the results and the conclusions.

2. Materials and Methods

2.1. Advantages of the Proposed Methodology

Although the long-term purpose of BIM modelling is to standardise as many elements as possible, when the object to model is unique, as in the case of the urban context, which is typically different and distinctive for each asset, the aim should shift to standardising the process so as to reproduce the object as authentically as possible for further in-depth study.
Hence, the proposed methodology involves two workflows developed to reproduce texturised photogrammetric meshes of the urban context, as well as selected detailed areas of interest, within a BIM environment by parametrising the very components of the mesh model: its triangular faces. In particular, the second procedure is useful whenever an accurate reproduction of selected areas is sought for future qualitative and quantitative assessments. The aim is indeed to bridge the gap between the level of detail a survey can reach (specifically a photogrammetric one, in terms of colorimetric data) and what can be reproduced in a BIM environment for peculiar, if not unique, elements such as the urban context, or for detailed relevant elements, such as friezes/decorations or damaged areas.

2.2. A Proposal for a Standardised Scan-to-HBIM Approach

A consolidated Scan-to-BIM approach usually involves first surveying the structure and the surrounding landscape on which to develop a BIM model. For this reason, we propose a framework for the standardisation of some well-known and lesser-known practices implemented in this process, so that data sources and the quality of their reproduction can be traced along the whole modelling process.
The tested HBIM methodology reported here, which employs the Autodesk software suite, is intended as a “good operational practice” and can be organised into five sequential steps as follows:
Three-dimensional survey (3DS);
Georeferencing (GEO);
Federated modelling and Shared Coordinates setting (FSC);
Architectural modelling (ARC);
Level of Information enhancement (LOI).
Each step of the proposed workflow is necessary for the subsequent one, but remains updatable thanks to the BIM environment. The modelling of the existing heritage is rarely a straightforward process and may, in some cases, be iterative, requiring some steps to be repeated or, at least, reordered. Remarkably, the LOI phase is present at different levels as a constant throughout the process, whether performed manually or via VPL scripts, by populating ad hoc parameters with varying types of information.

2.2.1. 3DS: Three-Dimensional Survey

A three-dimensional survey is imperative when modelling the stratified cultural heritage where each detail may have contributed to defining the site’s historic character. An integrated Laser Scanning—Photogrammetric survey is the most suitable choice. Namely, for medium-scale applications, TLS (Terrestrial Laser Scanning) and UAV (Unmanned Aerial Vehicle) surveys are carried out, to be later georeferenced within a common coordinate system via control points measured with topographic instrumentation for their subsequent integration.

2.2.2. GEO: Georeferencing

The georeferencing of the BIM models can be optimised by directly importing the mesh models of the surroundings, thanks to VPL (Visual Programming Language) scripts, into the same coordinate system as the surveyed model. For the visual script to work correctly, it is advisable to operate a rigid translation to a local coordinate system already at the end of the photogrammetric workflow and later revert to the global coordinate system by simply imposing the same rigid translation—in the opposite direction—to the “Project Base Point” (PBP) (A Revit project stores the internal coordinates for all the elements that compose the model. In detail, it is possible to distinguish between two different origin points: the Project Base Point (PBP) and the Survey Point (SP). The PBP defines the origin (0,0,0) of the project coordinate system and is used as a reference point for measurements across the site; the Survey Point identifies a real-world location near the model, such as a corner of the project site or the intersection of two property lines, and defines the origin of the survey coordinate system, which provides a real-world context for the model. To learn more about Project Base and Survey Points, visit: https://autode.sk/3ygsFXO, accessed on 21 February 2022) of the Revit projects.

2.2.3. FSC: Federated Modelling and Shared Coordinates Setting

The federated modelling stage requires the individual BIM projects (when operating within the Autodesk Revit software [42]), such as the architectural, structural and urban parts that may compose a complete superordinate model, to be linked into a shared environment, in practice a higher-level project. The first import of the sub-models has to use the PBP as the original reference point, so that the shared coordinates can later be “published” back to them.

2.2.4. ARC: Architectural BIM Modelling

Though the proposed application presents an architectural BIM model, this stage may equally apply to an accurate structural and/or mechanical modelling of the object of study. First, it is fundamental to import the integrated surveyed point clouds into the Autodesk ReCap Pro [43] environment so that they can be correctly read within a Revit project and used as guidelines for the Scan-to-BIM modelling. The modelling process then starts by placing already existing parametrised objects to fit the point cloud representing the surveyed asset or, in their absence, by realising ad hoc “families”.

2.2.5. LOI: Level of Information Enhancement

The acronym LOI is not new to parametric informative modelling technology; it stands for “Level of Information” and is seen as part of the equation that, together with the “Level of Geometric Detail” (LOG), defines the general concept of “Level of Development” (LOD). The LOI may include a vast amount of data in the form of parameters, which contribute to describing different aspects of a “smart” object. The current regulation provides a basic definition of it [25], to be further implemented according to the specific cases. As already mentioned, the data enrichment of the BIM models occurs along the whole process; thus, it may seem reductive, and probably wrong, to place it in the last step. Here, the LOI is intended as the additional information provided by the implementation developed ad hoc for the proposed methodology.

2.3. Procedural Workflows Developed within the Proposed Methodology

2.3.1. Workflows Premises: Global to Local System Transformation and Mesh Simplification

The two proposed procedural workflows, developed within the general Scan-to-BIM systematised methodology, are to all intents and purposes Mesh-to-BIM approaches. Therefore, for their effective implementation, some preliminary operations on the surveyed mesh model must be undertaken. As previously mentioned, when working with topographic coordinates within software not designed to manage this type of coordinates (e.g., MeshLab [44], Dynamo [45], etc.), approximation issues in the correct interpretation of the exact coordinates arise, leading to an incorrect visualisation and consequent reproduction of the mesh. For this reason, it is advisable to operate a transformation from the global system to a local one—so that the x, y and z coordinates of the points of the cloud, and accordingly the vertices of the mesh, have the same order of magnitude—by applying a rigid translation. This rigid translation has to be performed at the end of the photogrammetric process (in our case, Agisoft Metashape [46]), for example, by operating on the GCPs (Ground Control Points) used to scale and georeference the model: a fixed quantity is subtracted from both the longitude and latitude of the control points in the photogrammetric project. These quantities then define the inverse translation that will be imposed in the BIM environment by georeferencing the PBP.
Mesh simplification is a common practice for minimising model size by reducing the number of faces while preserving the shape, volume and boundaries. Criteria for mesh decimation are generally user-defined, selecting the reduction method (working on the number of vertices, edges or faces) and the reduction target; indeed, several commercial and open-source editing and modelling software packages include a mesh simplification module for handling this post-processing task efficiently [47]. Therefore, for an average notebook (Core i7, 16 GB of RAM, 2 GB GPU) to be able to process the developed scripts and manage the results, it is advisable to keep the mesh face count under 600,000 units for the first proposed method and under 20,000 units for the second one, simplifying the original photogrammetric mesh model and splitting it across more than one project (via Agisoft Metashape and ISTI-CNR MeshLab).
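To give a sense of how this preparatory step can be scripted, the following minimal sketch performs the same kind of face-count reduction outside the GUI tools named above; it assumes the Open3D library and hypothetical file names, since the authors worked in Agisoft Metashape and MeshLab rather than through code.

```python
# Hypothetical sketch: quadric decimation of the exported photogrammetric meshes
# down to the face budgets suggested in the text (file names are assumptions).
import open3d as o3d

TARGETS = {
    "context_workflow_A.obj": 600_000,   # Workflow A budget (urban context)
    "detail_workflow_B.ply": 20_000,     # Workflow B budget (detailed areas)
}

for path, target in TARGETS.items():
    mesh = o3d.io.read_triangle_mesh(path)            # load the exported mesh
    if len(mesh.triangles) > target:
        mesh = mesh.simplify_quadric_decimation(
            target_number_of_triangles=target)        # reduce the face count
    mesh.remove_degenerate_triangles()                # basic cleaning
    mesh.remove_duplicated_vertices()
    stem, ext = path.rsplit(".", 1)
    out = stem + "_decimated." + ext
    o3d.io.write_triangle_mesh(out, mesh)
    print(path, "->", out, len(mesh.triangles), "faces")
```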

2.3.2. The Workflow A: Importing the Meshes as a Unicum into the BIM Environment

The first proposed workflow involves using a simple VPL (Visual Programming Language) script to “import” into the BIM environment photogrammetric meshes (OBJ) of large areas of the urban context, chosen on a case-by-case basis given their distinctive uniqueness, employing Dynamo, the open-source platform that can be run as a plug-in for Autodesk Revit. Together with their related materials, they are generated as instances falling under the category “Site” for the predominantly horizontal areas of the urban fabric and “Mass” for some characteristic vertical elements of the urbanised area. Once “reprojected” in Revit, the mesh models of the general context can be easily textured through the full-size orthophotos [41,47], imported into the “Material Browser” as colour maps (Figure 1). It is worth clarifying that, although the most common formats for orthoimages, such as TIFF and JPG, are equally adequate to be imported as textures into Revit’s material browser, PNG is the one that leads to the best rendering results, due to the possibility of maintaining a transparent background and, at the same time, an optimal resolution/compression ratio.
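As an illustration of how such an import could be scripted, the sketch below shows one possible Dynamo Python node that reads a minimal OBJ and pushes it into Revit as a DirectShape. It is an assumption-based example, not the authors’ actual Dynamo graph (which generates “Site”/“Mass” instances with an attached material), and it omits unit conversion and material assignment for brevity.

```python
# Minimal, assumption-based Dynamo Python node: simplified OBJ reader feeding a
# DirectShape. Note: Revit works internally in feet; metre-to-foot conversion
# and material/category handling of the authors' script are omitted here.
import clr
clr.AddReference("RevitAPI")
clr.AddReference("RevitServices")
from Autodesk.Revit.DB import (XYZ, ElementId, BuiltInCategory, DirectShape,
                               TessellatedShapeBuilder, TessellatedFace,
                               TessellatedShapeBuilderTarget,
                               TessellatedShapeBuilderFallback)
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
from System.Collections.Generic import List

obj_path = IN[0]                       # path to the decimated OBJ (node input)

verts, faces = [], []
with open(obj_path) as f:              # tiny OBJ subset: 'v' and 'f' lines only
    for line in f:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(XYZ(float(parts[1]), float(parts[2]), float(parts[3])))
        elif parts[0] == "f":
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])

doc = DocumentManager.Instance.CurrentDBDocument
TransactionManager.Instance.EnsureInTransaction(doc)

builder = TessellatedShapeBuilder()
builder.OpenConnectedFaceSet(False)                  # open face set (not a solid)
for i1, i2, i3 in faces:
    pts = List[XYZ]([verts[i1], verts[i2], verts[i3]])
    builder.AddFace(TessellatedFace(pts, ElementId.InvalidElementId))
builder.CloseConnectedFaceSet()
builder.Target = TessellatedShapeBuilderTarget.Mesh
builder.Fallback = TessellatedShapeBuilderFallback.Salvage
builder.Build()

ds = DirectShape.CreateElement(doc, ElementId(BuiltInCategory.OST_GenericModel))
ds.SetShape(builder.GetBuildResult().GetGeometricalObjects())

TransactionManager.Instance.TransactionTaskDone()
OUT = ds
```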
The first workflow works quite well for larger urban areas that constitute the unique context of an architectural asset, specifically whenever they are predominantly horizontal; on the contrary, this method does not work flawlessly in the case of particularly articulated areas deemed worth reproducing precisely together with their colorimetric information, either for their historical value or in order to estimate their extent for further analysis.

2.3.3. The Workflow B: Mesh Model Parametrisation into the BIM Environment

The second workflow also starts from photogrammetric meshes (PLY, ASCII encoding), appropriately simplified as explained above, but it focuses on selected detailed areas of much smaller extent, to be exported together with their texture (PNG). The raster image representing the texture is first treated separately, reducing the colour scale from 256 colours to a range varying between 8 and 15 (the Adobe Photoshop “Indexed Color” tool can be employed for the purpose), depending on the variegated nature of the image in question, so as to reduce the number of materials subsequently generated to the minimum necessary. (When converting to “indexed” colour, the Photoshop tool builds a Colour Lookup Table (CLUT), which appears as a reduced palette and stores and indexes the colours in the image; if a colour in the original image does not appear in the table, the program chooses the closest one or simulates the colour using the available colours. By limiting the palette of colours, indexed colour can reduce file size while maintaining visual quality, for example, for a web page [48]).
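The colour-reduction step is not tied to Photoshop; as a hedged alternative, the short sketch below performs an equivalent adaptive palette reduction with the Pillow library (the texture file name is an assumption; Pillow ≥ 9.1 is assumed for the Palette enum).

```python
# Scripted alternative to the "Indexed Color" step: Pillow builds an adaptive
# palette limited to N colours before the texture is reprojected onto the mesh.
from PIL import Image

N_COLOURS = 12                                        # between 8 and 15, as in the text
tex = Image.open("North_1_texture.png").convert("RGB")
indexed = tex.convert("P", palette=Image.Palette.ADAPTIVE, colors=N_COLOURS)
indexed.convert("RGB").save("North_1_texture_indexed.png")   # back to an RGB PNG
```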
Through software capable of editing meshes, such as ISTI-CNR MeshLab, it is then possible to reassign the texture to the mesh, projecting the colours onto its vertices and from the vertices onto its faces (Figure 2). (A polygonal mesh is composed of at least vertices, edges and faces. In the case of a triangular mesh, its faces have three vertices, located in space by their coordinates (x, y, z). To compose each triangle, it is therefore necessary to know the indices of its vertices, intended as the number that identifies the place of each vertex in the complete list. To learn more about polygonal meshes, visit: https://bit.ly/3Ay99IV, accessed on 21 February 2022).
By re-exporting the resulting 3D models again in PLY-ASCII format, they can be read directly: the very same files can be opened in a text editor or imported into the Microsoft Excel environment to access their geometric information. The numeric data derived in this way are then filtered to retrieve only the face information, which appears as the following string:
3 I1 I2 I3 6 Tc1 Tc2 Tc3 Tc4 Tc5 Tc6 R G B α
where, from left to right, the “3” value indicates the number of vertices of each face; the “I#” values stand for the indices (the numbers that identify a vertex) of the vertices composing the face in question; the “6” value represents how many float-type texture coordinates are provided, followed by said coordinates, i.e., the “Tc#” values; and lastly the numeric values of the colour assigned to each face appear in the “R”, “G”, “B” and “α” channel format. For the purposes of the proposed method, only the first four and the last four values are considered.
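For illustration, a minimal Python parser for such face records might look as follows; the file name is hypothetical, and the header handling is reduced to what an exported PLY-ASCII mesh of this kind typically requires.

```python
# Illustrative parser for the face records described above (hypothetical file
# name). Each face line reads: 3 I1 I2 I3 6 Tc1 ... Tc6 R G B A
def parse_face_record(line):
    tokens = line.split()
    n_vertices = int(tokens[0])                        # "3": vertices per face
    indices = [int(t) for t in tokens[1:1 + n_vertices]]
    rgba = [int(t) for t in tokens[-4:]]               # R, G, B, alpha channels
    return indices, rgba                               # texture coords not needed here

with open("North_1.ply") as ply:
    lines = ply.read().splitlines()

header_end = lines.index("end_header")
n_vertex = next(int(l.split()[2])                      # vertex count from the header
                for l in lines[:header_end] if l.startswith("element vertex"))
face_lines = lines[header_end + 1 + n_vertex:]         # face records follow the vertices
faces = [parse_face_record(l) for l in face_lines if l.strip()]

print(len(faces), "faces parsed; first record:", faces[0])
```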
Both the mesh in the 3D format (PLY) and its data in the numerical format (XLSX) are then used as input, together with an original family of triangles with adaptive vertices (“Triangular Face.RFA”), for its discretised reproduction in the BIM environment through the second VPL script. Both the adaptive family component and the most complex script were developed explicitly for the proposed methodology (Figure 3).
In a nutshell, the developed VPL script first retrieves the vertex coordinates from the mesh in the PLY format and the related vertex indices and face colours from the spreadsheets. The photogrammetric colour information is then used to generate the minimum number of unique Revit materials corresponding to the “real” colours via the R, G, B and alpha channel data. The proper colour material and the vertices of each mesh face are then used as input data to set the “photogrammetric colour” parameter and the “adaptive points” coordinates of the “triangular face” family component used to recreate a discretised version of the photogrammetric mesh in the BIM environment.
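A sketch of the material-minimisation step, under the assumption that the face records have been parsed as in the previous example (the material naming convention shown is purely illustrative):

```python
# Deduplicate the RGBA values so that only one Revit material per unique colour
# needs to be generated, and group the faces by the material they will use.
# "faces" is assumed to come from the PLY parsing sketch shown earlier.
from collections import defaultdict

def material_name(rgba):
    r, g, b, a = rgba
    return "Photogrammetric_{:03d}_{:03d}_{:03d}".format(r, g, b)

faces_by_material = defaultdict(list)   # material name -> list of index triples
unique_colours = {}                     # material name -> (R, G, B, A)
for indices, rgba in faces:
    name = material_name(rgba)
    unique_colours[name] = tuple(rgba)
    faces_by_material[name].append(indices)

print(len(unique_colours), "unique materials for", len(faces), "faces")
```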

2.4. LOI Enhancement for Future Assessments

It is worth mentioning that both workflows keep an acceptable degree of approximation throughout the process, if we take into account the simplifications required by any modelling approach. Indeed, quantum physicist Niels Bohr would tell us that: “When we measure something we are forcing an undetermined, undefined world to assume an experimental value; we are not measuring the world, we are creating it”.
Additionally, in the case of flat surfaces, both procedures would virtually produce the same geometrically accurate outputs. In any case, small vertical imprecisions may even be neglected when the highest level of development is not required for the selected areas. On the contrary, the substantial difference between the workflows lies in the discretisation of the imported mesh model by means of the ad hoc developed “Triangular Face” element (Figure 4). This component is characterised by “adaptive vertices”, so that it can easily be placed just by picking three points in space, and by a set of “Shared Parameters”. (Shared Parameters are parameter definitions that can be used in multiple families or projects. Their definitions are stored in a file independent of any family file or Revit project; this allows the file to be accessed from different families or projects. A Shared Parameter is the definition of a container for information that can be used in multiple families or projects. To learn more about Shared Parameters, visit: https://autode.sk/3uwHvsh, accessed on 21 February 2022). The component is also characterised by the “Photogrammetric Material”; the three sides, i.e., “L1”, “L2” and “L3”, set as reporting parameters; the “Area”, calculated via Heron’s formula by running another simple VPL script that uses “L1”, “L2” and “L3” as inputs; and the “Comment”, to be filled out manually, thus allowing their selection, filtering and scheduling in report sheets for possible future assessments.
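As a minimal worked example of the “Area” computation mentioned above, Heron’s formula can be reproduced in a few lines (the side lengths used here are illustrative, not measured values):

```python
# Heron's formula: area of a triangular face from its three reported sides.
import math

def heron_area(l1, l2, l3):
    s = (l1 + l2 + l3) / 2.0                       # semi-perimeter
    return math.sqrt(max(s * (s - l1) * (s - l2) * (s - l3), 0.0))

print(heron_area(0.42, 0.38, 0.55))                # area of one face, in m^2
```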

3. Results

The presented methodology focused explicitly on the realistic and correctly georeferenced modelling of architectural assets and the related urban context, with particular attention to the accurate morphological-colorimetric reproduction of suitably selected detailed areas.
The proposed techniques were then validated on a medium-scale heritage case study, with a special focus on detailed areas that allowed us to verify the accuracy of the application. After the informative-parametric modelling of the object of study, i.e., the Crotonian “Castle of Charles V”, via the implementation of the consolidated Scan-to-BIM approach, the experimental applications proposed here were carried out by directly importing the triangulated models, commonly known as polygonal meshes, obtained from the photogrammetric modelling process. Once the exact georeferencing of both the macro-areas of the urban context (imported via Workflow A) and the detailed areas (reproduced via Workflow B) had been carried out by operating on the PBP, it was possible to organise the federation of the architectural model effortlessly. The starting point of the present work was an integrated TLS-UAV survey of the built environment of the city of Crotone (Calabria, Italy), carried out by the Laboratorio Modelli of the University of Salerno in July 2021.

3.1. The Case Study of the Crotonian “Fortress of Charles V”

The Calabrian fortress known as the “Castle of Charles V” is one of the most impressive in southern Italy. Its walls trace a quadrangular profile around the upper slopes of a hill, fortifying them and reinforcing the corners with two circular towers (“Torre Aiutante” and “Torre Comandante”) and two polygonal bastions (“Bastione Santa Caterina” and “Bastione San Giacomo”). The case study is an emblem of the historical memory of the Crotonians, as it was built upon the site of the original Greek acropolis (Króton), first transformed into a Roman citadel and later into medieval fortifications (Figure 5).
The fort became one of the 75 castles belonging to Roger II’s vassals during the Norman period. It was accessed from what is now Piazza Castello via a bridge that was partly a fixed stone structure and partly a wooden drawbridge. After the Swabian conquest, Frederick II decided to restore the castle together with the city port. In the 14th century, the medieval fortress underwent some adaptations imposed by the use of artillery, although the heaviest changes date back to the end of the 15th century, when Ferdinand I ordered the fortification of the most exposed maritime sites in Calabria. Another major restoration was carried out under Charles V at the end of the 16th century to comply with the latest fortification criteria, followed by further works in the 17th and 19th centuries.
As time went by, and due to the technological advance of war weapons, the castle lost its strategic military importance. During the 19th century, it was partially dismantled, also because of damage caused by frequent earthquakes. Today, it houses a Civic Museum of archaeological interest (to further investigate the historical evolution of Crotone Castle, we suggest visiting the following web pages: https://bit.ly/3alEJiB; https://bit.ly/3bU1vhS; https://bit.ly/3ahNHxr, accessed on 21 February 2022).
It is therefore clear that the fortress of Crotone, and even more so its surroundings, constitute a fundamental part of the identity and memory of the city; hence the need to find the most appropriate way to reproduce, as closely to reality as possible, the alternation of urbanised volumes that characterises it, and to develop at a later stage a further in-depth study of specific areas of particular historical interest that deserve closer analysis. From this arises the need for a second procedure providing the tools for a more in-depth qualitative and quantitative analysis of those unique and unrepeatable elements of the landscape. In detail, the northern and eastern areas chosen for the application of the second method represent some of the structures remaining from the 14th-century adaptation that preceded Charles V’s restoration.

3.2. The Integrated Three-Dimensional Survey

TLS and photogrammetric techniques each have advantages and disadvantages; the discriminating factor often becomes the project budget rather than the required objectives or level of detail. Photogrammetric techniques require experience, above all in the acquisition phase, in order to obtain an accurate final result. TLS, on the other hand, while easy to use, requires experience in setting up the parameters and is a highly time-consuming and costly activity. The choice of which method to use depends mainly on the complexity of the site to be investigated, the accuracy requirements and the budget and time available. For this reason, the integration of multiple techniques is often the most suitable solution.
The initial purpose of the survey campaign was the documentation of the exteriors of the Crotone Fortress for their subsequent valorisation; a reality-based model of the castle and its surroundings was therefore acquired employing integrated survey techniques, to serve as a basis for dissemination activities aimed at promoting the Italian cultural heritage. The integrated survey, obtained by combining UAV and TLS data, was carried out with the purpose of filling the gaps in both clouds, consisting of large portions of the castle that were not surveyed due to dense vegetation. The resulting three-dimensional multiscale model was therefore suitable for the development of sufficiently detailed HBIM models and for an initial assessment of possible maintenance and restoration work.
With the aim of obtaining full coverage of the area under study, a UAV survey was planned and then integrated with a TLS survey once both had been registered within the same coordinate system via six common control points. The acquisitions obtained in this way provided an accurate and georeferenced database for the subsequent HBIM modelling phase of the “Castle of Charles V” (Figure 6).

3.2.1. The Unmanned Aerial Vehicle (UAV) Survey

Given the case study’s relevance and the intention of producing a texture suitable for future applications, an aero-photogrammetric survey with the following characteristics was designed. The drone used was a DJI Phantom 4 Pro equipped with an integrated 20 Megapixel camera with a 1″ CMOS sensor (5472 × 3648 pixels, Field of View (FOV) of 84°, Focal Length of 8.8 mm, Pixel Size of 2.41 μm).
In order to control the metric error and georeference the point cloud in the “EPSG: 32633—WGS 84/UTM zone 33N” system, six GCPs (Ground Control Points) were measured in nRTK (Network Real-Time Kinematic) mode by means of a Geomax Zenith 25 receiver. GCP measurement accuracy was contained within a 1.5 cm range both in planimetry and altimetry, for a total error of less than 2.5 cm (Table 1).
The photogrammetric shots were carried out both with a flight plan—571 takes, creating a final square grid for the nadiral images—designed using the DJI Ground-Station software package, and in manual mode (533 takes, following the outline of the castle boundary (Table 1)). The choice to also acquire oblique images was necessary for the implementation of texture information on vertical elevations, and simultaneously increased the accuracy of the photogrammetric survey [49].
A total of 1104 photogrammetric shots were acquired with an average GSD (Ground Sample Distance) of approximately 1.4 cm/px, for a total surveyed area of approximately 2.8 hectares. At the end of the photogrammetric process, elaborated within the Agisoft Metashape environment with the quality fixed at “Highest” and the filter option set to “Disabled”, the following outputs were obtained: a dense point cloud of 101,640,748 points, a mesh of 20,328,148 faces and 10,186,948 vertices, and a texture size of 8192 px (Table 1).
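As a cross-check of the reported figures, the standard GSD relation can be inverted to estimate the mean flying height, which is not stated explicitly in the paper; the value below is therefore an inference from the published camera parameters, not a reported datum.

```python
# GSD = pixel_size * flight_height / focal_length, solved for the flight height.
pixel_size = 2.41e-6     # m  (sensor pixel pitch)
focal_length = 8.8e-3    # m
gsd = 0.014              # m/px (reported average)

flight_height = gsd * focal_length / pixel_size
print(round(flight_height, 1), "m")   # ~51 m implied mean flying height (inference)
```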

3.2.2. The Terrestrial Laser Scanning (TLS) Survey

The laser survey campaign was carried out employing a phase-shift laser scanner, the Faro Focus3D X 330 with an integrated GPS receiver, which, under optimal environmental conditions, provides a scanning range of 0.6 m to 330 m, a measurement speed of up to 976,000 points/s, a linearity error of ±2 mm, a vertical FOV of 300° and a horizontal FOV of 360°. The instrument was set to acquire scans with an average resolution of 1/5 and a quality of 3×. A total of 31 TLS stations were set up along the perimeter of the case study, approximately 600 m long, covering a total area of around 3 ha. The proprietary Faro Scene software was used for data processing. The structured registered information was later exported to the Autodesk ReCap environment, where the previously measured GCPs were used to georeference the laser point cloud as well, so that the photogrammetric cloud could be imported within the same project.

3.3. HBIM Modelling

To achieve a metrically reliable HBIM model, a manual Scan-to-BIM approach—within the Autodesk Revit environment—was applied for the modelling of the architectural BIM objects, aiming to keep them updatable at any time. On the other hand, a Mesh-to-BIM implementation—via ad hoc, albeit repeatable, VPL scripts, all of which were designed by the authors employing the Dynamo tool for Revit—was proposed for the unique elements of the built environment, i.e., the city neighbourhoods surrounding the castle and the area of the urban context that climbs up the external walls of the fortress, merging with it.

3.3.1. Scan-to-BIM Architectural Modelling

Scan-to-BIM is a reverse-engineering methodology that employs a point cloud as the basis for the parametric modelling of the architectural asset. To import the point cloud into Autodesk Revit, it was mandatory to use Autodesk ReCap Pro as the intermediary software, where the point cloud was segmented into homogeneous regions to facilitate the subsequent modelling phase. Due to the irregular geometries and the different construction phases that characterise the case study, modelling was not a straightforward procedure. Particularly complex was the setting up of the boundary walls, which presented inhomogeneous thicknesses, deviations and a lack of perpendicularity, in addition to being irregular both in planimetry and in elevation. Although the most recent updates of Autodesk Revit have introduced “sloped” and “tapered” walls as system families, their full functionality is far from being achieved; thus, occasional in-place mass models were conceived to serve as reference planes for the correct design of the sharp corners of the walls. Furthermore, “parametric voids” were realised as updatable families for reproducing the niches and the various openings. The architectural model of the castle was then named “Castle_CV_Kroton.RVT” (Figure 7).

3.3.2. Urban Context Mesh Model Importing via Workflow A

As mentioned in Section 2.3.1, for the mesh model of the urban context to be imported into Revit, a transformation of the reference coordinate system must be performed to avoid approximation issues. It was carried out by operating on the six initially measured GCPs, subtracting a fixed quantity from the x and y coordinates (x = 684,300 m and y = 4,327,900 m), resulting in the locally translated GCPs reported in Table 2.
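A worked example of this rigid translation and of its inverse (the one later assigned to the Revit Project Base Point) is sketched below; the sample point is illustrative and is not one of the six measured GCPs.

```python
# Rigid translation between global UTM (EPSG:32633) and local coordinates,
# using the offsets reported in the text.
X_OFFSET = 684_300.0     # m, subtracted from every easting (x)
Y_OFFSET = 4_327_900.0   # m, subtracted from every northing (y)

def to_local(e, n, h):
    return e - X_OFFSET, n - Y_OFFSET, h           # global UTM -> local

def to_global(x, y, z):
    return x + X_OFFSET, y + Y_OFFSET, z           # local -> global UTM (inverse, as set on the PBP)

# Illustrative point (not one of the measured GCPs):
print(tuple(round(v, 3) for v in to_local(684_351.2, 4_327_954.8, 12.3)))  # (51.2, 54.8, 12.3)
```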
A simplification of the mesh model was also required, accomplished via smoothing and decimation tools. The outputs of this phase consist of four context mesh models. The resulting upper and lower areas of the mainly horizontal built environment are made up of 349,999 and 499,999 faces, respectively, while the house cluster placed over the castle—249,999 faces—and the continuous walls facing the castle on the east side—149,999 faces—were treated separately. The four three-dimensional models were then exported in OBJ format to be later imported into Revit via the VPL script developed for Workflow A (Figure 8). The “Site” category was assigned to the BIM instances produced from the predominantly horizontal mesh models and “Mass” to the mostly vertical ones, additionally generating for every instance a related material under the same name. Finally, the orthoimages, produced separately for each model at a resolution of 1.42 cm/px, were used at full size to texturise them (Figure 9).
The results of the first workflow implementation were then separated into four Revit projects: “Context_1.1.RVT” (the upper horizontal area of the city neighbourhood), “Context_1.2.RVT” (the lower horizontal area of the city neighbourhood), “Context_2.1.RVT” (the cluster of houses over the castle) and “Context_2.2.RVT” (the city walls to the west of the castle) (Figure 10).

3.3.3. Northern and Eastern Detailed Mesh Model Discretising via Workflow B

Given the necessity of keeping the face count strictly under 20,000 units for a manageable implementation of the second workflow, in terms of both script execution and the results produced, the preparatory phase of the meshes proved to be even more relevant. A first decimation was performed directly in the photogrammetric software on the northern and eastern areas selected for the application. Each area was further divided into two sub-areas due to their excessive extension, resulting in four texturised mesh models: North_1 (19,881 faces), North_2 (19,960 faces), East_1 (16,000 faces) and East_2 (15,722 faces).
The triangulated models were then exported, each in PLY (ASCII encoding) format together with their textures (PNG). The raster images representing the textures were then also simplified in Adobe Photoshop by “indexing” their colour scale to between 8 and 15 colours, depending on the variety of colours in the images. Once the mesh edges and vertices had been thoroughly “cleaned” and “repaired”, employing suitable MeshLab filters, the respective textures were reapplied in order to project them onto the vertices and, from the vertices, onto the faces. Once the meshes had been re-exported in the PLY-ASCII format, the numerical information that describes them was imported into Microsoft Excel to be sorted, as explained in Section 2.3.3.
Each of the four meshes was then reproduced in its own Revit project, using both the mesh in the PLY format and the related numeric data in XLSX as inputs for the VPL script (Figure 11): the former for extracting the vertices and transforming them into Dynamo points, the latter for retrieving the indices and the colours of the faces in order to generate triangles, belonging to the “Triangular Face.RFA” family, placed via their adaptive points, which correspond, in sequences of three, to the vertices of the meshes according to the order given by the indices.
After the generation of the triangular instances, the subsequent step was to assign the actual colours, as newly created Revit materials, to the “Photogrammetric Colour” parameter. Lastly, the adaptive triangles to be created were partitioned into 2000-unit batches for the hardware to be able to process the script.
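A minimal sketch of this batching step (the face count used is the one reported for North_1; the chunking logic itself is generic):

```python
# Split the list of triangles to be placed into 2000-element chunks so that
# each run stays within the hardware limits mentioned in the text.
def batches(items, size=2000):
    for start in range(0, len(items), size):
        yield items[start:start + size]

triangles = list(range(19_881))                 # e.g. the North_1 face count
for i, chunk in enumerate(batches(triangles)):
    print("batch", i, "->", len(chunk), "triangles")   # 10 batches of <= 2000
```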
At the end of the second workflow application, four independent Revit projects, “North_1.RVT”, “North_2.RVT”, “East_1.RVT” and “East_2.RVT” were produced, too (Figure 12).

3.3.4. Georeferencing and Federated Models Setting Up

The georeferencing of the eight models produced by the implementation of the first and the second workflows within the Revit environment was as simple as assigning to the Project Base Point (PBP) of each project the quantities previously subtracted from the GCPs in Agisoft Metashape, taking into account that the “x” value represents the longitude, i.e., the “E/W” coordinate, while the “y” corresponds to the latitude, i.e., the “N/S” coordinate.
It was then sufficient to link these eight projects into another one, using the PBP as a reference, to make them fit perfectly, having correctly placed them back in the georeferenced coordinate system “EPSG: 32633” (Figure 13). The BIM mesh models of the surroundings then served as guidelines to georeference the architectural BIM model of the fortress as well. At the end of the process, the nine models, linked all together within a superordinate Revit project, shared the same coordinate system, which was eventually “published” back to them to store it (Figure 14).

4. Discussion

This type of documentation, which goes in the direction of as-built modelling, is proposed as a valid support tool for management and maintenance purposes, as well as an effective means of updating the actual state of knowledge about the built environment.
The results achieved show that the proposed study provides an efficient semi-automated approach for extracting geometric information from a complex topography acquired with laser scanning and photogrammetric data, and for creating a correctly contextualised as-built BIM model based on the extracted information. This approach makes it possible to obtain the parameters necessary to create BIM models of historical architecture with complex shapes from an integrated point cloud.
The possibility of building an actual bridge between the surveyed database and the BIM models, where the data can be enriched as required, moves in the direction of concretely employing informative models as support tools for restoration and refurbishment projects. Hence, the incorporation of photogrammetry-derived mesh models and materials into the BIM environment makes it possible to measure them directly with a good degree of approximation. Depending on the required level of detail, it is then possible to obtain both a sufficiently precise contextualisation and a morphologically and colorimetrically accurate reproduction of selected areas of detail for those elements of the built environment with a typically unique formal and cultural value, which makes the approach worthwhile for informative modelling within a wider monitoring system. In particular, the modelling procedure developed for “Workflow A”, here carried out on parts of the context, is reproducible for any unique detail that can be catalogued under a “Category” other than “Site”.
On the other hand, should more detailed modelling of the selected areas, along with their realistic colorimetric data, be required, due to their particular historical value or because they are affected by degenerative decay and thus in need of urgent intervention, they can be reproduced in the BIM environment no longer as a unicum, but rather as discretised elements storing “Shared Parameters” that allow any sort of filtering and assessment to be performed. The parameters assigned to the triangles can be updated and extended on a case-by-case basis, and by customising them, e.g., by filling in the “Comment” parameter, it is possible to select, visually filter and group them while also calculating their cumulative area (Figure 15).

5. Conclusions

Although the automation of the protocols used to generate these procedures still has a long way to go, they are fundamental in defining the basis from which to develop multimedia systems capable of reproducing the complex spatial relationships that exist between the built environment and historical architectural artefacts. Borrowing from Werner Heisenberg: “We have to remember that what we observe is not nature in itself but nature exposed to our method of questioning”. Hence, the informative digitisation of complex territorial realities aims to promote programmes for the renewal of the historical heritage, to update the existing databases and to develop, through 3D digital systems, conservation and restoration techniques for the architectural heritage [39].
Future developments will certainly try to combine TLS and close-range photogrammetry data for indoor applications. This type of integrated data will initially be used for the manual BIM model of the main structure but also represents an interesting challenge if used as the source for the proposed procedural workflows, implementing triangulated mesh models derived from both the laser and the entire integrated data set.
The management of the existing heritage cannot be dissociated from a thorough investigation of the state of preservation of materials and a detailed 3D reconstruction. The morphological and colorimetric reconstruction of peculiar and complex structures, elements and friezes in the BIM environment is essential for the construction of databases to archive data and facilitate the planning of restoration or partial, although identifiable, reconstructions [40].
Indeed, the semi-automated implementations proposed here can be easily applied in subsequent case studies to improve the automation of the methodology and further develop its potential to accurately estimate the geometric dimensions of any area under study.

Author Contributions

Conceptualisation, A.S. and B.M.; methodology, A.S.; software, A.S.; validation, A.S., M.L. and C.G.G.; formal analysis, A.S. and M.L.; investigation, A.S. and C.G.G.; resources, B.M. and M.L.; data curation, A.S., M.L. and C.G.G.; writing—original draft preparation, A.S. and C.G.G.; writing—review and editing, B.M. and M.L.; visualization, A.S. and B.M.; supervision, B.M.; project administration, B.M. and A.S.; funding acquisition, B.M. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was carried out as part of the “KROTON LAB project” funded by Italian regional, municipal and local authorities.

Data Availability Statement

A comprehensive rendered model of the whole project is available at: http://bit.ly/3vlPTWf; the two IFC models composing the northern area, enhanced as explained in Figure 15, are available at: http://autode.sk/3ODEnle; http://autode.sk/3cEB3sZ.

Acknowledgments

The survey and modelling activities were conducted as part of a research project of the Laboratorio Modelli of the University of Salerno in collaboration with Naos Consulting and the Calabria Region—“KROTON LAB project”.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Stylianidis, E. Photogrammetric Survey for the Recording and Documentation of Historic Buildings, 1st ed.; Solari, G., Chen, S.-H., di Prisco, M., Vayas, I., Eds.; Springer: Cham, Switzerland, 2020; ISBN 978-3-030-47310-5.
2. Rashdi, R.; Martínez-Sánchez, J.; Arias, P.; Qiu, Z. Scanning Technologies to Building Information Modelling: A Review. Infrastructures 2022, 7, 49.
3. Brumana, R.; della Torre, S.; Oreni, D.; Previtali, M.; Cantini, L.; Barazzetti, L.; Franchi, A.; Banfi, F. HBIM Challenge among the Paradigm of Complexity, Tools and Preservation: The Basilica Di Collemaggio 8 Years after the Earthquake (L’Aquila). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. 2017, 42, 97–104.
4. Rocha, G.; Mateus, L.; Fernández, J.; Ferreira, V. A Scan-to-BIM Methodology Applied to Heritage Buildings. Heritage 2020, 3, 47–65.
5. Badenko, V.; Fedotov, A.; Zotov, D.; Lytkin, S.; Volgin, D.; Garg, R.D.; Min, L. Scan-to-BIM Methodology Adapted for Different Application. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. 2019, 42, 1–7.
6. Teruggi, S.; Grilli, E.; Fassi, F.; Remondino, F. 3D Surveying, Semantic Enrichment and Virtual Access of Large Cultural Heritage. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 8, 155–162.
7. Grilli, E.; Remondino, F. Classification of 3D Digital Heritage. Remote Sens. 2019, 11, 847.
8. Colomina, I.; Molina, P. Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
9. Silverberg, L.M.; Bieber, C. Central Command Architecture for High-Order Autonomous Unmanned Aerial Systems. Intell. Inf. Manag. 2014, 6, 183–195.
10. Eppelbaum, L.; Mishne, A. Unmanned Airborne Magnetic and VLF Investigations: Effective Geophysical Methodology for the Near Future. Positioning 2011, 2, 112–133.
11. Hadjimitsis, D.G.; Agapiou, A.; Themistocleous, K.; Alexakis, D.D.; Sarris, A. Remote sensing applications in archaeological research. In Remote Sensing-Applications; InTech: Vienna, Austria, 2012.
12. Barba, S.; Barbarella, M.; di Benedetto, A.; Fiani, M.; Limongiello, M. Comparison of UAVs Performance for a Roman Anphitheatre Survey: The Case of Avella (Italy). Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 179–186.
13. Federman, A.; Quintero, M.S.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J. UAV Photogrammetric Workflows: A Best Practice Guideline. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 237–244.
14. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV Photogrammetry for Mapping and 3D Modeling—Current Status and Future Perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 38, 25–31.
15. Adamopoulos, E.; Rinaudo, F. UAS-Based Archaeological Remote Sensing: Review, Meta-Analysis and State-of-the-Art. Drones 2020, 4, 46.
16. Brumana, R.; Oreni, D.; van Hecke, L.; Barazzetti, L.; Previtali, M.; Roncoroni, F.; Valente, R. Combined Geometric and Thermal Analysis from UAV Platforms for Archaeological Heritage Documentation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 5, 49–54.
17. Nikolakopoulos, K.G.; Soura, K.; Koukouvelas, I.K.; Argyropoulos, N.G. UAV vs. Classical Aerial Photogrammetry for Archaeological Studies. J. Archaeol. Sci. Rep. 2017, 14, 758–773.
18. Barazzetti, L.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F. True-Orthophoto Generation from UAV Images: Implementation of a Combined Photogrammetric and Computer Vision Approach. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 57–63.
19. Lato, M.; Kemeny, J.; Harrap, R.M.; Bevan, G. Rock Bench: Establishing a Common Repository and Standards for Assessing Rockmass Characteristics Using LiDAR and Photogrammetry. Comput. Geosci. 2013, 50, 106–114.
20. Abdullah, C.C.K.; Baharuddin, N.Z.S.; Ariff, M.F.M.; Majid, Z.; Lau, C.L.; Yusoff, A.R.; Idris, K.M.; Aspuri, A. Integration of Point Clouds Dataset from Different Sensors. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 9–15.
  20. Abdullah, C.C.K.; Baharuddin, N.Z.S.; Ariff, M.F.M.; Majid, Z.; Lau, C.L.; Yusoff, A.R.; Idris, K.M.; Aspuri, A. Integration of Point Clouds Dataset from Different Sensors. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 9–15. [Google Scholar] [CrossRef] [Green Version]
  21. Romero-Jarén, R.; Arranz, J.J. Automatic Segmentation and Classification of BIM Elements from Point Clouds. Autom. Constr. 2021, 124, 103576. [Google Scholar] [CrossRef]
  22. de Geyter, S.; Vermandere, J.; de Winter, H.; Bassier, M.; Vergauwen, M. Point Cloud Validation: On the Impact of Laser Scanning Technologies on the Semantic Segmentation for BIM Modeling and Evaluation. Remote Sens. 2022, 14, 582. [Google Scholar] [CrossRef]
  23. Hübner, P.; Clintworth, K.; Liu, Q.; Weinmann, M.; Wursthorn, S. Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications. Sensors 2020, 20, 1021. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. BIM Forum. Level of Development (LOD) Specification Part I & Commentary for Building Information Models. 2019. Available online: https://bimforum.org/lod/ (accessed on 30 January 2022).
25. Technical Committees UNI/CT033, UNI/CT033/SC05. Edilizia e Opere Di Ingegneria Civile-Gestione Digitale Dei Processi Informativi Delle Costruzioni—Parte 4: Evoluzione e Sviluppo Informativo Di Modelli, Elaborati e Oggetti; UNI 11337-4:2017; Italy, 2017. Available online: https://store.uni.com/uni-11337-4-2017 (accessed on 21 February 2022).
26. Sun, C.; Zhou, Y.; Han, Y. Automatic Generation of Architecture Facade for Historical Urban Renovation Using Generative Adversarial Network. Build. Environ. 2022, 212, 108781.
27. Sun, Z.; Xie, J.; Zhang, Y.; Cao, Y. As-Built BIM for a Fifteenth-Century Chinese Brick Structure at Various LoDs. ISPRS Int. J. Geo-Inf. 2019, 8, 577.
28. Yang, X.; Lu, Y.C.; Murtiyoso, A.; Koehl, M.; Grussenmeyer, P. HBIM Modeling from the Surface Mesh and Its Extended Capability of Knowledge Representation. ISPRS Int. J. Geo-Inf. 2019, 8, 301.
29. Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M. Creation of Parametric BIM Objects from Point Clouds Using NURBS. Photogramm. Rec. 2015, 30, 339–362.
30. Tommasi, C.; Achille, C.; Fassi, F. From Point Cloud to BIM: A Modelling Challenge in the Cultural Heritage Field. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. Int. Soc. Photogramm. Remote Sens. 2016, 41, 429–436.
31. Jia, S.; Liao, Y.; Xiao, Y.; Zhang, B.; Meng, X.; Qin, K. Methods of Conserving and Managing Cultural Heritage in Classical Chinese Royal Gardens Based on 3D Digitalization. Sustainability 2022, 14, 4108.
32. Dore, C.; Murphy, M. Current State of the Art Historic Building Information Modelling. Remote Sens. Spat. Inf. Sci.-ISPRS Arch. Int. Soc. Photogramm. Remote Sens. 2017, 42, 185–192.
33. Murphy, M.; McGovern, E.; Pavia, S. Historic Building Information Modelling—Adding Intelligence to Laser and Image Based Surveys of European Classical Architecture. ISPRS J. Photogramm. Remote Sens. 2013, 76, 89–102.
34. Casillo, M.; Colace, F.; Lorusso, A.; Marongiu, F.; Santaniello, D. An IoT-based system for expert user supporting to monitor, manage and protect cultural heritage buildings. In Robotics and AI for Cybersecurity and Critical Infrastructure in Smart Cities. Studies in Computational Intelligence; Nedjah, N., Abd El-Latif, A.A., Gupta, B.B., Mourelle, L.M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; Volume 1030, pp. 143–154. ISBN 978-3-030-96737-6.
35. Ferreyra, C.; Sanseverino, A.; di Filippo, A. Image-Based Elaborations to Improve the HBIM Level of Development. Dn. Build. Inf. Modeling Data Semant. 2021, 8, 109–120.
36. Barazzetti, L.; Banfi, F.; Brumana, R.; Previtali, M.; Roncoroni, F. BIM from Laser Scans… Not Just for Buildings: NURBS-Based Parametric Modeling of a Medieval Bridge. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 51–56.
37. Acosta, E.; Spettu, F.; Fiorillo, F. A Procedure to Import a Complex Geometry Model of a Heritage Building into BIM for Advanced Architectural Representations. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 9–16.
38. Parrinello, S.; Picchio, F.; de Marco, R. Urban Modeling Experiences for the Representation of the Historical City in Holy Land. DisegnareCON 2018, 11, 5.1–5.22.
39. ACCA Software Edificius. Available online: https://www.acca.it/software-progettazione-edilizia (accessed on 21 February 2022).
40. Barrile, V.; Bernardo, E.; Bilotta, G. An Experimental HBIM Processing: Innovative Tool for 3D Model Reconstruction of Morpho-Typological Phases for the Cultural Heritage. Remote Sens. 2022, 14, 1288.
41. Fassi, F.; Fregonese, L.; Adami, A.; Rechichi, F. BIM System for the Conservation and Preservation of the Mosaics of San Marco in Venice. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. ISPRS Arch. 2017, 42, 229–236.
42. Autodesk Revit Software. Available online: https://www.autodesk.com/products/revit/architecture (accessed on 21 February 2022).
43. Autodesk ReCap Pro. Available online: https://www.autodesk.it/products/recap/overview?term=1-YEAR&tab=subscription (accessed on 21 February 2022).
44. MeshLab 2021.10. Available online: https://www.meshlab.net/#download (accessed on 21 February 2022).
45. Dynamo for Revit. Available online: https://dynamobim.org/ (accessed on 21 February 2022).
46. Agisoft Metashape Professional. Available online: https://www.agisoft.com/ (accessed on 21 February 2022).
47. Farella, E.M.; Morelli, L.; Rigon, S.; Grilli, E.; Remondino, F. Analysing Key Steps of the Photogrammetric Pipeline for Museum Artefacts 3D Digitisation. Sustainability 2022, 14, 5740.
48. Adobe®. Using Image Modes and Color Tables. Available online: https://adobe.ly/3BjeSmj (accessed on 21 February 2022).
49. Barba, S.; Barbarella, M.; di Benedetto, A.; Fiani, M.; Gujski, L.; Limongiello, M. Accuracy Assessment of 3D Photogrammetric Models from an Unmanned Aerial Vehicle. Drones 2019, 3, 79.
Figure 1. Procedural Workflow A scheme.
Figure 2. Procedural Workflow B scheme. (MESH FACES AS REVIT INSTANCES GENERATING *: refers to the entire script described in Figure 3).
Figure 3. Workflow B: Visual Programming Language (VPL) script to parametrise the mesh into Revit. (PROCESS PARTITIONING **: refers to the choice of partitioning the script in Figure 11 by repeating the last two steps—Figure 11d—up to eight times, so as to monitor the process and allow the hardware to run it efficiently).
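To make the face-extraction and partitioning steps of Workflow B more tangible, the following minimal Python sketch (the kind of logic that could sit in a Dynamo Python node, although it is not the authors' published script) parses a triangulated OBJ export, collects the three vertex coordinates of every face and splits the face list into batches to be run separately; the file name, function names and batch count are illustrative assumptions.

```python
# Illustrative sketch only (not the published VPL script): extract the three
# vertex coordinates of each triangular face from an OBJ mesh and partition
# the faces into batches, mirroring the "process partitioning" of Workflow B.

def read_obj_faces(path):
    """Return a list of faces, each as a tuple of three (x, y, z) vertices."""
    vertices, faces = [], []
    with open(path, "r") as obj:
        for line in obj:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                              # vertex line: v x y z
                vertices.append(tuple(float(c) for c in parts[1:4]))
            elif parts[0] == "f":                            # face line: f i j k (1-based indices)
                idx = [int(p.split("/")[0]) - 1 for p in parts[1:4]]
                faces.append(tuple(vertices[i] for i in idx))
    return faces

def partition(items, n_batches):
    """Split a list into at most n_batches contiguous chunks of similar size."""
    size = max(1, -(-len(items) // n_batches))               # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

if __name__ == "__main__":
    faces = read_obj_faces("north_area_mesh.obj")            # hypothetical file name
    batches = partition(faces, 8)                            # e.g., eight separate runs
    print(f"{len(faces)} faces split into {len(batches)} batches")
```

Each batch of faces would then feed the adaptive-component placement summarised in Figure 4.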
Figure 4. Adaptive component, i.e., “Triangular Face.RFA”, in the Revit “Family Editor” and the VPL script to calculate the triangle areas, once placed as “Family Instances”, via Heron’s formula.
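As a worked example of the area computation referenced in the caption above, the snippet below applies Heron's formula to the three placement points of a triangular face; it is a plain-Python sketch of the calculation rather than the published VPL graph.

```python
# Minimal sketch of the per-face area computation via Heron's formula.
from math import dist, sqrt

def triangle_area(p1, p2, p3):
    """Area of the triangle whose vertices are the (x, y, z) tuples p1, p2, p3."""
    a, b, c = dist(p1, p2), dist(p2, p3), dist(p3, p1)  # edge lengths
    s = (a + b + c) / 2.0                               # semi-perimeter
    return sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

# Example: a right triangle with legs of 3 m and 4 m lying in the XY plane.
print(triangle_area((0, 0, 0), (3, 0, 0), (0, 4, 0)))   # -> 6.0
```

Summing these per-face areas over a selection of instances yields the kind of geometric-dimension estimate discussed in the conclusions and exploitable in Revit schedules (Figure 15).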
Figure 5. Territorial overview of the castle’s neighbourhoods with archival image overlay.
Figure 6. Integrated TLS (green) and UAV (violet) point clouds.
Figure 7. Photobashing to synthesise the Scan-to-HBIM process.
Figure 8. VPL script developed for Workflow A, in the Dynamo for Revit environment (e.g., the project file “Context_1.1.RVT”).
Figure 9. Real-sized orthoimages used as textures for the Revit materials (view of the Revit “Material Browser”).
Figure 10. Results of the Workflow A implementation and the issues it posed in texturing the vertical elements.
Figure 11. VPL script developed for Workflow B, in the Dynamo for Revit environment (e.g., “North_1.RVT”).
Figure 12. Results of Workflow B implementation.
Figure 13. Overview of the federated linked models, georeferenced via the “Project Base Point”.
Figure 14. View of the rendered federated models.
Figure 15. Overview of the visual filtering and scheduling possibilities resulting from Workflow B application (e.g., north detailed area).
Table 1. The table summarises the input and output data of the photogrammetric survey and the subsequent processing. Rows 1 and 3 list the input and output variables, respectively, while rows 2 and 4 contain the corresponding numerical values for this survey.

UAV Survey: Input Data
Total Images | Nadiral Shots [Flight Plan] | Oblique Shots [Manual Mode] | Number of GCPs | GCP Accuracy [Planimetry] | GCP Accuracy [Altimetry]
1104 | 571 | 533 | 6 | 1.5 cm | 2.5 cm

Photogrammetric Output Data
GSD | Quality & Filtering Setting | Dense Point Cloud | Mesh Model | Texture Size
1.4 cm/px | “Highest” & “Disabled” | 101,640,748 points | 20,328,148 faces; 10,186,948 vertices | 8192 × 8192 px
Table 2. The table shows the variation of “X” and “Y” coordinates in the translation from the Global Georeferenced System to the local one. Notably, the “Z” coordinate stays the same in both systems.

GCPs | X (EPSG: 32633) | X (Local) | Y (EPSG: 32633) | Y (Local) | Z
1 | 684,522.6136 | 222.6136 | 4,327,983.2232 | 83.2232 | 26.8423
2 | 684,498.0206 | 198.0206 | 4,328,019.9652 | 119.9652 | 11.2003
3 | 684,463.0226 | 163.0226 | 4,328,109.7702 | 209.7702 | 2.5433
4 | 684,341.4546 | 41.4546 | 4,328,044.9662 | 144.9662 | 39.3333
5 | 684,475.1146 | 175.1146 | 4,327,969.3752 | 69.3752 | 30.7203
6 | 684,380.7756 | 80.7756 | 4,327,968.2842 | 68.2842 | 34.5353
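The translation documented in Table 2 amounts to subtracting a constant planimetric offset from the global EPSG:32633 coordinates while leaving Z unchanged; judging from the tabulated values, the implied local origin sits at roughly E 684,300 m, N 4,327,900 m (an offset inferred here from the table, not stated explicitly in the text). A minimal sketch of the conversion:

```python
# Sketch of the global-to-local translation implied by Table 2:
# X_local = X_global - E0, Y_local = Y_global - N0, Z unchanged.
E0, N0 = 684_300.0, 4_327_900.0        # local origin inferred from the GCP table

def to_local(x, y, z):
    """Translate EPSG:32633 coordinates (metres) into the local system."""
    return x - E0, y - N0, z

# GCP 1 from Table 2: (684,522.6136; 4,327,983.2232; 26.8423)
print(to_local(684522.6136, 4327983.2232, 26.8423))
# -> approximately (222.6136, 83.2232, 26.8423), matching the local values
```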
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
