Article

H-BIM and Artificial Intelligence: Classification of Architectural Heritage for Semi-Automatic Scan-to-BIM Reconstruction

1 Department of Energy, Systems, Land and Construction Engineering (DESTEC), University of Pisa, 56122 Pisa, Italy
2 Civil and Industrial Engineering, ASTRO Laboratory, University of Pisa, 56122 Pisa, Italy
3 UMR MAP 3495 CNRS/MC, Campus CNRS Joseph-Aiguier, 13402 Marseille, France
4 LISPEN EA 7515, Arts et Métiers Institute of Technology, 13100 Aix-en-Provence, France
* Author to whom correspondence should be addressed.
Sensors 2023, 23(5), 2497; https://doi.org/10.3390/s23052497
Submission received: 25 January 2023 / Revised: 15 February 2023 / Accepted: 17 February 2023 / Published: 23 February 2023

Abstract:
We propose a semi-automatic Scan-to-BIM reconstruction approach, making the most of Artificial Intelligence (AI) techniques, for the classification of digital architectural heritage data. Nowadays, Heritage- or Historic-Building Information Modeling (H-BIM) reconstruction from laser scanning or photogrammetric surveys is a manual, time-consuming, overly subjective process, but the emergence of AI techniques, applied to the realm of existing architectural heritage, is offering new ways to interpret, process and elaborate raw digital surveying data, such as point clouds. The proposed methodological approach for higher-level automation in Scan-to-BIM reconstruction proceeds as follows: (i) semantic segmentation via Random Forest and import of annotated data into a 3D modeling environment, broken down class by class; (ii) reconstruction of template geometries of classes of architectural elements; (iii) propagation of the template reconstructed geometries to all elements belonging to a typological class. Visual Programming Languages (VPLs) and references to architectural treatises are leveraged for the Scan-to-BIM reconstruction. The approach is tested on several significant heritage sites in the Tuscan territory, including charterhouses and museums. The results suggest the replicability of the approach to other case studies, built in different periods, with different construction techniques or under different states of conservation.

1. Introduction

In recent years, the Building Information Modeling (BIM) methodology has been transferred from the realm of new construction to that of the built heritage. Since the early studies by Murphy and Dore [1,2], the scientific literature on Heritage or Historic-BIM (H-BIM) has expanded [3,4,5,6,7,8], aiming to illustrate how geometrical data can be linked to: architectural grammar and styles [9,10,11], material characterization [12], degradation patterns [13], façade interventions and historical layers [14,15], structural damage and FEM analysis [16,17,18], data collection and simulation of environmental parameters [19], archival photographs [20] and text documents [21,22].
Hichri et al. [8] and Macher et al. [23] emphasized that H-BIM techniques require the transition from the existing condition of the object to the modeling environment. The shift from the as-built condition (registration of a building after construction) to the as-is representation (registration of its current condition) implies reference to surveying data, such as point clouds acquired via laser scanning or photogrammetry [24], and to reverse engineering techniques. On the one hand, the elaboration of 3D surveying for the construction of BIM models, known as Scan-to-BIM [21], is seen as a manual, time-consuming and subjective process [3,5]; on the other hand, the emergence of Artificial Intelligence (AI) techniques in the architectural heritage domain [25,26,27] is reshaping the approach of heritage experts towards the interpretation, recognition and classification of building components from raw surveying information. Based on this consideration, this work proposes a semi-automated procedure to enable the construction of BIM models of heritage objects and sites, starting from 3D survey data that are classified via a supervised Machine Learning (ML) method.
The paper is structured as follows: Section 2 provides a literature review on Scan-to-BIM techniques and AI-based semantic segmentation processes. Section 3 presents case studies on which the proposed methodology was tested (Materials), and concurrently illustrates the different steps of the reconstruction approach (Methods). In Section 4 and Section 5, the results are presented and discussed, while Section 6 draws conclusions and future developments.

2. Related Work

2.1. State-of-the-Art Scan-to-BIM Reconstruction Processes

Scan-to-BIM processes [7,21,28] focus on translating existing survey data, such as point clouds, into BIM. They involve three main steps: (i) data acquisition by laser scanning or photogrammetry; (ii) processing of survey data; and (iii) 3D modelling (Figure 1). In the processing phase (ii), it is essential to semantically describe the objects that make up a building over unstructured point clouds [28]. This interpretative issue is a major bottleneck in current research, as the main limits of the Scan-to-BIM processing workflow are identified as:
  • Difficulties in modeling complex or irregular elements and representing architectural details of existing buildings [1,29,30], and the need to intervene with classification, hierarchical organization and simplification assumptions [14,23];
  • Measurement uncertainties [23], as surveying data may contain occlusions [31];
  • Compared to BIM for new constructions, there is an absence of pre-defined, extensive libraries of parametric objects [3] and lack of existing standards for H-BIM artefacts [1,28,30];
  • High conversion effort [1], since most BIM software for new buildings offer tools for the construction of regular and standardized objects while the free-form geometry modeling functions that are available are limited [15,29,32,33].
Given the above limits, Scan-to-BIM techniques are never unambiguous. However, they can be distinguished based on the degree of human involvement in the data processing stage, classified as manual (Section 2.1.1) or semi-automated (Section 2.1.2).

2.1.1. Manual Scan-to-BIM Methods

Most common approaches to Scan-to-BIM are manual, as they require visual recognition and subsequent manual tracing of building components starting from a point cloud (Figure 2). Extensive literature reviews provided by Logothetis et al. [6], Volk et al. [7], Tang et al. [34] and, more recently, by López et al. [3] and Pocobelli et al. [4] demonstrate that manual methods [1,35], although widely consolidated, result in time-consuming, laborious processes. Indeed, operators are asked to manually identify, isolate and reconstruct each class of building elements [7,23]. This entails a considerable amount of time and resources, besides implying the risk of overly subjective choices [36].

2.1.2. Semi-Automated Scan-to-BIM Methods

Fundamental issues in the definition of semi-automated methods are the recognition and labelling of data points on raw point clouds with a named object or object class (e.g., windows, columns, walls, roofs, etc.) [34,35,36,37,38]. Existing methods can be distinguished according to the solution identified over time for this issue:
Primitive fitting methods. They fit simple geometries, such as planes, cylinders and spheres [39], to sets of points in the scene via robust estimation of the primitive parameters. Random Sample Consensus (RANSAC) [40] and the Hough Transform [41] are common algorithms of this type, used in commercial solutions for the semi-automatic recognition of walls, slabs and pipes, proposed by software houses [3,42,43,44] including: EdgeWise Building by ClearEdge3D (clearedge3d.com) as a complement for Autodesk Revit; Scan-to-BIM Revit plug-in by IMAGINiT Technologies (imaginit.com); and Buildings Pointfuse from Arithmetica (pointfuse.com). Primitive fitting methods mostly apply to indoor environments [31,37,38,42] for the detection of planar elements, such as floors and walls [23,37,45]. Shape extraction and BIM conversion are limited to simple geometries with standardized dimensions; application to complex existing architectural structures, varying in forms and types, is hardly possible unless the model is oversimplified (Figure 3) [23,42].
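As an illustration of this family of approaches (not taken from the cited commercial tools), the following minimal sketch fits a dominant plane to a point cloud with RANSAC using the open-source Open3D library; the input file name and threshold values are assumptions made for the example.
```python
# Minimal RANSAC plane-fitting sketch with Open3D; "cloister.ply" is a
# hypothetical input file and the thresholds are example values only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloister.ply")

# Robustly estimate the parameters of the dominant plane ax + by + cz + d = 0
# (e.g., a floor or a wall) and collect its inlier points.
plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.02,
                                            ransac_n=3,
                                            num_iterations=1000)
a, b, c, d = plane_model
print(f"Plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0, "
      f"{len(inlier_idx)} inliers")

# Separate the detected primitive from the rest of the scene.
plane_cloud = pcd.select_by_index(inlier_idx)
remaining = pcd.select_by_index(inlier_idx, invert=True)
```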
Mesh-reconstruction methods. For each architectural component or group thereof, a mesh is reconstructed via triangulation techniques, starting from the distribution of points in the original point cloud. References [15,17,29,31,47,48,49] converted 3D textured meshes derived from surveying into BIM objects; however, the mesh manipulation and geometric modification are limited as the mesh models cannot be edited and controlled by parametric BIM modeling [29].
Reconstruction by shape grammar and object libraries. Such approaches rely on the construction of suitable 3D libraries of architectural elements (families) to handle the complexity of materials and components that characterizes historic architecture [10,50,51,52]. In detail, De Luca et al. [53] studied the formalization of architectural knowledge based on the analysis of architectural treatises, to generate template shape libraries of classical architecture. Murphy et al. [54] modelled interactive parametric objects based on manuscripts ranging from Vitruvius to Palladio to the architectural pattern books of the 18th century. Since relying on the formalization of architectural languages as derived from treatises of historical architecture, such methods are valid regardless of the modeling type or representation chosen [53].
Reconstruction by generative modelling. In this case, the reconstruction is again guided by the formalization of architectural knowledge, and VPLs are considered to manipulate each geometry by interactively programming, via a graphical coding language made up of nodes and wires, the set of modeling procedures, primitive adjustments and duplication operations performed in 3D space [33,53,55]. Grasshopper, a visual programming interface for Rhino3D, and Dynamo, a plug-in for Autodesk Revit, are commonly used for these tasks in the case of new constructions. By contrast, VPLs are rarely exploited for existing monuments and sites. The 3D content could be created, based on surveying data [17,48,55,56,57], by a series of graphic generation instructions, repeated rules and algorithms [58]. The release of Rhino.Inside.Revit (rhino3d.com/it/features/rhino-inside-revit, accessed on 18 December 2022), allowing Grasshopper to run inside BIM software such as Autodesk Revit, goes in the direction of novel VPL-to-BIM connection tools.

2.2. State-of-the-Art AI-Based Semantic Segmentation

In the digital heritage field, ML and Deep Learning (DL) techniques emerge to help digital data interpretation, semantic structuring and enrichment of a studied object [25], e.g., to assist the identification of architectural components [59], the re-assembly of dismantled parts [60], the recognition of hidden or damaged wall regions [61], and the mapping of spatial and temporal distributions of historical phenomena [62].
In the architectural heritage domain, AI techniques have proven to be crucial in streamlining the so-called semantic segmentation process, understood as the reasoned subdivision of a building into its architectural components (e.g., roof, wall, window, molding, etc.), starting from surveying data. With respect to other common computer vision tasks exploiting AI, such as object recognition, instance localization and segmentation, the semantic segmentation process classifies pixels or points as belonging to a certain label and performs this operation for multiple objects of the 2D image or of the 3D unstructured scene (Figure 4). The term semantic, indeed, underlines that the breakdown is done by referring to prior knowledge on the studied 2D/3D architectural scenes.
Though earlier experiments of digital heritage classification were geared towards the semantic segmentation of images [61,63,64], research is now moving in the direction of segmenting textured polygonal meshes [27] and/or 3D point clouds [65]. In the architectural domain, the classification, performed via ML algorithms and a suitable amount of training data, focuses either on automatically recognizing the presence of alterations on historical buildings [66] or mapping materials (texture-based approaches) [27,67,68], or on distinguishing architectural components based on prior historical knowledge (geometry-based approaches) [26,69,70,71].
Depending on the type of approach chosen, the classification can act on either of two kinds of properties of the raw data: (a) geometric features, such as height, planarity, linearity, sphericity, etc. [72], which are better suited for the recognition of architectural components based on the respective shapes of elements, or (b) colorimetric attributes, such as the RGB, HSL or HSV color spaces [66], which are widely used for the identification of decay patterns (such as biological patina or colonization, chromatic alterations, spots, etc.) or of materials.
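For illustration only (not the authors' implementation), the sketch below computes three of the geometric features mentioned above (linearity, planarity, sphericity) from the eigenvalues of the local covariance matrix, within a spherical neighborhood of assumed radius:
```python
# Illustrative computation of eigenvalue-based covariance features per point.
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points, rho=0.4):
    """Return an (N, 3) array of linearity, planarity and sphericity."""
    points = np.asarray(points)
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=rho)
        if len(idx) < 4:                 # too few neighbors for a stable estimate
            continue
        cov = np.cov(points[idx].T)      # 3 x 3 covariance of the neighborhood
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1 = max(l1, 1e-12)
        feats[i] = [(l1 - l2) / l1,      # linearity
                    (l2 - l3) / l1,      # planarity
                    l3 / l1]             # sphericity
    return feats

# Example on a small synthetic cloud (a noisy plane should score high planarity).
pts = np.c_[np.random.rand(2000, 2), 0.01 * np.random.randn(2000)]
print(covariance_features(pts, rho=0.2).mean(axis=0))
```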
Geometry-based classification techniques, formerly exploited for classifying urban scenes [17,72,73], are now applied at the scale of the individual building, for the segmentation of walls, moldings, vaults, columns, roofs, etc. [70]. Grilli et al. [70] investigated the effectiveness of covariance features [72] in training a Random Forest (RF) classifier [74] for architectural heritage, even demonstrating the existence of a correlation between such features and the main dimensions of architectural elements.

2.3. Open Issues Arising from the State-of-the-Art Methods

The literature review in Section 2.1 proves that data segmentation and classification are essential steps in common Scan-to-BIM workflows, enabling:
  • A breakdown of the survey data into subsets of elements (pixels or points) sharing the same features, whether geometric or radiometric, extracted from 2D or 3D descriptors and according to predefined criteria (segmentation);
  • The assignment of a label to each subset (classification or semantic segmentation).
Although this process has been considered to be mostly manual and performed by a single operator, the evolution of recent research in the application of RF algorithms to the classification of digital heritage point clouds (see Section 2.2) suggests the possible automation of Scan-to-BIM reconstruction processes. The semantically segmented point cloud, in which different architectural elements are finally distinguished as an outcome of geometry-based classification techniques, could in fact be considered as a basis for the reconstruction of H-BIM models. To date, besides primitive shape fitting approaches [75], no research has considered the possible integration of semi-automatically annotated point clouds and H-BIM environments. This still unclear transition, which deserves more in-depth analysis and an exploration of the operational challenges of an ML-based Scan-to-BIM workflow, constitutes the research line of the present work.

3. Materials and Methods

3.1. Materials

The semantic segmentation and Scan-to-BIM reconstruction methodology was tested on three point clouds of historic buildings in the Tuscany territory (Italy), acquired either by laser scanning or photogrammetry, alongside traditional topographic instruments.
The case studies relate to the typology of medieval cloisters: a central area is closed on its perimeter by recurring architectural elements such as columns, moldings, arches and vaults that form a series of open galleries (Figure 5). In detail:
  • The Grand (or main) cloister of the Pisa Charterhouse. The cloister, extending over an area of about 70 × 45 m, was built starting from the year 1375 and underwent major renovations in the 17th century. The perimeter walkway, covered by vaulted ceilings and enclosed by marble columns, once provided access to the cells of the Carthusian fathers. The point cloud is the result of a Leica ScanStation C10 laser scanner survey (~10 M points).
  • The Grand-Ducal cloister of the Pisa Charterhouse. Extending over a rectangular area of 12 × 14 m, this cloister dates back to the 14th century. Its structure underwent several transformations around the 17th century, which gave it its current layout. The courtyard, with a central cistern, is overlooked by vaulted galleries; the two opposite sides of the cloister are connected, on the first floor, by an overhead walkway. The considered point cloud is the outcome of an integration between laser scanning and drone-based photogrammetric surveys (~6 M points).
  • The cloister of the convent of San Matteo in Pisa. This cloister is located in the medieval convent of San Matteo in Pisa, which currently houses a National Museum. Major changes of its layout, dated to the 16th century, involved the construction of a portico, with granite columns closing the central space, Gothic windows and a cross-vaulted ambulatory. The survey was carried out via terrestrial photogrammetry and the resulting point cloud consists of ~12 M points.
The considered point clouds have different densities, but they are all set to a minimum spacing between points of 0.01 m.

3.2. Methods

The proposed methodological approach for Scan-to-BIM classification and reconstruction is divided into two macro-parts: in the first one, a supervised ML algorithm, the RF, is exploited to annotate the architectural components of a building from a point cloud survey. The workflow for this first phase was previously presented in reference [76], to which the reader could refer for further details and in-depth discussion on the ML-based data processing step.
The focus of this paper is rather on the second part of the workflow, concerning Scan-to-BIM reconstruction based on semantically annotated data. The latter takes place following the successive steps of: import of annotated data into the BIM environment; reconstruction and propagation, via VPL, of template geometries based on knowledge derived from historical treatises of architecture; and transfer of template geometries into BIM software such as Autodesk Revit (Figure 6). The two steps of data segmentation and H-BIM reconstruction are both completed by a data validation process.

3.2.1. Semantic Segmentation via ML

At first, semi-automated systems were exploited to properly interpret the 3D architectural scene from the input point cloud and to improve the description and recognition of forms, materials and state of preservation. An ML-based segmentation procedure was used to assist the proper processing, management and semantic enrichment of digital heritage objects. The RF by Breiman [74] was used as the reference algorithm following the successful tests by Grilli [72]. This classifier is recognized as an effective tool for the classification of typological components of a building, outperforming other ML and DL approaches in terms of trade-off between training time, size of training data required and accuracy of the obtained results [26].
Starting from the extraction of appropriate covariance and radiometric features, and using a relevant set of training data, the RF is trained to classify, within digital models, the architectural elements that make up a historic building [76]. The procedure is broken down into five different steps:
(i) Neighborhood selection and feature extraction;
(ii) Feature selection;
(iii) Manual annotation on a reduced portion of the dataset (training set) to identify classes of elements;
(iv) Application of the RF classifier and consequent accuracy evaluation;
(v) Generation of an annotated 3D point cloud.
At first, a set of features is extracted in a chosen local neighborhood of each 3D point or image pixel (i). The choice of appropriate features and consequently, the choice of the local neighborhood in which they are computed, is fundamental in this phase as the predictive model is built to make predictions by recognizing the features that distinguish one class of elements from another. As the initial set of features may appear redundant or too large to be managed, the features are iteratively selected (ii). Readers can refer to previous work [76] for more details on the features’ description, extraction and selection steps. Subsequently, classes of recurring architectural elements are identified and labeled on a training set. This input data is used to perform a multi-scale classification via the RF to iteratively select the most relevant features; the classification process is thus run by considering a subset of features each time. For this reason, steps (ii) and (iii) are strictly interrelated. For the RF, the number of trees, N_trees, is set to 100 and the hyperparameter optimization accounts for overfitting through a 10-fold cross validation procedure.
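A minimal training sketch consistent with this setup (100 trees, 10-fold cross validation) could rely on scikit-learn; the feature matrices and labels below are random placeholders standing in for the per-point features and annotations described above, and the searched hyperparameter grid is only an example.
```python
# Hedged sketch of RF training with 10-fold cross-validated hyperparameter search.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_train = rng.random((500, 12))       # placeholder: 500 annotated points, 12 features
y_train = rng.integers(0, 10, 500)    # placeholder: labels of the 10 benchmark classes
X_rest = rng.random((2000, 12))       # placeholder: points still to be labeled

rf = RandomForestClassifier(n_estimators=100, random_state=0)
search = GridSearchCV(rf,
                      param_grid={"max_depth": [None, 10, 20],
                                  "max_features": ["sqrt", 0.5]},
                      cv=10, scoring="f1_macro", n_jobs=-1)
search.fit(X_train, y_train)

model = search.best_estimator_
y_pred_rest = model.predict(X_rest)   # labels for the non-annotated part of the cloud
```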
Upon completion of training on the annotated training set, the predictive model is applied to the remaining (non-manually annotated) part of the point cloud or image, so as to semantically label the classes of typological elements in the rest of the dataset. The accuracy of the classifier is finally assessed (iv), based on the comparison between true and predicted values on a validation set, which consists of almost 25% of the labeled data (not used in the training phase). The performance evaluation is summarized in the form of a confusion matrix, providing a measure of the number of correct and incorrect predictions, class by class. The on-diagonal elements stand for the True Positive (TP) values (correctly classified instances of the dataset), while the off-diagonal elements provide a measure of misclassifications: True Negative (TN), False Positive (FP) and False Negative (FN) values. The performance measures of Precision, Recall, Overall Accuracy and F-measure are derived from a combination of these values, as follows:
Precision = TP / (TP + FP)  (1)
Recall = TP / (TP + FN)  (2)
Overall accuracy = (TP + TN) / (TP + TN + FP + FN)  (3)
F-measure = 2 · (Recall · Precision) / (Recall + Precision)  (4)
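For reference, the same per-class measures can be computed directly from the confusion matrix, e.g., with the short script below; y_true and y_pred are small placeholder arrays standing in for the validation-set labels and predictions.
```python
# Per-class Precision, Recall, Overall accuracy and F-measure from the confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])   # placeholder validation labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 1])   # placeholder predictions

cm = confusion_matrix(y_true, y_pred)
TP = np.diag(cm)                      # on-diagonal: correctly classified instances
FP = cm.sum(axis=0) - TP              # off-diagonal, column-wise
FN = cm.sum(axis=1) - TP              # off-diagonal, row-wise
TN = cm.sum() - (TP + FP + FN)

precision = TP / (TP + FP)                                   # Equation (1)
recall = TP / (TP + FN)                                      # Equation (2)
overall_accuracy = (TP + TN) / (TP + TN + FP + FN)           # Equation (3)
f_measure = 2 * recall * precision / (recall + precision)    # Equation (4)
print(precision, recall, overall_accuracy, f_measure)
```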
The point cloud obtained at the end of this classification step is separated into its recurring architectural components: considering that the semantic structuring of heritage data could be beneficial in view of the construction of BIM-based representations, this classified point cloud is taken as initial data for the Scan-to-BIM reconstruction process. The hierarchical organization of the classes over the annotated point cloud reflects the logic of H-BIM environments, where each element (molding, vault, floor, wall, etc.) is defined by specific names, attributes and properties.

3.2.2. Scan-to-BIM Reconstruction

Starting from the distinct classes of elements recognized on the point cloud through the segmentation process, so-called template geometries are reconstructed. These geometries are generated from the observation of architectural components that fall within a single point cloud class with reference to the architectural treatises. With VPL, a series of algorithms and rules are established for constructing the reference geometries of each type of component (family); subsequently, the reconstructed geometries are propagated to all elements belonging to the same typological class.
A conceptual model is thereby reconstructed by treating and processing each class of architectural components separately; if carried out on the entire set of classes making up the architectural object, the replication of the template geometries yields a complete H-BIM information system. This way of classifying data appears consistent with the logic of the H-BIM process, whereby the model results from a combination of smart objects, properly differentiated in terms of type and morphology (e.g., roof, wall, floor, column, etc.) and grouped into families of architectural elements. In the overall model, each component is effectively discerned according to whether it belongs to one class or another.
The model obtained at the end of the process, containing the 3D reconstruction of all typological element classes, can be used to construct H-BIM type representations, i.e., to build 3D archives of architectural heritage, which can be further enriched with information related to preservation and documentation.
In detail, for the reconstruction of template forms, reference geometries and proportions are derived, where available, from historical architectural treatises. Conceptual forms are hence generated for each class through the recognition and parametric reconstruction of related elementary parts, profiles and surfaces. The procedure is performed through VPL and is broken down class by class according to the following steps:
(i) Import of the annotated point cloud into the 3D modeling environment, and extraction of the single class concerned by the reconstruction process;
(ii) Reconstruction of a template geometry for each class of architectural elements identified, while referring to architectural treatises and based on the definition of base construction planes, constraints, generating primitives, base profiles and ensuing functions of extrusion, loft, sweep, etc.;
(iii) Propagation of the template geometry to all elements belonging to a typological class, i.e., definition of element replica operations, so as to enable the duplication of the defined geometry to multiple elements sharing the same characteristics.
The mathematical and conceptual representation of each class, managed through generative modeling procedures, is entrusted to the creation of Non-Uniform Rational B-Splines (NURBS).
Real-time generation, control and editing of architectural forms is accomplished through the Grasshopper graphical algorithm editor, integrated in McNeel's Rhino3D. Grasshopper, in particular, allows visual control of the 3D geometry reconstruction procedures by direct manipulation of nodes (algorithms) and wires. Finally, the Volvox (food4rhino.com/en/app/volvox, accessed on 18 December 2022) and Rhino.Inside.Revit (rhino3d.com/inside/revit/1.0/, accessed on 18 December 2022) plug-ins are used to connect the reconstructed 3D model with point cloud processing software and BIM platforms, respectively.

4. Results

4.1. Annotated 3D Data

At an initial stage, geometric features are extracted from the input raw point clouds, with varying local neighborhood radii for each 3D point. Features, such as covariance characteristics and changes of curvature, are computed and iteratively selected for each case study via a predictor importance estimate process [76] (Figure 7), in order to allow a better distinction between one class and another. The choice of the local neighborhood in which features vary draws on considerations of the recurring dimensions of many elements composing the dataset, which are provided with a first estimate (e.g., the diameter of the columns, or the thickness of certain architectural moldings, as well as other repetitive dimensions of the elements), so as to extract geometric features over selected ranges of the local neighborhood (e.g., ρ = 0.2 m; ρ = 0.4 m; ρ = 0.6 m). Within the chosen ranges of ρ, the covariance features are extracted from the covariance matrix, while the Normal Change Rate is extracted as a curvature measure describing, for each point, the speed of the orientation change [38]. Radiometric features, derived from the decomposition of the color space into single R, G and B scalar fields, and a height feature (the Z coordinate), are then considered as additional characteristics for each dataset. For additional samples of extracted covariance features (i.e., features depending on the distribution of 3D points in space, such as linearity, planarity, sphericity, omnivariance, etc.), colorimetric features and height features, readers can refer to Appendix A (Figure A1 and Figure A2).
After feature extraction, classes of architectural components are identified and annotated on a reduced portion of data samples, consisting of almost 20% of the total number of points of the point cloud (Figure 8). The so-called training set is specified each time for each case study; the point cloud is segmented based on the ten benchmark classes proposed by Matrone et al., 2020 [77] (Figure 9): arch, column, molding, floor, openings (door or window), wall, stair, vault, roof and other (all elements not belonging to the previous classes).
The overall number of classes is 10 for the Grand-Ducal cloister dataset, but it was reduced to 9 for the Grand cloister, as there is no ‘Class 6—Stair’, as well as for the San Matteo dataset, where ‘Class 8—Roof’ was not visible since the photogrammetric survey was ground-based and did not allow the description of the roofing structure. In any case, for the three datasets, each identified class was associated with a specific label, a class index varying from 0 to 9 and a related color; the training set was chosen in a representative portion of the dataset, where all classes to be annotated are visible.
After the manual annotation of the training set, a first classification is run, via the RF algorithm, considering the multi-scale feature extraction (i.e., the whole set of features extracted at different local neighborhood radii). At the end of the learning process, an importance ranking can be displayed, showing the relevance score of each feature.
Starting from the extracted importance ranking, redundant and less relevant features are iteratively removed, and the RF is hence trained with a reduced subset of features. This step provides insight into the data, showing which features are more relevant to the classification task, thus reducing the dimensionality of the data to a set of about 10–15 features and allowing the selection of a subset of predictors that adequately describes the identified classes. The feature ranking process is run each time for the three different case studies; surface variation, sphericity, anisotropy and verticality always appear among the most relevant features.
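Assuming a scikit-learn Random Forest as in the training sketch above, the ranking and pruning step can be expressed as follows; the data are placeholders and the feature names are hypothetical labels for the extracted descriptors.
```python
# Importance ranking of the extracted features and retraining on a reduced subset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 12))                     # placeholder feature matrix
y_train = rng.integers(0, 10, 500)                  # placeholder class labels
feature_names = [f"feature_{i}" for i in range(X_train.shape[1])]   # hypothetical names

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

ranking = np.argsort(model.feature_importances_)[::-1]   # most relevant first
for i in ranking:
    print(f"{feature_names[i]:>12s}  {model.feature_importances_[i]:.3f}")

# Keep only the most relevant predictors (here the top 10) and retrain.
keep = ranking[:10]
model.fit(X_train[:, keep], y_train)
```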
By comparing the results of the three different datasets in terms of feature selection, and by visually considering the variation of features along the datasets, several relevant considerations can be drawn on the recurrence of some features: verticality, for instance, is better suited to the distinction between elements of the dataset that are mostly horizontal (floors, ground) or vertical (columns, walls), while omnivariance and sphericity enable the recognition of architectural moldings, arches, vaults and columns; anisotropy and planarity further support the classification of columns and the distinction of windows and doors from the wall. Moreover, they are valuable for depicting finer elements with horizontal development, e.g., the windowsills and the under-roof moldings.
The normal change rate, as a curvature feature, is appropriate for depicting columns and arches, and for identifying elements belonging to ‘Class 9—other’, such as drain spouts.
The selected features associated with the manually annotated training set allow the RF classifier to be trained to extend classification to the entire point cloud. The procedure is followed for each of the cases studied and leads to the semantic segmentation results shown in Figure 10. Once classified, the point clouds can be differentiated according to the distinction of the architectural elements that compose them. As an example, Figure 11 provides a zoomed view of the several classes of recurring architectural elements that were recognized on a portion of the main cloister of the Pisa Charterhouse.
In order to evaluate the classifier performance, a validation set was considered, consisting of almost 25% of the annotated portion of the data that was not previously used for training. On this validation set, the correspondence between the manually annotated labels and the predicted ones was tested by referring to the values of the confusion matrix and to the performance measures: Precision (1), Recall (2), Overall accuracy (3) and F-measure (4). The average values of the performance measures obtained for the three case studies are summarized in Table 1; a detailed description of the validation sets, confusion matrices and performance measures for the three cases is provided in Appendix A, Figure A3 and Figure A4.

4.2. Semantic-Based Reconstruction on Annotated Data

Since geometry-based approaches relying on supervised ML support the distinction of recurring typological elements, such as walls, columns, vaults, etc., on raw point clouds, the semantically segmented data is used here as a reference for the reconstruction of the H-BIM models.

4.2.1. Import of Annotated Data

In the Scan-to-BIM reconstruction process, the need first arises to preserve the classified 3D data when the point cloud is imported into a 3D modeling environment. Through the semantic segmentation procedure, each architectural element identified within the point cloud can be isolated and shown individually (as in the example in Figure 12). This allows class information to be made available and visible in the transition to the 3D modelling environment, so that so-called template geometries can be reconstructed by manipulating and displaying individual object classes over the point cloud within the Rhinoceros modeling environment.
In detail, the developed algorithm (Figure 13) reads and sorts the point cloud file (1), recognizing indices and colors that are associated with each class of typological elements. The original point cloud is segmented into multiple point clouds, each containing the points belonging to the individual class of architectural elements (2); then, the management of a special slider allows the user to directly select the index of a desired class (3). The colors, names and points associated with the selected class are selected (4) and the corresponding point cloud is displayed in the graphical user interface (5). The advantage of this procedure lies in allowing the user to activate or deactivate the display of a given class, depending on case-specific needs: only the 3D points belonging to the selected class are visible, while the remaining 3D points are hidden (Figure 14).
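Outside Grasshopper, the same class-selection logic can be sketched in plain Python; the file name and the column layout (x, y, z, R, G, B, class index) are assumptions made for the example, and the numbers in the comments mirror the steps (1)–(5) above.
```python
# Plain-Python analogue of the class-selection algorithm over an annotated cloud.
import numpy as np

data = np.loadtxt("annotated_cloud.txt")                 # (1) read the annotated file
xyz, rgb, labels = data[:, :3], data[:, 3:6], data[:, 6].astype(int)

clouds = {c: xyz[labels == c] for c in np.unique(labels)}   # (2) one cloud per class

selected = 1                                             # (3) class index chosen by the user
class_points = clouds[selected]                          # (4) points of that class only

print(f"Class {selected}: {len(class_points)} points shown, "   # (5) display feedback
      f"{len(xyz) - len(class_points)} points hidden")
```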
This direct manipulation of architectural component classes on the original point cloud simplifies the modeling phase from its very first stage, since the 3D reconstruction work can be carried out on an already segmented (reduced) dataset.

4.2.2. Libraries of Template Geometries

Once the import of the semantically segmented point cloud into the modeling environment is completed, each class can be reconstructed in the form of a template geometry, i.e., a parametric 3D object that reproduces a particular recurring typological element (class or family of parametric elements). Again, the need to reconstruct geometries with modifiable and adaptable parameters leads to the choice of VPL, so as to build and edit the model by means of a series of rules and graphical processing operations (e.g., through the connection of nodes and wires in the VPL of Rhino Grasshopper). The following operations are performed via VPL algorithms:
  • Definition of template conceptual shapes of each class of architectural elements identified on the point cloud, through a series of processing operations, rules (nodes), attributes and connections (wires);
  • Control of the different graphic elements that compose the reconstructed shapes, and direct manipulation of sliders related to their dimensions, extension and other properties.
Many relevant classes of architectural types—such as columns, arches and vaults—are reconstructed with reference to architectural canons. For the case studies considered, Vincenzo Scamozzi’s treatise L’idea dell’architettura universale [78] is taken into account. This work inspired the renovation of many of the settings of the Charterhouse of Pisa in the 17th century. In detail, for the template shape reconstruction process, the formalization approach followed is based on the work by De Luca et al. [53], consisting of three steps: interpretation of any knowledge referring to the shape; identification of the necessary modeling methods; and identification of relationships between elements.
With this approach, the elementary entities are identified, and the architectural primitives that constitute the basis of the representation method are formalized. Once the 3D points belonging to each class of elements have been properly identified and shown in the 3D viewer:
  • Reference building planes were detected and suitably oriented;
  • A generating profile was created via VPL and, where necessary, a direction path was outlined;
  • Functions such as revolution, sweep, extrusion, loft, etc. of the identified profile were used to build the targeted surface.
The parametric geometry constructed, representative of each class of elements, can be adapted over the point cloud through the manipulation of sliders and parameters, and the related programming outputs are displayed in the Rhino3D graphical interface. For the formalization of template geometries, the moldings are studied first, as they constitute the smallest and most elementary units of architectural elements corresponding to a semantic description of the building; their further combination yields more complex architectural elements.
Many moldings from Vincenzo Scamozzi’s treatise were analyzed and studied (Figure 15), and an example of the structure of their generation algorithms is shown in Figure 16: each molding is represented by a curve (drawn on a selected construction plane) and is included in a bounding box, i.e., a deformable section that defines the height and width of the element. A starting point and an ending point are the anchor points for any other moldings attached to the element. By considering the insertion points of the moldings as anchoring elements for the construction of subsequent moldings, more complex profiles are then obtained (Figure 17).
The application of extrusion, loft, sweep and revolution functions to the constructed generating profiles determines the template shape of a class.
Figure 18 shows the construction process of the ‘column shaft’ class, for the case of the Grand Cloister dataset: first, descriptors and geometric attributes are defined for the construction of this architectural component. Then, the column shaft is built based on the study of the dimensional relationships between the diameter of the column base and its height, which allows for the establishment, at different heights, of reference circles. These circles are used to define, through a loft function, the conceptual shape of the concerned object, even representing the entasis of the column shaft.
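As a purely illustrative sketch (the tapering rule below is an assumption for the example, not Scamozzi's actual prescription), the reference circles of the shaft could be generated as height–radius pairs, ready to be lofted in the modeling environment.
```python
# Hypothetical generation of reference circles (z, radius) for a column shaft
# with a simple entasis rule: cylindrical lower third, gentle tapering above.
import numpy as np

def shaft_circles(base_diameter, height, n_levels=8, top_ratio=5 / 6):
    """Return (z, radius) pairs describing the shaft profile."""
    r_base = base_diameter / 2.0
    circles = []
    for z in np.linspace(0.0, height, n_levels):
        if z <= height / 3.0:
            r = r_base                                           # lower third
        else:
            t = (z - height / 3.0) / (2.0 * height / 3.0)        # 0 -> 1 on upper part
            r = r_base * (1.0 - (1.0 - top_ratio) * t ** 2)      # assumed tapering law
        circles.append((z, r))
    return circles

# Example: a shaft with 0.45 m base diameter and 3.6 m height.
for z, r in shaft_circles(base_diameter=0.45, height=3.6):
    print(f"z = {z:.2f} m  ->  circle radius {r:.3f} m")
```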
This approach can be extended to the entire set of classes, by a set of rules and visual scripting nodes that enable the construction of each architectural component.

4.2.3. Information Propagation and Import into BIM Software

Once defined for each class, the geometry of the model is subsequently propagated to other parts of the point cloud, where the 3D points have been recognized as belonging to the same architectural type. The procedure is carried out through duplication and displacement nodes, particularly leveraging array, copy and translation operations (e.g., the Move node in Grasshopper). These operations allow the repetition, as many times as necessary, of the 3D geometry for each class of architectural components (Figure 19). Figure 20 shows the results obtained in the construction of some significant classes extracted from the considered datasets: the ‘column’ class for the main cloister, the ‘vault’ class for the Grand-Ducal cloister and the ‘arch’ class for the Museum of San Matteo. The creation and propagation of conceptual geometries by generative design rules allows the repetition and, where necessary, the modification of the parameters of these reconstructed geometries.
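The propagation step can be sketched as a rigid translation of the template geometry (reduced here to a set of vertices) to the insertion point of each instance of the class; the insertion points are assumed to be derived, for example, from the centroids of the individual instances detected on the annotated cloud.
```python
# Duplication of a template geometry at a list of insertion points (translation only).
import numpy as np

def propagate(template_vertices, insertion_points):
    """Return one translated copy of the template per insertion point."""
    template = np.asarray(template_vertices, dtype=float)
    origin = template.mean(axis=0)                 # reference point of the template
    return [template + (np.asarray(p, dtype=float) - origin)
            for p in insertion_points]

# Example: a square template footprint propagated to three assumed column centers.
template = [[0, 0, 0], [0.4, 0, 0], [0.4, 0.4, 0], [0, 0.4, 0]]
centers = [[2.0, 1.0, 0.0], [5.0, 1.0, 0.0], [8.0, 1.0, 0.0]]
for k, copy in enumerate(propagate(template, centers)):
    print(f"copy {k} centroid: {copy.mean(axis=0)}")
```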
By extending the propagation of the template shapes to the set of all classes, a complete model of the whole building is finally achieved (Figure 21). Each architectural element identified within individual classes retains its own semantic description, as it is linked to the semantic decomposition of the architectural object and it is defined by a template shape (Figure 22).
Following these principles, each element is reconstructed independently of the software used and can be imported, for example, into BIM software, to be further enriched with non-geometric information (related, e.g., to materials, restoration and consolidation work, documentary and analytical sources, state of preservation, etc.).
For instance, the Grasshopper VPL could be linked to the Autodesk Revit BIM software via the Rhino.Inside.Revit plug-in, as one possible way of importing the classified model into the BIM environment: the algorithm displayed in Figure 23 allows the selection of architectural objects belonging to individual element classes and associates them with a Revit family, preserving their level of semantic description.

5. Discussion

The use of AI-based classification methods in common Scan-to-BIM processes enables the automation of 3D model reconstruction from point clouds, with benefits in terms of time, raw surveying data management and semantic description. The key findings of this work may be identified following the breakdown into the two respective steps of semantic classification via ML (Section 5.1) and BIM-based reconstruction (Section 5.2).

5.1. Assessment of ML-Based Classification Methods

Upon assessment of the performance scores of the three datasets being considered, the geometry-based classification returned an average accuracy of 98.73% and an F-score of 87.13%. The models accurately predicted the semantic segmentation results and significantly reduced the annotation time: Figure 24 summarizes the processing times for different stages of the workflow. However, some considerations need to be raised on the validation of this methodology:
  • After observing the confusion matrices (Appendix A, Figure A4) and visually checking the data with the segmentation results, we observed misclassifications in the boundary regions, that is, in those areas that mark the boundary between one class and another. Specifically, as features are computed in a given local neighborhood ρ, feature extraction can be misleading for those 3D points that are in the boundary regions between classes (Figure 25). Those errors increase with increasing radius of the spherical neighborhood. This situation was mitigated, on the one hand, by adding discriminative radiometric features based on color information and, on the other hand, by choosing low (<0.6 m) values of ρ.
  • Regions with similar developments (e.g., planar or cylindrical), in which geometric features may yield similar values, can be misclassified as falling into the same class. For instance, the analysis of the off-diagonal elements of the confusion matrices suggested that ‘Class 5—Wall’ and ‘Class 4—Door and Window’ are often interchanged with each other, as both are characterized by predominantly planar behaviors.
  • As the covariance and curvature features are computed in a given local neighborhood, the density of the point cloud influences the classification results. In other words, if two point clouds of the same object have different point densities, feature selection may produce two different results, as seen in the example in Figure 26. In order to align feature selection for different surveys of the same dataset, one could then plan to return the point clouds to the same density by means of a subsampling operation, as sketched after this list.
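One possible way to perform this density alignment, assuming the two surveys are available as Open3D point clouds loaded from hypothetical files, is a voxel-based subsampling such as the following.
```python
# Bring two surveys of the same object to a comparable point density.
import open3d as o3d

pcd_a = o3d.io.read_point_cloud("survey_laser.ply")            # assumed input files
pcd_b = o3d.io.read_point_cloud("survey_photogrammetry.ply")

voxel = 0.01   # 1 cm, matching the minimum point spacing used for the datasets
pcd_a_ds = pcd_a.voxel_down_sample(voxel_size=voxel)
pcd_b_ds = pcd_b.voxel_down_sample(voxel_size=voxel)

# Features extracted on the two subsampled clouds now refer to comparable densities,
# making the feature selection step more consistent across surveys.
```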
Besides these observations, it should be noted that, as the number of classes increased, greater similarity among them was found, which implies that the number of semantic classes chosen can strongly influence the quality of the classification results; likewise, the amount of training data used can impact the resulting classification. In the case of laser scanning data, many sources of error which occurred during data collection, such as EDM centering, beam divergence and instrumental errors [79], were not taken into account in the present study, although they could influence the classification results.

5.2. Assessment of the Scan-to-BIM Reconstruction Workflow

The separation of the semantic parts is the fundamental prelude to automation of the scan-to-BIM process and better correlation between the point cloud and parametric model. The semi-automated reconstruction of 3D mockups from survey data relies here on the visualization and import of the annotated data and on the subsequent construction and propagation of template geometries. With this approach, the geometric nature of the building components is reconstructed, the originally designed form is interpreted, and a reference geometry is identified and modeled for each class, following the logic of parametric BIM families. At the end of the process, visual programming techniques enable propagation and dimensional comparison of repetitive elements. A direct connection is in fact established between the point cloud and the reconstructed model on the level of individual classes of elements, and this avoids the loss or possible dispersion of information in the transition from the 3D survey to the parametric model.
The resulting conceptual representation yields an effective support tool in the documentation of any architectural asset: the obtained digital model, being based on model geometries (with reference to architectural canons and treatises), remains valid regardless of actual object changes and modifications, and additional information can then be inserted, retrieved, modified and updated within it, as a relevant basis for H-BIM-type systems. In fact, the use of architectural canons and the definition of a set of rules to reconstruct or modify a 3D object make this system independent of the software used. This also implies that Grasshopper and Rhino3D do not represent the only environment in which such methods could be implemented; rather, other generative modeling software (such as Dynamo, implemented in the Autodesk Revit software) could be exploited for the same task, if the same construction, modeling and operation processes are retrieved and repurposed accordingly. In addition, as a result of the connection drawn between the reconstructed classes and the semantic point cloud, it is pertinent to note that the two types of representations could be compared with each other, e.g., in terms of relative distances, in order to derive the extent to which, for each class of elements, a quasi-conceptual (digitally reconstructed) model deviates from the real data (e.g., from a point cloud acquired by survey). To this end, the comparison of the two (real and ideal) models could lead to the construction of disparity maps showing the variation, in space and time, of real (existing) architectural elements compared to the relative ideal model (Figure 27). For instance, this study could enable further refinement of the model geometry or could have an impact on the study of the evolution of an architectural style over time.
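Such a real-versus-ideal comparison could be sketched, for a single class, by sampling the reconstructed template geometry and computing point-to-point distances with Open3D; the file names below are hypothetical inputs (the surveyed points of one class and the corresponding reconstructed template mesh).
```python
# Deviation of surveyed class points from the reconstructed (ideal) template geometry.
import numpy as np
import open3d as o3d

pcd_class = o3d.io.read_point_cloud("class_column_points.ply")    # assumed inputs
template_mesh = o3d.io.read_triangle_mesh("column_template.obj")

# Sample the ideal geometry and measure, for each surveyed point, the distance
# to its nearest sampled point on the template.
template_pcd = template_mesh.sample_points_uniformly(number_of_points=200_000)
dists = np.asarray(pcd_class.compute_point_cloud_distance(template_pcd))

print(f"mean deviation {dists.mean():.3f} m, max {dists.max():.3f} m")
# Mapping 'dists' onto a color ramp over pcd_class yields a disparity map (cf. Figure 27).
```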
The connection between the conceptual model and the 3D point cloud allows each element belonging to a given class to be enriched with additional analytical or technical data. However, it is worth noting that the transfer of localized information—that is, information related to small portions of the model geometries—is not yet possible unless appropriate subdivisions of the reconstructed surfaces are provided to connect the information at a higher level of detail.

6. Conclusions

This work aimed at the automation of Scan-to-BIM workflows by combining AI-based semantic segmentation methods and graphical algorithm editors for 3D modeling.
At first, geometry-based classification approaches are exploited to enable, to different extents, the addition of a semantic label associated with the decomposition of the building into recurring architectural elements. ML algorithms, implemented by suitably leveraging the manipulation, export and extraction of geometric and visual descriptors (features) from raw 2D or 3D data, significantly reduce the manual annotation effort, lessening the space for arbitrary and overly subjective choices. However, user supervision, in terms of choice of the training set and decision on which classes should be used to partition the digital data, is crucial in determining the success of the classification and labeling process. In addition, high performance computing machines are required in the feature extraction and data-driven algorithm training phases. Moreover, misclassification may occur in boundary regions, as well as in regions with similar development (geometry-based approaches) or with similar color characteristics or patterns (texture-based approaches). The generalization of the same ML algorithm to other datasets, pertaining to different architectural types and/or built in different periods, as well as the establishment of larger annotated datasets to train deep neural networks, are possible future developments in this domain.
The semantically segmented point cloud is later exploited for the construction of a reference model, composed of template geometries, following the logic of H-BIM-type information systems. The proposed procedure enables the reconstruction of Heritage-Building Information Models starting from annotated 3D survey data. The reconstructed H-BIM model preserves the semantic link with the annotated point cloud, at the level of the single classes of detected architectural components, and can be leveraged for further enrichment with non-geometric (analytical, knowledge-related) information. However, it is noted that the annotation process, although straightforward when referred to the single architectural component, becomes tricky in the case of localized annotation.
Considering this aspect, future work could focus on developing additional possibilities for semantic structuring and for the transfer of more localized information, relying on the definition of suitable tiling procedures for the template model geometries. These experiments could be aimed at noting, inter alia, the presence of frescoes and decorative parts, degradation phenomena, crack patterns, and repair and restoration interventions.
The integration of reconstructed H-BIM models and existing H-GIS systems, at the urban and territorial scale, could even be the subject of future research.

Author Contributions

Conceptualization, investigation, methodology and validation, V.C.; resources, formal analysis and data curation, V.C., G.C. and A.P.; writing—original draft preparation, V.C.; writing—review and editing, V.C., G.C., A.P., L.D.L. and P.V.; supervision and project administration, G.C., A.P., L.D.L. and P.V.; funding acquisition, V.C., G.C., A.P., L.D.L. and P.V. The work is the result of a Ph.D. thesis developed in the framework of the International Doctorate in Civil and Environmental Engineering, XXXIV cycle (Universities of Pisa and Florence, Italy) and of the Ecole Doctorale Sciences des Métiers de l’Ingénieur SMI, ED SMI 432 (Ecole Nationale Supérieure d’Arts et Métiers ENSAM ParisTech Aix-en-Provence, France). The Ph.D. dissertation by V.C. was the result of a co-tutelle agreement between the Italian and French institutions involved in this project: the Department of Civil and Industrial Engineering of the University of Pisa (Italy), the MAP Laboratory of the Centre National de la Recherche Scientifique (CNRS; Marseille, France) and the LISPEN Laboratory in Aix-en-Provence (France). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by POR FSE TOSCANA 2014/2020 Ph.D. fellowships (Tuscany Region, Italy) and by the Université Franco-Italienne Vinci 2019 program for Chapter II—Mobility grants for co-tutored doctoral theses.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI	Artificial Intelligence
BIM	Building Information Modeling
DL	Deep Learning
FN	False Negatives
FP	False Positives
H-BIM	Heritage- or Historic-Building Information Modeling
ML	Machine Learning
RF	Random Forest
TN	True Negatives
TP	True Positives
VPL	Visual Programming Language

Appendix A

Figure A1. Examples of covariance features computed for different values of the local neighborhood radius, using an example from the Cloister of the National Museum of San Matteo.
Figure A2. Examples of changes of curvature, radiometric features and height features computed for the different case studies.
Figure A3. Validation sets. Comparison between true and predicted labels. The Grand cloister (a) and Grand-Ducal cloister (b) of the Pisa Charterhouse; the cloister of the National Museum of San Matteo (c).
Figure A4. Confusion matrix and performance scores for the three case studies.

References

  1. Dore, C.; Murphy, M. Semi-automatic techniques for as-built BIM façade modeling of historic buildings. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; pp. 473–480.
  2. Murphy, M.; McGovern, E.; Pavia, S. Historic building information modelling (HBIM). Struct. Surv. 2009, 27, 311–327.
  3. López, F.; Lerones, P.; Llamas, J.; Gómez-García-Bermejo, J.; Zalama, E. A Review of Heritage Building Information Modeling (H-BIM). MTI 2018, 2, 21.
  4. Pocobelli, D.P.; Boehm, J.; Bryan, P.; Still, J.; Grau-Bové, J. BIM for heritage science: A review. Herit. Sci. 2018, 6, 30.
  5. Pătrăucean, V.; Armeni, I.; Nahangi, M.; Yeung, J.; Brilakis, I.; Haas, C. State of research in automatic as-built modelling. Adv. Eng. Inform. 2015, 29, 162–171.
  6. Logothetis, S.; Delinasiou, A.; Stylianidis, E. Building Information Modelling for Cultural Heritage: A review. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-5/W3, 177–183.
  7. Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127.
  8. Hichri, N.; Stefani, C.; De Luca, L.; Veron, P.; Hamon, G. From point cloud to BIM: A survey of existing approaches. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, XL-5/W2, 343–348.
  9. Miceli, A.; Morandotti, M.; Parrinello, S. 3D survey and semantic analysis for the documentation of built heritage. The case study of Palazzo Centrale of Pavia University. VITRUVIO Int. J. Archit. Technol. Sustain. 2020, 5, 65.
  10. Oreni, D.; Brumana, R.; Georgopoulos, A.; Cuca, B. HBIM for conservation and built heritage: Towards a library of vaults and wooden bean floors. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W1, 215–221.
  11. Arayici, Y. Towards building information modelling for existing structures. Struct. Surv. 2008, 26, 210–222.
  12. Fai, S.; Sydor, M. Building Information Modelling and the documentation of architectural heritage: Between the ‘typical’ and the ‘specific’. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; Volume 1, pp. 731–734.
  13. Bacci, G.; Bertolini, F.; Bevilacqua, M.G.; Caroti, G.; Martínez-Espejo Zaragoza, I.; Martino, M.; Piemonte, A. HBIM methodologies for the architectural restoration. The case of the ex-church of San Quirico all’Olivo in Lucca, Tuscany. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W11, 121–126.
  14. Angulo-Fornos, R.; Castellano-Román, M. HBIM as Support of Preventive Conservation Actions in Heritage Architecture. Experience of the Renaissance Quadrant Façade of the Cathedral of Seville. Appl. Sci. 2020, 10, 2428.
  15. Barazzetti, L.; Banfi, F.; Brumana, R.; Gusmeroli, G.; Previtali, M.; Schiantarelli, G. Cloud-to-BIM-to-FEM: Structural simulation with accurate historic BIM from laser scans. Simul. Model. Pract. Theory 2015, 57, 71–87.
  16. Croce, P.; Landi, F.; Puccini, B.; Martino, M.; Maneo, A. Parametric HBIM procedure for the structural evaluation of Heritage masonry buildings. Buildings 2022, 12, 194.
  17. Pepe, M.; Costantino, D.; Restuccia Garofalo, A. An efficient pipeline to obtain 3D model for HBIM and structural analysis purposes from 3D point clouds. Appl. Sci. 2020, 10, 1235.
  18. Brumana, R.; Oreni, D.; Cuca, B.; Binda, L.; Condoleo, P.; Triggiani, M. Strategy for integrated surveying techniques finalized to interpretive models in a Byzantine Church, Mesopotam, Albania. Int. J. Archit. Herit. 2014, 8, 886–924.
  19. Pocobelli, D.P.; Boehm, J.; Bryan, P.; Still, J.; Grau-Bové, J. Building Information Modeling for monitoring and simulation data in heritage buildings. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 909–916.
  20. Martinez Espejo Zaragoza, I.; Caroti, G.; Piemonte, A. The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis. ACTA IMEKO 2021, 10, 114.
  21. Brumana, R.; Oreni, D.; Barazzetti, L.; Cuca, B.; Previtali, M.; Banfi, F. Survey and Scan to BIM Model for the Knowledge of Built Heritage and the Management of Conservation Activities. In Digital Transformation of the Design, Construction and Management Processes of the Built Environment; Daniotti, B., Gianinetto, M., Della Torre, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 391–400. ISBN 978-3-030-33570-0.
  22. Bruno, S.; Musicco, A.; Fatiguso, F.; Dell’Osso, G.R. The Role of 4D Historic Building Information Modelling and Management in the Analysis of Constructive Evolution and Decay Condition within the Refurbishment Process. Int. J. Archit. Herit. 2019, 15, 1250–1266.
  23. Macher, H.; Landes, T.; Grussenmeyer, P. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings. Appl. Sci. 2017, 7, 1030.
  24. Bevilacqua, M.G.; Caroti, G.; Piemonte, A.; Terranova, A.A. Digital Technology and Mechatronic Systems for the Architectural 3D Metric Survey. In Mechatronics for Cultural Heritage and Civil Engineering; Intelligent Systems, Control and Automation: Science and Engineering; Ottaviano, E., Pelliccio, A., Gattulli, V., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 92, pp. 161–180. ISBN 978-3-319-68645-5.
  25. Fiorucci, M.; Khoroshiltseva, M.; Pontil, M.; Traviglia, A.; Del Bue, A.; James, S. Machine Learning for Cultural Heritage: A Survey. Pattern Recognit. Lett. 2020, 133, 102–108.
  26. Matrone, F.; Grilli, E.; Martini, M.; Paolanti, M.; Pierdicca, R.; Remondino, F. Comparing Machine and Deep Learning Methods for Large 3D Heritage Semantic Segmentation. IJGI 2020, 9, 535. [Google Scholar] [CrossRef]
  27. Grilli, E.; Remondino, F. Classification of 3D Digital Heritage. Remote Sens. 2019, 11, 847. [Google Scholar] [CrossRef] [Green Version]
  28. Rocha, G.; Mateus, L.; Fernández, J.; Ferreira, V. A Scan-to-BIM Methodology Applied to Heritage Buildings. Heritage 2020, 3, 47–67. [Google Scholar] [CrossRef] [Green Version]
  29. Yang, X.; Koehl, M.; Grussenmeyer, P. Mesh-To-BIM: From segmented mesh elements to BIM model with limited parameters. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 1213–1218. [Google Scholar] [CrossRef] [Green Version]
  30. Bruno, S.; De Fino, M.; Fatiguso, F. Historic Building Information Modelling: Performance assessment for diagnosis-aided information modelling and management. Autom. Constr. 2018, 86, 256–276. [Google Scholar] [CrossRef]
  31. Previtali, M.; Barazzetti, L.; Brumana, R.; Scaioni, M. Towards automatic indoor reconstruction of cluttered building rooms from point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 281–288. [Google Scholar] [CrossRef] [Green Version]
  32. Bruno, N.; Roncella, R. HBIM for Conservation: A New Proposal for Information Modeling. Remote Sens. 2019, 11, 1751. [Google Scholar] [CrossRef] [Green Version]
  33. Tommasi, C.; Achille, C. Interoperability matter: Levels of data sharing, starting from a 3D information modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W3, 623–630. [Google Scholar] [CrossRef] [Green Version]
  34. Tang, P.; Huber, D.; Akinci, B.; Lipman, R.; Lytle, A. Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques. Autom. Constr. 2010, 19, 829–843. [Google Scholar] [CrossRef]
  35. Dore, C.; Murphy, M. Current state of the art Historic Building Information Modeling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 185–192. [Google Scholar] [CrossRef] [Green Version]
  36. Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337. [Google Scholar] [CrossRef] [Green Version]
  37. Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77. [Google Scholar] [CrossRef]
  38. Zhang, X. Curvature estimation of 3D point cloud surfaces through the fitting of normal section curvatures. Proc. Asiagraph 2008, 8, 23–26. [Google Scholar]
  39. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  40. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  41. Hough, P.V.C. Method and Means for Recognizing Complex Patterns. U.S. Patent 3,069,654, 18 December 1962. [Google Scholar]
  42. Thomson, C.; Boehm, J. Automatic Geometry Generation from Point Clouds for BIM. Remote Sens. 2015, 7, 11753–11775. [Google Scholar] [CrossRef] [Green Version]
  43. Bosché, F.; Ahmed, M.; Turkan, Y.; Haas, C.T.; Haas, R. The value of integrating Scan-to-BIM and Scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: The case of cylindrical MEP components. Autom. Constr. 2015, 49, 201–213. [Google Scholar] [CrossRef]
  44. Wang, Z.; Shi, W.; Akoglu, K.; Kotoula, E.; Yang, Y.; Rushmeier, H. CHER-Ob: A Tool for Shared Analysis and Video Dissemination. J. Comput. Cult. Herit. 2018, 11, 1–22. [Google Scholar] [CrossRef]
  45. Hong, S.; Jung, J.; Kim, S.; Cho, H.; Lee, J.; Heo, J. Semi-automated approach to indoor mapping for 3D as-built building information modeling. Comput. Environ. Urban Syst. 2015, 51, 34–46. [Google Scholar] [CrossRef]
  46. Grassi, I. Applicazione della metodologia HBIM al Chiostro Granducale della Certosa di Calci: Restituzione semantica e mappatura tridimensionale del degrado. Master’s Thesis, Scuola di Ingegneria, Università di Pisa, Pisa, Italy, 2019. [Google Scholar]
  47. Santagati, C.; Lo Turco, M.; Garozzo, R. Reverse information modeling for historic artefacts: Towards the definition of a level of accuracy for ruined heritage. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 1007–1014. [Google Scholar] [CrossRef] [Green Version]
  48. Yang, X.; Lu, Y.-C.; Murtiyoso, A.; Koehl, M.; Grussenmeyer, P. HBIM Modeling from the Surface Mesh and Its Extended Capability of Knowledge Representation. IJGI 2019, 8, 301. [Google Scholar] [CrossRef] [Green Version]
  49. Rodríguez-Moreno, C.; Reinoso-Gordo, J.F.; Rivas-López, E.; Gómez-Blanco, A.; Ariza-López, F.J.; Ariza-López, I. From point cloud to BIM: An integrated workflow for documentation, research and modelling of architectural heritage. Surv. Rev. 2018, 50, 212–231. [Google Scholar] [CrossRef]
  50. Quattrini, R.; Battini, C.; Mammoli, R. HBIM TO VR. Semantic awareness and data enrichment interoperability for parametric libraries of historical architecture. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 937–943. [Google Scholar] [CrossRef] [Green Version]
  51. Baik, A. From point cloud to Jeddah Heritage BIM Nasif Historical House—Case study. Digit. Appl. Archaeol. Cult. Herit. 2017, 4, 1–18. [Google Scholar] [CrossRef]
  52. Fai, S.; Rafeiro, J. Establishing an appropriate Level of Detail (LoD) for a Building Information Model (BIM)—West Block, Parliament Hill, Ottawa, Canada. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 123–130. [Google Scholar] [CrossRef] [Green Version]
  53. De Luca, L.; Véron, P.; Florenzano, M. A generic formalism for the semantic modeling and representation of architectural elements. Vis. Comput. 2007, 23, 181–205. [Google Scholar] [CrossRef] [Green Version]
  54. Murphy, M.; McGovern, E.; Pavia, S. Historic Building Information Modelling—Adding intelligence to laser and image based surveys of European classical architecture. ISPRS J. Photogramm. Remote Sens. 2013, 14, 89–102. [Google Scholar] [CrossRef]
  55. Tommasi, C.; Achille, C.; Fassi, F. From point cloud to BIM: A modelling challenge in the Cultural Heritage field. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 429–436. [Google Scholar] [CrossRef] [Green Version]
  56. Andriasyan, M.; Moyano, J.; Nieto-Julián, J.E.; Antón, D. From Point Cloud Data to Building Information Modelling: An Automatic Parametric Workflow for Heritage. Remote Sens. 2020, 12, 1094. [Google Scholar] [CrossRef] [Green Version]
  57. Capone, M.; Lanzara, E. Scan-to-BIM vs. 3D ideal model HBIM: Parametric tools to study domes geometry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 219–226. [Google Scholar] [CrossRef] [Green Version]
  58. Kelly, G. A survey of procedural techniques for city generation. ITB J. 2006, 14, 342–351. [Google Scholar] [CrossRef]
  59. Pierdicca, R.; Paolanti, M.; Matrone, F.; Martini, M.; Morbidoni, C.; Malinverni, E.S.; Frontoni, E.; Lingua, A.M. Point Cloud Semantic Segmentation Using a Deep Learning Framework for Cultural Heritage. Remote Sens. 2020, 12, 1005. [Google Scholar] [CrossRef] [Green Version]
  60. Paumard, M.-M.; Picard, D.; Tabia, H. Deepzzle: Solving Visual Jigsaw Puzzles with Deep Learning and Shortest Path Optimization. IEEE Trans. Image Process. 2020, 29, 3569–3581. [Google Scholar] [CrossRef]
  61. Ibrahim, Y.; Nagy, B.; Benedek, C. Deep Learning-Based Masonry Wall Image Analysis. Remote Sens. 2020, 12, 3918. [Google Scholar] [CrossRef]
  62. Varinlioglu, G.; Balaban, O. Artificial Intelligence in Architectural Heritage Research: Simulating Networks of Caravanserais through Machine Learning; The Routledge Companion to Artificial Intelligence in Architecture; Basu, P., Ed.; Routledge: Abingdon-on-Thames, UK, 2021; ISBN 978-0-367-42458-9. [Google Scholar]
  63. Korč, F.; Förstner, W. eTRIMS Image Database for Interpreting Images of Man-Made Scenes; Technical Report; Department of Photogrammetry, University of Bonn: Bonn, Germany, 2009. [Google Scholar]
  64. Manfredi, M.; Grana, C.; Cucchiara, R. Automatic Single-Image People Segmentation and Removal for Cultural Heritage Imaging. In New Trends in Image Analysis and Processing—ICIAP 2013; Petrosino, A., Maddalena, L., Pala, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 188–197. [Google Scholar]
  65. Grilli, E.; Dininno, D.; Petrucci, G.; Remondino, F. From 2D to 3D supervised segmentation and classification for Cultural Heritage applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 399–406. [Google Scholar] [CrossRef] [Green Version]
  66. Musicco, A.; Galantucci, R.A.; Bruno, S.; Verdoscia, C.; Fatiguso, F. Automatic point cloud segmentation for the detection of alterations on historical buildings through an unsupervised and clustering-based Machine Learning approach. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 2, 129–136. [Google Scholar] [CrossRef]
  67. Bassier, M.; Yousefzadeh, M.; Vergauwen, M. Comparison of 2D and 3D wall reconstruction algorithms from point cloud data for as-built BIM. ITcon 2020, 25, 173–192. [Google Scholar] [CrossRef]
  68. El Kadi, K.A. Automatic Extraction of Facade Details of Heritage Building Using Terrestrial Laser Scanning Data. J. Archit. Eng. Technol. 2014, 3, 2. [Google Scholar] [CrossRef]
  69. Morbidoni, C.; Pierdicca, R.; Paolanti, M.; Quattrini, R.; Mammoli, R. Learning from synthetic point cloud data for historical buildings semantic segmentation. J. Comput. Cult. Herit. 2020, 13, 1–16. [Google Scholar] [CrossRef]
  70. Grilli, E.; Farella, E.M.; Torresani, A.; Remondino, F. Geometric features analysis for the classification of Cultural Heritage point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 541–548. [Google Scholar] [CrossRef] [Green Version]
  71. Malinverni, E.S.; Mariano, F.; Di Stefano, F.; Petetta, L.; Onori, F. Modeling in HBIM to document materials decay by a thematic mapping to manage the Cultural Heritage: The case of Chiesa della Pietà in Fermo. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 777–784. [Google Scholar] [CrossRef] [Green Version]
  72. Weinmann, M. Reconstruction and Analysis of 3D Scenes; Springer International Publishing: Cham, Switzerland, 2016; ISBN 978-3-319-29244-1. [Google Scholar]
  73. Özdemir, E.; Remondino, F.; Golkar, A. Aerial point cloud classification with deep learning and machine learning algorithms. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-4/W18, 843–849. [Google Scholar] [CrossRef] [Green Version]
  74. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  75. Kyriakaki-Grammatikaki, S.; Stathopoulou, E.K.; Grilli, E.; Remondino, F.; Georgopoulos, A. Geometric primitive extraction from semantically enriched point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 46, 291–298. [Google Scholar] [CrossRef]
  76. Croce, V.; Caroti, G.; De Luca, L.; Jacquot, K.; Piemonte, A.; Véron, P. From the Semantic Point Cloud to Heritage-Building Information Modeling: A Semiautomatic Approach Exploiting Machine Learning. Remote Sens. 2021, 13, 461. [Google Scholar] [CrossRef]
  77. Matrone, F.; Lingua, A.; Pierdicca, R.; Malinverni, E.S.; Paolanti, M.; Grilli, E.; Remondino, F.; Murtiyoso, A.; Landes, T. A benchmark for large-scale heritage point cloud semantic segmentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1419–1426. [Google Scholar] [CrossRef]
  78. Scamozzi, V. L’Idea dell’Architettura Universale; Colpo di Fulmine Edizioni: Venezia, Italy, 1615. [Google Scholar]
  79. Kowalczyk, K.; Rapinski, J. Investigating the error sources in reflectorless EDM. J. Surv. Eng. 2014, 140, 06014002. [Google Scholar] [CrossRef]
Figure 1. Steps of the Scan-to-BIM workflow, using an example from the Grand-Ducal cloister, Pisa Charterhouse.
Figure 2. Instantiation of a capital by direct reconstruction over the raw point cloud, using an example from the Grand-Ducal cloister, Pisa Charterhouse.
Figure 3. Primitive fitting of perimeter walls and slabs with EdgeWise Building. Raw point cloud (a); fitted planes (b); BIM objects (c). Example from the Grand-Ducal cloister, Pisa Charterhouse [46].
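The primitive fitting shown in Figure 3 is typically based on plane detection, e.g., via RANSAC [39,40]. The following Python/NumPy snippet is a minimal, illustrative RANSAC plane-fitting sketch, not the EdgeWise implementation; the iteration count and inlier threshold are assumed values.

```python
# Minimal RANSAC plane-fitting sketch (illustrative only, not the EdgeWise algorithm):
# repeatedly fit a plane through 3 random points and keep the plane with most inliers.
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.02, seed=None):
    """points: (N, 3) array; threshold: inlier distance in meters (assumed value)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal.dot(p1)                  # plane equation: n·x + d = 0
        inliers = np.abs(points @ normal + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

Walls and slabs would then be obtained by running the detection iteratively, removing the inliers of each detected plane before searching for the next one.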
Figure 4. Semantic segmentation compared to other computer vision tasks, using an example from the cloister of the National Museum of San Matteo, Pisa.
Figure 5. Point clouds for the three case studies considered: the Grand cloister of the Pisa Charterhouse (a); the Grand-Ducal cloister of the Pisa Charterhouse (b); the cloister of the National Museum of San Matteo in Pisa (c).
Figure 6. Two steps of the proposed approach: ML-based classification workflow, as illustrated in [76], and Scan-to-BIM reconstruction. The example provided refers to the case study of the Grand cloister of the Pisa Charterhouse.
Figure 7. Relevant features identified on a portion of the Grand-Ducal cloister dataset, Pisa Charterhouse, for different local neighborhood radii ρ.
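Figure 7 refers to eigenvalue-based (covariance) features computed in a spherical neighborhood of radius ρ, in the spirit of [70,72]. The sketch below computes three such descriptors per point (linearity, planarity, change of curvature); it is an illustration under assumed definitions and radius, not the exact feature extraction used in the study.

```python
# Sketch of covariance (eigenvalue-based) features for a single neighborhood radius.
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, radius=0.6):
    """points: (N, 3) array; returns per-point linearity, planarity, change of curvature."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue                                             # too few neighbors for a covariance
        l3, l2, l1 = np.linalg.eigvalsh(np.cov(points[idx].T))   # eigenvalues in ascending order
        if l1 <= 0:
            continue
        feats[i] = ((l1 - l2) / l1,                   # linearity
                    (l2 - l3) / l1,                   # planarity
                    l3 / (l1 + l2 + l3))              # change of curvature
    return feats
```

Computing the same descriptors for several radii ρ yields the multi-scale feature set visualized in Figure 7.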
Figure 8. Percentage of training set used for learning over the three case studies considered.
Figure 9. Training set for the three case studies. The Grand cloister (a) and Grand-Ducal cloister (b) of the Pisa Charterhouse; the cloister of the National Museum of San Matteo (c). The distinction of classes is conveyed visually, through a color legend.
Figure 10. Training set (a) and segmentation results (b,c) for the three case studies.
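The segmentation results in Figure 10 come from the supervised Random Forest classifier [74] trained on the annotated subsets of Figure 9. The snippet below is a hedged scikit-learn sketch of this step; the synthetic arrays, split ratio and number of trees are placeholders for the actual setup.

```python
# Hedged sketch of the supervised classification step with scikit-learn's Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-in data: in the real workflow, features come from the multi-scale descriptors
# (geometric, radiometric, height) and labels from the manually annotated training set.
rng = np.random.default_rng(0)
features = rng.random((10_000, 12))                 # (n_points, n_features), placeholder
labels = rng.integers(0, 9, size=10_000)            # e.g., 9 architectural classes

X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_val, clf.predict(X_val)))    # per-class precision/recall/F1
predicted_all = clf.predict(features)                       # labels for the whole point cloud
```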
Figure 11. Semantic segmentation: exploded view of the classified components, using an example from the Grand cloister, Pisa Charterhouse.
Figure 12. Two classes of architectural components isolated from the initial point cloud, using an example from the Grand-Ducal cloister, Pisa Charterhouse.
Figure 13. Point cloud import script. The meaning of each overlay rectangle is specified in the text. Example from Class 0—Arch, Grand cloister, Pisa Charterhouse.
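Figure 13 documents the VPL script that imports the annotated point cloud class by class into the modeling environment. The plain-Python sketch below reproduces the same idea, assuming an ASCII cloud with x, y, z and class-id columns; the file name and class mapping are hypothetical.

```python
# Class-by-class split of an annotated point cloud (sketch of the idea behind Figure 13).
import numpy as np

cloud = np.loadtxt("annotated_cloud.txt")            # hypothetical file: columns x y z class_id
xyz, class_id = cloud[:, :3], cloud[:, 3].astype(int)

class_names = {0: "Arch", 1: "Column", 2: "Vault"}   # partial, illustrative mapping
subclouds = {name: xyz[class_id == cid] for cid, name in class_names.items()}

# Each sub-cloud is then handled separately in the 3D modeling environment,
# e.g., written to its own file before reconstructing the class template.
for name, pts in subclouds.items():
    np.savetxt(f"class_{name}.txt", pts)
```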
Figure 14. Segmented point cloud import. Selection (a) and isolation (b) of the ‘Vault’ class, using an example from the Grand cloister, Pisa Charterhouse.
Figure 15. Study on template moldings for the formalization of their shape grammar.
Figure 16. The cavetto molding can be modified by editing the sliders of the bounding box (1) and of the anchoring point P (2). The resulting curve is updated both in the VPL GUI (3) and in the 3D viewer (4).
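Figure 16 shows the parametric control of a cavetto molding through the bounding-box sliders and the anchoring point P. As a rough illustration of such a parameterization, the sketch below builds a concave quarter-ellipse profile inscribed in a box of width w and height h anchored at P; the actual VPL definition follows the treatise-based shape grammar discussed in the text.

```python
# Illustrative parameterization of a cavetto-like profile: a concave quarter-ellipse
# inscribed in a bounding box (width w, height h) starting at the anchoring point P.
import numpy as np

def cavetto_profile(P=(0.0, 0.0), w=0.10, h=0.10, n=50):
    """Return an (n, 2) polyline running from P to P + (w, h)."""
    px, py = P
    t = np.linspace(0.0, np.pi / 2.0, n)
    x = px + w * (1.0 - np.cos(t))        # starts at px, ends at px + w
    y = py + h * np.sin(t)                # starts at py, ends at py + h
    return np.column_stack([x, y])

profile = cavetto_profile(P=(0.0, 0.0), w=0.08, h=0.12)   # assumed dimensions in meters
```

Editing w, h or P regenerates the curve, which mirrors the slider-driven behavior described in the caption.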
Figure 17. Generation of a profile by the combination of multiple moldings (left). The generating profile (a) is swept along the directing path (b) to create the 3D surface of the entablature (right).
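Figure 17 obtains the entablature surface by sweeping the generating profile along a directing path. The sketch below illustrates the concept for the simplest case of a straight, axis-aligned path; the VPL workflow relies on a dedicated sweep node and supports arbitrary paths.

```python
# Concept sketch of a sweep: a 2D generating profile is repeated along a straight
# directing path (here assumed parallel to the x-axis), producing a grid of 3D points.
import numpy as np

def sweep_profile(profile_yz, path_start, path_end, n_steps=20):
    """profile_yz: (P, 2) profile in the local y-z plane; returns an (n_steps, P, 3) grid."""
    profile_yz = np.asarray(profile_yz, float)
    start, end = np.asarray(path_start, float), np.asarray(path_end, float)
    surface = np.zeros((n_steps, len(profile_yz), 3))
    for i, t in enumerate(np.linspace(0.0, 1.0, n_steps)):
        origin = (1.0 - t) * start + t * end          # point on the directing path
        surface[i, :, 0] = origin[0]
        surface[i, :, 1] = origin[1] + profile_yz[:, 0]
        surface[i, :, 2] = origin[2] + profile_yz[:, 1]
    return surface

profile = np.array([[0.00, 0.00], [0.05, 0.00], [0.05, 0.10], [0.00, 0.10]])  # placeholder profile
entablature = sweep_profile(profile, path_start=(0.0, 0.0, 3.0), path_end=(4.0, 0.0, 3.0))
```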
Figure 18. Creation of the column shaft: base circle (1), reference circles (2) and shape generation via the loft function (3).
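Figure 18 generates the column shaft by lofting between the base circle and the reference circles. The following sketch interpolates the radii of circular sections between given heights; radii and heights are placeholder values, and an actual shaft may follow an entasis curve rather than a linear taper.

```python
# Sketch of a loft between circular sections: radii are interpolated along the shaft axis.
import numpy as np

def loft_column_shaft(radii, heights, n_theta=36, n_levels=40):
    """radii, heights: matching 1D sequences of reference circles; returns (n_levels, n_theta, 3) points."""
    radii, heights = np.asarray(radii, float), np.asarray(heights, float)
    z = np.linspace(heights[0], heights[-1], n_levels)
    r = np.interp(z, heights, radii)                     # piecewise-linear loft between circles
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    surface = np.zeros((n_levels, n_theta, 3))
    surface[:, :, 0] = r[:, None] * np.cos(theta)[None, :]
    surface[:, :, 1] = r[:, None] * np.sin(theta)[None, :]
    surface[:, :, 2] = z[:, None]
    return surface

shaft = loft_column_shaft(radii=[0.22, 0.20, 0.17], heights=[0.0, 1.5, 3.0])   # assumed dimensions
```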
Figure 19. Example of the propagation algorithm: template geometry (1); insertion points and Move node (2); propagated geometries (3). Example from Class 1—Column, Grand cloister, Pisa Charterhouse.
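Figure 19 propagates the reconstructed template geometry to all insertion points of the class through a Move node. A minimal Python equivalent is sketched below: the template vertices are copied and translated onto each insertion point. Template and insertion points are placeholders; in practice they come from the reconstructed geometry and from the classified sub-cloud (e.g., column base centroids).

```python
# Sketch of the propagation step: copy the template geometry onto each insertion point.
import numpy as np

def propagate_template(template_vertices, insertion_points, template_origin=(0.0, 0.0, 0.0)):
    """Return one translated copy of the template vertex array per insertion point."""
    template = np.asarray(template_vertices, float) - np.asarray(template_origin, float)
    return [template + np.asarray(p, float) for p in insertion_points]

# Hypothetical usage with a placeholder template (four points) and three insertion points.
template = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]])
columns = propagate_template(template, insertion_points=[(0.0, 0.0, 0.0), (2.7, 0.0, 0.0), (5.4, 0.0, 0.0)])
```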
Figure 20. Examples of reconstruction of parametric components: original class (a), conceptual reference geometry (b) and propagation of the information to the whole class (c).
Figure 21. Resulting conceptual model for a portion of the cloister in the National Museum of San Matteo, with related point cloud (a,b).
Figure 22. Resulting conceptual model for a portion of the main cloister in the Pisa Charterhouse and selection of the ‘Column’ class.
Figure 23. VPL import script via Rhino.Inside.Revit. The selected template geometry (1) is imported into Autodesk Revit as a generic model component (2) and associated with the Revit material ‘white marble’ (3).
Figure 24. Average time required for the different phases of the classification process.
Figure 25. Misclassified elements in boundary regions. The Grand cloister (a) and Grand-Ducal cloister (b) of the Pisa Charterhouse; the cloister of the National Museum of San Matteo (c).
Figure 26. Example of the same feature (linearity, ρ = 0.6 m) computed on two point clouds with higher (a) and lower (b) density. Example from the main cloister, Pisa Charterhouse.
Figure 27. The real model (point cloud, (a)) and the template model (b) are compared to extract a disparity map (c). The signed distances between the two models are expressed by a dedicated color scale. Example from Class 0—Arch, Grand-Ducal cloister, Pisa Charterhouse.
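Figure 27 compares the surveyed point cloud with the template model to extract a disparity map. The sketch below computes, for each surveyed point, the distance to the nearest point sampled on the template; only unsigned distances are computed here, since signing them requires the template surface normals, which is beyond this illustration. The random arrays are placeholders for the real data.

```python
# Sketch of the comparison behind a disparity map: nearest-neighbor distances from the
# surveyed cloud to a point sampling of the template model.
import numpy as np
from scipy.spatial import cKDTree

def disparity_map(real_points, template_points):
    """Return, for each real point, the distance to its nearest template point."""
    tree = cKDTree(template_points)
    distances, _ = tree.query(real_points, k=1)
    return distances

rng = np.random.default_rng(1)
real = rng.random((5000, 3))         # placeholder for the surveyed arch
template = rng.random((2000, 3))     # placeholder for points sampled on the template model
print(disparity_map(real, template).mean())
```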
Table 1. Average performance measures obtained for the classification of the three datasets.

                   Grand Cloister,      Grand-Ducal Cloister,   Cloister of the National Museum
                   Pisa Charterhouse    Pisa Charterhouse       of San Matteo, Pisa
n. of classes      9                    10                      9
Avg. recall        93.49%               83.03%                  84.37%
Avg. precision     95.56%               82.07%                  89.71%
Avg. accuracy      99.30%               98.04%                  98.73%
Avg. F1-score      94.44%               81.53%                  85.98%
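As an illustration of how the averaged scores in Table 1 (and the per-class values of Figure A4) can be derived from true and predicted per-point labels, a scikit-learn sketch follows; the synthetic labels are placeholders for the actual validation sets.

```python
# Macro-averaged classification scores from true and predicted labels (illustrative data).
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, precision_recall_fscore_support

rng = np.random.default_rng(0)
y_true = rng.integers(0, 9, size=10_000)                              # placeholder, 9 classes
y_pred = np.where(rng.random(10_000) < 0.9, y_true, rng.integers(0, 9, size=10_000))

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(confusion_matrix(y_true, y_pred))                               # matrix as in Figure A4
print(f"accuracy={accuracy_score(y_true, y_pred):.4f}  precision={precision:.4f}  "
      f"recall={recall:.4f}  F1={f1:.4f}")
```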
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
