Article

A Study of 3D Digitisation Modalities for Crime Scene Investigation

by George Galanakis 1,*, Xenophon Zabulis 1, Theodore Evdaimon 1, Sven-Eric Fikenscher 2, Sebastian Allertseder 2, Theodora Tsikrika 3 and Stefanos Vrochidis 3

1 Foundation for Research and Technology Hellas, Institute of Computer Science, N. Plastira 100, Vassilika Vouton, 70013 Heraklion, Greece
2 Hochschule für den öffentlichen Dienst in Bayern, Fachbereich Polizei, Fürstenfelder Str. 29, 82256 Fürstenfeldbruck, Germany
3 Centre of Research and Technology Hellas, Information Technologies Institute, 6th km Harilaou-Thermi, 57001 Thessaloniki, Greece
* Author to whom correspondence should be addressed.

Forensic Sci. 2021, 1(2), 56-85; https://doi.org/10.3390/forensicsci1020008
Submission received: 31 May 2021 / Revised: 21 July 2021 / Accepted: 27 July 2021 / Published: 30 July 2021

Abstract: A valuable aspect of crime scene investigation is the digital documentation of the scene. Traditional means of documentation include photography and in situ measurements taken by experts for further analysis. Although 3D reconstruction of pertinent scenes has already been explored as a complementary tool in investigation pipelines, such technology is considered unfamiliar and is not yet widely adopted. This is explained by the expensive and specialised digitisation equipment that has been available so far. However, high-precision but low-cost devices capable of scanning scenes or objects in 3D have emerged and proven to be a reliable alternative to their expensive counterparts. This paper summarises and analyses the state-of-the-art technologies in scene documentation using 3D digitisation and assesses their usefulness in typical police-related situations and the forensics domain in general. We present the methodology for acquiring data for 3D reconstruction of various types of scenes. Emphasis is placed on the applicability of each technique in a wide range of situations, ranging in type and size. The application of each reconstruction method is considered in this context and compared with respect to additional constraints, such as time availability and simplicity of operation of the corresponding scanning modality. To further support our findings, we release to the public a multi-modal dataset obtained from a hypothetical indoor crime scene.

1. Introduction

A crime scene is considered to be any location that may be associated with a committed offence and where forensic evidence may be gathered. This work presents 3D digitisation approaches related to such scenes but also considers crime prevention planning and the education of LEAs to address possible security threats in both indoor and outdoor settings. In this work, we refer to all of these locations simply as scenes and investigate the appropriate digitisation tools for each type of use-case.
The digitisation, or 3D reconstruction of a scene, can be categorised into three types of use-cases, which are as follows.
  • The prevention and study of a scene prior to and during an event;
  • Analysis and documentation of a scene after an event;
  • The provision of 3D content in educational simulations that are based on Extended Reality technologies (i.e., Virtual Reality).
Digitisation is facilitated by numerous technologies that are available nowadays. However, each technology exhibits varying suitability, depending on a variety of applicability factors. In this investigation, we take several of them into consideration, including (a) technical constraints, (b) ease of use, (c) level of automation, (d) cost and (e) time duration of the scan.
Taking advantage of all such technologies is of great importance for the documentation of the physical contents of the scene. The 3D reconstruction component of the documentation facilitates the LEAs' work by organising data from multiple scanning modalities and providing the ability to inspect these data with ease.
An overview of the use-cases from the perspective of the target 3D reconstruction environment is shown in Figure 1, together with examples of such scenes. A primary classification regards the type of the scene, in particular, whether it is indoors or outdoors. The reason is that scanning conditions are dramatically different in these situations. Besides technical constraints, which limit the applicability of certain modalities in each environment, there are significant environmental and practical differences between the two. Outdoor digitisation involves weather and safety constraints, uncontrolled illumination, and a possible shortage of time before traces are diminished by elements of nature (water, temperature, etc.). Next, the size of the scene is considered in each case, as different types of digitisation modalities are relevant. Examples of environmental situations are shown according to the classification above.
The aim of this work is twofold: first, to present the state of the art in scanning modalities and briefly describe their technical details and perspectives; second, to relate the scanning modalities to the aforementioned use-cases and, more precisely, to suggest which modality is suitable for each use-case. As a complement to our report, we release to the public a multi-modal dataset obtained from a hypothetical indoor crime scene.

2. Related Work

Three-dimensional scanning for crime scene investigation has received attention lately as an alternative or complementary tool to photographic documentation. In this section, we present some recent efforts on the topic. We classify the methods according to digitisation technology.
Conventional RGB cameras, such as DSLRs, have been employed as indirect measurement modalities, forming the basis of the photogrammetry methodology in a wide range of applications. For example, in [1], the scope of the 3D reconstruction is an entire traffic accident scene. A more specific case is considered in [2], where human skin and a hairbrush are reconstructed to determine whether abrasions on the skin resulted from being hit with the hairbrush.
Direct measurement devices are also popular since they offer comparably more precise reconstructions and faster scanning times. These range from inexpensive RGB-D sensors to high-end scanners. In [3], a Kinect v2 RGB-D sensor is utilised for reconstructing a crime scene taking place in a small room. Several scans of the room were aligned into a single 3D mesh and imported along with virtual objects into a scene that was constructed with the Unity 3D game engine. In other works, 3D scanning is achieved using higher-end devices. In [4], a FARO Focus LS120 scanner is utilised for scanning walls for blood pattern analysis. In [5], the scope is to document and measure bullet trajectories as they hit drywall panels from various angles. For this purpose, they utilised a FARO Focus S350. To specify which tool caused a wound, the authors in [6] scanned several household tools and their corresponding “wounds” on a watermelon as a rough simulation of human skin. The objects were scanned using a Gom ATOS Compact Scan 5M, a structured-light scanner. General-purpose 3D scanners are the most adopted ones; however, devices for specialised tasks do exist. For example, TopMatch-3D, an in-lab scanner specialised for firearm forensics, is utilised in [7].
To compensate for the technical drawbacks, most approaches utilise more than one scanning device/modality, e.g., pairing photogrammetry with laser scanning, or handheld with terrestrial laser scanning. Such an approach is followed by [8], where the scope is to compare different scanning technologies for a variety of objects of interest, including furniture and shoe prints. Their toolset is composed of a handheld structured-light device (Go!Scan 50) and two terrestrial laser scanners: a Leica P40 and a Z + F Imager 5010X. The outcome is that the scanning accuracy is related to object properties, such as size and surface characteristics, and the distance of the scanner from the object. They also report that the real accuracy of distance measurements on the scan is much lower than the one declared by the scanners' manufacturers. In a more specific domain, the authors in [9] study the utilisation of 3D scanning of fingerprint impressions as encountered on different materials, compared to traditional 2D photography. They utilised two scanners: an inEos X5, a tabletop device specialised for dental use, and a handheld structured-light scanner (Artec Space Spider). Their experiments on fingerprint identification showed that the 3D methods outperformed the 2D counterpart and also enabled new domains of fingerprint analysis, such as finger curvature. In [10], 3D scanning is utilised as an investigation tool for identifying the source of an explosion, i.e., accidental or malicious, and for blast dynamics simulation in general. Three scanning modalities are compared: two low-cost structured-light devices, a Kinect v1 and a custom large-format DLP, and one expensive laser scanner (FARO Focus3D X130). The authors stated that low-cost devices are limited for that particular task and concluded that they would further focus on expensive equipment. The study conducted in [11] considers outdoor crime scene scenarios. Their simulated scene involved vehicles with varying degrees of damage to body parts, including bullet holes in windshields, fenders, doors, etc. For the 3D scanning, they considered both aerial photogrammetry using seven UAVs of variable specifications and terrestrial laser scanning (FARO Focus S70). The outcome of their study was that the terrestrial scanner resulted in higher accuracy, more specifically, 2.6 mm versus 33.2 mm as averaged over all the UAVs. UAVs with larger sensors, e.g., 20 MP, performed better. Their future work includes an examination of drones equipped with LIDAR for a better comparison to the terrestrial laser scanner. In [12], photogrammetry is combined with post-mortem computed tomography (PMCT) to investigate how a wound was produced by a knife. In particular, the knife is scanned using photogrammetry and a person's thorax using photogrammetry and CT. In [13], a comparative analysis of the accuracy and precision of 3D scanners and photogrammetric reconstructions was conducted under the scope of forensic incident scene documentation. Multiple scanners, both terrestrial and close-range, along with photogrammetric reconstruction, are evaluated. The outcome is that photogrammetry is comparable to laser scanning at close ranges, while at medium and far ranges, terrestrial laser scanners are the most appropriate modality.
In crime scene investigation, a domain of high interest is the precise examination of blood patterns. As such, it has also been approached using 3D scanning methodologies. In [14], a structured-light device (DotProduct DPI-8) and photogrammetry-based scanning are compared in this domain. The scans were imported into the FARO Zone 3D software for this purpose. Both modalities proved acceptable for estimating the area of origin of bloodstains, though photogrammetry achieved better precision (in the order of centimetres). We note that the particular structured-light scanner employs a rather dated sensor, similar to the Kinect v1; therefore, lower precision was somewhat expected. Blood pattern analysis is also considered in [15], where a high-end scanner (FARO Focus 3D) is combined with high-resolution photography provided by a DSLR. The photos were stitched to the scan using the FARO Zone 3D software. Three experts were asked to conduct manual analysis as a comparison to the analysis in the 3D software. The outcome of the study is that using the 3D software package improves the accuracy of area-of-origin estimates as compared to manual trigonometric methods. The authors also state that the 3D scanning approach permits reduced physical interaction with blood at a crime scene and less manual work; therefore, it has a positive impact on the health and well-being of the practitioners. The authors in [16] developed a custom rig capable of carrying common crime objects to create cast-off blood stains with controlled parameters, e.g., a constant path, a constant area of blood deposition and limited variation in the velocity of the swing. They scanned the blood patterns using a combination of high-resolution photography and laser scanning (FARO S350) to analyse trajectories of bloodstains in the FARO Zone 3D software.
Usually, 3D representations of crime scenes are inspected on ordinary monitors; however, presentation in a virtual reality environment enhances the experience and permits immersive interactions. VR itself has been shown to be an interesting modality, able to engage non-experts in crime scene inspections [17,18]. Under these considerations, a line of works utilised VR in their pipelines. In [19,20], a VR-based system implemented in Unity3D is proposed that allows for a virtual incident scene walk-through. The authors do not utilise specific scanning equipment but state that multimodal scanning is required to compensate for the drawbacks of each modality. More interestingly, they mention scanning using Virtobot 2.0 [21], a set of robotic arms capable of high-precision medical scanning. Although such a modality is useful for examining internal or external wounds, it is solely in-lab; therefore, its applicability is limited. In [22], a hypothetical indoor crime scene was reconstructed using a combination of a FARO Focus 3D X 330 and a FARO Freestyle 3D handheld scanner. The scene comprised a common room and an office with a desk inside. A handbag containing a tablet was left on the desk. An experiment was conducted where some users acted as suspects; they were asked to follow some steps to steal the tablet. During investigation, their responses were measured in terms of a Concealed Information Test (CIT), a measure for lie detection. The outcome of the experiment was that the detection of concealed recognition increased by over 25% when participants viewed crime items in VR compared to 2D images. In [23], and in contrast to most presented works, a real crime scene is considered. The complete 3D model of the crime scene, covering three rooms, was created from seven laser scans in total (Faro Focus3D S120), 23 structured-light scans (Go!Scan 50, Go!Scan 20) and high-resolution photography. Structured-light scans were used for the bodies and the details. Bloodstain pattern analysis was conducted with Faro Zone 3D, using high-resolution photos and point clouds aligned with the help of registration targets. Footprint analysis was conducted by a forensic podiatrist in the Vxmodel software. Matching injuries to injury-inflicting tools was conducted in Meshlab. The VR presentation, in which operators could measure or take screenshots of the virtual world, was delivered on an HTC VIVE Pro.
In most of the above examples, the 3D scanning equipment is discussed solely concerning its applicability for several use-cases. However, the greatest deterrent to adopting such high-quality scanning equipment is its high cost. In [24], a cost-benefit analysis is conducted, determining which social conditions must hold such that the purchase of expensive equipment by LEAs is worthwhile. The conclusion of the analysis is that worthiness is positively related to the amount of crime in a specific area.
Our review of existing works is not exhaustive but aims to be indicative of the prospects of 3D digitisation as a valuable part of CSI. Most importantly, we showed that a variety of modalities are utilised for tasks ranging from very broad to very specific. For further reading, we point to some recent insights on the topic [25,26] and some recent surveys [27,28].
In this work, we aim to present a study on a range of scanning modalities suitable for preventive forensics, crime investigation and the education of LEAs. We utilised a larger set of scanning modalities than any of the aforementioned works and conclude with a set of guidelines depending on the application.

3. Scanning Modalities

In this work, we consider a variety of scanning modalities. A scanning modality is primarily described by a sensing technology, i.e., the technology that captures real-world scenes and objects into corresponding digital assets, and varies from plain web cameras to sophisticated laser scanners. Another key perspective of each modality is its portability, i.e., whether it is stationary, handheld or aerial. The following paragraphs present the scanning modalities based on these and additional aspects.

3.1. Laser Scanner

A terrestrial laser scanner is a special device suited to scanning its surrounding environment by emitting laser beams and calculating distances from the reflected signal. Three-dimensional scanners of this category are manufactured by Artec [29], FARO [30], Leica [31,32,33] and others [34,35,36,37].
The advantage of laser scanning for scanning environments is that it is a very efficient, accurate and robust modality. It provides a direct point measurement along the line of sight of every radius within its view sphere, at a configurable resolution and an angular breadth of approximately 270 degrees of solid angle. Another significant advantage is that each scan takes place automatically and within a reasonable duration (approx. 20 min). Laser scanning has been utilised for over 20 years, and significant experience can be retrieved from the literature in the form of guidelines, while a range of software products exists that facilitates the registration of partial scans. It is limited by highly absorbent (dark) surfaces, which do not reflect enough sensor radiance for the time-of-flight measurement to succeed. Another limitation is that there is no real-time feedback available; hence, a preparatory scan is typically required to find the locations at which the scanner should be placed.
The main disadvantage of laser scanning is the price of this modality: a reliable unit of medium accuracy (2–3 mm) with a scan range of about 70 m is in the order of EUR 30,000. In addition, a reliable unit weighs at least 7–8 kg. Moreover, a laser scanner on the ground has no line of sight to the top of a building, which is out of its range. Airborne laser scanning exists but awaits advances regarding the payload of the laser scanner and flight velocity. Another disadvantage is that occlusions give rise to the requirement of several scans to cover the surfaces of a scene; this is particularly pronounced in indoor environments, which are usually cluttered with furniture.
The acquired partial scans have to be combined, or registered, at a later stage. The registration procedure is not necessarily automatic, particularly for complex environments. To increase automation of the procedure, the placement of markers in the scene is required. This is essential if high accuracy is required. Depending on the scale and complexity of the scene, full coverage can be very challenging.
In the outdoors, the operation of a laser scanner may be hindered by bright sunlight as it interferes with the radiation emitted from the scanner. Assuming proper water insulation, a laser scanner will not produce results as accurate as its specifications in the presence of bad weather (rain, haze) because it has been calibrated for single-phase media (air) and not two-phase dynamic media.
In general, laser scanning is a very useful tool, particularly in terms of accuracy. It is especially useful in textureless environments, where photogrammetry becomes more tedious or unreliable. It can methodically automate the scanning of wide areas from terrestrial viewpoints. As such, it can complement aerial scans and systematise the acquisition of broad spatial environments for the case of mission planning.
A wide range of terrestrial laser scanners is available on the market from manufacturers specialised in this domain. Table 1 presents some existing scanners and some of their common characteristics. A strict comparison of devices is difficult because manufacturers do not report scanning characteristics, such as ranging error/accuracy, in a common way. For example, ranging error is defined by Faro as “a systematic measurement error at around 10 m and 25 m”, whereas others do not report how they obtained their measurements. In general, one scanner might be better at shorter ranges than another, while a common property is that accuracy drops at longer ranges and differs across surfaces, such as white vs. black. Detailed comparisons and product lists can be examined in specialised search engines [38].

3.2. Photogrammetry

Photogrammetric reconstruction requires significant computational time to obtain results because it is not based on direct measurements of spatial structure (such as the laser scanner) but is rather an algorithm that computationally infers structure from implicit measurements (images). The main advantages of photogrammetry lie in the relatively low cost of the required equipment and the wide range of environments in which it can be applied. Decent results can be obtained using low-end sensors, e.g., a mobile phone, and relevant free/open-source photogrammetry software, such as Meshroom [39]. However, high-end optics provide images of high definition and, consequently, more accurate reconstructions. Another advantage of photogrammetry is texture realism, as laser scanning tends to provide low-resolution texture information that looks unrealistic when used in first-person VR applications, even though the geometry may be more accurate. Therefore, while a laser-only scan is probably advantageous for preserving a crime scene, it is not necessarily the best choice for a VR-training use-case. Figure 2 depicts a typical photogrammetric workflow using the structure-from-motion algorithm.
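To make the workflow of Figure 2 concrete, the following is a minimal two-view sketch of the structure-from-motion principle, using the OpenCV library; the image filenames and the intrinsic matrix are placeholders, and a complete pipeline such as Meshroom additionally performs multi-view matching, bundle adjustment and dense reconstruction.

```python
import cv2
import numpy as np

# Two overlapping photographs of the scene (hypothetical filenames).
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features across the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Camera intrinsics must be known (from calibration or EXIF); placeholder values.
K = np.array([[3000.0, 0.0, 1500.0], [0.0, 3000.0, 1000.0], [0.0, 0.0, 1.0]])

# Recover the relative camera pose and triangulate a sparse point cloud.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T  # sparse structure, up to an unknown global scale
```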
The core part of photogrammetry is the software that processes the obtained images; accordingly, plenty of solutions are already available [39,40,41,42,43,44,45,46,47,48]. Some of them, such as Meshroom [39], are minimalistic, in that their scope is only to process a set of images into a resulting 3D mesh. Others, such as OpenDroneMap [45], DroneDeploy [48] and Correlator3D [46], are specialised in aerial photogrammetric reconstruction, for example, assisting mission planning for obtaining the photos or optimising reconstruction for such types of images. Others, e.g., PIX4Dmapper [42], PhotoModeler [40], Autodesk Recap [43], 3DF Zephyr [44] and Elcovision [47], offer complete solutions covering the entire photogrammetry pipeline, including flight planning in the case of aerial photography, image capturing, post-processing and analysis of the resulting reconstruction. Such software packages often come with specialised plug-ins; e.g., Elcovision has a special plug-in for forensics, which includes blood splatter analysis. Some of the aforementioned software packages have been extensively evaluated in [49,50].
A qualitative distinction between photogrammetry types is that outdoor photogrammetry can be assisted by the collaborative use of a GNSS device, which can provide location readings. Such information enables the transformation of the captured point cloud to a reference coordinate system without requiring the availability of object points with known coordinates (GCPs). Furthermore, the readings provide information about the scale of the scene, which is a requirement for most use-cases. In addition, the photogrammetry algorithm makes use of the location readings to reduce reconstruction errors in its camera pose estimation. This auxiliary information provides greater benefit in cases of aerial photogrammetry. The reason is that camera locations are farther apart than in terrestrial scans, where the distance between camera poses is similar to the error of a conventional GPS device.
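As an illustration of this georeferencing step, the sketch below estimates the similarity transform (scale, rotation and translation) that maps reconstructed camera centres onto their GNSS-derived coordinates; it assumes the GNSS readings have already been converted to a local metric frame, and all coordinate values are illustrative.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst points (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # guard against a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(axis=1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Camera centres in the arbitrary frame of the photogrammetric model, and the
# matching GNSS readings in a local metric frame (all values illustrative).
model_xyz = np.array([[0.1, 0.2, 1.0], [2.3, 0.1, 1.1], [2.2, 1.9, 1.0], [0.0, 2.1, 1.2]])
gnss_xyz = np.array([[10.0, 20.0, 100.0], [32.5, 19.2, 101.0],
                     [31.4, 38.0, 100.1], [9.1, 40.0, 101.9]])
s, R, t = umeyama_similarity(model_xyz, gnss_xyz)
georeference = lambda p: s * (R @ p) + t  # maps any model point to the metric frame
```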
Though based on the same principle of operation, we make a distinction between terrestrial and aerial photogrammetry, because the mode of acquisition has a profound impact on the way of operation for the LEA.

3.2.1. Aerial Photogrammetry

For the digitisation of outdoor environments, the proliferation of Unmanned Aerial Vehicles (UAVs, drones) has broadened the horizons of surveillance and aerial photogrammetric reconstruction, providing vantage viewpoints and mechanised camera motion. In contemporary systems, the automatic acquisition of images following a specific flight plan is possible. Typically, the region to be photographed is determined by the user on a map. The specific path of the flight plan is often determined with respect to the structure of the building or environment to be scanned and automatically executed by the system without user intervention.
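The spacing of such a flight plan follows from elementary pinhole-camera relations. The sketch below computes the ground sample distance (GSD) and the waypoint spacing for a lawnmower-pattern flight; all camera parameters and overlap ratios are illustrative assumptions, not the specification of any particular drone.

```python
# Ground sample distance and waypoint spacing for a lawnmower flight plan;
# all parameter values are illustrative assumptions.
sensor_width_mm = 13.2                     # 1-inch class sensor
image_width_px, image_height_px = 5472, 3648
focal_mm = 8.8
altitude_m = 40.0
front_overlap, side_overlap = 0.80, 0.70   # common photogrammetric defaults

# Metres of ground covered by one pixel at the given altitude.
gsd_m = (sensor_width_mm / 1000.0) * altitude_m / ((focal_mm / 1000.0) * image_width_px)
footprint_w_m = gsd_m * image_width_px     # ground footprint across track
footprint_h_m = gsd_m * image_height_px    # ground footprint along track
trigger_distance_m = footprint_h_m * (1.0 - front_overlap)  # distance between shots
line_spacing_m = footprint_w_m * (1.0 - side_overlap)       # distance between lines

print(f"GSD {gsd_m * 100:.1f} cm/px; shoot every {trigger_distance_m:.1f} m; "
      f"flight lines {line_spacing_m:.1f} m apart")
```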
A disadvantage of this approach is that scene segments of interest may not be visible from aerial views, such as the locations below the eaves of buildings. Other environment structures, such as trees and power poles, can limit the available space for a flight. Nevertheless, terrestrial views obtained from the ground can be combined with the aerial ones. This solution requires at least two scanning processes: one aerial and one or more terrestrial, depending on the complexity of the scene.
UAVs with decent camera capabilities and a consumer-grade GPS cost around EUR 1500. Higher-end models are more expensive because they carry better cameras and/or GPS sensors; e.g., a drone with a GPS of centimetre-level accuracy costs around EUR 6600 [51].

3.2.2. Terrestrial Photogrammetry

Terrestrial photogrammetry applies both indoors and outdoors. Usually, it involves the acquisition of images by a person using a handheld camera. These images must be acquired through a systematic protocol for the photogrammetric reconstruction algorithm to succeed. This protocol mainly dictates the occurrence of significant overlap between images acquired at neighbouring poses. In aerial photogrammetry, little training is needed because this protocol is implemented by the system. However, in terrestrial photogrammetry, some training is required for the protocol to be followed by the person acquiring the images.
As such, terrestrial photogrammetric reconstruction of wide areas (e.g., a room or a concert hall) exhibits the disadvantage that it becomes tedious. In particular, for large indoor regions, the following difficulties are often encountered:
  • A lack of sufficient illumination.
  • A lack of visual texture, often encountered on walls and ceilings. This is a problem because photogrammetric algorithms are based on the detection and establishment of point correspondences across the acquired images. When there is a lack of texture, no keypoints exist in the acquired images, making the results of photogrammetry unreliable (a quick feature-count check, sketched below, can flag such surfaces in advance).
  • Surfaces of high reflectance, which exhibit illumination specularities when directly illuminated, such as metallic and glass objects, often found in indoor environments.
  • A large number of occlusions due to the structure of human-made indoor rooms and furniture.
Some of these difficulties are treatable with direct steps. For example, the lack of texture is treatable by the introduction of markers and the lack of illumination by the introduction of a light source. Nevertheless, complex structures, particularly in small, indoor environments, are difficult to scan even with the use of the above mitigation measures. A common reason is the difficulty of treating dynamic shadows due to the operator of the camera and luminous specularities in the environment.
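A simple pre-scan check of this kind can be automated: counting the local features detectable on a test shot of a surface indicates whether photogrammetry is likely to succeed or whether markers should be placed first. A minimal sketch, with the filename and the threshold as assumptions:

```python
import cv2

def texture_score(image_path, min_keypoints=500):
    """Count local features on a test shot; few keypoints suggest that
    photogrammetry will be unreliable and that markers are needed."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints = cv2.ORB_create(nfeatures=5000).detect(img, None)
    return len(keypoints), len(keypoints) >= min_keypoints

n, sufficient = texture_score("wall_test_shot.jpg")  # hypothetical test photograph
print(f"{n} keypoints detected -> {'proceed' if sufficient else 'place markers'}")
```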
A decent DSLR or mirrorless camera costs around EUR 500–1500, depending on the model and additional kit lenses. Additional equipment, such as a tripod or other stabilisation gear, may be required for better shots; however, scanning time will increase significantly due to the extra effort required of the operator. Cameras with low-light capabilities and stabilisation (sensor or lens) are a good all-around option, as they do not require special equipment.

3.2.3. Photogrammetry Using Mobile Phones

Recent advances in mobile phone development have resulted in the emergence of devices that are capable of real-time processing of input from the camera, along with motion features (from other sensors), to provide augmented reality (AR) experiences. AR, for example, is present in the popular ARKit developed by Apple and supported by recent Apple mobile phones. Such technology can be utilised as a complementary cue for real-time feedback during the scanning of an object (taking photos) with the phone. An application of this is found in the Trnio 3D Scanner mobile application [52]. As the authors of the application state, “Trnio will use the ARKit to know when to take pictures”. Therefore, it utilises the technology both for user feedback and for judging optimal views for the photogrammetry pipeline. The actual 3D reconstruction runs in the cloud, i.e., images are uploaded to the vendor's server. This is either because the computation capabilities of the phones are limited or because the piece of software that runs the computation cannot be distributed to the clients.
Upcoming versions of the Trnio application will also benefit from the LiDAR sensor, which is included in the high-end models, such as iPhone 12 Pro/Pro Max. As the authors state, “depth maps from the LiDAR sensor for faster alignment, and to fill gaps”. This is a true combination of photogrammetry and direct measurement technology and is expected to be a breakthrough for small-scale reconstructions.
In its current version, we consider that Trnio and other similar applications are on par with ordinary photogrammetry in terms of scanning resolution, user effort and drawbacks, with a few advantages. First, a mobile phone is a versatile, omnipresent device with many more uses than a digital photo camera or other special equipment. Cameras on mobile phones have seen great advances recently, matching professional-grade counterparts for specific applications. Moreover, the AR capability of mobile phones is a useful guide during object scanning. A disadvantage of mobile phones is that the camera sensor is very small, which decreases performance in low-light conditions. The cloud-based nature of the application is both an advantage and a disadvantage. On the one hand, no powerful PC is required to run the photogrammetry software. On the other hand, privacy issues may arise due to uploading photos that may contain sensitive information to the cloud.

3.2.4. Discussion

In general, photogrammetric reconstruction is less accurate than laser scanning, but it is particularly useful for photorealistic reconstructions and practical usage in the coverage of wide areas. Photorealism is achieved for small-scale reconstruction, where the camera captures the scene or objects of interest with higher resolution, or equivalently, the subject is represented by the image sensor at a scale close to its real-world size. Photorealism is of less interest for wide outdoor areas, which, however, can be quickly captured using UAVs. Limitations of aerial photogrammetry due to occlusions can be compensated with the addition of terrestrial views.

3.3. RGB-D Scanning

During the last 10 years, the proliferation of imperceptible, active-illumination sensors (RGB-D cameras) has played a significant role in the development of new Computer Vision approaches and attracted new interest to older works in the domain of 3D surface reconstruction that had been hindered by the limitations of binocular or multi-view stereo. One of these approaches is Simultaneous Localization and Mapping (SLAM) [53], which was reinforced by RGB-D sensors due to the additional depth information that they provide.
In comparison with photogrammetry, RGB-D scanning falls short, particularly due to the limitations of off-the-shelf sensor hardware. The most important limitation is the range within which it is reliable: 0.5–1.5 m. This distance, in combination with the relatively low definition of the RGB camera, produces less realistic textures. Moreover, the sensor is mainly designed for indoor use. To operate outdoors, very careful illumination insulation and engineering are required. Very low levels of (sun)light are required, so that they do not overcome the intensity of the active illumination component. At the same time, a controlled light source is required, so that there is sufficient light for adequate texture from the RGB component of the sensor. The digitisation modality is more resistant to a lack of texture due to the use of active illumination. Nevertheless, it inherits the disadvantage of SLAM, which requires correspondences across images to retain camera tracking.
On the other hand, it exhibits the following advantages. The sensor cost is relatively low, in the order of EUR 300. The sensor is lightweight and handheld, though it requires an attached laptop or tablet, because the sensor is available “as is” and not bundled with an image-recording module. The scanning procedure is simpler than photogrammetric image acquisition because active illumination allows for greater freedom in the trajectory of the handheld sensor. In comparison with photogrammetry, it exhibits the advantage of simpler camera manipulation in cluttered environments. It could thereby comprise a handy and cost-efficient tool for cases where a simple scan satisfies the requirements of documentation. In the context of our investigation of digitisation tools, we developed an RGB-D scanning modality based on state-of-the-art RGB-D SLAM and reconstruction [54], tailored for this type of sensor. This produces results that are not as brilliant as the ones obtained by the expensive handheld scanner. They are also not as good as the results obtained by tedious and time-consuming photogrammetry, but the RGB-D sensor is reasonably priced and scanning is easy, with real-time feedback of what is being scanned. Figure 3 demonstrates the setup, sensor and laptop during scanning.
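For illustration, the sketch below shows the volumetric fusion step at the core of such a pipeline, using the open-source Open3D library: depth and colour frames are integrated into a truncated signed distance volume from which a mesh is extracted. The camera poses are assumed to come from a SLAM front-end, and the filenames and parameters are placeholders rather than the exact configuration of our implementation.

```python
import numpy as np
import open3d as o3d

# Default intrinsics of a PrimeSense-class sensor (e.g., Asus XTION).
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005,  # 5 mm voxels
    sdf_trunc=0.02,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

slam_poses = [np.eye(4)]  # placeholder: 4x4 camera-to-world poses from a SLAM front-end

for i, pose in enumerate(slam_poses):
    color = o3d.io.read_image(f"color_{i:05d}.jpg")  # placeholder filenames
    depth = o3d.io.read_image(f"depth_{i:05d}.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=1.5,  # sensor reliable to ~1.5 m
        convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))  # expects world-to-camera

mesh = volume.extract_triangle_mesh()
o3d.io.write_triangle_mesh("room_scan.ply", mesh)
```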
We propose this as a tool for LEAs that need to quickly scan an environment. Though it is not as accurate as laser scanning, decent results are obtained. In the last two years, improvements of this scanning modality have appeared on the market. The improvement mainly stems from the integration of an IMU into the modality. Measurements from this component are integrated into the SLAM estimation, most often through a Kalman filter. The modality is designed mainly for autonomously moving robots, which is the domain for which SLAM was originally developed. Unfortunately, in our case, the gentle motion of a handheld camera operated by a photographer who carefully digitises a scene is not sufficient, given the low inertial sensitivity of the sensor. In plain words, the sensor is not sensitive enough to measure the acceleration of the camera as it is slowly moved by the photographer's manipulation. As a complement to hardware improvements, recent advances in algorithmic parts of the RGB-D scanning pipeline have also led to better texture representations, even for low-resolution scanning devices [55,56], and better reconstruction accuracy [57]. Decent RGB-D scanning solutions are also present in easy-to-use commercial software [58,59]; therefore, the modality can be considered suitable even for non-experts.

3.4. Hand-Held Optical and Inertial Scanning, with Real-Time Feedback

A high-end modality comes from the combination of trinocular stereo with active illumination and inertial measurements from a sensitive IMU. For brevity, we henceforth call this type of device a handheld scanner. Such scanners are manufactured by Artec [60,61], FARO [62], Creaform [63,64] and others [65,66].
The modality exhibits clear advantages over RGB-D scanning in terms of ease of manipulation and robustness. Moreover, real-time feedback on a lightweight companion device (e.g., a tablet computer) can significantly facilitate the acquisition process. The scanning volume of such devices ranges from 0.002 to 40 m3, and their accuracy can be in the order of 0.03–1 mm. Most devices are specialised to narrower volumes with much higher accuracy, while others cover broader volumes with less accurate measurements. The modality is suitable for applications in which a subject needs to be urgently scanned from various perspectives. Similar to any optical method, it is dependent on texture and exhibits limitations with shiny objects. The main disadvantage of pertinent devices is their high cost, which is in the order of EUR 20,000–40,000.
The scope of this modality covers both indoors and outdoors, though its accuracy is reduced under bright sunlight. The reason is the same as for RGB-D sensors: sunlight is more intense than the structured illumination radiated by the sensor. As such, the emitted structured light is no longer visible to the sensors.
The high cost of these devices is an investment to be considered, as they save significant quantities of image acquisition and computation time compared to photogrammetry. On the other hand, their texture resolution is inferior compared to photogrammetrically reconstructed texture. Corresponding devices are also simple to use with very little training. In addition, such devices are typically accompanied by a real-time feedback module (e.g., a tablet computer) that indicates un-scanned areas of the environment to the user.
According to the emission technology, handheld scanners mainly correspond to two categories: laser triangulation and structured light. In the first category, devices comprise two components: a laser and a camera. The laser projects a line over the object, which is captured by the camera. The distance between the laser and the camera is known a priori; therefore, the distance to the object can be calculated by trigonometry. In the second category, structured-light sensors utilise a projection device that actively projects structured patterns, and a camera. The projector shines structured patterns onto the object, whose geometry distorts those patterns, while the camera captures the distorted patterns from another perspective. Then, correspondence is established by analysing the distortion of the captured structured images using techniques similar to stereo vision. Table 2 presents some existing handheld scanners that correspond to the above categories. As in the case of terrestrial laser scanners, a strict comparison of scanners is difficult due to inconsistency in the reported specifications. A notable difference between the scanners is that some of them, such as the Artec Leo and the Faro Freestyle, operate completely wirelessly, without requiring a powerful PC or laptop for visual feedback of what is scanned. Therefore, we regard them as more versatile for urgent situations than the others.
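The triangulation of the first category reduces to elementary trigonometry: the laser emitter and the camera, separated by the known baseline, both observe the laser spot, and the law of sines in the resulting triangle yields the range. A minimal sketch with illustrative numbers:

```python
import math

def laser_depth(baseline_m, laser_angle_rad, camera_angle_rad):
    """Perpendicular distance of the laser spot from the baseline joining the
    laser emitter and the camera; angles are measured from the baseline."""
    apex = math.pi - laser_angle_rad - camera_angle_rad  # angle at the laser spot
    return (baseline_m * math.sin(laser_angle_rad)
            * math.sin(camera_angle_rad) / math.sin(apex))

# Illustrative numbers: 10 cm baseline, laser at 80 deg, spot imaged at 70 deg.
print(f"range: {laser_depth(0.10, math.radians(80), math.radians(70)):.3f} m")
```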

3.5. Discussion

The applicability of the digitisation modalities is presented in Table 3. The outline is that there is no “holy grail” modality. Some modalities are better suited for, or only applicable to, indoor rather than outdoor environments and vice versa, due to physical constraints. As such, we recommend that at least two modalities be available for adequate scanning capabilities.
The scans that are obtained are typically in common point cloud or mesh formats (.obj, .ply), which can be viewed and/or processed by conventional 3D tools, such as MeshLab [67] or Blender [68]. However, some 3D scanners, such as those of FARO Technologies, store their data in proprietary file formats (.fls), demanding their accompanying software, i.e., FARO Scene [69]. Such software is able to export to common formats; therefore, scans can be processed along with other modalities even in separate tools. Combining partial scans from the same or different modalities at the same scanning scale regards cases where a single scan is not sufficient to capture the entire area of surfaces required to complete a scan. This situation arises in both indoor and outdoor scenarios. For example, the large size of an environment leads to independent scans that have to be individually treated. In other situations, a digitisation target may exhibit occlusions. When scanning an object, the base of the object that is in contact with the ground plane is always occluded and has to be independently scanned. In an environment, structural complexity gives rise to the need for independent scanning of environment segments; for example, the areas and surfaces underneath and above a balcony, which require combining views from aerial and terrestrial scans. In order to treat these cases, registration tools are needed and can be found in both open-source [67,68] and proprietary [69,70] software packages.
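As an example of such a registration tool, the following sketch refines the alignment of two partial scans with point-to-plane ICP in Open3D; the filenames are placeholders, and the coarse initial transform is assumed to come from markers or manual placement.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_partial_a.ply")  # placeholder exports
target = o3d.io.read_point_cloud("scan_partial_b.ply")
for pc in (source, target):
    pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

coarse_init = np.eye(4)  # replace with a marker-based or manual coarse alignment
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.02, coarse_init,  # 2 cm correspondence threshold
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", result.fitness)
source.transform(result.transformation)  # bring scan A into scan B's frame
```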

4. Guidelines for Each Use Case

In this section, we discuss the suitability of the aforementioned scanning techniques for each digitisation use-case. Suitability depends on several factors: scanning time, scanning resolution, and other technical details and/or barriers.

4.1. Prevention of a Crime

This use-case relates to actions to be considered to prevent possible threats during public events or other scheduled situations of LEAs' interest. In this case, the scanning activity is planned. The uses of the 3D reconstruction will be mainly considered during the mission planning phase, as well as during mission monitoring. In addition, 3D reconstruction may be required for mission reporting.
The time that is available during mission planning is usually ample; hence, it is assumed to be sufficient for each type of scan. For example, in outdoor scenarios, it might also be possible to wait for appropriate weather to use drone photography, which significantly simplifies the 3D reconstruction by providing airborne overview images. Correspondingly, the most appropriate equipment is likely to be available. In the case that multiple LEA agencies share equipment, appropriate planning increases the possibility of the availability of pertinent scanning modalities. In addition, the availability of maps and possibly CAD models can compensate for a lack of data due to bad weather or the unavailability of a critical scanning modality, such as a drone or a laser scanner.
Given that time and modalities are ample before the event of interest, we recommend their use and, if possible, multiple scans of critical areas by more than one scanning modality for both outdoor and indoor places. Outdoor use-cases typically regard a wide area, such as a city square or a stadium, while indoor environments typically refer to wide-area rooms that can accommodate a large number of persons. For mission monitoring and reporting, the available time is usually much shorter; therefore, convenient devices, such as a handheld laser scanner or mobile photography, are the most appropriate for quickly scanning smaller areas and objects of interest.
In outdoor use-cases, using aerial photogrammetry is often the only way to obtain a 3D reconstruction of the top of a building. For indoor environments, laser scanning is a practical and efficient way to obtain a 3D reconstruction of a wide-area indoor scene. The procedure is automatic, and the obtained reconstruction exhibits high accuracy. Complex environments may call for several scans. This requires little effort, as the only user action required is to place the scanner at a new position. An understanding of the FOV of the scanner is required by the operator to avoid leaving parts of the scene uncovered.
In wide-area indoor environments, photogrammetry can be particularly challenging, as these environments are abundant in textureless and glossy surfaces (e.g., walls, floors, etc.). Photogrammetry is not the optimal modality for the treatment of such scenes. However, if it is the only modality available, the use of markers is highly recommended. Marker placement has been confirmed to be allowed by LEAs at the time of 3D scene reconstruction, as it does not interfere with their processing protocol. In addition, partial scanning of the environment is recommended to reduce errors due to camera-tracking drift. The partial reconstructions can be registered at a later stage.
Though the use-case is preventive, special regions of the scene may be of particular interest for the preparation of a situation. Such a region can be, for example, a particular location where a LEA officer can be placed to better observe the scene. In another scenario, a particular place in the scene may need to be realistically presented in a mission preparation stage, e.g., a particular piece of machinery situated in that environment. As such, their detailed reconstruction may be required.
For this purpose, photogrammetry and handheld scanning are two reliable candidates, with the handheld scanner being the better of the two. Nevertheless, decent and insightful results can be obtained with careful (and tedious to achieve) photogrammetric settings. In indoor environments, an RGB-D scanning modality can be used for scene details, though with inferior results regarding texture reconstruction (typically due to the medium resolution of the RGB component of such sensors).
Registration is a very important part of this use-case. As discussed in Section 3, it is relevant to the combination of multiple scans from the same or different modalities at the same or different scales of observation.
Finally, this use-case requires that the reconstruction is georeferenced, such that it is associated with the location of LEAs and their vehicles (e.g., provided by their mobile phones). This is usually straightforward, as both aerial and terrestrial cameras embed GPS information. If this is not available, an external GPS device can be utilised. For example, some DSLR cameras do not incorporate GPS modules; however, geolocation information can be provided either by using specialised modules [71], by using a mobile phone and a relevant application for receiving the location over a Bluetooth connection, or simply by associating photos with GPS logs at a later stage, provided the clocks on both devices are synchronised.
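A minimal sketch of the latter approach is given below: each photo is associated with the GPS fix nearest in time, assuming the two clocks were synchronised; all records are illustrative.

```python
import bisect
from datetime import datetime

# GPS track as (UTC time, latitude, longitude) and photo capture times taken
# from EXIF DateTimeOriginal; all records below are illustrative.
gps_log = [
    (datetime(2021, 5, 31, 10, 0, 0), 35.3040, 25.0790),
    (datetime(2021, 5, 31, 10, 0, 5), 35.3041, 25.0792),
    (datetime(2021, 5, 31, 10, 0, 10), 35.3042, 25.0794),
]
photos = [("DSC_0001.jpg", datetime(2021, 5, 31, 10, 0, 4))]

times = [t for t, _, _ in gps_log]
for name, shot_time in photos:
    i = bisect.bisect_left(times, shot_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_log)]
    nearest = min(candidates,
                  key=lambda j: abs((gps_log[j][0] - shot_time).total_seconds()))
    _, lat, lon = gps_log[nearest]
    print(f"{name} -> ({lat:.4f}, {lon:.4f})")  # geotag for the photo
```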
Examples of multimodal scans are provided below. For the outdoor use-case, we present an example reconstruction of a building located in a suburban environment. The building was initially reconstructed using photogrammetry and aerial images captured from a drone (Figure 4). Subsequently, points of interest around the building were also scanned in more detail using a handheld scanner (Figure 5). For the indoor use-case, we present an example reconstruction of an office room scanned with RGB-D modality (Figure 6) and a lab room scanned using photogrammetry (Figure 7). To achieve higher detail of particular objects of complex geometry, a handheld laser scanner is a better option (Figure 8).

4.2. Analysis of a Crime Scene

This use-case regards post-event actions following a crime, simultaneously requiring plenty of accurate scene information and taking into account that acquisition time may be critical. Its goal is the acquisition of as much information from the scene as possible, accurately measured and precisely localised, within the time and hardware resources available.
Spatial accuracy plays an important role in the hypothetical investigation of the scene and the evidence-based verification or rejection of hypotheses. Moreover, detailed scanning can digitise evidence, which may be studied afterwards.
Time criticality mainly pertains to outdoor cases of this scenario and, in particular, the case where event traces (e.g., footprints, tire marks) are only temporarily available, as they can be eroded by the weather. In some other cases, rapid acquisition of images and measurements may be required because the target of reconstruction must be removed from the event location (e.g., cars obstructing traffic). The corresponding reconstructions are to be used in the documentation of evidence, crime investigation and educational scenarios of cases of specific interest.
The spatial range of this type of reconstruction is relatively smaller than in mission planning, because the crime event has already occurred at a specific location. Hence, the region is smaller, which makes the detailed reconstruction of the scene more feasible.
In contrast to mission planning, accuracy is more important in this case, as it regards the documentation of potential evidence. Moreover, accurate measurements can be of importance to the interpretation of the acquired data; for example, the accurate measurement of the size of a footprint is relevant to the information to be obtained from its analysis. Moreover, in contrast to mission planning, the suitability of weather conditions or the availability of basic infrastructure (e.g., supply of electricity) cannot be assumed. Finally, as this reconstruction is not planned, higher-end equipment may not be available.
Crime scene processing and documentation is not an instantaneous process but rather proceeds in stages. Though existing LEA protocols already contain state-of-the-art photographic documentation (e.g., 360-degree panoramic photographs), 3D scanning is not yet widely integrated into the standard operating procedure of processing teams. Furthermore, in contrast to photographic documentation, which can be acquired without entering the scene and can be performed by LEAs, detailed and metrically accurate 3D scanning still requires expertise and may require the inclusion of markers.
As such, scanning the scene is not possible at an arbitrary time during scene processing, and thus, solutions must be found towards this end. To achieve this, we will use existing LEA documentation and investigation hypothesis tools that (a) utilise photographic documentation acquired from LEAs and (b) facilitate the formulation of 3D hypotheses on how the scene could have been at multiple time instants before the arrival of the CSI team.
The proposed approach is based on LEA requirements and is as follows. First, an overview of the scene is required. This can be provided by a range of means, from a single photograph to a detailed 3D reconstruction of the area. Photographic documentation is necessary, even if a reconstruction of the scene is performed later on. Initial photographs may document the scene at an earlier state than when the scene is available for scanning; by that later time, the victim may have been removed from the scene. In addition, LEAs may determine as necessary the removal of scene elements (e.g., furniture) to uncover inaccessible parts of the scene. As such, the initial photographic documentation is valuable because it provides a record of the scene in that earlier state. This documentation can be used later on to simulate the scene as found, e.g., including the victim and furniture.
Utilising 3D scanning for crime scene documentation may require preparation before scanning, e.g., to address the time-related factors mostly pertaining to outdoor scenes. For example, to retain event traces (e.g., footprints, tire marks) that may deteriorate due to weather, rapid documentation is required. Otherwise, a tent is recommended for the preservation of pertinent traces until they are digitised. This type of protection can extend the time that pertinent traces exist and provide the opportunity for their scan.
Laser scanning is a practical and efficient way to obtain a 3D overview of the scene. This type of input provides the possibility of documenting the scene without intervention and from a distance. More importantly, the procedure is automatic, and the obtained reconstruction exhibits high accuracy. Nevertheless, in complex environments, several scans might be required. This requires little effort, as the only user action required is to place the scanner at a new position. However, an understanding of the FOV of the scanner is required by the operator to avoid leaving parts of the scene uncovered. Such skill is easy to acquire during training sessions at various locations before the utilisation of the scanner for the event of interest.
The partial scans obtained from the scene can be semi- or fully automatically reassembled later on, sometimes without the need for marker placement. If the structure of the scene is rich and contains definite structures (e.g., planes, edges, corners), such as in indoor and human-made outdoor environments, then the point clouds can be registered based on their structure alone. However, in cases of poor structure, e.g., outdoor fields, the placement of markers is recommended to compensate for the lack of structure.
Laser scanning will capture scene details with accuracy but is not necessarily the optimal way to reconstruct traces on the ground. The reason is that, due to the obliqueness of view and self-occlusions, such traces are usually not fully visible from the standard tripod placement of the scanner.
In indoor environments, the absence of a laser scanner can be compensated by the use of an RGB-D scanning modality but results in scans of inferior detail and accuracy. In addition, the procedure is not automatic and demands user effort.
Photogrammetry finds application in this use-case in several ways. In outdoor scenes, aerial photogrammetry is an efficient way to obtain an overview of the scene. The significance of this view is denoted by the corresponding term, “bird's eye view”. Even if the scene is small, aerial photogrammetry is an efficient way to acquire a scene overview without intervention. Handheld photogrammetry of the same scene exhibits the disadvantage of requiring manual effort and, in addition, that this person would have to walk within the scene. Moreover, if the scene contains important elements that are higher than human reach, these may not be possible to scan from above, unless a drone is utilised.
In indoor environments, photogrammetry can be particularly challenging, mainly because very often the environment:
  • contains surfaces that are poor in structure, e.g., white walls;
  • is cluttered, giving rise to occlusions and increasing the difficulty of the scanning process;
  • contains shiny surfaces, e.g., the floor and polished surfaces.
A light source and an illumination diffuser are necessary for almost every indoor situation. In the aforementioned conditions, photogrammetry does not typically suffice to treat wide-area indoor scenes, such as a large room or a concert hall. If photogrammetry is the only modality available to 3D scan the entire scene, the use of markers is recommended. Marker placement has been confirmed to be allowed by LEAs at the time of 3D scene reconstruction, as it does not interfere with their processing protocol.
Photogrammetry is a time and cost-efficient way to reconstruct a portion of a scene in higher than overview detail. Such details are well reconstructed with texture realism. They provide a very clear way to inspect the portion of the scene in photorealistic quality. Outdoor scenes are usually rich in texture due to the presence of soil, pebbles, leaves, etc. However, in some cases, the use of markers may be required to compensate for lack of texture, e.g., in the presence of snow.
Detailed reconstruction of scene details with high-geometric accuracy is recommended to be carried out with a handheld scanner. This is currently the fastest and simplest modality to record the geometric structure of traces such as a footprint, a tire mark, or a bump on the surface of a car. In the case where geometrical features are to be extracted (e.g., the pattern of a sole in a footprint or the structure of tire marks), such a modality provides metric accuracy and geometric structure that is devoid of shadow effects. Figure 9 demonstrates the application of handheld laser scanning for the reconstruction of a footprint.

Special Case: Indoor Crime Scene

Usually, crimes and, more specifically, assassinations take place indoors and at particular locations. Under this scope, we choose to demonstrate the entirety of the modalities that we have considered so far on a simulated indoor crime scene. An overview and some details of the scene are presented in Figure 10. Such pictures would be included in traditional photographic documentation of the scene. In subsequent paragraphs, we present the scene, the included objects and the scanning modalities that were utilised for each of them. More specifically, we demonstrate indicative screenshots as obtained from the reconstruction using each modality. For better comprehension of the reconstruction quality obtained from each modality and/or further research on the topic, we release the 3D files of the scene to the public (link to the Supplementary Materials data: https://doi.org/10.5281/zenodo.5116478, accessed on 28 July 2021). We also provide ground-truth measurements as a comparison to those that could be obtained using the 3D reconstructions.
The crime (assassination) has been committed in a small room (Figure 10a,e). The body of a dead man (Figure 10d) is located on the floor in the middle of the room. Suspicious objects lie all around. Propaganda magazines and leaflets are on a table and a noticeboard (Figure 10g,h). A mobile phone and drugs are located on the bed (Figure 10c), while a tablet was found under the bed (Figure 10f). Some tools relevant to bomb preparation are located on the floor (Figure 10b).
The available modalities were the following:
  • Terrestrial laser scanner (FARO Focus M70)
  • Handheld laser scanner (FARO Freestyle 3DX)
  • Custom RGB-D scanner based on Asus XTION structured-light sensor
  • DSLR camera for photo acquisition, processed with the Pix4D photogrammetry software
  • iPhone 12 Pro Max with the Trnio 3D Scanner application
An overview of the scene was scanned solely with the terrestrial laser scanner. This modality was selected because it is the most suitable for obtaining a quick scan of the entire room without interfering with the objects of interest. It requires special markers; therefore, fiducial markers were printed and placed around the scene, as shown in Figure 10. The resulting reconstruction is demonstrated in Figure 11.
The details of the scene were scanned with all modalities. The results are shown in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17. The partial scans can be incorporated into the larger scene, as shown in Figure 18. The process is semi-automatic, uses appropriate tools, and comprises (a) manual placement of the partial scan near the location of the object in the large scan and (b) automatic registration using correspondences of the 3D structure, as sketched below.
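The two-step incorporation described above (coarse manual placement followed by automatic refinement on the 3D structure) can be illustrated with point-to-plane ICP in Open3D. This is a minimal sketch under assumptions, not the exact tool used for Figure 18; file names, the search radius and the initial pose are placeholders.

```python
# Sketch: refining a manually pre-placed partial scan against the overview
# scan using point-to-plane ICP (Open3D). File names and thresholds are
# illustrative placeholders, not the settings used in the paper.
import numpy as np
import open3d as o3d

overview = o3d.io.read_point_cloud("room_terrestrial.ply")   # large scan
detail = o3d.io.read_point_cloud("footprint_handheld.ply")   # partial scan

# Point-to-plane ICP needs normals on the target cloud.
overview.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Step (a): the coarse pose obtained by manually dragging the partial scan
# near its true location (identity used here as a stand-in).
init_pose = np.eye(4)

# Step (b): automatic refinement from correspondences of the 3D structure.
result = o3d.pipelines.registration.registration_icp(
    detail, overview,
    max_correspondence_distance=0.02,  # 2 cm correspondence search radius
    init=init_pose,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

detail.transform(result.transformation)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```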
To obtain a better indication of the measurement accuracy of each 3D scanning modality, we conducted an indicative quantitative analysis. The analysis was designed to compare all modalities; its focus is therefore on some small and medium-sized objects, with measured dimensions in the order of a few millimetres up to a few centimetres. Ground-truth measurements were obtained using a digital meter and, where possible, confirmed against the manufacturers’ specifications. We note that a photogrammetric reconstruction is not in real scale, because only relative, not absolute, distances are estimated. To recover the real scale, we used the A4 sheets with markers that were present in the scene, since their dimensions are known; a sketch of this step follows below. Table 4 lists the ground truth and the corresponding measurements obtained in MeshLab. For each object, an average of five measurements is reported in order to compensate for errors in using the measurement tool. For some modalities, the corresponding objects were not reconstructed well; therefore, measurement was not possible.
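The scale-recovery step amounts to a single uniform scale factor derived from the control object. Below is a minimal sketch, assuming the long edge of an A4 sheet (297 mm) can be identified in the unscaled mesh; the “measured” value and vertex array are hypothetical stand-ins.

```python
# Sketch: recovering real scale for a photogrammetric mesh from an object of
# known size. An A4 sheet is 297 mm x 210 mm; the "measured" value below is a
# hypothetical distance between two sheet corners read off the unscaled mesh.
import numpy as np

A4_LONG_EDGE_MM = 297.0
measured_long_edge = 2.41            # same edge, in arbitrary mesh units

scale = A4_LONG_EDGE_MM / measured_long_edge   # mesh units -> millimetres

# Applying the uniform scale to all vertices preserves shape; only the unit
# changes. A toy vertex array stands in for the real mesh here.
vertices = np.array([[0.00, 0.00, 0.00],
                     [2.41, 0.00, 0.00],       # the measured A4 edge
                     [2.41, 1.70, 0.00]])
vertices_mm = vertices * scale

print(np.linalg.norm(vertices_mm[1] - vertices_mm[0]))  # ~297.0 mm
```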
First of all, we observe that the measurements obtained from the terrestrial laser scanner reconstructions deviated significantly from the corresponding real sizes. This confirms that such a scanner is suited to large objects and structures rather than small objects. For the other modalities, we observe that the handheld laser scanner and the DSLR + Pix4D were comparable, while iPhone + Trnio was close to them. The measurements from the RGB-D modality either deviated significantly, or the corresponding reconstructions were unsuitable for measurement, in almost all cases. More specifically, the handheld laser scanner performed according to its specification (1 mm accuracy), except for the measurement of the screw. We attribute this shortcoming to the reflective surface of the screw, which is a known disadvantage of laser scanners in general. As stated above, the photogrammetric modalities required a control object of known size for recovering the real scale of the reconstruction. On the one hand, this extra step may have negatively affected the final measurements due to error propagation. On the other hand, placing a control object before scanning may be a hard requirement in an urgent situation. Overall, we propose the handheld laser scanner as the most appropriate tool for crime scene analysis, due to its versatility and consistency in measurements. If ample scanning time is available, the photogrammetry-based techniques are also appropriate alternatives. Finally, we regard the RGB-D modality as appropriate only for a rough reconstruction of larger objects; it is not reliable for smaller ones, especially for obtaining measurements.
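The comparison summarised above can be reproduced directly from the values in Table 4. The short script below computes the mean absolute deviation from ground truth per modality, skipping the objects a modality failed to reconstruct; it is provided as a convenience for readers, not as part of the original analysis.

```python
# Mean absolute deviation from ground truth per modality, using the
# measurements of Table 4 (millimetres); None marks failed reconstructions.
gt = [107.16, 95.32, 50.55, 11.75, 134.67]
modalities = {
    "FARO Focus M70":     [None,   90.01, 43.42, None,  135.25],
    "FARO Freestyle 3DX": [109.11, 94.48, 51.58, 11.59, 133.29],
    "RGB-D":              [None,   88.39, 46.60, None,  134.56],
    "DSLR + Pix4D":       [107.64, 94.36, 49.23, 11.67, 132.61],
    "iPhone + Trnio":     [106.35, 92.55, 47.69, 11.16, 133.19],
}
for name, vals in modalities.items():
    errs = [abs(v - g) for v, g in zip(vals, gt) if v is not None]
    print(f"{name}: mean abs. error {sum(errs) / len(errs):.2f} mm "
          f"over {len(errs)} objects")
```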

4.3. Education of Stakeholders through VR Representation

This use-case regards the education of LEAs through the virtual presentation of realistic crime scenes; the corresponding reconstructions thus provide content for the VR components of education. In this context, the digitisation of individual objects and traces to be incorporated in the reconstructed environments is also included. This use-case is not subject to time constraints; it is thus possible to gather all of the required equipment, wait for suitable weather, arrange optimal illumination conditions, etc.
Reconstructions in this use-case typically cover a wide or small area that is scanned in detail throughout, so that a person navigating it in VR can look closely at any region of the reconstruction, inspect it, and assess its importance.
The main challenges in this use-case are the requirements for texture realism and scene completeness, so that trained LEAs can navigate a complete environment without holes and artefacts. On the one hand, this means that the reconstructed environment should be photorealistic, of good quality, and free of reconstruction artefacts that may obstruct the purposes of the educational activity. For example, in a task that focuses on the detection of traces, the reconstruction should be seamless, so that the location of traces is not revealed by artefacts. On the other hand, as these are educational scenarios rather than evidentiary records, post-processing can be applied to improve the reconstruction result. For example, the reconstruction can be edited to facilitate the insertion of items and traces that were not present in the scene during the scan. In addition, some reconstruction artefacts can be suppressed in post-processing. Figure 19 shows the scan of an amphitheatre obtained with the laser scanning modality, before and after post-processing. An overview of the scene was captured by a terrestrial laser scanner (FARO Focus M70) and some details (e.g., the podium) by a handheld laser scanner (FARO Freestyle 3DX). The scans were combined inside the FARO SCENE software [69] and post-processed in Blender [68]. Post-processing resulted in a more consistent and photorealistic scan, as educational content requires.
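A typical artefact-suppression pass of this kind can also be scripted in Blender [68]. The following is a minimal sketch using standard, generic clean-up operators, assuming the imported scan is the active mesh object; it is not the exact sequence of edits applied to Figure 19.

```python
# Sketch: suppressing common reconstruction artefacts in Blender via its
# Python API. Assumes the imported scan is the active object; these are
# generic clean-up operators, not the exact edits applied to Figure 19.
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Remove floating fragments and duplicate geometry typical of noisy scans.
bpy.ops.mesh.delete_loose()
bpy.ops.mesh.remove_doubles(threshold=0.001)  # merge vertices within 1 mm

# Close small holes so that the trainee sees a seamless surface in VR.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.fill_holes(sides=8)  # only fill holes bounded by <= 8 edges

bpy.ops.object.mode_set(mode='OBJECT')
```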
The requirement for texture realism leads to the use of photogrammetry rather than laser scanning, at least for the reconstruction of textures, owing to the high level of texture realism that photogrammetric methods provide. Moreover, images must be acquired close to the imaged surfaces, so that the generated textures have sufficient resolution.
The requirement for scene completeness means that all parts of the scene must be scanned in detail. This can be quite challenging in indoor environments, which typically exhibit many occlusions; a significant number of images is required so that all scene surfaces are sufficiently imaged and can be reconstructed. In addition, the complexity of the environment often causes the camera tracking employed by photogrammetric algorithms to be lost, yielding multiple, unregistered reconstructions of the environment that need to be combined at a later stage; an automatic way to coarsely align such fragments is sketched below.
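One plausible way to recover a coarse alignment between unregistered fragments, before local ICP refinement, is feature-based global registration. The sketch below uses FPFH features with RANSAC in Open3D; this is an assumed approach for illustration, not the procedure used in our pipeline, and file names, voxel size and thresholds are placeholders.

```python
# Sketch: coarse alignment of two unregistered reconstruction fragments via
# FPFH features + RANSAC (Open3D), to be followed by ICP refinement.
# File names, voxel size and thresholds are illustrative placeholders.
import open3d as o3d

def preprocess(path, voxel=0.05):
    pcd = o3d.io.read_point_cloud(path)
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

src, src_fpfh = preprocess("fragment_a.ply")
tgt, tgt_fpfh = preprocess("fragment_b.ply")

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_fpfh, tgt_fpfh,
    mutual_filter=True,
    max_correspondence_distance=0.075,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(0.075)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

print("coarse transform:\n", result.transformation)
```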
Fully scanning a scene can be very time-demanding, and thus planning is required to obtain useful results efficiently. Although the time frames of this use-case are not strict, planning helps to avoid time-consuming interference of the scanning activity with the daily activities at the scene of interest. Some scene areas, such as the area under a bed, can be challenging to capture. Pertinent reconstruction tasks benefit from preparation of the 3D scene to be scanned and simulated. Scenes should therefore be carefully selected and prepared, as they are intended to comprise educational material. This preparation can be facilitated as follows:
  • Reconstruct an empty scene;
  • Separately scan items to be inserted in the scene;
  • Assemble individual scans into a curated educational scene.
In this way, the application allows the trainee to look for traces, just as in a real environment. Moreover, models of environments and objects can be combined in multiple ways, so that multiple educational scenarios can be covered by reusing the repository of digital assets.
This design also provides flexibility to the corresponding educational applications. For example, one of the end-user requirements is that the trainer should be able to change the occurrence and locations of different objects and traces in the simulated environment. Having objects scanned independently from the environment permits the placement of a simulated object at any location in the 3D scene. In contrast, if objects were scanned along with the environment, then moving an object would create a hole in the reconstruction.
This approach was followed in synthesising a backyard scene. The scene was initially captured by photogrammetry, as shown in Figure 20. A wallet and a mobile phone were scanned independently (Figure 21) and placed into the backyard scene using an appropriate tool (Figure 22).
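Programmatically, placing an independently scanned object amounts to applying a rigid pose to its mesh in the coordinate frame of the scene. The following is a minimal sketch with placeholder file names and pose values, not the specific curation tool used for Figure 22.

```python
# Sketch: placing an independently scanned object (e.g., the wallet) into the
# reconstructed backyard scene by composing a 4x4 pose. Files and pose values
# are placeholders; a curation tool would expose these interactively.
import numpy as np
import open3d as o3d

scene = o3d.io.read_triangle_mesh("backyard_scene.ply")
wallet = o3d.io.read_triangle_mesh("wallet_scan.ply")

# Desired pose of the wallet in scene coordinates: yaw rotation + translation.
yaw = np.radians(35.0)
pose = np.eye(4)
pose[:3, :3] = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [0.0,          0.0,         1.0]])
pose[:3, 3] = [1.20, 0.40, 0.02]   # metres in the scene frame

wallet.transform(pose)

# Because the object is a separate asset, the trainer can re-pose it anywhere,
# which is impossible when objects are baked into the environment scan.
o3d.io.write_triangle_mesh("curated_scene_wallet.ply", wallet + scene)
```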

5. Conclusions

In this paper, we presented the state of 3D digitisation technologies from the perspectives of crime prevention, crime investigation and the education of LEAs. Each digitisation modality was presented in various contexts: outdoors and indoors, large and small regions of interest, and short and long time frames for scanning.
Our findings confirmed the efficacy of the high-end digitisation modalities, i.e., terrestrial and handheld laser scanners, especially in cases where precise measurements are of greater interest than texture quality. As their high cost is a deterrent to purchase, we recommend that a device either be shared among regional LEAs or be rented, first for training and subsequently on demand for the scans. Of the two, we recommend the handheld laser scanner, because it is the most versatile and can be carried both during a mission and afterwards, during the investigation of a possible event of interest. Aerial photogrammetry is the de facto standard for scanning large outdoor areas; however, for blind spots and regions where high texture quality is mandatory, its terrestrial counterpart or laser scanning is needed. Terrestrial photogrammetry, and especially its mobile-based variant, proved best for the high-quality textures that are mandatory for VR presentation or other means of close inspection. In particular, under certain conditions, such as adequate lighting of the scene, mobile-based photogrammetry is very flexible, as a mobile phone is an omnipresent device. Finally, the RGB-D modality lies somewhere between laser scanning and photogrammetry, in the sense that it is low-cost and permits direct measurements with sufficient precision in most circumstances.
As technology advances, more and more devices will emerge. For example, close-range 3D laser scanning on mobile phones and its wide-range counterpart on drones are expected to boost 3D scanning, especially if these technologies are used as complementary to existing photogrammetry-based solutions. The consumerisation of 3D scanning will naturally bring such technology into LEA workflows; however, we consider that existing solutions already permit broad utilisation in digital forensics across a variety of use-cases. In the future, we will work closely with LEAs to transfer our insights and findings and to define systematic guidelines for their specific use-cases.

Supplementary Materials

Photos of the simulated crime scene and ground-truth measurements are attached to the dataset, which is available at https://doi.org/10.5281/zenodo.5116478, accessed on 28 July 2021.

Author Contributions

Conceptualization, G.G., X.Z., S.-E.F. and S.A.; methodology, G.G. and X.Z.; software, G.G.; validation, G.G.; formal analysis, G.G.; investigation, G.G. and X.Z.; resources, X.Z.; data curation, G.G. and T.E.; writing—original draft preparation, G.G.; writing—review and editing, G.G., X.Z. and S.-E.F.; visualization, G.G.; supervision, X.Z., S.-E.F., S.A., T.T. and S.V.; project administration, T.T. and S.V.; funding acquisition, T.T. and S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from the European Union’s Horizon 2020 research and innovation programme, project CONNEXIONs, under grant agreement No. 786731.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The multi-modal dataset that was described and briefly presented in the text is available at https://doi.org/10.5281/zenodo.5116478, accessed on 28 July 2021.

Acknowledgments

The facilities for the creation of the dataset were provided by the FORTH-ICS internal RTD Programme ‘Ambient Intelligence and Smart Environments’. The authors would also like to thank Panagiotis Koutlemanis for their contribution to the creation of the dataset.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR: Augmented reality
CAD: Computer-aided design
CSI: Crime scene investigation
DSLR: Digital single-lens reflex camera
IMU: Inertial measurement unit
FOV: Field of view
GCP: Ground control point
GIS: Geographic information system
GNSS: Global navigation satellite system
GPS: Global positioning system
LEA: Law enforcement agency
RGB: RGB color model (from red, green and blue components)
SLAM: Simultaneous localization and mapping
VR: Virtual reality

References

1. Osman, M.R.; Tahar, K.N. 3D accident reconstruction using low-cost imaging technique. Adv. Eng. Softw. 2016, 100, 231–237.
2. Kreul, D.; Thali, M.; Schweitzer, W. Case report: Forensic 3D-match of hair brush and scalp abrasions revealing dynamic brush deformation. J. Forensic Radiol. Imaging 2019, 16, 34–37.
3. Amamra, A.; Amara, Y.; Boumaza, K.; Benayad, A. Crime Scene Reconstruction with RGB-D Sensors. In Proceedings of the 2019 Federated Conference on Computer Science and Information Systems (FedCSIS), Leipzig, Germany, 1–4 September 2019; pp. 391–396.
4. Le, Q.; Liscio, E. A comparative study between FARO Scene and FARO Zone 3D for area of origin analysis. Forensic Sci. Int. 2019, 301, 166–173.
5. Liscio, E.; Le, Q.; Guryn, H. Accuracy and reproducibility of bullet trajectories in FARO Zone 3D. J. Forensic Sci. 2020, 65, 214–220.
6. Fahrni, S.; Delémont, O.; Campana, L.; Grabherr, S. An exploratory study toward the contribution of 3D surface scanning for association of an injury with its causing instrument. Int. J. Leg. Med. 2019, 133, 1167–1176.
7. Forensic Technology Center of Excellence. Success Story: Advancing 3D Virtual Microscopy for Firearm Forensics; Forensic Technology Center of Excellence: Research Triangle Park, NC, USA, 2019.
8. Wieczorek, T.; Przyłucki, R.; Lisok, J.; Smagór, A. Analysis of the Accuracy of Crime Scene Mapping Using 3D Laser Scanners. In International Workshop on Modeling Social Media; Springer: Cham, Switzerland, 2018; pp. 406–415.
9. Zhang, W.; Kosiorek, D.A.; Brodeur, A.N. Application of Structured-Light 3-D Scanning to the Documentation of Plastic Fingerprint Impressions: A Quality Comparison with Traditional Photography. J. Forensic Sci. 2019, 65, 784–790.
10. Whelan, D.; Weggel, D.; Moss, J.; Howe, A. Post-Blast Investigative Tools for Structural Forensics by 3D Scene Reconstruction and Advanced Simulation. Natl. Crim. Justice Ref. Serv. 2019, 19, 252954.
11. Cerreta, J.S.; Burgess, S.S.; Coleman, J. UAS for Public Safety Operations: A Comparison of UAS Point Clouds to Terrestrial LIDAR Point Cloud Data using a FARO Scanner. Int. J. Aviat. Aeronaut. Aerosp. 2020, 7, 6.
12. Villa, C.; Jacobsen, C. The Application of Photogrammetry for Forensic 3D Recording of Crime Scenes, Evidence and People. In Essentials of Autopsy Practice: Reviews, Updates and Advances; Springer International Publishing: Cham, Switzerland, 2019; pp. 1–18.
13. Luchowski, L.; Pojda, D.; Tomaka, A.A.; Skabek, K.; Kowalski, P. Multimodal Imagery in Forensic Incident Scene Documentation. Sensors 2021, 21, 1407.
14. Le, Q.; Liscio, E. FARO Zone 3D Area of Origin Tools with Handheld 3D Data. J. Assoc. Crime Scene Reconstr. 2019, 23, 1–10.
15. Esaias, O.; Noonan, G.W.; Everist, S.; Roberts, M.; Thompson, C.; Krosch, M.N. Improved Area of Origin Estimation for Bloodstain Pattern Analysis Using 3D Scanning. J. Forensic Sci. 2019, 65, 722–728.
16. Liscio, E.; Bozek, P.; Guryn, H.; Le, Q. Observations and 3D Analysis of Controlled Cast-Off Stains. J. Forensic Sci. 2020, 65, 1128–1140.
17. Süncksen, M.; Teistler, M.; Hamester, F.; Ebert, L.C. Preparing and guiding forensic crime scene inspections in virtual reality. In Proceedings of the Mensch und Computer, Hamburg, Germany, 8–11 September 2019; pp. 755–758.
18. Mach, V.; Valouch, J.; Adámek, M.; Ševčík, J. Virtual reality–level of immersion within the crime investigation. In MATEC Web of Conferences; EDP Sciences: Les Ulis, France, 2019; Volume 292, p. 01031.
19. Sieberth, T.; Dobay, A.; Affolter, R.; Ebert, L.C. Applying virtual reality in forensics–a virtual scene walkthrough. Forensic Sci. Med. Pathol. 2019, 15, 41–47.
20. Sieberth, T.; Dobay, A.; Affolter, R.; Ebert, L. A toolbox for the rapid prototyping of crime scene reconstructions in virtual reality. Forensic Sci. Int. 2019, 305, 110006.
21. Ebert, L.C.; Ptacek, W.; Breitbeck, R.; Fürst, M.; Kronreif, G.; Martinez, R.M.; Thali, M.; Flach, P.M. Virtobot 2.0: The future of automated surface documentation and CT-guided needle placement in forensic medicine. Forensic Sci. Med. Pathol. 2014, 10, 179–186.
22. Norman, D.G.; Wade, K.A.; Williams, M.A.; Watson, D.G. Caught Virtually Lying—Crime Scenes in Virtual Reality Help to Expose Suspects’ Concealed Recognition. J. Appl. Res. Mem. Cogn. 2020, 9, 118–127.
23. Wang, J.; Li, Z.; Hu, W.; Shao, Y.; Wang, L.; Wu, R.; Ma, K.; Zou, D.; Chen, Y. Virtual reality and integrated crime scene scanning for immersive and heterogeneous crime scene reconstruction. Forensic Sci. Int. 2019, 303, 109943.
24. Tredinnick, R.; Smith, S.; Ponto, K. A cost-benefit analysis of 3D scanning technology for crime scene investigation. Forensic Sci. Int. Rep. 2019, 1, 100025.
25. Liu, S. Three-dimension Point Cloud Technology and Intelligent Extraction of Trace Evidence at the Scene of Crime. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2019; Volume 1237, p. 042027.
26. Johnson, A.; Pandey, A. Three-dimensional scanning—A futuristic technology in forensic anthropology. J. Indian Acad. Forensic Med. 2019, 41, 128–131.
27. Bahirat, K.; Prabhakaran, B. A study on lidar data forensics. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 679–684.
28. Berezowski, V.; Mallett, X.; Moffat, I. Geomatic techniques in forensic science: A review. Sci. Justice 2020, 60, 99–107.
  29. Artec Europe. Artec Ray Laser Scanner. Available online: https://www.artec3d.com/portable-3d-scanners/laser-ray-v2 (accessed on 6 July 2021).
  30. FARO Technologies. FARO Focus Laser Scanners. Available online: https://www.faro.com/en/Products/Hardware/Focus-Laser-Scanners (accessed on 6 July 2021).
  31. Leica Geosystems AG. Leica ScanStation P40/P30—High-Definition 3D Laser Scanning Solution. Available online: https://leica-geosystems.com/products/laser-scanners/scanners/leica-scanstation-p40–p30 (accessed on 6 July 2021).
  32. Leica Geosystems AG. Leica ScanStation P50—Long Range 3D Terrestrial Laser Scanner. Available online: https://leica-geosystems.com/products/laser-scanners/scanners/leica-scanstation-p50 (accessed on 6 July 2021).
  33. Leica Geosystems AG. Leica RTC360 3D Laser Scanner. Available online: https://leica-geosystems.com/products/laser-scanners/scanners/leica-rtc360 (accessed on 6 July 2021).
  34. Zoller + Fröhlich GmbH. Z + F IMAGER® 5010X, 3D Laser Scanner. Available online: https://www.zf-laser.com/Z-F-IMAGER-R-5010X.3d_laser_scanner.0.html?&L=1 (accessed on 6 July 2021).
  35. Teledyne Optech. Polaris Terrestrial Laser Scanner (TLS) Series. Available online: https://www.teledyneoptech.com/en/products/static-3d-survey/polaris/ (accessed on 6 July 2021).
  36. Trimble Inc. Trimble Laser Scanning Solutions. Available online: https://geospatial.trimble.com/products-and-solutions/laser-scanning (accessed on 6 July 2021).
  37. RIEGL Laser Measurement Systems GmbH. RIEGL Terrestrial Laser Scanners. Available online: http://www.riegl.com/nc/products/terrestrial-scanning/ (accessed on 6 July 2021).
  38. Aniwaa Pte. Ltd. Aniwaa 3D Scanner Comparison Engine. Available online: https://www.aniwaa.com/comparison/3d-scanners (accessed on 6 July 2021).
  39. AliceVision. Meshroom: A 3D Reconstruction Software. Available online: https://github.com/alicevision/meshroom (accessed on 25 May 2021).
  40. PhotoModeler Technologies. PhotoModeler Photogrammetry Software. Available online: https://www.photomodeler.com (accessed on 6 July 2021).
  41. Agisoft LLC. Agisoft Metashape. Available online: https://www.agisoft.com (accessed on 6 July 2021).
  42. Pix4D SA. PIX4Dmapper. Available online: https://www.pix4d.com/product/pix4dmapper-photogrammetry-software (accessed on 6 July 2021).
  43. Autodesk Inc. Recap Pro. Available online: https://www.autodesk.com/products/recap (accessed on 6 July 2021).
  44. 3Dflow SRL. 3DF Zephyr. Available online: https://www.3dflow.net/3df-zephyr-photogrammetry-software (accessed on 6 July 2021).
  45. OpenDroneMap Authors. ODM—A Command Line Toolkit to Generate Maps, Point Clouds, 3D Models and DEMs from Drone, Balloon or Kite Images. Available online: https://opendronemap.org (accessed on 6 July 2021).
  46. SimActive Inc. Correlator3D. Available online: https://www.simactive.com/correlator3d-mapping-software-features.html (accessed on 6 July 2021).
  47. PMS AG. Elcovision 10. Available online: https://en.elcovision.com/ (accessed on 6 July 2021).
  48. DroneDeploy. DroneDeploy. Available online: https://www.dronedeploy.com (accessed on 6 July 2021).
49. Mezhenin, A.; Polyakov, V.; Prishhepa, A.; Izvozchikova, V.; Zykov, A. Using Virtual Scenes for Comparison of Photogrammetry Software. In Advances in Intelligent Systems, Computer Science and Digital Economics II; Hu, Z., Petoukhov, S., He, M., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 57–65.
50. Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157.
  51. DJI. Phantom 4 RTK. Available online: https://www.dji.com/gr/phantom-4-rtk (accessed on 25 May 2021).
  52. Trnio, Inc. Trnio 3D Scanner. Available online: https://www.trnio.com/ (accessed on 25 May 2021).
53. Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
54. Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
55. Lee, J.H.; Ha, H.; Dong, Y.; Tong, X.; Kim, M.H. TextureFusion: High-quality texture acquisition for real-time RGB-D scanning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1272–1280.
56. Ha, H.; Lee, J.H.; Meuleman, A.; Kim, M.H. NormalFusion: Real-Time Acquisition of Surface Normals for High-Resolution RGB-D Scanning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 21–24 June 2021; pp. 15970–15979.
57. Xu, Y.; Zhou, L.; Tang, H.; Wu, Q.; Xie, Q.; Chen, H.; Wang, J. Robust and Accurate RGB-D Reconstruction with Line Feature Constraints. IEEE Robot. Autom. Lett. 2021, 6, 6561–6568.
  58. ImFusion GmbH. RecFusion Website. Available online: https://www.recfusion.net (accessed on 6 July 2021).
  59. DotProduct LLC. Dot3D Pro Website. Available online: https://www.dotproduct3d.com/dot3dpro.html (accessed on 6 July 2021).
  60. Artec Europe. Artec Spider Handheld Scanner. Available online: https://www.artec3d.com/portable-3d-scanners/artec-spider (accessed on 6 July 2021).
  61. Artec Europe. Artec Leo Handheld Scanner. Available online: https://www.artec3d.com/portable-3d-scanners/artec-leo (accessed on 6 July 2021).
  62. FARO Technologies. FARO Freestyle 2 Handheld Scanner. Available online: https://www.faro.com/en/Products/Hardware/Freestyle-2-Handheld-Scanner (accessed on 6 July 2021).
  63. Creaform. Go!SCAN SPARK Handheld Scanner. Available online: https://www.creaform3d.com/en/handheld-portable-3d-scanner-goscan-3d (accessed on 6 July 2021).
  64. Creaform. HandySCAN 3D Handheld Scanner, SILVER Series. Available online: https://www.creaform3d.com/en/handyscan-3d-silver-series-professional-3d (accessed on 6 July 2021).
  65. Scantech (Hangzhou) Co. Ltd. KSCAN-Magic Composite 3D Scanner. Available online: https://www.3d-scantech.com/product/kscan-magic-composite-3d-scanner (accessed on 6 July 2021).
  66. SHINING 3D. EinScan HX, Hybrid Blue Laser & LED Light Source Handheld 3D Scanner. Available online: https://www.einscan.com/handheld-3d-scanner/einscan-hx (accessed on 6 July 2021).
67. Cignoni, P.; Corsini, M.; Ranzuglia, G. MeshLab: An Open-Source 3D Mesh Processing System. ERCIM News 2008, 2008, 129–136.
68. Blender Online Community. Blender—A 3D Modelling and Rendering Package; Stichting Blender Foundation: Amsterdam, The Netherlands, 2018.
  69. FARO Technologies. FARO SCENE Software Website. Available online: https://www.faro.com/en/Products/Software/SCENE-Software (accessed on 6 July 2021).
  70. Leica Geosystems AG. Leica Cyclone REGISTER. Available online: https://leica-geosystems.com/products/laser-scanners/software/leica-cyclone/leica-cyclone-register (accessed on 6 July 2021).
  71. Canon. GPS Receiver GP-E2. Available online: https://www.usa.canon.com/internet/portal/us/home/products/details/cameras/gps-receivers/gps-receiver-gp-e2 (accessed on 6 July 2021).
Figure 1. A taxonomy of scene types by size and type.
Figure 2. Photogrammetric reconstruction using structure from motion. (Left) Core assumption that different viewpoints share the same world points; (Middle) the building captured from many viewpoints; (Right) the reconstructed building.
Figure 3. Our RGB-D scanning setup in action. The operator is able to inspect the parts of the object that have already been scanned, live on screen.
Figure 4. Top and side views from the outdoor reconstruction of a building using aerial photogrammetry.
Figure 5. Detailed scans of particular points of interest with different modalities, i.e., a staircase (handheld scanner) and A/C units (terrestrial photogrammetry).
Figure 6. (Left) Top view of the reconstruction of a lab room using an RGB-D scanner. (Right) Side view of the same room.
Figure 7. (Left) Overview of a photogrammetric reconstruction of a lab room. (Right) Close-up of a specific point of the room.
Figure 8. Highly detailed reconstruction of the geometry of machines using a handheld laser scanner.
Figure 9. Three-dimensional reconstruction of a footprint using a handheld laser scanner: (Left) textured; (Right) textureless.
Figure 10. An overview of a hypothetical crime scene. (a,e): overview of the room; (b,c,f–h): suspicious objects; (d): victim.
Figure 11. The reconstruction of the scene using the terrestrial laser scanner: (Top-left) overview of the reconstruction; (Top-right) the victim; (Bottom-left) the bomb tools; (Bottom-right) the mobile phone, drugs and tablet.
Figure 12. The reconstruction of the victim using a (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 13. The reconstruction of the bomb tools using a (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 14. The reconstruction of the mobile phone and drugs using a (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 15. The reconstruction of the tablet: (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 16. The reconstruction of the noticeboard: (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 17. The reconstruction of the table with leaflets and newspapers: (Top-left) handheld laser scanner; (Top-right) RGB-D scanner; (Bottom-left) DSLR + Pix4D; (Bottom-right) iPhone + Trnio.
Figure 18. Details of the scene as obtained with the handheld laser scanner (top-left of Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17) and incorporated within the overview of the scene as captured by the terrestrial laser scanner (Figure 11).
Figure 19. (Left) The raw scan of an amphitheatre. (Right) The post-processed version of the scan.
Figure 20. (Top left) An overview photo of the garden. (Rest) An overview and details from the 3D reconstruction (photogrammetry).
Figure 21. Independent scans of a wallet using photogrammetry (left) and a mobile phone using handheld laser scanning (right).
Figure 22. Independent scans of a wallet and a mobile phone, manually placed inside the backyard scene.
Table 1. Terrestrial laser scanners overview. *: Depends on the model and/or scanning configuration.

| Model | Min. Distance | Max. Distance | Ranging Error/Accuracy |
|---|---|---|---|
| Artec Ray 3D Scanner [29] | 1 m | 110 m | <0.70 mm at 15 m |
| FARO Focus Series M and Series S [30] | 0.6 m | 70–350 m | ±1 mm (model M70), ±1 mm (remaining models) at 10–25 m |
| Leica ScanStation P series [31,32] | 0.4 m | 80–1000+ m * | 1.2 mm |
| Leica RTC360 [33] | 0.5 m | 130 m | 1.0 mm |
| Z + F IMAGER 5010X [34] | 0.3 m | 187.3 m | ±1 mm |
| Teledyne Optech Polaris HD [35] | 1.5 m | 1700 m | 5 mm at 100 m |
| Trimble X7/TX6/TX8 [36] | 0.6 m | 80–340 m * | ≤2 mm * |
| RIEGL VZ-Series [37] | 0.5–5 m * | 800–6000 m * | 5–15 mm * |
Table 2. Handheld scanners overview. Type abbreviations: structured light (SL), laser (L), hybrid (H). Scanners that are not cited are legacy models included for comparison, as they appear in recent literature and are also available for rent.

| Model | Type | Maximum Scanning Area | Working Distance | Accuracy |
|---|---|---|---|---|
| Artec Space Spider [60] | SL | 180 × 140 mm | 0.2–0.3 m | 0.05 mm |
| Artec Leo [61] | SL | 838 × 488 mm | 0.35–1.2 m | 0.1 mm |
| Creaform Go!SCAN 20/50 | SL | 143 × 108/380 × 380 mm | n/a, optimal: 380/400 mm | 0.100 mm |
| Creaform Go!SCAN SPARK [63] | SL | 390 × 390 mm | n/a, optimal: 400 mm | 0.05 mm |
| Creaform HandySCAN 307 [64] | L | 225 × 250 mm | n/a, optimal: 300 mm | Up to 0.04 mm |
| Creaform HandySCAN 700 [64] | L | 275 × 250 mm | n/a, optimal: 300 mm | Up to 0.03 mm |
| FARO Freestyle 3DX | L | 2600 × 2900 mm | 0.5–3 m | <1 mm |
| FARO Freestyle 2 [62] | L | 4470 × 5150 mm | 0.5–5 m | 0.5 mm at 1 m, 5 mm at 5 m |
| ScanTech KSCAN-Magic [65] | L | 1440 × 860 mm | n/a, optimal: 300 mm | 0.02 mm |
| Shining 3D EinScan HX [66] | H | 420 × 440 mm (SL), 380 × 400 mm (laser) | n/a, optimal: 470 mm | Up to 0.05 mm (SL), up to 0.04 mm (laser) |
Table 3. Applicable sensors by type and size of environment.

| Environment | Indoors | Outdoors |
|---|---|---|
| Building complex | N/A | Drone, Camera |
| Large building | N/A | Drone, Camera |
| Multiple rooms | Terrestrial laser scanner, RGB-D camera, Handheld scanner, Camera | N/A |
| Traffic scene | N/A | Terrestrial laser scanner, Drone, Camera |
| Large room | Terrestrial laser scanner | N/A |
| Room | Terrestrial laser scanner, RGB-D camera, Handheld scanner, Camera | N/A |
| Small room | Terrestrial laser scanner, RGB-D camera, Handheld scanner, Camera | N/A |
| Scene detail | RGB-D camera, Handheld scanner, Camera | Handheld scanner, Camera |
Table 4. Quantitative analysis of 3D scanning modalities. Measurements are in millimetres. The measurement closest to the ground truth is shown in bold in each case.

| Object (Dimension) | GT | FARO Focus M70 | FARO Freestyle 3DX | RGB-D | DSLR + Pix4D | iPhone + Trnio |
|---|---|---|---|---|---|---|
| Bomb tools/big screw (length) | 107.16 | n/a | 109.11 | n/a | **107.64** | 106.35 |
| Bomb tools/paper box (top cover length) | 95.32 | 90.01 | **94.48** | 88.39 | 94.36 | 92.55 |
| Victim/lips (length) | 50.55 | 43.42 | **51.58** | 46.60 | 49.23 | 47.69 |
| Victim/shirt button (diameter) | 11.75 | n/a | 11.59 | n/a | **11.67** | 11.16 |
| Tablet (width) | 134.67 | 135.25 | 133.29 | **134.56** | 132.61 | 133.19 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
