Article

Applications of Virtual Data in Subsea Inspections

1 QUANT Group, Department of Civil, Structural and Environmental Engineering, Trinity College Dublin, Dublin 2, Ireland
2 Université Bretagne-Loire, Université de Nantes, Research Institute of Civil Engineering and Mechanics (GeM)/Sea and Littoral Research Institute (IUML), CNRS UMR 6183/FR 3473, 44322 Nantes, France
3 IXEAD/CAPACITES Society, Université de Nantes, 44200 Nantes, France
4 Dynamical Systems and Risk Laboratory, School of Mechanical and Materials Engineering, University College Dublin, Dublin 4, Ireland
5 Marine Renewable Energy Ireland (MaREI), University College Dublin, Dublin 4, Ireland
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2020, 8(5), 328; https://doi.org/10.3390/jmse8050328
Submission received: 18 March 2020 / Revised: 26 April 2020 / Accepted: 1 May 2020 / Published: 7 May 2020
(This article belongs to the Special Issue Underwater Computer Vision and Image Processing)

Abstract

This paper investigates the role that virtual environments can play in assisting engineers and divers when performing subsea inspections. We outline the current state of research and technology that is relevant to the development of effective virtual environments. Three case studies are presented demonstrating how the inspection process can be enhanced through the use of virtual data. The first case study looks at how immersive virtual underwater scenes can be created to help divers and inspectors plan and implement real-world inspections. The second case study shows an example where deep learning-based computer vision methods are trained on datasets comprised of instances of virtual damage, specifically instances of barnacle fouling on the surface of a ship hull. The trained deep models are then applied to detect real-world instances of biofouling with promising results. The final case study shows how image-based damage detection methods can be calibrated using virtual images of damage captured under various simulated levels of underwater visibility. The work emphasizes the value of virtual data in creating a more efficient, safe and informed underwater inspection campaign for a wide range of built infrastructure, potentially leading to better monitoring, inspection and lifetime performance of such underwater structures.

1. Introduction

Assessing the submerged part of marine structures introduces new challenges for inspectors. Poor underwater visibility conditions [1] make damage assessment particularly difficult. Additionally, divers must contend with cold, uncomfortable, and often hazardous conditions, and they must often carry out the inspection within a narrow time window. These factors contribute to increased variability and reduced accuracy in inspection results [2,3]. Given the extensive effort and expense associated with such inspections, there is a strong need for tools that improve monitoring conditions and, ultimately, the quality of the inspection results. In this vein, this paper describes an approach for developing carefully controlled underwater virtual scenes that: (i) validate assessment methodologies, (ii) facilitate the design of better image-based damage detection tools, and (iii) enable the performance of image-based non-destructive testing (NDT) methods to be evaluated prior to real-world implementation. A virtual approach can substitute for real-world implementation in many situations and can also provide both qualitative and quantitative guidance on the best implementation methods for underwater inspections. This is particularly relevant for complex underwater inspection campaigns, or where the available time window or the variability of environmental parameters can reduce the quality of the data obtained unless the campaign is first assessed using virtual simulations. Several recent projects have focused on virtual reality (VR) or simulated augmented reality (AR), providing insight into archaeological robotics [4,5], underwater cultural heritage [6], and underwater mining [7]. The present work attempts to fill a remaining gap in terms of engineering use-cases relevant to commercial, safety and serviceability aspects, and builds on this need in a nascent and rapidly evolving sector.
Three distinct case studies are used in this paper to illustrate applications of virtual data in underwater inspections. The first case study looks at how creating a virtual environment to enable virtual reality (VR) can play a role in developing inspection methodologies to meet the needs of fast-emerging technologies like fish farms, floating wind turbines, wave devices, ocean-bed cables, risers and long umbilicals. In particular, this case study features a virtual aquaculture site. The aquaculture sector is rapidly growing, and new fish cage designs are continually coming online. VR will be particularly important at lower technology readiness levels for assessing the suitability of an offshore design or intervention solution. The capacity to test inspection methodologies and NDT tools without having to deploy full campaigns or large-scale experiments is of high practical value for inspectors [8,9].
The second case study shows how virtual imagery can be used to help develop new algorithms by generating vast amounts of labelled photorealistic imagery, which can then be used to train deep neural networks. Deep learning techniques have experienced a surge in popularity in recent years and have demonstrated strong performance across a range of computer vision applications [10]. However, these techniques typically require large quantities of training data, which are generally unavailable for underwater inspection applications. Other works have resorted to internet-based search approaches as a means of assembling large datasets [11], but these approaches require manual verification (since, for instance, searching for images using the keyword "crack" returns many images that do not relate to structural cracks), and there may be copyright issues. Furthermore, labelled datasets generated from internet-based searches only provide a class label for each image and do not indicate the location of the object of interest within the image. For the most part, this limits their usefulness to classification problems, rather than segmentation and localization problems. To address this limitation, this case study creates a virtual scene of a barnacle-fouled ship hull, and from this, an extensive dataset of synthetic imagery is generated with accurate ground-truth information that reveals the exact location of barnacles in the scene. This large dataset is then used to train a convolutional neural network (CNN), which is then applied to detect barnacles in a video captured as part of a real-world ship hull inspection campaign.
The third case study demonstrates how virtual scenes can be used to evaluate the performance of image-processing algorithms. This case study investigates how the performance of corrosion and crack detection algorithms varies with turbidity level. A key point here is that the synthetically generated turbidity levels can be calibrated against known and physically meaningful turbidity levels from the publicly accessible Underwater Lighting and Turbidity Image Repository (ULTIR) [1], so that realistic conditions can be accurately represented in virtual environments. The value of this case study is that it gives inspectors an insight into the relationship between underwater visibility and the performance of image-based damage assessment techniques. Furthermore, it enables inspectors to assess the viability of adopting image-processing approaches prior to an inspection and helps them to identify the limits at which image-based methods begin to produce unacceptably poor results (i.e., high probability of false alarms and low probability of detection).

2. Background

Exploiting virtual data is an established practice in many fields, such as investigating manufacturing production processes in the automotive industry [12] and training pilots in the aerospace sector, where flight simulators are routinely used to replicate a host of scenarios [13]. These simulators are useful for practising and refining common tasks, such as take-off, landing and coordinating with air traffic control, as well as for handling emergency events, helping pilots respond to such situations in a safe environment. Additionally, virtual reality (VR) is increasingly gaining a foothold in the consumer market, where VR headsets are becoming more affordable and immersive, such as the HTC Vive Pro Eye with foveated rendering, hands-free interaction and precision eye tracking [14].
Adopting VR technology for underwater inspections shares many of the same benefits that flight simulators bring to the aerospace sector. VR-based inspection simulations can help divers gain a better understanding of what to expect during inspections in a risk-free setting and can serve as an effective route-planning tool by giving inspectors a chance to identify parts of a structure that require special attention (e.g., critical joints) and to easily relay this information to the dive team. Communicating this information with the help of VR is more visually compelling compared with traditional approaches, which usually consist of presenting the dive team with a written brief outlining the task at hand. Depending on the type of structure, divers may be asked to assess:
(1) Corrosion or indicators of corrosion
(2) Consumption of cathodic protection
(3) Presence and appearance of cracks
(4) Exposed rebar, missing bolts and signs of damage to coatings, sealings, joints, etc.
(5) Deformation of the structure
(6) Presence of scour and erosion
(7) Upstream and downstream blockages
(8) Presence and extent of marine growth colonisation
These tasks are carried out for traditional built infrastructure, like underwater sheet piles, and for burgeoning sectors like monopiles for offshore wind turbines, where limited weather windows are available for inspection. In practice, it is often challenging for the diver to determine what is noteworthy. Having the ability to virtually "walk through" an underwater infrastructure scene with the dive team, and to show site-specific visual examples of important structural components and damage forms that should be documented, is a major asset, especially when language, technical and interpretive barriers exist between the engineers and the dive team and/or when the divers have limited experience and are not experts in structural assessment or marine biology/chemistry [15,16].
Numerous researchers have looked at exploiting virtual environments for underwater applications, mainly for underwater archaeological and gaming applications where VR allows users to explore underwater sites of cultural significance such as shipwrecks [17,18,19]. A virtual and augmented reality system for the exploration of underwater archaeological sites was demonstrated in [20]. The system offers archaeologists and the general public a way to explore a realistic reconstruction of such sites and glean new insights from them. Similarly, in [21], virtual reality technologies were proposed as tools that could help increase exploration time at an underwater archaeological site.
In [22], a virtual-environment-based testbed was developed as an alternative to difficult, costly, and potentially hazardous real-time testing and evaluation of control algorithms for autonomous underwater vehicles (AUVs). Other examples of graphical simulators used for AUVs are outlined in [23]. In another study [24], the authors explored the process of automatically creating virtual tours from footage captured using an omnidirectional underwater camera on AUVs that can cover large marine areas with precise navigation.
Virtual underwater scenes have also been developed to facilitate SCUBA training and to provide virtual SCUBA diving experiences. In [25], a highly immersive, multi-sensory VR simulation is demonstrated whereby users are attached to a motion platform with their outstretched arms and legs placed in a suspended harness. They receive visual and aural feedback through the Oculus Rift head-mounted display and a pair of headphones for added realism. Additionally, buoyancy, drag, and temperature changes are simulated through various sensors.
Virtual reality technology has been employed to aid the inspection process in a number of cases. Virtual reality systems have been developed for aircraft inspection and maintenance training [26] and for fire safety inspections [27]. In [28], the authors discuss the use of virtual tours, augmented reality, and informational modelling for visual inspection and structural health monitoring. A methodology was presented for integrating existing data and meta-data about a structure into a combined virtual tour (VT), augmented reality (AR) and informational modelling (IM) environment. The objective of their method was to enable on- and off-site presentation of engineering assessment data in an organized, intuitive, and interactive manner, and additionally to foster communication between the different parties involved with a structure. However, to the authors' knowledge, no works have considered the use of virtual environments for overcoming problems that are specific to underwater damage detection as part of structural inspections. This research is especially geared towards exploring application areas where VR can support inspection coordination and improve the quality of collected inspection data. Thus, an important contribution of this research is the identification of applications where VR can contribute to underwater monitoring, and the description of how virtual environments can be simulated for these applications.

3. Case Studies

This section presents three case studies that show various benefits of using virtual environments as a tool for enhancing underwater inspections. The first case study focuses on how virtual environments can play a role in developing inspection methodologies, the second shows how virtual data can be used to develop performant deep learning computer vision techniques, while the third demonstrates how virtual scenes can be employed to gauge the performance levels of image-processing-based damage assessment algorithms. The overall motivation behind the choice of cases is to extend the applicability of VR approaches to a range of traditional and burgeoning infrastructure sectors related to the marine environment. Typically, such sectors have strong commercial and social relevance, and their inspections are limited by lack of access and by constraints in archiving, analysing and synthesizing the obtained information. To this effect, three sectors are chosen. The first sector considered is fisheries, which is currently undergoing significant modernization through automation, analysis and monitoring [29]. The second case study focuses on the growth of barnacles on ship hulls and addresses an often-overlooked area of fluid-structure interaction due to marine growth. For static or mobile structures, marine growth can alter the hydrodynamic loading on the structure; fundamentally, this variation in force is related to the extent and surface roughness of the growth, eventually impacting the parameters in Morison's equation [30]. Extensive VR-driven simulation can thus integrate existing knowledge about such marine growth and eventually lead to standardization of the design process around this topic, linked to specific types of growth. The third sector considered in this paper is degrading built infrastructure in the marine environment, where significant information from diving or remotely operated vehicle (ROV) driven campaigns is often available, but guidelines on the right environment in which to carry out such inspections are inadequate, mainly due to variations in lighting and turbidity conditions. The impact of such inspections is directly related to the safety and performance of these structures. While the choice of the three sectors is strongly guided by safety considerations, the impact of each is presented in Table 1, mapped to the Sustainable Development Goals outlined by the United Nations (UN).

3.1. Virtual Scene Simulation to Investigate Assessment Methodologies: A Case Study on Fisheries

This case study demonstrates four distinct areas where virtual scenes can assist inspectors in the rapidly emerging sector of fisheries: (a) exploration, (b) interaction, (c) route planning, and (d) developing safety procedures. To create such a tool, the virtual underwater environment needs to be developed first, as presented in Figure 1. This step can be complex and related to site-specific conditions.
A virtual base scene, specific to the site under consideration, can be developed using cross-platform game engines. This paper uses Unity [31] in this regard. The base scene consists of the landscape, water plane, outdoor lighting model and the atmosphere involving the sky and sun features. In this paper, an external module (Aquas) was added to the project to extend the functionality of the water-related features for the base scene.
The structures within the base scene are incorporated by importing their 3D models (including marine growth shapes, colours and textures if required) based on existing information from drawings and/or previous inspections. Realistic marine life and bubbles within such scenes make the virtual experience richer, and it is important for moving objects or structures within the water to respect fundamental physics as much as possible.
To allow the created scene to be explored virtually in all directions (including jumping and dashing), camera controls are added next, typically operated by the user with the mouse and arrow keys. Optical effects underwater due to varied lighting and turbidity are important factors influencing the quality of information obtained from images, and such effects are simulated by adding a low-contrast blue fog representing the scattering and wavelength-dependent absorption of light rays underwater. The density of the added blue fog is representative of the turbidity of the water; a minimal image-space sketch of this fog model is given below. Ambient above-water and underwater sounds (e.g., breaking waves, marine life, diving splash) are added next to create an immersive experience for users.
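The depth-dependent blue fog can be thought of as a simple exponential attenuation of the scene towards a fog colour. The following Python sketch applies such an effect in image space; it is an illustrative approximation only, and the fog colour and attenuation values are assumptions rather than the settings used in the Unity scene.

```python
import numpy as np

def apply_blue_fog(image, depth_m, turbidity):
    """Blend an RGB image toward a blue fog colour with distance.

    image     : float array in [0, 1], shape (H, W, 3)
    depth_m   : per-pixel camera-to-surface distance in metres, shape (H, W)
    turbidity : attenuation coefficient per metre; larger values mean murkier water
    """
    fog_colour = np.array([0.15, 0.35, 0.45])        # low-contrast blue (assumed value)
    transmission = np.exp(-turbidity * depth_m)      # Beer-Lambert-style falloff
    t = transmission[..., None]                      # broadcast over the RGB channels
    return t * image + (1.0 - t) * fog_colour

# Example: the same surface 1 m away in relatively clear vs. highly turbid water
# clear = apply_blue_fog(img, np.full(img.shape[:2], 1.0), turbidity=0.2)
# murky = apply_blue_fog(img, np.full(img.shape[:2], 1.0), turbidity=2.0)
```

Increasing the single attenuation coefficient reproduces the qualitative progression from clear to highly turbid scenes that is exploited later in Section 3.3.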
Virtual information panels are placed throughout the scene at critical points around the infrastructure to be monitored to guide users and provide site-specific information. Such panels give a virtual preview of the type of damage or other features of interest that divers should expect, and of the related monitoring tasks that subsequently need to be carried out. They can also help engineers communicate with the dive team more effectively to maximize the information obtained. An immersive experience like this should avoid any visual lag for the end users, and it may be necessary to optimize certain elements of the scene for this purpose. This is especially relevant for hand-held devices and for complex scenes, and the design of scenes should balance realism with computational efficiency based on the device on which a virtual-scene-driven training or implementation will be carried out. The deployed virtual diving scenario should support a wide range of platforms (e.g., Windows, Mac OS X, Linux, Android, WebGL) to allow preliminary virtual dives on-site.
Figure 2a presents a virtual salmon fish farm representing an actual site (Figure 2b) in Ireland. The experience starts above the water surface, so that the diver gets a good overall sense of the site. To make the experience more engaging, a realistic surrounding environment for the actual site (e.g., the coastline of the inspection site, buoys) was added.
The user can look around the above-water scene, and the underwater effects (Figure 3) become active once the user dives below the surface. For realistic effects, a continuous discharge of air bubbles, moving aquatic vegetation on the seabed and 3D models of salmon inside the fish cage are animated using artificial intelligence techniques. Exploring the scene can thus engage the user in activities specific to the site under consideration.
Interactive information panels are positioned at points of interest around the net structure of the fish farm, as shown in Figure 4. The panels advise divers on relevant damage types for inspection and on specific tasks associated with the inspections. This minimizes human error and has the potential to yield data with better specification and control for future comparison.
The use of numbered information panels also allows a pre-planned inspection route to be created, reducing time in the water, minimizing diver risks and maximizing the information obtained from dives. It can help test contingency safety procedures, and it helps relay critical information underwater in a uniform manner with a significantly lower chance of errors arising from mismatched interpretation of visual information. A sketch of how such a route might be encoded is given after this paragraph.
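Purely as an illustration, the ordered panels and their associated tasks can be encoded as a simple data structure that a pre-dive briefing tool could read out; the panel locations and tasks below are hypothetical and are not taken from the case-study site.

```python
# Hypothetical pre-planned inspection route; panel locations and tasks are
# illustrative only and do not correspond to the actual fish farm site.
inspection_route = [
    {"panel": 1, "location": "cage mooring line (north)",
     "tasks": ["check shackle wear", "photograph marine growth"]},
    {"panel": 2, "location": "net wall at 5 m depth",
     "tasks": ["inspect for net tears", "record extent of biofouling"]},
    {"panel": 3, "location": "cage base ring",
     "tasks": ["check for deformation", "verify anode condition"]},
]

for stop in inspection_route:
    print(f"Panel {stop['panel']} - {stop['location']}: " + "; ".join(stop["tasks"]))
```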

3.2. Utilizing Virtual Data to Develop Image-Processing-Based Damage Detection Techniques: A Case Study on Marine Growth on Structures

The second case study demonstrates how virtual scenes can be useful for training deep neural networks. Marine growth on structures is considered as an example in this regard, since it creates significant changes in the hydrodynamic loads on structures, and such changes are often critically dependent on a correct description of the extent and surface roughness characteristics of the growth.
Deep learning techniques offer numerous advantages over conventional techniques across a wide range of computer vision tasks by learning rich feature representations which, given a sufficiently large and varied training set, can be significantly more robust than traditional hand-crafted features. However, there is currently a paucity of large annotated training datasets for underwater applications, and this case study demonstrates how large datasets of synthetic imagery can be generated and used to train deep neural networks for such applications. A CNN is trained to automatically detect barnacles from video footage recorded as part of a ship hull inspection. Barnacles increase the surface roughness of the ship hull, which causes more drag and thus increased fuel costs and/or lower speeds. Knowing the extent of barnacle colonization and being able to track its growth over time is useful to engineers, as this information can be used as an input to more detailed computational fluid dynamics (CFD) models. The information also helps ship owners/operators decide when to carry out expensive cleaning regimes or maintenance activities, as well as to optimize the frequency of cleaning operations.
A virtual scene featuring a barnacle-covered surface was created using E-on VUE® software (Figure 5). Four prototype 3D models of barnacles were created. From these four master barnacles, thousands of new barnacle instances were generated by applying non-linear scaling operations to the originals and assigning unique material properties with controlled and random variations. A conceptual sketch of this randomization is shown below.
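The instance generation was carried out inside E-on VUE rather than with a script, but conceptually it amounts to drawing random transform and material parameters for each copy of a master model. The parameter names and ranges below are illustrative assumptions.

```python
import random

def random_barnacle_instance(master_id):
    """Parameter set for one synthetic barnacle instance (illustrative ranges only)."""
    return {
        "master_model": master_id,                                   # one of the four prototypes
        "scale_xyz": [random.uniform(0.5, 2.0) for _ in range(3)],   # non-linear (anisotropic) scaling
        "rotation_deg": random.uniform(0.0, 360.0),
        "colour_jitter": random.uniform(-0.1, 0.1),                  # small material variation
        "roughness": random.uniform(0.4, 0.9),
    }

# Thousands of unique instances derived from the four master barnacles
population = [random_barnacle_instance(random.randrange(4)) for _ in range(5000)]
```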
This allows training on a large and diverse dataset featuring a broad range of barnacle types, lighting and visibility conditions, and viewing perspectives, which in turn helps the CNN perform well when presented with new scenes. Figure 6 shows the ship and the hull surface under some of the representative lighting and visibility conditions that were considered.
A total of 16,000 images of barnacles were rendered, some of which are shown in Figure 7. Also shown are some “background” images (i.e., images without any barnacles), required for the CNN to learn the characteristic features that enable the distinction between barnacle and non-barnacle classes.
Two CNN models with the same network architecture (Figure 8) were trained from scratch for 40 epochs using synthetic data generated from the virtual scene. One model was trained using 4000 images of barnacles (48 min training time) and another using 16,000 images (189 min training time) to test whether and how their relative performance depended on the training set size; training was carried out on a 6 GB graphics card (NVIDIA GeForce GTX 1060). The trained models were applied to detect barnacles in images extracted from a real-world ship hull inspection video. A minimal training sketch is given below.
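The exact architecture is the one shown in Figure 8; the sketch below is only a minimal Keras example of a binary barnacle/background patch classifier of the same general kind, with assumed input size, layer widths and dataset object.

```python
from tensorflow.keras import layers, models

def build_barnacle_cnn(input_shape=(64, 64, 3)):
    """Minimal binary patch classifier (illustrative; not the Figure 8 architecture)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # barnacle vs. background
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# train_ds would hold the rendered synthetic patches (e.g., 4000 or 16,000 images)
# model = build_barnacle_cnn()
# model.fit(train_ds, epochs=40)
```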
Features in the images, some of which may be barnacles, were first detected using a circular filter. This was preferred over the slower approach of directly classifying every pixel with a sliding window; one plausible form of such a candidate filter is sketched below. Four sample frames of the filtered images are shown in Figure 9. Visually, the detected features correlate well with regions of barnacle fouling. Classification was performed only at locations where features were detected, which significantly reduced the detection runtime without adversely affecting accuracy. The two CNN models were applied to 10 frames of the ship hull inspection video, and a representative sample of the detection results is shown in Figure 9.
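The paper does not detail the circular filter; a Hough circle transform is one plausible stand-in for proposing candidate barnacle locations, as sketched below with guessed parameter values.

```python
import cv2
import numpy as np

def candidate_barnacle_centres(frame_bgr):
    """Propose roughly circular features as candidate barnacle locations.

    A Hough circle transform is used here as an assumed stand-in for the
    circular filter mentioned in the text; all parameters are guesses.
    """
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grey = cv2.medianBlur(grey, 5)
    circles = cv2.HoughCircles(grey, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                               param1=80, param2=25, minRadius=5, maxRadius=40)
    return [] if circles is None else np.round(circles[0]).astype(int)  # (x, y, r) rows

# The CNN is then evaluated only on patches cropped around these candidate centres,
# rather than sliding a window over every pixel of the frame.
```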
The corresponding average sensitivity, specificity, accuracy and precision metrics for the method [32] are shown in Figure 10. Sensitivity is defined as the proportion of barnacle pixels in an image which are identified by the algorithm as representing barnacles. Specificity is defined as the proportion of background pixels in the image which are correctly identified as representing the background. Accuracy measures the overall effectiveness of the algorithm in differentiating barnacle and background pixels correctly, rather than its effectiveness on a class-by-class basis, while precision is the fraction of pixels correctly identified as representing barnacles among all pixels classified as representing barnacles. These metrics can be expressed as:
\mathrm{Sensitivity} = \frac{TP}{TP + FN}
\mathrm{Specificity} = \frac{TN}{TN + FP}
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\mathrm{Precision} = \frac{TP}{TP + FP}
where TP (true positives) is the number of pixels correctly identified as representing a barnacle, FP (false positives) is the number of pixels incorrectly identified as representing a barnacle, TN (true negatives) is the number of pixels correctly identified as representing the background/uncolonized surface, and FN (false negatives) is the number of pixels incorrectly identified as representing the background. The ground truth was determined by a human operator who manually identified the barnacle regions in each image. These visually segmented images act as the control and are assumed to show the true composition of the scene.
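Given a predicted mask and the manually segmented ground-truth mask for a frame, the four metrics follow directly from the pixel counts, as in the sketch below (the function and variable names are arbitrary).

```python
import numpy as np

def pixel_metrics(predicted, ground_truth):
    """Sensitivity, specificity, accuracy and precision from boolean pixel masks.

    predicted, ground_truth : boolean arrays of the same shape, where
    True marks a barnacle pixel and False marks a background pixel.
    """
    tp = np.sum(predicted & ground_truth)
    tn = np.sum(~predicted & ~ground_truth)
    fp = np.sum(predicted & ~ground_truth)
    fn = np.sum(~predicted & ground_truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
    }
```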
The detection results in Figure 9 and the average sensitivity values shown in Figure 10 indicate that the CNN model trained on 16,000 images outperforms the model trained on 4000 images in terms of its ability to correctly detect barnacles (sensitivity) whilst producing fewer false positives (specificity). High specificity is generally achieved at the expense of sensitivity, but the model trained with 16,000 images achieves a better balance between the two metrics. The accuracy and precision are comparable for both models.

3.3. Performance Evaluation of Image-Processing-Based Damage Assessment Algorithms: A Case Study on Detecting Cracks and Corrosion in an Underwater Structure

The third case study investigates how virtual scenes can be used to gauge the performance levels of image-processing-based damage assessment algorithms. Chloride-induced corrosion and cracking of concrete structures are typical in corrosive marine environments. Visual inspections and vision-based systems can often detect their presence conveniently, but the efficacy of vision-based systems in practice is heavily reliant on the optical qualities of the underwater environment. Reduced visibility conditions diminish the ability of a camera, and of the related image-processing algorithms, to effectively identify instances of damage. An understanding of the relationship between visibility conditions and the performance of image-processing techniques is thus important to rationalise the use of image processing as part of an underwater inspection campaign [33,34]. The performance metrics typically include the Probability of Detection (PoD), the Probability of False Alarms (PFA), and Receiver Operating Characteristic (ROC) curves or planes [35,36]. Virtual underwater scenes can help extract meaningful information about the ROC curves related to the on-site performance of image-based damage detection techniques under a range of visibility conditions, providing an early estimate of the reliability of the damage detection information obtained from such image-based methods. This case study implements this approach for identifying corrosion and cracks on marine structures. A virtual wharf was created (Figure 11), and crack and corrosion damage (derived from real-world photographs) were introduced into the virtual scene.
Image quality is mainly affected by luminosity, sharpness (focus accuracy), contrast and noise [37] arising from variations in on-site operating conditions. Turbidity, defined as the cloudiness of water caused by suspended organic (e.g., decomposed plant and animal matter) or inorganic (e.g., silt, clay) solids that scatter and absorb light and reduce visibility, is one of the most influential factors in this regard [38]. Three levels of turbidity are considered here: clear water, medium turbidity, and high turbidity. Damaged surfaces under these turbidity levels are shown in Figure 12 and Figure 13 for cracks and corrosion, respectively, at a distance of 1 m from the virtual camera. The damage is clearly visible in clear water, and visibility falls with increasing turbidity until the damage is barely discernible under high turbidity conditions.
Crack detection is a well-studied topic and several image-based detection algorithms exist. Here, a percolation-based method [37] was applied to the virtual images under the varying turbidity levels shown in Figure 12. The detected cracks are shown in Figure 14, and the performance of this method under each turbidity condition is depicted via the ROC curves in Figure 15. The closer the curve is to the ideal point with coordinates (0,1), representing 100% sensitivity (no false negatives) and 100% specificity (no false positives), the better the detection performance. The best performance point for a given curve is taken as the point on the curve closest to this ideal point [20]; a sketch of how this point can be computed is given below.
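A minimal sketch of selecting that point from a sampled ROC curve is given here; the false-positive and true-positive rate arrays are assumed to come from sweeping the detection threshold of the crack detector.

```python
import numpy as np

def best_operating_point(fpr, tpr):
    """Return the ROC point closest to the ideal corner (0, 1).

    fpr, tpr : arrays of false-positive and true-positive rates obtained by
    sweeping the detection threshold (assumed to be available).
    """
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    distances = np.hypot(fpr - 0.0, tpr - 1.0)   # Euclidean distance to (0, 1)
    i = int(np.argmin(distances))
    return fpr[i], tpr[i], distances[i]

# A smaller distance indicates performance closer to 100% sensitivity and
# 100% specificity for that turbidity level.
```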
Figure 15 shows that the performance of crack detection gradually declines with increasing turbidity, but the adopted image-processing method is still capable of producing reasonable results even under high turbidity conditions. Findings of this nature are useful for inspectors, as they allow them to choose a technique appropriate to their needs and sufficiently robust to the on-site operating conditions. Virtual scenes can aid this selection process.
For corrosion damage, a good method should identify and accurately delineate all corroded regions whilst minimizing the inclusion of extraneous regions in an image. In reality, perfect damage detection is impossible to achieve given the inherent chromatic and luminous complexities encountered in natural scenes. Some corroded surfaces are more distinguishable by comparing texture attributes, while others are distinguishable by colour. Colour intensity and texture analysis-based methods are typically applied for corrosion detection, and techniques in each group are naturally suited to different applications. As an example, an established texture analysis technique [39] is applied to corrosion stain images captured under the three turbidity levels (Figure 13); a generic sketch of texture-based feature extraction is given below. The detected corroded regions are shown in Figure 16, and the performance of this method under each turbidity condition is depicted by the ROC curves in Figure 17.
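The specific texture technique is the one described in [39]; as a generic illustration of texture-based discrimination, the sketch below computes grey-level co-occurrence matrix (GLCM) features for an image patch using scikit-image. The feature set and parameter values are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # requires scikit-image

def texture_features(patch_grey):
    """GLCM texture features for one uint8 greyscale patch (illustrative only)."""
    glcm = graycomatrix(patch_grey, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Patches whose feature vectors differ markedly from those of sound concrete
# (e.g., by thresholding or a simple classifier) are flagged as corroded regions.
```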
For corrosion detection, the overall performance of the texture analysis-based technique was moderate in clear water conditions, while the technique did not produce compelling results in high turbidity conditions. Under such circumstances, the choice of this technique for implementation in field conditions can be questioned or scrutinized. Predicting the success of image-processing-based methods when applied in field conditions is a challenging prospect, and virtual scenes can help inspectors get a sense of the expected performance under realistic operating conditions prior to carrying out an actual inspection. This is not only economical but can also avoid exposing divers to unnecessary risk. When compared with recent works [40], the performance estimates obtained from virtual scenes are observed to be of a similar level to those from field investigations.

4. Discussion and Conclusions

The use of virtual technology has developed considerably over the past few years. This paper examines various ways in which virtual environments and virtual data can help improve underwater inspections. Immersive virtual underwater scenes can assist divers and inspectors when planning and conducting real-world inspections. Divers can experience a virtual and unrestricted tour of real underwater inspection sites before carrying out the inspection, from which they can develop a rich understanding of the scene and are better positioned to develop effective inspection strategies. These benefits of virtual environments are illustrated in the first case study, which features a fish farm site.
The second study highlights the value that virtual scenes can have for training deep learning techniques. Deep learning techniques have already attracted significant interest in other fields owing to the high performance that they can achieve; however, their impact in the domain of underwater imaging remains limited, largely due to the lack of appropriate training data. Curating a dataset takes time and domain-specific knowledge of where and how to gather relevant information, and it often involves a human operator having to manually identify and delineate objects of interest in real-world images. This is a tedious and time-consuming task considering that datasets of thousands, or even tens of thousands, of training images are typically required to build a robust and effective deep network. The high cost of assembling datasets puts state-of-the-art deep learning techniques beyond the reach of many practitioners. This case study presents a framework for generating large datasets of synthetic images, from which deep neural networks can be trained and applied to tackle real-world problems. A virtual scene of an underwater ship hull surface is created, and from this, a large dataset of synthetic imagery is generated with accurate ground-truth information that reveals the exact composition of the scene. This large dataset is used to train a convolutional neural network, which is then applied to a real video of an underwater ship hull inspection, where the model successfully detects barnacles on the surface of the ship.
The third study demonstrates how virtual scenes can be used to evaluate the performance of image-processing-based algorithms. The quality of subsea inspections largely depends on the ability of inspectors to detect and objectively record details of defects. The type of damage present and the on-site operating conditions are crucial factors that dictate the effectiveness of image-processing algorithms for detecting damage. The reduced visibility conditions often associated with underwater scenes diminish the ability of the camera, and of subsequent image-processing algorithms, to successfully identify instances of damage. It is therefore important that inspectors develop an understanding of the relationship between visibility conditions and the performance of image-processing techniques so that they can rationalize the use of image-processing methods as part of an underwater inspection campaign. The presented case study shows how virtual scenes can be useful in this regard.
While the use of virtual data in the inspection process has numerous advantages, there are also some limitations. In terms of practical limitations, the production of a simulated environment and the rendering of synthetic data can be a time-consuming and highly involved process, especially for complex structures where 3D models are not already available and are difficult to model from scratch. Additionally, rendering underwater scenes is often a computationally intensive process because of the highly reflective and refractive materials in the scene.
Inspectors and machine-learning practitioners may also question how much they can 'trust' the validity of the virtual data, since there will be a gap between real-world imagery and synthetic data. For this reason, inspectors should view virtual data as an ancillary tool that can aid the inspection planning process and should not rely too heavily on it. Similarly, machine learning practitioners should be mindful that some details may not be captured when creating the simulated environment and, as a consequence, damage-assessment algorithms trained with synthetic data may not generalize well to some real-world examples. Therefore, it is prudent to collect some degree of "real" data in order to validate the machine learning model.
In terms of future research directions, the concept of the "digital twin" is attracting growing interest and appears to be a promising research direction for marine structures as well. A close pairing between the virtual model and the physical structure allows data to be analysed (for example, fluid-structure simulations can be performed on updated virtual models to get a sense of the actual forces on the real-world structure) and systems to be monitored in order to guard against problems before they arise, thereby preventing downtime and allowing researchers to explore 'what-if' scenarios and plan future actions using simulations.

Author Contributions

Conceptualization, B.G., F.S. and V.P.; methodology, B.G., F.S. and V.P.; software, M.O.; validation, M.O., B.G. and V.P.; formal analysis, M.O.; investigation, M.O. and V.P.; resources, B.G. and F.S.; data curation, M.O. and F.S.; writing—original draft preparation, M.O., B.G. and V.P.; writing—review and editing, M.O., B.G., F.S. and V.P.; visualization, M.O. and F.S.; supervision, B.G., F.S. and V.P.; project administration, B.G.; funding acquisition, B.G. All authors have read and agreed to the published version of the manuscript.

Funding

The authors wish to thank the Irish Research Council for Science, Engineering and Technology (IRCSET) and CAPACITES/IXEAD society for providing financial and practical support. The authors acknowledge funding from Science Foundation Ireland (SFI) Marine Research Energy Ireland (MaREI) centre and Aquaculture Operations with Reliable Flexible Shielding Technologies for Prevention of Infestation in Offshore and Coastal Areas (FlexAqua), an EraNet MarTera PBA/BIO/18/02.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. O’Byrne, M.; Schoefs, F.; Pakrashi, V.; Ghosh, B. An underwater lighting and turbidity image repository for analysing the performance of image-based non-destructive techniques. Struct. Infrastruct. Eng. 2017, 1, 1–20.
2. Estes, A.C.; Frangopol, D.M. Updating bridge reliability based on bridge management systems visual inspection results. J. Bridge Eng. 2003, 8, 374–382.
3. Dirksen, J.; Clemens, F.H.L.R.; Korving, H.; Cherqui, F.; Le Gauffre, P.; Ertl, T.; Plihal, H.; Müller, K.; Snaterse, C.T.M. The consistency of visual sewer inspection data. Struct. Infrastruct. Eng. 2013, 9, 214–228.
4. Allotta, B.; Bartolini, F.; Conti, R.; Costanzi, R.; Gelli, J.; Monni, N.; Natalini, M.; Pugi, L.; Ridolfi, A. MARTA: An AUV for underwater cultural heritage. In Proceedings of the Underwater Acoustics International Conference 2014–UA2014, Island of Rhodes, Greece, 22 June 2014.
5. Mangeruga, M.; Cozza, M.; Bruno, F. Evaluation of Underwater Image Enhancement Algorithms under Different Environmental Conditions. JMSE 2018, 6, 10.
6. Bruno, F.; Lagudi, A.; Barbieri, L.; Muzzupappa, M.; Cozza, M.; Cozza, A.; Peluso, R. A VR System for the Exploitation of Underwater Archaeological Sites. In Proceedings of the 2016 International Workshop on Computational Intelligence for Multimedia Understanding (IWCIM), Reggio Calabria, Italy, 27–28 October 2016; pp. 1–5.
7. MacLeod Sword, C. Viable Alternative Mine Operating System: A Novel Underwater Robotic Excavation System for Flooded Open-Cut Mines. Energy Procedia 2017, 125, 50–55.
8. Vora, J.; Nair, S.; Gramopadhye, A.K.; Duchowski, A.T.; Melloy, B.J.; Kanki, B. Using virtual reality technology for aircraft visual inspection training: Presence and comparison studies. Appl. Ergon. 2002, 33, 559–570.
9. Linn, C.; Bender, S.; Prosser, J.; Schmitt, K.; Werth, D. Virtual remote inspection—A new concept for virtual reality enhanced real-time maintenance. In Proceedings of the 2017 23rd International Conference on Virtual System & Multimedia (VSMM), Dublin, Ireland, 31 October–4 November 2017.
10. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1.
11. Kim, B.; Lee, Y.I.; Cho, S. Deep learning-based rapid inspection of concrete structures. Int. Soc. Opt. Photonics 2018, 10598, 1059813.
12. Berg, L.P.; Vance, J.M. Industry use of virtual reality in product design and manufacturing: A survey. Virtual Real. 2017, 21, 1–17.
13. Earnshaw, R.A. Virtual Reality Systems; Academic Press: Cambridge, MA, USA, 2014.
14. VIVE Pro Eye|VIVE. Available online: https://www.vive.com/eu/product/vive-pro-eye/ (accessed on 26 April 2020).
15. Komorowski, J.P.; Forsyth, D.S. The role of enhanced visual inspections in the new strategy for corrosion management. Aircr. Eng. Aerosp. Technol. 2000, 72, 5–13.
16. Gallwey, T.; Drury, C.G. Task complexity in visual inspection. Hum. Factors 1986, 28, 595–606.
17. Bruno, F.; Lagudi, A.; Barbieri, L.; Muzzupappa, M.; Ritacco, G.; Cozza, A.; Cozza, M.; Peluso, R.; Lupia, M.; Cario, G. Virtual and Augmented Reality tools to improve the exploitation of underwater archaeological sites by diver and non-diver tourists. In Euro-Mediterranean Conference; Springer: Berlin/Heidelberg, Germany, 2016.
18. Bruno, F.; Lagudi, A.; Ritacco, G.; Agrafiotis, P.; Skarlatos, D.; Cejka, J.; Kouril, P.; Liarokapis, F.; Philpin-Briscoe, O.; Poullis, C.; et al. Development and integration of digital technologies addressed to raise awareness and access to European underwater cultural heritage. An overview of the H2020 i-MARECULTURE project. In Proceedings of the OCEANS 2017—Aberdeen, Aberdeen, UK, 19–22 June 2017; pp. 1–10.
19. Bruno, F.; Bruno, S.; De Sensi, G.; Luchi, M.-L.; Mancuso, S.; Muzzupappa, M. From 3D Reconstruction to Virtual Reality: A Complete Methodology for Digital Archaeological Exhibition. J. Cult. Herit. 2010, 11, 42–49.
20. Haydar, M.; David, R.; Madjid, M.; Samir, O.; Malik, M. Virtual and augmented reality for cultural computing and heritage: A case study of virtual exploration of underwater archaeological sites. Virtual Real. 2011, 15, 311–327.
21. Liarokapis, F.; Kouřil, P.; Agrafiotis, P.; Demesticha, S.; Chmelík, J.; Skarlatos, D. 3D Modelling and Mapping for Virtual Exploration of Underwater Archaeology Assets. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, XLII-2/W3, 425–431.
22. Gračanin, D.; Valavanis, K.P.; Matijašević, M. Virtual Environment Testbed for Autonomous Underwater Vehicles. Control Eng. Pract. 1998, 6, 653–660.
23. Matsebe, O.; Kumile, C.M.; Tlale, N.S. A Review of Virtual Simulators for Autonomous Underwater Vehicles (AUVs). IFAC Proc. Vol. 2008, 41, 31–37.
24. Bosch, J.; Ridao, P.; Ribas, D.; Gracias, N. Creating 360° Underwater Virtual Tours Using an Omnidirectional Camera Integrated in an AUV. In Proceedings of the OCEANS 2015—Genova, Genova, Italy, 18–21 May 2015; pp. 1–7.
25. Jain, D.; Sra, M.; Guo, J.; Marques, R.; Wu, R.; Chiu, J.; Schmandt, C. Immersive Terrestrial Scuba Diving Using Virtual Reality. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems—CHI EA ’16, San Jose, CA, USA, 7–12 May 2016; ACM Press: San Jose, CA, USA, 2016; pp. 1563–1569.
26. De Marchi, L.; Ceruti, A.; Testoni, N.; Marzani, A.; Liverani, A. Use of Augmented Reality in Aircraft Maintenance Operations. In Proceedings of the SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, San Diego, CA, USA, 10–13 March 2014; p. 906412.
27. Zhang, D.; Zhang, J.; Xiong, H.; Cui, Z.; Lu, D. Taking Advantage of Collective Intelligence and BIM-Based Virtual Reality in Fire Safety Inspection for Commercial and Public Buildings. Appl. Sci. 2019, 9, 5068.
28. Napolitano, R.; Liu, Z.; Sun, C.; Glisic, B. Virtual Tours, Augmented Reality, and Informational Modeling for Visual Inspection and Structural Health Monitoring (Conference Presentation). In Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2019; Wang, K.-W., Sohn, H., Huang, H., Lynch, J.P., Eds.; SPIE: Denver, CO, USA, 2019; p. 6.
29. FlexAqua Project. Available online: https://www.sintef.no/en/projects/flexaqua/ (accessed on 26 April 2020).
30. Wright, C.; Murphy, J.; Pakrashi, V. The Dynamic Effects of Marine Growth on a Tension Moored Floating Wind Turbine. In Progress in Renewable Energies Offshore; CRC Press: Lisbon, Portugal, 2016; pp. 723–732.
31. Unity Technologies. Unity Real-Time Development Platform|3D, 2D VR & AR Visualizations. Available online: https://unity.com/ (accessed on 26 April 2020).
32. Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond Accuracy, F-Score and ROC: A Family of Discriminant Measures for Performance Evaluation. In AI 2006: Advances in Artificial Intelligence; Sattar, A., Kang, B., Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4304, pp. 1015–1021.
33. Pakrashi, V.; Schoefs, F.; Memet, J.B.; O’Connor, A. ROC Dependent Event Isolation Method for Image Processing Based Assessment of Corroded Harbour Structures. Struct. Infrastruct. Eng. 2010, 6, 365–378.
34. Rouhan, A.; Schoefs, F. Probabilistic Modeling of Inspection Results for Offshore Structures. Struct. Saf. 2003, 25, 379–399.
35. Schoefs, F.; Boéro, J.; Clément, A.; Capra, B. The Aδ Method for Modelling Expert Judgement and Combination of Non-Destructive Testing Tools in Risk-Based Inspection Context: Application to Marine Structures. Struct. Infrastruct. Eng. 2012, 8, 531–543.
36. Tsai, C.-H.; Liou, C.-S. Applying an On-Line Crack Detection Technique for Laser Cutting by Controlled Fracture. Int. J. Adv. Manuf. Technol. 2001, 18, 724–730.
37. Mahiddine, A.; Seinturier, J.; Boï, D.P.J.-M.; Drap, P.; Merad, D.; Long, L. Underwater Image Preprocessing for Automated Photogrammetry in High Turbidity Water: An Application on the Arles-Rhone XIII Roman Wreck in the Rhodano River, France. In Proceedings of the 2012 18th International Conference on Virtual Systems and Multimedia, Milan, Italy, 2–5 September 2012; pp. 189–194.
38. O’Byrne, M.; Schoefs, F.; Ghosh, B.; Pakrashi, V. Texture Analysis Based Damage Detection of Ageing Infrastructural Elements: Texture Based Damage Detection. Comput. Aided Civ. Infrastruct. Eng. 2013, 28, 162–177.
39. O’Byrne, M.; Ghosh, B.; Schoefs, F.; Pakrashi, V. Image-Based Damage Assessment for Underwater Inspections, 1st ed.; CRC Press, Taylor & Francis: Boca Raton, FL, USA, 2018.
40. O’Byrne, M.; Pakrashi, V.; Schoefs, F.; Ghosh, B. Semantic Segmentation of Underwater Imagery Using Deep Networks Trained on Synthetic Imagery. JMSE 2018, 6, 93.
Figure 1. A schematic approach for creating virtual underwater scenes to advise diving campaigns.
Figure 2. (a) View of the virtual inspection site, and (b) a real fish farm site for reference purposes.
Figure 3. (a) Underwater view, and (b) close-up view of the net.
Figure 4. Information panels that can be used to advise divers on what they should look out for.
Figure 5. A virtual scene featuring barnacles on the surface of a ship hull.
Figure 6. Training images of barnacle fouling on a ship hull under varied underwater visibility conditions.
Figure 7. A selection of training images for (a) the barnacle class, and (b) the background class.
Figure 8. Network architecture of the two convolutional neural network (CNN) models trained on barnacle fouling imagery from the virtual environment.
Figure 9. Barnacles detected on the ship hull with the CNNs trained on 4000 and 16,000 images, respectively.
Figure 10. Performance of the CNN models trained using 4000 and 16,000 synthetic images in terms of sensitivity, specificity, accuracy and precision, averaged over 10 sample video frames.
Figure 11. (a) View of the virtual wharf, and (b) the underwater view.
Figure 12. Images of a cracked surface on a wharf captured under clear, medium turbidity, and high turbidity conditions.
Figure 13. Images of a corroded surface on a wharf captured under clear, medium turbidity, and high turbidity conditions.
Figure 14. Detected cracks (white pixels) under each turbidity level.
Figure 15. Receiver operating characteristic (ROC) curves for the crack detection technique applied under three turbidity conditions.
Figure 16. Detected corroded regions (unfaded pixels) of a corrosion stain under each turbidity level for a texture analysis-based image processing technique.
Figure 17. ROC curves for a texture analysis-based image processing technique applied under three turbidity conditions for detecting corrosion.
Table 1. Alignment of Case Studies to UN Sustainable Development Goals.

Case Study             | Alignment to UN Sustainable Development Goals
Fisheries              | 14. Life Below Water
Marine Growth          | 9. Industry, Innovation and Infrastructure
Structural Inspections | 11. Sustainable Cities and Communities
