Article

AI-Prepared Autonomous Freshwater Monitoring and Sea Ground Detection by an Autonomous Surface Vehicle

1 Scientific Diving Center, Freiberg University of Mining and Technology, Gustav Zeuner Straße 7, 09599 Freiberg, Germany
2 Institute of Informatics, Freiberg University of Mining and Technology, Bernhard-von-Cotta-Straße 2, 09599 Freiberg, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2023, 15(3), 860; https://doi.org/10.3390/rs15030860
Submission received: 10 November 2022 / Revised: 28 January 2023 / Accepted: 29 January 2023 / Published: 3 February 2023

Abstract:
Climate change poses special and new challenges to inland waters, requiring intensive monitoring. An application based on an autonomous operation swimming vehicle (ASV) is being developed that provides simulations, spatially and depth-resolved water parameter monitoring, bathymetry detection, and respiration measurement. A clustered load system is integrated with a high-resolution sonar system and compared with underwater photogrammetry objects. Additionally, a holistic 3D survey of the water body above and below the water surface is generated. The collected data are used in a simulation environment to train artificial intelligence (AI) in virtual reality (VR). These algorithms are used to improve the autonomous control of the ASV. In addition, augmented reality (AR) can be used to visualize the measurement data and to build future ASV assistance systems. The results of the investigation of a flooded quarry are explained and discussed. The outcome is a comprehensive, high-potential, simple, and rapid monitoring method for inland waters that is suitable for a wide range of scientific investigations and commercial uses related to climate change, including simulation, monitoring, analysis, and work preparation.

1. Introduction

In Germany, in 2019, a record air temperature of 42.6 °C was officially measured [1], which led to water shortages in the agricultural sector, among others. Furthermore, heavy precipitation occurs more and more often, causing localized flooding within a very short period of time. All these extreme weather events put a strain on standing and flowing inland waters. In addition, the growing population in urban centers, the use of inland waters for recreation and swimming, and the increasing requirements of the European Water Framework Directive [2] have all led to an increase in water use. All the aforementioned uses of inland waters require extensive knowledge about the development of the processes within the water bodies. Only in this way are sustainable improvements of the water quality and long-term use of the water bodies possible. Thus, comprehensive knowledge of the inflows and outflows, the water parameters (pH value, conductivity, turbidity, oxygen content, etc.) over time, as well as the nutrient content of the water body is necessary [2]. Only with this knowledge can targeted water quality improvements be implemented and monitored. This requirement is also included in the Water Framework Directive [2], which stipulates that an improvement of the water quality is to be strived for and already requires monitoring. For example, it is state-of-the-art to carry out regular water quality checks on water bodies used for water management, e.g., dams. These are necessary to maintain optimal quality of the raw water at any time of the year. In modern dams, for example, the extraction depth for the raw water can be varied to respond to seasonal fluctuations in water quality. This also simplifies the process and reduces the cost of drinking water treatment. For this purpose, profile measurements with multi-parameter probes are carried out at regular intervals [3].
In addition, samples are taken at selected depths, the on-site parameters are determined, and further constituents are analyzed in the laboratory [4]. There are many new bodies of water in Germany as a result of the coal phase-out and the renaturation of former opencast mines, which have to be monitored and systematically restored to a natural state. In the case of former opencast-mining lakes, the influx of brine or the dissolution of the sediments leads to acidification of the new water bodies. This must be neutralized by the large-scale addition of lime [5]. This is one way to achieve an ecological balance and enable the use of the area for tourism. Water bodies and wetlands are some of the world’s largest sources of climate-altering trace gases [6]. At the same time, there is a lack of demonstrably representative data [7]. Accordingly, the assessment of the exchange of climate-affecting trace gases between the water surface and the atmosphere is essential to characterize the role of water bodies in the context of climate change.

1.1. Motivation

All of the points mentioned in Section 1 are problems that the robot-assisted freshwater monitoring (RoBiMo) project (RoBiMo is a research project of the Freiberg University of Mining and Technology (TUBAF), Germany, https://tu-freiberg.de/robimo, accessed on 10 November 2022) addresses, making a decisive contribution to the monitoring of inland waters with a 3D depth-resolved, simultaneous, and autonomous recording of water parameters as well as of the underwater structure. The goal of the investigations and developments within RoBiMo is to improve detailed and high-resolution water monitoring. Additionally, all data should be analyzed and evaluated with AI methods for a highly customized visualization and easy interpretation.
Therefore, the overall aim of the research in this paper is to develop an autonomous platform that serves as a carrier for various sensor systems for 3D mapping and environmental data acquisition in inland waters. For this purpose, robotic boats are augmented with navigation sensors and self-driving capabilities for autonomous operation in inland waters. The robotic boats carry various sensor systems for environmental mapping and monitoring and are navigated autonomously from a base station on the bank with a network-independent connection. AI and VR methods are used to evaluate and visualize the results. Scientific divers then compare the measured data with in situ measurements and can also manipulate sensors in a targeted manner. Furthermore, water and sediment samples are taken at different depths and areas. This enables local phenomena (groundwater access, changes in geology, seasonal fluctuations, mixing at estuaries) to be specifically examined and assessed. With the samples taken, additional water parameters can be determined through laboratory analyses. The results form a basis for automated water monitoring, as well as for developing and testing methods and approaches. The derived results are used in hydrology/limnology, environmental technology, water treatment, as well as robotics and computer science.
The system will offer a previously unavailable local resolution of the water parameters for a complete inland water body as well as for special areas with an inflow, groundwater sources, or morphological peculiarities. Among other things, acidic seepage water from coal layers can be detected. Furthermore, local effects and influences of major interventions in the water body as a result of heavy rain, flood, drought, or liming can be investigated.
In summary, the main objectives of the research presented in this paper are:
  • The 3D depth-resolved recording of inland water quality parameters with autonomously driving swimming robots.
  • Validation of the results through in situ measurements and sampling conducted by scientific divers to carry out further analytical methods.
  • Recording the underwater subsurface with a sonar-based system and the ASV.
  • Combination of photogrammetry and sonar data under and above water for a holistic model of a water body.
  • Data analysis and visualization by AI and VR.
Because of the interdisciplinary nature of the project, the key points of this paper are the underwater ground detection as well as the simulation and visualization of the process by AI. To enable automation of the platform, an AI-based evaluation of the sensor data has to be performed. The 3D point clouds obtained from sound navigation and ranging (sonar) and light detection and ranging (LiDAR) mapping are semantically segmented and classified through machine learning (ML) analysis. To prepare for application in Saxon inland waters (e.g., lakes, river dams), the AI algorithms are trained with synthetic data obtained in virtual environments. From this, knowledge can be derived about which geometric structures can be detected below the water surface and how these can provide feedback on the robot boat’s motion behavior. This paper gives an insight into the provision of synthetic training data, which we generate by virtual sensor technology in VR. Subsequently, some results of the sonar and photogrammetry process and of the AI and VR/AR methods are explained and discussed.
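As an illustration of what point-wise labeled synthetic training data can look like, the following minimal sketch (not from the project's code base; shapes, sizes, and names are hypothetical) generates a flat lake bottom with a single spherical object on it and a ground-truth label per point:

```python
import numpy as np

def make_synthetic_scene(n_ground=1000, n_object=300, seed=0):
    """Generate a labeled synthetic point cloud: a flat lake bottom
    (label 0) with a spherical object resting on it (label 1)."""
    rng = np.random.default_rng(seed)
    # Flat 10 m x 10 m ground patch with small height noise
    ground = np.column_stack([
        rng.uniform(0, 10, n_ground),
        rng.uniform(0, 10, n_ground),
        rng.normal(0.0, 0.02, n_ground),
    ])
    # Spherical object of radius 0.5 m centered at (5, 5, 0.5)
    phi = rng.uniform(0, 2 * np.pi, n_object)
    theta = np.arccos(rng.uniform(-1, 1, n_object))
    r = 0.5
    obj = np.column_stack([
        5 + r * np.sin(theta) * np.cos(phi),
        5 + r * np.sin(theta) * np.sin(phi),
        0.5 + r * np.cos(theta),
    ])
    points = np.vstack([ground, obj])
    labels = np.concatenate([np.zeros(n_ground, int), np.ones(n_object, int)])
    return points, labels
```

A virtual sonar or LiDAR sensor in VR essentially samples such geometry from a sensor pose; the labels come for free because the generating shape of every point is known.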
In this paper, the state-of-the-art of remotely and autonomously operating vehicles for sea ground and water quality measurement is shown in Section 1.2. The main sections can be divided into three parts. The concept of the investigations by RoBiMo is explained in Section 2 to give an overview and background of the interdisciplinary research and its individual parts and methods. The data acquisition by sonar and photogrammetry in Section 3 and its results form the second part. The third part focuses on the software tools for processing and simulation (Section 4) and their results with VR and AI. Subsequently, the results are concluded and discussed in Section 5, which also shows the next steps of the ongoing research.

1.2. Literature Review and State-of-the-Art

The literature review is focused on measurements of fresh waters in the central European area that combine a sonar survey of the underwater ground with a simultaneous recording of water parameters.
The naming of the robots used for such investigations corresponds to their function. A distinction is made between two groups [8,9]. For this study, only unmanned vehicles are considered. Furthermore, a distinction is made between the type of control and the area of application on the water: in terms of control, between autonomous and remote-controlled robots; in terms of the area of application, between floating and underwater robots. This yields four different types of robotic vehicles:
1. Remote operation underwater vehicle (RUV);
2. Remote operation swimming vehicle (RSV);
3. Autonomous operation underwater vehicle (AUV);
4. Autonomous operation swimming vehicle (ASV).
For the investigations in this paper, autonomous surface vehicles (ASVs) are considered further. The sensor data of the ASV are used to draw conclusions about the robot’s motion behavior, so it is essential that the related data structures are evaluated with suitable AI methods. Both the recognition of objects (classification) and the subdivision of sensor data into subsets (semantic segmentation) have to be evaluated. Point clouds are the data type produced by the sensors used: LiDAR and sonar. One option for automated processing of point clouds with ML, e.g., for segmentation, is unsupervised learning; such a procedure is described in [10,11]. The classification of data points into groups (also called clusters) is done by grouping elements that are as similar as possible to each other; more detailed information can be found in [12]. Regardless of the number of dimensions, methods of supervised learning are often used to automatically recognize patterns and relationships. Such an approach, however, requires a large amount of training data with the correct classification for each pixel or point in a scene (pixel-wise or point-wise segmentation). For this purpose, an approach to synthetically generate a variety of labeled data and use them for training [13] was developed. The learning of classifiers for three-dimensional data was investigated, e.g., in [14,15,16]. For sea ground detection, sonar sampling is a common method; alternatively, LiDAR can be used in areas with a water depth of less than 15 m [17].
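To illustrate the unsupervised route mentioned above, the following minimal sketch (a generic k-means implementation for illustration, not the specific clustering of [10,11,12]) groups 3D points into clusters purely by spatial similarity:

```python
import numpy as np

def kmeans_segment(points, k=2, iters=50, seed=0):
    """Cluster 3D points into k groups (unsupervised segmentation).
    Returns an integer cluster label per point."""
    points = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].copy()
    labels = np.zeros(len(points), int)
    for _ in range(iters):
        # Assign each point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels
```

On well-separated structures (e.g., lake bottom vs. a protruding object), such a scheme already yields a useful coarse segmentation without any labeled training data.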
Beyond that, the investigated parameters and the kind of sea ground detection are of interest, as well as the area covered by the water parameter measurement. The state-of-the-art [18] for lakes and dams involves a depth profile at one or more points of interest and additional water samples. However, no standards are established for three-dimensional measurements of water values. Within such investigations, areal surveys and the interpolation of different depth profiles are common. Three different methods and projects are listed in Table 1 with a comparison to the investigations of this paper. A detailed explanation of the RoBiMo parts follows in Section 2.1, Section 2.3 and Section 2.4.
This is only a selection of the literature that uses an ASV to carry out both a subsurface survey and a survey of water parameters. Many projects and studies deal exclusively with subsurface surveys [24,25], autonomous surveys of water bodies and environmental parameters [26], or with RUVs/AUVs [27,28]. For the comparison presented here, however, only combined surveys of the subsurface with water monitoring data by an ASV were considered.
Next to the robot-based approach, there are many common methods of water quality measurement; a selection of these methods is described in [29,30,31].

2. Materials and Methods

2.1. Investigation Concept

The aim of the project RoBiMo is to develop a modular measuring system that enables a better understanding of the water dynamics of inland waters of all kinds, for instance as a result of seasonal fluctuations and extreme weather events. To achieve this, a custom-constructed platform is equipped with a multibeam echo sounder, a sensor chain with multi-parameter measurement nodes, and a gas measurement bell, in order to investigate a water body on a regular, continuous, and 3D site-resolved basis. The different measurement parts and the concept of the project are shown in Figure 1.
Similar to the paper, the concept can be divided into three parts; see Figure 2. The foundation is the development and construction of the hardware, such as the ASV in Section 2.2 and the customized sensor nodes with the winch system in Section 2.4. The second part is the conduction of the investigation with the three different types of measurements, the validation by scientific divers, and additional sampling in Section 2.5. The third part, the roof, is the software development for the simulation of the ASV, the data characterization by AI, and the visualization by VR and AR; see Section 4.
The individual components have different temporal behaviors during the measurements. For example, data acquisition by the echo sounder takes place while the ASV is moving, whereas the gas measurement requires remaining in the same place for 20 to 30 min. To accommodate this, the individual sensors are realized as independent modules mounted on the motorized carrier platform. This makes it possible to keep the dimensions and weight of the platform low, so it can be handled by two people. This allows the platform to be used on smaller waters without a slipway or technical aid. The sensor nodes are connected to a line at customizable distances. Measuring while the platform travels generates a huge dataset of simultaneous depth-resolved and geo-referenced water parameters. For that amount of data, the use of AI is necessary and offers the opportunity to improve understanding and evaluation. For the operation control, a real-time connection and VR and AR are used (Section 4). The path planning for the autonomous operation has to be optimized for each task and fit all waters.

2.2. ASV–Autonomous Surface Vehicle

For the investigation, two different ASVs are used; see Figure 3. The first robot, “Elisabeth”, is a ClearPath Kingfisher catamaran with dimensions of 1.3 × 0.9 m and a weight of 29 kg including the power supply. Because of the different use cases in Section 2.1, a custom-developed carrier platform with different modules is needed. Therefore, the platform “Ferdinand” (made of polystyrene blocks covered with fiberglass layers) and a clustered carrier load system (made of aluminum profiles) were constructed. The platform has a size of 1.2 × 0.8 m and a weight of approximately 15 kg with the drives and without the modules.
Because of the high agility required of the platform, especially for the operation of the gas measuring bell and for measurements in the shore area, the platform has a differential drive with two T200 thrusters from BlueRobotics. Two computers are used to control the platform: the motors are controlled by a Pixhawk 5X controller running ArduPilot Rover, which also covers the control for approaching points or paths. Computationally intensive tasks such as path planning or obstacle avoidance are carried out on an Intel NUC, which is also used to record the measurement data. By using the Pixhawk 5X controller, it is possible to use the platform with a limited range of functions even during development, as well as to maintain basic maneuverability even if the measurement computer fails. The measurement data as well as status information of the platform are transmitted live to a mobile shore station. This makes it possible to continuously monitor the measurements and to react promptly to interesting phenomena or problems, either manually or by the algorithm. The data transmission of the measured values takes place at close range via WLAN, with the option of a mobile phone connection as a fallback level. Status information, such as position and speed, is transmitted separately in the 433 MHz band.
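The control principle of such a two-thruster differential drive can be sketched as follows (a generic mixing scheme; track width, thrust limit, and function name are illustrative assumptions, not the ArduPilot Rover implementation):

```python
def differential_drive(v, omega, track_width=0.6, max_thrust=1.0):
    """Mix a desired forward speed v (m/s) and yaw rate omega (rad/s)
    into left/right thruster commands for a differential-drive ASV.
    track_width is the lateral distance between the two thrusters."""
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    # Scale both commands together if either exceeds the thruster limit,
    # so the commanded turn radius is preserved
    m = max(abs(left), abs(right))
    if m > max_thrust:
        left, right = left * max_thrust / m, right * max_thrust / m
    return left, right
```

Opposite thruster commands (e.g., `differential_drive(0.0, 1.0)`) turn the platform on the spot, which is what enables station-keeping over the gas measuring bell and maneuvering close to the shore.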

2.3. Sea Ground Detection

For sea ground detection, sonar, a distance measurement method based on sound waves, is used. Similar to LiDAR, measurement devices can be divided into active and passive sonar (in passive sonar, the target object itself rather than the sensing device emits a sound signal, which can be identified by its characteristic signal profile). Again, only the more common active type is considered in this paper. Here, the transmitter emits a signal as a sound wave. The sound wave is reflected at the target object and registered at the receiver. The time difference between signal transmission and reception provides the distance. In [13], the sonar measurement is simplified by using ray casting to check for intersection points with the 3D environment. Depending on the sensor, different configuration options are available for the sensor’s field of view: the first two sensor types have a field of view in the horizontal and vertical directions, while the side-scan sensor only has a downward opening angle. Active sonar is used, for example, to locate swarms of fish or to map underwater structures [32].
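The time-of-flight principle described above can be expressed in a few lines (a simplified sketch that assumes a single, constant sound speed; in practice the measured sound velocity profile is used):

```python
import math

def sonar_range(t_round_trip, sound_speed=1480.0):
    """Distance to a target from the two-way travel time of a sonar ping.
    sound_speed is the local speed of sound in water (m/s), which should
    come from a measured sound velocity profile."""
    return sound_speed * t_round_trip / 2.0

def beam_depth(t_round_trip, beam_angle_deg, sound_speed=1480.0):
    """Vertical depth under a multibeam swath: the slant range of one
    beam projected through its angle from nadir (flat-ray assumption)."""
    slant = sonar_range(t_round_trip, sound_speed)
    return slant * math.cos(math.radians(beam_angle_deg))
```

For example, a ping returning after 20 ms at 1500 m/s corresponds to a 15 m slant range; at 60° from nadir, that beam hits the bottom at 7.5 m depth.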
The bathymetric survey of the waters is carried out using an R2 Sonic 2020 wideband multibeam echo sounder. This can deliver up to 1024 measurement points per ping at a maximum aperture angle of 130°. The frequency can be selected between 170 and 450 kHz, plus 700 kHz, and can be changed during the measurement. A measurement of the sound profile at the deepest point of the lake is collected before the start; the sound profile probe in Figure 8 is used for this purpose. The sampling uses EPSG:25833 (European Petroleum Survey Group Geodesy; the standard of the Saxony state reference system) with a corrected GPS signal for an accuracy of less than 1 cm horizontally and 1.5 cm vertically. The data from the multibeam echo sounder have a dual function in the project: the generated bathymetric maps represent measured parameters of the respective water body, but at the same time, they serve as the basis for path planning with the measurement chain. This makes it possible to travel through already recorded areas, where the length of the chain can be optimally adjusted during the measurements without coming into contact with the bottom or obstacles. By further evaluation, the underwater morphology with the ground areas and sedimentation can be investigated. Post-processing of the sonar data is done in QPS software with automatic and manual filtering of the raw data. Errors and interference signals are separated out and not used for the evaluation.

2.4. Water Parameter and Respiration Measurement

The measurement of the water parameters is based on multisensor nodes (see Figure 4) used at different depths for simultaneous recording. The prototype records the pressure, temperature, acceleration values, electrical conductivity, and turbidity of the water and can be used down to a depth of 20 m. This enables a simultaneous recording of depth-dependent values without the mixing of the individual depth layers that lowering a single probe would cause.
Figure 4. (a) photo of the first prototype, (b) 3D-model of the second prototype sensor node [33] and (c) cad-model of the winch system.
These measurements are currently being tested and will be integrated into the measurement concept in further measurement campaigns. An expected advantage is a continuous measurement during the slow travel of the ASV, enabling a fast, holistically resolved survey of the water body. As described in Section 1.2, the current investigations of water bodies are carried out by point depth profiles, which only allow interpolation between the individual points. The system offers a much higher data density and requires interpolation only between the ship’s tracks.
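The limitation of point profiles can be illustrated directly: between the discrete measurement depths of a classical profile, values are only available by interpolation, e.g. (a minimal sketch with hypothetical example values):

```python
import numpy as np

def interpolate_profile(depths_measured, values_measured, depths_wanted):
    """Linearly interpolate a water parameter (e.g., temperature) between
    the discrete depths of a classical point profile measurement."""
    order = np.argsort(depths_measured)
    return np.interp(depths_wanted,
                     np.asarray(depths_measured, float)[order],
                     np.asarray(values_measured, float)[order])
```

A profile measured only at 0 m and 10 m can say nothing about a thermocline at 5 m; the sensor chain, by contrast, records all node depths simultaneously along the whole track.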
The measurement of the gas exchange is based on the chamber system SEMACH-FG developed at TUBAF [34]. With this measuring system, on-site measurement of carbon dioxide (CO2) concentration changes, as well as the collection of gas samples to measure the concentrations of nitrous oxide (N2O) and methane (CH4), is possible; the latter is done by gas chromatography. The data are used to calculate the gas fluxes and to determine whether the water body is a greenhouse gas source or a sink [35].
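Floating-chamber flux calculations of this kind are commonly based on the linear concentration change inside the closed chamber; the following sketch shows the principle (it is not the SEMACH-FG evaluation itself; chamber volume, area, and conditions are illustrative assumptions):

```python
import numpy as np

def chamber_flux(times_s, conc_ppm, volume_m3, area_m2,
                 pressure_pa=101325.0, temp_k=293.15):
    """Estimate a gas flux (mol m^-2 s^-1) from the linear change of
    concentration inside a closed floating chamber. The ppm/s slope is
    converted to mol m^-3 s^-1 with the ideal gas law, then scaled by
    the chamber volume-to-area ratio."""
    R = 8.314  # universal gas constant, J mol^-1 K^-1
    slope_ppm_s = np.polyfit(times_s, conc_ppm, 1)[0]
    # ppm -> mole fraction -> molar concentration change rate
    dcdt = slope_ppm_s * 1e-6 * pressure_pa / (R * temp_k)
    return dcdt * volume_m3 / area_m2
```

A positive flux marks the water body as a source of the measured gas, a negative flux as a sink, which is exactly the source/sink question addressed in [35].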

2.5. Scientific Divers Investigations

Scientific divers are specially qualified scientists who use scientific methods underwater. Certified training as a scientific diver at TUBAF involves safety, documentation, mapping, sampling, measuring, and special underwater methods [36]. The divers’ tasks for the investigation are validation, testing, sampling, and further investigation of special underwater spots.
The task of validation complements common methods of sensor data validation, such as lab calibration and the testing of time-dependent behavior and settling times. Some special cases of interest, such as groundwater discharge, can be simulated in situ. Additionally, the divers have an overview of the underwater landscape, which improves the underground classification.
The advantage is being at the site of the investigation and observing the processes during all steps of the project. Therefore, full documentation at greater depths of the water column is possible. The validation of the water parameters is done by water and sediment samples, in situ measurements, and targeted influence underwater. After the calibration of the sensor nodes in the lab, another validation during the dynamic measurement is necessary. With the known deviation of the measurement device used by the divers, the deviation of the sensor node can be estimated. For this reason, common lab measurement devices are housed in custom-built cases (see Figure 5); they offer higher accuracy and are faster than the investigation of samples.
To obtain more detailed information and to verify further parameters, water and sediment samples are taken by the scientific divers; additional investigations and corrections can then be done in the field lab or the lab of the chair of hydrochemistry. With this kind of sample, higher accuracy is possible because the sediment samples are unmixed and horizontally stable.
For the validation of the sonar data and the training point cloud for the obstacle detection and AI algorithm, photogrammetry data are collected. For that, high-resolution models and point clouds are generated with a professional underwater camera system. The reconstruction process follows [37] and is based on the software Metashape [38]. The results and the comparison with the sonar data are explained in Section 3. The aim is to capture the objects in photos and to enable documentation and subsequent measurement. With the ASV, a large-scale investigation is possible; the detailed investigation of selected sites regarding groundwater discharge, leaching areas, or structural elements can be done by the scientific divers.

2.6. Investigation Area

The system is developed for inland water quality measurement and, hence, does not depend on the geography of the location where it is used. In the first step, construction and testing are done at potable water dams in Germany. For validation, different water types, such as a flooded quarry, a flooded surface mine, and a water reservoir in use, are investigated. The system is additionally tested in the wetlands of Brazil (see Figure 6) and, for further investigations, in a coastal area of the Mediterranean Sea.
The first investigations are done in a flooded quarry in Saxony, Germany. It has a size of 100 × 50 m and a maximum water depth of 18 m. It is used as a fishing/diving spot by local associations. Because of its use for diving, many underwater cultural heritage objects can be found there, including old lorries, rails, and guns. It offered a viewing distance between 5 and 10 m during the investigations in March 2021, which are excellent freshwater conditions for photogrammetry and diving. For sonar sampling, it also offers good conditions: only a small area of shallow parts with high vegetation, sharp edges with a clear reflected signal, and a thin sediment layer. Furthermore, many artifacts and objects are part of the quarry and can be investigated in the sonar point cloud. The surface of the quarry has floating islands installed for better water quality and is, therefore, ideal for testing under difficult navigation and investigation conditions.

3. Results of the Photogrammetry and Sonar Sampling

3.1. Sonar Sampling

The data recording with the multibeam echo sounder R2 Sonic 2020 was conducted from a small motorboat, as the autonomous swimming platform was not available at the time of the measurement. For this purpose, the echo sounder was mounted on the side of the boat, perpendicular to the water bottom. The measurement data were viewed live on the boat, so the sonar parameters, such as the opening angle, could be adjusted dynamically to the local conditions. The aperture angle was set to approximately 90°; near the shore, it was inclined by 30°. Five frequencies between 200 and 450 kHz were selected for the measurement, changing after each ping. For the next investigations, the ASV will be used from the base station.
The aim was to navigate the water body in parallel paths without turning during the measurement, so that a uniform distribution of the measurement points on the bottom would be achieved. Due to the small geometry of the quarry with many static obstacles, this approach could not be followed. The desirable crossing of objects from different directions was also not possible due to the local conditions. This resulted in arc-shaped measurement runs, which were primarily dictated by the open areas of the water body. The result is shown in Figure 7.
For error correction, measurements of the sound velocity profile (see Figure 8) were taken at the deepest point of the water body and repeated twice during the day using a measuring probe. Due to the active circulation of the water body, a change of less than 1.4 m/s was observed in the first two meters of the water column; below that, the sound velocity remained almost constant.
Figure 8. Course of the sound profile and the water parameters during the echo sounder measurement.
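The temperature (and, in general, salinity and pressure) dependence behind such a profile can be approximated with Medwin's simplified equation for the speed of sound in water, sketched here to show why a measured profile matters (the formula is a standard empirical approximation, not the probe's internal model):

```python
def sound_speed_medwin(temp_c, salinity_psu=0.0, depth_m=0.0):
    """Medwin's simplified equation for the speed of sound in water (m/s);
    valid roughly for 0-35 degC, 0-45 PSU, and depths up to ~1000 m.
    For fresh inland waters, salinity is close to zero."""
    t, s, z = temp_c, salinity_psu, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)
```

A temperature change of a few degrees across a thermocline shifts the sound speed by several m/s, which directly biases the two-way travel-time depths if left uncorrected.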
For the sonar data, no model-specific accuracy is calculated, as no control points are available underwater. Hence, the accuracy of the measurement system and the Global Navigation Satellite System (GNSS) is used. A distinction is made between the error in the position (X/Y) and in the height (Z). This results in a mean position error of the sonar data of ±0.02 m and a height error of ±0.05 m.

3.2. Photogrammetry Results

Two different classes, the surrounding area and underwater objects, are used for reconstruction at the quarry, see Section 2.6. The landscape and surrounding area of the quarry are investigated with a drone (DJI Mini 2). An underwater camera (Sony Alpha A6000) with a light rig (two Scubalamp P53) is used for underwater photography.

3.2.1. Landscape Reconstruction

For the landscape, 347 images with a resolution of 4000 × 3000 pixels are used, which results in a point cloud with 64 million points. The flight height is approximately 20 m above the surface. After manual cleaning of the water surface, reflections, outliers, and fragmented trees, 17 million points are used for the mesh reconstruction. The 3D model comprises 7 million faces. The georeference coordinate system used is EPSG:4326, based on the drone metadata. With the chosen reconstruction parameters, a resolution of 0.0126 m/pix and 6330 points/m² is achieved. The result for the surrounding area of the quarry is shown in Figure 9.
The accuracy can be determined with control points distributed over the survey area. An averaged accuracy of 0.1 m was obtained, with a planimetric error of 0.099 m and a height error of 0.014 m.
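Control point accuracies of this kind are typically computed as RMS errors of the residuals between the reconstructed and the surveyed positions, e.g. (a minimal sketch; the input values are hypothetical):

```python
import numpy as np

def control_point_errors(measured_xyz, reference_xyz):
    """Planimetric (XY) and height (Z) RMS errors of a georeferenced
    model, evaluated at surveyed control points."""
    d = np.asarray(measured_xyz, float) - np.asarray(reference_xyz, float)
    rmse_xy = np.sqrt(np.mean(d[:, 0]**2 + d[:, 1]**2))
    rmse_z = np.sqrt(np.mean(d[:, 2]**2))
    return rmse_xy, rmse_z
```

Separating the planimetric and height components is useful because GNSS and photogrammetric bundle adjustment usually have different error behavior in XY and Z.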

3.2.2. Underwater Reconstruction

With the photogrammetry process (Table 2), eight objects were investigated. It offered a better resolution than the drone results because of the smaller camera distance. Because of the light required underwater, only a camera distance between 1 and 2.5 m was possible. The underwater images had the same resolution as the drone images (4000 × 3000 pixels). Because of the missing underwater GPS reference, a local reference provided by a marker was used for scaling.
With this, a resolution of up to 0.404 × 10⁻³ m/pix with 6.13 × 10⁶ points/m² was reached. All objects had a size between 0.5 and 2 m. For the imaging process, constant video light at a distance between 0.5 and 1.5 m yielded good results. For greater distances, the two lights with 6000 lm were not sufficient under the actual conditions.
Table 2. Parameter of the reconstructed photogrammetry models.
Value | Injection Pump (Figure 10a) | Weapons (Figure 10b) | Switch Box (Figure 10c) | Wheel (Figure 10d)
number of images | 177 | 76 | 173 | 228
point density | 6.13 × 10⁶ points/m² | 3.54 × 10⁶ points/m² | 2.06 × 10⁶ points/m² | 2.65 × 10⁶ points/m²
accuracy | 0.404 × 10⁻³ m/pix | 0.532 × 10⁻³ m/pix | 0.698 × 10⁻³ m/pix | 0.614 × 10⁻³ m/pix
calculation time | 1541 min | 596 min | 283 min | 1237 min

3.3. Combination of All Models

To compare the measurement results of underwater photogrammetry and echo sounder data, two gravestones were chosen as characteristic points at a water depth of about 8 m. The accuracy stated in Section 3.2.2 is only the hardware accuracy and still has to be verified for this use case. Figure 11 shows the photogrammetry point cloud and the point cloud of the echo sounder measurements. The object can be made out in principle, but there is no clear match between the subsoil and the geometry of the object.
A conclusion about the object from the point cloud of the echo sounder data, therefore, does not seem possible at present. Further investigations must show whether a better result can be achieved by a different choice of measurement parameters on the echo sounder, which would record the geometries of objects underwater much more precisely.
For the combination of under- and over-water photogrammetry, the sonar point cloud is needed. It provides the link between the two different reference systems. The landscape (EPSG:4326) and the sonar point cloud (EPSG:25833) use georeferenced coordinate systems, but the underwater model has only a scaled local coordinate system. For the combination of all models, the process shown in Figure 12 is used.
Therefore, the two coordinate systems are transformed in Agisoft Metashape [38] to EPSG:25833. Now, it is possible to align the drone images to the sonar data. The sonar data are used as reference because of the corrected and more accurate GPS position of the sonar system. The alignment is done by five fixed points sharing the same feature, which is possible in the overlapping area at a water depth of 1 m or less. EPSG:25833 is more accurate than EPSG:4326 here. The same alignment is applied to the underwater parts. For that, a manual pre-positioning stitching process is done to fit the sonar data. The result is shown in Figure 13.
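Conceptually, the alignment estimates a transformation between the local underwater frame and the georeferenced sonar frame from matched fixed points. A simplified sketch, assuming uniform scale and translation only (Metashape additionally solves for rotation), with hypothetical point pairs:

```python
import math

def estimate_scale_translation(src, dst):
    """Estimate a uniform scale and translation mapping matched src points
    onto dst points. A full alignment, as performed in Metashape, would
    also solve for rotation; it is omitted here for brevity."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in range(3)]
    cd = [sum(p[i] for p in dst) / n for i in range(3)]
    def spread(pts, c):
        return math.sqrt(sum(math.dist(p, c) ** 2 for p in pts) / len(pts))
    scale = spread(dst, cd) / spread(src, cs)
    t = [cd[i] - scale * cs[i] for i in range(3)]
    return scale, t

# Five hypothetical fixed points in the local underwater frame and their
# counterparts in the georeferenced sonar frame (illustrative values)
src = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
dst = [(2 * x + 10, 2 * y - 5, 2 * z + 1) for x, y, z in src]
scale, t = estimate_scale_translation(src, dst)
```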
All objects can now be merged into one point cloud and one 3D model (https://skfb.ly/ontR9, accessed on 28 January 2023), which can be exported to the needed formats. The combined model offers an accuracy of 0.0129 m/pix and 6020 points/m2, which lies in the expected range. The processed data can now be used for the virtual environments. The combination process works well, and it makes the two methods, photogrammetry and sonar, directly comparable. The model can be used for underwater diving training, scientific work preparation, measurements, calculations of the water volume, and highly accurate 3D flow simulations. For applications demanding higher accuracy, such as structural analysis or detailed object classification, more accurate source data are necessary (Section 5).

4. Simulation and Data Visualization

As mentioned in Section 2.2, we transfer the training of the AI into a virtual world for simulation and preparation of the boat movement, as well as for visualization. The use of sonar and other sensor technology generates a large amount of data, whose processing, visualization, and analysis require specific models and open up opportunities for AI. An AI function is essential for the automatic movement of the ASV. The sonar's sound pulses generate highly complex point clouds of the waters from which knowledge can be extracted. In detail, this involves automatic object recognition and classification, summarized under the term semantic segmentation. Because the selected inland waters lie in the Saxon region (Germany), the application cases are very specific and have a small data basis. Therefore, one of the main focuses of our research is the laboratory-based preparation of AI algorithms by generating synthetic data in virtual environments. For this purpose, virtual sensors have already been implemented [13] considering the technical specifics; they simulate real measurements and can consider important environmental parameters (conductivity, water temperature, etc.).
Figure 14 shows the general pipeline of creating synthetic data in virtual environments to train certain AI modules, e.g., [14,15,40]. By using underwater photogrammetry by scientific divers for the three-dimensional recording of underwater objects, additional input data (point clouds, 3D models) can be obtained for the path planning of the AI (see Section 3.1). Furthermore, individual objects can be recorded in detail for later investigations. The varying generation of reference environments thus creates a data foundation that prepares the AI specifically for the field experiments. The extensive computations are solved with an AI computer NVIDIA DGX-2, a powerful system with 16 graphics processors for handling complex AI challenges. The pipeline can be summarized as:
  • Create a virtual environment, e.g., with photogrammetry and post-processing in Blender (see Section 3.2).
  • Parameterize depth sensors (mainly sonar from Section 3.1, but LiDAR would be possible as well) and water body layers.
  • Implement water body physics (abstract with water surface displacement or more realistic with physics engine for fluids).
  • Recreate animation pipelines and path movements.
  • Create data with virtual sensors of [13], export these data and train AI.
With complete virtual environments and a final depth sensor setup, labeled synthetic data can be created and exported for AI training. The quality and characteristics of the data depend on the sensor type used, the setting within the virtual world, and the objects that need to be detected. In [41], applications of virtual worlds generated from heterogeneous kinds of open geodata, such as 2D maps, digital elevation models, and aerial photographs, are discussed and explored. These virtual worlds, with the photogrammetry models of Section 3 as simulation environments, are used for the experiments. In the following subsections, we present further information about the aspects of AI and virtual worlds.

4.1. Simulations in Virtual Reality

To simulate the movements of the sonar module attached to the robotic boats, an abstract shape of a water surface (cf. [32]) is created. While Blender offers high-dimensional fluid simulation via MantaFlow, the water surface is instead generated with displacement modifiers based on a procedural noise texture, which requires much less computation time. Changing the texture over time adjusts the height values of the water surface. The boat is connected to the water surface by a shrink-wrap modifier (see Figure 15a), which translates and rotates the boat according to the water surface. Thus, a time-dependent variation of the sonar orientation can be created that is freely parametrizable and adaptable to different wave motions. It should be mentioned that this is not intended as a real-time simulation; rather, it provides suitable transformations for the virtual sensor system to generate realistic synthetic measurement data.
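A minimal stand-in for this setup, assuming a sum-of-sines displacement in place of Blender's noise texture and a gradient-based pose in place of the shrink-wrap modifier, could look like:

```python
import math

def surface_height(x, y, t):
    """Procedural displacement standing in for Blender's noise texture:
    a sum of sine waves whose phase advances with time t (all wave
    parameters are assumed, illustrative values)."""
    return (0.15 * math.sin(1.3 * x + 0.8 * t)
            + 0.10 * math.sin(2.1 * y - 1.1 * t)
            + 0.05 * math.sin(0.7 * (x + y) + 0.5 * t))

def boat_pose(x, y, t, eps=0.01):
    """Shrink-wrap analogue: the boat's height follows the surface, and
    pitch/roll are derived from the local surface gradient."""
    h = surface_height(x, y, t)
    dhdx = (surface_height(x + eps, y, t) - h) / eps
    dhdy = (surface_height(x, y + eps, t) - h) / eps
    return h, math.atan(dhdx), math.atan(dhdy)
```

Sampling `boat_pose` over time yields exactly the kind of freely parametrizable sonar-orientation sequence described above.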
Together, the virtual environment and the water simulation present the scenario as accurately as possible. For visualization, a variety of renderings can be created in Blender (Figure 15b) to increase the understanding of the results and to explain the simulation outcome itself.
For the simulations, we implemented several additional elements in the virtual 3D worlds to recreate a more realistic simulation environment and to train the AI for specific tasks. One important task is detecting movable objects in lakes and seas, especially fish swarms. To derive depth-sensing data of fish swarms, the swarms themselves need to be implemented in the virtual world. In [32], the implementation of such a virtual swarm is described using low-poly fish meshes with a simple rig for a wiggle animation. To create swarm dynamics and animations, particle systems and force fields were used. We used a self-created fish model and a particle system within Blender and animated the fish model within the particle animation. To summarize the implementation, we worked through the following steps:
1. Fish modeling (low poly mesh);
2. Fish rigging and animation (wiggle animation);
3. Particle system with fish object instancing;
4. Force field (vortex and turbulence) to simulate fish swarm movements.
The mesh for the fish was designed based on a mackerel. Even though this is a fish species living in coastal waters, it can serve as a prototypical basis for the experiments. In the data of the depth sensors, a distinction of fish types is almost impossible anyway. Nevertheless, we modeled a low-poly mesh according to the given template image (which was also used as texture) and converted it to an organic fish shape using a subdivision surface modifier (2nd iteration, still low-poly). In addition to the mesh, we created a simple rig of four bones (see Figure 16). Weight painting was used for skinning and connecting the rig to the mesh. We added the copy rotation constraint to bones 2–4 to transfer the rotation from the main bone (head). The goal was to easily incorporate a wiggle animation. This was created by rotating the main bone and adding a cycle modifier for cyclic repetition with Bézier interpolation.
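The wiggle animation can be sketched as a cyclic rotation of the main bone that the remaining bones copy; the amplitude and period below are assumed values, not taken from the paper:

```python
import math

FRAME_PERIOD = 24  # assumed cycle length in frames

def main_bone_angle(frame, period=FRAME_PERIOD, amplitude=math.radians(20)):
    """Cyclic wiggle of the head bone; Blender's cycle modifier repeats
    one keyed period indefinitely (amplitude and period are assumptions)."""
    return amplitude * math.sin(2 * math.pi * frame / period)

def bone_chain_angles(frame, n_bones=4):
    """Copy-rotation analogue: bones 2-4 take their rotation from the
    main bone (bone 1), as with the copy rotation constraint."""
    a = main_bone_angle(frame)
    return [a] * n_bones
```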

4.2. Synthetic Depth Sensing Data

For prototypical experiments, we created some exemplary datasets of synthetic sonar scans in a static (no fish swarms) and a dynamic (with fish swarms) environment. It should be mentioned that the following figures show optimal outputs. With environmental and measurement uncertainties, the point clouds of real-world measurements would display more entropy. For this behavior, ref. [13] provides parametrizable errors that can be added to the measurements. We intend to enrich the virtual measurements with these errors to provide more realistic training data.
For segmentation tasks, the derived data need to be classified and linked to object classes. In [40], a novel approach was presented that focuses on temporal impulses of depth-sensing techniques within the classification process. The ongoing research focuses on the analysis of the synthetic data with the developed classification process, as well as with other known AI and statistical methods for semantic segmentation and part segmentation in point clouds.

4.2.1. Static Underwater Body

As an example, Figure 17 shows the scan of the water bottom based on a side scan sonar. A scan of this quality class can be generated in a few seconds with the mentioned tools of synthetic data generation. The quality of these synthetic data depends fundamentally on the structure and quality of the mesh used for the water bottom. The results of the photogrammetry models have to be partially cleaned or adjusted.
The color coding of the point cloud refers to the height level, not to a classification. Labeling is not included here; it becomes worthwhile once the virtual scene contains objects or object classes, e.g., fish swarms, whose classification and automatic separation are of interest.
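Such a height-based coloring can be sketched as a simple ramp (the concrete colormap here is hypothetical; the renderer's actual mapping is not specified in the text):

```python
def height_to_rgb(z, z_min, z_max):
    """Blue-to-red ramp: the color encodes a point's height level, not a
    class label. The ramp itself is an assumed, illustrative choice."""
    t = 0.0 if z_max == z_min else (z - z_min) / (z_max - z_min)
    t = min(1.0, max(0.0, t))  # clamp outliers to the ramp's ends
    return (t, 0.0, 1.0 - t)
```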

4.2.2. Dynamic Underwater Scene Augmentation

The scans show the results of a dynamic scan including the fish swarm, carried out to test and visualize its detection within the virtual sensor system. Only one category (fish) exists here for labeling, but again, the coloring is linked to the height level (see Figure 18).
As can be seen, the dynamics of the swarm motion (generated by the vortex and turbulence force fields) are visible in the scan. This is still an idealized measurement; entropy and measurement errors could be included. Fundamentally, sonar scans depend on the properties of the water body. The main characteristics to consider are the signal velocity c [m/s] (which varies with the temperature T [°C]), the salinity S [‰], and the depth D [m]. If not enough parameters of the water body are available, a random error can be added using suitable distributions (e.g., a Gaussian error).
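One common empirical relation for the signal velocity is Medwin's formula, sketched here together with the suggested Gaussian error term (the noise sigma is an assumed value):

```python
import random

def sound_speed(T, S, D):
    """Medwin's empirical formula for the speed of sound in water [m/s],
    with temperature T [degC], salinity S [per mille], and depth D [m]."""
    return (1449.2 + 4.6 * T - 0.055 * T ** 2 + 0.00029 * T ** 3
            + (1.34 - 0.010 * T) * (S - 35) + 0.016 * D)

def noisy_range(true_range, sigma=0.05, rng=random.Random(0)):
    """Gaussian measurement error added to a range when water-body
    parameters are unknown (sigma is an assumed, illustrative value)."""
    return true_range + rng.gauss(0.0, sigma)

# Fresh water (S ~ 0 per mille) at 10 degC and 5 m depth
c = sound_speed(10.0, 0.0, 5.0)
```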
Note that the swarm is not captured as a whole, static object during scanning. Fewer points are captured per fish because of the swarm's entropy, and the animation of the sensor affects how the inherently small objects of the fish swarm are hit. It should also be noted that fewer beams are shot in the side scan than in the static scan. The dynamics of the swarm are still detectable in the upper part but disappear at a greater distance from the sensor. Nevertheless, a separate classification of swarm and background can be created. The point cloud contains 4603 points (see Figure 19).
Data generated in this way can be stored efficiently using suitable formats (e.g., HDF5). Thus, the following information can be stored: point position in space (X-, Y-, and Z-coordinate), semantic label (ground truth—belonging to an object in the scene), the intensity of the measuring point and distance between sensor and object surface. This information can now serve as input information for AI models and represents a very comprehensive training database. From the automatic recognition of objects, structures, or patterns, knowledge about the water surface as well as conclusions about the robot’s motion behavior can be drawn. The AI-based data processing is ongoing research and mentioned in the outlook in Section 5.
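A sketch of such a per-point record layout, using a plain stdlib binary encoding in place of HDF5 so the example stays self-contained:

```python
import struct

# Per-point record as listed above: X, Y, Z [m], semantic label, intensity,
# and sensor-object distance. The paper stores such records in HDF5; this
# stand-in packs the same fields into a fixed-size binary layout.
RECORD = struct.Struct("<3f H f f")  # little-endian, no padding

def pack_points(points):
    """points: iterable of (x, y, z, label, intensity, distance) tuples."""
    return b"".join(RECORD.pack(*p) for p in points)

def unpack_points(buf):
    """Inverse of pack_points: recover the list of record tuples."""
    return [RECORD.unpack_from(buf, i * RECORD.size)
            for i in range(len(buf) // RECORD.size)]
```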

4.3. Visualization with VR and AR Techniques

AR and VR techniques offer versatile possibilities for visualizing simulation and measurement results of the presented concepts. On the one hand, measurement results, especially bathymetry data from the sonar, can be displayed correctly anchored in the environment at the respective target water; AR techniques are used here. On the other hand, the cave automatic virtual environment (CAVE) of the TUBAF serves to explore the measurement data and illustrate the simulation calculations, a way to use VR techniques jointly.
The applications allow scientists to anchor and examine a previously created point cloud in a mixed-reality environment at a specific location. The application of AR in the context of this project aims at different aspects of visualization. While we have already addressed VR as a possible use for the simulation of the robotic systems, AR serves to provide information outdoors after measurements for a representation of the real results. We, thus, see two main application areas for AR:
1. Visualization of real acquired depth-sensing data by sonar sensors on real water bodies.
2. Assistance systems for robotic control through feedback on live measurement data.
Sonar scans of seafloors will be able to be anchored at the correct scale and position in the “real” world, allowing conclusions to be drawn about areas of interest. Measured point clouds are to be displayed correctly from the edge of the bank at the target water body and registered into the environment. Due to the data size, a live view of a measurement run is currently not possible; instead, an already measured set of points is displayed in its entirety.

4.3.1. VR Indoor Visualization with a CAVE

The Freiberg extreme definition of spatial immersion and interaction environment (X-SITE) is an innovative CAVE that provides visualizations in an ultra-high resolution of 50 million pixels. Similar to other CAVE-like displays, 3D visualizations of interactive virtual worlds are projected on multiple wall-sized, seamless projection screens. As a distinctive feature of the X-SITE, the images on the projection walls are generated through the coordinated operation of 24 full-HD projectors. Even at a very close distance to the projection walls, users experience smooth and detailed images without perceiving individual pixels. The combination of extreme-definition visualization quality and reduced space requirements makes the X-SITE a one-of-a-kind VR room.
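The quoted pixel count follows directly from the projector configuration, as a quick sanity check:

```python
# 24 full-HD projectors at 1920 x 1080 pixels each
pixels = 24 * 1920 * 1080
# 49,766,400 pixels, i.e., roughly the quoted 50 million
```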
We use the CAVE to visualize the measurement data and recapitulate the measurement runs by displaying the field tests in simulation environments. For this purpose, game engines, e.g., Unity or Godot, are used in which a realistic representation of the environment takes place (e.g., through free geodata as in [41]) and use of the 3D models for ASV and water subsurface. An example is shown in Figure 20.

4.3.2. AR Outdoor Visualization with Mobile Devices

The use case for AR described here presents a data visualization opportunity for a person at the edge of the shore to project bathymetry data into the real world. A Microsoft HoloLens 2 (HL2) is used as AR hardware for this application, as a stand-alone solution without an external computing unit. This design allows mobile use at the water but limits the display scope of the point clouds (empirically, a maximum of about 300,000 points). The application is implemented in Unity and uses the pcx-Point Cloud Importer (https://github.com/keijiro/Pcx, accessed on 28 January 2023) to display the point clouds. Currently, the workflow is implemented as follows:
  • Acquire the point cloud from real data of the sonar sensor system.
  • Convert the point cloud to Potree format (https://github.com/potree/PotreeConverter, accessed on 28 January 2023), which enables a level-of-detail (LoD) representation of the point clouds.
  • Build the scene in the Unity game engine and integrate the Potree point cloud as a dynamic point cloud set into the Unity scene.
  • Load the output in the HL2 app.
  • Register via the global positioning system (GPS).
The Potree format allows the dynamic inclusion of large point clouds but requires a conversion step that is not applicable for live viewing. Thus, no live view can be based on this workflow. Another challenge is the correct georeferencing of the point clouds from the water’s edge. Since no markers are used and the features of the natural environment are largely repetitive, we additionally use GPS. Since GPS is not integrated into the HL2, a smartphone sends GPS information (https://play.google.com/store/apps/details?id=com.cajax.gps2bt2&hl=de, accessed on 10 November 2022) of the user’s position via Bluetooth. The measurement data of the sonar are enriched with GPS information. Thus, the user and measurement data can be transferred into a common coordinate system (see Figure 21).
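Bringing the user's GPS fix and the GPS-enriched sonar data into a common frame can be sketched with a local tangent-plane approximation (an illustration of the idea, not the app's actual registration code):

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius [m]

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Equirectangular approximation: project a GPS fix into a local
    east/north frame (metres) around a reference point. Adequate over
    the few hundred metres of a survey site; the HL2 registration may
    use a different projection."""
    east = (math.radians(lon - ref_lon)
            * EARTH_RADIUS * math.cos(math.radians(ref_lat)))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS
    return east, north
```

Applying the same projection to the user's position and to every GPS-tagged sonar point places both in one metric coordinate system for the AR overlay.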
Integration of automatic segmentation and classification of the point clouds (parts) by AI, which was introduced in Section 4.2, can be done here to cluster the augmented data. In combination, this results in an AR application that not only displays the measured data but also provides additional information about specific geological and terrestrial features.
Our future work will focus on the development of AI-based assistance systems, including the associated human–machine interfaces for the semi-autonomous control of ASV. The goal is to expand the range of application of robotic boats to areas of water that are difficult to navigate, e.g., areas that cannot be seen by the operator on the shore or whose navigation cannot be planned exactly in advance due to changing water levels. The sensors enable the detection of the environment and obstacles above and below water. Navigation routes are automatically adjusted in case of imminent collisions or re-planned in case of an interaction between AI and human operators. For this purpose, the sensor-based 3D environment model of the robotic boat is transmitted to the operator and presented via VR and AR user interfaces.

5. Conclusions and Outlook

5.1. Conclusion and Discussion

Referring to the aims of Section 1.1, the initial steps were achieved. The measurement concept for the different cases, subsurface, water quality, and respiration, was proven. During the field trips and the hardware tests of the ASV, the response was immense: for researchers such as limnologists and hydrogeologists, as well as for water suppliers, renaturation companies, and water owners, the depth-resolved water parameters are of interest. All want a better understanding of the water values and processes across the water body.
The ASV was customized and constructed, and the clustered load system offers the best performance. It is easy to switch between the different measurements and generate the different parameters (Section 2.1). Additionally, the system is easy to move and can be handled by two people. For further investigations, measurements with three different robots at the same time are possible. The engine used provides enough power for the current tasks, but for a higher travel speed and better performance, a common electrical boat engine should be used. In addition, a streamlined, optimized structure of the floating body will be used (see Figure 22).
The echo sounder system was integrated into the swimming platform and operates reliably with it. The data generated during the field trips are used to obtain a dataset for simulating the platform’s motion behavior. Additionally, a combined 3D model of the current results offers various possibilities for improving the application. Therefore, autonomous path planning should be tested in straightforward ways during the measurement. Furthermore, it is necessary to improve the use of multi-frequency data and real-time adjustment of the recording parameters, such as the fan angle and power. The integration of obstacle detection for autonomous driving is also necessary and will be tested in future investigations.
The use of scientific divers to examine the surveyed geometry (and support the surveys) expands and accelerates the interpretation of data that would otherwise remain completely inaccessible for direct examination. For the underwater photogrammetry of objects in fresh water, the use of light is essential for the result of the model. It was shown that underwater photogrammetry offers a higher resolution of the underwater surface with the addition of texture. Therefore, more analyses can be done. The area of investigation at the same time is smaller than with a sonar system, but the effort is higher because of the on-site underwater investigation. It has been shown that a combination of remote sensing methods and on-site investigation through underwater photogrammetry imaging provides the highest information content.
Currently, the accuracy of the landscape data is sufficient only for representation and visualization. It can be improved by a specialized drone for photogrammetric use or by a combination of nadir and oblique images [42]. That would make it possible to use the combined model for applications such as civil engineering, environmental monitoring, or inspections.
The combination process of sonar and photogrammetry data was developed and works well with Agisoft Metashape. For the underwater and sonar data, similar faces and geometry are used; for the over-water and underwater objects, shallow areas are used. The combination is based on the sonar data point cloud because the corrected GNSS signal was used. The area with less than 1 m of water depth still has potential for quality improvement, which can be achieved with a higher density of sonar data in these areas. In soft ground with plants, in particular, the performance of the sonar system needs further investigation. The area can be investigated better with the ASV platform-V2 and another projection angle. An important point is the cleaning of the landscape model at the water surface. It was done manually but can be improved by point classification [40] or the use of image-masked areas.
For the combination of all parts, no detailed error calculation was possible. The problem is that no underwater control points are available, and this problem has not yet been solved. It is possible to calculate a dimensional error for a single object, but not a positional error at large scale, such as over 100 m of underwater distance. It was not possible for divers to measure the distance with the accuracy needed for validation. These investigations have to be done in an artificial environment (such as a swimming pool) where positions and distances can be controlled with a higher resolution.
We succeeded in using modern methods of human–machine communication, namely AR and VR, in a variety of ways. On the one hand, we were able to show that virtual test environments can produce data that correspond to real-world models, thus reducing the time required to provide AI models. On the other hand, both AR and VR could be used to visualize the diverse datasets. Although AI models are not integrated into the actual robot control in the research results to date, we could show that diverse training data can be generated for a wide range of different application scenarios. This approach combines the results of photogrammetry with the virtual sensor technology we have developed and will be combined with the presented ASV in the next steps.

5.2. Outlook

The collected data are the basis for all further investigations of the RoBiMo project and the combined ASV investigation. The data from the water sensors and the respiration measurement should be integrated into the model. These measurements first have to be validated in the lab and compared to commercial sensor systems. After in situ validation by the scientific divers and an endurance test, the system can be used by companies.
To improve the quality of the underwater photogrammetry data for representation purposes or environmental monitoring, color correction, as in [43], will be applied, enabling better long-term monitoring of underwater areas. For higher accuracy, calibrations beyond the use of markers and scales, such as the use of a stereo camera system [44], can improve the results.
The next step is an automated combination of the sonar point cloud and the photogrammetry data by a Python script in Metashape. With that, a better and more efficient process using the TUBAF high-performance cluster can be realized.
The final goal is, on the one hand, to use the photogrammetry information to design virtual environments for ASV simulations. This approach was already presented conceptually in this paper and coupled with the already developed virtual sensors. In the future, these virtual worlds have to be supplemented with further details (e.g., fish swarms) and geologically significant effects (e.g., sediment migration) to achieve a realistic image within the virtualization. On the other hand, it includes the integration of AR and VR interfaces into the extended visualization of acquired data and their registration into the real environment (e.g., anchoring measured water substrates in the real world). These have to be used within assistance systems that support manual control where it is necessary in certain situations. Therefore, we will combine methods from AR and VR directly with the boats to construct a modern assistance system for telerobotics that supports live operation. Here, we also intend to use the AI models whose databases are discussed in this paper. A detailed investigation of suitable AI paradigms and their integration into the assistance system (e.g., for live detection of objects underwater) will be the goal of our future research.
Finally, a long-term test with a base station and a small water garage should be realized. After the first investigation of the underwater surface, one robot for water monitoring and one for respiration can work daily, weekly, or in 24 h measurement campaigns. This enables a better understanding of the short-term influence of heavy rain, heat waves, and storms on the water quality and CO2 absorption of water.

Author Contributions

Conceptualization, S.P. and S.R.; virtual reality and artificial intelligence, S.R.; ASV and echo sampling, G.J.L.; photogrammetry, scientific diving, and model combination, S.P.; writing—original draft preparation, S.P., S.R. and G.J.L.; writing—review and editing, S.P., S.R., G.J.L., T.G. and T.F.; visualization, S.P., S.R. and G.J.L.; supervision, T.G. and T.F.; project administration, T.F. All authors have read and agreed to the published version of the manuscript.

Funding

Open Access Funding by the Publication Fund of the TU Bergakademie Freiberg. This publication is part of the ESF research group RoBiMo. This research is co-financed by taxes based on the budget passed by the Saxon state parliament and by the European Social Fund. The authors acknowledge computing time on the compute cluster of the Faculty of Mathematics and Computer Science of Technische Universität Bergakademie Freiberg, operated by the computing center (URZ) and funded by the Deutsche Forschungsgemeinschaft (DFG) under DFG grant number 397252409.

Data Availability Statement

The authors do not have permission to share the data.

Conflicts of Interest

The authors declare no conflict of interest.

Glossary

The following abbreviations are used in this manuscript:
AI	artificial intelligence
AR	augmented reality
ASV	autonomous operation swimming vehicle
AUV	autonomous operation underwater vehicle
CAVE	cave automatic virtual environment
EPSG	European Petroleum Survey Group Geodesy
GNSS	global navigation satellite system
GPS	global positioning system
HL2	Microsoft HoloLens 2
LiDAR	light detection and ranging
LoD	level-of-detail
ML	machine learning
RoBiMo	robot-assisted freshwater monitoring
RUV	remote operation underwater vehicle
RSV	remote operation swimming vehicle
Sonar	sound navigation and ranging
TUBAF	Freiberg University of Mining and Technology
X-SITE	extreme definition of spatial immersion and interaction environment
VR	virtual reality

References

  1. FAZ. Deutscher Wetterdienst Bestätigt neuen Hitzerekord von 42.6 Grad. 2019. Available online: https://www.faz.net/aktuell/gesellschaft/deutscher-wetterdienst-bestaetigt-neuen-hitzerekord-von-42-6-grad-16303898.html (accessed on 31 January 2022).
  2. European Commission. Water Framework Directive 2000/60/EC. Off. J. Eur. Communities 2000, L269, 1–15. [Google Scholar]
  3. DIN 38402-12:1985-06; Deutsche Norm. German Standard Methods for the Examination of Water, Waste Water and Sludge; General Information (Group A); Sampling from Barrages and Lakes (A 12). Deutsches Institut für Normung e.V.: Berlin, Germany, 1985. [CrossRef]
  4. Ziemińska-Stolarska, A.; Imbierowicz, M.; Jaskulski, M.; Szmidt, A.; Zbiciński, I. Continuous and periodic monitoring system of surface water quality of an impounding reservoir: Sulejow reservoir, Poland. Int. J. Environ. Res. Public Health 2019, 16, 301. [Google Scholar] [CrossRef] [PubMed]
  5. Schipek, M. Treatment of acid mine lakes–Lab and field studies. FOG Freib. Online Geosci. 2011, 29, 1–358. [Google Scholar]
  6. Oertel, C.; Matschullat, J.; Zurba, K.; Zimmermann, F.; Erasmi, S. Greenhouse gas emissions from soils—A review. Geochemistry 2016, 76, 327–352. [Google Scholar] [CrossRef]
  7. Marcé, R.; Obrador, B.; Gómez-Gener, L.; Catalán, N.; Koschorreck, M.; Arce, M.I.; Singer, G.; von Schiller, D. Emissions from dry inland waters are a blind spot in the global carbon cycle. Earth Sci. Rev. 2019, 188, 240–248. [Google Scholar] [CrossRef]
  8. Xiang, X.; Niu, Z.; Lapierre, L.; Zuo, M. Hybrid underwater robotic vehicles: The state-of-the-art and future trends. HKIE Trans. Hong Kong Inst. Eng. 2015, 22, 103–116. [Google Scholar] [CrossRef]
  9. DVW-Gesellschaft für Geodäsie, Geoinformation und Landmanagement. Hydrographie 2018–Trend zu Unbemannten Messsystemen; Wißner-Verlag: Augsburg, Germany, 2018; p. 241. [Google Scholar]
  10. Nakagawa, M. Point Cloud Clustering Using Panoramic Layered Range Image. In Recent Applications in Data Clustering; IntechOpen: London, UK, 2018. [Google Scholar] [CrossRef]
  11. Kisner, H.; Thomas, U. Segmentation of 3D Point Clouds using a New Spectral Clustering Algorithm without a-priori Knowledge. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISAPP, Funchal, Portugal, 27–29 January 2018; Volume 4, pp. 315–322. [Google Scholar] [CrossRef]
12. Aggarwal, C.C.; Reddy, C.K. Data Clustering: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  13. Reitmann, S.; Neumann, L.; Jung, B. Blainder—A blender ai add-on for generation of semantically labeled depth-sensing data. Sensors 2021, 21, 2144. [Google Scholar] [CrossRef] [PubMed]
14. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
  15. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
  16. Yi, L.; Kim, V.G.; Ceylan, D.; Shen, I.C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; Guibas, L. A Scalable Active Framework for Region Annotation in 3D Shape Collections. ACM Trans. Graph. ToG 2016, 35, 1–12. [Google Scholar] [CrossRef]
17. Agrafiotis, P.; Skarlatos, D.; Georgopoulos, A.; Karantzalos, K. Shallow water bathymetry mapping from UAV imagery based on machine learning. In Proceedings of the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Copernicus GmbH: Hannover, Germany, 2019; Volume 42, pp. 9–16. [Google Scholar] [CrossRef]
18. Gibs, J.; Wilde, F.D.; Heckathorn, H.A. Use of Multiparameter Instruments for Routine Field Measurements (Ver. 1.1): U.S. Geological Survey Techniques of Water-Resources Investigations; Book 9, Chapter A6, Section 6.8; U.S. Geological Survey: Reston, VA, USA, 2007; Volume 1, pp. 1–48.
  19. Hofmann, H.; Ostendorp, W. Seeufer: Wellen–Erosion–Schutz–Renaturierung: Handlungsempfehlungen für den Gewässerschutz: Ergebnisse aus dem ReWaM-Verbundprojekt HyMoBioStrategie (2015–2018); KOPS Universität Konstanz: Konstanz, Germany, 2019. [Google Scholar]
20. Degel, C. FactSheet HydroCrawler; Technical Report; Fraunhofer-Institut für Biomedizinische Technik (IBMT): Sulzbach, Germany, 2019. [Google Scholar]
  21. Müller, E.N.; Schaik, L.V.; Blume, T.; Bronstert, A.; Carus, J.; Fleckenstein, J.H.; Fohrer, N.; Gerke, H.H.; Graeff, T.; Hesse, C.; et al. Herausforderungen der ökohydrologischen Forschung in Deutschland. Hydrol. Wasserbewirtsch. 2014, 58, 221–240. [Google Scholar] [CrossRef]
  22. Krebs, P. FactSheet BOOT-Monitoring; Technical Report; Technische Universität Dresden: Dresden, Germany, 2019. [Google Scholar]
23. Wehmeyer, D. FactSheet RiverBoat; Technical Report; Forschungsinstitut für Wasser- und Abfallwirtschaft an der RWTH Aachen (FiW) e. V.: Aachen, Germany, 2019. [Google Scholar]
  24. Kapetanović, N.; Vasilijević, A.; Nađ, Đ.; Zubčić, K.; Mišković, N. Marine robots mapping the present and the past: Unraveling the secrets of the deep. Remote Sens. 2020, 12, 3902. [Google Scholar] [CrossRef]
  25. Gangelhoff, J.; Werner, C.S.; Reiterer, A. Compact, large aperture 2D deflection optic for LiDAR underwater applications. SPIE Int. Soc. Opt. Eng. 2022, 12263, 5. [Google Scholar] [CrossRef]
26. Dunbabin, M.; Grinham, A. Experimental evaluation of an Autonomous Surface Vehicle for water quality and greenhouse gas emission monitoring. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 5268–5274. [Google Scholar] [CrossRef]
27. Robotic Subsea Exploration Technologies | ROBUST Project | Fact Sheet | H2020; CORDIS European Commission: Brussels, Belgium, 2020. [CrossRef]
  28. Edson, E.C.; Patterson, M.R. MantaRay: A novel autonomous sampling instrument for in situ measurements of environmental microplastic particle concentrations. In Proceedings of the OCEANS 2015–MTS/IEEE Washington; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
29. Mills, G.; Fones, G. A review of in situ methods and sensors for monitoring the marine environment. Sens. Rev. 2012, 32, 17–28. [Google Scholar] [CrossRef]
  30. Polonschii, C.; Gheorghiu, E. A Multitiered Approach for Monitoring Water Quality. In Proceedings of the Energy Procedia; Elsevier: Amsterdam, The Netherlands, 2017; Volume 112, pp. 510–518. [Google Scholar] [CrossRef]
  31. Miles, E.J. Guidelines Shallow Water Quality Monitoring Continuous Monitoring Station: Selection, Assembly & Construction; Technical Report; Virginia Institute of Marine Science: Gloucester Point, VA, USA, 2009. [Google Scholar] [CrossRef]
  32. Reitmann, S.; Jung, B. Generating Synthetic Labeled Data of Animated Fish Swarms in 3D Worlds with Particle Systems and Virtual Sound Wave Sensors; Springer International Publishing: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  33. Dreier, O.; Güth, F.; Joseph, Y. P4.7–Multiparameter-Messkette für tiefenaufgelöste Gewässerüberwachung. In Proceedings of the Poster; AMA Service GmbH: Wunstorf, Germany, 2021; pp. 203–207. [Google Scholar] [CrossRef]
  34. Schwarzak, S.; Hänsel, S.; Matschullat, J. Projected changes in extreme precipitation characteristics for Central Eastern Germany (21st century, model-based analysis). Int. J. Climatol. 2015, 35, 2724–2734. [Google Scholar] [CrossRef]
  35. Jarosch, L.; Pose, S.; Reitmann, S.; Dreier, O.; Licht, G.; Röder, E. Roboter für das Wasser der Zukunft. WWT Wasserwirtsch. Wassertech. 2020, 66–69. [Google Scholar]
  36. Scientific Diving Center TU Bergakademie Freiberg. Training of Scientific Divers | TU Bergakademie Freiberg; Scientific Diving Center TU Bergakademie Freiberg: Freiberg, Germany, 2021. [Google Scholar]
  37. Pose, S.; Reitmann, S.; Licht, G.; Grab, T.; Fieback, T. RoBiMo—The tasks of scientific divers for robot-assisted fresh-water monitoring. Freib. Online Geosci. Spec. Vol. Proc. 6th Eur. Conf. Sci. Diving 2021, 58, 32–38. [Google Scholar]
  38. Agisoft. AgiSoft Metashape Professional (Version 1.7.4) (Software). 2021. Available online: https://www.agisoft.com/ (accessed on 28 January 2023).
  39. QPS. Qinsy 9. 2022. Available online: https://www.qps.nl/qinsy/# (accessed on 28 January 2023).
  40. Reitmann, S.; Kudryashova, E.V.; Jung, B.; Reitmann, V. Classification of Point Clouds with Neural Networks and Continuum-Type Memories. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2021; pp. 505–517. [Google Scholar]
  41. Richter, F.; Reitmann, S.; Jung, B. Integration of Open Geodata into Virtual Worlds. In Proceedings of the 6th International Conference on Virtual and Augmented Reality Simulations, ICVARS ’22; Association for Computing Machinery: New York, NY, USA, 2022; pp. 9–13. [Google Scholar] [CrossRef]
  42. Kersten, T.P.; Wolf, J.T.; Lindstaedt, M. Genauigkeitsuntersuchungen des UAV-Systems DJI Matrice 300 RTK mit den Sensoren P1 und L1 im Hamburger Testfeld. In Beitrag Oldenburger 3D-Tage; Wichmann: Berlin, Germany, 2022; pp. 298–313. [Google Scholar]
43. Akkaynak, D.; Treibitz, T. Sea-Thru: A method for removing water from underwater images. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; Volume 2019, pp. 1682–1691. [Google Scholar] [CrossRef]
44. Shortis, M. Camera Calibration Techniques for Accurate Measurement Underwater; Springer Nature Switzerland AG: Cham, Switzerland, 2019; pp. 11–27. [Google Scholar] [CrossRef]
Figure 1. Schematic overview of the investigations with the autonomously operating ASV: continuous, three-dimensional, multisensory recording of inland-water bathymetry, depth-resolved water parameters, detection of groundwater inflow, measurement of water-body respiration, validation of the results by scientific divers, and presentation of the results using AI and VR methods.
Figure 2. Overview of the system: the hardware base is the swimming robot (Section 2.2), which supports the three measurement cases of sonar sampling (Section 3.1), water parameter measurement (Section 2.4), and gas exhaust measurement (Section 2.4), topped by the software functions (Section 4) and the visualisation.
Figure 3. The ASVs used: (a) “Elisabeth”, a Clearpath Kingfisher, and (b) “Ferdinand”, with the operating system and the drives.
Figure 5. Special in situ laboratory devices for measuring water parameters such as electrical conductivity, pH value, and redox potential, housed for use by scientific divers.
Figure 6. (a) Sonar data acquisition with the swimming platform “Ferdinand” at a flooded quarry in Germany and (b) respiration measurement in the wetlands of Brazil.
Figure 7. 3D model of the sonar point cloud with the reconstruction parameters for Agisoft Metashape.
Figure 9. Reconstructed landscape and the reconstruction parameters in Agisoft Metashape [38]: the digital height model derived from the drone images, manually cleaned.
Figure 10. Reconstructed underwater objects: (a) injection pump, (b) weapons, (c) switch box, and (d) wheel, with the scale and the local coordinate system.
Figure 11. Comparison of (a) the sonar data, (b) a detailed view of the sonar data, and (c) the photogrammetric data of two gravestones.
Figure 12. Combination of the sonar data processed with Qinsy [39], the drone recordings, and the underwater models, aligned and stitched by Metashape into a holistic georeferenced model.
Figure 13. Result of the reconstructed underwater and landscape models combined in the EPSG:25833 coordinate system.
Figure 14. Pipeline of depth-sensing data analysis with AI for the robotic boat.
Figure 15. (a) Robotic boat “Elisabeth” within the virtual environment. Movements of the water surface are implemented through displacement maps and shrink-wrap modifiers. (b) Color textures enable accurate renderings for visualization purposes.
Figure 16. (a) Rigged fish model with four bones and bone constraints (green colored, copy rotation of the main bone at the head region) to create a wiggle animation; (b) shows the model used in a particle system influenced by a force field in the virtual environment created by photogrammetry (MatCap coloring for 3D representation).
Figure 17. Side scan sonar results for synthetic data generation of point cloud data (perspective view).
Figure 18. Dynamic scan of a virtual fish swarm (left: side perspective, right: top perspective).
Figure 19. (a) The 3D mesh representation with colored material for segmentation with the virtual sonar sensor; (b) segmented point cloud with fish (red) and ground (green) classes.
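As a toy illustration of the two-class labeling shown in Figure 19 — not the neural-network segmentation actually used in the paper — a simple height-threshold rule can separate ground echoes from fish echoes in a point cloud. The ground level and tolerance values here are assumptions chosen for the sketch:

```python
# Toy two-class point-cloud labeling: "ground" vs. "fish".
# This is NOT the paper's learned segmentation; it is a minimal
# height-threshold sketch with assumed ground level and tolerance.
def label_points(points, ground_level=-10.0, tolerance=0.5):
    """Label each (x, y, z) point 'ground' if its z lies within
    `tolerance` of the assumed ground level, otherwise 'fish'."""
    labels = []
    for _x, _y, z in points:
        labels.append("ground" if abs(z - ground_level) <= tolerance else "fish")
    return labels

cloud = [(0.0, 0.0, -10.1), (1.0, 2.0, -4.0), (3.0, 1.0, -9.8)]
print(label_points(cloud))  # ['ground', 'fish', 'ground']
```

A learned segmenter such as PointNet replaces this hand-set threshold with features inferred from labeled training data, which is why the synthetic scenes above are generated with per-point class labels.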
Figure 20. Exemplary scene in (a) the Freiberg X-SITE CAVE to visualize field trials using historic data of the runs, embedded in 3D scenes (b).
Figure 21. Virtual scene in the Unity game engine to display large point clouds and stream them to an HMD, in our case a Microsoft HoloLens 2 (HL2).
Figure 22. Construction model and next-generation ASV prototype with higher performance and a streamlined design of the winch and sonar system.
Table 1. Comparison of the projects HyMoBio, BOOT-Monitoring, and River-View as ASVs with water quality measurement and sea ground detection by an echo sounder system.
Project: HyMoBio [19,20]
  Drive/path planning: autonomous, with four rotatable drives along a grid.
  Measured water parameters:
  • Water temperature.
  • Electrical conductivity.
  • Turbidity.
  • Solute oxygen.
  • pH value.
  • Redox potential.
  Water parameter resolution: area or profile measurement with a multi-parameter probe.
  Underground detection:
  • Multibeam 600–1400 kHz.

Project: BOOT-Monitoring [21,22]
  Drive/path planning: no own drive, no path planning.
  Measured water parameters:
  • Water temperature.
  • Electrical conductivity.
  • Turbidity.
  • Solute oxygen.
  • pH value.
  • Redox potential.
  • Photosynthetically usable radiation.
  • Other elements.
  Water parameter resolution: surface measurement by ion-selective probes, multi-parameter probes, photometry, spectroscopy, and a light sensor.
  Underground detection:
  • Single beam.
  • Side-scan sonar.

Project: River-View [9,23]
  Drive/path planning: remote-controlled or autonomous using a pre-programmed route.
  Measured water parameters:
  • Water temperature.
  • Electrical conductivity.
  • Turbidity.
  • Solute oxygen.
  • pH value.
  • Redox potential.
  • Water density.
  Water parameter resolution: surface measurement using a multi-parameter probe.
  Underground detection:
  • Single beam.
  • Side-scan sonar.

Project: RoBiMo
  Drive/path planning: autonomous catamaran with two impeller engines.
  Measured water parameters:
  • Water temperature.
  • Electrical conductivity.
  • Turbidity.
  • pH value.
  Water parameter resolution: area measurement with continuous depth profiles in 10 steps during the motion.
  Underground detection:
  • Multibeam 200–450, 700 kHz.
  • Side-scan sonar.
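RoBiMo's area measurement records continuous depth profiles in ten steps during the motion. Purely as an illustrative sketch — the equal-interval scheme below is an assumption, not taken from the paper — the local water depth could be divided into ten evenly spaced measurement depths:

```python
# Illustrative sketch only: an assumed equal-interval scheme for
# choosing ten measurement depths from the local total water depth.
def depth_steps(total_depth, n_steps=10):
    """Return n_steps measurement depths (in m), evenly spaced from
    one step below the surface down to the local total depth."""
    step = total_depth / n_steps
    return [round(step * (i + 1), 3) for i in range(n_steps)]

print(depth_steps(25.0))
# [2.5, 5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5, 25.0]
```

In practice the winch-lowered multi-parameter chain dictates the actual step positions; this sketch only shows how a fixed step count translates into depth-resolved sampling points.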
Share and Cite

MDPI and ACS Style

Pose, S.; Reitmann, S.; Licht, G.J.; Grab, T.; Fieback, T. AI-Prepared Autonomous Freshwater Monitoring and Sea Ground Detection by an Autonomous Surface Vehicle. Remote Sens. 2023, 15, 860. https://doi.org/10.3390/rs15030860