1. Introduction
In recent years, many industrial 3D scanners have been introduced whose performance is sufficient for the needs of various industries and research fields. However, most industrial 3D scanners are designed to work in ordinary environments, and special-purpose 3D scanners are very expensive and support only a narrow range of applications. When a researcher builds a 3D scanner for research purposes from well-known basic principles, the resulting scanner may fall short of the performance of commercially competitive industrial 3D scanners.
This study starts from the need for 3D imaging equipment for a remote dismantling system that uses a robotic system in nuclear facility dismantling. When dismantling a nuclear facility that has been in operation for several decades, highly activated structures must be remotely dismantled underwater to prevent exposure of equipment and workers. Existing nuclear facility decommissioning projects have been carried out with large, submerged cutting equipment operated by field workers above the water surface. Since long-term low-dose exposure of workers cannot be avoided despite the protection of the water, and all simple repetitive tasks are performed manually, a remote dismantling system using robots is required for safety and efficiency. The 3D imaging equipment is necessary for the robotic system to localize the target structure in radioactive, underwater environments.
In the remote dismantling system developed by us, the robotic system uses a pre-planned path created by a digital manufacturing system developed in our previous research [1]. Since the pre-planned path is created in a virtual environment, it must be aligned to the actual target structure through target localization using 3D scanning. The proposed remote dismantling system implements a semi-automated robotic system through the pre-planned path and target localization instead of the direct teleoperation or shared teleoperation used in previous studies [2,3,4]. Since both direct teleoperation and shared teleoperation rely on manipulation by an operator and on visual feedback, the operator's excessive workload and visual obstacles, such as flames, bubbles and debris, have made the application of robotic systems impractical in nuclear facility dismantling. Therefore, 3D imaging equipment capable of supporting the pre-planned path and target localization is key to applying the semi-automated robotic system in nuclear facility dismantling.
Considering the radiation intensity of the dismantling site and the working space of the robotic system, the 3D imaging equipment must be able to measure up to 2.5 m away, achieve a positioning accuracy within 1 mm, and survive a cumulative dose of 1 kGy. In a survey of commercial underwater 3D scanners [5], Newton Labs' NM200UW [6] was found to be usable in radioactive and underwater environments, but it falls short of the requirements of a remote dismantling system, with a maximum measurable distance of 0.9 m and a radiation resistance of 43 Gy. Since Photoneo's PhoXi XL [7] can measure up to 3.78 m with a positional accuracy of 0.5 mm, we decided to develop the 3D imaging equipment based on the PhoXi XL, which satisfies the requirements of the remote dismantling system. The PhoXi XL uses a continuous-wave Laser Line Scanning (LLS) method and consists of a moving laser pointer built around a mirror galvanometer, a camera and an embedded computer for image processing.
We aimed to develop 3D imaging equipment based on the industrial 3D scanner PhoXi XL that is waterproof, radiation resistant and refraction corrected. Waterproofing could be solved simply by making a housing with windows and waterproof connectors. Radiation resistance could also be addressed simply by mounting a shield on the front side, since the main radiation source is the target structure in front of the equipment when dismantling highly activated structures. However, the 3D imaging equipment still needs an optical path that can send and receive light to the front despite the front shielding. Refraction correction could be solved by refraction modeling, measurement experiments and parameter studies. Hidden technical details of the industrial 3D scanner leave many unknown parameters in the refraction model, so a process for optimizing the unknown parameters from experimental results was necessary.
Radiation hardening and shielding are the typical methods used for radiation protection. Radiation hardening makes electronic components and circuits radiation tolerant; shielding uses a solid or liquid material that absorbs radiation energy to block radiation. Radiation hardening can be a more fundamental solution than shielding and minimizes the increase in volume and weight, but it can only be achieved by exchanging fragile electronic components for radiation-tolerant ones. Therefore, shielding was the only choice for radiation protection of 3D imaging equipment built on an industrial 3D scanner whose electronic components are not easily accessible to users.
To develop 3D imaging equipment, radiation hardening can be attempted. Zhao and Chi [8,9] developed underwater 3D scanners using a continuous-wave LLS method. To make such scanners radiation tolerant, the electronic components and the camera would need to be exchanged for radiation-tolerant components [10,11], and the embedded computer for image processing would need to be located away from the radioactive environment. However, such an attempt may require a very long time and much effort to satisfy the requirements of the remote dismantling system. In a radioactive environment, the use of industrial products protected by shielding can be a very efficient alternative, as the study by Shin shows [12].
Refraction correction is required to improve the positional accuracy of the 3D point cloud measured in water, so that the position estimated by 3D registration represents the actual location of the scanned object. When a 3D scanner in a waterproof housing scans an object in water through a window, the laser emitted from the laser pointer is refracted as it passes through the window and travels through the water, where it is reflected by the object and refracted again through the window into the camera. Since the 3D scanner calculates 3D positions by the triangulation principle, the angle change caused by refraction produces a position error. In previous studies [8,9,13,14,15,16,17,18], all the parameters required for refraction correction were known because the 3D scanners were designed and manufactured by the researchers themselves. In this study, unlike in previous studies, we optimized the unknown parameters for refraction correction through an intuitive parameter study.
The goal of this study was to develop 3D imaging equipment for a robotic system that can accurately localize an activated target structure in water despite the position difference between the digital model and the actual target structure. The requirements for the 3D imaging equipment are a positional accuracy within 1 mm over a distance range of 0.5 to 3.5 m and a radiation resistance of 1 kGy. To meet these requirements, we designed a housing structure for radiation protection and waterproofing, calculated the shielding thickness and verified it through experiments, and performed refraction correction through refraction modeling, measurement experiments and parameter studies.
The following section presents the proposed housing design for radiation protection and waterproofing. Section 3 describes the process of investigating the initial radiation resistance, calculating the thickness of the shield, and verifying the shield thickness. Section 4 covers refraction modeling for the developed 3D imaging equipment, measurement experiments in air and water, and the study of the unknown parameters of the refraction model. The conclusion section summarizes the study and states the potential contributions of the presented method.
3. Radiation Protection
Radiation protection through shielding is ultimately determined by calculating the thickness from the initial radiation resistance, the radiation sources and the shielding material. According to the radiological characterization of the reactor pressure vessel in a nuclear power plant, the main gamma-ray source is Co-60 [20,21]. Half value layers vary with the gamma-ray energy and the shield material; when the radiation source is Co-60 and the shield material is lead, the half value layer is 1.2 cm [22]. The following subsections describe the process of investigating the initial radiation resistance, calculating the shielding thickness and verifying the final radiation resistance.
3.1. Initial Radiation Resistance
An irradiation experiment was performed to investigate the initial radiation resistance of the industrial 3D scanner and the changes in the optical characteristics of window and mirror candidates, as shown in Figure 5. A thick lead block protects the embedded computer inside the industrial 3D scanner, because the embedded computer must survive to the end of the experiment to drive the laser pointer, mirror galvanometer and camera. The initial radiation resistance of the embedded computer itself was investigated using a second embedded computer of the same type. The window and mirror candidates are a fused silica window, an aluminum-coated first surface mirror and a silver-coated first surface mirror. One of the two IP cameras observes the scan target continuously; when the laser pattern projected by the industrial 3D scanner triggers an image change event, this IP camera transmits the laser pattern image to the monitoring computer. The other IP camera observes the windows and mirrors and transmits images to the monitoring computer every 60 s. The monitoring computer requests a 3D point cloud from the industrial 3D scanner and a ping from the embedded computer every 60 s.
The radiation source is Co-60, whose energy spectrum consists of 1.17 and 1.33 MeV gamma rays. The absorbed dose rate in the experiment is 4.32 × 10¹ Gy/h, measured at a location 730 mm away from the radiation source. The total absorbed dose during the experiment is 1.21 × 10³ Gy over an irradiation time of 28 h. The ambient laboratory temperature is 25.6 °C at the start of the experiment and 25.4 °C at the end.
The initial radiation resistance of the industrial 3D scanner is 259.4 Gy, since an abnormality occurred 5 h and 57 min after the start of the experiment. As shown in Figure 6, in the normal case, when the monitoring computer requests data from the industrial 3D scanner, the scanner sends the 3D point cloud and the image captured by its camera. At the same time, the IP camera detects the image change event caused by the projected laser pattern and transmits the laser pattern image to the monitoring computer.
At 5 h 57 min, as shown in Figure 7, when the monitoring computer requests data from the industrial 3D scanner, the image captured by the scanner's camera is the same as the image in Figure 6b, but the 3D point cloud is almost empty. In Figure 7c, the IP camera transmits an image without any laser pattern because the holding time of the laser pattern is shorter than normal: by the time the IP camera detects the projected pattern and sends the image to the monitoring computer, the pattern has already disappeared. The state shown in Figure 7 lasts for 5 min, after which the industrial 3D scanner stops responding. When the laser pattern completely stops working, the embedded computer also stops responding, and the IP camera stops sending images.
In the industrial 3D scanner, the component most vulnerable to radiation is the laser pattern projector; the radiation resistances of the remaining components are unknown. Considering that the quality of the images captured by the camera in the industrial 3D scanner is maintained for the last 5 min, the radiation resistance of the camera is higher than that of the laser pattern projector, although its exact value cannot be determined. Because the monitoring program stops working when the industrial 3D scanner stops responding, the exact survival time of the separate embedded computer is also unknown. In a ping test after the irradiation experiment, the embedded computer inside the industrial 3D scanner responds normally, while the separate, unshielded embedded computer does not.
Figure 8a is an image from the IP camera at the start of irradiation, and Figure 8b is an image from the IP camera at the end of irradiation. The noise observed in Figure 8a is photon shot noise generated during irradiation, and it disappears by the end of irradiation, as shown in Figure 8b. No significant deterioration is observed in any of the optical components by the naked eye after irradiation. The silver-coated mirror in Figure 8b turned slightly yellow because its substrate is discolored; for a first surface mirror, discoloration of the substrate does not affect the reflectivity.
Table 1 shows the measured light transmittance and reflectance of the irradiated components. None of the optical components shows a performance degradation of more than 1% at 637 nm, the wavelength of the industrial 3D scanner's laser, so all of them are suitable for the 3D scanner in this study. Finally, the silver-coated first surface mirror is selected because its reflectance is superior to that of the aluminum-coated first surface mirror.
Since the degradation of the optical components is less than 1% at a total absorbed dose of 1.21 × 10³ Gy, the radiation resistance of the developed 3D imaging equipment is estimated to be dominated by the shielding thickness.
3.2. Shielding Thickness Calculation
The shielding thickness is calculated from the basic attenuation equation [21], which assumes a narrow beam of radiation penetrating a thin shield:

X = X₀ exp[−(μ/ρ)ρx], (1)

where X is the exposure rate with the shield in place, X₀ is the exposure rate without the shield, μ/ρ is the mass attenuation coefficient, ρ is the density of the shielding material, and x is the thickness of the shield. If Equation (1) is used, the exposure rates for the two gamma rays emitted by Co-60 (1173 and 1332 keV) must be calculated separately and added. In this study, the Half Value Layer (HVL) was used instead for a simple calculation. An HVL is the thickness of material that reduces the radiation intensity by one half.
As shown in Table 2, when the shielding material is lead and the radiation source is Co-60, the HVL is 1.2 cm. The shielding thickness is then calculated by Equation (4), where X₀ is 1000 Gy and X is 259.4 Gy. The calculated shielding thickness, including a margin, is 3.0 cm, and the expected radiation resistance is 1467 Gy.
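As a worked check, a minimal Python sketch of this HVL calculation, assuming the standard half-value-layer relation X = X₀·(1/2)^(x/HVL), reproduces the numbers above: the required thickness evaluates to about 2.3 cm, which is rounded up to 3.0 cm for margin, giving an expected resistance of about 1467 Gy.

```python
import math

# HVL-based shielding calculation with the values from the text, assuming the
# standard relation X = X0 * 0.5 ** (x / HVL).
HVL_CM = 1.2          # half value layer of lead for Co-60 gamma rays [cm]
X0 = 1000.0           # required radiation resistance without shielding [Gy]
X_FAIL = 259.4        # initial radiation resistance of the 3D scanner [Gy]

x_required = HVL_CM * math.log2(X0 / X_FAIL)             # ~2.34 cm
x_chosen = 3.0                                           # cm, rounded up for margin
expected_resistance = X_FAIL * 2 ** (x_chosen / HVL_CM)  # ~1467 Gy

print(f"required thickness: {x_required:.2f} cm, "
      f"expected resistance: {expected_resistance:.0f} Gy")
```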
3.3. Radiation Resistance Verification
An irradiation experiment was performed to verify the radiation resistance of the shielded 3D scanner, as shown in Figure 9. The industrial 3D scanner is housed in the housing manufactured in this study and scans the target through the mirrors and windows. The housing is equipped with three lead blocks 3 cm thick. Three dosimeters are fixed in front of the components of the 3D scanner, as indicated by a, b and c in Figure 9b. Dosimeter a is fixed in front of the shielding at the position closest to the radiation source, and dosimeters b and c are fixed behind the shielding at the positions of the laser pattern projector and the camera, respectively. Dosimeter a is located 644 mm from the surface of the radiation source. The irradiation experiment is performed for 24 h at a dose rate of 64.3 Gy/h based on dosimeter a.
After the 24 h irradiation experiment, the 3D scanner operated normally, and the total absorbed dose was 1534.4 Gy based on dosimeter a. An abnormal termination occurred at 22 h and 34 min during the irradiation experiment; however, it turned out to be temporary, because restarting the monitoring process caused no further problems.
Figure 10 and Figure 11 show the 3D point clouds and images captured by the industrial 3D scanner at the start of the experiment (a), just before the abnormal termination (b), and at the end of the experiment (c). No degradation in quality is observed in any of the 3D point clouds or images. In Figure 10, only the scan target is measured, because the cylinder-shaped surface of the radiation source is closer than 800 mm to the 3D imaging equipment.
Figure 12 shows the results of analyzing the images captured by the 3D scanner with the image analysis tool. In a 3D scanner based on the LLS method, the image quality has a decisive effect on the quality of the measured 3D point cloud, so the image quality analysis can represent the quality of the measured 3D point cloud. The noise level and the number of hot pixels increase rapidly after irradiation starts but show no distinct increasing trend during the 24 h of the experiment. Since position estimation based on 3D registration uses hundreds of points or more, such random noise may not significantly affect positional accuracy. Therefore, the developed 3D imaging equipment is expected to be successfully applied to the remote dismantling system in nuclear facility decommissioning.
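The image analysis tool used here is not specified; as an illustration only, noise-level and hot-pixel metrics of the kind plotted in Figure 12 could be estimated as in the following sketch (frame sizes, thresholds and the synthetic test data are placeholders, not values from the experiment).

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_level(frame: np.ndarray, reference: np.ndarray) -> float:
    """RMS difference between a frame and a reference frame of the same scene."""
    diff = frame.astype(float) - reference.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def hot_pixel_count(frame: np.ndarray, threshold: float = 50.0) -> int:
    """Count pixels deviating strongly from their 3x3 local median."""
    local_median = median_filter(frame.astype(float), size=3)
    return int(np.sum(np.abs(frame.astype(float) - local_median) > threshold))

# Synthetic example: a uniform frame plus Gaussian noise and a few bright outliers.
rng = np.random.default_rng(0)
reference = np.full((480, 640), 100.0)
frame = reference + rng.normal(0.0, 2.0, reference.shape)
frame[rng.integers(0, 480, 20), rng.integers(0, 640, 20)] += 200.0
print(noise_level(frame, reference), hot_pixel_count(frame))
```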
The total absorbed dose of dosimeter b, fixed in front of the laser pattern projector, is 184.2 Gy, and that of dosimeter c, fixed in front of the camera, is 191.8 Gy. When the actual penetration thickness of the gamma rays is calculated from the drawing in Figure 9b, the shielding thickness is 31.3 mm, and the total absorbed dose without shielding is calculated as 1112.3 Gy and 1171.6 Gy at dosimeters b and c, respectively. The previous experiment showed that the laser pattern projector is the component weakest to radiation; in this verification experiment, the dose at the laser pattern projector position would have been 1112.3 Gy without shielding but was reduced to 184.2 Gy by the shield, so we consider the radiation protection successful. In addition, considering the radiation resistance of the applied optical components and the result of the image quality analysis, the radiation resistance can easily be improved further by reinforcing the shielding design.
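For reference, applying the inverse of the HVL relation from Section 3.2 to the shielded dosimeter readings gives a rough back-calculation of the unshielded doses; the small deviation from the reported 1112.3 Gy and 1171.6 Gy presumably reflects the exact attenuation data used in the calculation above.

```python
# Back-calculation of the unshielded doses from the shielded dosimeter readings,
# assuming the same HVL relation as in Section 3.2.
HVL_MM = 12.0          # half value layer of lead for Co-60 [mm]
THICKNESS_MM = 31.3    # actual gamma-ray penetration thickness of the shield [mm]

transmission = 0.5 ** (THICKNESS_MM / HVL_MM)           # fraction transmitted, ~0.16
for name, dose_shielded in (("dosimeter b", 184.2), ("dosimeter c", 191.8)):
    print(f"{name}: ~{dose_shielded / transmission:.0f} Gy without shielding")
```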
4. Refraction Correction
Refraction correction finds the light plane and the camera ray from the 3D coordinates of a point measured by the industrial 3D scanner, and then finds the corrected 3D coordinates of the point by recalculating the refracted light plane and the refracted camera ray based on the refraction model. The light plane is the equation of the plane projected by the laser pointer, and the camera ray is the equation of the line connecting a point on the camera's image plane with the point measured by the industrial 3D scanner. An accurate refraction model is required for this calculation; however, the exact values required by the refraction model are rarely available because the design information of the industrial 3D scanner is inaccessible. In this study, we developed a basic refraction model and determined the unknown parameters with a simple, intuitive approach.
4.1. Refraction Modeling
A refraction model is developed fundamentally based on a pinhole model and a perspective projection [23]. Although the calculation algorithm performed inside the industrial 3D scanner is unknown, the point measured by the industrial 3D scanner can be reconstructed with a light plane and a camera ray, assuming that the positions of the laser pointer and camera are known.
Figure 13 shows the reconstructed light plane and camera ray for the measured point p_mea and the refraction-corrected point p_cor on the x–z plane of the coordinate frame, which is located at the pinhole of the 3D scanner's camera. In front of the laser pointer and the camera, a glass window is installed at distances of d_L and d_C, respectively. The thickness of both glass windows is the same, t_G. Of the two faces of the glass window positioned at the laser pointer, the plane P_GL is the interface between air and glass, and the plane P_WL is the interface between glass and water. Likewise, for the glass window positioned at the camera, the plane P_GC is the interface between air and glass, and the plane P_WC is the interface between glass and water.

The plane of a glass window located in 3D space can be represented as in Figure 14 by the angles θ and φ, at which the normal vector is inclined with respect to the z-axis of the coordinate frame.

The planes P_GC, P_WC, P_GL and P_WL are given by Equations (5)–(8), where θ_C and φ_C are the angles of the glass window positioned at the camera, θ_L and φ_L are the angles of the glass window positioned at the laser, and the point (x_L, y_L, z_L) represents the laser origin p_L1.
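Since Equations (5)–(8) are not reproduced here, the following sketch only illustrates one common way to build such a plane from the tilt angles θ and φ; the exact angle convention of Figure 14 and the placeholder distances are assumptions.

```python
import numpy as np

def window_plane(theta: float, phi: float, point_on_plane: np.ndarray) -> np.ndarray:
    """Plane coefficients (a, b, c, d) with a*x + b*y + c*z + d = 0.

    The normal is the z-axis tilted by the spherical angles theta and phi;
    this parameterization is an assumption, since Equations (5)-(8) are not
    reproduced here.
    """
    normal = np.array([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])
    return np.append(normal, -np.dot(normal, point_on_plane))

# Air/glass and glass/water interfaces of the camera-side window, a distance
# d_C in front of the pinhole along +z (all values are placeholders).
d_C, t_G = 0.10, 0.01                                    # [m]
P_GC = window_plane(0.0, 0.0, np.array([0.0, 0.0, d_C]))
P_WC = window_plane(0.0, 0.0, np.array([0.0, 0.0, d_C + t_G]))
```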
When p_L1, θ_C, φ_C, θ_L, φ_L, d_C, d_L, t_G and p_mea are known, the refraction-corrected point p_cor can be derived by Equations (9)–(28). First, the camera ray R_mea is given by Equation (9).
The intersection point p_C2 of R_mea and P_GC can be computed by Equations (10) and (11).
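The exact form of Equations (10) and (11) is not available, but the underlying ray–plane intersection can be sketched as follows; the measured point and the plane coefficients are placeholders consistent with the previous sketch.

```python
import numpy as np

def ray_plane_intersection(origin: np.ndarray, direction: np.ndarray,
                           plane: np.ndarray) -> np.ndarray:
    """Intersection of the ray origin + s*direction with the plane (a, b, c, d)."""
    normal, d = plane[:3], plane[3]
    s = -(np.dot(normal, origin) + d) / np.dot(normal, direction)
    return origin + s * direction

# Camera ray R_mea through the measured point, starting at the pinhole (origin),
# intersected with the camera-side air/glass plane P_GC of the previous sketch.
P_GC = np.array([0.0, 0.0, 1.0, -0.10])                  # plane z = 0.10 m
p_mea = np.array([0.2, 0.1, 1.5])                        # placeholder measured point [m]
v_mea = p_mea / np.linalg.norm(p_mea)                    # direction of R_mea
p_C2 = ray_plane_intersection(np.zeros(3), v_mea, P_GC)
```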
The refracted vector v_2 corresponding to the incident vector v_1 and the interface normal n is computed by Snell's law, a quaternion rotation operation and Equations (12)–(17), as shown in Figure 15.
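The paper computes the refracted vector with a quaternion rotation; the sketch below uses the equivalent vector form of Snell's law instead, with indicative refractive indices that are not taken from the paper.

```python
import numpy as np

def refract(incident: np.ndarray, normal: np.ndarray,
            n1: float, n2: float) -> np.ndarray:
    """Refracted unit direction by the vector form of Snell's law.

    `incident` is a unit vector in the propagation direction and `normal` is
    the unit interface normal pointing back toward the incident medium.
    """
    i = incident / np.linalg.norm(incident)
    n = normal / np.linalg.norm(normal)
    r = n1 / n2
    cos_i = -np.dot(n, i)
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        raise ValueError("total internal reflection")
    return r * i + (r * cos_i - np.sqrt(1.0 - sin_t2)) * n

# Air -> glass -> water at a window perpendicular to the z-axis.
N_AIR, N_GLASS, N_WATER = 1.000, 1.46, 1.333             # indicative indices only
v1 = np.array([0.2, 0.1, 1.5]); v1 /= np.linalg.norm(v1)
n_back = np.array([0.0, 0.0, -1.0])                      # normal toward the camera
v2 = refract(v1, n_back, N_AIR, N_GLASS)                 # direction inside the glass
v3 = refract(v2, n_back, N_GLASS, N_WATER)               # direction in the water
```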
Therefore, the refracted camera ray R_C2 at the point p_C2 on the plane P_GC is computed by Equations (18) and (19).
The intersection point p_C3 of R_C2 and P_WC is computed by Equation (20).
In the same way, the corrected camera ray R_cor at the point p_C3 on the plane P_WC is computed by Equations (21) and (22).
The corrected point p_cor is the intersection of the corrected camera ray R_cor and the corrected light plane P_cor, so the corrected light plane P_cor must also be found. Since the laser starts from a point light source, the laser rays R_L1, R_L2 and R_L3 are calculated in the same way as the camera rays above, and the corrected light plane P_cor is found under the condition that it is perpendicular to the x–z plane and contains the finally refracted laser ray R_L3. The corrected light plane P_cor is computed by Equations (23)–(30).
Finally, the corrected point p_cor is found by computing the intersection of the corrected camera ray R_cor and the corrected light plane P_cor, as in Equation (31).
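Putting the pieces together, a minimal sketch of this final step builds a light plane that is perpendicular to the x–z plane and contains a given refracted laser ray, and then intersects the corrected camera ray with it; all numerical values below are placeholders, not quantities from the paper.

```python
import numpy as np

def light_plane_through_ray(ray_origin: np.ndarray, ray_dir: np.ndarray) -> np.ndarray:
    """Plane (a, b, c, d) that contains the ray and is perpendicular to the x-z plane.

    Perpendicularity to the x-z plane means the plane contains the y-direction,
    so its normal is orthogonal to both the ray direction and the y-axis.
    """
    normal = np.cross(ray_dir, np.array([0.0, 1.0, 0.0]))
    normal /= np.linalg.norm(normal)
    return np.append(normal, -np.dot(normal, ray_origin))

def ray_plane_intersection(origin, direction, plane):
    normal, d = plane[:3], plane[3]
    s = -(np.dot(normal, origin) + d) / np.dot(normal, direction)
    return origin + s * direction

# p_cor as the intersection of the corrected camera ray R_cor with the
# corrected light plane P_cor built from the refracted laser ray R_L3.
p_C3 = np.array([0.02, 0.01, 0.11])                      # R_cor origin on plane P_WC
v_cor = np.array([0.15, 0.08, 1.0]); v_cor /= np.linalg.norm(v_cor)
P_cor = light_plane_through_ray(np.array([0.30, 0.00, 0.12]),   # point on R_L3
                                np.array([-0.20, 0.00, 1.00]))  # direction of R_L3
p_cor = ray_plane_intersection(p_C3, v_cor, P_cor)
```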
4.2. Experiments and Results
To find the unknown parameters of the presented refraction model, experiments measuring 3D point clouds of a target board were performed in air and in water at the same distances. The experimental equipment consists of a water tank, the 3D scanner, a linear motion stage, a rotation stage, a micrometer and a target board, as shown in Figure 16. At the bottom of the water tank, a breadboard with an M6 threaded hole pattern was installed to fix the components in the correct positions. The 3D scanner was housed in the manufactured waterproof housing and fixed to one end of the breadboard. The linear motion stage was fixed longitudinally to vary the distance between the 3D scanner and the target board. The rotation stage was fixed on the movable table of the linear motion stage and aligned the target board parallel to the image plane of the 3D scanner. The micrometer was connected to the drive shaft of the linear motion stage by a timing belt so that the position of the movable table could be precisely adjusted. The target board was an 800 × 600 mm aluminum plate printed with a 30 mm checkered pattern.
Figure 17 shows the experimental setup with the water tank filled and the target board.
Figure 18 shows the results of 17 scans in air (a) and in water (b), taken while the movable table moves from 0 mm to 800 mm at 50 mm intervals. The 3D point clouds obtained in water appear enlarged and curved compared with the 3D point clouds obtained in air.
Figure 19 shows the image captured by the camera of the industrial 3D scanner and the results of checkerboard detection at the position where the micrometer indicates 0 mm. Since the indices of the image and the 3D point cloud are the same, the 3D coordinates of the points found by checkerboard detection in the image can be looked up directly. Therefore, we could find pairs of 3D points indicating the same physical position in the 3D point clouds measured in water and in air, and these pairs could be used to find the error of the points measured underwater. For example, the 3D point indicated by the upper right dot in the image of Figure 19a should have the same position as the 3D point indicated by the upper right dot in the image of Figure 19b; if the two positions differ, the difference is a measurement error due to refraction. Figure 20 shows the 408 pairs and the measurement errors estimated from Figure 19.
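The detection code itself is not given in the paper; assuming an organized point cloud whose row and column indices match the camera image, corresponding 3D points in the air and water scans could be paired roughly as follows. The checkerboard pattern size is only inferred from the 408 = 17 × 24 pairs in Figure 20 and is an assumption.

```python
import cv2
import numpy as np

def checkerboard_points(image: np.ndarray, cloud: np.ndarray,
                        pattern=(17, 24)) -> np.ndarray:
    """3D points of an organized cloud at the detected checkerboard corners.

    `cloud` is assumed to have shape (H, W, 3) so that image pixel (u, v)
    maps to cloud[v, u]; the pattern size is an assumption.
    """
    found, corners = cv2.findChessboardCorners(image, pattern)
    if not found:
        raise RuntimeError("checkerboard not detected")
    uv = corners.reshape(-1, 2).round().astype(int)
    return cloud[uv[:, 1], uv[:, 0]]

# Pair the corners of the air and water scans taken at the same stage position,
# then evaluate the per-corner measurement error caused by refraction.
# p_air = checkerboard_points(img_air, cloud_air)
# p_water = checkerboard_points(img_water, cloud_water)
# errors = np.linalg.norm(p_air - p_water, axis=1)
```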
A set of point-pair measurement errors was obtained by processing and merging 15 pairs of 3D point clouds with the method described above; each pair consists of a point p_air measured in air and a point p_water measured in water, as shown in Figure 21. The 16th and 17th 3D point clouds were excluded because the number of measured points was small and the point clouds fell outside the proper measurement range, since in water the angle of view narrows and objects appear closer than their actual positions.
4.3. Parameter Study and Refraction Correction Result
In the refraction model presented in this study, the variables needed to find the refraction-corrected point p_cor are p_L1, θ_C, φ_C, θ_L, φ_L, d_C, d_L, t_G and p_mea. Among these, the glass thickness t_G can be treated as a constant and p_mea is the value measured by the 3D scanner, so p_L1x, p_L1y, p_L1z, θ_C, φ_C, θ_L, φ_L, d_C and d_L remain as unknown parameters. The measured point p_mea is identical to the point p_water measured in water.
The objective function is the sum of the refraction correction errors, i.e., the distances between the refraction-corrected points p_cor and the points p_air measured in air, and the goal of the parameter study is to minimize this objective function. Equation (32) represents the objective function J, which sums ‖p_cor − p_air‖ over all point pairs.
In this study, we attempted a parameter study with an intuitive and simple approach rather than a sophisticated theoretical one. A discrete linear vector space was generated for each parameter, and the parameter values that minimize the objective function were selected from the generated vector space. However, when the objective function was searched simultaneously over the vector spaces of all nine parameters, the number of cases grew exponentially and the search time became too long. Therefore, at the beginning of the search, parameters with similar characteristics were grouped and searched together, and in the stationary phase of the search, random combinations over the entire vector space of all parameters were generated and evaluated. The random-combination search allowed a homogeneous exploration of the entire vector space and a reduction in search time through parallel computation, despite not covering the vector space exhaustively. A minimal sketch of both strategies is given below.
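In the sketch, `correct` stands for the refraction-correction routine of Section 4.1 and is assumed to map a parameter set and the underwater points to corrected points; the grids and parameter groups are placeholder discretizations, not the ones used in the study.

```python
import itertools
import numpy as np

def objective(params, p_water, p_air, correct):
    """Sum of distances between corrected points and the air reference (Eq. (32))."""
    p_cor = correct(params, p_water)
    return float(np.sum(np.linalg.norm(p_cor - p_air, axis=1)))

def grid_search(base, names, grids, p_water, p_air, correct):
    """Exhaustive search over a small group of parameters, the others held fixed."""
    best, best_J = dict(base), np.inf
    for combo in itertools.product(*(grids[n] for n in names)):
        trial = dict(base, **dict(zip(names, combo)))
        J = objective(trial, p_water, p_air, correct)
        if J < best_J:
            best, best_J = trial, J
    return best, best_J

def random_search(base, grids, p_water, p_air, correct, n_samples=100_000, seed=0):
    """Random combinations drawn from the full nine-parameter vector space."""
    rng = np.random.default_rng(seed)
    best, best_J = dict(base), objective(base, p_water, p_air, correct)
    for _ in range(n_samples):
        trial = {n: rng.choice(g) for n, g in grids.items()}
        J = objective(trial, p_water, p_air, correct)
        if J < best_J:
            best, best_J = trial, J
    return best, best_J
```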
The initial values of all parameters were obtained from the CAD geometry of the housing design in Section 2.2, and the values are shown in Table 3. The mean error calculated with the initial values is 12.2 mm, and the 95% error is 101.2 mm and the 99% error is 138.0 mm in the normal distribution of the error. The 99% error must be considered because the error guaranteed by the 3D scanner presented in this study needs to be verified. The refraction correction based on the initial values therefore has very poor characteristics in terms of the 99% error, as Figure 22 shows clearly. Figure 22 presents the initial refraction correction error for the first pair of 3D point clouds measured in air and water; the plane of the refraction-corrected points p_cor is inclined relative to the plane of the points p_air measured in air.
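The 95% and 99% errors appear to be derived from a normal fit to the error distribution; under that assumption, a minimal sketch of such a summary is:

```python
import numpy as np
from scipy.stats import norm

def error_summary(errors: np.ndarray) -> dict:
    """Mean error plus 95% / 99% errors from a normal fit (assumed definition)."""
    mu, sigma = float(np.mean(errors)), float(np.std(errors))
    return {"mean": mu,
            "95%": float(norm.ppf(0.95, loc=mu, scale=sigma)),
            "99%": float(norm.ppf(0.99, loc=mu, scale=sigma))}

# errors = np.linalg.norm(p_cor - p_air, axis=1)   # per-point correction errors
# print(error_summary(errors))
```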
Table 3 shows the results of the parameter study with the search strategy described above. The decrease in mean error is not evident in case 1, in which the search minimizing the objective function is performed over the vector space of θ_C and φ_C, nor in case 2, in which the search is performed over the vector space of θ_L and φ_L. A distinct decrease in the mean error appears in case 3, which searches over the vector space of d_C and d_L. Through cases 4, 5 and 6, the mean error is reduced to 0.57 mm, reaching the precision required by the remote dismantling system described above. In particular, as the difference between the 99% error and the mean error shrinks, the error distribution improves considerably, so a homogeneous 3D registration error can be expected over the entire scan volume.
Figure 23 shows the result of the refraction correction applied to the entire scan volume. The corrected points p_cor, obtained by transforming p_water through the refraction correction, appear visually identical to p_air.
Figure 24 shows the measurement error for all points. Many peak errors of 1 mm or more are observed, exceeding the mean error and the 99% error. However, such peak errors may not significantly affect the 3D registration method, which finds the position of the target object from a 3D point cloud consisting of hundreds of points.