Article

Real-Time Multi-Modal Sensing and Feedback for Catheterization in Porcine Tissue

by Christoff M. Heunis 1,*, Filip Šuligoj 1, Carlos Fambuena Santos 1 and Sarthak Misra 1,2

1 Surgical Robotics Laboratory, Department of Biomechanical Engineering, University of Twente, 7500 AE Enschede, The Netherlands
2 Department of Biomedical Engineering, University of Groningen and University Medical Centre Groningen, 9713 GZ Groningen, The Netherlands
* Author to whom correspondence should be addressed.
Sensors 2021, 21(1), 273; https://doi.org/10.3390/s21010273
Submission received: 25 November 2020 / Revised: 22 December 2020 / Accepted: 30 December 2020 / Published: 3 January 2021
(This article belongs to the Special Issue Sensors and Imaging for Medical Robotics)

Abstract

Objective: In this study, we introduce a multi-modal sensing and feedback framework aimed at assisting clinicians during endovascular surgeries and catheterization procedures. This framework utilizes state-of-the-art imaging and sensing sub-systems to produce a 3D visualization of an endovascular catheter and the surrounding vasculature without the need for intra-operative X-rays. Methods: The catheterization experiments within this study are conducted inside a porcine limb subjected to induced motion. A hybrid position-force controller is introduced for a robotically-actuated ultrasound (US) transducer operating on uneven porcine tissue surfaces. The tissue, vasculature, and catheter are visualized by integrating real-time US images, 3D surface imaging, and Fiber Bragg Grating (FBG) sensors. Results: During externally-induced limb motions, the vasculature and catheter can be reliably reconstructed at mean accuracies of 1.9 ± 0.3 mm and 0.82 ± 0.21 mm, respectively. Conclusions: The conventional use of intra-operative X-ray imaging to visualize instruments and vasculature in the human body can be reduced by employing improved diagnostic technologies that do not rely on ionizing radiation or nephrotoxic contrast agents. Significance: The presented multi-modal framework enables the radiation-free and accurate reconstruction of the significant tissues and instruments involved in catheterization procedures.

1. Introduction

X-ray fluoroscopy has commonly been used as a modality for visualizing endovascular catheters during arterial diagnosis and treatment owing to its high-speed display of complex vasculature. Specifically, it has led to the development of remote-controlled catheter navigation systems (RCCNS) such as the Amigo remote catheter system (Catheter Precision Inc., Ledgewood, NJ, USA), the Sensei robotic navigation system (Hansen Medical Inc., Mountain View, CA, USA), and Niobe (Stereotaxis Inc., St. Louis, MO, USA). These commercially-available systems have demonstrated not only that they can perform safely, but also that they improve the control and manipulability of catheters during endovascular interventions when compared to manually-controlled catheters [1,2]. Nonetheless, the control and positioning of catheters still depend on the expertise of the clinician and require long periods of specialized training [3]. Moreover, despite the improvements brought by RCCNS to the field, these systems employ fluoroscopy images as their primary source of information, causing adverse effects in high-risk patients due to ionizing radiation exposure [4,5]. This results in limited periods of visibility of catheters inside the body, accompanied by additional challenges, such as arterial inaccessibility and the lack of three-dimensional (3D) visual feedback [6].
Some studies have investigated the integration of intraoperative magnetic resonance imaging (iMRI) with RCCNS to reduce radiation exposure to patients and clinicians. Bell et al. developed a tendon-driven catheter that was visualized in real-time inside an MRI bore [7]. However, the catheter heated up close to the edge of the scanner bore and required a thick shaft diameter to be visible. This, in turn, increased the force necessary to control it, which reduced its positioning accuracy. Additional drawbacks of iMRI were noted when Liu et al. developed a framework that visualizes a robotic ablation catheter inside an MRI scanner [8]. They had difficulties identifying the real-time shape of the catheter from MR images due to a low image acquisition rate. This limits the automation of catheter control strategies using iMRI, as substantiated by Alam et al. in a comprehensive study of multiple optical imaging techniques [9]. More notably, iMRI systems produce high magnetic fields that affect the electronics of electromagnetic actuators and sensors [10].
Several attempts have been made to bypass the use of iMRI systems by using either ultrasound (US) imaging [11,12] or Fiber Bragg Grating (FBG) sensors [13,14]. The latter have shown significant promise in the real-time shape sensing of catheters [15,16,17]. However, these studies have not yet demonstrated real-time and simultaneous 3D reconstruction of both the artery and the instrument. Alternatively, robotically-actuated US transducers can be employed for arterial reconstruction. Moreover, force/torque sensors can be integrated with transducers, since a consistent contact force with the skin is required for visualizing blood vessels. Such strategies have shown potential for real-time automatic arterial characterization [18,19]. Mathiassen et al. further suggested the potential of robotic US path planning through force control for real-time 3D arterial reconstruction [20]. They proposed a hybrid force-sensing strategy on a stationary abdominal phantom, which has since been adopted for autonomous US-guided tumor dissection [21]. However, this was not tested on heterogeneous tissue with uneven surfaces, which is a crucial challenge for a US-specific, patient-oriented approach. This challenge was addressed by Graumann et al., who generated robotic US paths for covering a volume of interest selected in diagnostic images [22]. Nonetheless, they assumed that the US transducer is always positioned perpendicular to the tissue surface. A more desirable approach would be to position the transducer above the vessel of interest, as would be required to visualize catheters inside vasculature. Jiang et al. attempted to improve this framework by acquiring real-time US images through impedance control on the US transducer [23]. However, their method requires a full fan-sweep motion of the transducer at each surface point merely to optimize the US transducer orientation. More importantly, none of these studies has incorporated representations of limb motions, for instance, Periodic Limb Movements (PLMs) [24]. Such movements affect the reconstruction accuracies of autonomous image acquisition frameworks that utilize robotic arms. PLMs that occur during interventions introduce real-time disturbances, which should be compensated for when arterial and instrument reconstructions are essential to the clinician.
The challenge of patient motions has been approached with non-invasive commercial systems. Tracking systems, such as the Northern Digital Inc. (NDI) Polaris (Polaris Industries, Medina, MN, USA) [25,26] and the NDI Aurora electromagnetic (EM) tracking system [27], have been used to visualize surgical instruments in relation to anatomic structures [28]. However, considering that the operating room is a cluttered environment and the capture volumes of these systems are small, the movement of surgical staff is confined. EM sensors have also been integrated into US transducers themselves, resulting in so-called Tomographic US (tUS) devices. One such device, the PIUR tUS Infinity (PIUR Imaging, Vienna, Austria), combines data from optical markers, EM sensors, and inertial sensors [29]. However, preliminary studies have reported drawbacks associated with freehand tUS scans, such as overlapping US slices and the loss of subsurface spatial information due to a change in applied force between the transducer and the skin [30]. Furthermore, in a clinical setting, metals can affect the accuracy of EM sensors [31].
In order to obtain information about catheters inside the human body, the need for integrating alternative imaging and sensing modalities remains paramount. Based on the aforementioned challenges of conventional approaches, two further clinical needs have been identified. Firstly, a safe solution for multi-modal sensing is required, in which the integrated sub-systems eliminate the excessive radiation resulting from fluoroscopy. Secondly, intra-operative visual feedback of both an endovascular catheter and the vascular anatomy should be implemented, with the scope of assisting clinicians during endovascular surgeries and catheterization procedures. In this study, we provide solutions to these needs without requiring intra-operative X-ray imaging. Specifically, we aim to generate an efficient US path to cover a surface while compensating for PLMs and adhering to prescribed contact forces and US transducer poses. This is accomplished by combining data from three imaging modalities (a US scanner, a 3D surface point-cloud camera, and motion capture cameras) with FBG sensing data of an endovascular catheter. This catheter is inserted into a porcine limb subjected to periodic motions. We show that FBG sensors remain a viable option for flexible catheters, due to their reliability in tracking without the need for line of sight. To the best of our knowledge, such a framework that covers the integration of the imaging and sensing technologies discussed herein does not exist yet. Hence, the contributions of this study are as follows:
  • Assembly and calibration of an endovascular catheter that is embedded with FBG sensors and infrared precision spheres, allowing for real-time feedback.
  • Introduction of a radiation-free intra-operative imaging framework for catheterizations.
  • Fully autonomous US acquisition directly performed by a robotic system with visual-servo (VS)-based compensation of externally-induced PLMs.
  • Real-time multi-modal sensing and 3D visualization of the vasculature, catheter, and surrounding surface tissue.
This paper is organized as follows: Section 2 describes the multi-modal system integration, the calibration of the imaging and sensing modalities, and the pre-operative US planning algorithms. In Section 3, the intra-operative planning phase is discussed. This phase includes the VS-based motion compensation and the real-time vasculature and catheter visualization. We then demonstrate the sensing and feedback framework in a clinically-relevant experiment, followed by the results and a discussion of these results in Section 4. Section 5 concludes this paper and provides directions for future work.

2. Pre-Operative Calibration and Planning

This section provides an overview of the sub-systems used in this study, specifically intended to guide clinicians in utilizing FBG-embedded catheters in a real-time environment. This is followed by a description of the multi-modal calibration process, US planning, and reconstruction algorithms. The workflow of the pre-operative phase is illustrated in Figure 1.

2.1. Imaging and Sensing Modalities

This study is performed in the Advanced Robotics for Magnetic Manipulation (ARMM) system [32]. The framework discussed in this study is implemented on the ARMM workstation (Precision Tower 7910, Dell Technologies, Austin, TX, USA) running Ubuntu 18.04 (Kernel 4.4.236) with an RT-PREEMPT patch for real-time visualizations. The Point-Cloud Library (PCL) (Willow Garage, Menlo Park, CA, USA) is used for the 3D rendering of point-clouds, while OpenCV (Open Source Computer Vision Library, v.3.4.2) is used for image processing. The modalities used in this study are as follows: a 6 degree-of-freedom (DoF) serial-link robot (Model UR5, Universal Robots, Odense, Denmark) is used to maneuver a linear L14-5 US transducer connected to a SonixTouch Q+ US scanner (BK Medical, Quickborn, Germany). Furthermore, a passive robot (Panda, FRANKA EMIKA GmbH, Munich, Germany) is used to hold and adjust the pose of a 3D depth camera (Intel RealSense SR305, Santa Clara, CA, USA). The depth camera reconstructs a 3D point-cloud structure of any surface at a rate of 60 Hz, and the transducer captures and streams 2D US images at a rate of 30 Hz. Motions of the catheter and tissue inside the ARMM workspace are recorded by an OptiTrack Flex 13 (NaturalPoint Inc., Corvallis, OR, USA) infrared precision-sphere-based motion capture system. The endovascular catheter shaft is embedded with an optic fiber containing FBG sensors and connected to an FBG-Scan 804D interrogator (FBGS Technologies GmbH, Jena, Germany). These sensors provide 3D catheter reconstruction in a real-time environment.
In addition to employing these imaging and sensing modalities, this framework is demonstrated with porcine tissue, thus ensuring a near-realistic representation of a human limb. PLMs that mimic involuntary limb movement are simulated by a 6-DoF Stewart platform. This movement is compensated for during US image acquisition by controlling the US transducer with a hybrid position-force controller designed for uneven body tissue surfaces. The methods implemented within this framework encapsulate the pre-operative planning of the arterial model of the porcine tissue, followed by real-time intra-operative visualization during an endovascular catheterization procedure.

2.2. FBG-Embedded Catheter Assembly

The catheter assembly (Figure 2, number ①) consists of an endovascular catheter ($D_c$ = 2 mm diameter, 270 mm length) made from polyethylene terephthalate (PET). This material has high stiffness and is chosen due to the curvature constraint imposed by the FBG fiber, which has a maximum bending radius of 50 mm. The multicore fiber ($f$ = 32 FBG sensors, 125 μm cladding diameter) is embedded into the central lumen of the catheter shaft. Precision spheres with 3M 7610 reflective tape (Engineering Systems Technologies GmbH, Kaiserslautern, Germany) are then attached to the catheter base. The base is triangulated by the motion capture cameras in the calibration phase using a Software Development Kit (SDK) from Motive NatNet (NaturalPoint Inc., Corvallis, OR, USA) and represents the catheter reference frame ({C}) inside the ARMM workspace.

2.3. Calibration of the Imaging and Sensing Modalities

The frame of origin of the motion tracker is chosen as the global reference frame ({G}) (Figure 2, number ②). All homogeneous transformations between the modalities and the motion tracker are derived to map their local reference frames to {G}. These transformations are calculated as
$$\mathbf{H}^a_b = \begin{bmatrix} \mathbf{R}^a_b & \mathbf{p}^a_b \\ \mathbf{0}_{1\times3} & 1 \end{bmatrix} \in SE(3), \qquad (1)$$
where $\mathbf{R}^a_b \in SO(3)$ is a rotation matrix describing the relative orientation of a frame ({b}) with respect to another frame ({a}), and $\mathbf{p}^a_b \in \mathbb{R}^3$ is the translation vector from a point in frame a to frame b. In this study, we utilize the precision spheres to calibrate the FBG sensors with the catheter base, in addition to tracking it with the motion capture system.
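To make the frame bookkeeping concrete, the following minimal numpy sketch assembles a homogeneous transform of the form (1) and chains calibrated transforms to map FBG-reconstructed catheter points into the global frame, as in (2). All matrix values, array shapes, and helper names here are illustrative placeholders, not the ARMM implementation:

```python
import numpy as np

def make_homogeneous(R, p):
    """Assemble H in SE(3) from a rotation R in SO(3) and a translation p, as in (1)."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = p
    return H

def transform_points(H, points):
    """Map an (N, 3) point set through H, i.e., the operation applied in (2)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (H @ homogeneous.T).T[:, :3]

# Placeholder transforms standing in for the calibrated matrices:
H_C_G = make_homogeneous(np.eye(3), np.array([0.10, 0.02, 0.30]))  # catheter base in {G}
H_F_C = make_homogeneous(np.eye(3), np.zeros(3))                   # FBG frame in {C}
phi_F = np.zeros((32, 3))          # 32 FBG-reconstructed shape points (illustrative)
phi_G = transform_points(H_C_G @ H_F_C, phi_F)                     # catheter shape in {G}
```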
In order to calibrate the catheter, we first determine the configuration of the catheter shape in the global reference frame ({G}). This is done through a mold alignment process, in which the catheter is inserted into three different channels of known curvature (Figure 2, number ③). The mold reference frame ({M}) is registered in the global coordinate frame ({G}) using Motive NatNet. Following this registration, the catheter is inserted into the three mold channels with local design frame ({L}), while its shape and base frame pose are simultaneously recorded by the FBG sensors [33] and the motion capture system, respectively. The catheter shape is reconstructed in 3D Euclidean space as a point-cloud dataset ($\Phi^F \subset \mathbb{R}^3$). Utilizing (1), the catheter shape in the global frame ($\Phi^G \subset \mathbb{R}^3$) is calculated by
$$\Phi^G = \mathbf{H}^G_C \mathbf{H}^C_F \Phi^F, \qquad (2)$$
where $\mathbf{H}^C_F$ is calculated during the calibration procedure. Next, we register the robot base frame ({B}) and end-effector (i.e., the US transducer) frame ({U}) to the global frame through accurate full pose measurements (position and orientation) of the end-effector in Cartesian space. These measurements yield $\mathbf{H}^G_U$ and $\mathbf{H}^G_B$. The final calibration is required to localize the polar-coordinate US data (pixels) to Cartesian coordinates using a custom-designed calibrator object. For this calibration, the US image plane is mapped to obtain $\mathbf{H}^U_I$, where {I} is the image reference frame. All frames are displayed in Figure 3a. Details regarding these calibration procedures are provided in the Supplementary Materials. Following these calibrations, the imaging and sensing modalities are ready to be used in the pre-operative planning phase.
Algorithm 1: 3D centroid generation inside an arterial volume. (Listing provided as an image in the original article.)
Algorithm 2: Robot-mounted US transducer pose generation. (Listing provided as an image in the original article.)

2.4. Pre-Operative Planning

Precision spheres are rigidly attached to a 3D-printed calibration stick that represents the reference frame ({P}) of the porcine tissue for tracking purposes. We use an Artis Pheno robotic C-arm cone-beam scanner (Siemens Healthcare GmbH, Forchheim, Germany) to scan the porcine tissue. Open-source software (InVesalius, Renato Archer Information Technology Center, Campinas, Brazil) is then utilized to eliminate soft tissues and strip the bone structure, keeping the vasculature and the calibration stick as two separate mesh objects. These anatomical details are converted to point-cloud data structures and used as input to both the arterial centroid generator (Algorithm 1) and the US pose planner (Algorithm 2). The centroid generator is implemented to obtain data points that correspond to the CT slices along the global longitudinal axis (i.e., along the length of the artificial artery). This algorithm processes three sets of data points: imported 3D points ($Q_s \in \mathbb{R}^{k\times3}$) of the calibration stick Standard Triangle Language (STL) data, the calibration stick mesh ($Q_t \in \mathbb{R}^{l\times3}$), and the point-cloud mesh ($B \in \mathbb{R}^{m\times3}$) of the vasculature, where k, l, and m are the numbers of data points. Points within $Q_s$ and $Q_t$ are processed only once to derive the transformation matrix ($\mathbf{H}^s_t$) between the CT slices and the tissue reference frame. This is done through an iterative closest point (ICP) algorithm, which computes a matching that minimizes the root mean squared distance between these two point-sets [34]. Next, a fourth-order polynomial representation ($P(v): \mathbb{R} \rightarrow \mathbb{R}^3$) of the artificial arterial centerline is derived and discretized to obtain a set of n centroid positions within the arterial volume. These centroids are individually defined as $c_i \in \mathbb{R}^3$ ($i \in \{1, \ldots, n\}$) and form part of a subset ($\Psi_{CT} \in \mathbb{R}^{n\times3}$) consisting of n user-defined CT points.
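As an illustration of the centerline step of Algorithm 1 (whose full listing is only available as an image), the fourth-order polynomial fit and its discretization into n centroids might be sketched as follows; parameterizing by the first coordinate as the longitudinal axis is an assumption made here for brevity:

```python
import numpy as np

def arterial_centroids(artery_points, n):
    """Fit a fourth-order polynomial P(v): R -> R^3 to the arterial point-cloud and
    discretize it into n centroid positions along the longitudinal axis."""
    v = artery_points[:, 0]                        # longitudinal parameter (assumed axis)
    coeffs = [np.polyfit(v, artery_points[:, k], deg=4) for k in range(3)]
    v_samples = np.linspace(v.min(), v.max(), n)   # n user-defined slice locations
    return np.stack([np.polyval(c, v_samples) for c in coeffs], axis=1)  # (n, 3)
```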
Subsequently, the US pose planner (Algorithm 2) is executed. The input to this algorithm is the point-cloud dataset ($C \in \mathbb{R}^{r\times3}$) of the tissue surface. To obtain this point-cloud, the 3D depth camera is positioned above the tissue. A 3D RGB-D image is captured and rendered as an xyz point-cloud structure using the PCL interface. Once both the arterial centroids and the surface point-cloud are transformed into the same coordinate system ({P}), Algorithm 2 uses the CT-derived subset ($\Psi_{CT}$) and C as inputs to calculate a trajectory for the US transducer on the surface of the tissue. Recalling that the US transducer should be positioned above the centroid ($c_i$), these centroids are first projected upwards along the porcine tissue y-axis (Figure 3b). The point ($C_j \in C$) found on the surface nearest to the ray then becomes a surface path point ($s_i \in \mathbb{R}^3$). For each surface point, a rotation matrix ($\mathbf{R}^P_i \in SO(3)$) is also calculated, defining the transducer z-axis as the vector pointing from the surface point to the centroid. An additional vector lying on the xy-plane is derived from two consecutive surface points ($s_i$ and $s_{i+1}$). The final output of the algorithm is the US transducer pose defined for each of the n corresponding arterial centroids ($\Psi_{CT}$), comprising a set of positions ($\Upsilon \in \mathbb{R}^{n\times3}$) and orientations ($\theta \in \mathbb{S}^{n\times3}$) in the tissue reference frame. With these data in place, the porcine limb is positioned in the ARMM workspace.
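The geometric core of Algorithm 2 can be sketched in a few lines; the upward projection axis and the tangent construction below follow the description above, but the listing is a simplified stand-in rather than the published algorithm:

```python
import numpy as np

def plan_transducer_poses(centroids, surface, up=np.array([0.0, 1.0, 0.0])):
    """For each centroid c_i, pick the surface point s_i nearest to the upward ray
    through c_i, then build an orientation whose z-axis points from s_i to c_i."""
    positions = []
    for c in centroids:
        d = surface - c
        perp = d - np.outer(d @ up, up)       # component perpendicular to the ray
        positions.append(surface[np.argmin(np.linalg.norm(perp, axis=1))])
    positions = np.asarray(positions)

    orientations = []
    for i in range(len(positions)):
        z_axis = centroids[i] - positions[i]
        z_axis = z_axis / np.linalg.norm(z_axis)
        j = i + 1 if i + 1 < len(positions) else i - 1   # tangent from a neighbor
        tangent = positions[max(i, j)] - positions[min(i, j)]
        x_axis = tangent - (tangent @ z_axis) * z_axis   # project into plane normal to z
        x_axis = x_axis / np.linalg.norm(x_axis)
        orientations.append(np.column_stack([x_axis, np.cross(z_axis, x_axis), z_axis]))
    return positions, np.asarray(orientations)
```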

3. Multi-Modal Sensing and Feedback

This section describes the second half of our framework: a real-time data acquisition protocol and multi-modal feedback via a visualization interface. We begin by describing the hybrid position-force controller and our VS approach. This is followed by the processing of the US images and the reconstruction of the catheter shape for real-time visualization.

3.1. Visual-Servo-Based Motion Compensation

A custom-built 6-DoF Stewart platform is used to introduce PLMs to the tissue. This platform functions independently from the VS-controller and is controlled by six servo motors (MX-64AR Dynamixel, Robotis, Korea). Reflexive PLMs in patients have been reported to occur approximately every 30 s, though they have no predictive validity [35]. Hence, we first assume that the disturbance is periodic; the model that describes the periodic motion of the tissue is given by a two-term Fourier series (see Supplementary Materials). Second, we only introduce translational motions in the tissue zy-plane, and rotation about the x-axis, via reference signals that describe PLMs that are motion-constrained along the length of a leg. These signals are recorded via precision spheres by the motion-capture system at a rate of 120 Hz in the global reference frame ({G}).
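A two-term Fourier series of this kind is straightforward to generate; the coefficients and the roughly 30-s period below are illustrative stand-ins for the values reported in the Supplementary Materials (Table S3):

```python
import numpy as np

def plm_signal(t, a0, a, b, w):
    """Two-term Fourier series: a0 + sum over k=1,2 of a_k cos(k w t) + b_k sin(k w t)."""
    return (a0 + a[0] * np.cos(w * t) + b[0] * np.sin(w * t)
               + a[1] * np.cos(2 * w * t) + b[1] * np.sin(2 * w * t))

t = np.linspace(0.0, 60.0, 7200)                        # 60 s sampled at 120 Hz
z_disp = plm_signal(t, 0.0, [4.0, 1.0], [2.0, 0.5], 2 * np.pi / 30.0)  # mm, ~30 s period
```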
The transducer poses acquired from Algorithm 2 are used to control the robotically-actuated US transducer at each surface point on the tissue surface with the assumption that the tissue is stationary. Thus, when motions occur, an updated pose should be calculated and used as input to the robot. The current (u) robotic end-effector pose at any specific instant is expressed in terms of the transducer position ($p_u \in \mathbb{R}^3$) and angle-axis orientation ($\theta_u \in \mathbb{S}^3$) in the robot base frame ({B}). Therefore, according to (1), we first calculate the transformed pose ($\mathbf{H}^P_i = (\mathbf{R}^P_i, \mathbf{p}^P_i)$). Specifically,
$$\mathbf{p}^P_i = \begin{bmatrix} s_{i_x} - o_z \\ s_{i_y} + o_y \\ s_{i_z} + o_x \end{bmatrix}, \qquad (3)$$
where $o_x$, $o_y$, and $o_z$ represent the real-time displacements of the precision spheres (obtained from the Motive NatNet data stream and transformed to the tissue reference frame). Second, the updated target surface position ($\Delta s_i$) is obtained from the translation component of the transformation matrix ($\mathbf{H}^B_i = (\mathbf{H}^G_B)^{-1} \mathbf{H}^G_P \mathbf{H}^P_i$). Here, $\mathbf{H}^G_B$ and $\mathbf{H}^G_P$ represent the homogeneous transformation matrices calculated according to (1), and $\mathbf{R}^P_i$ from Algorithm 2, line 7, is updated by the rotation of the tissue in space. Concurrently, the position-orientation controller is activated for the US transducer. Let us consider the transducer leaving a current (u) pose to reach a target (i) pose. Then, given the current position ($p_u$) of the transducer, the position error ($E_p \in \mathbb{R}^3$) and orientation error ($E_\theta \in \mathbb{R}^3$) are calculated. These errors are minimized by a task-space velocity controller, outlined in Appendix A, which allows the transducer to reach a target.
When the transducer approaches a compliant environment, i.e., the surface of the porcine tissue, the explicit force controller becomes active. Force feedback is provided by a three-axis force sensor (K3D40, ME-Meßsysteme GmbH, Hennigsdorf, Germany) connected between the transducer and the robot tool flange. Forces are measured until the conditions describing a positioning threshold ($|E_p| < d_p \in \mathbb{R}$), orientation thresholds ($E_{\theta_y}, E_{\theta_z} < d_\theta \in \mathbb{R}$), and a force error ($E_f \in \mathbb{R}^3$) have been met. The force error is calculated as the difference between the normal force of the tissue against the transducer surface and the desired force ($d_f \in \mathbb{R}$). The desired force is chosen such that sudden surface deflection is avoided and constant contact is maintained. Satisfying all feedback conditions, i.e., the contact force ($d_f$), positioning threshold ($d_p$), and orientation thresholds ($d_\theta$), implies that the transducer is positioned both on the surface point ($s_i$) and above an arterial centroid ($c_i$); hence, a US image is acquired.
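The acquisition logic above amounts to a conjunction of three threshold tests; a minimal sketch, in which the threshold arguments and the force tolerance f_tol are illustrative names rather than the values of Table 1:

```python
import numpy as np

def acquisition_gate(E_p, E_theta, f_normal, d_p, d_theta, d_f, f_tol):
    """A B-mode frame is captured only once the position, orientation,
    and contact-force conditions all hold."""
    position_ok = np.linalg.norm(E_p) < d_p
    orientation_ok = abs(E_theta[1]) < d_theta and abs(E_theta[2]) < d_theta
    force_ok = abs(f_normal - d_f) < f_tol        # force error within tolerance
    return position_ok and orientation_ok and force_ok
```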

3.2. Ultrasound-Based Arterial Reconstruction

Brightness (B)-mode images are acquired at each surface point and converted to 2D US images, which are then processed to obtain arterial shape information. This information consists of reconstructed arterial contour points (set $\Lambda_{US} \in \mathbb{R}^{N\times3}$) and centroids (subset $\Psi_{US} \in \mathbb{R}^{n\times3}$). The boundary of the contour is estimated by a probabilistic edge-detection filter [36]. This information is combined with the predictions of an ellipse model, which are assumed to be suitable for semi-circular arteries. Generally, for each US slice, we describe an arterial contour using the nonlinear dynamic system:
$$x_{q+1} = x_q + \xi_q, \qquad r_q = D(x_q) + \eta_q, \qquad (4)$$
where $r_q \in \mathbb{R}$ is the output radius length and $q \in \mathbb{N}$ represents each of the N indices around the arterial centroid for which the states of the system ($x_q \in \mathbb{R}^{3\times1}$) are estimated. These states are the lengths of the semi-major ($f_q \in \mathbb{R}$) and semi-minor ($h_q \in \mathbb{R}$) axes, as well as the tilt angle ($\phi_q \in \mathbb{R}$) of the ellipse (Figure 4a). Next, as shown in [36], we assume white, zero-mean Gaussian process noise ($\xi_q$) and measurement noise ($\eta_q$) with known covariances, which are constant throughout the entire contour. The covariance values are tuned empirically until the estimated contours adapt well to the actual boundaries of the artery. The calculations of the model ($D(x_q)$) and covariances are provided in Appendix B.
When each contour point has been defined, the arterial centroids ($\Psi_{US}$) in the subsequent iterations are calculated by means of a Star Algorithm [37]. This algorithm calculates the center of mass of all contour points (Figure 4c). After the final iteration, each centroid is stored in the subset ($\Psi_{US}$). All centroids are then compared to the previously-obtained ground truth centroids ($\Psi_{CT}$) for further validation.
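Both the Star Algorithm centroid step and the subsequent comparison against the CT ground truth reduce to short vector operations; a sketch, assuming the two centroid sets are ordered consistently:

```python
import numpy as np

def star_centroid(contour):
    """Center of mass of the N contour points of one US slice."""
    return contour.mean(axis=0)

def centroid_errors(psi_us, psi_ct):
    """Per-slice 3D Euclidean errors against the CT ground truth, summarized as
    the mean and the mean absolute deviation used in Section 4.2."""
    e = np.linalg.norm(psi_us - psi_ct, axis=1)
    return e.mean(), np.abs(e - e.mean()).mean()
```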

3.3. Catheter Shape Reconstruction

The arterial reconstruction is then followed by localizing the polar-coordinate US data to Cartesian coordinates for real-time feedback. First, homogeneous representations of the set of N US contour positions ($\Lambda_{US} \in \mathbb{R}^{N\times4}$) are transformed to the global reference frame ({G}) by concatenating matrices described by (1) as follows:
$$\Lambda^G_{US} = \mathbf{H}^G_B \mathbf{H}^B_U \mathbf{H}^U_I \Lambda_{US}, \qquad (5)$$
which also applies to the arterial centroids ($\Psi_{US}$) to obtain $\Psi^G$. These points are visualized in 3D space to provide feedback on the geometry of the artery. The final step is to visualize the catheter shape ($\Phi^G$) obtained from (2) inside this artery. The catheter shape information is again obtained using the FBG sensor data. By transforming these data to the reference frame of the catheter base (as discussed in Figure 2), a real-time representation of the catheter shaft is obtained in the global frame ({G}). Furthermore, contact points between the catheter shaft and the inner arterial wall (spline-fitted contour points) are estimated. Since we do not account for soft-tissue deflections in this study, we determine these contact points in a non-deformable mock-up phantom of the porcine artery. After implementing the discussed framework on this mock-up, these contacts are determined by estimating the Euclidean distances between the mock-up boundary and the FBG sensor positions in 3D. Once these distances fall below a certain threshold, contacts are visualized.
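A sketch of this distance-threshold test, assuming the mock-up wall is available as a dense point set; the 0.5 mm threshold is illustrative, not the value used in the experiments:

```python
import numpy as np

def contact_points(fbg_positions, wall_points, threshold_mm=0.5):
    """Flag FBG sensor positions whose distance to the nearest (spline-fitted)
    arterial-wall point falls below a threshold."""
    d = np.linalg.norm(fbg_positions[:, None, :] - wall_points[None, :, :], axis=2)
    return fbg_positions[d.min(axis=1) < threshold_mm]
```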

4. Experimental Results

This section describes the experimental setup used to validate our multi-modal sensing and feedback framework. The accuracy of US reconstruction is quantified using the CT-generated point-cloud data as a ground truth. This is followed by the estimation of contact points between the catheter shaft and the inner arterial wall, which is validated within the mock-up phantom of an artificial porcine artery. Finally, we validate the overall reconstruction accuracy of an artificial artery that is inside the porcine tissue.

4.1. Procedure

Catheter insertion is performed on a freshly excised porcine hind limb obtained from a local slaughterhouse (Figure 1, number ④). We prepare the limb by making incisions below the knee and above the hip. In order to establish artificial blood flow, a flexible silicone tube (10 mm inner diameter, 1 mm wall thickness) with dimensional characteristics similar to those of a femoral artery is inserted into the limb [38]. The limb is scanned with the CT scanner, and the DICOM output data file is imported into the ARMM Graphical User Interface (GUI) [32]. This GUI converts anatomical details to point-cloud data structures, which are automatically processed by Algorithms 1 and 2 for n = 16 setpoints. Next, the pre-operative planning phase (Section 2.4) is validated by performing a US sweep on the stationary (ST) limb. The US focus depth is set to 20–40 mm, with a maximum depth of 90 mm at a frequency of 10 MHz. The position-orientation and force-control parameters for the US transducer are shown in Table 1. For the 10 mm diameter tube representing the artery, we choose the number of boundary detection indices as N = 30.
Subsequently, multi-modal feedback and visualization (Section 3) are demonstrated during PLMs, which are produced by the 6-DoF Stewart platform and compensated for by the VS controller. The desired US transducer poses and trajectories are calculated for a segment of the artificial artery (length 125 mm), followed by arterial visualization. Finally, insertion is done with the FBG-embedded catheter, which is visualized inside the artificial artery on the PCL interface in conjunction with the tissue surface. Please refer to the accompanying video (https://www.dropbox.com/s/643q13ixv52oavn/ARMM_Multimodal_Systems.mp4?dl=0) to view this procedure.

4.2. Results

The accuracies at which the US images and the catheter shape are reconstructed in the stationary and VS experiments are calculated (Figure 5). In order to first validate the accuracy of the US reconstruction, we calculate the 3D Euclidean errors of the reconstructed arterial centroids and quantify the mean and mean absolute deviations of these errors. During the stationary trial, the arterial centroids ( Ψ US ) are compared with the ground truth data obtained from the CT slices ( Ψ CT ) resulting in a mean spatial error of 1.1 ± 0.5 mm. With PLM-induced disturbances, the arterial centroids ( Ψ VS ) deviate from the actual centroids ( Ψ US ) at a mean positioning error of 1.9 ± 0.3 mm. These results show the desired correspondence of the reconstructed US artery with the ground truth (Figure 6).
Next, we validate the reproducibility of the US images and the consistency of the transducer orientations. The reproducibility concerns the amount of overlap between two US images of the same setpoint acquired with and without motions. This overlap is obtained by deriving the Sørensen–Dice coefficient of the corresponding 2D masks of 16 reconstructed US slices. The mean similarity between US slices is 0.84. Specifically, in Figure 6c, we observe that the controller reliably compensates for disturbances only from the second via-point, shifting from 1.00 (n = 1) to 0.02 (n = 2). We attribute this to the gradual, low change in tissue-transducer contact force that occurs at the second via-point, in contrast to the high, sudden transition between position and force control at the first via-point. This outlier can be avoided by manually positioning the transducer close to the first surface point before commencing the US sweep. Overall, the controller adapts quickly to sudden changes in the phantom pose, as robot joint velocities take an average of 6 ms to be calculated and prescribed. Catheter shape information is retrieved from the FBG sensors embedded in the catheter shaft and successfully visualized (Figure 7). The shape is represented as a point-cloud that encapsulates the measured FBG sensor positions in 3D. The mean Euclidean error between the reconstructed coordinates of the FBG sensors and those measured by the FBGS interrogator is 0.82 mm, with a maximum error of 1.52 mm. Finally, we have demonstrated that the FBG sensors also aid in the identification of potential contact points inside the artery.
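The Sørensen–Dice coefficient used for this reproducibility check can be computed directly from the two binary masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Sørensen–Dice overlap between two binary 2D masks of the same US setpoint."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())
```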

4.3. Error Analysis

Sources of the cumulative localization error are divided into those concerning the arterial and catheter reconstructions, respectively. Arterial reconstruction errors stem from the chain of relative transformation matrices in (5), in which errors can accumulate. These sources include the in-image localization errors and the robot positioning error (0.3 ± 0.1 mm), of which 90% is attributed to robot geometric errors [39]. Catheter reconstruction errors originate from the FBG sensor reflectivity and the uncertainties of precision sphere triangulation. A higher reflectivity of the sensors would result in more accurate detection of the Bragg wavelength and, hence, reduced error [33]. Moreover, triangulation errors occur due to the calibration of the motion tracking system, which has been measured for the ARMM system as shown in [40]. The triangulation error is 0.56 ± 0.08 mm, which is the mean Euclidean distance between the coordinates in the global reference frame ({G}) and their reprojections. The quantification of these calibration errors is explained in more detail in the Supplementary Materials (Table S1).
Due to the nature of soft tissue, reconstruction errors are expected to be inconsistent and dependent on the precision limitations of the sub-systems. Such errors are important to consider for the measurement accuracy of arterial geometry and for guiding catheters using sub-surface imaging. Apart from the reported mean positioning errors between estimated and actual spatial coordinates, the reconstruction errors are fitted to two realistic scenarios. In the first scenario (S.A), the dichotomous independent variable is chosen as a threshold of 1.12 mm, defining whether the error is clinically acceptable or not [41]. In the second scenario (S.B), this variable is defined by the length of a US sweep, which we believe influences the accuracy due to robotic drift. We arbitrarily choose half the distance traveled along the segment of the porcine artery (125 mm length) as the threshold.
Our dependent variable comprises three categories of arterial reconstruction error that would influence catheter tip placement in both scenarios. These categories are based on the bounds $\epsilon_p \in (D_c/2, D_c)$ of the catheter diameter, with a low error (≤ $D_c/2$), a medium error within these bounds, and a high error (≥ $D_c$). Based on these variables, uncertainties are calculated to understand the reliability of the results. We perform a Proportional Reduction in Error (PRE) analysis in SPSS Statistics (IBM, New York, NY, USA) for each scenario (Table 2). This analysis delivers uncertainty coefficients of 0.618 and 0.261, respectively. For scenario S.A, this implies that choosing a clinically-accepted reconstruction threshold reduces the probability of a prediction error by 61.8%. In addition, having knowledge of the artery length in scenario S.B improves the probability of predicting the correct error category by 26.1%. Finally, the US reproducibility metric (0.84 during motions) is also quantified for stationary measurements, resulting in a mean Dice coefficient of 0.96. According to [42], coefficients higher than 0.7 indicate excellent agreement.
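The uncertainty coefficient reported by SPSS is Theil's U, i.e., the proportional reduction in the entropy of the error category given the scenario variable; a sketch of that computation from a contingency table of counts, under the assumption that rows hold the error categories and columns the dichotomous independent variable:

```python
import numpy as np

def uncertainty_coefficient(table):
    """Theil's U of the row variable (error category) given the column variable."""
    p = table / table.sum()
    p_col = p.sum(axis=0)

    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    h_rows = entropy(p.sum(axis=1))
    h_rows_given_cols = sum(p_col[j] * entropy(p[:, j] / p_col[j])
                            for j in range(p.shape[1]) if p_col[j] > 0)
    return (h_rows - h_rows_given_cols) / h_rows
```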

5. Conclusions and Future Work

In this study, we present a clinically-relevant 3D visualization framework for an autonomous robotic system that integrates multiple imaging sub-systems with FBG sensing technology. By means of multi-modal sensing, the robotic system provides real-time feedback on an FBG-embedded catheter and 3D tissue surfaces while compensating for uncertainties such as PLMs. The controlled synchronization of a serial-link robot with a moving limb is achieved. The stabilized images of the limb tissue surface, vasculature, and catheter are presented to the operator, allowing for catheterization in a virtually-motionless limb. We experimentally evaluate the reconstruction accuracy of the system in stationary and non-stationary scenarios, resulting in mean positioning errors of 1.9 ± 0.3 mm and 0.82 ± 0.21 mm for the reconstructed arteries and catheter, respectively.

5.1. Current Limitations and Clinical Feasibility

While the framework presented in this study is reliable in terms of its reconstruction accuracies, several findings have been identified that have important implications for developing a clinic-ready system. Firstly, whether the reported errors are considered acceptable depends on the intended vascular application. The acoustic lens of the L14-5 transducer allows for the visualization of all arterial diameters, since the maximum reported diameter (24 mm) is that of the aorta [43]. In contrast to neurosurgeries, where cerebral vein diameters are much smaller, the consensus among vascular surgeons is that submillimeter reconstruction accuracy is not necessarily required [44]. Nonetheless, the maximum tracking accuracies of commercial RCCNS are ≥4 mm [45], making the current framework comparable with the state-of-the-art.
Secondly, the integration of numerous sub-systems may be tedious and difficult to implement during point-of-care diagnosis. The ARMM GUI can alleviate this difficulty, since it allows clinicians to experience surgical practice by delivering visual and sensing information. Furthermore, some of the presented sub-systems can be reduced or replaced. A reliable alternative for the robot arm holding the US transducer would be a redundant 7-DoF robot such as the Panda (FRANKA EMIKA GmbH, Munich, Germany) or the LBR iiwa robot (KUKA, Augsburg, Germany). These robots contain integrated torque sensors, which eliminate the need for explicit force control, as an impedance controller can instead be utilized in conjunction with the 3D camera. Furthermore, the 3D camera can be employed to stream both topographical landmarks and their poses, thereby potentially eliminating the need for the motion-capture system.
Thirdly, this framework is regarded as safe and effective from a risk perspective. Catheters and surrounding arteries can be visualized for a clinician with low error, indicating a strong potential of this framework towards virtual stabilization in a surgical environment. However, there may still be cases in which intra-operative X-rays are required to visualize vascular structures at the target site or to observe clinical procedures that are more intricate. Notwithstanding, the use of US images can bypass this requirement [46]. Since US images and motion data are captured in real-time, this framework could help clinicians visualize the vascular target and particular tools, such as stents, ablation tips, and angioplasty balloons, given that this target is known a priori. It is recommended that visualization remain without intra-operative X-ray imaging, unless mechanical complications arise.
Finally, the maneuverability of the current FBG-embedded catheter is limited by a design constraint: the bending radius of the FBG fiber. For vascular catheterizations, especially those relating to cardiac disease, arteries may be more tortuous than those presented in this study. Fortunately, other FBG fibers exist that can be integrated with catheter shafts with more resilience. Fibers with higher core aperture values and polyimide protective coatings have been reported with bending radii of 2.6 mm [47]. Such fibers can replace the one presented in Figure 2, in addition to choosing catheters based on the distance to vascular lesions, the tortuousness of the route, and the diameter of the vasculature.

5.2. Future Work

In future studies, this framework will be utilized to allow for improved control over the inner arterial positioning of catheters within the ARMM system. Catheters will be guided in this system by means of magnetic actuation [32], followed by the demonstration of a specific function (e.g., atrial fibrillation, angioplasty, or atherectomy techniques). The contact-point formulation can be expanded in an attempt to combat undesirable catheter-tissue friction during insertion. Furthermore, this framework would serve as a baseline upon which more complicated diagnostic capabilities can be built, for example, segmenting different layers of body tissue by exclusively processing US images and 3D depth camera data. This would require improvements that enable the 3D reconstruction of the entire US volume, as opposed to separate slices in that volume. Finally, different diameters of vasculature should be tested, since this framework could then be applicable during the treatment of brain and spinal cord aneurysms, which involve much smaller vascular networks than those investigated in this study. It is recommended that this framework be tested by clinicians who conventionally utilize X-ray fluoroscopy, and that the results be compared to report on its feasibility. Furthermore, the reconstruction errors should be evaluated through practice-based statistical analysis, as explained in [48], which would require a large number of observations through US sweeps over larger distances than those demonstrated in this study.

Supplementary Materials

The following are available at https://www.mdpi.com/1424-8220/21/1/273/s1, Figure S1: Calibration of the robot arm and ultrasound (US) probe, Figure S2: Transforming UR5 coordinates to the global reference frame, Figure S3: Design schematic of the ultrasound (US) calibration object, Figure S4: Results of the calibration and acquisition protocol for the Fiber Bragg Grating (FBG)-embedded catheter, Figure S5: The Stewart platform assembly, Table S1: Sources of error and their fractional uncertainties. Accuracies and absolute uncertainties are shown (in mm) for each source. Reconstruction errors are indicated for the stationary (ST) versus Computed Tomography (CT) data, and for the ST versus Visual-Servoing (VS) data, Table S2: Measured pixel distances and pixel density, Table S3: The Stewart Platform Fourier coefficients, and Video S1: Video.mp4.

Author Contributions

Conceptualization, C.M.H.; Funding acquisition, S.M.; Methodology, C.M.H., F.Š. and C.F.S.; Project administration, C.M.H.; Software, C.M.H., F.Š. and C.F.S.; Supervision, S.M.; Validation, C.F.S.; Visualization, C.M.H. and F.Š.; Writing—review and editing, F.Š. and S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by funds from the Netherlands Organization for Scientific Research (Innovational Research Incentives Scheme—VIDI: SAMURAI project # 14855).

Institutional Review Board Statement

This work was performed on tissue from animals that were killed as part of routine commercial food production and does not need ethical approval as per the institutional and national guidelines.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank H.R. Liefers and J.G.M. Greve for their expert advice on CT scanning and assistance with the CT scanner. The help with the porcine tissue provided by Keurslager Herman Huuskes is also acknowledged.

Conflicts of Interest

The authors declare no conflict of interest. Furthermore, the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CT: Computed Tomography
FBG: Fiber Bragg Grating
PLMs: Periodic Limb Movements
RCCNS: Remote-Controlled Catheter Navigation Systems
US: Ultrasound

Appendix A. Task-Space Velocity Controller

In (A1)–(A14), all velocities are specified as those necessary to reach the target pose, and their indices (i) and those of the error vectors are omitted here for notational simplicity. In Section 3.1, the position error ($E_p \in \mathbb{R}^3$) of the transducer is calculated as
$$E_p = \Delta s_i - p_u, \qquad (A1)$$
followed by calculating the orientation error ($E_\theta = [0 \; E_{\theta_y} \; E_{\theta_z}]^T \in \mathbb{R}^3$). Two unit vectors describe the orientation of the transducer: its z-axis ($\hat{\theta}_z \in \mathbb{R}^3$) and its y-axis ($\hat{\theta}_y \in \mathbb{R}^3$) (Figure 3a). First, the angular error between the current (u) and desired (i) z-axes is calculated:
$$E_{\theta_z} = \arcsin(|\theta_z|), \qquad (A2)$$
$$\theta_z = \hat{\theta}_{u,z} \times \hat{\theta}_{i,z}. \qquad (A3)$$
Likewise, the angular error of the y-axes is
$$E_{\theta_y} = \arcsin(|\theta_y|), \qquad (A4)$$
$$\theta_y = \mathbf{R}^T(E_{\theta_z}, \hat{\theta}_z) \left( \left( \mathbf{R}(E_{\theta_z}, \hat{\theta}_z)\, \hat{\theta}_{u,y} \right) \times \hat{\theta}_{i,y} \right). \qquad (A5)$$
In (A5), we compensate for the error in the z-axis through the rotation ($\mathbf{R}(E_{\theta_z}, \hat{\theta}_z) \in SO(3)$) that rotates $\hat{\theta}_{u,z}$ onto $\hat{\theta}_{i,z}$ about $\hat{\theta}_z$. The desired angular velocities ($\omega_y \in \mathbb{R}^3$) and ($\omega_z \in \mathbb{R}^3$) can now be calculated for each axis:
$$\omega_y = \omega_d \left( \frac{\theta_y}{|\theta_y|} \left( 1 - e^{-E_{\theta_y}/\gamma} \right) \right), \qquad (A6)$$
$$\omega_z = \omega_d \left( \frac{\theta_z}{|\theta_z|} \left( 1 - e^{-E_{\theta_z}/\gamma} \right) \right), \qquad (A7)$$
which then constitutes the final angular velocity,
$$\omega_e = \omega_y + \omega_z. \qquad (A8)$$
In (A6) and (A7), $\omega_d$ is the maximum angular velocity about $\theta_y$ and $\theta_z$, and $\gamma$ serves as a threshold for the stopping region. To reduce downward surface tension before force control is active, the 2D positioning error is considered, i.e., when (A1) becomes $E_p = [E_{p_x} \; E_{p_y} \; 0]^T$:
$$V_e = V_d \left( 1 - e^{-|E_p|/\mu} \right) \hat{E}_p + \left( \tau \int_{t-1}^{t} E_p \, dt \right) e^{-(|E_p|/\mu)^5}. \qquad (A9)$$
The constant $\mu$ is analogous to $\gamma$, $\hat{E}_p = E_p / |E_p|$ denotes the direction of the error, $V_d$ is the maximum tool velocity, and the term $(1 - e^{-|E_p|/\mu})$ ensures continuity. The integral time constant ($\tau$) is tuned empirically, and t is the updated timestep at each iteration. A discrete setpoint velocity for the explicit force controller is calculated as
$$V[k] = K_p E_f[k] + K_i V[k-1], \qquad (A10)$$
where $K_p$ and $K_i$ are the proportional and integral gains and k is the discrete-time index. In this case, the positioning error in (A1) changes to $E_{\parallel}$, which is the component parallel to the surface:
$$E_{\parallel} = E_p - \hat{x} \left( \hat{x} \cdot E_p \right), \qquad (A11)$$
where $\hat{x}$ is the unit vector perpendicular to the surface. The desired velocity is then recalculated by summing (A9) and (A10):
$$V_e \leftarrow V_e + V[k]. \qquad (A12)$$
The spatial velocity of the robot end-effector comprises (A8) and (A12), resulting in $V_s = [V_e \; \omega_e]^T$. Utilizing the task velocity in (A9) when the transducer is in the air, and in (A12) for direct contact with the tissue, we prescribe joint velocities ($\dot{q} \in \mathbb{R}^6$) using the damped pseudo-inverse of the robot Jacobian ($J_m \in \mathbb{R}^{6\times6}$), satisfying
$$\dot{q} = J_m^{\dagger} V_s. \qquad (A13)$$
$J_m^{\dagger}$ denotes the damped pseudo-inverse of $J_m$:
$$J_m^{\dagger} = J_m^T \left( J_m J_m^T + \rho^2 I \right)^{-1}, \qquad (A14)$$
where $\rho$ is the damping coefficient and $I \in \mathbb{R}^{6\times6}$ is the identity matrix.
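For illustration, (A13) and (A14) translate directly into a few lines of numpy; the damping value below is an arbitrary placeholder:

```python
import numpy as np

def damped_pinv(J, rho=0.05):
    """Damped pseudo-inverse of the 6x6 robot Jacobian, Equation (A14)."""
    return J.T @ np.linalg.inv(J @ J.T + rho**2 * np.eye(J.shape[0]))

def joint_velocities(J, V_e, omega_e, rho=0.05):
    """Joint velocities from the spatial velocity V_s = [V_e, omega_e], Equation (A13)."""
    return damped_pinv(J, rho) @ np.concatenate([V_e, omega_e])
```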

Appendix B. Artificial Arterial Boundary Reconstruction

The model ($D(x_q)$) from (4) is calculated as
$$D(x_q) = \frac{f_q h_q}{\sqrt{h_q^2 \cos^2(\theta_q - \phi_q) + f_q^2 \sin^2(\theta_q - \phi_q)}}, \qquad (A15)$$
and relates the states of the ellipse to its radius at the angle $\theta_q$. The final estimates of the states ($\hat{x}_{q|q}$) are:
$$\hat{x}_{q|q} = \hat{x}_{q|q-1} + G_q \left( r_q - D(\hat{x}_{q|q-1}) \right), \qquad (A16)$$
where G q is the Kalman gain calculated at each iteration (q):
$$G_q = P_{q|q-1} J_q^T S_q^{-1}. \qquad (A17)$$
In (A17), $P_{q|q-1}$ is the covariance of the model prediction, $J_q^T$ is the transpose of the Jacobian matrix of $D(\hat{x}_{q|q-1})$, and $S_q$ is the covariance of the innovation $r_q - D(\hat{x}_{q|q-1})$. Finally, the covariance matrix of the state estimate is updated at each iteration as
$$P_{q|q} = P_{q|q-1} - G_q S_q G_q^T. \qquad (A18)$$
In order to calculate the innovation term in (A16), a radius ($r_q$) is defined on the first US image (Figure 4b). This radius originates from a 2D seed point obtained from the first known centroid ($c_{i=1} = [0, c_y, c_z]$) and constitutes M equi-spaced edge candidates. It has a weighted average length of
$$r_q = \sum_{j=1}^{M} r_q^j \beta_q^j. \qquad (A19)$$
The weights ($\beta_q^j$) determine the likelihood of a certain candidate (j) being an edge by assuming a normal distribution around the contour of each predicted edge point:
$$\beta_q^j = \frac{p_q^j}{\sum_j p_q^j}, \qquad (A20)$$
where
$$p_q^j = \frac{F_e(r_q^j, \theta_q)^2}{\sqrt{2 \pi S_q}} \exp \left( -\frac{\left( r_q^j - D(\hat{x}_{q|q-1}) \right)^2}{2 S_q} \right) \qquad (A21)$$
is the probability distribution function of the correct measurement. $F_e(r_q^j, \theta_q)$ is the edge magnitude at the point ($r_q^j$, $\theta_q$) and is calculated as an intensity constrained by an upper threshold.
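As a compact illustration of this edge model, the ellipse radius of (A15) and the weighted-average measurement of (A19)–(A21) can be sketched as follows; the squared edge magnitude mirrors (A21) as reconstructed here, and all inputs are assumed to be plain numpy arrays:

```python
import numpy as np

def ellipse_radius(x, theta):
    """Measurement model D(x_q) of (A15): radius of an ellipse with semi-axes
    (f, h) and tilt phi, evaluated at angle theta."""
    f, h, phi = x
    return f * h / np.sqrt(h**2 * np.cos(theta - phi)**2 + f**2 * np.sin(theta - phi)**2)

def weighted_radius(radii, edge_mag, r_pred, S):
    """Weighted-average measurement r_q of (A19): the M candidate radii are weighted
    by edge magnitude and a Gaussian around the predicted radius, per (A20)-(A21)."""
    p = edge_mag**2 * np.exp(-(radii - r_pred)**2 / (2.0 * S)) / np.sqrt(2.0 * np.pi * S)
    beta = p / p.sum()
    return (beta * radii).sum()
```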

References

  1. Mahmud, E.; Naghi, J.; Ang, L.; Harrison, J.; Behnamfar, O.; Pourdjabbar, A.; Reeves, R.; Patel, M. Demonstration of the safety and feasibility of robotically assisted percutaneous coronary intervention in complex coronary lesions: Results of the CORA-PCI study (Complex Robotically Assisted Percutaneous Coronary Intervention). JACC Cardiovasc. Interv. 2017, 10, 1320–1327.
  2. Fu, Y.; Liu, H.; Huang, W.; Wang, S.; Liang, Z. Steerable catheters in minimally invasive vascular surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2009, 5, 381–391.
  3. Sharei, H.; Alderliesten, T.; van den Dobbelsteen, J.J.; Dankelman, J. Navigation of guidewires and catheters in the body during intervention procedures: A review of computer-based models. J. Med. Imaging 2018, 5, 010902.
  4. Aspelin, P.; Aubry, P.; Fransson, S.-G.; Strasser, R.; Willenbrock, R.; Berg, K.J. Nephrotoxic effects in high-risk patients undergoing angiography. N. Engl. J. Med. 2003, 348, 491–499.
  5. Einstein, A.J. Effects of radiation exposure from cardiac imaging: How good are the data? J. Am. Coll. Cardiol. 2012, 59, 553–565.
  6. Trejos, A.L.; Patel, R.V.; Naish, M.D. Force sensing and its application in minimally invasive surgery and therapy: A survey. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2010, 224, 1435–1454.
  7. Bell, J.A.; Saikus, C.E.; Ratnayaka, K.; Wu, V.; Sonmez, M.; Faranesh, A.Z.; Colyer, J.H.; Lederman, R.J.; Kocaturk, O. A deflectable guiding catheter for real-time MRI-guided interventions. J. Magn. Reson. Imaging 2012, 35, 908–915.
  8. Liu, T.; Poirot, N.L.; Franson, D.; Seiberlich, N.; Griswold, M.A.; Çavuşoğlu, M.C. Modeling and validation of the three-dimensional deflection of an MRI-compatible magnetically-actuated steerable catheter. IEEE Trans. Biomed. Eng. 2015, 63, 2142–2154.
  9. Alam, I.S.; Steinberg, I.; Vermesh, O.; van den Berg, N.S.; Rosenthal, E.L.; van Dam, G.M.; Ntziachristos, V.; Gambhir, S.S.; Hernot, S.; Rogalla, S. Emerging intraoperative imaging modalities to improve surgical precision. Mol. Imaging Biol. 2018, 20, 705–715.
  10. Tavallaei, M.A.; Lavdas, M.K.; Gelman, D.; Drangova, M. Magnetic resonance imaging compatible remote catheter navigation system with 3 degrees of freedom. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1537–1545.
  11. Abolmaesumi, P.; Salcudean, S.E.; Zhu, W.-H.; Sirouspour, M.R.; DiMaio, S.P. Image-guided control of a robot for medical ultrasound. IEEE Trans. Robot. Autom. 2002, 18, 11–23.
  12. Chen, X.; Bao, N.; Li, J.; Kang, Y. A review of surgery navigation system based on ultrasound guidance. In Proceedings of the IEEE International Conference on Information and Automation (ICIA), Shenyang, China, 6–8 June 2012; pp. 882–886.
  13. Roesthuis, R.J.; Kemp, M.; van den Dobbelsteen, J.J.; Misra, S. Three-dimensional needle shape reconstruction using an array of fiber Bragg grating sensors. IEEE/ASME Trans. Mechatron. 2014, 19, 1115–1126.
  14. Denasi, A.; Khan, F.; Boskma, K.J.; Kaya, M.; Hennersperger, C.; Göbl, R.; Tirindelli, M.; Navab, N.; Misra, S. An observer-based fusion method using multicore optical shape sensors and ultrasound images for magnetically-actuated catheters. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; p. 50.
  15. Khan, F.; Roesthuis, R.J.; Misra, S. Force sensing in continuum manipulators using fiber Bragg grating sensors. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2531–2536.
  16. Kouh Soltani, M.; Khanmohammadi, S.; Ghalichi, F. A three-dimensional shape-based force and stiffness-sensing platform for tendon-driven catheters. Sensors 2016, 16, 990.
  17. Zhang, C.; Guo, S.; Gao, B.; Jia, F. The beam theory-based collision force detection of the catheter operating system. In Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA), Harbin, China, 7–10 August 2016; pp. 701–706.
  18. Antico, M.; Sasazawa, F.; Wu, L.; Jaiprakash, A.; Roberts, J.; Crawford, R.; Pandey, A.K.; Fontanarosa, D. Ultrasound guidance in minimally invasive robotic procedures. Med. Image Anal. 2019, 54, 149–167.
  19. Langsch, F.; Virga, S.; Esteban, J.; Göbl, R.; Navab, N. Robotic ultrasound for catheter navigation in endovascular procedures. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 5404–5410.
  20. Mathiassen, K.; Fjellin, J.E.; Glette, K.; Hol, P.K.; Elle, O.J. An ultrasound robotic system using the commercial robot UR5. Front. Robot. AI 2016, 3, 1–16.
  21. Pratt, P.; Hughes-Hallett, A.; Zhang, L.; Patel, N.; Mayer, E.; Darzi, A.; Yang, G.-Z. Autonomous ultrasound-guided tissue dissection. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 249–257.
  22. Graumann, C.; Fuerst, B.; Hennersperger, C.; Bork, F.; Navab, N. Robotic ultrasound trajectory planning for volume of interest coverage. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 736–741.
  23. Jiang, Z.; Grimm, M.; Zhou, M.; Esteban, J.; Simson, W.; Zahnd, G.; Navab, N. Automatic normal positioning of robotic ultrasound probe based only on confidence map optimization and force measurement. IEEE Robot. Autom. Lett. 2020, 5, 1342–1349.
  24. Vanegas-Arroyave, N.; Zaghloul, K.; Hallett, M.; Lungu, C. Leg movements during general anesthesia. Mov. Disord. Clin. Pract. 2016, 3, 510–512.
  25. Kwartowitz, D.M.; Miga, M.I.; Herrell, S.D.; Galloway, R.L. Towards image guided robotic surgery: Multi-arm tracking through hybrid localization. Int. J. Comput. Assist. Radiol. Surg. 2009, 4, 281–286.
  26. Nguyen, Q.C.; Kim, Y.; Kwon, H. Optimization of layout and path planning of surgical robotic system. Int. J. Control Autom. Syst. 2017, 15, 375–384.
  27. Ren, H.; Banerjee, H. A preface in electromagnetic robotic actuation and sensing in medicine. In Electromagnetic Actuation and Sensing in Medical Robotics; Springer: Singapore, 2018; pp. 1–10.
  28. Kenngott, H.; Neuhaus, J.; Müller-Stich, B.; Wolf, I.; Vetter, M.; Meinzer, H.P.; Köninger, J.; Büchler, M.; Gutt, C. Development of a navigation system for minimally invasive esophagectomy. Surg. Endosc. 2008, 22, 1858–1865.
  29. Goldsmith, A.; Pedersen, P.; Szabo, T. An inertial-optical tracking system for portable, quantitative, 3D ultrasound. In Proceedings of the IEEE Ultrasonics Symposium, Beijing, China, 2–5 November 2008; pp. 45–49.
  30. Ataide, E.J.G.; Ziegle, J.; Kalmar, M.; Rathi, S.; Shukla, S.; Boese, A.; Friebe, M. Feasibility and initial results of assisted ultrasound scan acquisition for improved tomographic visualization. In Proceedings of the 2019 IEEE 16th India Council International Conference (INDICON), Rajkot, India, 13–15 December 2019; pp. 1–4.
  31. Frantz, D.D.; Wiles, A.; Leis, S.; Kirsch, S. Accuracy assessment protocols for electromagnetic tracking systems. Phys. Med. Biol. 2003, 48, 2241–2251.
  32. Heunis, C.M.; Wotte, Y.P.; Sikorski, J.; Furtado, G.P.; Misra, S. The ARMM system-autonomous steering of magnetically-actuated catheters: Towards endovascular applications. IEEE Robot. Autom. Lett. 2020, 5, 704–711.
  33. Khan, F.; Donder, A.; Galvan, S.; y Baena, F.R.; Misra, S. Pose measurement of flexible medical instruments using fiber Bragg gratings in multi-core fiber. IEEE Sens. J. 2020.
  34. Besl, P.J.; McKay, N.D. Method for registration of 3D shapes. Proc. Sens. Fusion IV Control Paradig. Data Struct. 1992, 1611, 586–606.
  35. Koo, B.B.; Blackwell, T.; Ancoli-Israel, S.; Stone, K.L.; Stefanick, M.L.; Redline, S.; Osteoporotic Fractures in Men (MrOS) Study Group. Association of incident cardiovascular disease with periodic limb movements during sleep in older men: Outcomes of sleep disorders in older men (MrOS) study. Circulation 2011.
  36. Guerrero, J.; Salcudean, S.E.; McEwen, J.; Masri, B.A.; Nicolaou, S. Real-time vessel segmentation and tracking for ultrasound imaging applications. IEEE Trans. Med. Imaging 2007, 26, 1079–1090.
  37. Friedland, N.; Adam, D. Automatic ventricular cavity boundary detection from sequential ultrasound images using simulated annealing. IEEE Trans. Med. Imaging 1989, 8, 344–353.
  38. Maggi, L.E.; Von Krüger, M.A.; Pereira, W.C.A.; Monteiro, E.E.C. Development of silicon-based materials for ultrasound biological phantoms. IEEE Int. Ultrason. Symp. (IUS) 2009, 1962–1965.
  39. Elatta, A.; Gen, L.P.; Zhi, F.L.; Daoyuan, Y.; Fei, L. An overview of robot calibration. Inform. Technol. J. 2004, 3, 74–78.
  40. Heunis, C.M.; Barata, B.F.; Furtado, G.P.; Misra, S. Collaborative surgical robots: Optical tracking during endovascular surgery. IEEE Robot. Autom. Mag. 2020.
  41. Rogers, S.; Carreira, J.; Thompson, R.; Morais, A.; Miller, C.; Wein, W.; Ghosh, J.; McCollum, C. An ex vivo evaluation of tomographic 3D ultrasound, B-mode ultrasound, CT and MR imaging to measure artery diameter, length and wall volume. Ultrasound Med. Biol. 2019, 45, 2819–2829.
  42. Zijdenbos, A.P.; Dawant, B.M.; Margolin, R.A.; Palmer, A.C. Morphometric analysis of white matter lesions in MR images: Method and validation. IEEE Trans. Med. Imaging 1994, 13, 716–724.
  43. Osnabrugge, R.L.; Mylotte, D.; Head, S.J.; Mieghem, N.M.V.; Nkomo, V.T.; LeReun, C.M.; Bogers, A.J.; Piazza, N.; Kappetein, A.P. Aortic stenosis in the elderly. J. Am. Coll. Cardiol. 2013, 62, 1002–1012.
  44. Kwoh, Y.S.; Hou, J.; Jonckheere, E.A.; Hayati, S. A robot with improved absolute positioning accuracy for CT guided stereotactic brain surgery. IEEE Trans. Biomed. Eng. 1988, 35, 153–160.
  45. Kenngott, H.; Wegner, I.; Neuhaus, J.; Nickel, F.; Fischer, L.; Gehrig, T.; Meinzer, H.; Müller-Stich, B. Magnetic tracking in the operation room using the da Vinci® telemanipulator is feasible. J. Robot. Surg. 2013, 7, 59–64. [Google Scholar] [CrossRef] [Green Version]
  46. Miccini, M.; Cassini, D.; Gregori, M.; Gazzanelli, S.; Cassibba, S.; Biacchi, D. Ultrasound-guided placement of central venous port systems via the right internal jugular vein: Are chest x-ray and/or fluoroscopy needed to confirm the correct placement of the device? World J. Surg. 2016, 40, 2353–2358. [Google Scholar] [CrossRef]
  47. Bronnikov, K.; Wolf, A.; Yakushin, S.; Dostovalov, A.; Egorova, O.; Zhuravlev, S.; Semjonov, S.; Wabnitz, S.; Babin, S. Durable shape sensor based on FBG array inscribed in polyimide-coated multicore optical fiber. Opt. Express 2019, 27, 38421–38434. [Google Scholar] [CrossRef] [Green Version]
  48. Sim, I.; Gorman, P.; Greenes, R.A.; Haynes, R.B.; Kaplan, B.; Lehmann, H.; Tang, P.C. Clinical decision support systems for the practice of evidence-based medicine. J. Am. Med. Inform. Assoc. 2001, 8, 527–534. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Pre-operative planning for the ultrasound (US) transducer. ① The porcine tissue is scanned pre-operatively with a computed tomography (CT) scanner. The resulting scan data are converted to a point-cloud dataset $\mathbf{B} \in \mathbb{R}^{m \times 3}$, whose $m$ 3D datapoints are used as input to Algorithm 1, which calculates a set of $n$ arterial centroids $\Psi_{CT} \in \mathbb{R}^{n \times 3}$. ② A second point cloud, $\mathbf{C} \in \mathbb{R}^{r \times 3}$, is obtained from a 3D surface scan of the tissue and used as input to Algorithm 2, the ultrasound pose planner. ③ The output is visualized as a set of poses consisting of projected surface via-points $\Upsilon \in \mathbb{R}^{n \times 3}$ and orientations $\theta \in \mathcal{S}^{n \times 3}$. ④ Finally, in the Advanced Robotics for Magnetic Manipulation (ARMM) workspace, a US sweep can be performed across the tissue, followed by multi-modal feedback of the 3D artery, tissue surface, and the Fiber Bragg Grating (FBG)-embedded catheter in the intra-operative phase.
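To make steps ① and ② concrete, the following minimal Python sketch shows one way the surface via-points could be computed from the CT centroids and the surface scan, assuming a vertical (+z) projection as described in the caption. The function name, the xy_radius parameter, and the fallback behaviour are illustrative assumptions, not the authors' implementation of Algorithm 2.

```python
import numpy as np

def project_centroids_to_surface(centroids, surface, xy_radius=5.0):
    """For each arterial centroid, find the surface point closest to the
    vertical line through the centroid (projection along +z).

    centroids : (n, 3) array of arterial centroids from the CT scan.
    surface   : (r, 3) array of 3D surface-scan points.
    Returns an (n, 3) array of projected surface via-points.
    """
    centroids = np.asarray(centroids, dtype=float)
    surface = np.asarray(surface, dtype=float)
    via_points = np.empty_like(centroids)
    for i, c in enumerate(centroids):
        # Horizontal distance of every surface point to the vertical
        # line through the centroid.
        d_xy = np.linalg.norm(surface[:, :2] - c[:2], axis=1)
        # Restrict to points near the vertical line, then take the
        # topmost one as the via-point on the skin surface.
        near = np.where(d_xy < xy_radius)[0]
        if near.size == 0:                      # fall back to nearest column
            near = np.array([np.argmin(d_xy)])
        via_points[i] = surface[near[np.argmax(surface[near, 2])]]
    return via_points
```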
Figure 2. Assembly and calibration protocol for the Fiber Bragg Grating (FBG)-embedded catheter: ① 32 FBG sensors are inside a multicore fiber that is inserted in, and fixed to, the catheter shaft. The catheter base has a known offset with respect to the first sensor ($f = 1$). ② A rigid-body precision-sphere tool on the catheter base is tracked by eight infrared cameras that surround the Advanced Robotics for Magnetic Manipulation (ARMM) workspace. ③ A mold containing three channels with known coordinates in the local frame {L} is used to calibrate the shape of the catheter relative to its base frame {C}, recorded in the global reference frame {G}. Frame {M} is recorded via precision spheres to obtain the pose of the mold. The straight channel (I) acts as a reference with zero strain shift of the FBG fiber. The second channel (II) is used to calculate the transformation $\mathbf{H}_{L}^{M}$. The final channel (III) is used to validate the global transformation between the FBG coordinates streamed by the interrogator ④ and both the local channel shape ($\mathbf{H}_{M}^{G}$) and the catheter base frame ($\mathbf{H}_{C}^{G}$). This information is combined to display the catheter shape in real time. All dimensions are in mm.
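The calibration in ③ amounts to chaining homogeneous transformations so that FBG-reconstructed points expressed in the mold's local frame {L} can be displayed in the global frame {G}. A minimal sketch of that chaining follows; the function and variable names are hypothetical, and the actual calibration pipeline may differ.

```python
import numpy as np

def to_global(points_local, H_L_M, H_M_G):
    """Map FBG-reconstructed shape points from the local frame {L} to the
    global frame {G} by chaining homogeneous transforms:
    p_G = H_M^G @ H_L^M @ p_L.

    points_local : (k, 3) catheter shape points in {L}.
    H_L_M, H_M_G : (4, 4) homogeneous transformation matrices.
    Returns the (k, 3) shape points expressed in {G}.
    """
    # Append a homogeneous coordinate of 1 to each 3D point.
    p_h = np.hstack([points_local, np.ones((points_local.shape[0], 1))])
    p_g = (H_M_G @ H_L_M @ p_h.T).T
    return p_g[:, :3]
```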
Figure 3. Intra-operative ultrasound (US) transducer pose planning. (a) In this study, we integrate three imaging modalities: a 3D depth camera, a US transducer, and a motion tracking system. Additionally, data from the Fiber Bragg Grating (FBG) sensors are fused with imaging data to provide real-time shape information of the endovascular catheter. This information is combined with the reconstructed environment from the pre-operative planning to visualize the catheter inside the arterial volume, which is subjected to Periodic Limb Movements (PLMs). All reference frames are indicated in braces ({ }). (b) Surface points $s_i$ are obtained by vertically projecting the arterial centroids $c_i$ from Algorithm 1 and finding the closest point in the surface point-cloud dataset $\mathbf{C}_j$. A trajectory for the US transducer is then calculated along the tissue surface. A proportional-integral (PI) controller positions the transducer, while its orientation is controlled by a P-controller. Together, these controllers realize the pose of the transducer (obtained from Algorithm 2) at a 3D surface target point.
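A hedged sketch of the hybrid control law described in (b) — a PI law on the position error and a P law on the orientation error, each with a velocity clamp — is given below. The class name, the axis-angle error input, and the clamping strategy are assumptions for illustration only.

```python
import numpy as np

class TransducerPoseController:
    """One control step of a hybrid pose scheme: PI on the 3D position
    error and P on the orientation error, both saturated."""

    def __init__(self, kp_pos, ki_pos, kp_ori, dt, v_max, w_max):
        self.kp_pos, self.ki_pos, self.kp_ori = kp_pos, ki_pos, kp_ori
        self.dt, self.v_max, self.w_max = dt, v_max, w_max
        self.integral = np.zeros(3)

    def step(self, p_target, p_current, angle_error_axis):
        # PI position law with output clamping as a simple anti-windup.
        e_p = p_target - p_current
        self.integral += e_p * self.dt
        v = self.kp_pos * e_p + self.ki_pos * self.integral
        n = np.linalg.norm(v)
        if n > self.v_max:
            v *= self.v_max / n
        # P orientation law: angle_error_axis is the axis-angle error vector.
        w = self.kp_ori * angle_error_axis
        n = np.linalg.norm(w)
        if n > self.w_max:
            w *= self.w_max / n
        return v, w   # commanded linear and angular velocities
```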
Figure 4. Ultrasound (US) image arterial reconstruction scheme. (a) For each US image slice, the arterial contours are extracted and reconstructed centroids $c_i$ are estimated. Knowledge of the ellipse parameters ($f$, $h$, and $\phi$) is required and included in an extended Kalman filter. (b) Equi-spaced edge candidates lie along radial lines with a maximum predefined length $r_{max}$. For each candidate $j$ on line $q$, the distance to the center along this line is defined as $r_{qj}$, $j \in \mathbb{N}$ ($j = 1, \ldots, N$). (c) The centroid of each US slice is estimated iteratively over iterations $q$ until the calculated center converges to the weighted center.
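The iterative centroid estimation in (b) and (c) can be sketched as follows, assuming the strongest intensity step along each radial line is taken as the edge candidate and an unweighted mean stands in for the weighted center; the extended Kalman filter over the ellipse parameters is omitted for brevity. All names and defaults are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_centroid(image, c0, n_lines=32, r_max=60, tol=0.5, max_iter=20):
    """Iteratively re-estimate a vessel centroid from radial edge candidates.

    image : 2D grayscale US slice; c0 : initial (x, y) center estimate.
    Along each of n_lines equi-spaced rays from the current center, the
    pixel with the strongest intensity step is taken as an edge candidate;
    the center then moves to the mean of the candidates until convergence.
    """
    c = np.asarray(c0, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_lines, endpoint=False)
    for _ in range(max_iter):
        edges = []
        for a in angles:
            d = np.array([np.cos(a), np.sin(a)])
            rs = np.arange(1, r_max)
            # Nearest-neighbour intensity samples along the ray.
            pts = (c + np.outer(rs, d)).astype(int)
            pts[:, 0] = np.clip(pts[:, 0], 0, image.shape[1] - 1)
            pts[:, 1] = np.clip(pts[:, 1], 0, image.shape[0] - 1)
            prof = image[pts[:, 1], pts[:, 0]].astype(float)
            j = np.argmax(np.abs(np.diff(prof)))   # strongest edge candidate
            edges.append(c + rs[j] * d)
        c_new = np.mean(edges, axis=0)
        if np.linalg.norm(c_new - c) < tol:        # converged
            break
        c = c_new
    return c
```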
Figure 5. Calculating the reconstruction accuracies: The US images are paired as slices pertaining to each via-point on the skin surface in the transducer frame {U}. Dice similarities are calculated between stationary (ST) US slices and dynamic slices captured during the visual-servo (VS) trial, both in the image reference frame {I}. Finally, the 3D Euclidean errors $\varepsilon_p$ are calculated, using the centroids $c_*$ of each slice ($* = i, j$) as reference points in frame {P}.
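The two accuracy metrics in this figure are standard: the Dice similarity between binary masks and the per-slice 3D Euclidean error between paired centroids. A minimal sketch of both follows; the function names are illustrative.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def euclidean_errors(centroids_a, centroids_b):
    """Per-slice 3D Euclidean error between paired (n, 3) centroid sets."""
    return np.linalg.norm(np.asarray(centroids_a) - np.asarray(centroids_b),
                          axis=1)
```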
Figure 6. 3D Euclidean errors ($\varepsilon_p$) and reconstruction results: (a) Stationary US image centroids are compared to the ground-truth centroids obtained from the Computed Tomography (CT) scan. The mean positioning error $\mu$ and the single ($\sigma$) and double ($2\sigma$) standard-deviation bands are indicated for each slice ($n$). (b) The Dice similarity results are shown, comparing the binary masks of the segmented US image pairs $I_*$ for the two trials ($* =$ ST, VS). Each mask is mapped in image coordinates $(u, v)$. (c) Finally, the 3D Euclidean errors are reported, comparing the US transducer positioning accuracy of the ST and VS acquisition trials.
Figure 7. Multi-modal visualization results: (a) The Point Cloud Library (PCL) interface is used to visualize the porcine tissue surface, the projected surface points (red), the ultrasound (US) transducer orientation frames, and the reconstructed US slices (white). (b) Visualization of the Fiber Bragg Grating (FBG)-embedded catheter ($\Phi_G$) inside the artificial artery. (c) The reconstructed US artery is reproduced as a surface mesh comprising centroids and boundaries; visual inspection shows close agreement with the Computed Tomography (CT) point cloud. (d) Potential contact points between the catheter shaft and the inner wall of the artificial artery are estimated by thresholding their Euclidean distances. For the current catheter (with a radius of 1 mm), this threshold is chosen as 1.25 mm. Please refer to the accompanying video (https://www.dropbox.com/s/643q13ixv52oavn/ARMM_Multimodal_Systems.mp4?dl=0) to view this visualization.
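The distance-thresholding step in (d) can be sketched with a nearest-neighbour query, assuming both the catheter shape and the arterial inner wall are available as 3D point sets; the use of SciPy's cKDTree and the function name are illustrative choices, not necessarily the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def contact_points(catheter_pts, wall_pts, threshold=1.25):
    """Flag catheter shape points whose distance to the nearest inner-wall
    point falls below the threshold (1.25 mm for the 1 mm-radius catheter
    described in the caption above).

    catheter_pts : (k, 3) catheter shape points in the global frame.
    wall_pts     : (w, 3) points sampled on the arterial inner wall.
    Returns the flagged contact points and all nearest-wall distances.
    """
    tree = cKDTree(wall_pts)            # nearest-neighbour index of the wall
    d, _ = tree.query(catheter_pts)     # distance to the closest wall point
    return catheter_pts[d < threshold], d
```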
Table 1. The position, orientation, and force control parameters for the robotically-actuated ultrasound (US) transducer. Each parameter is defined in Appendix A and Appendix B.

Group        Control Parameter                      Symbol   Value
Position     Positioning setpoint                   d_p      0.5 mm
             Velocity threshold                     μ        25 mm
             Integral time constant                 τ        20
             Maximum transducer linear velocity     V_d      50 mm/s
Orientation  Orientation setpoint                   d_θ      0.01 rad
             Orientation threshold                  γ        0.1 rad
             Maximum transducer angular velocity    ω_d      0.1 rad/s
Force        Contact force setpoint                 d_f      0.8 N
             Proportional gain                      K_p      0.5
             Integral gain                          K_i      0.7
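For illustration, a minimal PI force loop using the Table 1 values might look as follows; the time step and the mapping of the output to a velocity correction along the transducer axis are assumptions, not the system's actual control code.

```python
class ForceController:
    """PI regulation of the transducer contact force toward the 0.8 N
    setpoint from Table 1 (K_p = 0.5, K_i = 0.7). The output is treated
    as a velocity correction along the transducer axis."""

    def __init__(self, setpoint=0.8, kp=0.5, ki=0.7, dt=0.01):
        self.setpoint, self.kp, self.ki, self.dt = setpoint, kp, ki, dt
        self.integral = 0.0

    def step(self, measured_force):
        # Positive output pushes the transducer toward the tissue.
        error = self.setpoint - measured_force
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral
```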
Table 2. Directional measures of association between two scenarios (S.A and S.B) and the 3D reconstruction errors. Uncertainty coefficients of 0.618 and 0.261 are estimated and considered statistically significant (p < 0.01 for S.A and p < 0.05 for S.B).

Measure                                               Value           Asymptotic Std. Error ᵃ   Approximate T ᵇ   Approximate Significance
                                                      S.A     S.B     S.A      S.B              S.A      S.B      S.A        S.B
Lambda, symmetric                                     0.625   0.412   0.138    0.166            3.563    2.067    0.000      0.039
Goodman & Kruskal tau, scenario (dependent)           0.380   0.156   0.110    0.093            -        -        0.000 ᶜ    0.008 ᶜ
Goodman & Kruskal tau, positioning error (dependent)  0.637   0.304   0.102    0.127            -        -        0.000 ᶜ    0.009 ᶜ
Uncertainty coefficient, symmetric                    0.508   0.216   0.084    0.091            5.597    2.283    0.000 ᵈ    0.003 ᵈ
Uncertainty coefficient, scenario (dependent)         0.431   0.184   0.072    0.076            5.597    2.283    0.000 ᵈ    0.003 ᵈ
Uncertainty coefficient, positioning error (dependent) 0.618  0.261   0.105    0.114            5.597    2.283    0.000 ᵈ    0.003 ᵈ

ᵃ Not assuming the null hypothesis. ᵇ Using the asymptotic standard error assuming the null hypothesis. ᶜ Based on chi-square approximation. ᵈ Likelihood-ratio chi-square probability.
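The uncertainty coefficient reported above (Theil's U) measures the fraction of one variable's entropy explained by the other. A minimal sketch of its computation from a contingency table is given below, assuming natural-log entropies; this is an illustrative formula, not the statistical package used to produce Table 2.

```python
import numpy as np

def uncertainty_coefficient(table):
    """Theil's uncertainty coefficient U(row | column) from a contingency
    table: U = (H(R) - H(R|C)) / H(R), where the row variable is the
    dependent one (e.g., positioning error given scenario)."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()                              # joint probabilities p(r, c)
    p_r = p.sum(axis=1)                       # marginal of the row variable
    p_c = p.sum(axis=0)                       # marginal of the column variable
    h_r = -np.sum(p_r[p_r > 0] * np.log(p_r[p_r > 0]))
    # Conditional entropy H(R|C) = -sum p(r,c) * log(p(r,c) / p(c)).
    mask = p > 0
    pc_b = np.broadcast_to(p_c, p.shape)
    h_r_given_c = -np.sum(p[mask] * (np.log(p[mask]) - np.log(pc_b[mask])))
    return (h_r - h_r_given_c) / h_r
```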
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
