Article

Cross-Spectral Navigation with Sensor Handover for Enhanced Proximity Operations with Uncooperative Space Objects

by Massimiliano Bussolino, Gaia Letizia Civardi *, Matteo Quirino, Michele Bechini and Michèle Lavagna
Department of Aerospace Science and Technology (DAER), Politecnico di Milano, Via la Masa 34, 20156 Milan, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(20), 3910; https://doi.org/10.3390/rs16203910
Submission received: 18 September 2024 / Revised: 18 October 2024 / Accepted: 19 October 2024 / Published: 21 October 2024

Abstract: Close-proximity operations play a crucial role in emerging mission concepts, such as Active Debris Removal or small celestial bodies exploration. When approaching a non-cooperative target, the increased risk of collisions and reduced reliance on ground intervention necessitate autonomous on-board relative pose (position and attitude) estimation. Although navigation strategies relying on monocular cameras operating in the visible (VIS) spectrum have been extensively studied and tested in flight for navigation applications, their accuracy is heavily related to the target’s illumination conditions, thus limiting their applicability range. The novelty of the paper is the introduction of a thermal-infrared (TIR) camera to complement the VIS one to mitigate the aforementioned issues. The primary goal of this work is to evaluate the enhancement in navigation accuracy and robustness by performing VIS-TIR data fusion within an Extended Kalman Filter (EKF) and to assess the performance of such a navigation strategy in challenging illumination scenarios. The proposed navigation architecture is tightly coupled, leveraging correspondences between a known uncooperative target and feature points extracted from multispectral images. Furthermore, handover from one camera to the other is introduced to enable seamless operation across both spectra while prioritizing the most significant measurement sources. The pipeline is tested on synthetically generated VIS and TIR images of the Tango spacecraft. A performance assessment is carried out through numerical simulations considering different illumination conditions. Our results demonstrate that a combined VIS-TIR navigation strategy effectively enhances operational robustness and flexibility compared to traditional VIS-only navigation chains.

1. Introduction

In recent years, researchers have dedicated considerable attention to close-proximity operations around uncooperative orbiting artificial objects. Within this context, the onboard reconstruction of the chaser-target relative state vector is a crucial capability for upcoming mission scenarios such as formation flying missions (FF), on-orbit servicing demonstrators (OOS), and active debris removal, as well as small bodies exploration [1]. While these missions are currently in the spotlight of discussions, attaining feasibility still hinges on substantial technological advancements. The necessity for close-proximity manoeuvring introduces a requirement for a guidance, navigation, and control chain to be autonomously solved onboard to ensure timeliness, reactivity, effectiveness, and robustness in both nominal and off-nominal operations. The initial component of this chain is the relative state reconstruction and navigation. This is especially challenging when dealing with artificial uncooperative targets, which calls for a robust solution relying solely on the chaser’s capabilities, as highlighted in [2,3]. In this operational context, imaging with passive sensors emerges as the best sensor architecture. A comprehensive review of initial pose determination techniques based on a VIS monocular camera is provided in [4]. Further, VIS-based optical navigation has been successfully applied within both cooperative and uncooperative rendezvous, as pointed out in [5,6]. Nonetheless, the effectiveness of visible imaging is heavily reliant on illumination conditions. Consequently, OOS missions encounter substantial constraints when illumination requirements for VIS image acquisition are integrated into the design and definition of the close proximity operations. Elements such as the target orbit beta angle, attitude history, solar aspect angle induced by the chaser’s fly-around, and the camera axis could significantly jeopardize the ability to detect and track the target appropriately. This limitation may lead to an unacceptable increase in either mission duration or risk. Illumination bottlenecks become particularly pronounced for targets in Low Earth Orbit (LEO) experiencing prolonged eclipses, as highlighted in [7]. In recent years, the Hayabusa 2 mission successfully exploited its thermal-infrared (TIR) imager for vision-based navigation purposes [8]. Such an outcome has highlighted the possibility of combining sensors operating in different spectral bands to enhance the navigation solution accuracy. This work aims at exploiting a TIR imager, leveraging its insensitivity to illumination conditions, to overcome the limitations imposed by imaging sensors operating in the visible spectrum. Some preliminary work on the topic can be found in [9], in which the author assesses the best-performing feature detectors and descriptors for thermal-infrared images. Nevertheless, TIR sensors are usually characterized by a smaller array size compared to visible ones, thus they have a lower resolution and poorer contrast with respect to VIS sensors, which negatively affects image processing algorithms, as highlighted in [10]. To overcome these limitations, sensor fusion strategies can play a major role, as pointed out in [11]. Multispectral data fusion is a dominant technique within the field of Unmanned Aerial Vehicle (UAV) navigation [12], yet its role remains marginal in the domain of spacecraft relative navigation.
The different multispectral data fusion schemes can be divided into two main approaches: image fusion and high-level data fusion, as portrayed in Figure 1.
Multispectral image fusion aims at creating a new and more informative image type by combining the complementary strengths of the two distinct spectral bands. The newly obtained image type can then be fed to the subsequent navigation chain to enhance its robustness to illumination conditions. Image fusion has been successfully applied within the context of remote imaging [13], while its application in spacecraft navigation scenarios remains marginal. The work presented in [14] is concerned with the evaluation of different pixel-level image fusion techniques within the context of spacecraft relative navigation. The validity of image fusion techniques for navigation purposes is assessed in [15,16], where the authors test the effectiveness of pose initialization algorithms on VIS-TIR fused images. A further step is then presented in [17], in which a Convolutional Neural Network (CNN) based pose estimation algorithm is successfully tested on this new image type. Despite the effectiveness of pixel-level image fusion, this work focuses on high-level data fusion. Decision-level multispectral data fusion allows for more flexibility and robustness of the whole navigation chain. In fact, the two source images are processed separately and they are treated as two independent sensors. In this way, the information channels can be treated as redundant, and thus it is easier to isolate faulty measurements or to exclude one sensor when it is no longer providing meaningful information. The work in [18] presents a navigation architecture in which feature tracking is performed simultaneously on both VIS and TIR images, and this information is then fused within a Kalman Filter to achieve robustness in person motion tracking. Subsequently, the idea of performing high-level multispectral data fusion was adopted for relative navigation and mapping of asteroids and unknown spacecraft by [19,20], respectively. Further research on asteroid relative navigation is carried out in [21], in which the authors perform a CNN-based feature map fusion to enhance the accuracy of centroid detection during proximity operations of the HERA mission. In the presented research, the authors build on the approach presented in [22] to propose a flexible navigation strategy to fuse multispectral information. Specifically, the aforementioned work introduces the concept of sensor handover from one camera to the other to support night-time navigation for a mobile robot. The results presented in the paper suggest that the navigation accuracy and robustness can clearly benefit from the introduction of a thermal-infrared imager. However, the handover from the VIS to the TIR camera is controlled by an external user, whereas we implement a fully automatic switch between the two different sensing modalities. Feature detection and feature tracking are performed separately for VIS and TIR images, and the feature points’ positions are fed as observables to an Extended Kalman Filter (EKF). Model-to-image matching is then employed to establish 3D-2D correspondences between the known model and the extracted feature points. Such a tightly coupled architecture can be affected in terms of robustness whenever the target’s shape is particularly complex, since it may be more challenging to accurately detect and track a high number of features. However, given the relatively simple shape of the Tango spacecraft and the utilization of two cameras simultaneously, these robustness issues are easily mitigated.
Further, we introduce autonomous sensor handover from one sensing modality to the other, so as to retain only the most meaningful measurement sources. This approach may be extremely useful during eclipses, when the VIS camera is automatically excluded from the navigation chain, since it cannot contribute to the pose estimation task. The presented navigation scheme is applicable to known uncooperative targets, for which at least a simplified geometrical model is available. It is important to remark that the target geometry of an uncooperative space object may not always be known. The most common way of dealing with unknown uncooperative objects is to rely on Simultaneous Localization and Mapping (SLAM) techniques, which enable the chaser to both reconstruct the target’s shape and perform relative navigation. However, such techniques tend to be numerically heavy, due to the high number of map points that must be stored in memory. As proposed in [23], it can be useful to split the mission into two operative phases: the first one relies on SLAM to gather information about the target’s shape, while in the second phase of the mission the known geometry of the target object is exploited to perform model-based relative navigation. Building on this concept, we assume that the preliminary mapping phase has been completed and that at least partial information regarding the target’s shape can be exploited. The major contributions of the paper can then be summarized as follows:
  • Development of a tightly-coupled navigation chain for cross-spectral relative navigation, capable of performing autonomous sensor handover among the different sensing modalities.
  • Quantitative assessment of the advantages of introducing a TIR imager in the navigation chain through numerical simulations and high-fidelity image rendering tools.
The paper is organized as follows: Section 2 presents a schematic outline of the developed navigation architecture, highlighting the functionalities of each building block. The Image Processing (IP) functional block is thoroughly described in Section 3, while Section 4 details the relative navigation filter. The simulation environment is introduced in Section 5, while the results are presented and discussed in Section 6. Conclusions summarizing the main outcomes of our study are reported in Section 7.

2. Navigation Chain

A block diagram of the proposed visual navigation architecture is shown in Figure 2. The navigation chain can operate using VIS images, TIR images, or both. The image processing functional block performs feature detection and tracking, while an Extended Kalman Filter (EKF) estimates the relative chaser-target pose. The filter employs the positions of the target’s point features on the images as measurements and estimates the relative pose by comparing them with the expected positions of the matching target model landmarks projected onto the image plane. Our EKF exploits a target model which is built offline from the target’s geometry information, as similarly done in [2,24].

3. Image Processing

The image processing functional block is in charge of extracting features and tracking them across the incoming images. Periodically, feature re-initialization is performed. Lastly, sensor handover is performed from one camera to the other when necessary.

3.1. Features Detection and Tracking

Among different feature detectors, Oriented FAST and Rotated BRIEF (ORB) [25] has been selected for this work and is applied to both VIS and TIR images, due to its robustness to challenging illumination conditions and scale variations. ORB feature detection has been used in a wide number of applications for relative spacecraft navigation in the visible spectral band [26]. On the other hand, the literature concerning the evaluation of feature detection algorithms on TIR images is scarce. A preliminary performance evaluation of different detectors and descriptors is available in [27], in which they are tested on synthetic thermal-infrared images. This work shows that ORB features offer a good compromise between detection accuracy and computational time, and that they are well-suited for feature tracking. The number of features is limited to 250 to reduce the computational burden. The detected feature points are then tracked across the subsequent images using the Lucas-Kanade tracking algorithm [28].
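For illustration, a minimal sketch of this detection and tracking step using OpenCV is given below; the function names and tuning values (tracking window size, pyramid levels) are assumptions for the example and are not taken from the flight implementation.

```python
# Minimal sketch of ORB detection followed by Lucas-Kanade tracking, assuming OpenCV.
import cv2
import numpy as np

MAX_FEATURES = 250  # cap used in this work to limit the computational burden

orb = cv2.ORB_create(nfeatures=MAX_FEATURES)

def detect_features(image):
    """Detect up to MAX_FEATURES ORB keypoints and return their pixel coordinates."""
    keypoints = orb.detect(image, None)
    return np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)

def track_features(prev_image, next_image, prev_points):
    """Propagate feature points to the next frame with pyramidal Lucas-Kanade."""
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_image, next_image, prev_points, None,
        winSize=(21, 21), maxLevel=3)          # illustrative tuning values
    status = status.ravel().astype(bool)        # keep only successfully tracked points
    return next_points[status], prev_points[status]
```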

3.2. Features Re-Initialization

The number of tracked features decreases throughout the image sequence due to the relative motion between the chaser and the target. To keep the number of tracked features high enough to ensure a reliable pose estimation, a dedicated routine is implemented to detect and match new features to be added to the already tracked set. This process is initiated when the ratio between the convex hull of the tracked image features and the convex hull formed by projecting all the landmarks onto the images falls below a predetermined value. This indicates that only a portion of the target is currently encompassed by the tracked features. When this condition is verified, new features are detected in the part of the image outside the convex hull of the tracked set. The newly identified candidate features are subsequently projected based on the homography that establishes the relationship between the tracked image features and the model’s landmarks. This enables the matching of these candidates to the model landmarks through a straightforward nearest neighbour association. However, this method relies on correlations between previous features and landmarks to match new candidates. This dependence can introduce error propagation, especially if the initial matching is erroneous. To mitigate this issue, the navigation chain undergoes periodic full re-initialization of features. This process involves discarding the entire tracked set, detecting new features across the entire image, and matching them with the model landmarks. Since no existing feature-to-landmark correlations can be used, it is necessary to perform 2D-3D point registration.
A pseudocode of the algorithm designed to re-initialize the features is reported in Algorithm 1. Initially, the algorithm calculates the convex hull encompassing all detected features and all model points projected according to the best state estimate, as shown in Figure 3a. Assuming that estimation errors are constrained during the pose tracking process, the algorithm randomly guesses the association between the feature set’s peripheral points and the closest model landmarks’ peripheral points, as shown in Figure 3b. The homography associated with these pairings is then computed and applied to all model landmarks. After this homographic transformation, it is reasonable to consider that the model landmarks and their respective image features closely overlap. Therefore, the matching of features to landmarks is computed using a nearest-neighbor association technique, as shown in Figure 3c. The peripheral points association, homographic transformation, and nearest-neighbor matching process are executed iteratively. Utilizing a RANSAC-like approach, the iteration that yields the most matched features is saved as the output of the routine. While this algorithm effectively tackles the 3D-2D registration problem with efficiency, it exhibits decreased robustness when faced with relative attitude errors exceeding 15 degrees. This implies its suitability primarily for the re-initialization of features during the pose tracking process.
Algorithm 1 Feature Re-initialization
Given I: set of image feature locations on the image plane
Given M: set of model landmark locations on the image plane
Given n_max: maximum number of feature points
Given i_max: maximum number of iterations
procedure Re-initialize Features(I, M, n_max)
  Identify boundary point sets I_b and M_b from sets I and M
  for i ← 1 to i_max do
    Randomly pair four coordinates from set I_b with four from set M_b
    Compute homography H mapping these pairs
    Project set M onto set I using homography H
    Identify the matching pairs between I and M using nearest neighbor association
    if number of matched pairs > n_max then
      result ← matched pairs
      n_max ← number of matched pairs
    end if
  end for
  Return result
end procedure
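A simplified Python sketch of Algorithm 1 is given below, assuming NumPy, OpenCV, and SciPy are available; the random pairing of hull points, the matching distance threshold, and the helper names are illustrative assumptions rather than the authors' implementation.

```python
# RANSAC-like sketch of the feature re-initialization (2D-3D registration) routine.
import numpy as np
import cv2
from scipy.spatial import cKDTree

def reinitialize_features(img_pts, model_pts_2d, n_best=0, max_iter=100, max_dist=5.0):
    """Match detected features (img_pts, Nx2) to model landmarks projected with the
    best state estimate (model_pts_2d, Mx2), keeping the iteration with most matches."""
    hull_img = cv2.convexHull(img_pts.astype(np.float32)).reshape(-1, 2)
    hull_mdl = cv2.convexHull(model_pts_2d.astype(np.float32)).reshape(-1, 2)
    best_matches = None
    for _ in range(max_iter):
        # Randomly pair four peripheral (convex hull) points from each set
        src = hull_mdl[np.random.choice(len(hull_mdl), 4, replace=False)]
        dst = hull_img[np.random.choice(len(hull_img), 4, replace=False)]
        H, _ = cv2.findHomography(src, dst)          # homography from the tentative pairs
        if H is None:
            continue
        # Warp all projected landmarks and match them to features (nearest neighbour)
        warped = cv2.perspectiveTransform(
            model_pts_2d.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 2)
        dist, idx = cKDTree(img_pts).query(warped)
        matches = [(i, j) for j, (d, i) in enumerate(zip(dist, idx)) if d < max_dist]
        if len(matches) > n_best:                    # keep the iteration with most matches
            n_best, best_matches = len(matches), matches
    return best_matches                              # list of (feature index, landmark index)
```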

3.3. Sensing Modalities Switch

For optimal performance, the navigation chain is designed to leverage both spectral bands whenever possible. Conversely, a sensing modality switch is introduced to automatically discard one of the two spectra when it lacks informative content. Thermal images are excluded during the re-initialization process if reliable results are not obtained (i.e., the number of features output by the routine is lower than the trigger for re-initialization). The same is done for visible images, but in addition a further check on the pixel intensity is performed. The visible images are used only if the number of pixels with an intensity above a threshold set at three times the standard deviation (3σ) of the image’s Gaussian noise exceeds a user-defined value. This value was calibrated by considering images useful if at least 10% of the region of interest, where the target is located, is illuminated. This additional condition is introduced because, in low-light conditions, the target loses clarity in visible images, while in thermal images, pixel intensity is preserved even under dynamic illumination conditions. If the switch discards a spectrum, periodic re-initialization tests occur to assess whether positive results can be obtained. If successful, the previously discarded spectrum may be reintroduced into the sensor fusion process, enhancing adaptability to changing environmental conditions. The spectra selection procedure is summarized in Algorithm 2.
Algorithm 2 Spectra Selection
Given VIS: current VIS image
Given x_VIS: VIS feature points
Given TIR: current TIR image
Given x_TIR: TIR feature points
Given px_intensity_threshold: pixel intensity threshold
Given px_number_threshold: number of activated pixels threshold
Given feature_threshold: feature points threshold
procedure Select_spectrum(VIS, TIR)
  flag_VIS ← True
  flag_TIR ← True
  if sum(n_pixels(VIS) > px_intensity_threshold) > px_number_threshold then
    x_VIS ← Re_initialize_Features
    if length(x_VIS) < feature_threshold then
      flag_VIS ← False
    end if
  else
    flag_VIS ← False
  end if
  x_TIR ← Re_initialize_Features
  if length(x_TIR) < feature_threshold then
    flag_TIR ← False
  end if
  Return x_VIS, x_TIR, flag_VIS, flag_TIR
end procedure
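The VIS usability test driving the switch can be sketched as follows; the region-of-interest mask and the noise standard deviation estimate are assumed inputs, and the 10% activation fraction reflects the calibration described above.

```python
# Hedged sketch of the VIS-image usability check used by the sensing-modality switch.
import numpy as np

def vis_image_is_usable(vis_image, roi_mask, noise_sigma, min_fraction=0.10):
    """Return True if enough ROI pixels stand above the 3-sigma image-noise level."""
    intensity_threshold = 3.0 * noise_sigma     # 3-sigma of the Gaussian image noise
    roi_pixels = vis_image[roi_mask]            # region of interest containing the target
    activated = np.count_nonzero(roi_pixels > intensity_threshold)
    return activated >= min_fraction * roi_pixels.size
```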

4. Navigation Filter

We employ an EKF for relative pose estimation. The most common estimation techniques, i.e., the EKF, the Unscented Kalman Filter (UKF), and the Particle Filter (PF), were taken into account [29]. The main driver for choosing the EKF is its computational cost. Adopting a linear model to describe the relative translational motion allows us to obtain an analytical expression for the translational part of the Jacobian matrix used to evaluate the State Transition Matrix (STM). This solution is extremely effective in terms of computational resources, thus driving our choice towards the EKF rather than more complex filters. The filter is designed to estimate the relative position, velocity, attitude, and angular rates of the target with respect to the chaser spacecraft. Since the observation model depends on both the relative position and attitude, the navigation filter needs to be coupled. The state vector is defined as:
$$\mathbf{x} = \left[\boldsymbol{\varrho}^{T},\ \dot{\boldsymbol{\varrho}}^{T},\ \mathbf{q}^{T},\ \boldsymbol{\omega}^{T}\right]^{T}$$
where ϱ and ϱ̇ are the relative position and velocity between the two spacecraft’s centers of mass, q is the relative quaternion, and ω is the relative angular velocity.

4.1. Coordinate Systems

The reference frames considered in this paper include the Earth-Centered Inertial (ECI) coordinate system, represented by I; a chaser-fixed Local Vertical, Local Horizon (LVLH) frame, denoted as L, where the x-component aligns with the spacecraft’s radial direction, the z-component aligns with the orbit’s angular momentum, and the y-axis completes the right-hand triad; and two body-fixed reference frames, aligned with the chaser’s and target’s principal inertia axes, designated as C and T, respectively. Without loss of generality, the chaser body frame C is assumed to coincide with the imaging sensors’ reference frame. Please notice that the LVLH frame is here defined on the chaser to exploit the knowledge of the chaser spacecraft’s true orbital motion, which is reflected in the mean motion parameter n.

4.2. Dynamical Model

The translational dynamical model selected for the propagation of the relative position is based on the Clohessy–Wiltshire [30] equations of unperturbed relative motion, which are reported in Equation (2).
$$\begin{aligned} \ddot{x} - 2n\dot{y} - 3n^{2}x &= 0 \\ \ddot{y} + 2n\dot{x} &= 0 \\ \ddot{z} + n^{2}z &= 0 \end{aligned}$$
where ϱ = [x, y, z] is the position of the target expressed in the chaser’s LVLH frame L, and n is the chaser’s mean motion. The attitude parametrization follows the formulation of a Multiplicative Extended Kalman Filter (MEKF) [31]. The filter propagates a three-element local attitude error a, formalized in Modified Rodrigues Parameters (MRPs), while keeping track of a reference quaternion. For the propagation of the attitude error, the differential equation detailed in [32] from quaternion kinematics is used, as presented in Equation (3).
$$\dot{\mathbf{a}} = -\frac{1}{2}\left[\boldsymbol{\omega}\times\right]\mathbf{a} + \boldsymbol{\omega}$$
The relative angular velocity vector ω between the target and the chaser, expressed in the chaser body frame, is defined as in Equation (4). Here, ω T and ω C represent the angular rates of the target and chaser in their respective rotating frames, and A C / T is the rotation matrix mapping the target body frame into the chaser body frame.
$$[\boldsymbol{\omega}]_{C} = A_{C/T}\,[\boldsymbol{\omega}_{T}]_{T} - [\boldsymbol{\omega}_{C}]_{C}$$
Equation (5) describes the dynamics of the relative angular rates, in which the inertia tensors of the chaser and target are labeled I C and I T , u C is the control torque of the chaser spacecraft and the subscript C has been omitted for brevity.
$$\dot{\boldsymbol{\omega}} = I_{C}^{-1}\Big\{ -I_{C}\,A_{C/T}\,I_{T}^{-1}\Big[A_{C/T}^{T}(\boldsymbol{\omega}+\boldsymbol{\omega}_{C})\times I_{T}\,A_{C/T}^{T}(\boldsymbol{\omega}+\boldsymbol{\omega}_{C})\Big] - \mathbf{u}_{C} + \boldsymbol{\omega}_{C}\times I_{C}\,\boldsymbol{\omega}_{C} - I_{C}\left(\boldsymbol{\omega}_{C}\times\boldsymbol{\omega}\right)\Big\}$$
The filter performs a discrete propagation of the states and the associated covariance by computing the State Transition Matrix (STM) of the ordinary differential equations reported in Equations (2), (3) and (5).
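As an illustration of the discrete propagation step, the sketch below builds the analytical Clohessy-Wiltshire state transition matrix for the translational states; this is a textbook closed form consistent with Equation (2) (radial, along-track, cross-track ordering), not necessarily the authors' implementation.

```python
# Illustrative sketch of the closed-form Clohessy-Wiltshire STM for one filter step dt.
import numpy as np

def cw_stm(n, dt):
    """6x6 CW state transition matrix for the state [x, y, z, vx, vy, vz]."""
    s, c = np.sin(n * dt), np.cos(n * dt)
    phi = np.zeros((6, 6))
    phi[:3, :3] = [[4 - 3 * c, 0, 0],
                   [6 * (s - n * dt), 1, 0],
                   [0, 0, c]]
    phi[:3, 3:] = [[s / n, 2 * (1 - c) / n, 0],
                   [-2 * (1 - c) / n, (4 * s - 3 * n * dt) / n, 0],
                   [0, 0, s / n]]
    phi[3:, :3] = [[3 * n * s, 0, 0],
                   [-6 * n * (1 - c), 0, 0],
                   [0, 0, -n * s]]
    phi[3:, 3:] = [[c, 2 * s, 0],
                   [-2 * s, 4 * c - 3, 0],
                   [0, 0, c]]
    return phi

# Example: propagate the translational state over one 1 s step of the 1 Hz chain:
# x_next = cw_stm(n, 1.0) @ x_prev
```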

4.3. Observation Model

The filter processes the projected positions of the target’s model landmarks onto the image plane, according to Equation (6)
$$\mathbf{p}_{i} = K\left(A_{C/T}\,\mathbf{P}_{i} + \boldsymbol{\varrho}\right)$$
in which P_i is the i-th feature point position in the target’s reference frame and K is the intrinsic camera calibration matrix. A more detailed derivation of the observation model is available in [33] for the interested reader. The target’s landmarks are the vertices of a reduced CAD model of its shape. In the case of the Tango target, this model contains 170 vertices.
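A minimal sketch of the projection in Equation (6) is shown below, assuming an ideal pinhole camera and including the perspective division to pixel coordinates; variable names are illustrative.

```python
# Sketch of the observation model: project target-frame landmarks onto the image plane.
import numpy as np

def project_landmarks(P_target, A_ct, rho, K):
    """P_target: (N,3) landmarks in the target body frame;
    A_ct: (3,3) rotation from target to camera frame;
    rho: (3,) relative position expressed in the camera frame;
    K: (3,3) intrinsic camera calibration matrix."""
    p_cam = (A_ct @ P_target.T).T + rho          # landmarks in the camera frame
    p_hom = (K @ p_cam.T).T                      # homogeneous image coordinates
    return p_hom[:, :2] / p_hom[:, 2:3]          # perspective division to pixels
```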

4.4. Measurement Noise Covariance Adaptation

For optimal functionality, the EKF necessitates precise modelling of the measurement noise covariance. This task holds particular significance in this research application, where a dynamic environment introduces time-varying measurement noise. To address this, an online adaptation of the measurement noise covariance matrix based on the filter’s residual is implemented. At each iteration, the covariance matrix R is updated according to the equation derived in [34] and reported in Equation (7).
$$R_{k} = \alpha\,R_{k-1} + (1-\alpha)\left(\boldsymbol{\epsilon}_{k}\,\boldsymbol{\epsilon}_{k}^{T} + H_{k}\,P_{k}^{+}\,H_{k}^{T}\right)$$
In Equation (7), ϵ_k represents the filter’s residual, H_k is the Jacobian of the measurement function, and P_k^+ is the updated covariance matrix of the states. A major advantage of this solution is that the noise covariance is adapted online for each of the target’s features individually, progressively identifying and assigning less significance to the least reliable ones. The main parameter governing the adaptation is the forgetting factor α, which ranges from 0 to 1. A higher α value leads to a slower adaptation of the R_k matrix, yet it provides more stability. In this work, the value is set to α = 0.8, which represents a good compromise between adaptation capabilities and stability of the R_k matrix.
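The adaptation step of Equation (7) can be sketched as follows; treating R as a per-feature block updated independently is an assumption consistent with the description above.

```python
# Sketch of the residual-based measurement noise covariance adaptation (Equation (7)).
import numpy as np

def adapt_measurement_noise(R_prev, residual, H, P_post, alpha=0.8):
    """Update the measurement noise covariance with forgetting factor alpha."""
    innovation_term = np.outer(residual, residual) + H @ P_post @ H.T
    return alpha * R_prev + (1.0 - alpha) * innovation_term
```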

4.5. Outliers Rejection

To increase the robustness of the filter, an outlier rejection routine is introduced, as explained in [35] and successfully employed in [32]. A null-hypothesis test is performed, assuming that the measurement noise is Gaussian distributed, using the Mahalanobis distance as a figure of merit. By defining the filter’s innovation at the k-th step as d_k and its covariance as S_k, the Mahalanobis distance γ_k can be computed as shown in Equation (8).
$$\gamma_{k} = m_{k}^{2} = \mathbf{d}_{k}^{T}\,S_{k}^{-1}\,\mathbf{d}_{k}$$
Under the assumption that the null hypothesis holds, γ_k follows a Chi-square distribution. Therefore, all the measurements with a squared Mahalanobis distance exceeding a threshold k are excluded, with k defined to ensure that Equation (9) holds.
$$P\left(\gamma_{k} > k\right) = 0.05$$
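A sketch of the gating test is given below; the chi-square quantile from SciPy provides the threshold corresponding to the 5% probability in Equation (9), with the degrees of freedom set by the innovation size (an assumption for illustration).

```python
# Hedged sketch of the Mahalanobis-distance (chi-square) outlier gating test.
import numpy as np
from scipy.stats import chi2

def is_outlier(innovation, S, significance=0.05):
    """Reject a measurement whose squared Mahalanobis distance exceeds the gate."""
    gamma = innovation.T @ np.linalg.solve(S, innovation)    # d^T S^-1 d
    gate = chi2.ppf(1.0 - significance, df=innovation.size)  # chi-square threshold
    return gamma > gate
```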

5. Simulation Environment

5.1. Image Rendering

This work employs the open-source POV-Ray ray tracer [36] to generate photorealistic, spaceborne-validated VIS images, as described in [37]. The VIS rendering tool used for this work has been validated against the SPEED dataset [38], which in turn has been validated against images of the PRISMA mission. It is worth noticing that different illumination conditions can be easily simulated by the user by properly selecting the Sun position in the ECI frame, as well as the chaser and target absolute coordinates in the ECI frame. The Sun represents the only light source of the POV-Ray scene, and thus it is representative of a realistic scenario. Concerning TIR images, Blender 2.93 has been preferred as the main image rendering software, exploiting a tool internally developed by the ASTRA research group. The full description of the thermal-infrared rendering tool is available in [39,40]. The tool exploits a validated thermal simulation to compute the temperature field of the Tango spacecraft. This temperature field is then processed to obtain the heat flux captured by the camera, exploiting the relative pose between the camera and the target to compute the view factors. It is then possible to obtain the thermal image based on the assumption that the output Digital Number (DN) produced by the sensor is proportional to the heat flux received by the pixels. Furthermore, a thorough description of the characteristic noise of a thermal camera, which has been used for this work, is available in [41]. Please notice that the validation of TIR images is significantly more challenging due to the lack of TIR datasets. However, the model used for the thermal simulation of the Tango spacecraft has been validated against a thermovacuum experiment within the context of CubeSat testing and integration [42]. It is also acknowledged that the acquisition of real thermal-infrared images requires a dedicated and calibrated facility, which is not currently available. The images produced using such tools are reported in Figure 4, where the VIS images (Left) are compared with the TIR images (Right) for a simplified Tango spacecraft model and the camera parameters reported in Table 1.
For simplicity, the VIS and TIR camera centers are assumed to be coincident with the chaser spacecraft’s center of mass, and they always point towards the target. Concerning the noise level of the synthetically generated images, VIS images are postprocessed by adding white Gaussian noise with σ² = 0.0022 and applying a Gaussian blur with σ² = 1 and zero mean. The noise parameters have been selected equal to those in [43]. With regards to thermal imaging sensors, the research presented in [44] demonstrated that microbolometers are mostly affected by two sources of noise: thermal noise and 1/f noise. The former is a characteristic of all electronic devices and it is modeled as an additive white Gaussian noise, assuming the same characteristics adopted for VIS images. The 1/f noise, also referred to as flicker noise or pink noise, is instead dominant at low frequencies, as demonstrated by [45]. An additive pink noise can be numerically obtained by applying a suitably shaped low-pass filter to a white Gaussian noise. A two-dimensional Fourier transform is used to decompose the white noise into the frequency domain. The amplitude A of each frequency component is then scaled such that the higher the frequency, the lower the amplitude, using the following relationship:
$$A' = \frac{A}{\left(f_{x}^{2} + f_{y}^{2}\right)^{\alpha/2}}$$
where f_x and f_y are the spatial frequencies and α is an exponent which determines the spectral slope (α = 1 for flicker noise). The inverse Fourier transform is then applied to convert the filtered result back to the spatial domain. The variance of the white noise to be filtered is here assumed to be σ² = 0.0022. As for the VIS images, the TIR images are also blurred with a Gaussian blur with σ = 1 and zero mean.
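The pink-noise synthesis described above can be sketched as follows; the image shape and the handling of the zero-frequency component are illustrative assumptions.

```python
# Sketch of 1/f (pink) noise generation by frequency-domain shaping of white noise.
import numpy as np

def pink_noise(shape, sigma2=0.0022, alpha=1.0):
    """Generate additive pink noise by scaling a white-noise spectrum as 1/(fx^2+fy^2)^(alpha/2)."""
    white = np.random.normal(0.0, np.sqrt(sigma2), shape)
    spectrum = np.fft.fft2(white)
    fx = np.fft.fftfreq(shape[1])
    fy = np.fft.fftfreq(shape[0])
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    f2[0, 0] = 1.0                                 # avoid division by zero at DC
    filtered = spectrum / f2 ** (alpha / 2.0)      # attenuate higher frequencies
    filtered[0, 0] = 0.0                           # remove the DC component
    return np.real(np.fft.ifft2(filtered))
```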

5.2. Reference Dynamics

The reference trajectory is computed a priori. The translational motion is computed according to a perturbed restricted two-body model which accounts for the $J_2$ effect, the most dominant perturbation for satellites flying in LEO. Furthermore, we consider the effects of solar radiation pressure and atmospheric drag. The former contribution is computed according to a simplified cannonball model, while the latter adopts an exponential model to describe the atmospheric density. The rotational motion is instead described through an unperturbed free motion, which is a valid assumption over the limited simulation time of this work. Furthermore, an uncertainty of 15% on the principal moments of inertia of the target is considered to simulate an imperfect knowledge of the target spacecraft. This situation reflects a scenario in which the shape of an uncooperative target is roughly known, perhaps from a previous inspection phase, yet its inertia properties may not be certain. To ensure the target remains consistently within the cameras’ field of view, a PID controller is employed to control the chaser’s attitude, guaranteeing accurate pointing. The target rotates with an angular velocity of 0.25 deg/s around each axis. The target is assumed to be in a nearly circular equatorial orbit, while the relative initial conditions are selected to obtain a planar and quasi-bounded motion of the chaser with respect to the target. The corresponding relative translational initial conditions are:
$$\boldsymbol{\varrho}_{0} = \begin{bmatrix} 6.60 \times 10^{1} \\ 1.47 \times 10^{1} \\ 0.00 \end{bmatrix}\ \mathrm{m}, \qquad \dot{\boldsymbol{\varrho}}_{0} = \begin{bmatrix} 1.70 \times 10^{-1} \\ 1.49 \times 10^{-3} \\ 0.00 \end{bmatrix}\ \mathrm{m\,s^{-1}}$$
Figure 5 reports the evolution of the chaser-target intersatellite distance (right) for the considered trajectory.
The metrics used to assess the estimator performance are the absolute and relative position Knowledge Error (KE) and the attitude KE. The position KE is defined as in Equations (12) and (13), where the hat symbol indicates estimated values.
$$e_{\varrho} = \left\| \hat{\boldsymbol{\varrho}} - \boldsymbol{\varrho} \right\|$$
$$e_{\varrho,\mathrm{rel}} = \frac{e_{\varrho}}{\left\| \boldsymbol{\varrho} \right\|}$$
By defining $\mathbf{q}_{err}$ and $q_{err,4}$ as the vector and scalar parts of the quaternion error (i.e., the quaternion representing the discrepancy between the true and estimated attitude), the attitude KE is defined as in Equation (14).
$$e_{q} = 2\,\mathrm{atan2}\!\left(\left\| \mathbf{q}_{err} \right\|,\ q_{err,4}\right)$$
The mean and associated standard deviation of these figures of merit over a period of time are computed as:
$$\mu_{e} = \frac{1}{N}\sum_{i=1}^{N} e_{i}, \qquad \sigma_{e} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(e_{i} - \mu_{e}\right)^{2}}$$
where N represents the number of realizations.
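For reference, the knowledge-error metrics can be computed as in the sketch below; the quaternion component ordering (vector part first, scalar last) is an assumption made only for this illustration.

```python
# Sketch of the knowledge-error figures of merit and their Monte Carlo statistics.
import numpy as np

def position_ke(rho_est, rho_true):
    """Absolute and relative position knowledge error."""
    e = np.linalg.norm(rho_est - rho_true)
    return e, e / np.linalg.norm(rho_true)

def attitude_ke(q_err):
    """Attitude knowledge error (rad) from the error quaternion [qv, qs]."""
    return 2.0 * np.arctan2(np.linalg.norm(q_err[:3]), q_err[3])

def monte_carlo_stats(errors):
    """Mean and standard deviation of a figure of merit over N realizations."""
    errors = np.asarray(errors)
    return errors.mean(), errors.std()
```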

5.3. Filter Initialization

The filter’s tuning parameters are detailed in the present section. Each submatrix of both the state covariance matrix and the process noise matrix is assumed to be diagonal and isotropic; the adopted values are reported in Table 2 and Table 3, respectively. Please notice that these settings are kept throughout all the simulations presented in the remainder of the paper. The process noise covariance adopted in this work is meant to compensate for the imperfect dynamical model of the filter and it has been tuned accordingly.
Since the algorithm does not include a pose acquisition routine, such as the ones presented in [46] or [47], the initial pose information is randomly generated by sampling within the 3 σ bounds of the initial covariance. Given this initial pose information, the re-initialization algorithm, described in Algorithm 1, is executed to retrieve the first matching information. The navigation chain is designed to work at 1 Hz.

6. Results

To assess the advantages and drawbacks of the three different sensing modalities, two different test cases are introduced, as reported in Table 4. The first test case represents an ideal situation, in which the Sun Aspect Angle (SAA) is forced to be near zero to achieve favourable illumination conditions. It is also employed as a reference for the subsequent simulations, in which the SAA increases throughout the trajectory. It is worth noticing that the thermal-infrared images do not change, since they do not rely on external illumination sources. The temperature profile of the spacecraft has been computed a priori, and it is assumed to be constant during the short simulation time.
For each test case, the results of three different sensing strategies are computed:
  • VIS-only
  • TIR-only
  • VIS-TIR fusion with sensor handover
The filter settings shown in the previous section are kept constant for both test cases, and a total of 250 Monte Carlo runs are performed for each sensing strategy.

6.1. Test Case n.1

Representative VIS images acquired at t = 0 and t = 1000 s, respectively, are shown in Figure 6.
The averaged results for the relative position (left) and attitude (right) errors are shown in Figure 7. It can be immediately noticed that there is hardly any difference between the combined (VIS and TIR) sensing strategy and the VIS-only sensing mode. This behavior is mainly due to two reasons. The first one is that the trigger to discard VIS images is never activated; the second one is that the filter tends to prioritize the most accurate source of measurements, which in this case is the visible spectrum, within the filter update step. The quantitative results are reported in Table 5. Once again, it can be noticed that the VIS and the combined sensing modalities achieve similar results. It should be acknowledged that the attitude estimation error achieved by the combined sensing modality is the best among the three. Furthermore, the uncertainty of the estimation error tends to decrease when relying on both cameras, even though the number of Monte Carlo runs is the same. The TIR-based navigation strategy instead shows a higher position and attitude error, together with a higher variance associated with these results. This is due to the fact that the reduced sensor size and high noise of the thermal images negatively affect the accuracy of the feature detection and tracking.
To further analyze the presented results, we report in Figure 8 the number of matched feature pairs and the average measurement noise covariance value that the online adaptation assigns to each camera. The reported value has been averaged considering all the features detected at each step to exclude outliers, and then normalized over the camera size, since the VIS and TIR cameras have different array sizes. It can be immediately noticed that the number of matched feature pairs is comparable between VIS and TIR images. This result can be expected due to the good illumination conditions of this test set. It can also be noticed that there is a drop in the number of matched features between t = 1000 s and t = 1500 s, which coincides with an increase in the relative pose estimation error. Since this behaviour is present across both spectra, it can be assumed that it is linked to the relative target-chaser attitude, which does not allow for a higher number of matches. Concerning the mean covariance value assigned to the different spectra, it is immediate to notice that the thermal features are generally noisier than the visible ones. This result is expected due to the reduced array size and higher noise of thermal-infrared cameras. Furthermore, whenever the covariance adaptation produces an increase in the measurement noise covariance, the overall state estimation is affected, as in the period between t = 2000 s and t = 2500 s. It can also be acknowledged that whenever the number of tracked features decreases, the covariance adaptation is less stable, since outlier matches may have a greater influence on the overall noise estimation.

6.2. Test Case n.2

The second testing campaign is meant to highlight the flexibility of the developed navigation architecture by enabling the sensing modality switch. To stress the navigation chain, the simulation lasts 3000 s, during which the illumination conditions change dynamically. In this simulation, the Sun aspect angle increases progressively, until the target enters an eclipse at approximately t = 900 s, forcing the switch to rely only on TIR images. The evolution of the SAA is portrayed in Figure 9, while two representative VIS images acquired at t = 0 and t = 1000 s are shown in Figure 10.
The averaged results over the Monte Carlo runs are shown in Figure 11 for the three different sensing modalities, while Figure 12 reports the results for the combined VIS-TIR sensing strategy, in which the utilization of both cameras is highlighted in green, and the periods during which the algorithm relies on the TIR camera only are highlighted in red.
It is evident from the results presented in Figure 11 that visible images alone cannot provide a reliable navigation solution when the SAA increases. The navigation error grows until divergence before 1000 s, since features can no longer be detected and tracked. On the other hand, when the navigation chain relies solely on the thermal imaging measurements, the errors present a more erratic behaviour and the pose estimation error tends to increase. This behavior is evident when comparing the results achieved through the sensor handover controlled by the switch, as illustrated in Figure 12, with those derived from using the thermal camera independently, as depicted in Figure 11. Notably, both the position and attitude errors are consistently lower only when the algorithm manages to fuse the visible and thermal information (green area of Figure 12), while they are coincident when the thermal camera is used independently. Analyzing the average errors presented in Table 6, it becomes evident that, although there is a slight decrease in attitude error, the benefits of employing an adaptive sensing modality solution are concentrated in the intervals of favorable illumination and do not extend to the overall performance over extended periods of dynamical illumination conditions. Figure 11 depicts the estimation error when using the visible spectrum as a standalone solution. As anticipated, the pose estimation abruptly diverges when the image processing pipeline fails to accurately identify and match the target’s features due to image degradation. This further underscores the critical role of the thermal camera in ensuring the robustness of the navigation chain.
In Figure 11 it can be observed that in both the thermal and sensor-handover scenarios the attitude error spikes at certain time steps. This reduction in accuracy occurs when the camera orientation coincides with one of the principal planes of the target, leading to a loss of information regarding its three-dimensional shape. Although this behavior of the navigation chain requires further investigation of the image processing techniques, it does not significantly impact the pose estimation capability over extended periods.
As for the previous test case, we perform an analysis of the number of matched features and their associated measurement noise covariance. Figure 13 reports the number of matched feature pairs and the average measurement noise covariance value assigned to each camera. First of all, it can be noticed that the number of matched features for the visible spectrum is always lower than the number of thermal-infrared features. In addition, when the visible camera is re-introduced within the navigation chain, its initial covariance value is higher, and it decreases as the feature detection and tracking become reliable again. Despite this difference, the accuracy of VIS-only navigation in favourable illumination conditions is higher than in the thermal-infrared case. This is due to the fact that the most important factor is the associated feature noise, which is lower in the case of VIS images. The only time during which VIS features are noisier is in the proximity of eclipses, and the online covariance adaptation assigns a high measurement noise to such features. As for the number of matched thermal features, it can be noticed that the spikes in the pose estimation error are linked to those periods in which the number of matched feature pairs decreases. This behaviour implies that, due to the lower detection and tracking accuracy, thermal-based navigation needs to rely on a higher number of features to obtain accurate results. As expected, the measurement noise covariance of TIR features is higher than that of VIS ones, since the detection and tracking process is less accurate. It is also necessary to acknowledge that towards the end of the simulation, the measurement noise covariance of TIR features tends to be higher, which is immediately reflected in the attitude error. Finally, the attitude estimation error is more sensitive than the position estimation error, since it is tightly linked to the behaviour of the measurements.

7. Conclusions

We present a navigation chain for estimating the relative pose between a chaser and an uncooperative target using combined visible and thermal sensing. The proposed navigation strategy allows for a satisfactory estimation of the relative state and relative angular velocity, and it highlights the two novel aspects of this work: the additional use of a thermal-infrared camera to complement the existing sensor in the relative navigation task, and sensor handover between cameras operating in different parts of the spectrum. The numerical evaluation of the presented approach confirms that the information added by the thermal camera improves the robustness of the navigation architecture in a challenging scenario with dynamical illumination conditions. However, it is important to highlight that the accuracy of the navigation solution employing only thermal-infrared images is lower than that obtained by using visible images in favourable illumination conditions. This outcome is due to two major factors: first, TIR images have a smaller array size and a higher noise level than VIS ones, thus affecting the feature detection and tracking process; second, the employed IP algorithms were initially developed for VIS images only, and thus they perform worse on TIR images. In addition, an important new research contribution of the paper is the demonstration of a cross-spectral sensor handover, meaning that the navigation filter employs combined optical and thermal sensing for part of the orbit and autonomously switches to a thermal-only navigation mode when an eclipse occurs. The outcome of this research represents a step forward towards a flexible navigation strategy capable of dealing with any illumination conditions, enabling autonomous operations with uncooperative resident space objects. A further step in this research field requires the development and tailoring of image processing techniques for thermal-infrared images to enhance the accuracy of the navigation solution.

Author Contributions

Conceptualization, G.L.C. and M.B. (Massimiliano Bussolino), M.B. (Michele Bechini), M.Q. and M.L.; methodology, G.L.C. and M.B. (Massimiliano Bussolino); software, G.L.C., M.B. (Massimiliano Bussolino), M.B. (Michele Bechini) and M.Q.; investigation, G.L.C. and M.B. (Massimiliano Bussolino); writing—original draft preparation, G.L.C. and M.B. (Massimiliano Bussolino); writing—review and editing, G.L.C., M.B. (Massimiliano Bussolino), M.B. (Michele Bechini) and M.Q.; visualization, G.L.C., M.B. (Massimiliano Bussolino), M.B. (Michele Bechini) and M.Q.; supervision, M.L.; project administration, M.L.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN	Convolutional Neural Network
DoF	Degrees of Freedom
ECI	Earth Centered Inertial
EKF	Extended Kalman Filter
FOV	Field of View
IP	Image Processing
LEO	Low Earth Orbit
LK	Lucas-Kanade
LVLH	Local Vertical Local Horizon
KE	Knowledge Error
MEKF	Multiplicative Extended Kalman Filter
MKE	Mean Knowledge Error
MRP	Modified Rodrigues Parameters
ORB	Oriented FAST and Rotated BRIEF
PF	Particle Filter
RANSAC	Random Sample Consensus
SAA	Sun Aspect Angle
STM	State Transition Matrix
TIR	Thermal-Infrared
UKF	Unscented Kalman Filter
VIS	Visible

References

  1. Hussain, K.F.; Thangavel, K.; Gardi, A.; Sabatini, R. Passive Electro-Optical Tracking of Resident Space Objects for Distributed Satellite Systems Autonomous Navigation. Remote Sens. 2023, 15, 1714. [Google Scholar] [CrossRef]
  2. Pasqualetto Cassinis, L.; Fonod, R.; Gill, E.; Ahrns, I.; Gil-Fernández, J. Evaluation of tightly- and loosely-coupled approaches in CNN-based pose estimation systems for uncooperative spacecraft. Acta Astronaut. 2021, 182, 189–202. [Google Scholar] [CrossRef]
  3. Moghaddam, B.M.; Chhabra, R. On the guidance, navigation and control of in-orbit space robotic missions: A survey and prospective vision. Acta Astronaut. 2021, 184, 70–100. [Google Scholar] [CrossRef]
  4. Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations. Prog. Aerosp. Sci. 2017, 93, 53–72. [Google Scholar] [CrossRef]
  5. Ardaens, J.S.; Gaias, G. Flight demonstration of spaceborne real-time angles-only navigation to a noncooperative target in low earth orbit. Acta Astronaut. 2018, 153, 367–382. [Google Scholar] [CrossRef]
  6. Castellini, F.; Antal-Wokes, D.; Pardo de Santayana, R.; Vantournhout, K. Far Approach Optical Navigation and Comet Photometry for the Rosetta Mission. In Proceedings of the 25th International Symposium on Space Flight Dynamics, Munich, Germany, 19–23 October 2015; pp. 1–8. [Google Scholar]
  7. Fehse, W. Rendezvous with and Capture/Removal of Non-Cooperative Bodies in Orbit: The Technical Challenges. J. Space Saf. Eng. 2014, 1, 17–27. [Google Scholar] [CrossRef]
  8. Ogawa, N.; Terui, F.; Yasuda, S.; Matsushima, K.; Masuda, T.; Sano, J.; Hihara, H.; Matsuhisa, T.; Danno, S.; Yamada, M.; et al. Image-based Autonomous Navigation of Hayabusa2 using Artificial Landmarks: Design and In-Flight Results in Landing Operations on Asteroid Ryugu. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020; pp. 1–11. [Google Scholar] [CrossRef]
  9. Yilmaz, O.; Aouf, N.; Majewski, L.; Sanchez-Gestido, M. Evaluation of Feature Detectors for Infrared Imaging in View of Active Debris Removal. In Proceedings of the 7th European Conference on Space Debris, Darmstadt, Germany, 17–21 April 2017; pp. 1–8. [Google Scholar]
  10. Shi, J.F.; Ulrich, S.; Ruel, S. Spacecraft Pose Estimation using Principal Component Analysis and a Monocular Camera. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Grapevine, TX, USA, 9–13 January 2017; pp. 1–24. [Google Scholar] [CrossRef]
  11. Vitiello, F.; Causa, F.; Opromolla, R.; Fasano, G. Radar/visual fusion with fuse-before-track strategy for low altitude non-cooperative sense and avoid. Aerosp. Sci. Technol. 2024, 146, 108946. [Google Scholar] [CrossRef]
  12. Vlaminck, M.; Diels, L.; Philips, W.; Maes, W.; Heim, R.; Wit, B.D.; Luong, H. A Multisensor UAV Payload and Processing Pipeline for Generating Multispectral Point Clouds. Remote Sens. 2023, 15, 1524. [Google Scholar] [CrossRef]
  13. Saidi, S.; Idbraim, S.; Karmoude, Y.; Masse, A.; Arbelo, M. Deep-Learning for Change Detection Using Multi-Modal Fusion of Remote Sensing Images: A Review. Remote Sens. 2024, 16, 3852. [Google Scholar] [CrossRef]
  14. Civardi, G.L.; Bechini, M.; Quirino, M.; Margherita, P.; Alessandro, C.; Lavagna, M. Generation of fused visible and thermal-infrared images for uncooperative spacecraft proximity navigation. Adv. Space Res. 2023, 73, 5501–5520. [Google Scholar] [CrossRef]
  15. Bechini, M.; Civardi, G.L.; Quirino, M.; Colombo, A.; Lavagna, M. Robust Monocular Pose Initialization via Visual and Thermal Image Fusion. In Proceedings of the 73rd International Astronautical Congress (IAC 2022), International Astronautical Federation, IAF, Paris, France, 18–22 September 2022; pp. 1–15. [Google Scholar]
  16. Colombo, A.; Civardi, G.L.; Bechini, M.; Quirino, M.; Lavagna, M. VIS-TIR cameras data fusion to enhance relative navigation during In Orbit Servicing operations. In Proceedings of the 73rd International Astronautical Congress (IAC 2022), International Astronautical Federation, IAF, Paris, France, 18–22 September 2022; pp. 1–15. [Google Scholar]
  17. Bechini, M. Monocular Vision for Uncooperative Targets Through AI-Based Methods and Sensors Fusion. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2024. [Google Scholar]
  18. Deodeshmukh, V.; Chaudhuri, S.; Roy, S.D. Cooperative infrared and visible band tracking. In Proceedings of the International Conference on Applied Pattern Recognition, Mallorca, Spain, 4–6 June 2003; pp. 402–405. [Google Scholar]
  19. Piccinin, M.; Civardi, G.L.; Quirino, M.; Lavagna, M. Multispectral Imaging Sensors for Asteroids Relative Navigation. In Proceedings of the 72nd International Astronautical Congress (IAC 2021), Dubai, United Arab Emirates, 25–29 October 2021; pp. 1–12. [Google Scholar]
  20. Civardi, G.L.; Piccinin, M.; Lavagna, M. Small Bodies IR Imaging for Relative Navigation and Mapping Enhancement. In Proceedings of the 7th IAA Planetary Defense Conference, Wien, Austria, 26–30 April 2021; pp. 1–6. [Google Scholar]
  21. Hall, I.; Jinglang, F.; Vasile, M. AI-Based Sensor Fusion for Robust Pose Estimation and Autonomous Navigation of Spacecraft Mission to Asteroids. In Proceedings of the International Astronautical Congress: IAC Proceedings, Milan, Italy, 14–18 October 2024; pp. 1–15. [Google Scholar]
  22. Magnabosco, M.; Breckon, T.P. Cross-spectral visual simultaneous localization and mapping (SLAM) with sensor handover. Robot. Auton. Syst. 2013, 61, 195–208. [Google Scholar] [CrossRef]
  23. Maestrini, M.; Lizia, P.D. COMBINA: Relative Navigation for Unknown Uncooperative Resident Space Object. In Proceedings of the AIAA SCITECH 2022 Forum, San Diego, CA, USA, 3–7 January 2022. [Google Scholar] [CrossRef]
  24. Pesce, V.; Opromolla, R.; Sarno, S.; Lavagna, M.; Grassi, M. Autonomous relative navigation around uncooperative spacecraft based on a single camera. Aerosp. Sci. Technol. 2019, 84, 1070–1080. [Google Scholar] [CrossRef]
  25. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  26. Losi, L. Visual Navigation for Autonomous Planetary Landing. Master’s Thesis, Politecnico di Milano, Milan, Italy, 2015. [Google Scholar]
  27. Labo, S. Infrared Vision-Based Navigation for Planetary Landing. Master’s Thesis, Politecnico di Milano, Milan, Italy, 2024. [Google Scholar]
  28. Lucas, B.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI’81), Vancouver, BC, Canada, 24–28 August 1981; Volume 81, pp. 1–24. [Google Scholar]
  29. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  30. Clohessy, W.H.; Wiltshire, R.S. Terminal Guidance System for Satellite Rendezvous. J. Aerosp. Sci. 1960, 27, 653–658. [Google Scholar] [CrossRef]
  31. Crassidis, J.L.; Junkins, J.L. Optimal Estimation of Dynamic Systems; Chapman and Hall/CRC: Boca Raton, FL, USA, 2004; Chapter 7. [Google Scholar] [CrossRef]
  32. Tweddle, B.E.; Saenz-Otero, A. Relative Computer Vision-Based Navigation for Small Inspection Spacecraft. J. Guid. Control Dyn. 2015, 38, 969–978. [Google Scholar] [CrossRef]
  33. Bussolino, M.; Piccinin, M.; Civardi, G.; Lavagna, M. Multispectral Vision Based Relative Navigation to Enhance Space Debris Proximity Operations. In Proceedings of the International Astronautical Congress: IAC Proceedings, Baku, Azerbaijan, 2–6 October 2023; pp. 1–15. [Google Scholar]
  34. Wang, J. Stochastic Modeling for Real-Time Kinematic GPS/GLONASS Positioning. Navigation 1999, 46, 297–305. [Google Scholar] [CrossRef]
  35. Chang, G. Robust Kalman filtering based on Mahalanobis distance as outlier judging criterion. J. Geod. 2014, 88, 391–401. [Google Scholar] [CrossRef]
  36. Plachetka, T. POV Ray: Persistence of vision parallel raytracer. In Proceedings of the Spring Conference on Computer Graphics, Budmerice, Slovakia, 23–25 April 1998; Volume 123, p. 129. [Google Scholar]
  37. Bechini, M.; Lavagna, M.; Lunghi, P. Dataset generation and validation for spacecraft pose estimation via monocular images processing. Acta Astronaut. 2023, 204, 358–369. [Google Scholar] [CrossRef]
  38. Kisantal, M.; Sharma, S.; Park, T.H.; Izzo, D.; Märtens, M.; D’Amico, S. Spacecraft Pose Estimation Dataset (SPEED). Zenodo 2019. [CrossRef]
  39. Quirino, M. Novel Thermal Images Generator for Autonomous Space Proximity Operations. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2023. [Google Scholar]
  40. Quirino, M.; Lavagna, M.R. Spacecraft and Asteroid Thermal Image Generation for Proximity Navigation and Detection Scenarios. Appl. Sci. 2024, 14, 5377. [Google Scholar] [CrossRef]
  41. Bianchi, L. Synthetic Thermal Images Generation Towards Enhanced Close Proximity Navigation in Space. Master’s Thesis, Politecnico di Milano, Milan, Italy, 2023. [Google Scholar]
  42. Quirino, M.; Sciarrone, G.; Piazzolla, R.; Fuschino, F.; Evangelista, Y.; Morgante, G.; Guilizzoni, M.; Marocco, L.; Silvestrini, S.; Fiore, F.; et al. HERMES CubeSat Payload Thermal Balance Test and Comparison with Finite Volume Thermal Model. Appl. Sci. 2023, 13, 5452. [Google Scholar] [CrossRef]
  43. Kisantal, M.; Sharma, S.; Park, T.H.; Izzo, D.; Märtens, M.; D’Amico, S. Satellite Pose Estimation Challenge: Dataset, Competition Design, and Results. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4083–4098. [Google Scholar] [CrossRef]
  44. Gao, Y.t.; Chen, H.m.; Xu, Y.; Sun, X.n.; Chang, B.k. Noise research of microbolometer array under temperature environment. In Proceedings of the International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Infrared Imaging and Applications, International Society for Optics and Photonics, SPIE, Beijing China, 24–26 May 2011; Volume 8193, pp. 222–228. [Google Scholar] [CrossRef]
  45. Brageot, E.; Groussin, O.; Lamy, P.; Reynaud, J.L. Experimental study of an uncooled microbolometer array for thermal mapping and spectroscopy of asteroids. Exp. Astron. 2014, 38, 381–400. [Google Scholar] [CrossRef]
  46. Sharma, S.; D’Amico, S. Comparative assessment of techniques for initial pose estimation using monocular vision. Acta Astronaut. 2016, 123, 435–445. [Google Scholar] [CrossRef]
  47. Bechini, M.; Gu, G.; Lunghi, P.; Lavagna, M. Robust spacecraft relative pose estimation via CNN-aided line segments detection in monocular images. Acta Astronaut. 2024, 215, 20–43. [Google Scholar] [CrossRef]
Figure 1. VIS-TIR coupling strategies.
Figure 2. Navigation chain architecture.
Figure 3. Visualization of the major steps of the re-initialization process. (a) Landmarks vs. feature detection; (b) Convex hull visualization; (c) Landmarks-to-features matching.
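Figure 3 summarizes the re-initialization steps: features are detected in the image, the convex hull of the detections bounds the target, and the known model landmarks are matched only to features inside that region. The sketch below is one possible, simplified rendition of the hull gating and nearest-neighbour association; the function name, input conventions, and rejection threshold are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def reinitialize_matches(projected_landmarks, detected_features, max_px_dist=10.0):
    # Gate: keep only the model landmarks whose projection falls inside the
    # convex hull of the detected image features (hull membership via Delaunay).
    hull = Delaunay(detected_features)                    # (M, 2) pixel coordinates
    inside = hull.find_simplex(projected_landmarks) >= 0  # -1 means outside the hull
    candidates = np.flatnonzero(inside)

    # Association: pair each gated landmark with its nearest detected feature,
    # rejecting pairs farther apart than an illustrative pixel threshold.
    tree = cKDTree(detected_features)
    dist, idx = tree.query(projected_landmarks[candidates], k=1)
    keep = dist < max_px_dist
    return list(zip(candidates[keep].tolist(), idx[keep].tolist()))
```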
Figure 4. VIS synthetic images (Left) and corresponding TIR synthetic images (Right).
Figure 5. Relative target-chaser distance.
Figure 6. VIS images acquired at t = 0 s (Left) and t = 1000 s (Right), respectively. Test case n.1.
Figure 7. Average position KE (Left) and attitude KE (Right) over 250 simulations. Test case n.1.
Figure 8. Number of matched feature pairs (Left) and mean feature measurement noise covariance normalized over image size (Right). Test case n.1.
Figure 9. Sun aspect angle evolution for test case n.2.
Figure 10. VIS images acquired at t = 0 s (Left) and t = 1000 s (Right), respectively. Test case n.2.
Figure 11. Average position KE (Left) and attitude KE (Right) over 250 simulations. Test case n.2.
Figure 12. Average position KE (Left) and attitude KE (Right) over 250 simulations. In the red band, only the TIR camera is used due to poor illumination in the VIS spectrum.
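Figure 12 illustrates the handover behaviour: when VIS imagery becomes unusable (red band), the filter is fed by TIR measurements only. A minimal sketch of such a source-selection rule is given below; the match-count threshold and the eclipse flag are illustrative assumptions, not the exact criterion adopted in the paper.

```python
def select_measurement_sources(n_vis_matches, n_tir_matches,
                               min_matches=8, eclipse=False):
    # Feed the EKF with measurements from a camera only when it yields enough
    # matched feature pairs; drop VIS entirely during eclipse or poor illumination.
    use_vis = (not eclipse) and n_vis_matches >= min_matches
    use_tir = n_tir_matches >= min_matches
    if not (use_vis or use_tir):
        return []  # no update this step; the EKF only propagates the state
    return [name for name, active in (("VIS", use_vis), ("TIR", use_tir)) if active]
```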
Figure 13. Number of matched feature pairs (Left) and mean feature measurement noise covariance (Right). Test case n.2.
Table 1. VIS and TIR camera parameters.

         Array Size [px]    FoV [deg]    Focal Length [mm]
VIS      1024 × 1024        [14, 14]     20
TIR      512 × 512          [14, 14]     20
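From the parameters in Table 1, the focal length expressed in pixels and the implied pixel pitch follow from the pinhole model. The sketch below assumes square pixels, a centred principal point, and no distortion (none of which are stated in the table) and builds the corresponding intrinsic matrices for illustration only.

```python
import numpy as np

def pinhole_intrinsics(n_px, fov_deg, f_mm):
    # Focal length in pixels from the half field of view, then the pixel pitch
    # implied by the 20 mm focal length of Table 1.
    f_px = (n_px / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    pitch_um = f_mm * 1e3 / f_px
    K = np.array([[f_px, 0.0, n_px / 2.0],
                  [0.0, f_px, n_px / 2.0],
                  [0.0, 0.0, 1.0]])
    return K, pitch_um

K_vis, pitch_vis = pinhole_intrinsics(1024, 14.0, 20.0)  # ~4170 px, ~4.8 µm pitch
K_tir, pitch_tir = pinhole_intrinsics(512, 14.0, 20.0)   # ~2085 px, ~9.6 µm pitch
```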
Table 2. Initial state covariance settings.

Parameter    Value         Unit
P_0,ϱ        1 × 10⁰       m²
P_0,ϱ̇        1 × 10⁻²      m²/s²
P_0,a        5 × 10⁻³      -
P_0,ω        1 × 10⁻¹      rad²/s²
Table 3. Process noise settings.

Parameter    Value          Unit
σ_ϱ          1.41 × 10⁻¹    m
σ_ϱ̇          1.00 × 10⁻³    m/s²
σ_a          9.50 × 10⁻³    -
σ_ω          1.00 × 10⁻³    rad/s
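The filter settings in Tables 2 and 3 translate into the EKF initial covariance and process noise matrices. The snippet below is a minimal sketch assuming a 12-element state ordered as relative position, relative velocity, attitude parameters, and angular velocity, with a purely diagonal structure; the actual filter construction may differ.

```python
import numpy as np

# Diagonal initial covariance from Table 2 (state ordering is an assumption).
P0 = np.diag(np.concatenate([
    np.full(3, 1.0e0),    # relative position [m^2]
    np.full(3, 1.0e-2),   # relative velocity [m^2/s^2]
    np.full(3, 5.0e-3),   # attitude parameters [-]
    np.full(3, 1.0e-1),   # angular velocity [rad^2/s^2]
]))

# Process noise standard deviations from Table 3, squared onto the diagonal of Q.
sigma_q = np.concatenate([
    np.full(3, 1.41e-1),  # position [m]
    np.full(3, 1.00e-3),  # velocity-driving acceleration [m/s^2]
    np.full(3, 9.50e-3),  # attitude parameters [-]
    np.full(3, 1.00e-3),  # angular velocity [rad/s]
])
Q = np.diag(sigma_q**2)
```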
Table 4. Test plan summary.

Test Number    Duration    Objective
1              3000 s      To evaluate the navigation chain using only the VIS images in good illumination conditions.
2              3000 s      To highlight the benefits of the sensor handover data fusion strategy.
Table 5. Position and attitude Mean Knowledge Errors (MKE) for different sensing modalities. Test case n.1.

Sensing Modality      Position Errors                    Attitude Errors
                      Abs. MKE [m]     Rel. MKE [%]      MKE [deg]
VIS                   0.19 ± 0.10      1.25 ± 0.68       1.10 ± 0.97
TIR                   0.23 ± 0.15      1.50 ± 0.99       1.83 ± 1.28
VIS-TIR Handover      0.21 ± 0.08      1.39 ± 0.49       0.94 ± 0.58
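The "mean ± standard deviation" entries of Table 5 (and of Table 6 below) aggregate the knowledge errors of the 250 Monte Carlo runs. One possible way to compute such statistics is sketched here, assuming each run's error history is first averaged over time and the spread is then taken across runs; array names, shapes, and this aggregation order are illustrative assumptions.

```python
import numpy as np

def mean_knowledge_error(err_runs):
    # err_runs: (runs x time steps) array of knowledge errors for one quantity.
    per_run_mke = err_runs.mean(axis=1)            # time-averaged error per run
    return per_run_mke.mean(), per_run_mke.std()   # batch mean and 1-sigma spread

# e.g. pos_err with shape (250, n_steps) -> a "0.19 ± 0.10"-style table entry:
# mke, sigma = mean_knowledge_error(pos_err)
```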
Table 6. Position and attitude Mean Knowledge Errors (MKE) for different sensing modalities. Test case n.2.

Sensing Modality      Position Errors                    Attitude Errors
                      Abs. MKE [m]     Rel. MKE [%]      MKE [deg]
TIR                   0.23 ± 0.15      1.50 ± 0.99       1.83 ± 1.28
VIS-TIR Handover      0.22 ± 0.14      1.43 ± 0.96       1.54 ± 1.16