The positions of stars and planets in the sky are represented in the International Celestial Reference Frame (ICRF). The location of a celestial body is expressed in terms of right ascension, $\alpha$, and declination, $\delta$, as seen in Figure 1. Stars are assumed to be infinitely far away, lying on the celestial sphere. Consequently, translational motion has no effect on the apparent position of the stars.
The aircraft position is represented in the Earth-centred, Earth-fixed WGS84 system. This is the standard reference frame for GPS positional data. Aircraft attitude is represented in the local North-East-Down (NED) coordinate system. The camera is assumed to be mounted to the aircraft, sharing a location with the vehicle and differing in orientation by a single rotational transformation. The camera coordinate system is oriented with z positive in the optical axis, x positive towards the right of the image plane and y positive towards the bottom of the image plane.
A star catalogue must be used as a reference for the location of stars. While there are many star catalogues available, we selected the Yale Bright Star Catalogue (BSC) due to its minimal size. The BSC contains records of stars down to magnitude 6.5, totalling 9110 stars. This magnitude threshold is sufficient for most aircraft camera systems (including stabilized systems) [
1]. For ease of implementation, the ASCII-format catalogue was converted into an SQLite database. Indices were created for magnitude, right ascension and declination, including additional composite indices to allow fast querying of the database.
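As an illustration, the conversion to SQLite and the indexing strategy described above can be sketched with Python's sqlite3 module. The table schema, column names and star rows here are hypothetical, not the exact schema used in this work:

```python
# Sketch: load a parsed star catalogue into SQLite with magnitude and
# position indices. Schema and rows are illustrative only.
import sqlite3

rows = [  # (right ascension [deg], declination [deg], visual magnitude)
    (101.2875, -16.7161, -1.46),  # Sirius
    (95.9880, -52.6957, -0.74),   # Canopus
    (213.9154, 19.1824, -0.05),   # Arcturus
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stars (ra REAL, dec REAL, mag REAL)")
con.executemany("INSERT INTO stars VALUES (?, ?, ?)", rows)

# Single-column indices plus a composite index, so queries that filter on
# magnitude and a sky region at the same time can be answered quickly.
con.execute("CREATE INDEX idx_mag ON stars (mag)")
con.execute("CREATE INDEX idx_ra_dec ON stars (ra, dec)")
con.execute("CREATE INDEX idx_mag_ra_dec ON stars (mag, ra, dec)")

bright = con.execute(
    "SELECT ra, dec FROM stars WHERE mag < 0 AND ra BETWEEN 90 AND 120"
).fetchall()
print(bright)  # stars brighter than magnitude 0 in the queried RA band
```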
This simulation initially corrects for celestial phenomena which perturb the right ascension and declination in the star catalogue, before entering a simulation loop. The steps performed on initialization and within the simulation loop are described in the following subsections.
2.1. Initial Corrections
On initialization, adjustments are made for the right ascension and declination of stars due to annual proper motion (the apparent motion of stars), precession (changes in the Earth’s rotational axis over time), nutation (axial changes due to the Moon’s gravitational pull) and aberration (due to the velocity of the Earth’s orbit).
Proper motion is provided in star catalogues and is compensated by computing the change in right ascension and declination since the given epoch, such that:

$$\alpha = \alpha_0 + \mu_\alpha T, \qquad \delta = \delta_0 + \mu_\delta T$$

where $\alpha_0$ and $\delta_0$ are the right ascension and declination at the epoch, $\mu_\alpha$ and $\mu_\delta$ are the annual proper motion, typically expressed in arcseconds per year, and $T$ is the time (in years) since the epoch (J2000 in our case). Subsequently, right ascension and declination are corrected due to precession, following the method outlined in [13]. That is, we use the polynomial approximation for the precession angles $\zeta$, $z$ and $\theta$:

$$\begin{aligned}
\zeta &= 2306.2181''\,t + 0.30188''\,t^2 + 0.017998''\,t^3 \\
z &= 2306.2181''\,t + 1.09468''\,t^2 + 0.018203''\,t^3 \\
\theta &= 2004.3109''\,t - 0.42665''\,t^2 - 0.041833''\,t^3
\end{aligned}$$

where $t$ is the number of centuries since the J2000 epoch. Right ascension $\alpha$ and declination $\delta$ are then found given:

$$\begin{aligned}
A &= \cos\delta_0 \sin(\alpha_0 + \zeta) \\
B &= \cos\theta \cos\delta_0 \cos(\alpha_0 + \zeta) - \sin\theta \sin\delta_0 \\
C &= \sin\theta \cos\delta_0 \cos(\alpha_0 + \zeta) + \cos\theta \sin\delta_0 \\
\alpha &= \arctan\!2(A, B) + z, \qquad \delta = \arcsin C
\end{aligned}$$
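The proper motion and precession corrections can be sketched as follows. This is a minimal implementation of the polynomial method of [13]; the star used (theta Persei, a worked example in [13], with proper motion already applied) and the target epoch are illustrative:

```python
# Sketch of the precession correction, following the polynomial angles
# zeta, z and theta for epochs reckoned from J2000.
import math

ARCSEC = 1.0 / 3600.0  # arcseconds to degrees

def precess(ra0_deg, dec0_deg, t):
    """Precess J2000 coordinates by t Julian centuries since J2000."""
    zeta = (2306.2181 * t + 0.30188 * t**2 + 0.017998 * t**3) * ARCSEC
    z = (2306.2181 * t + 1.09468 * t**2 + 0.018203 * t**3) * ARCSEC
    theta = math.radians(
        (2004.3109 * t - 0.42665 * t**2 - 0.041833 * t**3) * ARCSEC)

    ra0 = math.radians(ra0_deg + zeta)
    dec0 = math.radians(dec0_deg)
    A = math.cos(dec0) * math.sin(ra0)
    B = (math.cos(theta) * math.cos(dec0) * math.cos(ra0)
         - math.sin(theta) * math.sin(dec0))
    C = (math.sin(theta) * math.cos(dec0) * math.cos(ra0)
         + math.cos(theta) * math.sin(dec0))
    ra = (math.degrees(math.atan2(A, B)) + z) % 360.0
    dec = math.degrees(math.asin(C))
    return ra, dec

# 28.86705 years ahead of J2000
print(precess(41.054063, 49.227750, 0.2886705))  # approx (41.5472, 49.3485)
```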
Following [13], we correct for the effects of nutation. Nutation is comprised of nutation in longitude, $\Delta\psi$, and nutation in obliquity, $\Delta\epsilon$. These quantities can be approximated to within 0.5 arcseconds by the following equations (expressed in arcseconds):

$$\begin{aligned}
\Delta\psi &= -17.20''\sin\Omega - 1.32''\sin 2L - 0.23''\sin 2L' + 0.21''\sin 2\Omega \\
\Delta\epsilon &= 9.20''\cos\Omega + 0.57''\cos 2L + 0.10''\cos 2L' - 0.09''\cos 2\Omega
\end{aligned}$$

where $\Omega$, the longitude of the ascending node of the Moon's mean orbit on the ecliptic, expressed in degrees, is approximated as:

$$\Omega = 125.04452 - 1934.136261\,t$$

where $t$ is expressed in Julian centuries, as above. $L$ and $L'$, the mean longitudes of the Sun and Moon, respectively, expressed in degrees, are given by:

$$L = 280.4665 + 36000.7698\,t, \qquad L' = 218.3165 + 481267.8813\,t$$

The mean obliquity of the ecliptic can be found given the following equation, expressed in degrees:

$$\epsilon_0 = 23.43929111 - 0.01300417\,t - 1.64 \times 10^{-7}\,t^2 + 5.04 \times 10^{-7}\,t^3$$

Subsequently, the true obliquity of the ecliptic $\epsilon$ is given by:

$$\epsilon = \epsilon_0 + \Delta\epsilon$$

The resulting corrections due to nutation, $\Delta\alpha_1$ and $\Delta\delta_1$ for the star's right ascension and declination, respectively, are given by:

$$\begin{aligned}
\Delta\alpha_1 &= (\cos\epsilon + \sin\epsilon \sin\alpha \tan\delta)\,\Delta\psi - (\cos\alpha \tan\delta)\,\Delta\epsilon \\
\Delta\delta_1 &= (\sin\epsilon \cos\alpha)\,\Delta\psi + \sin\alpha\,\Delta\epsilon
\end{aligned}$$
We also consider the effects of aberration, as presented in [13]. Given the constant of aberration, $\kappa = 20.49552$ arcseconds, the true longitude of the Sun $\odot$, eccentricity of the Earth's orbit $e$, and the longitude of the perihelion, $\pi$, we can compute the corrections $\Delta\alpha_2$ and $\Delta\delta_2$ for the star's right ascension and declination with the following equations:

$$\begin{aligned}
\Delta\alpha_2 &= -\kappa\,\frac{\cos\alpha \cos\odot \cos\epsilon + \sin\alpha \sin\odot}{\cos\delta} + e\kappa\,\frac{\cos\alpha \cos\pi \cos\epsilon + \sin\alpha \sin\pi}{\cos\delta} \\
\Delta\delta_2 &= -\kappa\left[\cos\odot \cos\epsilon\,(\tan\epsilon \cos\delta - \sin\alpha \sin\delta) + \cos\alpha \sin\delta \sin\odot\right] \\
&\quad + e\kappa\left[\cos\pi \cos\epsilon\,(\tan\epsilon \cos\delta - \sin\alpha \sin\delta) + \cos\alpha \sin\delta \sin\pi\right]
\end{aligned}$$

where:

$$e = 0.016708634 - 0.000042037\,t - 0.0000001267\,t^2, \qquad \pi = 102.93735 + 1.71946\,t + 0.00046\,t^2$$

and the true longitude of the Sun is calculated as:

$$\odot = L_0 + C$$

where the mean longitude of the Sun, $L_0$, is given by:

$$L_0 = 280.46646 + 36000.76983\,t + 0.0003032\,t^2$$

and the Sun's equation of the center, $C$, is given by:

$$C = (1.914602 - 0.004817\,t - 0.000014\,t^2)\sin M + (0.019993 - 0.000101\,t)\sin 2M + 0.000289\sin 3M$$

and the mean anomaly of the Sun, $M$, is given by:

$$M = 357.52911 + 35999.05029\,t - 0.0001537\,t^2$$
The correction terms, $\Delta\alpha_1$, $\Delta\delta_1$, $\Delta\alpha_2$ and $\Delta\delta_2$, are added to $\alpha$ and $\delta$, yielding the corrected celestial coordinates of the star.
2.2. Updating Position and Time
Conversion from terrestrial time (as measured with the Gregorian calendar) to celestial time (typically expressed in Julian days) is a necessary step in simulating celestial bodies. The BSC represents the location of stars with respect to the J2000 epoch. The conversion from Gregorian date to Julian day is detailed in [
13]. The basic steps are shown in Algorithm 1.
Algorithm 1 Conversion from Gregorian date to Julian day

let Y be the current year ▹ Integer, Gregorian
let M be the current month ▹ Integer [1..12], Gregorian
let D be the current day ▹ Decimal, Gregorian
if M ≤ 2 then
    Y ← Y − 1
    M ← M + 12
end if
let A = ⌊Y/100⌋
let B = 2 − A + ⌊A/4⌋
J = ⌊365.25(Y + 4716)⌋ + ⌊30.6001(M + 1)⌋ + D + B − 1524.5 ▹ Julian Days
This conversion expresses the current time with respect to the number of Julian days that have elapsed since 4716 BC. Furthermore, the J2000 epoch is expressed in relation to the Gregorian date 1 January 2000 and can be found by subtracting the Julian day at that time from the current Julian date:

$$d = J - 2451545.0$$
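Algorithm 1 and the J2000 offset can be sketched as follows (function names are our own):

```python
# Sketch of the Gregorian-date-to-Julian-day conversion, with the J2000
# offset applied.
import math

def julian_day(year, month, day):
    """Gregorian calendar date (day may be fractional) to Julian day."""
    if month <= 2:       # treat Jan/Feb as months 13/14 of the previous year
        year -= 1
        month += 12
    a = year // 100
    b = 2 - a + a // 4   # Gregorian calendar correction
    return (math.floor(365.25 * (year + 4716))
            + math.floor(30.6001 * (month + 1)) + day + b - 1524.5)

def days_since_j2000(year, month, day):
    return julian_day(year, month, day) - 2451545.0

print(julian_day(2000, 1, 1.5))        # J2000 epoch: 2451545.0
print(days_since_j2000(2000, 1, 1.5))  # 0.0
```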
Sidereal local time, analogous to celestial longitude, can be calculated given WGS84 longitude and the current time of day expressed in decimal hours. Given longitude $\lambda$, Julian date $d$ expressed in the J2000 epoch, and decimal hours $UT$, the local sidereal time $\theta_{LST}$ is given by [13]:

$$\theta_{LST} = 100.46 + 0.985647\,d + \lambda + 15\,UT$$

$\theta_{LST}$ is typically limited to the range $[0, 360)$ degrees. The hour-angle $H$ of a celestial body can then be simply calculated from $\theta_{LST}$ and right ascension $\alpha$ as:

$$H = \theta_{LST} - \alpha$$
The hour-angle of a body can be used to compute its azimuth and elevation, expressed in local NED coordinates. The local elevation $el$ of a celestial body given hour angle $H$, declination $\delta$ and latitude $\phi$ is given by:

$$el = \arcsin\left(\sin\delta \sin\phi + \cos\delta \cos\phi \cos H\right)$$

Subsequently, using elevation from Equation (23), the local azimuth $az$ is given by the following equation:

$$az = \arctan\!2\left(\sin H \cos\delta,\; \frac{\sin\phi\,\sin(el) - \sin\delta}{\cos\phi}\right) + \pi$$

where the addition of $\pi$ radians converts azimuth to a representation that is positive East of North [13].
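The sidereal time, hour-angle, elevation and azimuth steps can be sketched as follows, together with the NED unit vector used later for projection. The sidereal-time constants are the common low-precision approximation, and the azimuth uses an equivalent atan2 form; both are assumptions rather than necessarily the paper's exact expressions:

```python
# Sketch: local sidereal time -> hour angle -> elevation/azimuth -> NED
# unit vector. Angles returned in radians; azimuth positive East of North.
import math

def local_sidereal_deg(days_since_j2000, lon_deg, ut_hours):
    return (100.46 + 0.985647 * days_since_j2000
            + lon_deg + 15.0 * ut_hours) % 360.0

def elevation_azimuth(lst_deg, ra_deg, dec_deg, lat_deg):
    H = math.radians(lst_deg - ra_deg)                    # hour angle
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    s = (math.sin(dec) * math.sin(lat)
         + math.cos(dec) * math.cos(lat) * math.cos(H))
    el = math.asin(max(-1.0, min(1.0, s)))                # clamp for safety
    az = math.atan2(math.sin(H),
                    math.cos(H) * math.sin(lat)
                    - math.tan(dec) * math.cos(lat)) + math.pi
    return el, az % (2.0 * math.pi)

def ned_unit_vector(el, az):
    """North, East, Down components of the star's direction."""
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            -math.sin(el))

# A star at the observer's zenith (dec = lat, hour angle = 0) maps to
# "up" in NED, i.e. approximately (0, 0, -1).
el, az = elevation_azimuth(50.0, 50.0, 45.0, 45.0)
print(ned_unit_vector(el, az))
```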
The observed elevation of a celestial body is altered due to the effects of atmospheric refraction. Consequently, objects in the sky appear at a greater elevation than they would without the atmospheric effects. This effect is exaggerated at lower elevations (closer to the horizon), which leads to an angular displacement of up to 0.5°. For cameras in the visible light spectrum, the refractive distance $R$ (expressed in arcminutes) can be approximated to within 4 arcseconds [14], given the following formula:

$$R = \frac{1.02}{\tan\left(el + \dfrac{10.3}{el + 5.11}\right)}$$

where $el$ is the true elevation expressed in degrees.
If a greater level of accuracy is required, alternative methods such as that seen in [15] can be used. The apparent elevation $el_a$ is then given by:

$$el_a = el + \frac{R}{60}$$

The inverse of this formula, Equation (27) [16], allows for the correction of observations:

$$R = \frac{1}{\tan\left(el_a + \dfrac{7.31}{el_a + 4.4}\right)}$$
Formulas (25) and (27) assume an atmospheric pressure of 1010 millibars and an air temperature of 10 °C. According to [13], an approximate scale factor may be applied given pressure $P$ at the Earth's surface, and air temperature $T$ °C, given by the following formula:

$$f = \frac{P}{1010} \times \frac{283}{273 + T}$$
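The refraction corrections above can be sketched as follows, with the true-elevation form [14], the apparent-elevation (inverse) form [16], and the pressure/temperature scale factor; function names are our own:

```python
# Sketch of atmospheric refraction corrections. Elevations in degrees,
# refraction R in arcminutes.
import math

def scale_factor(pressure_mbar, temp_c):
    return (pressure_mbar / 1010.0) * (283.0 / (273.0 + temp_c))

def refraction_from_true(el_deg, pressure_mbar=1010.0, temp_c=10.0):
    """Refraction R (arcmin) to add to a true elevation el_deg."""
    R = 1.02 / math.tan(math.radians(el_deg + 10.3 / (el_deg + 5.11)))
    return R * scale_factor(pressure_mbar, temp_c)

def refraction_from_apparent(el0_deg, pressure_mbar=1010.0, temp_c=10.0):
    """Refraction R (arcmin) to subtract from an observed elevation el0_deg."""
    R = 1.0 / math.tan(math.radians(el0_deg + 7.31 / (el0_deg + 4.4)))
    return R * scale_factor(pressure_mbar, temp_c)

R = refraction_from_true(0.0)   # a star on the true horizon
print(R, R / 60.0)              # roughly 29 arcmin of apparent lift
```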
Finally, we map the celestial sphere onto a unit sphere for the purposes of image plane projection. Given the azimuth and elevation of a star, the corresponding unit vector in local NED coordinates is given by:

$$u_n = \begin{bmatrix} \cos(el)\cos(az) \\ \cos(el)\sin(az) \\ -\sin(el) \end{bmatrix}$$
For resource constrained systems, the rate at which these unit vectors are updated should be chosen relative to the precision required by the simulation. For reference, a geostationary camera with an update rate of 1 Hz will be accurate to within ±15 arcseconds ($4.17 \times 10^{-3}$ degrees), since the Earth rotates at approximately 15 arcseconds per second. For an aircraft travelling East/West at Mach 1, the vehicle's own rate of longitude change adds a comparable apparent motion that grows with latitude, so a 1 Hz update rate remains accurate to within tens of arcseconds.
2.3. Updating Attitude
Representing celestial bodies in the local NED frame of reference simplifies the transformation from aircraft attitude to camera attitude. Aircraft attitude should come either directly from the onboard attitude reference or from a simulator. The work presented here uses the open source Ardupilot toolchain. Attitude log data collected from real flights were used for this research, so as to correlate real images with their simulated counterparts. The roll $\phi$, pitch $\theta$ and yaw $\psi$ Euler angles of the aircraft in local NED coordinates were logged at 30 Hz. These Euler angles can be represented as a rotation matrix, through a yaw–pitch–roll rotation sequence. The rotation matrix, $R_n^b$, transforms objects in the local NED frame to the aircraft body frame, where $R_n^b$ is given by:

$$R_n^b = \begin{bmatrix} c\theta\,c\psi & c\theta\,s\psi & -s\theta \\ s\phi\,s\theta\,c\psi - c\phi\,s\psi & s\phi\,s\theta\,s\psi + c\phi\,c\psi & s\phi\,c\theta \\ c\phi\,s\theta\,c\psi + s\phi\,s\psi & c\phi\,s\theta\,s\psi - s\phi\,c\psi & c\phi\,c\theta \end{bmatrix}$$

where $c(\cdot)$ and $s(\cdot)$ represent $\cos(\cdot)$ and $\sin(\cdot)$, respectively. The camera is mounted to the aircraft, with roll $\phi_c$, pitch $\theta_c$ and yaw $\psi_c$ expressed in the aircraft body frame. The $z$ axis of the camera is parallel to the optical axis, positive in the direction of the image plane. The $y$ and $x$ axes are orthogonal and directed towards the bottom and the right of the image plane, respectively. The rotation matrix, $R_b^c$, transforms objects in the aircraft body frame to the camera frame, where $R_b^c$ is given by:

$$R_b^c = \begin{bmatrix} c\theta_c\,c\psi_c & c\theta_c\,s\psi_c & -s\theta_c \\ s\phi_c\,s\theta_c\,c\psi_c - c\phi_c\,s\psi_c & s\phi_c\,s\theta_c\,s\psi_c + c\phi_c\,c\psi_c & s\phi_c\,c\theta_c \\ c\phi_c\,s\theta_c\,c\psi_c + s\phi_c\,s\psi_c & c\phi_c\,s\theta_c\,s\psi_c - s\phi_c\,c\psi_c & c\phi_c\,c\theta_c \end{bmatrix}$$
Given rotation matrices $R_n^b$ and $R_b^c$, the transformation of a unit vector, $u_n$, from the local NED frame to the camera frame, can be computed as:

$$u_c = R_b^c\, R_n^b\, u_n$$

Equation (32) allows the unit vectors of the celestial bodies computed in Section 2.2 to be represented in the camera frame of reference.
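The rotation chain of Equation (32) can be sketched with NumPy; the aircraft and camera angles below are illustrative:

```python
# Sketch of the NED -> body -> camera rotation chain, using the
# yaw-pitch-roll sequence described above.
import numpy as np

def euler_to_matrix(roll, pitch, yaw):
    """Rotation matrix taking parent-frame vectors into the rotated frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cp * cy,                cp * sy,                -sp],
        [sr * sp * cy - cr * sy, sr * sp * sy + cr * cy, sr * cp],
        [cr * sp * cy + sr * sy, cr * sp * sy - sr * cy, cr * cp],
    ])

R_nb = euler_to_matrix(0.0, 0.0, np.pi / 2)  # aircraft level, yawed 90 deg
# Camera rotated 180 deg about body y, so its optical (z) axis, initially
# "down" like the body z axis, points at the zenith.
R_bc = euler_to_matrix(0.0, np.pi, 0.0)

u_n = np.array([0.0, 0.0, -1.0])             # star at the zenith ("up" in NED)
u_c = R_bc @ R_nb @ u_n
print(u_c)                                   # approx [0, 0, 1]: on the optical axis
```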
2.4. Projection
We assume that the camera intrinsic matrix, $K$, is known. The intrinsic properties of a camera can be found through a calibration method such as that described in [17], yielding matrix:

$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

where $f_x$ and $f_y$ are the $x$ and $y$ focal lengths in pixel units, $c_x$ and $c_y$ are the $x$ and $y$ pixel locations of the principal point, respectively, and $s$ describes the sensor skewness (typically 0 for digital sensors).
We first convert each celestial body unit vector into homogeneous coordinates. For components $x$, $y$ and $z$ of unit vector $u_c$, the homogeneous point $\tilde{u}_c$ in the camera frame of reference is calculated as:

$$\tilde{u}_c = \begin{bmatrix} x/z & y/z & 1 \end{bmatrix}^T$$

Objects whose depth value, $z$, is less than 0 are ignored due to being positioned behind the camera. Finally, given no translational component to our camera, the pixel location, $p$, of the object is found as:

$$p = K\,\tilde{u}_c$$
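The projection pipeline can be sketched as follows; the intrinsic values are illustrative:

```python
# Sketch of the pinhole projection: intrinsics K, homogeneous
# normalization, and depth culling of objects behind the camera.
import numpy as np

K = np.array([
    [1000.0,    0.0, 640.0],   # fx, skew, cx
    [   0.0, 1000.0, 360.0],   # fy, cy
    [   0.0,    0.0,   1.0],
])

def project(u_c, K):
    """Project a camera-frame unit vector to pixel coordinates."""
    x, y, z = u_c
    if z <= 0:
        return None            # behind the camera: not rendered
    u_h = np.array([x / z, y / z, 1.0])
    return (K @ u_h)[:2]

print(project(np.array([0.0, 0.0, 1.0]), K))   # on-axis star -> principal point
print(project(np.array([0.0, 0.0, -1.0]), K))  # behind the camera -> None
```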
Lens distortion may also be included in the model. Lens distortion models are typically non-linear, and are expressed as a function of displacement from the principal point. Various models exist, and the appropriate model should be chosen according to the level of precision required [18]. If using a lens distortion model, it should be applied after the star is rendered, so as to capture the resultant eccentricities from the distortion. We do not model lens distortion in our simulation; instead, we rectify all images prior to analysis, such that any residual distortion is negligible.
2.5. Calibration
The apparent pixel intensity of a star is determined by the apparent star magnitude, atmospheric conditions, lens properties and sensor properties. We present here a method which precludes the need for detailed modeling, through a single-image calibration process. We use a reference image captured from a stationary aircraft on the ground and fit an exponential curve to define the relationship between star magnitude and pixel intensity.
The relationship between observed brightness and apparent star magnitude is given by the equation:

$$m - m_{ref} = -2.5 \log_{10}\left(\frac{B}{B_{ref}}\right)$$

where $m$ is the observed star magnitude, $m_{ref}$ is the reference star magnitude, $B$ is the observed star brightness and $B_{ref}$ is the reference star brightness. This magnitude scale is designed such that a magnitude difference of 5 correlates with a brightness factor of 100. That is to say, a magnitude 1 star is 100 times brighter than a magnitude 6 star. For this work, we take the brightness of a star to be its maximum pixel value. Images are converted from the blue–green–red (BGR) colour space to the hue–saturation–value (HSV) colour space, and the value channel is used as a greyscale image.
By rearranging Equation (36) we can compute the brightness of an observed star, given that we have a reference brightness $B_{ref}$, reference magnitude $m_{ref}$ and observed magnitude $m$:

$$B = B_{ref}\, 10^{-0.4\,(m - m_{ref})}$$
The choice of reference star is an important factor, as the magnitude of stars are typically considered to be unreliable [19]. Many factors can cause the apparent magnitude of a star to differ from the catalogue, including spectral attenuation caused by the atmosphere, camera characteristics, atmospheric refraction, as well as the luminescent characteristics of the star itself. We assume that the magnitude error follows a zero-mean Gaussian distribution. Following this assumption, we select the star with magnitude and brightness that minimizes the Frobenius norm of the difference between observed and calculated brightness:

$$\underset{(m_{ref},\,B_{ref})}{\arg\min}\;\left\lVert \hat{B} - \bar{B} \right\rVert_F$$

where $\hat{B}$ is a vector containing all calculated star brightnesses, and $\bar{B}$ is a vector containing all measured star brightnesses. The vector $\hat{B}$ is computed for each candidate star in the reference image by choosing $m_{ref}$ and $B_{ref}$ from that star, and recomputing $\hat{B}$ for all stars in the image using Equation (37). Stars whose brightness is saturated (i.e., have a maximum pixel value of 255) are excluded from this process.
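The brightness model and reference-star selection above can be sketched as follows; the catalogue magnitudes and measured pixel values are synthetic, for illustration only:

```python
# Sketch: Pogson brightness relation and reference-star selection by
# minimizing the norm between predicted and measured brightnesses.
import numpy as np

def brightness(mag, ref_mag, ref_brightness):
    """Pogson relation: 5 magnitudes correspond to a factor of 100."""
    return ref_brightness * 10.0 ** (-0.4 * (mag - ref_mag))

# Synthetic catalogue magnitudes and measured peak pixel values
mags = np.array([1.0, 2.0, 3.0, 4.0])
measured = np.array([200.0, 81.0, 31.0, 12.0])  # roughly 2.5x per magnitude

def select_reference(mags, measured):
    """Pick the star whose (mag, brightness) best predicts all the others."""
    best, best_err = None, np.inf
    for m_ref, b_ref in zip(mags, measured):
        predicted = brightness(mags, m_ref, b_ref)
        err = np.linalg.norm(predicted - measured)
        if err < best_err:
            best, best_err = (m_ref, b_ref), err
    return best

print(select_reference(mags, measured))
```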
Figure 2 shows the output from this calibration process, conducted on an image captured from a grounded aircraft, using a PiCamHQ with 500 ms exposure interval. We note that this procedure is most effective with a larger number of visible stars, such that the sample better represents the population. In this example, stars were automatically detected and matched to the database using a star identification algorithm. As an alternative to this automated process, one could manually label each star in the reference image.
Stars are effectively point light sources. The camera lens will tend to defocus a star, such that it appears larger than one pixel. The apparent size of a star is affected by magnitude only to the extent that otherwise-undetectable pixels exceed the noise floor. Consequently, we can treat each star as having constant size, and changes to the star’s brightness will yield a larger or smaller effective area. We assume that the pixel intensity of a star follows a two-dimensional Gaussian point spread function, with diagonal covariance matrix whose elements are equal. That is, a star image is circularly symmetric about its center. In practice, this may not be the case, and we note that such methods might be used for calibrating lens and sensor distortion (this is, however, out of scope for this work). Following this assumption, we measure the standard deviation across the x and y axes of each normalized star detection in the reference image and use the median value across all stars as the reference size for rendering in the simulation. We normalize by scaling the peak pixel value to 1. The standard deviation
$\sigma$ is calculated as:

$$\sigma = \mathrm{std}\!\left(\frac{h \odot x}{\max(h)}\right)$$

where $h$ is the histogram of intensities with respect to pixel position $x$, $\mathrm{std}(x)$ is the standard deviation of sample set $x$, the symbol ⊙ represents element-wise multiplication, and $\max(h)$ is a scalar value equal to the maximum-valued element of $h$.
A graph showing the standard deviation in pixel intensity vs. star magnitude can be seen in
Figure 3, demonstrating the approximate uniformity of apparent projected star size across various magnitudes.
Figure 4 shows an example of a Gaussian star render, reconstructed from the standard deviation and star brightness.
Simulation noise levels are calibrated from the reference image. Sources such as moonlight and atmospheric light pollution contribute to Gaussian noise observed in an image and consequently reduce the signal-to-noise ratio and observability of stars. The mean $\mu_n$ and standard deviation $\sigma_n$ are measured in the value channel of the image. We ignore sections of the image in which stars have been detected so as to avoid bias introduced by the stars themselves. Thus, we have the noise function:

$$n \sim \mathcal{N}(\mu_n, \sigma_n^2)$$
2.6. Rendering
A long exposure image can be generated by superimposing multiple short-exposure images. Doing so requires a fast attitude update so as to reproduce the motion of the aircraft throughout the exposure period. The required temporal resolution is affected by the aircraft dynamics; however, we found that a 5 ms (200 Hz) attitude update results in the contiguous rendering of stars for wide-angled lenses, even during aggressive maneuvers. A 200 Hz attitude measurement is not typically available on low-cost hardware, so we interpolate attitude measurements using the spherical linear interpolation algorithm described in [20] to achieve the desired rate. This method requires orientations to be represented as quaternions. The details are omitted here for brevity; however, most computer graphics libraries support this operation, transforming the direction cosine matrix in Equation (32) to quaternion format. We can retrieve any arbitrary orientation between two quaternions with the following equation:

$$q_u = \mathrm{Slerp}(q_0, q_1, u) = \frac{\sin((1-u)\,\Omega)}{\sin\Omega}\,q_0 + \frac{\sin(u\,\Omega)}{\sin\Omega}\,q_1, \qquad \cos\Omega = q_0 \cdot q_1$$

where Slerp is the spherical linear interpolation function; a value of $u = 0$ returns $q_0$, a value of $u = 1$ returns $q_1$ and intermediate values of $u$ provide interpolation along the shortest path between $q_0$ and $q_1$ on the unit sphere. Interpolation is performed such that attitudes are captured at 5 ms intervals from the beginning of the camera exposure interval to the end, yielding a total of $n = \lceil t_e / 0.005 \rceil$ attitude references, where $t_e$ is the camera exposure time in seconds.
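A minimal quaternion slerp can be sketched as follows, for readers without a graphics library at hand; the $(w, x, y, z)$ component ordering is our convention:

```python
# Sketch of spherical linear interpolation between unit quaternions.
import numpy as np

def slerp(q0, q1, u):
    """Interpolate between unit quaternions q0 and q1 along the shortest arc."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:              # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:           # nearly parallel: fall back to normalized lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)
    return (np.sin((1 - u) * omega) * q0
            + np.sin(u * omega) * q1) / np.sin(omega)

identity = np.array([1.0, 0.0, 0.0, 0.0])                # (w, x, y, z)
yaw90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])

print(slerp(identity, yaw90, 0.0))   # -> identity
print(slerp(identity, yaw90, 0.5))   # -> 45 deg yaw quaternion
```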
The long-exposure image is constructed by superimposing $n$ floating point images. For each attitude $q_i$, the pixel location $p_j$ of each star $j$ in the database is found from Equation (35). A discrete Gaussian kernel, $G$, is constructed using the standard deviation found in Equation (39). The kernel is programmatically generated (see [21]) such that the value $G_i$ of element $i$ is given by:

$$G_i = \alpha\,\exp\left(-\frac{\left(i - (k-1)/2\right)^2}{2\sigma^2}\right), \qquad i = 0, \ldots, k-1$$

where the kernel size $k$ is odd and is selected to contain a given minimum proportion of the total star energy, and $\alpha$ is selected such that $\sum_i G_i = 1$. The kernel is subsequently scaled, such that the maximum element, at index $(k-1)/2$, is equal to the peak pixel value, $B$, calculated from Equation (37) using the magnitude $m$ of star $j$, as well as the reference magnitude and intensity, $m_{ref}$ and $B_{ref}$ respectively. Finally, the kernel is scaled down by a factor of $n$. Assuming the photon flux density is constant across the exposure window, a single short-exposure window contains a fraction $1/n$ of the total star energy. This scaling of kernel $G$ generated in Equation (42) is given by:

$$G' = \frac{B}{n\,\max(G)}\,G$$
Note that $G'$ is stored as a floating point array. The short-exposure star is drawn to the image canvas by centering the kernel on pixel $p_j$ and rotating 180° about the centre, adding the values of $G'$ to the canvas, so as to render the star symmetrically. This process is repeated for each star in the database at the current attitude.
Once all of the short-exposure images have been rendered, the Gaussian noise defined in Equation (40) is added to the canvas. Finally, the image is converted from a single-channel floating point image to an 8-bit single-channel image. The resulting image contains stars rendered with the motion blur caused by camera movement throughout the exposure interval.