Article

Deep Sky Objects Detection with Deep Learning for Electronically Assisted Astronomy

Olivier Parisot * and Mahmoud Jaziri
Luxembourg Institute of Science and Technology, 5 Avenue des Hauts-Fourneaux, L-4362 Esch-sur-Alzette, Luxembourg
* Author to whom correspondence should be addressed.
Astronomy 2024, 3(2), 122-138; https://doi.org/10.3390/astronomy3020009
Submission received: 30 January 2024 / Revised: 24 April 2024 / Accepted: 6 May 2024 / Published: 13 May 2024

Abstract

Electronically Assisted Astronomy is a fascinating activity that requires suitable conditions and expertise to be fully appreciated. Complex equipment, light pollution around urban areas and a lack of contextual information often prevent newcomers from making the most of their observations, restricting the field to a niche expert audience. With recent smart telescopes, amateur and professional astronomers can efficiently capture a large number of images. However, post-hoc verification is still necessary to check whether deep sky objects are visible in the produced images, depending on their magnitude and the observation conditions. If this detection could be performed during data acquisition, it would be possible to configure the capture time more precisely. While state-of-the-art works focus on detection techniques for large surveys produced by professional ground-based observatories, we propose in this paper several Deep Learning approaches to detect celestial targets in images captured with smart telescopes, with an F1-score between 0.4 and 0.62 on test data, and we experimented with them during outreach sessions with the public in the Luxembourg Greater Region.

1. Introduction

Space telescopes such as Hubble [1], James Webb [2] or Euclid [3] provide breathtaking views of galactic fields and nebulae with unprecedented clarity, making astronomy more attractive to the media and the general public. This is encouraging a growing number of people to take an interest and to equip themselves to observe the night sky, but there are several challenges to be overcome.

1.1. Observing Night Sky

While a pair of binoculars can reveal celestial targets like the Andromeda Galaxy (Messier 31) and the Orion Nebula (Messier 42), stargazing is challenging in practice [4]. Among the different obstacles, we can mention:
  • Optimal weather conditions (such as clear skies free from clouds) are essential. Moreover, a low level of light pollution is crucial for a rewarding experience [5].
  • Visual observation requires conditioning in total darkness to accustom the eyes to the night sky. Even under ideal conditions, the view through an eyepiece and a telescope can be disappointing because of the lack of contrast and colour [4].
  • Astronomy is an outdoor activity, and enduring the cold and dampness can test the patience of even the most curious observers.
  • People with limited physical abilities (visual impairment, handicap, etc.) might not be able to comfortably use the equipment.
  • Satellite constellations can be dazzling (especially shortly after they are put into orbit) and may disrupt observations [6].

1.2. Going beyond the Limits with Electronically Assisted Astronomy

Nowadays, Electronically Assisted Astronomy (EAA) is increasingly applied by astronomers to observe Deep Sky Objects (DSO), i.e., astronomical objects that are not individual stars or Solar System objects, such as nebulae, galaxies or clusters [7]. By capturing images directly from a camera coupled to a telescope and applying lightweight image processing, EAA makes it possible to generate and display enhanced images on screens (laptop, tablet, smartphone), even in places heavily impacted by light pollution and poor weather conditions. Over a local network, it is even possible to watch the images captured by an outdoor setup while staying warm indoors in front of a screen.
Many targets can be observed with EAA: open clusters, globular clusters, galaxies, nebulae and comets. They are documented in books and software [8], and are referenced in well-known astronomical catalogues [9] such as Messier (110 DSO), the New General Catalogue (7840 DSO), the Index Catalogue (5386 DSO), Sharpless (312 DSO) and Barnard (349 DSO); these lists were established at a time when light pollution was not a problem.
Beyond the simple hobby, astronomy is one of the few domains where amateurs can contribute to discoveries with the support of scientists [10]. For example, recent collaborations between professionals and amateurs have shown that unknown targets like gas clouds can be discovered by accumulating data over a very long period of time with equipment available to the public [11]. The simultaneous use of a network of smart telescopes can contribute to the study of asteroids and even exoplanets [12]. Within the Kilonova-Catcher program (KNC), images collected by citizens can support research about gravitational waves [13,14]. For many years, the American Association of Variable Star Observers (AAVSO) has allowed everyone to participate in the discovery of variable stars [15].
Unfortunately, EAA is not easy to implement in practice and a technical background is needed [16]. EAA requires the management of complex equipment (motorized tracking mount, refractor or reflector, dedicated cameras, pollution filters, etc.) and software (like Sharpcap 1 or AstroDMX 2). These barriers to entry prevent interested amateurs from organising stargazing sessions.

1.3. Stargazing with Smart Telescopes

Recent years have brought the emergence of smart telescopes, making sky observation accessible to anyone. These instruments are used to collect images of DSO, employing smartphones and tablets as display devices. Several manufacturers around the world have started to design and develop this type of product, with prices ranging from 500 to 4500 EUR. Requiring no prior knowledge and with a time investment of less than five minutes, such automated telescopes allow the capture and sharing of instant images of star clusters, galaxies, and nebulae. Even the scientific community is taking advantage of these new tools to study astronomical events (e.g., asteroid occultations, exoplanet transits, eclipses) with simultaneous observations coming from a network of smart instruments [17].
During public outreach events with non-experts and EAA sessions, the automation of repetitive tasks by smart telescopes provides a streamlined approach to covering a diverse array of topics. It enables live streaming of various DSO (Figure 1), all while the associated mobile applications use accessible language to describe their characteristics such as apparent size, physical composition, and distance [18].
Furthermore, smart telescopes make it easy to demonstrate how DSO are selected based on their seasonal visibility and position in the sky. They make it possible to organise observation sessions during which each participant can connect their own smartphone to the smart telescopes currently collecting data, in order to observe the results on screens and save them.

1.4. Current Limitations

In practice, most EAA setups can be scheduled to start and stop capturing images for a specific part of the night sky, and the results are stored for later use on a portable hard drive. In this context, a feature is missing: the detection of the DSO that are visible in the captured images:
  • Although stars are generally visible in the images, it can be difficult to confirm the presence of DSO, particularly in the case of faint, high-magnitude objects that require several hours, or even several nights, of observation.
  • Unfavourable external conditions (light pollution, full moon, etc.) increase the background level of the sky and are therefore not conducive to the observation of high-magnitude DSO [5,16].
  • It is useful to automatically annotate the images with the detected DSO, potentially including objects that were not expected (e.g., comets, supernovae).
  • For people with a minimum knowledge of astronomy, a quick look is enough to visually recognize DSO; it is not that easy for novices.
In this paper, we present various approaches based on Deep Learning (DL) to automatically detect deep space targets from data captured with smart telescopes.
The rest of this document is organised as follows. First, we discuss existing techniques for detecting objects in astronomical images (Section 2). Secondly, we present the method used to build a dataset with automated telescopes (Section 3). Thirdly, we detail different DL approaches to detect DSO in the collected images (Section 4.1, Section 4.2, Section 4.3). Finally, we discuss the results and the application of these approaches for science outreach (Section 5), and we conclude by opening up a few perspectives (Section 6).

2. Related Works

Traditionally, the detection of astronomical objects is carried out using astrometry (i.e., by finding the exact position, scale and orientation of the image): by checking known celestial maps (containing the exact positions of the existing objects), it is then possible to find which objects are visible in the analysed image [19]. In fact, simplified astrometry/plate solving is used during the automated initialisation of smart telescopes, in order to locate the stars and therefore the orientation of the instruments. This is effective, but requires access (local or network) to a database containing the coordinates of celestial bodies. Moreover, these methods cannot be used to discover objects that have not yet been catalogued (e.g., supernovae).
The role of Computer Vision (CV) and Artificial Intelligence (AI) in optimizing the use and operation of optical telescopes is becoming increasingly important [20]. Such approaches for object detection are numerous, as they allow information to be extracted directly from captured images. Almost ten years ago, an interesting work based on segmentation was proposed to detect galaxies in astronomical surveys [21]. Recent approaches based on YOLO (You Only Look Once) are specifically dedicated to the detection of objects in images, based on prior supervised training. YOLO predicts bounding boxes and class probabilities simultaneously; the first version was published in [22]. For example, [23] proposes combining YOLOv2 and data augmentation to detect and classify galaxy types in large astronomical surveys. Recently, a paper described how to detect space objects from a partially annotated dataset [24], and another one proposed to combine citizen and expert knowledge to process unlabelled astronomical data [25]. Finally, a technique combining YOLOv7 and Generative AI was proposed to detect objects in radar images [26].
These methods require huge training datasets to be effective and, to our knowledge, there is limited research based on images captured under conditions and with equipment accessible to amateurs. Like [27], most academic contributions target sky surveys produced by professional large-aperture telescopes under ideal conditions. The images captured during EAA (Electronically Assisted Astronomy) sessions can be noisy because the observation durations are generally not very long. They may be subject to issues such as light pollution and variable weather conditions, which ultimately result in highly heterogeneous image quality, sometimes making it challenging to visualize faint targets. We thus propose in this paper approaches based on this type of data.

3. Data

For this work, we collected a substantial amount of data (over 250 different targets visible from the Northern Hemisphere) by using equipment and following a method that can be reproduced by amateurs. The images were taken between March 2022 and September 2023 in Luxembourg, France and Belgium, using the alignment and stacking functions built into these two instruments:
  • Stellina 3: ED doublet with an aperture of 80 mm and a focal length of 400 mm (focal ratio of f/5)—equipped with a Sony IMX178 CMOS sensor with a resolution of 6.4 million pixels (3096 × 2080 pixels).
  • Vespera 4: apochromatic quadruplet with a 50 mm aperture and 200 mm focal length (f/4 focal ratio)—equipped with a Sony IMX462 CMOS sensor with a resolution of 2 million pixels (1920 × 1080 pixels).
For each observation session, the instruments were set up in a dark environment (with no direct light) and properly balanced using a spirit level on a stable floor. Depending on the observing conditions, CLS (City Light Suppression) or DB (Dual Band) filters were used to obtain more signal on high-magnitude targets (particularly nebulae). The default parameters for Stellina and Vespera were applied: a 10 s exposure time for each unit image and a gain of 20 dB. This configuration works well in practice: the exposure time is a good compromise for obtaining good images with the instruments' alt-azimuth motorised mounts (a higher value may produce undesirable star trails). The images were obtained with reasonable cumulative integration times (from 20 to 120 min, leading to an acceptable signal-to-noise ratio for most targets): in this case, cumulative integration time refers to the total time during which observational data have been captured and effectively processed to obtain a final image of an object or region of the sky; for instance, 60 min of cumulative integration corresponds to roughly 360 stacked unit frames of 10 s.
As described in [28,29], collecting astronomical images from an area heavily impacted by light pollution, such as Luxembourg, France, and Belgium, posed a significant challenge due to the need for a long-term observation period. EAA is an outdoor activity that is subject to weather conditions, requiring us to be readily available when suitable conditions arose. It was also necessary to consider that observation sessions varied greatly in length between seasons: summer nights were shorter, while winter nights were much longer. Despite these constraints, we were able to capture a large number of images under normal observing conditions, leading to images of heterogeneous quality. This is in contrast to surveys conducted under ideal conditions for capturing astronomical data, such as those using space telescopes or professional ground-based observatories located in deserts.
Data are available from open archives. On the one hand, the raw data captured with Stellina correspond to 205 observation sessions, totalling 50,068 FITS images with a resolution of 3096 × 2080 pixels (with a field of view of approximately 1° × 0.7°) [28]. On the other hand, the post-processed images obtained with both Stellina and Vespera are published as a set of 4696 astronomical RGB images in JPEG format (minimal compression) with a resolution of 608 × 608 pixels [29].
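As an illustration, the following minimal Python sketch (not part of the published pipeline) shows how these archives can be opened once downloaded locally; the file names are placeholders, and the astropy and Pillow packages are assumed to be installed.

from astropy.io import fits          # assumed dependency for reading FITS frames
from PIL import Image                # assumed dependency for reading JPEG images

# Raw Stellina frame (3096 x 2080 sensor resolution), placeholder file name
with fits.open("stellina_session_0001.fits") as hdul:
    raw = hdul[0].data               # NumPy array of raw pixel values
    print(raw.shape, raw.dtype)

# Post-processed RGB image from DeepSpaceYoloDataset (608 x 608 pixels), placeholder file name
rgb = Image.open("deepspaceyolo_0001.jpg")
print(rgb.size, rgb.mode)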

4. Proposed Methods

In this section, we propose three approaches based on DL, presented in increasing order of complexity. The first involves computing bounding boxes after ignoring point sources; by point sources, we mean objects that appear as single, small points of light in astronomical images, as opposed to the DSO that we want to detect, which may appear as more diffuse sources. The second approach involves training a dedicated model to detect the presence of DSO, and the third describes how to determine the position of DSO in a different way.

4.1. Naive Approach: Remove Point Sources and Compute Bounding Boxes

A naive approach consists in ignoring the point sources in the images and detecting only the DSO of interest (i.e., nebulae, galaxies, globular clusters, etc.).
In fact, this task cannot be done easily with conventional CV techniques:
  • Point sources vary in size and colour, not to mention their halos, which can be more or less pronounced; they cannot be removed with a simple filtering/masking operation.
  • When a large point source lies in front of a nebula or galaxy, an inpainting phase is required to reconstruct the missing data behind it.
Recently, AI techniques were proposed to process astronomical images [30]. Some of them aim to distinguish stars from galaxies (star-galaxy classification), and studies are regularly published on the subject [31,32,33].
A popular technique among astrophotography enthusiasts is StarNet (2.0.0) 5: it removes point sources from images with a supervised model, producing a striking visual effect, especially on nebulae and galaxies of large apparent size (Figure 2). Here, we have defined a method to apply StarNet on images and then compute bounding boxes around the DSO that are present in the images generated by StarNet (Algorithm 1). A Python prototype was developed by relying on Python packages like openCV (4.7.0) 6, scikit-image (0.20.0) 7, and StarNet.
Algorithm 1: Computing bounding boxes around DSO in astronomical images
Data: Input astronomical image I
Result: Astronomical image with bounding boxes drawn around DSO
1  R ← Apply the StarNet model on I;
2  R ← Convert R to grayscale;
3  R ← Apply edge detection using Canny on R;
4  bbList ← Find bounding boxes by using a dedicated algorithm on R;
5  foreach bb in bbList do
6      Draw bb on I;
7  return I
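As an illustration, the following Python sketch shows one possible implementation of Algorithm 1 with openCV; the run_starnet helper is a placeholder (StarNet is distributed as a separate tool/model), and the Canny thresholds are arbitrary example values rather than tuned parameters from our prototype.

import cv2

def run_starnet(image):
    # Placeholder: in the prototype, the StarNet model removes point sources here.
    raise NotImplementedError

def detect_dso_bounding_boxes(image_path, canny_low=50, canny_high=150):
    image = cv2.imread(image_path)                      # input astronomical image I
    starless = run_starnet(image)                       # 1: R <- StarNet(I)
    gray = cv2.cvtColor(starless, cv2.COLOR_BGR2GRAY)   # 2: R <- grayscale(R)
    edges = cv2.Canny(gray, canny_low, canny_high)      # 3: R <- Canny(R)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:                            # 4-6: find and draw bounding boxes
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return image                                        # 7: return the annotated image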
Despite the remarkable visual effect, the StarNet model does not allow DSO to be cleanly separated from the sky background, especially with noisy images, the faintest nebulae, galaxies of small apparent size, and large individual stars with halos.
In the next section, we propose to build on the outputs of this approach to train a supervised DL model.

4.2. Training a Custom YOLOv7 Model

To go further, we refined the bounding boxes generated with the previous approach by using the Make Sense tool (1.11.0-alpha) [34]: this interactive web-based tool allows one to draw and resize bounding boxes with a mouse, and finally store them using the YOLO format (Figure 3).
As a result, we obtained DeepSpaceYoloDataset [29], a dataset formatted with the YOLO standard, i.e., a ZIP file containing 4696 RGB images in JPEG format (minimal compression) and 4696 text files containing the positions of the DSO, with separate files for images and annotations, usable by state-of-the-art training tools [35].
DeepSpaceYoloDataset was then used to train a YOLOv7 model by applying transfer learning (i.e., starting from the default pre-trained YOLOv7 model 8); the official implementation [35] was used with the following settings:
python3 train.py --weights yolov7.pt
          --data "data/custom.yaml"
          --single-cls
          --workers 8 --batch-size 4 --img 608
          --cfg cfg/training/yolov7.yaml
          --name yolov7-all
          --hyp data/hyp.scratch.p5.yaml
          --epochs 200
As a result, we obtained a YOLOv7 model with an acceptable accuracy (precision of 0.6 and recall of 0.5), i.e., a model able to detect the presence and positions of DSO in RGB astronomical images (see the Supplementary Materials to watch videos produced with the model). We also tested training models without transfer learning, and the results were not good (a long training period to achieve lower precision and recall).
In other words, the trained YOLO model transforms an input image into an annotated image, i.e., with bounding boxes corresponding to the positions of the detected DSO (as shown, for example, with an image targeting the Messier 49 galaxy in Figure 3).
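For completeness, inference on new images can be run with the detection script of the same official implementation; a minimal example is given below, where the weights path (runs/train/yolov7-all/weights/best.pt) and the confidence threshold are assumptions based on the repository's defaults rather than values reported in this work.

python3 detect.py --weights runs/train/yolov7-all/weights/best.pt
          --source "path/to/smart_telescope_images"
          --img-size 608 --conf-thres 0.25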
A problem described in the literature is that YOLO-based techniques have difficulty detecting objects of small apparent size (i.e., a few pixels). In this case, it is objectively difficult to differentiate between a star and a DSO of very small apparent size in the images produced by smart telescopes.

4.3. Combining Binary Classification and eXplainable AI

Inspired by recent works in the industrial [36] and health [37] domains, this third approach consists in training a supervised binary classifier to detect the presence of DSO, then applying an eXplainable AI (XAI) technique to automatically identify their position. XAI is an active area of research that aims to make the results of an AI model explainable and interpretable. Nowadays, these techniques are also seen as tools for scientific discovery [38], in particular for understanding physical laws by comparing observations and AI predictions [39].
We have developed a pipeline to classify astronomical images with a DL model and then exploit the justification of the result with a post-hoc XAI technique to detect the celestial objects. In practice, the steps are as follows:
  • Starting from the data described in Section 3, we built a set of 4696 RGB images of 224 × 224 pixels, applying random crops on DeepSpaceYoloDataset to get a resolution that fits ResNet50 models (and similar architectures).
  • We formed two distinct groups: images with DSO and images without DSO (we made sure that each group was balanced—to have a classifier with good recall). Images with only stars are classified as images without DSO. This preparation was carried out by first identifying the type of objects targeted in the images using the Aladin tool [40].
  • We made 3 sets: training, validation and test (80%, 10%, 10%).
  • A dedicated Python prototype was developed to train a ResNet50 model to learn this binary classification. The basic image processing tasks were performed following best practices for optimizing CPU/GPU usage [41], and the prototype was run on a high-performance infrastructure with the following hardware specifications: 40 cores with 128 GB RAM (Intel(R) Xeon(R) Silver 4210 @ 2.20 GHz CPU) and NVIDIA Tesla V100-PCIE-32 GB.
  • Empirically, the following hyper-parameters were used during training: ADAM optimizer, learning rate of 0.001, 50 epochs, 16 images per batch. We thus obtained a ResNet50 model with an accuracy of 97% on the validation dataset. Note that the VGG16 and MobileNetV2 architectures were also tested, but the results were largely similar.
  • For the inference of results, we built a pipeline to analyse the output of the trained ResNet50 model with XRAI (Region-based Image Attribution) [42]. XRAI is an incremental method that progressively builds the attribution scores of regions (i.e., the regions of the image that are most important for the classification). XRAI is built upon Integrated Gradients (IG) [43], which uses a baseline (i.e., an image) to create the attribution map. The baseline choice is application-dependent; in our case, we operate under the assumption that a black one is appropriate because it corresponds to the sky background, and the attribution map is calculated according to the XRAI integration path, which reduces the attribution scores given to black pixels. In practice, we used the Python package saliency 9 and analysed the output of the last convolution layer.
  • To generate a heatmap indicating the attribution regions with the greatest predictive power, we keep only the top X% of the XRAI attribution scores (a minimal sketch of this inference pipeline is given after this list).
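The following Python sketch summarises the inference part of this pipeline under several assumptions: it uses PyTorch and the saliency package, a hypothetical weights file (resnet50_dso.pt), an assumed class convention (index 1 for "DSO present"), and it differentiates the classifier logit with respect to the input pixels, which is a simplification of the last-convolution-layer analysis used in our prototype.

import numpy as np
import torch
import torchvision
import saliency.core as saliency

# Binary ResNet50 classifier: class 0 = no DSO, class 1 = DSO present (assumed convention)
model = torchvision.models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("resnet50_dso.pt"))    # hypothetical weights file
model.eval()

def call_model_function(images, call_model_args=None, expected_keys=None):
    # Gradients of the "DSO present" logit w.r.t. the input, as expected by the saliency package.
    batch = torch.tensor(images, dtype=torch.float32).permute(0, 3, 1, 2)
    batch.requires_grad_(True)
    logits = model(batch)
    grads = torch.autograd.grad(logits[:, 1].sum(), batch)[0]
    return {saliency.INPUT_OUTPUT_GRADIENTS: grads.permute(0, 2, 3, 1).numpy()}

patch = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in for a real 224 x 224 patch
attributions = saliency.XRAI().GetMask(patch, call_model_function, call_model_args={})
threshold = np.percentile(attributions, 90)              # keep only the highest attribution scores
heatmap = attributions >= threshold                      # boolean mask of candidate DSO regions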
It is interesting to note that using XRAI after ResNet50 can also detect the position of DSO in the images (this is a side effect of the result justification), whereas the ResNet50 model alone can only detect the presence or absence of DSO. If an image is classified as having DSO, then the heatmap produced by XRAI localizes the pixels of the image that contribute to this classification. Looking at the results obtained for several types of DSO, keeping only the 90% highest XRAI attribution scores, we obtained the following:
  • Leo Triplet (Messier 65, Messier 66, NGC 3628): the three galaxies and a large point source are highlighted (Figure 4).
  • Markarian's Chain (Messier 84, Messier 86, NGC 4477, NGC 4473, NGC 4461, NGC 4458, NGC 4438 and NGC 4435): all the galaxies are highlighted by the heatmap (Figure 5).
  • Hercules Cluster (Messier 13): only the cluster core is highlighted, not the individual stars (Figure 6).
  • Little Dumbbell Nebula (Messier 76): the planetary nebula is highlighted, as well as a star surrounded by a large coloured halo (Figure 7).
By highlighting the pixels contributing to the classification, the pipeline can be used to deduce where the DSO are located. In some cases, we can observe that the XRAI heatmap highlights large stars or stars surrounded by a strong halo. This illustrates the difficulty the ResNet50 model has in correctly distinguishing these large point sources from small galaxies; this could be addressed by improving the training dataset 10.

5. Discussion

5.1. Accuracy of the Presented Approaches

To compare the approaches under the same conditions, we captured an additional dataset of 100 high-resolution images with smart telescopes (mixing nebulae, galaxies and globular clusters, essentially from the Messier catalogue) and we annotated them manually (i.e., drew the bounding boxes) with the MakeSense tool [34]. In other words, this dataset is independent of DeepSpaceYoloDataset: the images are not the same, and we captured them at the end of 2023. We evaluated the different approaches by comparing the bounding boxes of this dataset with the bounding boxes computed by each approach, using the Python mAP package [44] 11. In Table 1, we report the following metrics:
$\text{Precision} = \dfrac{TP}{TP + FP}$
$\text{Recall} = \dfrac{TP}{TP + FN}$
$F_1 = \dfrac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$
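For clarity, these metrics can be computed from the matched bounding-box counts with a short helper such as the following sketch, where tp, fp and fn are the true positive, false positive and false negative counts produced by the evaluation tool.

def detection_metrics(tp, fp, fn):
    # Precision, recall and F1-score from detection counts; guards against empty denominators.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

For example, with illustrative counts (not the actual ones), detection_metrics(79, 21, 76) returns a precision of 0.79, a recall close to 0.51 and an F1-score close to 0.62, matching the values reported for the YOLOv7 model in Table 1.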
As a result, applying the naive approach presented in Section 4.1 (i.e., the StarNet model followed by bounding box computation with openCV) is not very effective because there are always residuals (star halos, noise around stars, or both) in the images generated by StarNet, which means that the resulting detected bounding boxes are not satisfactory (precision of 0.45, recall of 0.36), particularly for faint DSO.
We tested the customized YOLOv7 model (Section 4.2) on these 100 high-resolution images and we obtained a precision of 0.79 and a recall of 0.51.
The approach based on XAI (Section 4.3) does not directly provide bounding boxes: we ran the ResNet50 model and applied XRAI to obtain the heatmap of each image in this set, and then computed the corresponding contours using openCV (in a similar way to what is done in Algorithm 1). This leads to a precision of 0.68 and a recall of 0.41 on the 100 annotated high-resolution images.
Unsurprisingly, the second approach is the best in terms of accuracy for these data, which is in line with expectations since it is based on a finely annotated YOLO dataset.
We did not go further in customising the models to obtain better accuracy—it is almost certain that better results can be obtained by fine-tuning the hyper-parameters for each approach. The idea of this paper is to present some promising techniques for accompanying observations during EAA sessions.

5.2. Performance Bottlenecks

Another point concerns calculation time on high-resolution images.
For instance, applying XRAI has a non-negligible cost in terms of computing time and resources; it requires more resources than a simple inference of the ResNet50 model. Consider the processing of a 3584 × 3584 astronomical image: with no overlap, it may be necessary to evaluate the ResNet50 prediction and the XRAI heatmap for 256 patches of 224 × 224 pixels, which may take some time depending on the hardware. To be efficient, the following strategies can be applied to minimise the number of calculations required (a minimal patch-processing sketch is given after this list):
  • Reduce the size of the image to reduce the number of patches to be evaluated.
  • Process only a relevant subset of patches—for example, ignoring those for which the ResNet50 classifier detects nothing.
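The second strategy can be summarised by the following Python sketch, where classify_patch and xrai_heatmap are placeholders for the ResNet50 inference and the XRAI computation described in Section 4.3.

import numpy as np

def annotate_large_image(image, classify_patch, xrai_heatmap, patch_size=224):
    # Non-overlapping grid of patches; XRAI is only computed where the classifier detects DSO.
    h, w, _ = image.shape
    heatmap = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            if not classify_patch(patch):                # cheap ResNet50 pass: skip empty patches
                continue
            heatmap[y:y + patch_size, x:x + patch_size] = xrai_heatmap(patch)
    return heatmap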
Even though the second strategy enables the heatmap to be calculated quickly on large images, we propose in the next subsection another approach, based on Generative AI, to annotate astronomical images in a single step by directly detecting and highlighting celestial objects.

5.3. Fast Approximation of the XRAI Heatmap with a Pix2Pix Model

To this end, we applied Generative Adversarial Networks (GAN), a class of DL frameworks frequently applied to CV tasks. In simple terms, a GAN is composed of two DL models: a generator that ingests an image as input and provides another image as output, and a discriminator that guides the generator during training by distinguishing real and generated images. Both are trained together through a supervised process, with the goal of obtaining a generator that produces realistic images. Among the numerous existing GAN architectures, we selected Pix2Pix, a conditional adversarial approach designed for image-to-image translation [45]. It has been applied in many use-cases such as image colourisation and enhancement [46], and even amplifier glow reduction [47].
Thus, a Pix2Pix model was designed to map the 4696 initial images (the image set described in Section 3) to annotated images (initial images merged with the heatmap obtained with ResNet50 and XRAI; we changed the heatmap colour to green in order to differentiate it from XRAI). We applied the standard Pix2Pix architecture as described and implemented in the official Tensorflow documentation 12, taking input images of 256 × 256 pixels, with the same resolution for outputs. The loss function was based on the Peak Signal-to-Noise Ratio (PSNR), the model was trained for 100 epochs, the batch size was set to 1, and the process used a learning rate of 0.0001. To improve the training phase, as described in [48], we applied random data augmentation during each epoch with the imgaug Python package 13.
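As an illustration of the PSNR-based objective, the following TensorFlow sketch defines a generator loss that maximises the PSNR between the generated annotation and the reference one; how it is combined with the adversarial loss follows the official Pix2Pix tutorial and is not reproduced here.

import tensorflow as tf

def psnr_generator_loss(target, generated, max_val=1.0):
    # tf.image.psnr returns one PSNR value per image in the batch (in dB);
    # minimising the negative mean PSNR pushes the generator towards the reference annotations.
    return -tf.reduce_mean(tf.image.psnr(target, generated, max_val=max_val))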
This led to a Pix2Pix model with a good PSNR (higher than 38), able to reproduce an annotated image (Figure 8) similar to what can be obtained with ResNet50 and the XRAI heatmap. We simply note that this model is slightly more sensitive to noise, especially when the noise is grouped in zones (which can sometimes happen with hot pixel zones [49]).
In terms of performance, running an inference with the Pix2Pix model on a patch of 256 × 256 pixels is a better alternative to calculating a heatmap with XRAI on a patch of 224 × 224 pixels: for example, the execution time is halved on a laptop without a GPU.

5.4. Benefits for EAA

The four approaches described in this paper can help to filter out images that do not contain signal for the expected celestial objects (due to bad weather conditions, for example).
A first benefit consists in automatically driving the capture of data during the night, especially with a smart telescope or equivalent device (such as the ZWO Asiair [49]). For instance, if the detection approaches presented in this paper find that DSO are present during an EAA session, then a notification can be sent to the end-user. Conversely, if after a certain period of time there is no detected signal in the live stacked image produced by the instrument, then the software driving the setup can warn the astronomer in order to stop the current capture.
A second benefit consists in supporting collaborative work for the collection and merging of data obtained during different observation sessions with different instruments. In principle, the more time a telescope spends observing a target, the higher the signal-to-noise ratio of the resulting data: the signal (such as light from a distant galaxy) accumulates over time, while the undesired noise (such as thermal fluctuations in the instrument) remains roughly constant. More precisely, accumulating data with several telescopes during different nights makes it possible to obtain a larger quantity of images which can then be filtered, aligned and stacked. The detection approaches presented in this paper can help to automatically determine whether merging data has an impact on the number of DSO visible in the resulting images, allowing the process to be driven accordingly.
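As a rough, idealised illustration (assuming unit frames of equal exposure and noise that is uncorrelated between frames), stacking N frames improves the signal-to-noise ratio as
$\mathrm{SNR}_{N} \approx \sqrt{N} \cdot \mathrm{SNR}_{1}$,
so merging, for example, four observation sessions of equal length roughly doubles the signal-to-noise ratio of a single session.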

5.5. Benefits for Outreach Events

In the context of the MILAN research project (MachIne Learning for AstroNomy), funded by the Luxembourg National Research Fund, we participated in several outreach events in the Luxembourg Greater Region to share our results at stargazing sessions open to the public, including:
  • ’Expo Sciences 2022’ and ’Expo Sciences 2023’ organized by Fondation Jeunes Scientifiques Luxembourg (Luxembourg). Participants were young people with a strong scientific interest.
  • ’Journée de la Statistique 2022’ organized by Luxembourg Science Center and STATEC (Luxembourg). Participants were of all ages and had a strong interest in mathematics and science.
  • ’Robotic Event 2023’ organized by École Internationale de Differdange (Luxembourg). Participants were mainly students and teachers.
  • ’Science Festival 2023’ organized by Luxembourg National Fund (Luxembourg). Participants were of all ages and backgrounds.
We gave demonstrations to event participants immediately after acquiring images with smart telescopes, or even during acquisition, with a simple laptop (no internet connection is required). When we set a date for an outreach event, we obviously have no guarantee of the weather conditions. To have a fallback option in case of a cloudy night, we have recorded videos to show how observations are made with a smart telescope 14. This also allows us to share our results, including captured images [28], with people who cannot attend the events. Among the topics discussed with the events' participants, we can mention:
  • Observing with both conventional and smart telescopes, to highlight the difference between what is seen (and sometimes invisible) with the naked eye and what can be seen in captured images. Participants are often surprised by what is visible through the eyepiece (the impressive number of stars), and amazed by the images of nebulae obtained in just a few minutes by smart telescopes.
  • Transferring smart telescope images to the laptop (via an FTP server; this feature is provided with the Vespera smart telescope).
  • Applying the described detection methods to these images and presenting the results to the participants (giving the opportunity to explain in simple language how to train and use AI models). The naive approach makes it possible to distinguish elements that are not visible in the initial images (Section 4.1), such as the dust lanes in the Andromeda Galaxy's arms. The YOLO model draws bounding boxes around small galaxies that are hard to see without zooming (Section 4.2). XRAI shows the precise bounding boxes of the DSO detected by the ResNet50 model (Section 4.3), and the Pix2Pix model highlights the zones of interest in the image in another way (Section 5.3).
  • With some curious people, the results opened up fascinating discussions that are difficult to transcribe here. In particular, they led us to explain why detected DSO are often invisible when viewed directly with the naked eye or through an eyepiece and a telescope, and then to describe and compare the different types of DSO (galaxies, open and globular clusters, nebulae, etc.). We did this through short oral question-and-answer games [50], which showed that the participants understood the concepts quickly and made the connection with their prior knowledge (sometimes linked to popular culture [51]).
During these outreach events, the main objective was to demonstrate that observing the night sky is above all fun, and that it can also be done easily with smart telescopes assisted by AI approaches providing contextual information such as automated object detection.

5.6. Limitations

The approaches presented in this paper were tested on images obtained with specific equipment (aperture between 50 and 80 mm, focal length between 200 mm and 400 mm, recent CMOS sensors, alt-azimuth mount) and under imperfect conditions. They can therefore be applied to images obtained with identical equipment or equipment with similar characteristics (i.e., other models of smart telescopes with similar technical specifications). Conversely, applying these techniques to images obtained with instruments of smaller or larger focal length would require building a dataset containing this type of data and then re-training the models.

6. Conclusions and Perspectives

This paper presents various approaches based on Deep Learning to detect deep sky objects in astronomical images captured with smart telescopes, which required collecting data for over 250 different targets visible from the Northern Hemisphere, with equipment accessible to amateurs. One approach is based on a technique well known to astrophotographers (StarNet), the second is a dedicated YOLO model trained on a customized annotated image set, the third is the result of a pipeline combining a ResNet50 binary classifier and the post-hoc XRAI method, and the last is a Generative AI model that mimics XRAI outputs. These approaches achieve an F1-score between 0.4 and 0.62 on images from smart telescopes used in areas subject to significant light pollution, during observation sessions of moderate duration. During outreach events, we applied them to guide and support night sky observation for both experienced astronomers and novices.
In future work, we plan to gather more astronomical data, especially from the Southern Hemisphere; we will develop additional Deep Learning methods to detect satellite trails; and we will work on optimizations to customize the presented approaches so that they can be embedded into low-resource devices.

Supplementary Materials

The following videos showcase the detection of deep sky objects in astronomical images captured with smart telescopes by the authors: (https://youtube.com/playlist?list=PLLu6LwGAL4dt1S6_c5-5X-TpadZqI_lGk&si=XqVXpV5TMkRH3X-7 accessed on 15 January 2024).

Author Contributions

Conceptualization, O.P.; methodology, O.P.; writing—original draft preparation, O.P. and M.J.; writing—review and editing, O.P. and M.J.; visualization, O.P.; supervision, O.P.; project administration, O.P.; funding acquisition, O.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Luxembourg National Research Fund (FNR), grant reference 15872557.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Raw astronomical images used during this study can be found on the following page: ’MILAN Sky Survey, raw images captured with a Stellina observation station’ (https://doi.org/10.57828/f5kg-gs25 accessed on 15 January 2024). Annotated astronomical images can be retrieved from this page: (https://zenodo.org/doi/10.5281/zenodo.8387070 accessed on 15 January 2024). Additional materials used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

Data processing and models training were realized on the LIST Artificial Intelligence and Data Analytics platform (AIDA) (https://www.list.lu/en/institute/rd-infrastructures/data-analytics-platform/ (accessed on 15 January 2024)), thanks to Raynald Jadoul and Jean-Francois Merche.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI: Artificial Intelligence
CV: Computer Vision
DL: Deep Learning
DSO: Deep Sky Objects
EAA: Electronically Assisted Astronomy
GAN: Generative Adversarial Networks
XAI: eXplainable Artificial Intelligence

Notes

1. https://www.sharpcap.co.uk (accessed on 15 January 2024).
2. https://www.astrodmx-capture.org.uk (accessed on 15 January 2024).
3. https://vaonis.com/stellina (accessed on 15 January 2024).
4. https://vaonis.com/vespera (accessed on 15 January 2024).
5. https://www.starnetastro.com (accessed on 15 January 2024).
6. https://pypi.org/project/opencv-python/ (accessed on 15 January 2024).
7. https://pypi.org/project/scikit-image/ (accessed on 15 January 2024).
8.
9. https://pypi.org/project/saliency/ (accessed on 15 January 2024).
10. This is the primary role of XAI: to enable the design and training of more accurate and robust models.
11. https://github.com/Cartucho/mAP (accessed on 15 January 2024).
12.
13. https://imgaug.readthedocs.io/en/latest/ (accessed on 15 January 2024).
14.

References

  1. Lallo, M.D. Experience with the Hubble Space Telescope: 20 years of an archetype. Opt. Eng. 2012, 51, 011011. [Google Scholar] [CrossRef]
  2. Gardner, J.P.; Mather, J.C.; Clampin, M.; Doyon, R.; Greenhouse, M.A.; Hammel, H.B.; Hutchings, J.B.; Jakobsen, P.; Lilly, S.J.; Long, K.S.; et al. The James Webb Space Telescope. Space Sci. Rev. 2006, 123, 485–606. [Google Scholar] [CrossRef]
  3. Racca, G.D.; Laureijs, R.; Stagnaro, L.; Salvignol, J.C.; Alvarez, J.L.; Criado, G.S.; Venancio, L.G.; Short, A.; Strada, P.; Bönke, T.; et al. The Euclid mission design. In Proceedings of the Space Telescopes and Instrumentation 2016: Optical, Infrared, and Millimeter Wave, SPIE, Edinburgh, UK, 26 June–1 July 2016; Volume 9904, pp. 235–257. [Google Scholar]
  4. Farney, M.N. Looking Up: Observational Astronomy for Everyone. Phys. Teach. 2022, 60, 226–228. [Google Scholar] [CrossRef]
  5. Varela Perez, A.M. The increasing effects of light pollution on professional and amateur astronomy. Science 2023, 380, 1136–1140. [Google Scholar] [CrossRef] [PubMed]
  6. Levchenko, I.; Xu, S.; Wu, Y.L.; Bazaka, K. Hopes and concerns for astronomy of satellite constellations. Nat. Astron. 2020, 4, 1012–1014. [Google Scholar] [CrossRef]
  7. Parisot, O.; Bruneau, P.; Hitzelberger, P.; Krebs, G.; Destruel, C. Improving accessibility for deep sky observation. ERCIM News 2022, 2022. [Google Scholar]
  8. Steinicke, W. Observing and cataloguing nebulae and star clusters: From Herschel to Dreyer’s New General Catalogue; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  9. Hunter, T.B.; Dobek, G.O.; McGaha, J.E. Astronomical Catalogs: An Overview. In Barnard Objects Then Now; Springer: Berlin/Heidelberg, Germany, 2023; pp. 41–76. [Google Scholar]
  10. Popescu, M.M. The impact of citizen scientist observations. Nat. Astron. 2023, 7, 516–517. [Google Scholar] [CrossRef]
  11. Drechsler, M.; Strottner, X.; Sainty, Y.; Fesen, R.A.; Kimeswenger, S.; Shull, J.M.; Falls, B.; Vergnes, C.; Martino, N.; Walker, S. Discovery of Extensive [O iii] Emission Near M31. Res. Notes AAS 2023, 7, 1. [Google Scholar] [CrossRef]
  12. Peluso, D.O.; Esposito, T.M.; Marchis, F.; Dalba, P.A.; Sgro, L.; Megowan-Romanowicz, C.; Pennypacker, C.; Carter, B.; Wright, D.; Avsar, A.M.; et al. The Unistellar Exoplanet Campaign: Citizen Science Results and Inherent Education Opportunities. Publ. Astron. Soc. Pac. 2023, 135, 015001. [Google Scholar] [CrossRef]
  13. Turpin, D. Kilonova-catcher: A new citizen science project to explore the multi-messenger transient sky. In Proceedings of the Annual Meeting of the French Society of Astronomy and Astrophysics, Paris, France, 21 June 2011; pp. 153–157. [Google Scholar]
  14. Agayeva, S.; Aivazyan, V.; Alishov, S.; Almualla, M.; Andrade, C.; Antier, S.; Bai, J.; Baransky, A.; Basa, S.; Bendjoya, P.; et al. The GRANDMA network in preparation for the fourth gravitational-wave observing run. In Proceedings of the Observatory Operations: Strategies, Processes, and Systems IX, SPIE, Montréal, QC, Canada, 17–22 July 2022; Volume 12186, pp. 440–452. [Google Scholar]
  15. Mattei, J.A. The AAVSO and its variable star data bank. In Proceedings of the International Astronomical Union Colloquium; Cambridge University Press: Cambridge, UK, 1989; Volume 110, pp. 222–224. [Google Scholar]
  16. Parker, G. Making Beautiful Deep-Sky Images; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  17. Cazeneuve, D.; Marchis, F.; Blaclard, G.; Asencio, J.; Martin, V. Detection of Occultation Events by Machine Learning for the Unistellar Network. In Proceedings of the AGU Fall Meeting Abstracts, New Orleans, LA, USA, 13–17 December 2021; Volume 2021. [Google Scholar]
  18. Billingsley, B.; Heyes, J.M.; Lesworth, T.; Sarzi, M. Can a robot be a scientist? Developing students’ epistemic insight through a lesson exploring the role of human creativity in astronomy. Phys. Educ. 2022, 58, 015501. [Google Scholar] [CrossRef]
  19. Lang, D.; Hogg, D.W.; Mierle, K.; Blanton, M.; Roweis, S. Astrometry. net: Blind astrometric calibration of arbitrary astronomical images. Astron. J. 2010, 139, 1782. [Google Scholar] [CrossRef]
  20. Hu, T.; Huang, K.; Cai, J.; Pang, X.; Hou, Y.; Zhang, Y.; Wang, H.; Cui, X. Intelligence of astronomical optical telescope: Present status and future perspectives. arXiv 2023, arXiv:2306.16834. [Google Scholar]
  21. Zheng, C.; Pulido, J.; Thorman, P.; Hamann, B. An improved method for object detection in astronomical images. Mon. Not. R. Astron. Soc. 2015, 451, 4445–4459. [Google Scholar] [CrossRef]
  22. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
  23. González, R.; Muñoz, R.; Hernández, C. Galaxy detection and identification using deep learning and data augmentation. Astron. Comput. 2018, 25, 103–109. [Google Scholar] [CrossRef]
  24. Dumitrescu, F.; Ceachi, B.; Truică, C.O.; Trăscău, M.; Florea, A.M. A Novel Deep Learning-Based Relabeling Architecture for Space Objects Detection from Partially Annotated Astronomical Images. Aerospace 2022, 9, 520. [Google Scholar] [CrossRef]
  25. Jiménez, M.; Alfaro, E.J.; Torres Torres, M.; Triguero, I. CzSL: Learning from citizen science, experts, and unlabelled data in astronomical image classification. Mon. Not. R. Astron. Soc. 2023, 526, 1742–1756. [Google Scholar] [CrossRef]
  26. Lamane, M.; Tabaa, M.; Klilou, A. New Approach Based on Pix2Pix–YOLOv7 mmWave Radar for Target Detection and Classification. Sensors 2023, 23, 9456. [Google Scholar] [CrossRef] [PubMed]
  27. Jia, P.; Zheng, Y.; Wang, M.; Yang, Z. A deep learning based astronomical target detection framework for multi-colour photometry sky survey projects. Astron. Comput. 2023, 42, 100687. [Google Scholar] [CrossRef]
  28. Parisot, O.; Hitzelberger, P.; Bruneau, P.; Krebs, G.; Destruel, C.; Vandame, B. MILAN Sky Survey, a dataset of raw deep sky images captured during one year with a Stellina automated telescope. Data Brief 2023, 48, 109133. [Google Scholar] [CrossRef]
  29. Parisot, O. DeepSpaceYoloDataset: Annotated Astronomical Images Captured with Smart Telescopes. Data 2024, 9, 12. [Google Scholar] [CrossRef]
  30. Kumar, A. Astronomy and AI Beyond Conventional Astronomy; IIM Calcutta: Calcutta, India, 2022. [Google Scholar]
  31. Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N. Wide field imaging—I. Applications of neural networks to object detection and star/galaxy classification. Mon. Not. R. Astron. Soc. 2000, 319, 700–716. [Google Scholar] [CrossRef]
  32. Kim, E.J.; Brunner, R.J. Star-galaxy classification using deep convolutional neural networks. Mon. Not. R. Astron. Soc. 2016, 464, 4463–4475. [Google Scholar]
  33. Muyskens, A.L.; Goumiri, I.R.; Priest, B.W.; Schneider, M.D.; Armstrong, R.E.; Bernstein, J.; Dana, R. Star–galaxy image separation with computationally efficient gaussian process classification. Astron. J. 2022, 163, 148. [Google Scholar] [CrossRef]
  34. Skalski, P. Make Sense. 2019. Available online: https://github.com/SkalskiP/make-sense/ (accessed on 15 January 2024).
  35. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  36. Roth, K.; Pemula, L.; Zepeda, J.; Schölkopf, B.; Brox, T.; Gehler, P. Towards total recall in industrial anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18 June–24 June 2022; pp. 14318–14328. [Google Scholar]
  37. Chaddad, A.; Peng, J.; Xu, J.; Bouridane, A. Survey of explainable AI techniques in healthcare. Sensors 2023, 23, 634. [Google Scholar] [CrossRef] [PubMed]
  38. Li, Z.; Ji, J.; Zhang, Y. From Kepler to Newton: Explainable AI for Science Discovery. In Proceedings of the ICML 2022 2nd AI for Science Workshop, Baltimore, MD, USA, 18 July 2022. [Google Scholar]
  39. Roscher, R.; Bohn, B.; Duarte, M.F.; Garcke, J. Explainable machine learning for scientific insights and discoveries. IEEE Access 2020, 8, 42200–42216. [Google Scholar] [CrossRef]
  40. Bonnarel, F.; Fernique, P.; Genova, F.; Bartlett, J.G.; Bienaymé, O.; Egret, D.; Florsch, J.; Ziaeepour, H.; Louys, M. ALADIN: A reference tool for identification of astronomical sources. In Proceedings of the Astronomical Data Analysis Software and Systems VIII, Urbana, IL, USA, 1–4 November 1999; Volume 172, p. 229. [Google Scholar]
  41. Castro, O.; Bruneau, P.; Sottet, J.S.; Torregrossa, D. Landscape of High-Performance Python to Develop Data Science and Machine Learning Applications. ACM Comput. Surv. 2023, 56, 1–30. [Google Scholar] [CrossRef]
  42. Kapishnikov, A.; Bolukbasi, T.; Viégas, F.; Terry, M. XRAI: Better attributions through regions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2019; pp. 4948–4957. [Google Scholar]
  43. Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic attribution for deep networks. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2017; pp. 3319–3328. [Google Scholar]
  44. Cartucho, J.; Ventura, R.; Veloso, M. Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2336–2341. [Google Scholar]
  45. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  46. KumarSingh, N.; Laddha, N.; James, J. An Enhanced Image Colorization using Modified Generative Adversarial Networks with Pix2Pix Method. In Proceedings of the 2023 International Conference on Recent Advances in Electrical, Electronics, Ubiquitous Communication, and Computational Intelligence (RAEEUCCI), IEEE, Chennai, India, 19–21 April 2023; pp. 1–8. [Google Scholar]
  47. Parisot, O. Amplifier Glow Reduction. European Patent Office. EP4207056A1, 5 July 2023. [Google Scholar]
  48. Tran, N.T.; Tran, V.H.; Nguyen, N.B.; Nguyen, T.K.; Cheung, N.M. On data augmentation for gan training. IEEE Trans. Image Process. 2021, 30, 1882–1897. [Google Scholar] [CrossRef]
  49. O’Brien, M. Computer Control of a Telescope. In A Deep Sky Astrophotography Primer; Springer: Berlin/Heidelberg, Germany, 2023; pp. 73–94. [Google Scholar]
  50. Locritani, M.; Merlino, S.; Garvani, S.; Di Laura, F. Fun educational and artistic teaching tools for science outreach. Geosci. Commun. 2020, 3, 179–190. [Google Scholar] [CrossRef]
  51. Stanway, E.R. Evidencing the interaction between science fiction enthusiasm and career aspirations in the UK astronomy community. arXiv 2022, arXiv:2208.05825. [Google Scholar]
Figure 1. Observation by the authors of the Lagoon Nebula (i.e., Messier 8) during a stargazing session with a Vespera smart telescope (6 July 2023). On the left, the image after 10 s of capture. On the right, the image after 1200 s of capture.
Figure 2. An image of the Andromeda Galaxy (i.e., Messier 31), captured with a Vespera telescope (5 September 2023) and then processed with StarNet to remove the point sources and keep only the DSO.
Figure 3. A field of view with a group of galaxies including Messier 49 captured with a Vespera smart telescope (27 March 2023). Red bounding boxes correspond to the annotations produced by the trained YOLOv7 model described in Section 4.2.
Figure 4. On the left, an image of the Leo Triplet (Messier 65, Messier 66, NGC 3628) captured with a Vespera smart telescope (27 March 2023). On the right, the XRAI heatmap highlighting the pixels that are considered by the ResNet50 classifier for detecting the presence of DSO.
Figure 5. On the left, an image of Markarian's Chain (Messier 84, Messier 86, NGC 4477, NGC 4473, NGC 4461, NGC 4458, NGC 4438 and NGC 4435) captured with a Vespera smart telescope (14 April 2023). On the right, the XRAI heatmap highlighting the pixels that are considered by the ResNet50 classifier for detecting the presence of DSO.
Figure 6. On the left, an image of the Great Cluster in Hercules (i.e., Messier 13) captured with a Stellina smart telescope (18 May 2023). On the right, the XRAI heatmap highlighting the pixels that are considered by the ResNet50 classifier for detecting the presence of DSO.
Figure 7. On the left, an image of the Little Dumbbell Nebula (Messier 76) captured with a Vespera smart telescope. On the right, the XRAI heatmap highlighting the pixels that are considered by the ResNet50 classifier for detecting the presence of DSO.
Figure 8. Capture of the Omega Nebula (Messier 17) with a Vespera smart telescope (28 July 2023), processed with the Pix2Pix model to highlight the DSO.
Table 1. Summary of results obtained on a dataset of 100 high-resolution images captured with smart telescopes.
Approach | Precision | Recall | F1-Score
Compute bounding boxes after having removed the point sources (Section 4.1) | 0.45 | 0.36 | 0.40
Compute bounding boxes with a dedicated YOLOv7 model (Section 4.2) | 0.79 | 0.51 | 0.62
Compute bounding boxes after ResNet50 and XRAI (Section 4.3) | 0.68 | 0.41 | 0.51
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

