Review

Artificial Intelligence in Astronomical Optical Telescopes: Present Status and Future Perspectives

by Kang Huang 1,2,3, Tianzhu Hu 1,2,*, Jingyi Cai 1,2, Xiushan Pan 1,2,3, Yonghui Hou 1,2,3, Lingzhe Xu 1,2, Huaiqing Wang 1,2, Yong Zhang 1,2,4,* and Xiangqun Cui 1,2,*
1 Nanjing Institute of Astronomical Optics & Technology, Chinese Academy of Sciences, Nanjing 210042, China
2 CAS Key Laboratory of Astronomical Optics & Technology, Nanjing Institute of Astronomical Optics & Technology, Nanjing 210042, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
* Authors to whom correspondence should be addressed.
Universe 2024, 10(5), 210; https://doi.org/10.3390/universe10050210
Submission received: 13 March 2024 / Revised: 19 April 2024 / Accepted: 3 May 2024 / Published: 8 May 2024
(This article belongs to the Section Space Science)

Abstract:
With new artificial intelligence (AI) technologies and application scenarios constantly emerging, AI has become widely used in astronomy and has driven notable progress in related fields. Although many papers have reviewed the application of AI in astronomy, few address telescope intelligence as a topic in its own right, making it difficult to gauge the current state of development and the research hotspots in this area. This paper combines the development history of AI technology with the difficulties inherent in critical telescope technologies, comprehensively introduces the development of and research hotspots in telescope intelligence, conducts a statistical analysis of the various research directions within it, and evaluates their respective merits. Research trends in each type of telescope intelligence are indicated. Finally, in light of the advantages of AI technology and trends in telescope development, potential future research hotspots in the field of telescope intelligence are identified.

1. Introduction

Sites with potential for excellent astronomical observations are limited to high-altitude areas, Antarctica, and outer space, making on-site operations challenging. The use of artificial intelligence (AI) to assist astronomers in harsh environments can alleviate burdens on them. AI can also enable the realization of functions that can only be achieved by complex equipment, thereby reducing equipment procurement and transportation costs and significantly alleviating the load on space telescopes. Additionally, AI can facilitate the scheduling of telescope missions and the diagnosis of faults, enhancing imaging quality and data output.
Since John McCarthy proposed the concept of artificial intelligence in 1956 [1], the field has undergone various stages of development. In the early 1970s, rule-based expert systems provided decision support in specific domains [2]. Neural networks developed rapidly during the 1980s due to the application of the backpropagation algorithm [3]. In the 1990s, statistical learning methods, such as support vector machines [4] and decision trees [5], gained prominence. The need to process massive amounts of data led to the application of feature dimensionality reduction technologies in feature engineering [6]. Meanwhile, the concept of deep learning emerged in 2006 [7]. After 2010, deep learning demonstrated its remarkable capabilities, with deep neural networks excelling in fields such as image recognition, speech recognition, and natural language processing, and more complex deep learning models, such as recurrent neural networks [8] and convolutional neural networks [9], were further developed. In recent years, large language models such as GPT-3 have attracted significant interest and been widely discussed. AI technologies are generally divided into connectionist and symbolist approaches: connectionism, represented by deep learning, uses networks of neurons for information processing, while symbolism, represented by knowledge graphs, encodes information as symbols and operates on them with rules. As early as the 1990s, neural networks were applied in the observation planning of the Hubble Space Telescope (HST) [10], expert systems were used in the fault diagnosis of HST energy systems [11], and statistical machine learning algorithms were widely used in the preprocessing of database data to label quasars, stars, and galaxies [12,13,14].
With AI’s evolution, its applications in telescope intelligence have broadened, encompassing the selection of excellent sites, the calibration of telescope optical systems, and the optimization of imaging quality [15,16,17]. In general, large, ground-based astronomical telescopes are integrated optical and mechatronic devices encompassing mechanical and drive systems, optical paths and optical systems, imaging observation, control systems, and environmental conditions. Each subsystem comprises numerous complex entities, including but not limited to the parts shown in Figure 1. In the future, with the relentless progression of cutting-edge AI technology, telescope technology will undergo significant changes.
Many articles have described the application of AI technology in astronomy. In 2010, Ball and Brunner [18] introduced the application of traditional machine learning algorithms, including the use of support vector machines (SVMs), artificial neural networks (ANNs), K-nearest neighbor (KNN) algorithms, kernel density estimations, the expectation–maximization algorithm (EM), self-organizing maps (SOMs), and K-means clustering algorithms, in astronomical big data mining. Fluke and Jacobs [19] introduced the application of the latest deep learning algorithms in the field of astronomy and divided AI tasks into classification, regression, clustering, prediction, generation, discovery, and the promotion of scientific ideas according to the type of task. Meher and Panda [20] and Sen et al. [21] provided more comprehensive and detailed introductions to the application of deep learning algorithms in the field of astronomy. However, the scarcity of telescope-specific discussions in these articles makes it impossible to understand trends and hotspots in current research on telescope intelligence.
This paper explores the lifecycle of astronomical optical telescopes (which, in general, operate primarily within the visible range, the near-ultraviolet range, and the infrared range [22]), delineating two principal phases of telescope intelligence: manufacturing planning and operational maintenance. Various aspects, such as observatory site condition assessments, optical system design, telescope observation schedules, fault diagnosis, image quality optimization, and telescope database management, are included, and this paper systematically outlines the role of AI technology in each field. Then, these focal points of research are statistically analyzed, delineating the merits of each investigative direction. Furthermore, emergent trends in telescope intelligence are indicated. Finally, this paper considers the advantages of AI technology and telescope development trends, identifying prospective hotspots in future telescope intelligence research.
The arrangement of this article is as follows. Section 2 comprehensively introduces telescope intelligence, with examples, and describes hotspots in all stages of astronomical optical telescope research. Section 3 discusses and analyzes current research on telescope intelligence and describes research trends and future hotspots. Finally, the main conclusions of the article are summarized in Section 4.

2. Telescope Intelligence

Telescope intelligence encompasses two primary domains: manufacturing planning and operational maintenance. The former can be further divided into astronomical observatory site selection intelligence and optical system intelligence, while the latter can be further divided into observation schedule intelligence, fault diagnosis intelligence, image quality optimization intelligence, and database intelligence.

2.1. Observatory Site Selection

The selection of observation sites for astronomical telescopes is crucial to maximizing their observational capabilities. Theoretical research and empirical methods for site selection have rapidly developed in recent decades, leading to the discovery of exceptional ground-based sites such as Maunakea in Hawaii [23], Paranal and La Silla in the Chilean highlands, La Palma in Spain [24,25], Dome A in Antarctica [26], and Lenghu on the Tibetan Plateau in China [27]. The environments of some sites are shown in Figure 2.
Selecting an astronomical observatory site requires the careful consideration of several critical observation parameters. These include the number of clear nights, atmospheric seeing conditions, precipitable water vapor (PWV), night sky brightness, meteorological parameters (e.g., wind speed and cloud distribution), artificial light pollution, and terrain coverage. AI methods play a crucial role in the statistical analysis and forecasting of these key indicators.

2.1.1. Assessment of Site Observation Conditions

In recent years, meteorological satellites, GIS (Geographic Information System) technologies, and all-sky cameras have played prominent roles in assessing astronomical observation conditions at target sites [28,29,30]. With the use of AI technology, station cloud cover statistics can be quickly assessed.
Conventional cloud identification, which relies on multiband thresholding rules for classifying cloud areas, necessitates specific detection equipment for the corresponding bands and may yield low-accuracy results. SVM, principal component analysis (PCA), and Bayesian methods are utilized for single-pixel classifications of satellite images; however, their performance is hindered by the absence of spatial information. Francis et al. [31] utilized a convolutional neural network (CNN) algorithm to amalgamate single-pixel and spatial information, realizing the high-precision identification of satellite cloud conditions. Mommert [32] used a machine learning model based on gradient boosting called lightGBM and a residual neural network to classify cloud conditions from the Lowell Observatory’s all-sky camera imagery, showing that lightGBM demonstrates superior accuracy. Li et al. [33] combined a CNN and a Transformer to classify and recognize cloud types, addressing the difficulty CNNs have in extracting global features. The model architecture is shown in Figure 3.
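To make the single-pixel classification baseline concrete, the sketch below trains a logistic-regression pixel classifier on synthetic two-band radiometric features. The band values and class distributions are invented for the example; the cited works used SVM, PCA, Bayesian, and CNN classifiers on real imagery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "single-pixel" features: two radiometric bands per pixel.
# Cloudy pixels are assumed brighter in both bands (a toy distribution).
n = 500
clear = rng.normal([0.2, 0.3], 0.08, size=(n, 2))
cloudy = rng.normal([0.7, 0.6], 0.08, size=(n, 2))
X = np.vstack([clear, cloudy])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic regression trained by batch gradient descent.
Xb = np.hstack([X, np.ones((2 * n, 1))])   # append a bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted cloud probability
    w -= 0.5 * Xb.T @ (p - y) / len(y)     # gradient step on log-loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because each pixel is classified independently of its neighbours, spatial structure (cloud edges, texture) is invisible to this model, which is precisely the limitation the CNN-based methods address.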
Meanwhile, employing AI techniques to classify statistical data from multiple stations can enable the prediction of PWV and sky background. Molano et al. [34] utilized unsupervised learning to cluster meteorological parameters from weather stations at sites across Colombia, identifying two astronomical sites with a high probability of very low PWV. Priyatikanto et al. [15] applied a random forest (RF) algorithm to classify sky brightness from different stations, enabling the monitoring of sky background brightness. Additionally, Kruk et al. [35] employed transfer learning and AutoML (automated machine learning) to analyze approximately two decades of Hubble Space Telescope imagery; their work quantifies the impact of artificial satellites on astronomical observations, a factor that should be considered in the selection of astronomical observatory sites.
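The station-clustering idea can be sketched with a minimal k-means implementation on synthetic station records. The two meteorological regimes, feature choices, and numbers below are invented for illustration; Molano et al. worked with real multi-station data and richer parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy station records: columns are (PWV in mm, cloud fraction).
# Two assumed regimes: dry high-altitude sites and humid lowland sites.
dry = rng.normal([2.0, 0.1], [0.5, 0.05], size=(40, 2))
humid = rng.normal([15.0, 0.6], [2.0, 0.1], size=(40, 2))
X = np.vstack([dry, humid])

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign points, then recompute centres."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Keep the old centre if a cluster happens to be empty.
        centers = np.array([X[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)
# The cluster with the lower mean PWV is the candidate observatory group.
best = centers[:, 0].argmin()
print("low-PWV cluster centre:", centers[best])
```

In practice the clusters would be validated against on-site PWV monitoring before committing to a site.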

2.1.2. Site Seeing Estimate and Prediction

The structure of the Earth’s atmosphere, as depicted in Figure 4, primarily consists of the troposphere, stratosphere, mesosphere, thermosphere, and exosphere. The troposphere can be further divided by altitude into the free atmosphere and the planetary boundary layer, which has the greatest impact on atmospheric turbulence [36,37]. Monitoring and predicting atmospheric turbulence is of significant importance for enhancing telescope observational efficiency.
As a major parameter reflecting the intensity of atmospheric turbulence and the most crucial site parameter, the atmospheric seeing of observatory sites significantly affects the image quality of optical telescopes. Certain models estimate atmospheric seeing by correlating atmospheric parameters with the integrated parameters of optical turbulence (astronomical optical parameters), such as the Dewan model [38], the Coulman-Vernin model [39], and the AXP model (the parameters A and p are functions related to the altitude h) proposed by Trinquet and Vernin [40]. However, these models, as they are based on statistical data from multiple stations, are less effective for specific site analyses. AI techniques can establish relationships between atmospheric parameters and astronomical optical parameters for a given station.
$C_N^2$, a key parameter reflecting changes in optical turbulence intensity, is also an important parameter for deriving atmospheric seeing through an analytical model. In a pioneering effort, Wang and Basu [41] first used ANNs on atmospheric temperature, pressure, and relative humidity data from the Mauna Loa Observatory to estimate the refractive index structure constant $C_N^2$. Jellen et al. [42] used the RF method to predict the $C_N^2$ of the near-surface atmosphere and studied the contribution of environmental parameters to optical turbulence. Su et al. [43] applied an optimized backpropagation (BP) network to data obtained from Chinese Antarctic scientific research, showing that $C_N^2$ forecasts based on this method correlated reliably with measurements. Vorontsov et al. [44] processed short-exposure laser beam intensity scintillation patterns with deep neural networks (DNNs) to predict $C_N^2$, achieving superior measurement accuracy and a higher temporal resolution. Bi et al. [45] used a GA-BP (Genetic Algorithm Backpropagation) neural network to train on and predict meteorological parameters collected by an instrument, a technique from which the relevant astronomical optical parameters can be deduced. Grose and Watson [46] employed a turbulence prediction method based on recurrent neural networks (RNNs), utilizing prior environmental parameters to forecast turbulence parameters for the following 3 h.
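The core regression task in these studies, mapping routine meteorological parameters to $C_N^2$, can be illustrated with a deliberately simplified linear least-squares fit on synthetic data. The toy relation between $\log_{10} C_N^2$, temperature gradient, and wind speed below is invented; the cited works used ANNs, RFs, and GA-BP networks on measured profiles.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic near-surface records: temperature gradient (K/m), wind speed (m/s).
n = 300
dT = rng.uniform(0.01, 0.5, n)
wind = rng.uniform(0.5, 10.0, n)
# Assumed toy relation: log10 Cn2 grows with the temperature gradient
# and weakly with wind speed, plus measurement noise.
log_cn2 = -16.0 + 2.0 * dT + 0.05 * wind + rng.normal(0, 0.1, n)

# Fit log10 Cn2 = a*dT + b*wind + c by ordinary least squares.
A = np.column_stack([dT, wind, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, log_cn2, rcond=None)
resid = log_cn2 - A @ coef
rmse = np.sqrt((resid ** 2).mean())
print(f"fit coefficients: {coef.round(2)}, RMSE: {rmse:.3f} dex")
```

Real turbulence is strongly nonlinear in these inputs, which is why the literature moves from linear fits to neural networks and tree ensembles.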
AI techniques have also been used to forecast seeing directly. Kornilov [47] analyzed atmospheric optical turbulence data above Mount Shatdzhatmaz and predicted short-term seeing. Milli et al. [48] constructed a seeing prediction strategy based on the RF method for the Paranal Observatory to provide a reference for optimizing telescope observation efficiency. Giordano et al. [49] used the atmospheric parameter database of the Calern Observatory as an input for statistical learning to predict turbulence conditions while illustrating the importance of the in situ parameter characteristics of the station [50]. The Maunakea Observatory has used its observation and forecast data to build a machine learning seeing prediction model that can make predictions for the following five nights [51,52]. Turchi et al. [53] used the RF method to make short-time-scale (1–2 h) forecasts of atmospheric turbulence and seeing above the Very Large Telescope (VLT). Hou et al. [54] employed wind speed and temperature gradient data acquired from Antarctic Dome A as inputs and predicted seeing based on long short-term memory (LSTM) and Gaussian Process Regression (GPR). Masciadri et al. [55] introduced a method for short-term (1–2 h) predictions of astroclimate parameters, including seeing, airmass, coherence time, and ground-layer fraction, demonstrating its effectiveness at the VLT. Ni et al. [56] used in situ monitoring data from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) to predict seeing through a variety of AI techniques, including the statistical models ARIMA and Prophet, the machine learning methods Multilayer Perceptron (MLP) and XGBoost, and the deep learning methods LSTM, Gated Recurrent Unit (GRU), and Transformer. The method, input parameters, output parameters, and statistical operators of some methods mentioned above are shown in Table 1.
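The time-series character of seeing forecasting can be sketched with a simple autoregressive predictor fitted by least squares, a lightweight stand-in for the ARIMA, LSTM, and GRU models used in the cited studies. The seeing series below is synthetic, generated from an assumed AR(1)-like process.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic seeing series (arcsec): mean-reverting around 0.8" with noise.
T = 400
seeing = np.empty(T)
seeing[0] = 1.0
for t in range(1, T):
    seeing[t] = 0.8 + 0.6 * (seeing[t - 1] - 0.8) + rng.normal(0, 0.05)

# Fit an AR(p) predictor by least squares on lagged windows.
p = 3
rows = np.array([seeing[t - p:t] for t in range(p, T)])
target = seeing[p:T]
coef, *_ = np.linalg.lstsq(np.column_stack([rows, np.ones(len(rows))]),
                           target, rcond=None)

# One-step-ahead forecast from the last p observations.
forecast = np.dot(coef[:p], seeing[-p:]) + coef[p]
print(f"next-step seeing forecast: {forecast:.2f} arcsec")
```

Recurrent models earn their keep when the dynamics depend on longer histories and on exogenous inputs (wind, temperature gradients) that a fixed-lag linear model cannot capture.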

2.2. Intelligence of Optical Systems

The optical system is the most crucial part of a telescope. Optical system misalignment directly affects imaging quality, resulting in the deformation of star shapes and the enlargement of star sizes. Traditional optical system calibration relies on a laser interferometer and wavefront detection equipment, which are challenging to operate in harsh environments. AI technology can replace or simplify equipment operations to achieve optical path and mirror surface calibration.

2.2.1. Optical Path Calibration

Large-aperture and wide-field survey telescopes often have a primary mirror with a fast focal ratio, making the secondary mirror more sensitive and requiring higher calibration accuracy [57]. By using laser interferometers and other equipment for manual adjustments, the mirror tilt accuracy can reach ten arc seconds, and the eccentricity accuracy can reach 0.1 mm [58]. Many computer-aided alignment methods have been developed to achieve higher-precision calibration, including the vector aberration theory proposed for the optical path alignment of the LSST (Large Synoptic Survey Telescope) and JWST (James Webb Space Telescope) [59]. With mature wavefront detection technology, the method of wavefront detection and inverse calculations of misalignment error based on Zernike polynomial decomposition have played enormous roles in the collimation and adjustment of telescopes.
The methods mentioned above require a wavefront sensor and knowledge of the relationship between the telescope’s aberrations and the adjustment error, and they face the challenge of adjusting multiple fields of view simultaneously. AI technology can construct a relationship between the adjustment error and the detection parameters. Wu et al. [57] used an ANN to construct a relationship between a star image obtained by a scientific camera and the adjustment error of a telescope to realize optical path calibration. Jia et al. [16] proposed a CNN-based algorithm to fit the relationship between the point spread function (PSF) and the four degrees of freedom of a secondary mirror, which can be used to align the secondary mirrors of wide-field survey telescopes. The Rubin Observatory uses a CNN with a self-attention mechanism to map the degrees of freedom of the primary mirror, secondary mirror, and focal plane to the final imaging of the scientific camera, achieving active adjustment of the attitude of the LSST telescope [60].
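For context, the classical computer-aided alignment that these networks extend can often be reduced to a linear sensitivity-matrix solve: to first order, low-order aberration coefficients respond linearly to small misalignments, so the misalignment is recovered with a pseudo-inverse. The sensitivity values below are invented for illustration and do not come from any real optical design.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed linear sensitivity matrix S: aberration coefficients = S @ misalignment.
# Rows: a few low-order aberration terms; columns: secondary-mirror degrees
# of freedom (decenter x/y, tilt x/y). Values are purely illustrative.
S = np.array([[ 0.8,  0.0,  0.3,  0.0],
              [ 0.0,  0.8,  0.0,  0.3],
              [ 0.2,  0.0, -0.5,  0.0],
              [ 0.0,  0.2,  0.0, -0.5],
              [ 0.1,  0.1,  0.1,  0.1]])

true_misalign = np.array([0.05, -0.02, 0.01, 0.03])    # the unknown state
measured = S @ true_misalign + rng.normal(0, 1e-4, 5)  # noisy coefficients

# Recover the misalignment with the pseudo-inverse of S.
estimate = np.linalg.pinv(S) @ measured
print("estimated misalignment:", estimate.round(3))
```

The linear model breaks down for large misalignments and field-dependent aberrations, which is where learned nonlinear mappings from star images or PSFs become attractive.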
In addition to being applied to the calibration of a telescope’s entire optical path, AI technology can also be applied to calibrating a specific device within a telescope. LAMOST is a large-field survey telescope whose focal surface carries 4000 optical fibers. Before each observation, each optical fiber needs to be moved to its corresponding position, and the positioning accuracy is closely related to the initial angle of the optical fiber head. A CNN is used to classify the camera pixels containing the fiber head, extract the fiber contour, and obtain the fiber’s initial angle [61].

2.2.2. Mirror Surface Calibration

Active optics technology is the primary means of realizing the surface shape calibration of large-aperture telescopes. An active optical system operates in two steps. The first is wavefront reconstruction, including phase retrieval, the phase diversity method, and wavefront sensor-based methods. In the second step, the calibration voltage is obtained from the corresponding relationship between the wavefront and the calibration voltage; this voltage is applied to the force actuators to adjust the surface shape [62].
AI technology can be applied to active optical technology to calibrate surface shape. A DNN can be used to directly construct the relationship between a point map obtained by a Shack–Hartmann wavefront sensor and the calibration voltage to improve calibration efficiency [63]. Bi-GRU can be used to obtain the corresponding relationship between the defocused star images and the wavefront [64]. A sketch map of the co-phasing approach using the Bi-GRU network is shown in Figure 5. An SVM can be utilized to overcome the shortcomings of curvature sensing, which is easily affected by atmospheric disturbances, and improve the calibration ability of curvature sensing [65].
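The wavefront sensor-based step can be illustrated by classical modal least-squares reconstruction from Shack–Hartmann slope measurements, the linear baseline that the DNN and Bi-GRU approaches above learn to replace. The four-mode basis, sampling grid, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Pupil sample points where a Shack-Hartmann sensor measures local slopes.
x, y = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
x, y = x.ravel(), y.ravel()

# Simple modal basis (a stand-in for Zernike modes): tip, tilt, defocus,
# astigmatism. Each entry gives (d/dx, d/dy) of the mode at every point.
grad = {
    "tip":     (np.ones_like(x), np.zeros_like(x)),
    "tilt":    (np.zeros_like(x), np.ones_like(x)),
    "defocus": (2 * x, 2 * y),
    "astig":   (2 * x, -2 * y),
}
# Stack x-slopes over y-slopes into one measurement vector per mode.
A = np.column_stack([np.concatenate(g) for g in grad.values()])

true_coeffs = np.array([0.3, -0.1, 0.5, 0.2])
slopes = A @ true_coeffs + rng.normal(0, 1e-3, A.shape[0])

# Least-squares modal reconstruction from the noisy slopes.
recovered, *_ = np.linalg.lstsq(A, slopes, rcond=None)
print("recovered modal coefficients:", recovered.round(2))
```

The recovered coefficients would then be mapped to actuator voltages through the (separately calibrated) wavefront-to-voltage relationship.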
Surface calibration for segmented mirror telescopes mainly involves piston error and tip/tilt error; detection methods for tip/tilt error are relatively mature, while piston error detection remains challenging and is prone to $2\pi$ ambiguity errors. An ANN can be used to build the relationship between the piston error and the amplitude of a modulation transfer function’s sidelobes to detect the piston error [66]. A CNN can be used to distinguish the range of the piston error and improve the sensitivity of the Shack–Hartmann wavefront sensor to the $2\pi$ error [67]. Wang et al. [68] proposed a CNN-based multichannel left-subtract-right feature vector piston error detection method, which can extend the detection range of the piston error to between −139λ and 139λ.

2.3. Intelligent Scheduling

A telescope executes observation tasks according to an observation schedule. Long-term scheduling allocates observation time over the next few months, selecting the most valuable proposals from a large pool; observations are then conducted according to each proposal’s priority, observation time, observation goals, and the constraints of the telescope itself. At the same time, short-term plans must be formulated to adjust the schedule for new targets or transient phenomena.
Observation planning requires the calculation of many observation tasks to find the best observation plan. Manual planning is not sufficient for long-term task planning. Research in this area has been ongoing since the mid-1950s, ranging from simple heuristics to more complex genetic algorithms or neural networks. AI techniques are widely used in observation-planning tasks. Granzer [69] introduced traditional observation-planning methods, including queue planning, critical path planning, optimal planning, and allocation planning. Colome et al. [70] further comprehensively introduced observation-planning techniques developed over the past 50 years, mainly based on genetic algorithms, the ant colony optimization algorithm, multiobjective evolutionary algorithms, and other new methods.
In the 1990s, the Hubble Space Telescope utilized an ANN-based SPIKE system [10,71] to generate observation plans and extended its use to the VLT and Subaru telescopes. This neural network is built upon the Hopfield discrete neural network framework [72], with the addition of a guard network to prevent it from becoming trapped in local minima.
The telescope scheduling process is transformed into a constraint satisfaction problem in which N observation activities are scheduled into M time intervals. The schedule result can be represented by an $N \times M$ neural network, where $y_{im}$ represents the state of observation activity $A_i$ in time interval $m$. The constraints are categorized into unary constraints $b_{im}$ and binary constraints $W_{im,jn}$. Unary constraints apply to a single observation activity $A_i$ only. Binary constraints are derived from suitability functions, which specify the impact of scheduling one activity $A_i$ on another $A_j$. The neural node states $y_{im}$ are initialized randomly and then updated; the input $x_{im}$ is given by the following equation:

$$x_{im} = \sum_{j,n} W_{im,jn}\, y_{jn} + b_{im}.$$

A neuron selected for updating computes its state from its input via the following step transfer function:

$$y_{im} \leftarrow \eta(x_{im}) = \begin{cases} 1, & \text{if } x_{im} > 0 \\ 0, & \text{if } x_{im} \le 0 \end{cases}$$

and once the total utility $U$ stabilizes, the schedule result is output. Here,

$$U = \frac{1}{2}\sum_{m,n=1}^{M}\sum_{i,j=1}^{N} W_{im,jn}\, y_{im}\, y_{jn} + \sum_{m=1}^{M}\sum_{i=1}^{N} b_{im}\, y_{im}.$$
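The update rule above can be sketched in code. The following is a minimal, illustrative Hopfield-style scheduler, not the SPIKE implementation: the problem size, suitability values, and penalty weights are invented for the toy example, and the guard network that SPIKE adds to escape local minima is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 4, 6   # N observation activities, M time slots

# Unary suitability b[i, m]: benefit of placing activity i in slot m.
b = rng.uniform(0.1, 1.0, size=(N, M))

# Binary constraints W[i, m, j, n]: penalize two activities sharing a slot
# and one activity occupying two slots (a toy constraint set).
W = np.zeros((N, M, N, M))
for i in range(N):
    for m in range(M):
        for j in range(N):
            if j != i:
                W[i, m, j, m] = -2.0               # slot conflict
        W[i, m, i, np.arange(M) != m] = -2.0       # duplicate placement

# Asynchronous updates with the step transfer function until stable.
y = rng.integers(0, 2, size=(N, M)).astype(float)
changed = True
while changed:
    changed = False
    for idx in rng.permutation(N * M):
        i, m = divmod(idx, M)
        x = np.tensordot(W[i, m], y, axes=2) + b[i, m]   # x_im
        new = 1.0 if x > 0 else 0.0                      # step function
        if new != y[i, m]:
            y[i, m], changed = new, True

print("slot assigned to each activity:", y.argmax(axis=1))
```

Because W is symmetric with zero self-coupling, each flip increases the total utility U, so the loop terminates at a state where every activity holds exactly one conflict-free slot.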
The non-dominated sorting genetic algorithm (NSGA-II) was employed in the DSAN, RTS2, and EChO projects [73] and the 3.5-m Zeiss telescope [74]. It is suitable for long-term planning tasks and can be used in conjunction with constraint-based methods. The generalized differential evolution 3 (GDE3) algorithm is more efficient than NSGA-II, and it was combined with SPIKE to provide observation planning for the JWST [75]. In addition, it is also used in the DSAN project. The SWO (Squeaky Wheel Optimization) optimizer based on the greedy algorithm is used in the SOFIA [76], Mars Rover, and THEMIS projects. Reinforcement learning is used for the planning of LSST telescopes and the ordering of sky areas observed by optical telescopes, improving the probability of optical telescopes discovering transient astronomical phenomena such as gravitational waves, gamma-ray bursts, and kilonovae [77,78].

2.4. Fault Diagnosis

Real-time fault monitoring and efficient fault diagnosis can prevent observation time waste and ensure high-quality imaging. In addition, equipment faults in telescopes can lead to significant economic losses. Traditional telescope fault diagnosis involves installing numerous sensors to monitor telescope parameters, such as voltage and meteorological conditions, and setting thresholds based on experience. When these parameters exceed the threshold, the alarm system will issue a warning and identify the location and cause of the fault.
The application of AI technology in fault diagnosis has a long history. Since the 1980s, fault diagnosis systems based on expert knowledge have been widely used in fields such as aerospace, automotive fault diagnosis, and telescopes. For instance, Dunham et al. [79] applied a knowledge-based diagnostic system to achieve fault diagnosis in the pointing and tracking system of the Hubble Space Telescope, while Bykat [11] used an expert system to diagnose faults in the energy system of the same telescope. Fault diagnosis systems based on expert knowledge have the advantages of strong logic and intuitive knowledge representation and are still widely used. Yun and Shi-hai [80] used a knowledge tree to implement artificial intelligence in the main-axis control system of the Antarctic Sky Survey Telescope AST3. Tang et al. [81] proposed a method for the rapid localization of faults in LAMOST’s fiber positioner based on LSTM.
In the 1980s, fault diagnosis systems based on ANNs gained popularity with the rise of neural networks. The fault diagnosis problem can be viewed as a classification problem in which correspondence between sensor features and faults can be established. In recent years, deep learning has advanced the use of neural networks for deep fault feature extraction. For instance, Teimoorinia et al. [82] combined an SOM and CNN to classify star image shapes for telescope imaging, enabling the timely detection of poor-quality telescope imaging. Similarly, Hu et al. [83] used CNNs to establish a relationship between telescope failure and telescope images, enabling the initial diagnosis of telescope faults. Recently, we also proposed a methodology for the real-time monitoring and diagnosis of the imaging quality of astronomical telescopes [84], incorporating AI technologies such as CNNs and knowledge graphs, and validated it using observational data from LAMOST. This has profound implications for the future of monitoring and diagnosing imaging quality in next-generation large telescopes. The framework of this approach is shown in Figure 6.
Neural networks are also being used for fault prediction by establishing a relationship between parameters before a fault occurs and the probability of the fault occurring. However, fault prediction in telescopes is still in its early stages due to a lack of training data.

2.5. Optimization of Imaging Quality

The imaging quality of an astronomical telescope can be quantified by the full width at half-maximum (FWHM) of the light intensity distribution; a smaller FWHM indicates better imaging quality. The measured image quality ($IQ_{Measured}$) is mainly affected by defects in the optical system, turbulence introduced by the dome, and atmospheric turbulence, expressed by the corresponding indicators $IQ_{Optics}$, $IQ_{Dome}$, and $IQ_{Atmosphere}$. These indicators follow the 5/3-power law for contributions from different turbulent layers [85,86]:

$$IQ_{Measured}^{5/3} = IQ_{Optics}^{5/3} + IQ_{Dome}^{5/3} + IQ_{Atmosphere}^{5/3}.$$
Assuming that the observatory site is confirmed and that the defects in the telescope’s optical system cannot be further improved, its imaging quality can be optimized by addressing atmospheric turbulence and dome seeing. This can be achieved through the use of adaptive optics technology to calibrate atmospheric turbulence and improve the dome design and ventilation system to enhance dome seeing.
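As a small worked example of the 5/3-power budget, one can solve for the dome-seeing term implied by a measured image quality. The FWHM values below are invented for illustration, not measurements from any telescope.

```python
# A minimal image-quality budget using the 5/3-power combination law.
# Illustrative FWHM values in arcseconds (assumed, not measured).
iq_optics, iq_atmosphere = 0.25, 0.60
iq_measured = 0.75

# Invert the budget for the dome-seeing contribution.
iq_dome = (iq_measured ** (5 / 3)
           - iq_optics ** (5 / 3)
           - iq_atmosphere ** (5 / 3)) ** (3 / 5)
print(f"implied dome seeing: {iq_dome:.2f} arcsec")
```

A budget like this tells an observatory which term (dome ventilation, optics, or adaptive optics) offers the largest headroom for improving delivered image quality.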

2.5.1. Dome Seeing

The primary factors that affect dome seeing are local temperature differences inside the dome, especially near the primary and secondary mirrors, and turbulent flow caused by temperature differences inside and outside the dome. Therefore, controlling the temperature difference is crucial for reducing deteriorations in imaging quality caused by dome seeing.
In general, the air temperature inside an observatory dome is higher than the ambient temperature at night. The initial solution to this problem was to ventilate the dome for a few hours before nighttime observation to bring the temperature inside the dome in line with the ambient temperature. However, because of the high thermal inertia of the primary and secondary mirrors, a few hours of ventilation is not enough to bring their temperature down to the ambient level. As a result, a local turbulence layer, known as mirror seeing, can form near the warmer mirror, leading to a deterioration in imaging quality.
To mitigate this issue, the general practice is to use temperature control means, such as air conditioning, during the daytime to adjust the temperature inside the dome, primarily the temperature of the primary and secondary mirrors, so they are consistent with the ambient temperature during nighttime observations. The accurate prediction of the ambient temperature during nighttime observations is crucial for temperature control.
Murtagh and Sarazin [87] utilized meteorological data recorded by the European Southern Observatory in La Silla and Paranal to predict the temperature 24 h ahead using the KNN method. Using the temperatures recorded at the current time and 24 and 48 h earlier, together with the air pressure at the current time and 24 h earlier, they achieved an accuracy of 85.1% within an error range of 2 °C and a 70% reliability in predicting good seeing. Aussem et al. [88] investigated the accuracy and adaptability of dynamic recurrent neural networks and KNN methods for time series predictions using the same data. They found that frequent interruptions in the recording sequence and insufficient data were the main factors limiting these two methods. Additionally, they showed that with the fuzzy coding of seeing (dividing the seeing into Good, Moderate, and Bad), the forecasting accuracy of dynamic recurrent neural networks outperforms that of KNN methods. Buffa and Porceddu [89] studied the problem of forecasting observatory site temperatures and proved that the nonlinear autoregressive neural network model is more competitive than the traditional linear filtering algorithm.
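A toy k-nearest-neighbours forecast in the spirit of the KNN approach above can be written in a few lines. The nightly temperature archive below is synthetic (a seasonal cycle plus noise), and the three-lag feature choice is an assumption for the sketch, not the exact feature set of [87].

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic nightly site-temperature archive with a seasonal cycle (°C).
days = np.arange(1000)
temp = 10 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1.0, len(days))

# Feature vector per day: temperatures now, 24 h earlier, and 48 h earlier;
# target: temperature 24 h later (needed for dome pre-conditioning).
X = np.column_stack([temp[2:-1], temp[1:-2], temp[:-3]])
ytarget = temp[3:]

def knn_predict(query, X, ytarget, k=5):
    """Average the targets of the k most similar historical nights."""
    d = np.linalg.norm(X - query, axis=1)
    return ytarget[np.argsort(d)[:k]].mean()

query = X[-1]
prediction = knn_predict(query, X[:-1], ytarget[:-1])
print(f"predicted temperature 24 h ahead: {prediction:.1f} °C")
```

The appeal of KNN here is that it needs no model of the site's thermodynamics, only a long enough archive of similar nights.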
In addition, Gilda et al. [17] developed a method for predicting the probability distribution function of observed image quality based on environmental conditions and observatory operating parameters using Canada–France–Hawaii Telescope data and a mixture density network. They then combined this approach with a robust variational autoencoder to forecast the optimal configuration of 12 vents to reduce the time required to reach a fixed signal-to-noise ratio (SNR) for observations. This approach has the potential to increase scientific output and improve the efficiency of astronomical observations. Figure 7 presents results on the improvement in MegaPrime Image Quality (MPIQ, MegaPrime is a scientific instrument of the Canada–France–Hawaii Telescope), as predicted by a mixture density network, given the (hypothetically) optimal vent configurations obtained from a restricted set of ID configurations selected by the robust variational autoencoder.

2.5.2. Adaptive Optics

Adaptive optics (AO) is a crucial tool for improving the imaging quality of ground-based optical telescopes by compensating for atmospheric turbulence. Since its first implementation on European Southern Observatory telescopes some 30 years ago, it has been widely used to optimize imaging quality. Guo et al. [90] reviewed machine learning methods in adaptive optics, including improving the performance of wavefront sensors, building wavefront-sensorless (WFSless) AO systems, and developing wavefront prediction techniques.
Improving the performance of traditional wavefront sensors parallels the improvements made to active optics, including enhancing the noise robustness of wavefront detection [91,92] and constructing a relationship between wavefront sensor images and the wavefront [93,94]. In addition to being implemented in conventional adaptive optics systems, AI techniques have also been utilized to overcome the sensitivity of multiobject adaptive optics systems to changes in the atmospheric turbulence profile [95].
WFSless systems do not use conventional wavefront sensing devices to reconstruct wavefronts. Kendrick et al. [96] used defocused images to reconstruct wavefronts with neural networks. Wong et al. [97] utilized bottleneck networks for more precise wavefront reconstruction, whose enhanced performance was confirmed on data from the adaptive optics system of the Subaru Telescope. Training and testing procedures for the proposed method are shown in Figure 8. With the wide adoption of deep learning, CNNs have also been employed in WFSless systems for wavefront detection [98,99,100,101].
The calibration frequency of adaptive optics is much higher than that of active optics, and the latency of wavefront reconstruction and feedback means that the wavefront applied by the corrective optics inevitably lags behind the actual wavefront. Speeding up wavefront reconstruction or predicting future wavefronts can address this problem. Montera et al. [102] compared the prediction accuracy of the linear minimum mean square error method and a neural network algorithm, finding that the latter performed better. Recurrent LSTM networks [103] and Bayesian regularization-based neural network architectures [104] have also been implemented for wavefront prediction.
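As a point of reference for what learned predictors must beat, the simplest approach is a linear autoregressive model fitted per wavefront mode: the next Zernike coefficient is predicted as a weighted sum of its recent values, with weights fitted by least squares. The sketch below is an illustrative AR(2) baseline of this kind, not the LSTM or Bayesian-regularized networks of [103,104].

```python
def fit_ar2(series):
    """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] via normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(series)):
        x1, x2, y = series[t - 1], series[t - 2], series[t]
        s11 += x1 * x1
        s12 += x1 * x2
        s22 += x2 * x2
        r1 += x1 * y
        r2 += x2 * y
    det = s11 * s22 - s12 * s12  # nonzero for any non-degenerate series
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

def predict_next(series, a, b):
    """One-step-ahead prediction, bridging the control loop's time delay."""
    return a * series[-1] + b * series[-2]
```

In a real AO loop one such predictor (or one shared model) would run for every controlled mode at kilohertz rates, so the constant-time update here matters more than fitting speed.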
In addition, AI technology is used to push the imaging of light sources toward ultra-high resolution. This includes using CNNs to build encoder–decoder architectures in which the encoding layers extract image features and the decoding layers output corrected images, exploiting multi-frame information to improve solar imaging resolution [105]. Generative adversarial networks have also been utilized to generate high-resolution solar magnetic field images [106,107]. However, as this research does not belong to traditional telescope technology, we do not discuss it further as a telescope-related intelligent technology.

2.6. Database Intelligence

The astronomical database system serves as a service platform for data storage and sharing, allowing astronomers and other users worldwide to share, obtain, and mine valuable information from astronomical catalogs [108]. To enhance the scientific value and richness of the data, it is crucial to establish intelligent astronomical databases. AI technology is well suited to database data fusion and classification because of its ability to automatically extract features [19].

2.6.1. Database Data Fusion

Astronomical databases have been accumulating data since the 1980s and comprise various sub-databases, such as star catalog databases, large-field multicolour sky survey databases, asteroid databases, and astronomical literature databases. The cross-fusion of databases is a significant trend in current astronomy. Table 2 shows the classification of these databases.
Bayesian-based methods are widely used in astronomical catalog cross-matching and image fusion [109]. Budavári and Szalay [110] developed a unified framework, grounded in Bayesian principles, for object matching that incorporates both spatial information and physical properties. Additionally, Medan et al. [111] presented a Bayesian method to cross-match 5,827,988 high-proper-motion Gaia sources with various photometric surveys. Furthermore, Bayesian methods are employed in image fusion to determine the probability that the data represent objects or background [112,113].
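For two detections with Gaussian positional uncertainties, the Bayes factor of the "same source" hypothesis against "two distinct sources" has a closed form depending only on the angular separation and the combined positional variance; converting it into a match probability requires a prior match fraction. The sketch below follows the form of the two-catalogue Bayes factor in [110] under a flat-sky approximation; the prior value is a placeholder the user must supply.

```python
import math

def bayes_factor(sep, sigma1, sigma2):
    """Bayes factor for 'same source' vs 'distinct sources'.

    sep: angular separation; sigma1, sigma2: Gaussian positional errors
    of the two detections. All angles in radians, flat-sky approximation.
    """
    s2 = sigma1 ** 2 + sigma2 ** 2
    return (2.0 / s2) * math.exp(-sep ** 2 / (2.0 * s2))

def match_probability(sep, sigma1, sigma2, prior=0.5):
    """Posterior match probability given a prior match fraction."""
    odds = bayes_factor(sep, sigma1, sigma2) * prior / (1.0 - prior)
    return odds / (1.0 + odds)
```

With arcsecond-scale astrometric errors the Bayes factor at zero separation is enormous, which is why close pairs cross-match almost deterministically while the prior only matters in the ambiguous tail.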
To enhance the connection between various datasets, it is necessary to strengthen the application of AI in database retrieval and outlier retrieval. As an example, Du et al. [114] developed a method based on the bipartite ranking model and bagging techniques, which can systematically search for specific rare spectra in the Sloan Digital Sky Survey (SDSS) spectral dataset with high accuracy and little time consumption. Similarly, Wang et al. [115] proposed an unsupervised hash learning-based rare spectral automatic approximate nearest neighbor search method which searches for rare celestial bodies based on spectral data and retrieves rare O-type stars and their subclasses from the LAMOST database.
Retrieving outliers in a database can improve the reliability of astronomical databases and support their fusion. Rebbapragada et al. [116] combined Periodic Curve Anomaly Detection with the K-means clustering algorithm to separate anomalous objects from known object categories. However, this method scales poorly and may miss outliers in massive, high-dimensional datasets. Nun et al. [117] therefore proposed a new RF-based method to automatically discover unknown anomalous objects in large astronomical catalogs, following up on the outliers with a more in-depth analysis by cross-matching them with all publicly available catalogs. Moreover, the use of multiple data sources may lose some of the association information between datasets. To address this issue, Ma et al. [118] proposed an outlier detection technique combining new KNN- and RNN-based density parameters to mine relevant outlier information from multisource mega-datasets.
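A building block common to these distance- and density-based detectors is the k-nearest-neighbour distance: a point whose k-th nearest neighbour is far away sits in a sparse region and is a candidate outlier. The sketch below implements this basic score in pure Python as an illustration of the underlying idea, not the specific density parameters of [118]; the brute-force loop is O(n²) and would need spatial indexing for large catalogs.

```python
import math

def knn_outlier_scores(points, k=3):
    """Score each point by the distance to its k-th nearest neighbour.

    Large scores flag points far from any local cluster. `points` is a
    list of equal-length coordinate tuples.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores
```

Thresholding these scores (or taking the top fraction) yields a candidate outlier list that can then be cross-matched against other catalogs, as in the follow-up strategy of [117].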

2.6.2. Database Data Labeling

Astronomical databases store an enormous number of features for each object, giving rise to high-dimensional data that are difficult to analyze. The use of AI to analyze and annotate the parameters of celestial objects in astronomical databases is therefore of great significance for further astronomical research.

Automatic Data Classification

The exponential growth of astronomical data has made the automatic classification of data generated by large-scale surveys crucial. Such classification includes distinguishing quasars, galaxies, and stars based on features such as their spectra, luminosity, and other celestial information.
Banerji et al. [119] performed a morphological classification of galaxy samples from SDSS DR6 based on ANNs and compared the results with the Galaxy Zoo. Ball and Brunner [18] used spectral data from the third SDSS data release to train a kd-tree classifier providing reliable classifications for all 143 million non-repetitive photometric objects. Zhang et al. [120] combined the KNN and RF approaches to separate quasars from stars. Aguerri et al. [121] applied an SVM to automatically classify approximately 700,000 galaxy samples from SDSS DR7 and provided the probability that each sample belongs to a given category.
Additionally, unsupervised techniques have been used to classify astronomical data. Mei et al. [122] adopted a three-dimensional convolutional autoencoder to implement an unsupervised spatial spectral feature classification strategy. Fraix-Burnet et al. [123] used the unsupervised clustering Fisher–EM (Fisher–expectation–maximization) algorithm to classify galaxies and quasars with spectral redshifts of less than 0.25 in the SDSS database.
Moreover, deep learning has been used to analyze high-dimensional spectral data to classify astronomical objects. Khalifa et al. [124] proposed a deep CNN structure for galaxy classification with high testing accuracy. Becker et al. [125] introduced a scalable end-to-end recurrent neural network scheme for variable star classification which can be extended to large datasets. Hinners et al. [126] discussed the effectiveness of LSTM and RNN deep learning in stellar light curve classification. Awang Iskandar et al. [127] used transfer learning to classify planetary nebulae in the HASH DB and Pan-STARRS databases. Barchi et al. [128] combined accurate visual classifications from the Galaxy Zoo project with machine learning and deep learning methodologies to improve galaxy classification in large datasets. Wu et al. [129] proposed a new model, the Image-EFficientNetV2-Spectrum Convolutional Neural Network, to classify spectra.

Preselecting Quasar Candidates

Quasars are a type of active galactic nucleus, and their classification is crucial in astronomical research. However, owing to the rarity of quasar samples, even a small amount of contamination can significantly increase the difficulty of identifying quasar candidates. Several studies have classified quasars using AI techniques. Gao et al. [13] studied the performance of SVM and kd-tree methods in classifying stars and quasars in multiband data. Richards et al. [130] used Bayesian methods to classify 5546 candidate quasars in the Sloan Digital Sky Survey (SDSS). Abraham et al. [131] addressed the task using the difference-boosting neural network method. Jiang et al. [132] identified cataclysmic variable candidates from SDSS and LAMOST spectra based on the SVM and RF methods. Schindler et al. [133] applied an RF machine learning algorithm to SDSS and Wide-field Infrared Survey Explorer photometry to classify quasars and estimate their photometric redshifts, proposing a quasar selection algorithm and a quasar candidate catalog.

Automatic Estimation of Photometric Redshift

Galaxy photometric redshift, the so-called photo-z, refers to the redshift of a celestial object estimated from medium- and wide-band photometric or imaging data. Photometric redshifts are key characteristics, especially for faint sources from which spectral data cannot be obtained. In recent years, many studies have measured celestial redshifts using AI techniques, which show clear advantages in reducing cost and time consumption. Gradient-boosting tree methods such as XGBoost and CatBoost [134,135], Gaussian mixture models [136,137], KNNs [138,139], SOMs [140], and other supervised machine learning models [141,142] have been implemented to estimate photo-z from multisource data with promising results. Combinations of neural networks and different machine learning methods have also been used to estimate photo-z [143,144,145]. Meanwhile, deep learning has become a powerful strategy for assessing photo-z: CNN-based approaches [146,147,148,149] have achieved strong results, demonstrating the feasibility of deep learning for photo-z measurement and estimation, such as the combination of three networks that jointly predicts morphology and photo-z shown in Figure 9.
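The simplest of these empirical approaches is instructive: treat photo-z estimation as regression in colour space and average the spectroscopic redshifts of the k training galaxies with the most similar colours. The sketch below is a minimal illustration of this KNN idea; the colours, k, and plain averaging are illustrative choices, and production methods additionally weight neighbours and propagate uncertainties.

```python
import math

def knn_photoz(train_colours, train_z, query_colour, k=3):
    """Estimate photo-z as the mean spectroscopic redshift of the k
    training galaxies nearest to the query in colour space.

    train_colours: list of colour tuples (e.g. (u-g, g-r, ...));
    train_z: matching spectroscopic redshifts; query_colour: same shape.
    """
    ranked = sorted(range(len(train_z)),
                    key=lambda i: math.dist(train_colours[i], query_colour))
    nearest = ranked[:k]
    return sum(train_z[i] for i in nearest) / k
```

The method's accuracy hinges entirely on how densely the spectroscopic training set samples colour space, which is why photo-z catalogs degrade for sources outside the training distribution.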

Measurement of Stellar Parameters

Astronomical databases require high-precision measurements to provide accurate positions, radial velocities, and physical parameters for a large number of individual stars. AI techniques have been increasingly applied to the automatic processing and measurement of stellar physical parameters. Bailer-Jones et al. [150] first adopted ANN methods to determine stellar atmospheric parameters from different spectral characteristics. KNN [151], PCA, and Bayesian methods [152,153,154] have been used to effectively estimate stellar physical parameters. With the rapid progress of AI, various machine learning methods have been utilized to analyze stellar parameters [155,156,157,158], and powerful machine learning-based pipeline tools have been implemented for stellar parameter estimation and measurement, such as ODUSSEAS [159], ROOSTER [160], and SUPPNet [161]. Benefiting from the huge volume of astrophysical data, deep learning methods are also widely used in this field and have become a research hotspot [162,163,164], yielding remarkable results in the assessment of stellar physical parameters.
In particular, LAMOST has obtained more than 20 million spectra, currently the largest such dataset of any telescope in the world. Machine learning and deep learning methods are utilized to evaluate and measure stellar parameters from these massive LAMOST data [165,166,167,168,169,170,171,172,173,174], and significant progress has been made in searching for special celestial bodies such as lithium-rich giants, metal-poor stars, hypervelocity stars, and white dwarfs.

3. Discussion

The articles cited in this review encompass both journal papers and conference papers. The methods covered are exclusively those that were empirically validated on a telescope or a telescope prototype; methods validated only on simulated data, which still require verification on a telescope, were not considered.

3.1. Telescope Intelligence Research Hotspots

AI technology has found applications in most facets of telescope operation. We categorize telescope intelligence research into six main areas: site selection, optical system calibration, observation scheduling, fault diagnosis, imaging quality optimization, and database optimization. These categories are further divided into subcategories, each serving as a distinct research direction.
To identify the focal points of each research direction, we conducted an extensive count of the journal articles published in each field, including the number of papers per direction, the total citation count, and the citations of articles published in the past five years. The statistics indicate that the number of articles on database data labeling far exceeds that in other areas, making a full accounting difficult; we therefore list the articles and citation counts from the past five years separately for this field. The results are presented in Figure 10. The citation counts for articles published in the last five years reveal that, in addition to database data labeling, the application of AI techniques in adaptive optics and site seeing assessment is currently a research hotspot.

3.2. Telescope Intelligence Research Trends

We analyze the future research trajectory of AI technology in this domain by comparing it with the traditional methods prevalent in each field in terms of time efficiency and accuracy. We use a scoring system in which an AI-based method is awarded 1 point when it surpasses traditional methods or accomplishes tasks that traditional methods cannot, 0 points when it matches the efficacy of traditional methods, and -1 point when it underperforms. Time efficiency and accuracy are scored separately, and the overall evaluation is the sum of the two scores. The higher the score, the more significant the advantages of AI technology, indicating a trend toward wider adoption in the future. A classification of the research directions based on these criteria is provided in Table 3. The results indicate that AI technology is particularly advantageous in the optimization of dome seeing, observational planning, and database data labeling.
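The scoring scheme reduces to simple arithmetic and can be written down directly. The sketch below ranks research directions by summed score; the direction names and scores used in the test are hypothetical placeholders, not the actual values of Table 3.

```python
def rank_directions(scores):
    """Rank research directions by total AI-advantage score.

    scores maps direction -> (time_score, accuracy_score), each in
    {-1, 0, 1}; the total therefore ranges from -2 to 2. Returns the
    direction names sorted best-first.
    """
    for name, (t, a) in scores.items():
        if t not in (-1, 0, 1) or a not in (-1, 0, 1):
            raise ValueError(f"invalid score for {name}")
    totals = {name: t + a for name, (t, a) in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)
```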

3.3. Future Hotspots of Telescope Intelligence

As our exploration of the universe deepens, the requirement for higher imaging sensitivity in telescopes is escalating, especially for observing more distant and dimmer celestial objects. Increasing the aperture of a telescope enhances its resolution, and several 30 m class telescopes are currently under construction. Optical interference using multiple smaller telescopes can also boost resolution while reducing the cost of constructing large-aperture telescopes.
To minimize the impact of atmospheric disturbances on telescope resolution, it is crucial to position these instruments at sites with superior seeing conditions and to develop high-performance adaptive optics systems. Space telescopes, unaffected by atmospheric disturbances, can deliver imaging at the optical diffraction limit, although at high cost. In the era of large-aperture telescopes, smaller telescopes continue to play significant roles, particularly in detecting transient sources such as pulsars, gamma-ray bursts, and gravitational wave counterparts. Coordinated observations employing small telescope arrays allow for uninterrupted all-sky monitoring of observational targets.
The following sections will delve into the challenges and complexities associated with ground-based large-aperture telescopes, optical interference technology, space telescopes, and small telescope arrays. These same challenges also represent exciting frontiers for applying AI technology in the field of telescope technology in the future.

3.3.1. Large-Aperture Telescopes and Optical Interference Technology

Currently, construction is underway on a series of 30 m class telescopes, including the Thirty Meter Telescope (TMT), the Extremely Large Telescope (ELT), and the Giant Magellan Telescope (GMT). These large-aperture telescopes offer superior light flux and angular resolution but are accompanied by more intricate structures.
Consider the ELT as an example: its optical system comprises five mirrors. The primary mirror, spanning 39 m in diameter, is assembled from 798 sub-mirrors, each with a 1.5 m aperture, and requires a surface accuracy of 10 nm. The M4 adaptive mirror, with a thickness of less than 2 mm and a diameter of 2.4 m, carries roughly 5000 magnets on its rear surface; this arrangement is designed to reshape the surface with an accuracy of 10 nm a thousand times per second [175].
For 30-m class optical telescopes, the testing of large-aperture sub-mirrors presents significant challenges due to airflows that cause image blurring and difficulties in assessing surface accuracy. Active optics demand control over a greater number of mirror surfaces and actuators; hence, the requirements for precision in active optical detection and control are elevated. The substantial weight of these large telescopes poses further challenges for support structures, while the increased size of the deformable adaptive optics necessitates more sophisticated manufacturing and control processes, all while maintaining high efficiency.
Optical interference technology, already deployed at telescopes such as the VLT, CHARA (Center for High Angular Resolution Astronomy), the Keck telescopes, and the LBT (Large Binocular Telescope), will retain its high-resolution edge even in the era of 30 m telescopes. The MRO (Magdalena Ridge Observatory) Interferometer, presently under construction and employing 10 telescopes for interference imaging, is projected to achieve a resolution 100 times greater than that of the Hubble Space Telescope [176]. Concurrently, the Nanjing Institute of Astronomical Optics & Technology of the Chinese Academy of Sciences is developing an optical interference project that incorporates three 600 mm aperture telescopes and a baseline length of 100 m. Optical interference, with its advantages of long baselines and a wide range of observable wavebands, is attracting widespread interest. However, current optical interference experiments achieve resolutions far below the theoretical limit owing to factors such as detector noise and telescope vibrations [177]. As a result, the development of AI techniques to enhance the wavelength and baseline dynamic range of optical interference will be a critical area for future exploration.

3.3.2. Space Telescope

Space telescopes, which operate free from atmospheric disturbances, include the recently successfully launched JWST, along with the planned Chinese Space Station Telescope (CSST), the Space Infrared Telescope for Cosmology and Astrophysics (SPICA), and the Large Ultraviolet Optical Infrared Surveyor (LUVOIR). However, these instruments pose unique challenges: their maintenance costs are substantial, their deployment presents significant hurdles, and they require advanced levels of automated control.
Among the key difficulties faced by space telescopes is the need for active optical technologies to maintain the integrity of their surfaces. This is akin to the active optical technologies employed in ground-based telescopes. Another challenge is the need to minimize equipment size. An example of such an innovation is the JWST’s design, which uses low-temperature programmable slit masks for multiobject spectroscopy [178]. The application of AI to replace intricate hardware devices will be a crucial area of research in the future. In addition, the high-resolution imaging generated by these telescopes produces vast amounts of data. Thus, achieving low-power, high-speed, long-distance data transmission will also be a significant focus of future research.

3.3.3. Small-Aperture Telescope Array

Contrary to what might be expected, the advent and construction of larger telescopes have not rendered smaller telescopes obsolete. Instead, these compact instruments have found wide-ranging applications in diverse fields, including the study of gamma-ray bursts, the detection of exoplanets, and the investigation of microlensing phenomena. By assembling arrays of small telescopes across multiple continents, researchers can continuously monitor specific astronomical targets or, alternatively, utilize wide-field small telescope arrays for sky survey observations.
Projects such as the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) facilitate rapid sky surveys by utilizing arrays of four small telescopes [179]. Similarly, the SiTian project employs an array of small telescopes for all-sky survey observations, utilizing a network of 54 telescopes, each with a diameter of one meter [180]. This methodology is expected to catalyze significant breakthroughs in the field of time-domain astronomy. Moreover, the Stellar Observations Network Group (SONG) project plans to construct a globally interconnected observational system by installing eight 1 m class telescopes at fixed latitudes and longitudes in both the Northern and Southern Hemispheres [181].
The hotspot for future research in this domain will likely encompass the coordination and control of multiple telescopes for collaborative observations, the determination of optimal observational strategies, and the implementation of effective cluster control systems.

3.3.4. The Challenge of Satellite Megaconstellations

Ground-based telescopes are increasingly contending with interference resulting from sunlight reflected off artificial satellites. In recent years, a multitude of satellite launch projects, including Starlink2, Kuiper, and WorldVu, have proposed ambitious plans to deploy approximately 60,000 low-Earth-orbit satellites by the year 2030 [182].
Even after the implementation of sunshades, the brightness of Starlink’s VisorSat version is expected to reach the sixth magnitude, causing significant disruptions to ground-based telescopes, particularly those engaged in sky survey operations [183]. Moreover, the Hubble Space Telescope has reported impacts stemming from satellite-reflected light [35]. Therefore, strategies for mitigating the influence of satellites on astronomical observations will be a critical focal point for future research in the field.

3.3.5. Large Language Models Improve Telescope Intelligence

Large language models (LLMs), such as BERT [184], Llama2 [185], and GPT-4 [186], are representative AI technologies that have attracted a significant amount of attention recently. Meanwhile, LLMs have been widely utilized in many fields such as text reading [187], medicine [188,189], and education [190], demonstrating outstanding application capabilities.
Future LLMs may serve telescope intelligence more directly, offering great potential in telescope equipment status monitoring, observation plan optimization, and related tasks. In an intelligent scheduling system, the LLM would primarily focus on gathering information, including applicant requirements, historical application records, the importance of the research direction, and weather forecasts, and then export this information to the scheduling system for intelligent scheduling. Additionally, the vast amounts of data used in training LLMs enable them to recognize and extract meaningful information from complex astronomical data. This capability offers new avenues for accelerating scientific discovery and deepening our understanding of the universe.

4. Conclusions

This article delves into the role of AI in various aspects of telescope operation and research, including selecting telescope sites, calibrating optical systems, diagnosing faults, optimizing image quality, making observational decisions, and enhancing the intelligence of databases. It presents both the current focus areas and specific topics of research within the realm of telescope intelligence.
This paper provides a comprehensive statistical analysis of recent research trends. It reveals that the labeling of astronomical data within intelligent databases has become a significant research hotspot. This particular field has seen the most prolific publication of papers. Additionally, there is an extensive body of published work dedicated to adaptive optical technology and site seeing assessments. Through a comparison of the time efficiency and precision of AI technology with traditional methods, the findings indicate that the use of AI technology is particularly advantageous in optimizing dome seeing, observational planning, and database data labeling.
This article concludes by projecting future advancements in telescopes. These include the development of large-aperture telescopes, optical interference technology, arrays of small telescopes, space telescopes, and large language models customized for astronomy. Additionally, it addresses potential threats posed by satellite megaconstellations to telescopes. Given the evolving landscape of telescope technology, it also identifies likely areas of focus for future research.

Author Contributions

Conceptualization, T.H., Y.Z., H.W. and X.C.; methodology, K.H. and T.H.; software, K.H., J.C. and X.P.; validation, K.H., T.H., Y.H., L.X. and Y.Z.; formal analysis, K.H., T.H., J.C., X.P., L.X., Y.Z., H.W. and X.C.; investigation, K.H., T.H., J.C. and X.P.; resources, T.H., Y.H., L.X., Y.Z., H.W. and X.C.; data curation, T.H.; writing—original draft preparation, K.H. and T.H.; writing—review and editing, T.H., Y.H., L.X., Y.Z., H.W. and X.C.; visualization, K.H. and T.H.; supervision, T.H.; project administration, T.H.; funding acquisition, T.H., L.X., Y.Z., H.W. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Nature Science Foundation of China (Grant No.12203079, 12103072, and 12073047), the Natural Science Foundation of Jiangsu Province (Grant No. BK20221156 and BK20210988), and the Jiangsu Funding Program for Excellent Postdoctoral Talent (Grant No. 2022ZB448).

Data Availability Statement

No new data was generated for this paper.

Acknowledgments

Thanks are given to the reviewer for the constructive comments and helpful suggestions. Additionally, we thank OpenAI’s GPT-4 for assisting with the grammatical refinement of our manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. McCarthy, J.; Minsky, M.L.; Rochester, N.; Shannon, C.E. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Mag. 2006, 27, 12. [Google Scholar]
  2. Tan, C.F.; Wahidin, L.; Khalil, S.; Tamaldin, N.; Hu, J.; Rauterberg, G. The application of expert system: A review of research and applications. ARPN J. Eng. Appl. Sci. 2016, 11, 2448–2453. [Google Scholar]
  3. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  4. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  5. Navada, A.; Ansari, A.N.; Patil, S.; Sonkamble, B.A. Overview of use of decision tree algorithms in machine learning. In Proceedings of the 2011 IEEE Control and System Graduate Research Colloquium, Shah Alam, Malaysia, 27–28 June 2011; pp. 37–42. [Google Scholar]
  6. Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10. [Google Scholar]
  7. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  8. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427. [Google Scholar] [CrossRef]
  9. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6999–7019. [Google Scholar] [CrossRef] [PubMed]
  10. Johnston, M.D.; Adorf, H.M. Scheduling with neural networks—The case of the Hubble Space Telescope. Comput. Oper. Res. 1992, 19, 209–240. [Google Scholar] [CrossRef]
  11. Bykat, A. NICBES-2, a nickel-cadmium battery expert system. Appl. Artif. Intell. Int. J. 1990, 4, 133–141. [Google Scholar] [CrossRef]
  12. Li, L.; Zhang, Y.; Zhao, Y. k-Nearest Neighbors for automated classification of celestial objects. Sci. China Ser. G Phys. Mech. Astron. 2008, 51, 916–922. [Google Scholar] [CrossRef]
  13. Gao, D.; Zhang, Y.X.; Zhao, Y.H. Support vector machines and kd-tree for separating quasars from large survey data bases. Mon. Not. R. Astron. Soc. 2008, 386, 1417–1425. [Google Scholar] [CrossRef]
  14. Owens, E.; Griffiths, R.; Ratnatunga, K. Using oblique decision trees for the morphological classification of galaxies. Mon. Not. R. Astron. Soc. 1996, 281, 153–157. [Google Scholar] [CrossRef]
  15. Priyatikanto, R.; Mayangsari, L.; Prihandoko, R.A.; Admiranto, A.G. Classification of continuous sky brightness data using random forest. Adv. Astron. 2020, 2020, 1–11. [Google Scholar] [CrossRef]
  16. Jia, P.; Wu, X.; Li, Z.; Li, B.; Wang, W.; Liu, Q.; Popowicz, A.; Cai, D. Point spread function estimation for wide field small aperture telescopes with deep neural networks and calibration data. Mon. Not. R. Astron. Soc. 2021, 505, 4717–4725. [Google Scholar] [CrossRef]
  17. Gilda, S.; Draper, S.C.; Fabbro, S.; Mahoney, W.; Prunet, S.; Withington, K.; Wilson, M.; Ting, Y.S.; Sheinis, A. Uncertainty-aware learning for improvements in image quality of the Canada–France–Hawaii Telescope. Mon. Not. R. Astron. Soc. 2022, 510, 870–902. [Google Scholar] [CrossRef]
  18. Ball, N.M.; Brunner, R.J. Data mining and machine learning in astronomy. Int. J. Mod. Phys. D 2010, 19, 1049–1106. [Google Scholar] [CrossRef]
  19. Fluke, C.J.; Jacobs, C. Surveying the reach and maturity of machine learning and artificial intelligence in astronomy. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1349. [Google Scholar] [CrossRef]
  20. Meher, S.K.; Panda, G. Deep learning in astronomy: A tutorial perspective. Eur. Phys. J. Spec. Top. 2021, 230, 2285–2317. [Google Scholar] [CrossRef]
  21. Sen, S.; Agarwal, S.; Chakraborty, P.; Singh, K.P. Astronomical big data processing using machine learning: A comprehensive review. Exp. Astron. 2022, 53, 1–43. [Google Scholar] [CrossRef]
  22. Bely, P. The Design and Construction of Large Optical Telescopes; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  23. Morrison, D.; Murphy, R.; Cruikshank, D.; Sinton, W.; Martin, T. Evaluation of Mauna Kea, Hawaii, as an observatory site. Publ. Astron. Soc. Pac. 1973, 85, 255. [Google Scholar] [CrossRef]
  24. Vernin, J.; Muñoz-Tuñón, C. Optical seeing at La Palma Observatory. I. General guidelines and preliminary results at the Nordic Optical Telescope. Astron. Astrophys. 1992, 257, 811–816. [Google Scholar]
  25. Vernin, J.; Muñoz-Tuñón, C. Optical seeing at La Palma Observatory. II. Intensive site testing campaign at the Nordic Optical Telescope. Astron. Astrophys. 1994, 284, 311–318. [Google Scholar]
  26. Ma, B.; Shang, Z.; Hu, Y.; Hu, K.; Wang, Y.; Yang, X.; Ashley, M.C.; Hickson, P.; Jiang, P. Night-time measurements of astronomical seeing at Dome A in Antarctica. Nature 2020, 583, 771–774. [Google Scholar] [CrossRef] [PubMed]
  27. Deng, L.; Yang, F.; Chen, X.; He, F.; Liu, Q.; Zhang, B.; Zhang, C.; Wang, K.; Liu, N.; Ren, A.; et al. Lenghu on the Tibetan Plateau as an astronomical observing site. Nature 2021, 596, 353–356. [Google Scholar] [CrossRef] [PubMed]
  28. Aksaker, N.; Yerli, S.K.; Erdoğan, M.A.; Erdi, E.; Kaba, K.; Ak, T.; Aslan, Z.; Bakış, V.; Demircan, O.; Evren, S.; et al. Astronomical site selection for Turkey using GIS techniques. Exp. Astron. 2015, 39, 547–566. [Google Scholar] [CrossRef]
  29. Aksaker, N.; Yerli, S.K.; Erdoğan, M.; Kurt, Z.; Kaba, K.; Bayazit, M.; Yesilyaprak, C. Global site selection for astronomy. Mon. Not. R. Astron. Soc. 2020, 493, 1204–1216. [Google Scholar] [CrossRef]
  30. Wang, X.Y.; Wu, Z.Y.; Liu, J.; Hidayat, T. New analysis of the fraction of observable nights at astronomical sites based on FengYun-2 satellite data. Mon. Not. R. Astron. Soc. 2022, 511, 5363–5371. [Google Scholar] [CrossRef]
  31. Francis, A.; Sidiropoulos, P.; Muller, J.P. CloudFCN: Accurate and robust cloud detection for satellite imagery with deep learning. Remote Sens. 2019, 11, 2312. [Google Scholar] [CrossRef]
  32. Mommert, M. Cloud Identification from All-sky Camera Data with Machine Learning. Astron. J. 2020, 159, 178. [Google Scholar] [CrossRef]
  33. Li, X.; Qiu, B.; Cao, G.; Wu, C.; Zhang, L. A Novel Method for Ground-Based Cloud Image Classification Using Transformer. Remote Sens. 2022, 14, 3978. [Google Scholar] [CrossRef]
  34. Molano, G.C.; Suárez, O.L.R.; Gaitán, O.A.R.; Mercado, A.M.M. Low Dimensional Embedding of Climate Data for Radio Astronomical Site Testing in the Colombian Andes. Publ. Astron. Soc. Pac. 2017, 129, 105002. [Google Scholar] [CrossRef]
  35. Kruk, S.; García-Martín, P.; Popescu, M.; Aussel, B.; Dillmann, S.; Perks, M.E.; Lund, T.; Merín, B.; Thomson, R.; Karadag, S.; et al. The impact of satellite trails on Hubble Space Telescope observations. Nat. Astron. 2023, 7, 262–268. [Google Scholar] [CrossRef]
  36. Lombardi, G.; Navarrete, J.; Sarazin, M. Review on atmospheric turbulence monitoring. Adapt. Opt. Syst. IV SPIE 2014, 9148, 678–689. [Google Scholar]
  37. Bolbasova, L.A.; Lukin, V. Atmospheric research for adaptive optics. Atmos. Ocean. Opt. 2022, 35, 288–302. [Google Scholar] [CrossRef]
  38. Dewan, E.M. A Model for Cn2 (Optical Turbulence) Profiles Using Radiosonde Data; Environmental Research Papers No. 1121; Directorate of Geophysics, Air Force Materiel Command; DTIC: Fort Belvoir, VA, USA, 1993. [Google Scholar]
  39. Coulman, C.; Vernin, J.; Coqueugniot, Y.; Caccia, J. Outer scale of turbulence appropriate to modeling refractive-index structure profiles. Appl. Opt. 1988, 27, 155–160. [Google Scholar] [CrossRef]
  40. Trinquet, H.; Vernin, J. A model to forecast seeing and estimate Cn2 profiles from meteorological data. Publ. Astron. Soc. Pac. 2006, 118, 756. [Google Scholar] [CrossRef]
  41. Wang, Y.; Basu, S. Using an artificial neural network approach to estimate surface-layer optical turbulence at Mauna Loa, Hawaii. Opt. Lett. 2016, 41, 2334–2337. [Google Scholar] [CrossRef] [PubMed]
  42. Jellen, C.; Burkhardt, J.; Brownell, C.; Nelson, C. Machine learning informed predictor importance measures of environmental parameters in maritime optical turbulence. Appl. Opt. 2020, 59, 6379–6389. [Google Scholar] [CrossRef]
  43. Su, C.; Wu, X.; Luo, T.; Wu, S.; Qing, C. Adaptive niche-genetic algorithm based on backpropagation neural network for atmospheric turbulence forecasting. Appl. Opt. 2020, 59, 3699–3705. [Google Scholar] [CrossRef]
  44. Vorontsov, A.M.; Vorontsov, M.A.; Filimonov, G.A.; Polnau, E. Atmospheric turbulence study with deep machine learning of intensity scintillation patterns. Appl. Sci. 2020, 10, 8136. [Google Scholar] [CrossRef]
  45. Bi, C.; Qing, C.; Wu, P.; Jin, X.; Liu, Q.; Qian, X.; Zhu, W.; Weng, N. Optical turbulence profile in marine environment with artificial neural network model. Remote Sens. 2022, 14, 2267. [Google Scholar] [CrossRef]
  46. Grose, M.G.; Watson, E.A. Forecasting atmospheric turbulence conditions from prior environmental parameters using artificial neural networks. Appl. Opt. 2023, 62, 3370–3379. [Google Scholar] [CrossRef] [PubMed]
  47. Kornilov, M.V. Forecasting seeing and parameters of long-exposure images by means of ARIMA. Exp. Astron. 2016, 41, 223–242. [Google Scholar] [CrossRef]
  48. Milli, J.; Rojas, T.; Courtney-Barrer, B.; Bian, F.; Navarrete, J.; Kerber, F.; Otarola, A. Turbulence nowcast for the Cerro Paranal and Cerro Armazones observatory sites. Adapt. Opt. Syst. VII SPIE 2020, 11448, 332–344. [Google Scholar]
  49. Giordano, C.; Rafalimanana, A.; Ziad, A.; Aristidi, E.; Chabé, J.; Fanteï-Caujolle, Y.; Renaud, C. Statistical learning as a new approach for optical turbulence forecasting. Adapt. Opt. Syst. VII SPIE 2020, 11448, 871–880. [Google Scholar]
  50. Giordano, C.; Rafalimanana, A.; Ziad, A.; Aristidi, E.; Chabé, J.; Fanteï-Caujolle, Y.; Renaud, C. Contribution of statistical site learning to improve optical turbulence forecasting. Mon. Not. R. Astron. Soc. 2021, 504, 1927–1938. [Google Scholar] [CrossRef]
  51. Cherubini, T.; Lyman, R.; Businger, S. Forecasting seeing for the Maunakea observatories with machine learning. Mon. Not. R. Astron. Soc. 2022, 509, 232–245. [Google Scholar] [CrossRef]
  52. Lyman, R.; Cherubini, T.; Businger, S. Forecasting seeing for the Maunakea Observatories. Mon. Not. R. Astron. Soc. 2020, 496, 4734–4748. [Google Scholar] [CrossRef]
  53. Turchi, A.; Masciadri, E.; Fini, L. Optical turbulence forecast over short timescales using machine learning techniques. Adapt. Opt. Syst. VIII SPIE 2022, 12185, 1851–1861. [Google Scholar]
  54. Hou, X.; Hu, Y.; Du, F.; Ashley, M.C.; Pei, C.; Shang, Z.; Ma, B.; Wang, E.; Huang, K. Machine learning-based seeing estimation and prediction using multi-layer meteorological data at Dome A, Antarctica. Astron. Comput. 2023, 43, 100710. [Google Scholar] [CrossRef]
  55. Masciadri, E.; Turchi, A.; Fini, L. Optical turbulence forecasts at short time-scales using an autoregressive method at the Very Large Telescope. Mon. Not. R. Astron. Soc. 2023, 523, 3487–3502. [Google Scholar] [CrossRef]
  56. Ni, W.J.; Shen, Q.L.; Zeng, Q.T.; Wang, H.Q.; Cui, X.Q.; Liu, T. Data-driven Seeing Prediction for Optics Telescope: From Statistical Modeling, Machine Learning to Deep Learning Techniques. Res. Astron. Astrophys. 2022, 22, 125003. [Google Scholar] [CrossRef]
  57. Wu, Z.; Zhang, Y.; Tang, R.; Li, Z.; Yuan, X.; Xia, Y.; Bai, H.; Li, B.; Chen, Z.; Cui, X.; et al. Machine learning for improving stellar image-based alignment in wide-field telescopes. Res. Astron. Astrophys. 2022, 22, 015008. [Google Scholar] [CrossRef]
  58. Li, Z.; Yuan, X.; Cui, X. Alignment metrology for the Antarctica Kunlun dark universe survey telescope. Mon. Not. R. Astron. Soc. 2015, 449, 425–430. [Google Scholar] [CrossRef]
  59. Thompson, K.P.; Schmid, T.; Rolland, J.P. The misalignment induced aberrations of TMA telescopes. Opt. Express 2008, 16, 20345–20353. [Google Scholar] [CrossRef]
  60. Yin, J.E.; Eisenstein, D.J.; Finkbeiner, D.P.; Stubbs, C.W.; Wang, Y. Active Optical Control with Machine Learning: A Proof of Concept for the Vera C. Rubin Observatory. Astron. J. 2021, 161, 216. [Google Scholar] [CrossRef]
  61. Zhou, M.; Lv, G.; Li, J.; Zhou, Z.; Liu, Z.; Wang, J.; Bai, Z.; Zhang, Y.; Tian, Y.; Wang, M.; et al. LAMOST Fiber Positioning Unit Detection Based on Deep Learning. Publ. Astron. Soc. Pac. 2021, 133, 115001. [Google Scholar] [CrossRef]
  62. Su, D.Q.; Cui, X.Q. Active optics in LAMOST. Chin. J. Astron. Astrophys. 2004, 4, 1. [Google Scholar] [CrossRef]
  63. Li, W.; Kang, C.; Guan, H.; Huang, S.; Zhao, J.; Zhou, X.; Li, J. Deep Learning Correction Algorithm for The Active Optics System. Sensors 2020, 20, 6403. [Google Scholar] [CrossRef]
  64. Wang, Y.; Jiang, F.; Ju, G.; Xu, B.; An, Q.; Zhang, C.; Wang, S.; Xu, S. Deep learning wavefront sensing for fine phasing of segmented mirrors. Opt. Express 2021, 29, 25960–25978. [Google Scholar] [CrossRef] [PubMed]
  65. Cao, H.; Zhang, J.; Yang, F.; An, Q.; Wang, Y. Extending capture range for piston error in segmented primary mirror telescopes based on wavelet support vector machine with improved particle swarm optimization. IEEE Access 2020, 8, 111585–111597. [Google Scholar] [CrossRef]
  66. Yue, D.; He, Y.; Li, Y. Piston error measurement for segmented telescopes with an artificial neural network. Sensors 2021, 21, 3364. [Google Scholar] [CrossRef] [PubMed]
  67. Li, D.; Xu, S.; Wang, D.; Yan, D. Large-scale piston error detection technology for segmented optical mirrors via convolutional neural networks. Opt. Lett. 2019, 44, 1170–1173. [Google Scholar] [CrossRef] [PubMed]
  68. Wang, P.F.; Zhao, H.; Xie, X.P.; Zhang, Y.T.; Li, C.; Fan, X.W. Multichannel left-subtract-right feature vector piston error detection method based on a convolutional neural network. Opt. Express 2021, 29, 21320–21335. [Google Scholar] [CrossRef] [PubMed]
  69. Granzer, T. What makes an automated telescope robotic? Astron. Nachrichten Astron. Notes 2004, 325, 513–518. [Google Scholar] [CrossRef]
  70. Colome, J.; Colomer, P.; Guàrdia, J.; Ribas, I.; Campreciós, J.; Coiffard, T.; Gesa, L.; Martínez, F.; Rodler, F. Research on schedulers for astronomical observatories. In Proceedings of the Observatory Operations: Strategies, Processes, and Systems IV, Amsterdam, The Netherlands, 4–6 July 2012; SPIE: Bellingham, WA, USA, 2012; Volume 8448, pp. 469–486. [Google Scholar]
  71. Johnston, M.D.; Miller, G. Spike: Intelligent scheduling of Hubble Space Telescope observations. Intell. Sched. 1994, 19, 3–5. [Google Scholar]
  72. Adorf, H.M.; Johnston, M.D. A discrete stochastic neural network algorithm for constraint satisfaction problems. In Proceedings of the 1990 IJCNN International Joint Conference on Neural Networks, IEEE, San Diego, CA, USA, 17–21 June 1990; pp. 917–924. [Google Scholar]
  73. Garcia-Piquer, A.; Ribas, I.; Colomé, J. Artificial intelligence for the EChO mission planning tool. Exp. Astron. 2015, 40, 671–694. [Google Scholar] [CrossRef]
  74. Garcia-Piquer, A.; Morales, J.; Ribas, I.; Colomé, J.; Guàrdia, J.; Perger, M.; Caballero, J.A.; Cortés-Contreras, M.; Jeffers, S.; Reiners, A.; et al. Efficient scheduling of astronomical observations-Application to the CARMENES radial-velocity survey. Astron. Astrophys. 2017, 604, A87. [Google Scholar] [CrossRef]
  75. Adler, D.S.; Kinzel, W.; Jordan, I. Planning and scheduling at STScI: From Hubble to the James Webb Space Telescope. In Proceedings of the Observatory Operations: Strategies, Processes, and Systems V, Montreal, QC, Canada, 25–27 June 2014; SPIE: Bellingham, WA, USA, 2014; Volume 9149, pp. 145–158. [Google Scholar]
  76. Frank, J. SOFIA’s challenge: Automated scheduling of airborne astronomy observations. In Proceedings of the 2nd IEEE International Conference on Space Mission Challenges for Information Technology (SMC-IT’06), Pasadena, CA, USA, 17–20 July 2006; p. 8. [Google Scholar]
  77. Astudillo, J.; Protopapas, P.; Pichara, K.; Becker, I. A Reinforcement Learning–Based Follow-up Framework. Astron. J. 2023, 165, 118. [Google Scholar] [CrossRef]
  78. Naghib, E.; Yoachim, P.; Vanderbei, R.J.; Connolly, A.J.; Jones, R.L. A framework for telescope schedulers: With applications to the Large Synoptic Survey Telescope. Astron. J. 2019, 157, 151. [Google Scholar] [CrossRef]
  79. Dunham, L.L.; Laffey, T.J.; Kao, S.M.; Schmidt, J.L.; Read, J.Y. Knowledge-based monitoring of the pointing control system on the Hubble space telescope. In Proceedings of the NASA. Marshall Space Flight Center, Third Conference on Artificial Intelligence for Space Applications, Part 1, Huntsville, AL, USA, 2–3 November 1987. [Google Scholar]
  80. Yun, L.; Shi-hai, Y. Reliability Analysis of Main-axis Control System of the Antarctic Equatorial Astronomical Telescope Based on Fault Tree. Chin. Astron. Astrophys. 2018, 42, 448–461. [Google Scholar] [CrossRef]
  81. Tang, Y.; Wang, Y.; Duan, S.; Liang, J.; Cai, Z.; Liu, Z.; Hu, H.; Wang, J.; Chu, J.; Cui, X.; et al. Fault Diagnosis of the LAMOST Fiber Positioner Based on a Long Short-term Memory (LSTM) Deep Neural Network. Res. Astron. Astrophys. 2023, 23, 125006. [Google Scholar] [CrossRef]
  82. Teimoorinia, H.; Kavelaars, J.; Gwyn, S.; Durand, D.; Rolston, K.; Ouellette, A. Assessment of Astronomical Images Using Combined Machine-Learning Models. Astron. J. 2020, 159, 170. [Google Scholar] [CrossRef]
  83. Hu, T.Z.; Zhang, Y.; Cui, X.Q.; Zhang, Q.Y.; Li, Y.P.; Cao, Z.H.; Pan, X.S.; Fu, Y. Telescope performance real-time monitoring based on machine learning. Mon. Not. R. Astron. Soc. 2021, 500, 388–396. [Google Scholar] [CrossRef]
  84. Hu, T.; Zhang, Y.; Yan, J.; Liu, O.; Wang, H.; Cui, X. Intelligent monitoring and diagnosis of telescope image quality. Mon. Not. R. Astron. Soc. 2023, 525, 3541–3550. [Google Scholar] [CrossRef]
  85. Woolf, N. High resolution imaging from the ground. Annu. Rev. Astron. Astrophys. 1982, 20, 367–398. [Google Scholar] [CrossRef]
  86. Racine, R.; Salmon, D.; Cowley, D.; Sovka, J. Mirror, dome, and natural seeing at CFHT. Publ. Astron. Soc. Pac. 1991, 103, 1020. [Google Scholar] [CrossRef]
  87. Murtagh, F.; Sarazin, M. Nowcasting astronomical seeing: A study of ESO La Silla and Paranal. Publ. Astron. Soc. Pac. 1993, 105, 932. [Google Scholar] [CrossRef]
  88. Aussem, A.; Murtagh, F.; Sarazin, M. Dynamical recurrent neural networks—towards environmental time series prediction. Int. J. Neural Syst. 1995, 6, 145–170. [Google Scholar] [CrossRef]
  89. Buffa, F.; Porceddu, I. Temperature forecast and dome seeing minimization-I. A case study using a neural network model. Astron. Astrophys. Suppl. Ser. 1997, 126, 547–553. [Google Scholar] [CrossRef]
  90. Guo, Y.; Zhong, L.; Min, L.; Wang, J.; Wu, Y.; Chen, K.; Wei, K.; Rao, C. Adaptive optics based on machine learning: A review. Opto-Electron. Adv. 2022, 5, 200082. [Google Scholar] [CrossRef]
  91. Li, Z.; Li, X. Centroid computation for Shack-Hartmann wavefront sensor in extreme situations based on artificial neural networks. Opt. Express 2018, 26, 31675–31692. [Google Scholar] [CrossRef] [PubMed]
  92. Guo, H.; Korablinova, N.; Ren, Q.; Bille, J. Wavefront reconstruction with artificial neural networks. Opt. Express 2006, 14, 6456–6462. [Google Scholar] [CrossRef] [PubMed]
  93. Suárez Gómez, S.L.; González-Gutiérrez, C.; Díez Alonso, E.; Santos Rodríguez, J.D.; Sánchez Rodríguez, M.L.; Carballido Landeira, J.; Basden, A.; Osborn, J. Improving adaptive optics reconstructions with a deep learning approach. In Proceedings of the Hybrid Artificial Intelligent Systems: 13th International Conference, HAIS 2018, Oviedo, Spain, 20–22 June 2018; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2018; pp. 74–83. [Google Scholar]
  94. DuBose, T.B.; Gardner, D.F.; Watnik, A.T. Intensity-enhanced deep network wavefront reconstruction in Shack–Hartmann sensors. Opt. Lett. 2020, 45, 1699–1702. [Google Scholar] [CrossRef] [PubMed]
  95. Osborn, J.; Guzmán, D.; de Cos Juez, F.; Basden, A.G.; Morris, T.J.; Gendron, É.; Butterley, T.; Myers, R.M.; Guesalaga, A.; Lasheras, F.S.; et al. First on-sky results of a neural network based tomographic reconstructor: Carmen on Canary. Adapt. Opt. Syst. IV SPIE 2014, 9148, 1541–1546. [Google Scholar]
  96. Kendrick, R.L.; Acton, D.S.; Duncan, A. Phase-diversity wave-front sensor for imaging systems. Appl. Opt. 1994, 33, 6533–6546. [Google Scholar] [CrossRef] [PubMed]
  97. Wong, A.P.; Norris, B.R.; Deo, V.; Tuthill, P.G.; Scalzo, R.; Sweeney, D.; Ahn, K.; Lozi, J.; Vievard, S.; Guyon, O. Nonlinear Wave Front Reconstruction from a Pyramid Sensor using Neural Networks. Publ. Astron. Soc. Pac. 2023, 135, 114501. [Google Scholar] [CrossRef]
  98. Swanson, R.; Lamb, M.; Correia, C.; Sivanandam, S.; Kutulakos, K. Wavefront reconstruction and prediction with convolutional neural networks. Adapt. Opt. Syst. VI SPIE 2018, 10703, 481–490. [Google Scholar]
  99. Guo, H.; Xu, Y.; Li, Q.; Du, S.; He, D.; Wang, Q.; Huang, Y. Improved machine learning approach for wavefront sensing. Sensors 2019, 19, 3533. [Google Scholar] [CrossRef]
  100. Ma, H.; Liu, H.; Qiao, Y.; Li, X.; Zhang, W. Numerical study of adaptive optics compensation based on convolutional neural networks. Opt. Commun. 2019, 433, 283–289. [Google Scholar] [CrossRef]
  101. Wu, Y.; Guo, Y.; Bao, H.; Rao, C. Sub-millisecond phase retrieval for phase-diversity wavefront sensor. Sensors 2020, 20, 4877. [Google Scholar] [CrossRef] [PubMed]
  102. Montera, D.A.; Welsh, B.M.; Roggemann, M.C.; Ruck, D.W. Prediction of wave-front sensor slope measurements with artificial neural networks. Appl. Opt. 1997, 36, 675–681. [Google Scholar] [CrossRef] [PubMed]
  103. Liu, X.; Morris, T.; Saunter, C.; de Cos Juez, F.J.; González-Gutiérrez, C.; Bardou, L. Wavefront prediction using artificial neural networks for open-loop adaptive optics. Mon. Not. R. Astron. Soc. 2020, 496, 456–464. [Google Scholar] [CrossRef]
  104. Sun, Z.; Chen, Y.; Li, X.; Qin, X.; Wang, H. A Bayesian regularized artificial neural network for adaptive optics forecasting. Opt. Commun. 2017, 382, 519–527. [Google Scholar] [CrossRef]
  105. Ramos, A.A.; de la Cruz Rodríguez, J.; Yabar, A.P. Real-time, multiframe, blind deconvolution of solar images. Astron. Astrophys. 2018, 620, A73. [Google Scholar] [CrossRef]
  106. Kim, T.; Park, E.; Lee, H.; Moon, Y.J.; Bae, S.H.; Lim, D.; Jang, S.; Kim, L.; Cho, I.H.; Choi, M.; et al. Solar farside magnetograms from deep learning analysis of STEREO/EUVI data. Nat. Astron. 2019, 3, 397–400. [Google Scholar] [CrossRef]
  107. Rahman, S.; Moon, Y.J.; Park, E.; Siddique, A.; Cho, I.H.; Lim, D. Super-resolution of SDO/HMI magnetograms using novel deep learning methods. Astrophys. J. Lett. 2020, 897, L32. [Google Scholar] [CrossRef]
  108. Ribeiro, V.; Russo, P.; Cárdenas-Avendaño, A. A survey of astronomical research: A baseline for astronomical development. Astron. J. 2013, 146, 138. [Google Scholar] [CrossRef]
  109. Yu, C.; Li, B.; Xiao, J.; Sun, C.; Tang, S.; Bi, C.; Cui, C.; Fan, D. Astronomical data fusion: Recent progress and future prospects—A survey. Exp. Astron. 2019, 47, 359–380. [Google Scholar] [CrossRef]
  110. Budavári, T.; Szalay, A.S. Probabilistic cross-identification of astronomical sources. Astrophys. J. 2008, 679, 301. [Google Scholar] [CrossRef]
  111. Medan, I.; Lépine, S.; Hartman, Z. Bayesian Cross-matching of High Proper-motion Stars in Gaia DR2 and Photometric Metallicities for 1.7 million K and M Dwarfs. Astron. J. 2021, 161, 234. [Google Scholar] [CrossRef]
  112. Jalobeanu, A.; Gutiérrez, J.; Slezak, E. Multi-source data fusion and super-resolution from astronomical images. Stat. Methodol. 2008, 5, 361–372. [Google Scholar] [CrossRef]
  113. Petremand, M.; Jalobeanu, A.; Collet, C. Optimal bayesian fusion of large hyperspectral astronomical observations. Stat. Methodol. 2012, 9, 44–54. [Google Scholar] [CrossRef]
  114. Du, C.; Luo, A.; Yang, H.; Hou, W.; Guo, Y. An efficient method for rare spectra retrieval in astronomical databases. Publ. Astron. Soc. Pac. 2016, 128, 034502. [Google Scholar] [CrossRef]
  115. Wang, K.; Guo, P.; Luo, A.; Xu, M. Unsupervised pseudoinverse hashing learning model for rare astronomical object retrieval. Sci. China Technol. Sci. 2022, 65, 1338–1348. [Google Scholar] [CrossRef]
  116. Rebbapragada, U.; Protopapas, P.; Brodley, C.E.; Alcock, C. Finding anomalous periodic time series: An application to catalogs of periodic variable stars. arXiv 2009, arXiv:0905.3428. [Google Scholar] [CrossRef]
  117. Nun, I.; Pichara, K.; Protopapas, P.; Kim, D.W. Supervised detection of anomalous light curves in massive astronomical catalogs. Astrophys. J. 2014, 793, 23. [Google Scholar] [CrossRef]
  118. Ma, Y.; Zhao, X.; Zhang, C.; Zhang, J.; Qin, X. Outlier detection from multiple data sources. Inf. Sci. 2021, 580, 819–837. [Google Scholar] [CrossRef]
  119. Banerji, M.; Lahav, O.; Lintott, C.J.; Abdalla, F.B.; Schawinski, K.; Bamford, S.P.; Andreescu, D.; Murray, P.; Raddick, M.J.; Slosar, A.; et al. Galaxy Zoo: Reproducing galaxy morphologies via machine learning. Mon. Not. R. Astron. Soc. 2010, 406, 342–353. [Google Scholar] [CrossRef]
  120. Zhang, Y.; Zhao, Y.; Zheng, H. Automated classification of quasars and stars. Proc. Int. Astron. Union 2009, 5, 147. [Google Scholar] [CrossRef]
  121. Aguerri, J.; Bernardi, M.; Mei, S.; Almeida, J.S. Revisiting the Hubble sequence in the SDSS DR7 spectroscopic sample: A publicly available Bayesian automated classification. Astron. Astrophys. 2011, 525, A157. [Google Scholar]
  122. Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820. [Google Scholar] [CrossRef]
  123. Fraix-Burnet, D.; Bouveyron, C.; Moultaka, J. Unsupervised classification of SDSS galaxy spectra. Astron. Astrophys. 2021, 649, A53. [Google Scholar] [CrossRef]
  124. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Selim, I. Deep galaxy: Classification of galaxies based on deep convolutional neural networks. arXiv 2017, arXiv:1709.02245. [Google Scholar]
  125. Becker, I.; Pichara, K.; Catelan, M.; Protopapas, P.; Aguirre, C.; Nikzat, F. Scalable end-to-end recurrent neural network for variable star classification. Mon. Not. R. Astron. Soc. 2020, 493, 2981–2995. [Google Scholar] [CrossRef]
  126. Hinners, T.A.; Tat, K.; Thorp, R. Machine learning techniques for stellar light curve classification. Astron. J. 2018, 156, 7. [Google Scholar] [CrossRef]
  127. Awang Iskandar, D.N.; Zijlstra, A.A.; McDonald, I.; Abdullah, R.; Fuller, G.A.; Fauzi, A.H.; Abdullah, J. Classification of Planetary Nebulae through Deep Transfer Learning. Galaxies 2020, 8, 88. [Google Scholar] [CrossRef]
  128. Barchi, P.H.; de Carvalho, R.; Rosa, R.R.; Sautter, R.; Soares-Santos, M.; Marques, B.A.; Clua, E.; Gonçalves, T.; de Sá-Freitas, C.; Moura, T. Machine and Deep Learning applied to galaxy morphology-A comparative study. Astron. Comput. 2020, 30, 100334. [Google Scholar] [CrossRef]
  129. Wu, J.; Zhang, Y.; Qu, M.; Jiang, B.; Wang, W. Automatic Classification of Spectra with IEF-SCNN. Universe 2023, 9, 477. [Google Scholar] [CrossRef]
  130. Richards, G.T.; Deo, R.P.; Lacy, M.; Myers, A.D.; Nichol, R.C.; Zakamska, N.L.; Brunner, R.J.; Brandt, W.; Gray, A.G.; Parejko, J.K.; et al. Eight-dimensional mid-infrared/optical Bayesian quasar selection. Astron. J. 2009, 137, 3884. [Google Scholar] [CrossRef]
  131. Abraham, S.; Philip, N.S.; Kembhavi, A.; Wadadekar, Y.G.; Sinha, R. A photometric catalogue of quasars and other point sources in the Sloan Digital Sky Survey. Mon. Not. R. Astron. Soc. 2012, 419, 80–94. [Google Scholar] [CrossRef]
  132. Jiang, B.; Luo, A.; Zhao, Y.; Wei, P. Data mining for cataclysmic variables in the large sky area multi-object fibre spectroscopic telescope archive. Mon. Not. R. Astron. Soc. 2013, 430, 986–995. [Google Scholar] [CrossRef]
  133. Schindler, J.T.; Fan, X.; McGreer, I.D.; Yang, Q.; Wu, J.; Jiang, L.; Green, R. The extremely luminous quasar survey in the SDSS footprint. I. Infrared-based candidate selection. Astrophys. J. 2017, 851, 13. [Google Scholar] [CrossRef]
  134. Humphrey, A.; Cunha, P.; Paulino-Afonso, A.; Amarantidis, S.; Carvajal, R.; Gomes, J.; Matute, I.; Papaderos, P. Improving machine learning-derived photometric redshifts and physical property estimates using unlabelled observations. Mon. Not. R. Astron. Soc. 2023, 520, 305–313. [Google Scholar] [CrossRef]
  135. Li, C.; Zhang, Y.; Cui, C.; Fan, D.; Zhao, Y.; Wu, X.B.; Zhang, J.Y.; Tao, Y.; Han, J.; Xu, Y.; et al. Photometric redshift estimation of galaxies in the DESI Legacy Imaging Surveys. Mon. Not. R. Astron. Soc. 2023, 518, 513–525. [Google Scholar] [CrossRef]
  136. Hatfield, P.; Almosallam, I.; Jarvis, M.; Adams, N.; Bowler, R.; Gomes, Z.; Roberts, S.; Schreiber, C. Augmenting machine learning photometric redshifts with Gaussian mixture models. Mon. Not. R. Astron. Soc. 2020, 498, 5498–5510. [Google Scholar] [CrossRef]
  137. Jones, D.M.; Heavens, A.F. Gaussian mixture models for blended photometric redshifts. Mon. Not. R. Astron. Soc. 2019, 490, 3966–3986. [Google Scholar] [CrossRef]
  138. Zhang, Y.X.; Zhang, J.Y.; Jin, X.; Zhao, Y.H. A new strategy for estimating photometric redshifts of quasars. Res. Astron. Astrophys. 2019, 19, 175. [Google Scholar] [CrossRef]
  139. Han, B.; Qiao, L.N.; Chen, J.L.; Zhang, X.D.; Zhang, Y.X.; Zhao, Y.H. GeneticKNN: A weighted KNN approach supported by genetic algorithm for photometric redshift estimation of quasars. Res. Astron. Astrophys. 2021, 21, 017. [Google Scholar] [CrossRef]
  140. Wilson, D.; Nayyeri, H.; Cooray, A.; Häußler, B. Photometric redshift estimation with galaxy morphology using self-organizing maps. Astrophys. J. 2020, 888, 83. [Google Scholar] [CrossRef]
  141. Bilicki, M.; Dvornik, A.; Hoekstra, H.; Wright, A.; Chisari, N.; Vakili, M.; Asgari, M.; Giblin, B.; Heymans, C.; Hildebrandt, H.; et al. Bright galaxy sample in the Kilo-Degree Survey Data Release 4-Selection, photometric redshifts, and physical properties. Astron. Astrophys. 2021, 653, A82. [Google Scholar] [CrossRef]
  142. Razim, O.; Cavuoti, S.; Brescia, M.; Riccio, G.; Salvato, M.; Longo, G. Improving the reliability of photometric redshift with machine learning. Mon. Not. R. Astron. Soc. 2021, 507, 5034–5052. [Google Scholar] [CrossRef]
  143. Henghes, B.; Pettitt, C.; Thiyagalingam, J.; Hey, T.; Lahav, O. Benchmarking and scalability of machine-learning methods for photometric redshift estimation. Mon. Not. R. Astron. Soc. 2021, 505, 4847–4856. [Google Scholar] [CrossRef]
  144. Hong, S.; Zou, Z.; Luo, A.L.; Kong, X.; Yang, W.; Chen, Y. PhotoRedshift-MML: A multimodal machine learning method for estimating photometric redshifts of quasars. Mon. Not. R. Astron. Soc. 2023, 518, 5049–5058. [Google Scholar] [CrossRef]
  145. Curran, S.; Moss, J.; Perrott, Y. QSO photometric redshifts using machine learning and neural networks. Mon. Not. R. Astron. Soc. 2021, 503, 2639–2650. [Google Scholar] [CrossRef]
  146. Dey, B.; Andrews, B.H.; Newman, J.A.; Mao, Y.Y.; Rau, M.M.; Zhou, R. Photometric redshifts from SDSS images with an interpretable deep capsule network. Mon. Not. R. Astron. Soc. 2022, 515, 5285–5305. [Google Scholar] [CrossRef]
  147. Zhou, X.; Gong, Y.; Meng, X.M.; Cao, Y.; Chen, X.; Chen, Z.; Du, W.; Fu, L.; Luo, Z. Extracting photometric redshift from galaxy flux and image data using neural networks in the CSST survey. Mon. Not. R. Astron. Soc. 2022, 512, 4593–4603. [Google Scholar] [CrossRef]
  148. Pasquet, J.; Bertin, E.; Treyer, M.; Arnouts, S.; Fouchez, D. Photometric redshifts from SDSS images using a convolutional neural network. Astron. Astrophys. 2019, 621, A26. [Google Scholar] [CrossRef]
  149. Liang, R.; Liu, Z.; Lei, L.; Zhao, W. Kilonova-Targeting Lightcurve Classification for Wide Field Survey Telescope. Universe 2023, 10, 10. [Google Scholar] [CrossRef]
  150. Bailer-Jones, C.A.; Irwin, M.; Gilmore, G.; von Hippel, T. Physical parametrization of stellar spectra: The neural network approach. Mon. Not. R. Astron. Soc. 1997, 292, 157–166. [Google Scholar] [CrossRef]
  151. Fuentes, O.; Gulati, R.K. Prediction of stellar atmospheric parameters from spectra, spectral indices and spectral lines using machine learning. Rev. Mex. Astron. Astrofís. 2001, 10, 209–212. [Google Scholar]
  152. Bailer-Jones, C.A. Bayesian inference of stellar parameters and interstellar extinction using parallaxes and multiband photometry. Mon. Not. R. Astron. Soc. 2011, 411, 435–452. [Google Scholar] [CrossRef]
  153. Maldonado, J.; Micela, G.; Baratella, M.; D’Orazi, V.; Affer, L.; Biazzo, K.; Lanza, A.; Maggio, A.; Hernández, J.G.; Perger, M.; et al. HADES RV programme with HARPS-N at TNG-XII. The abundance signature of M dwarf stars with planets. Astron. Astrophys. 2020, 644, A68. [Google Scholar] [CrossRef]
  154. Ciucă, I.; Kawata, D.; Miglio, A.; Davies, G.R.; Grand, R.J. Unveiling the distinct formation pathways of the inner and outer discs of the Milky Way with Bayesian Machine Learning. Mon. Not. R. Astron. Soc. 2021, 503, 2814–2824. [Google Scholar] [CrossRef]
  155. Perger, M.; Anglada-Escudé, G.; Baroch, D.; Lafarga, M.; Ribas, I.; Morales, J.; Herrero, E.; Amado, P.; Barnes, J.; Caballero, J.; et al. A machine learning approach for correcting radial velocities using physical observables. Astron. Astrophys. 2023, 672, A118. [Google Scholar] [CrossRef]
  156. Remple, B.A.; Angelou, G.C.; Weiss, A. Determining fundamental parameters of detached double-lined eclipsing binary systems via a statistically robust machine learning method. Mon. Not. R. Astron. Soc. 2021, 507, 1795–1813. [Google Scholar] [CrossRef]
  157. Passegger, V.; Bello-García, A.; Ordieres-Meré, J.; Antoniadis-Karnavas, A.; Marfil, E.; Duque-Arribas, C.; Amado, P.J.; Delgado-Mena, E.; Montes, D.; Rojas-Ayala, B.; et al. Metallicities in M dwarfs: Investigating different determination techniques. Astron. Astrophys. 2022, 658, A194. [Google Scholar] [CrossRef]
  158. Hughes, A.C.; Spitler, L.R.; Zucker, D.B.; Nordlander, T.; Simpson, J.; Da Costa, G.S.; Ting, Y.S.; Li, C.; Bland-Hawthorn, J.; Buder, S.; et al. The GALAH Survey: A New Sample of Extremely Metal-poor Stars Using a Machine-learning Classification Algorithm. Astrophys. J. 2022, 930, 47. [Google Scholar] [CrossRef]
  159. Antoniadis-Karnavas, A.; Sousa, S.; Delgado-Mena, E.; Santos, N.; Teixeira, G.; Neves, V. ODUSSEAS: A machine learning tool to derive effective temperature and metallicity for M dwarf stars. Astron. Astrophys. 2020, 636, A9. [Google Scholar] [CrossRef]
  160. Breton, S.N.; Santos, A.R.; Bugnet, L.; Mathur, S.; García, R.A.; Pallé, P.L. ROOSTER: A machine-learning analysis tool for Kepler stellar rotation periods. Astron. Astrophys. 2021, 647, A125. [Google Scholar] [CrossRef]
  161. Różański, T.; Niemczura, E.; Lemiesz, J.; Posiłek, N.; Różański, P. SUPPNet: Neural network for stellar spectrum normalisation. Astron. Astrophys. 2022, 659, A199. [Google Scholar] [CrossRef]
  162. Cargile, P.A.; Conroy, C.; Johnson, B.D.; Ting, Y.S.; Bonaca, A.; Dotter, A.; Speagle, J.S. MINESweeper: Spectrophotometric Modeling of Stars in the Gaia Era. Astrophys. J. 2020, 900, 28. [Google Scholar] [CrossRef]
  163. Claytor, Z.R.; van Saders, J.L.; Llama, J.; Sadowski, P.; Quach, B.; Avallone, E.A. Recovery of TESS Stellar Rotation Periods Using Deep Learning. Astrophys. J. 2022, 927, 219. [Google Scholar] [CrossRef]
  164. Johnson, J.E.; Sundaresan, S.; Daylan, T.; Gavilan, L.; Giles, D.K.; Silva, S.I.; Jungbluth, A.; Morris, B.; Muñoz-Jaramillo, A. Rotnet: Fast and scalable estimation of stellar rotation periods using convolutional neural networks. arXiv 2020, arXiv:2012.01985. [Google Scholar]
  165. Rui, W.; Luo, A.L.; Shuo, Z.; Wen, H.; Bing, D.; Yihan, S.; Kefei, W.; Jianjun, C.; Fang, Z.; Li, Q.; et al. Analysis of Stellar Spectra from LAMOST DR5 with Generative Spectrum Networks. Publ. Astron. Soc. Pac. 2019, 131, 024505. [Google Scholar] [CrossRef]
  166. Minglei, W.; Jingchang, P.; Zhenping, Y.; Xiaoming, K.; Yude, B. Atmospheric parameter measurement of Low-S/N stellar spectra based on deep learning. Optik 2020, 218, 165004. [Google Scholar] [CrossRef]
  167. Zhang, B.; Liu, C.; Deng, L.C. Deriving the stellar labels of LAMOST spectra with the Stellar LAbel Machine (SLAM). Astrophys. J. Suppl. Ser. 2020, 246, 9. [Google Scholar] [CrossRef]
  168. Li, X.; Lin, B. Estimating stellar parameters from LAMOST low-resolution spectra. Mon. Not. R. Astron. Soc. 2023, 521, 6354–6367. [Google Scholar] [CrossRef]
  169. Bai, Y.; Liu, J.; Bai, Z.; Wang, S.; Fan, D. Machine-learning regression of stellar effective temperatures in the second gaia data release. Astron. J. 2019, 158, 93. [Google Scholar] [CrossRef]
  170. Yang, L.; Yuan, H.; Xiang, M.; Duan, F.; Huang, Y.; Liu, J.; Beers, T.C.; Galarza, C.A.; Daflon, S.; Fernández-Ontiveros, J.A.; et al. J-PLUS: Stellar parameters, C, N, Mg, Ca, and [α/Fe] abundances for two million stars from DR1. Astron. Astrophys. 2022, 659, A181. [Google Scholar] [CrossRef]
  171. Wang, R.; Luo, A.L.; Chen, J.J.; Hou, W.; Zhang, S.; Zhao, Y.H.; Li, X.R.; Hou, Y.H.; LAMOST MRS Collaboration. SPCANet: Stellar parameters and chemical abundances network for LAMOST-II medium resolution survey. Astrophys. J. 2020, 891, 23. [Google Scholar] [CrossRef]
  172. Chen, S.X.; Sun, W.M.; He, Y. Application of Random Forest Regressions on Stellar Parameters of A-type Stars and Feature Extraction. Res. Astron. Astrophys. 2022, 22, 025017. [Google Scholar] [CrossRef]
  173. Li, Y.B.; Luo, A.L.; Du, C.D.; Zuo, F.; Wang, M.X.; Zhao, G.; Jiang, B.W.; Zhang, H.W.; Liu, C.; Qin, L.; et al. Carbon stars identified from LAMOST DR4 using machine learning. Astrophys. J. Suppl. Ser. 2018, 234, 31. [Google Scholar] [CrossRef]
  174. Wang, K.; Qiu, B.; Luo, A.L.; Ren, F.; Jiang, X. ESNet: Estimating Stellar Parameters from LAMOST Low-Resolution Stellar Spectra. Universe 2023, 9, 416. [Google Scholar] [CrossRef]
  175. Hippler, S. Adaptive optics for extremely large telescopes. J. Astron. Instrum. 2019, 8, 1950001. [Google Scholar] [CrossRef]
  176. Buscher, D.F.; Creech-Eakman, M.; Farris, A.; Haniff, C.A.; Young, J.S. The conceptual design of the Magdalena ridge observatory interferometer. J. Astron. Instrum. 2013, 2, 1340001. [Google Scholar] [CrossRef]
  177. Eisenhauer, F.; Monnier, J.D.; Pfuhl, O. Advances in Optical/Infrared Interferometry. Annu. Rev. Astron. Astrophys. 2023, 61, 237–285. [Google Scholar] [CrossRef]
  178. Böker, T.; Arribas, S.; Lützgendorf, N.; de Oliveira, C.A.; Beck, T.; Birkmann, S.; Bunker, A.; Charlot, S.; de Marchi, G.; Ferruit, P.; et al. The near-infrared spectrograph (nirspec) on the james webb space telescope-iii. integral-field spectroscopy. Astron. Astrophys. 2022, 661, A82. [Google Scholar] [CrossRef]
  179. Magnier, E.A.; Chambers, K.; Flewelling, H.; Hoblitt, J.; Huber, M.; Price, P.; Sweeney, W.; Waters, C.; Denneau, L.; Draper, P.; et al. The Pan-STARRS data-processing system. Astrophys. J. Suppl. Ser. 2020, 251, 3. [Google Scholar] [CrossRef]
  180. Chen, C.; Li, Z.; Liu, J.; Han, Z.; Yuan, X. Optical design for SiTian project. In Proceedings of the Optical Design and Testing XII; SPIE: Bellingham, WA, USA, 2022; Volume 12315, pp. 16–22. [Google Scholar]
  181. Grundahl, F.; Christensen-Dalsgaard, J.; Pallé, P.L.; Andersen, M.F.; Frandsen, S.; Harpsøe, K.; Jørgensen, U.G.; Kjeldsen, H.; Rasmussen, P.K.; Skottfelt, J.; et al. Stellar observations network group: The prototype is nearly ready. Proc. Int. Astron. Union 2013, 9, 69–75. [Google Scholar] [CrossRef]
  182. Halferty, G.; Reddy, V.; Campbell, T.; Battle, A.; Furfaro, R. Photometric characterization and trajectory accuracy of Starlink satellites: Implications for ground-based astronomical surveys. Mon. Not. R. Astron. Soc. 2022, 516, 1502–1508. [Google Scholar] [CrossRef]
  183. Hainaut, O.R.; Williams, A.P. Impact of satellite constellations on astronomical observations with ESO telescopes in the visible and infrared domains. Astron. Astrophys. 2020, 636, A121. [Google Scholar] [CrossRef]
  184. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  185. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. Llama 2: Open foundation and fine-tuned chat models. arXiv 2023, arXiv:2307.09288. [Google Scholar]
  186. Bubeck, S.; Chandrasekaran, V.; Eldan, R.; Gehrke, J.; Horvitz, E.; Kamar, E.; Lee, P.; Lee, Y.T.; Li, Y.; Lundberg, S.; et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv 2023, arXiv:2303.12712. [Google Scholar]
  187. Beltagy, I.; Lo, K.; Cohan, A. SciBERT: A pretrained language model for scientific text. arXiv 2019, arXiv:1903.10676. [Google Scholar]
  188. Thirunavukarasu, A.J.; Ting, D.S.J.; Elangovan, K.; Gutierrez, L.; Tan, T.F.; Ting, D.S.W. Large language models in medicine. Nat. Med. 2023, 29, 1930–1940. [Google Scholar] [CrossRef] [PubMed]
  189. Meskó, B.; Topol, E.J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. Npj Digit. Med. 2023, 6, 120. [Google Scholar] [CrossRef]
  190. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
Figure 1. Classes of telescope entities.
Figure 2. The environments of some sites located in high-altitude areas or Antarctica.
Figure 3. This method uses the output of the CNN (convolutional neural network) model as input for the Transformer model to achieve cloud classification and recognition in all-sky camera imagery [33].
Figure 4. Schematic diagram of the structure of the Earth’s atmosphere.
Figure 5. In-focus and defocused star images and aberrated wavefront maps are used to train the Bi-GRU network, and the trained network is used to predict wavefront maps [64].
Figure 6. In the framework of the telescope maintenance support system, the left part realizes imaging quality monitoring, and the right part realizes fault diagnosis [84].
Figure 7. The baseline configuration is all open. After restricting ourselves to a subset of ID “togglings”, the improvement over the measured MPIQ values is plotted [17].
Figure 8. A visualization of the training and testing process on the turbulence-like dataset described in [97].
Figure 9. A combination of three networks to jointly predict morphology and photo-z [146].
Figure 10. The right panel shows the number of publications and the accompanying citations pertaining to database data labeling, while the left panel shows the citation counts and publication numbers for the other research domains. The citation counts for articles published in the last five years indicate that, aside from database calibration, AI applications in adaptive optics and site seeing assessment are currently research hotspots.
Table 1. Input parameters, output parameters, and accuracy of different methods.
Method | Input Parameters | Output Parameters | Statistical Operators
MLP [41] | Temperature, relative humidity, and pressure at a height of 2 m; potential temperature gradient and wind shear at a height of 15 m | C_N^2 | R^2 = 0.87, weekly a
RF [42] | Surface station: dew point temperature, pressure, wind speed, relative humidity, etc. | log(C_N^2) | MSE = 0.09 b
Optimized BP [43] | Surface pressure, temperature at heights of 0.5 m and 2 m, relative humidity at 0.5 m and 2 m, wind speed at heights of 0.5 m and 2 m, and snow surface temperature | log(C_N^2) | R_xy = 0.9323 and RMSE = 0.2367 c
DNN [44] | Simulated C_N^2 from laser beam intensity scintillation patterns | C_N^2 | C_N^2/C_N,0^2: RMSE = 0.072 and std = 0.06 d
GA-BP [45] | Vertical profiles from sounding balloon: height, pressure, temperature, wind speed, wind shear, and temperature gradient | log(C_N^2) | RMSE < 1.4
RF and MLP [48] | Seeing, surface atmospheric parameters (pressure, temperature, wind, humidity, etc.) | Seeing | RMSE < 0.27; 2 h
RF [49] | Ground parameters (wind, temperature, relative humidity, pressure), seeing, isoplanatic angle, etc. | Seeing | Pearson correlation coefficient 0.8 at start time
K-means [51] | Free-atmosphere seeing, vertical profiles of wind velocity and wind shear from GFS, etc. | Total and free-atmosphere seeing for the next 5 days | RMSE < 0.25
RF [53] | Seeing, wavefront coherence time, isoplanatic angle, ground-layer fraction, and atmospheric parameters (temperature, relative humidity, wind speed, and direction) | Seeing | RMSE = 0.24; 1 h. RMSE = 0.32; 2 h
LSTM and GPR [54] | Wind speed and temperature gradient at heights of 2 m, 4 m, 6 m, 8 m, 10 m, and 12 m | Seeing | RMSE = 0.14; 10 min
a R2: correlation coefficient; b MSE: mean square error; c Rxy: correlation coefficient; d std: standard deviation; RMSE: root-mean-square error.
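The statistical operators footnoted in Table 1 (R^2, MSE, RMSE) are standard regression metrics. As a minimal illustration of how they are computed, the following sketch evaluates hypothetical log(C_N^2) predictions; the values are invented for illustration and are not results from any cited paper.

```python
import math

def mse(y_true, y_pred):
    """Mean square error (footnote b of Table 1)."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root-mean-square error (RMSE)."""
    return math.sqrt(mse(y_true, y_pred))

def r_squared(y_true, y_pred):
    """R^2 score (footnote a of Table 1): 1 minus the ratio of
    residual to total sum of squares."""
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical log(C_N^2) values, for illustration only:
y_true = [-15.2, -14.8, -15.5, -15.0]
y_pred = [-15.1, -14.9, -15.4, -15.2]
```

Lower MSE/RMSE and R^2 closer to 1 indicate a better fit, which is how the rows of Table 1 can be compared against one another despite the differing inputs.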
Table 2. Classification of database and representative catalogues.
Catalog Database | Volume | Representative Catalogues
I. Astrometric Data | 1136 | AGK3 Catalogue (I/61B); UCAC3 Catalogue (I/315)
II. Photometric Data | 747 | General Catalog of Variable Stars, 4th Ed. (II/139B); BATC–DR1 (II/262)
III. Spectroscopic Data | 291 | Catalogue of Stellar Spectral Classifications (III/233B); Spectral Library of Galaxies, Clusters and Stars (III/219)
IV. Cross-Identifications | 19 | SAO-HD-GC-DM Cross Index (IV/12); HD-DM-GC-HR-HIP-Bayer-Flamsteed Cross Index (IV/27A)
V. Combined Data | 554 | The SDSS Photometric Catalogue, Release 12 (V/147); LAMOST DR5 catalogs (V/164)
VI. Miscellaneous | 379 | Atomic Spectral Line List (VI/69); Plate Centers of POSS-II (VI/114)
VII. Non-stellar Objects | 292 | NGC 2000.0 (VII/118); SDSS DR5 quasar catalog (VII/252)
VIII. Radio and Far-IR Data | 99 | The 3C and 3CR Catalogues (VIII/1A); Miyun 232 MHz survey (VIII/44)
IX. High-Energy Data | 47 | Wisconsin soft X-ray diffuse background all-sky survey (IX/1)
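The identifiers in Table 2 follow the VizieR convention of a Roman-numeral section (matching the catalogue class) followed by a slash and a designation, e.g. "I/61B". A small illustrative parser for that convention (the function name and tuple layout are our own, not part of any library):

```python
# Roman-numeral sections used by the nine catalogue classes in Table 2.
ROMAN_SECTIONS = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5,
                  "VI": 6, "VII": 7, "VIII": 8, "IX": 9}

def parse_catalog_id(cat_id):
    """Split a VizieR-style identifier such as 'I/61B' into its
    class number (from the Roman-numeral section) and designation."""
    section, _, designation = cat_id.partition("/")
    return ROMAN_SECTIONS[section], designation

# e.g. parse_catalog_id("V/164") yields class 5 ("Combined Data")
#      with designation "164" (the LAMOST DR5 catalogs).
```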
Table 3. Classification of various research directions based on proposed criteria.
Item | Time Cost | Accuracy | Level
Site Seeing Estimate and Prediction | 0 | 0 | 0
Assessment of Site Observation Conditions | 1 | −1 | 0
Optimization of Dome Seeing | 1 | 1 | 2
Adaptive Optics | 1 | 0 | 1
Optical Path Calibration | 1 | 0 | 1
Mirror Surface Calibration | 1 | 0 | 1
Observation Schedule | 1 | 1 | 2
Fault Diagnosis | 1 | 0 | 1
Database Data Fusion | 1 | 0 | 1
Data Classification | 1 | 1 | 2
Preselected Quasar Candidates | 1 | 1 | 2
Photometric Infrared Evaluation | 1 | 1 | 2
Stellar Parameter Measurements | 1 | 1 | 2
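Every row of Table 3 is consistent with the Level score being the sum of the Time Cost and Accuracy scores; this is an observed pattern in the table, not a formula stated by the authors. Under that reading, the classification can be reproduced as:

```python
# (time_cost, accuracy) scores for a few directions from Table 3.
scores = {
    "Site Seeing Estimate and Prediction": (0, 0),
    "Assessment of Site Observation Conditions": (1, -1),
    "Optimization of Dome Seeing": (1, 1),
    "Adaptive Optics": (1, 0),
    "Observation Schedule": (1, 1),
}

# Level appears to aggregate the two criteria by simple addition,
# so a direction that saves time AND improves accuracy ranks highest.
levels = {item: time_cost + accuracy
          for item, (time_cost, accuracy) in scores.items()}
```

With this rule, directions such as dome-seeing optimization and observation scheduling (Level 2) outrank those that only save time without an accuracy gain (Level 1), matching the table.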