Article

Best Scanline Determination of Pushbroom Images for a Direct Object to Image Space Transformation Using Multilayer Perceptron

by Seyede Shahrzad Ahooei Nezhad 1, Mohammad Javad Valadan Zoej 1,*, Kourosh Khoshelham 2, Arsalan Ghorbanian 1, Mahdi Farnaghi 3, Sadegh Jamali 4, Fahimeh Youssefi 1,5 and Mehdi Gheisari 5,6,7

1 Department of Photogrammetry and Remote Sensing, Faculty of Geodesy and Geomatics Engineering, K. N. Toosi University of Technology, Tehran 19967-15443, Iran
2 Department of Infrastructure Engineering, University of Melbourne, Melbourne, VIC 3010, Australia
3 Department of Geo-Information Processing (GIP), Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7522 NH Enschede, The Netherlands
4 Department of Technology and Society, Faculty of Engineering, Lund University, P.O. Box 118, 221 00 Lund, Sweden
5 Institute of Artificial Intelligence, Shaoxing University, 508 West Huancheng Road, Yuecheng District, Shaoxing 312000, China
6 Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai 602105, India
7 Department of Computer Science, Damavand Branch, Islamic Azad University, Damavand 39718-78911, Iran
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(15), 2787; https://doi.org/10.3390/rs16152787
Submission received: 13 May 2024 / Revised: 25 July 2024 / Accepted: 27 July 2024 / Published: 30 July 2024

Abstract: Working with pushbroom imagery in photogrammetry and remote sensing presents a fundamental challenge in object-to-image space transformation. This transformation requires an accurate estimation of the Exterior Orientation Parameters (EOPs) for each scanline. To tackle this challenge, Best Scanline Search or Determination (BSS/BSD) methods have been developed. However, the current BSS/BSD methods are not efficient for real-time applications due to their complex procedures and interpolations. This paper introduces a new non-iterative BSD method specifically designed for line-type pushbroom images. The method involves simulating two sets of points, Simulated Control Points (SCOPs) and Simulated Check Points (SCPs), to train and test a Multilayer Perceptron (MLP) model. The model establishes a strong relationship between object and image spaces, enabling a direct transformation and determination of best scanlines. The proposed method does not rely on the Collinearity Equation (CE) or an iterative search. After training, the MLP model is applied to the SCPs for accuracy assessment. The proposed method is tested on ten images with diverse landscapes captured by eight sensors, exploiting five million SCPs per image for statistical assessments. The Root Mean Square Error (RMSE) values range between 0.001 and 0.015 pixels across the ten images, demonstrating that the desired sub-pixel accuracy can be achieved within a few seconds. The proposed method is compared with conventional and state-of-the-art BSS/BSD methods, demonstrating its superior accuracy and computational efficiency. These results position the proposed BSD method as a practical solution for object-to-image space transformation, especially for real-time applications.

1. Introduction

Linear pushbroom imaging sensors have become the most widely used optical satellite sensors due to modern high-resolution imaging technology [1,2]. These sensors capture high-resolution images with high revisit frequency and large coverage area, making them suitable for different photogrammetric and remote sensing tasks [3,4]. As a result, they are extensively used for Earth’s surface imaging in diverse applications, such as land cover classification, three-dimensional (3D) reconstruction, and environmental monitoring [5,6].
Linear pushbroom images have a more complex geometry and acquisition characteristics compared to frame-type images, such as aerial photographs [7]. In particular, each scanline of a linear pushbroom image has its own Exterior Orientation Parameters (EOPs) because it is captured at a specific exposure time [8]. This is different from frame-type images, which have a single set of EOPs [9,10,11]. As a result, the perspective center position and rotation angles of pushbroom images change scanline by scanline [7,12], making the object-to-image space transformation a complicated process [13,14]. When using a rigorous sensor model such as the Collinearity Equation (CE), physical parameters of a pushbroom image, including Interior Orientation Parameters (IOPs) and EOPs, are required to describe the imaging geometry on a line-by-line basis [15,16]. This means that the problem of the Best Scanline Search/Determination (BSS/BSD), or equivalently determining the exact time of exposure [17] must be solved to obtain the EOPs for each scanline. The process of transforming a particular point on the object/ground to its corresponding point in an image, known as object-to-image space transformation, is essential in photogrammetric applications like stereoscopic measurements, rectifying pushbroom images, epipolar resampling, and orthoimage generation [18,19]. Achieving sub-pixel accuracy is crucial for this transformation to ensure the accuracy of the resulting products [20].
Various BSS/BSD methods have been developed to perform the object-to-image transformation for linear pushbroom images. The Sequential Search (SS) approach iteratively searches for the specific scanline of a ground point through the CE, which is inefficient due to the high computational cost of performing multiple transformations between object and image spaces per line. The Bisecting Window Search (BWS) method, introduced by [21], reduces the SS search space by repeatedly halving the image space and using the CE for object-to-image space transformation. However, the BWS method still requires an iterative procedure and is not efficient for practical applications. The Newton Raphson (NR) method proposed by [22] is another iterative BSS method that uses the mathematical Newton Raphson root-finding approach to account for pushbroom sensors' characteristics. The Central Perspective Plane (CPP) approach proposed by [23] has proven effective for aerial pushbroom images; however, its assumptions are violated when working with distorted satellite images, and its efficiency is notably reduced [24]. This is because the CPP of the scanline becomes a 3D curved surface instead of a 3D plane. Therefore, it is necessary to divide the curved linear array into numerous short line segments. This segmentation enables the precise identification of the accurate CPP corresponding to the ground point [25]. The iterative process of establishing the relationship between each ground point and the CPP, as well as the computation of the best scanline through the CE during the refinement stage, make the CPP-based method relatively challenging to implement.
In the General Distance Prediction (GDP) approach proposed by [26], the ground point is back-projected to image space using the EOPs of the first scanline; then the GDP, the distance between the projected image point and the first line of the image, is calculated and used to refine specific equations. This iterative procedure continues until the stopping criterion is met, and the best scanline corresponding to the ground point is determined by linear interpolation. The method operates iteratively with several interpolations and thus requires moderate computation time. Overall, both the iterative process and the use of the CE to search for the best scanline impose a heavy computational burden on all these methods.
More recently, Ahooei Nezhad et al. [13] proposed two non-iterative, three-stage BSD methods called Optimal Global Polynomial (OGP) and Artificial Neural Network (ANN). These methods use the CE and geometric calculations to obtain sub-pixel accuracies within a short time. Simulated points generated by the CE are used to model the errors between the exact and estimated scanlines of these points through geometric interpolation. The OGP model then utilizes a fifth-degree polynomial and a genetic algorithm to compress the model and improve the accuracy of the estimated scanline following the geometric interpolation of any given ground point. The ANN model, on the other hand, uses a different structure based on image characteristics for the refinement stage. While these methods do not involve iteration, they are relatively complex due to their three-stage procedure and use of geometric calculations.
This paper introduces a new approach for object-to-image space transformation that uses a Multilayer Perceptron (MLP) Machine Learning (ML) algorithm to address the BSD problem. The method uses the CE to accurately simulate points across pushbroom images. A portion of these points is used to train the MLP to establish a relationship between object and image space, which can then be used to directly determine the best scanline for any arbitrary ground point. The proposed method was tested on ten images with diverse characteristics, and a comparison with previous methods was conducted to assess its accuracy and computational efficiency. This paper is divided into five sections. Section 1 gives an overview of the BSS/BSD field and its importance in photogrammetry. Section 2 presents the datasets, outlines the mathematical models, and describes the proposed method in detail, along with the prerequisite concepts of the MLP. Section 3 reports the experimental results of the proposed method, Section 4 provides a discussion, and Section 5 draws conclusions.

2. Materials and Methods

A diagram of the proposed method is illustrated in Figure 1. In addition, the inputs and outputs of each step of the proposed method are provided in Table 1. The proposed method consists of two general steps. In the first step, the MLP network is trained using Simulated Control Points (SCOPs) to establish the relation between the object/ground and image spaces. In the second step, the model is evaluated using Simulated Check Points (SCPs), as arbitrary ground points, through the transformation from the ground space to the image space and the determination of the corresponding best scanlines. More details of the proposed method are provided in Section 2.4.

2.1. Dataset

Ten satellite images acquired by eight different sensors, whose specifications are provided in Table 2, were employed to evaluate the proposed BSD method. The experiments were carried out on images captured by various linear array pushbroom sensors, including IKONOS, Pleiades 1A, Pleiades 1B, QuickBird, SPOT 6, SPOT 7, Worldview 1, and Worldview 2. These images covered several parts of the world, including Sao Paulo (Brazil), Melbourne (Australia), Annapolis (USA), Jaipur (India), Jaicos (Brazil), Amsterdam (Netherlands), Curitiba (Brazil), Boulder (USA), Sydney (Australia), and San Diego (USA). Ten images with diverse sensor characteristics (e.g., dimension and spatial resolution) and distinct landscapes and topographic conditions (i.e., urban areas, flat areas, agricultural areas, and a mixture of all) were considered to conduct a robust assessment (see Figure 2). In particular, the spatial resolution of images varied between 0.5 m and 6 m. Note that all the images used in this study were accompanied by the Rational Polynomial Coefficients (RPC) files; the required Ground Control Points (GCPs) were obtained by these auxiliary files. Additionally, elevation values of the GCPs and the mean height of the study areas were extracted from the available Digital Elevation Model (DEM) sources.

2.2. Mathematical Models

To describe the dynamic geometry of satellite images, mathematical models have been developed that relate the two-dimensional (2D) image space to the 3D object/ground space [27,28,29]. The well-known CE is a commonly used mathematical model for linear array pushbroom images in photogrammetry and can be expressed by Equation (1) [30].
$$
x = -f\,\frac{r_{11}^{i}(X - X_{S}^{i}) + r_{12}^{i}(Y - Y_{S}^{i}) + r_{13}^{i}(Z - Z_{S}^{i})}{r_{31}^{i}(X - X_{S}^{i}) + r_{32}^{i}(Y - Y_{S}^{i}) + r_{33}^{i}(Z - Z_{S}^{i})} = 0, \qquad
y = -f\,\frac{r_{21}^{i}(X - X_{S}^{i}) + r_{22}^{i}(Y - Y_{S}^{i}) + r_{23}^{i}(Z - Z_{S}^{i})}{r_{31}^{i}(X - X_{S}^{i}) + r_{32}^{i}(Y - Y_{S}^{i}) + r_{33}^{i}(Z - Z_{S}^{i})} \quad (1)
$$
where $(x, y)$ are the 2D image point coordinates, $f$ is the focal length, $(X, Y, Z)$ are the 3D ground point coordinates, $i$ is the scanline number, $(X_{S}^{i}, Y_{S}^{i}, Z_{S}^{i})$ are the object-space coordinates of the perspective center, and $(r_{11}^{i}, \ldots, r_{33}^{i})$ are the rotation matrix elements, formed from the EOP rotation angles $\omega$, $\varphi$, and $\kappa$. It is worth noting that $x$ is theoretically equal to zero according to the characteristics of linear array pushbroom imagery.
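As an illustration of Equation (1), the sketch below projects a ground point through the CE of a single scanline. The rotation order (R = R_kappa R_phi R_omega) and the negative focal-length factor are assumptions following the standard collinearity form, not details stated in the paper.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix built from the EOP angles (assumed order R_k @ R_p @ R_o)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])
    R_phi = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    R_kappa = np.array([[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return R_kappa @ R_phi @ R_omega

def collinearity_xy(ground, center, angles, f):
    """Project one ground point through the CE of scanline i.

    For a point actually imaged on this scanline, x should be ~0;
    y is the cross-track image coordinate."""
    u = rotation_matrix(*angles) @ (np.asarray(ground, float) - np.asarray(center, float))
    return -f * u[0] / u[2], -f * u[1] / u[2]
```

With zero rotation angles and a perspective center directly above the along-track axis, the x component vanishes, matching the pushbroom constraint x = 0 in Equation (1).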
In the practical processing of pushbroom images, when using the CE, the reliable EOPs of each scanline must be obtained during the space resection [30]. In this paper, the Multiple Projection Center (MPC) model was employed for EOPs’ calculation, the equations of which are given in Equation (2) [7,31].
$$
\begin{aligned}
X_{S}^{i}(t) &= X_0 + X_1 t_i + X_2 t_i^2 &\qquad \omega_{S}^{i}(t) &= \omega_0 + \omega_1 t_i + \omega_2 t_i^2 \\
Y_{S}^{i}(t) &= Y_0 + Y_1 t_i + Y_2 t_i^2 &\qquad \varphi_{S}^{i}(t) &= \varphi_0 + \varphi_1 t_i + \varphi_2 t_i^2 \\
Z_{S}^{i}(t) &= Z_0 + Z_1 t_i + Z_2 t_i^2 &\qquad \kappa_{S}^{i}(t) &= \kappa_0 + \kappa_1 t_i + \kappa_2 t_i^2
\end{aligned} \quad (2)
$$
where $(X_{S}^{i}(t), Y_{S}^{i}(t), \ldots, \kappa_{S}^{i}(t))$ are the $i$th scanline's EOPs, $(X_0, X_1, \ldots, \kappa_2)$ are the polynomial coefficients obtained for the reference scanline, and $t_i$ is the $i$th scanline's exposure time, which can be used as a substitute for the along-track coordinate of the satellite or the scanline number [17]. The availability of the EOPs of each scanline, together with the other specifications of the pushbroom sensor, enables the CE to transfer any ground point to its corresponding image point. At the space resection stage based on the MPC model, appropriate control points with known image and ground coordinates must be available.
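Evaluating Equation (2) is a matter of applying the same quadratic polynomial in the exposure time to each EOP; a minimal sketch, where the coefficient values are made up purely for illustration:

```python
# Sketch of Equation (2): each EOP of scanline i is a quadratic polynomial
# in the exposure time t_i.
def mpc_eops(coeffs, t):
    """coeffs maps each EOP name to its (c0, c1, c2) reference coefficients."""
    return {name: c0 + c1 * t + c2 * t ** 2 for name, (c0, c1, c2) in coeffs.items()}

# Hypothetical reference-scanline coefficients (X_0, X_1, X_2), (omega_0, ...), etc.
example_coeffs = {
    "Xs": (1.0, 2.0, 3.0),
    "omega": (0.01, -0.001, 0.0001),
}
eops_at_t2 = mpc_eops(example_coeffs, 2.0)
```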

2.3. Simulated Points Generation

Two categories of simulated points, SCOPs and SCPs, were generated. The SCOPs and SCPs were simulated as a regular grid in the image space to comply with suitable distribution across images. Thereupon, these image points were mapped to the object space by employing the CE and the mean height of the study area. This projection step required the EOPs of each scanline, which were computed through the MPC model (Equation (2)). Hence, the image and object/ground coordinates and the exact scanline number (r) of SCOPs and SCPs were available. The SCOPs were used in the training step of the MLP, and the SCPs were used to evaluate the proposed BSD approach.
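The regular-grid layout of the simulated points can be sketched as below; the image dimensions are illustrative, and the subsequent back-projection to the ground via the CE and the mean terrain height is omitted here.

```python
import numpy as np

def simulate_image_grid(n_rows, n_cols, image_rows, image_cols):
    """Regular grid of simulated image points.

    Each returned row is (r, c): r is the exact scanline number kept as
    ground truth for the point, c is the column coordinate."""
    r = np.linspace(0, image_rows - 1, n_rows)
    c = np.linspace(0, image_cols - 1, n_cols)
    rr, cc = np.meshgrid(r, c, indexing="ij")
    return np.column_stack([rr.ravel(), cc.ravel()])

# e.g., a 5 x 4 grid over a hypothetical 2000 x 12000 pushbroom image
scops = simulate_image_grid(5, 4, 2000, 12000)
```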

2.4. MLP Model

MLP models are widely recognized and used more often than other types of neural networks in various problem domains [32]. The MLP algorithm is a type of supervised machine learning that consists of interconnected elements called neurons, arranged in three distinct layers: input, hidden, and output [33,34]. Information is transmitted from the input layer to the output layer through the intermediary hidden layer. Each neuron within a layer forms complete connections with neurons in adjacent layers, and these connections are represented as weights during the computational process [32]. Each neuron maps multiple inputs to an output [35,36] using an activation function [37]. The number of independent variables in the model determines the number of neurons allocated to the input layer, while the number of neurons in the output layer corresponds to the count of dependent variables. The output layer can consist of a single neuron or multiple neurons. The number of neurons in the hidden layer of the MLP model depends on the network’s ability to model nonlinear functions [38]. MLP networks can effectively model and approximate both linear and nonlinear functions [39]. The MLP model establishes a connection between inputs and outputs by adjusting the weighted connections between neurons through the error back-propagation technique during the training process to minimize discrepancies between the anticipated target values and those generated by the model [32,33]. If the errors exceed a predefined threshold, weight adjustments are made to reduce these discrepancies. To define an MLP structure, important parameters such as the number of neurons, hidden layers, learning algorithm, and activation function must be considered [33]. In this study, several MLP topologies were tested to find the most accurate one for transforming object-to-image space. 
Only the Levenberg–Marquardt algorithm and the sigmoid function were used as the back-propagation learning algorithm and activation function, respectively, due to their efficiency and ease of implementation [13,40]. The Levenberg–Marquardt algorithm has been identified as a suitable learning algorithm for similar fields [41].
The proposed BSD method involves using SCOPs to train MLP models. The inputs are ground coordinates, and the output is the corresponding exact scanline to establish a relationship between object and image spaces (Equation (3)). The trained MLP models can be used to determine the scanline number of any ground points without the CE and iterative procedure. The trained MLP models were applied to SCPs, and the estimated scanline numbers were then compared with the exact scanline numbers to compute statistical assessment criteria.
$$
\text{Scanline number } (r) = f(\text{Ground coordinates } (X, Y)) \quad (3)
$$
In Equation (3), $X$ and $Y$ are the ground point coordinates, $f$ is the MLP model, and $r$ is the row/scanline number. It is worth noting that a full object-to-image transformation involves the coordinates (X, Y, Z) and (r, c), which connect the 3D ground space to the 2D image space. However, Equation (3) lacks the third ground component because the average height of the study area, which remains constant for all ground points, is used to simplify computation. The equation also omits the second image component (c), since the primary challenge in object-to-image transformation is calculating the scanline number.
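The training step of Equation (3) can be sketched as a one-hidden-layer regression from (X, Y) to r. Note the substitutions: the paper trains with the Levenberg-Marquardt algorithm and a sigmoid activation, but scikit-learn has no LM solver, so `lbfgs` with the `logistic` activation is used here as a stand-in, and the SCOPs are replaced by synthetic points with a made-up smooth ground-to-scanline relation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_bsd_mlp(ground_xy, scanline, n_neurons=8):
    """One-hidden-layer MLP mapping ground (X, Y) to scanline number r."""
    mlp = MLPRegressor(hidden_layer_sizes=(n_neurons,), activation="logistic",
                       solver="lbfgs", max_iter=5000, random_state=0)
    mlp.fit(ground_xy, scanline)
    return mlp

# Synthetic stand-in for SCOPs: a smooth, hypothetical (X, Y) -> r relation.
rng = np.random.default_rng(0)
scops_xy = rng.uniform(0.0, 1.0, (200, 2))
scops_r = 0.8 * scops_xy[:, 0] + 0.2 * scops_xy[:, 1]

model = fit_bsd_mlp(scops_xy, scops_r)
pred = model.predict(scops_xy)
fit_rmse = float(np.sqrt(np.mean((pred - scops_r) ** 2)))
```

Once trained, `model.predict` determines the scanline of any ground point directly, with no CE evaluation or iterative search.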

2.5. Accuracy Assessment

As noted earlier, the SCPs were used to evaluate the performance of the proposed method. In this regard, the estimated scanline number obtained by the MLP models was compared with the exact scanline number known from the simulation phase. Two statistical measures, the Root Mean Square Error (RMSE) and drmax (the maximum error in scanline number determination), were computed according to Equations (4) and (5). Additionally, the computational time of the proposed method was measured to assess its time efficiency for real-time applications. Finally, the obtained results were compared with those computed by several other BSS/BSD methods to investigate the superiority of the proposed method.
$$
RMSE = \sqrt{\frac{dr^{T} \cdot dr}{n(dr)}} \quad (4)
$$
$$
dr = \left| r_{exact} - r_{estimated} \right| \quad (5)
$$
In Equations (4) and (5), $r_{estimated}$ is the scanline value calculated by the MLP model, $r_{exact}$ is the exact scanline value obtained from the simulation phase, $dr$ is the vector of absolute differences between these two values, and $n(dr)$ is the number of $dr$ values, i.e., the number of check points in each image.
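Equations (4) and (5) amount to a vector norm and a maximum over the per-point scanline errors; a compact sketch:

```python
import numpy as np

def bsd_errors(r_exact, r_estimated):
    """RMSE and drmax of Equations (4)-(5) over the check points."""
    dr = np.abs(np.asarray(r_exact, float) - np.asarray(r_estimated, float))
    rmse = np.sqrt(dr @ dr / dr.size)  # sqrt(dr^T . dr / n(dr))
    return float(rmse), float(dr.max())
```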
The SCOPs and SCPs were created separately in regular grids across each image, as outlined in Section 2.3. In the evaluation stage, five million SCPs were considered. During the training phase, five different sets of SCOPs, comprising 10, 20, 30, 50, and 100 points, respectively, were taken into account. The SCPs served as check points for measuring criteria such as RMSE and drmax. The SCOPs were randomly divided into two groups, 70% for training and 30% for validation, and were then used in the training phase of the MLP algorithm.

3. Results

During the training phase of MLP models, the structure of the model and the number of SCOPs used have a significant impact on performance. To identify the most suitable MLP structure, a grid search was conducted with the number of layers ranging from one to five (in steps of one) and the number of neurons from 10 to 50 (in steps of ten). Increasing the number of layers and neurons results in a more complex MLP model with a larger number of unknown parameters (ranging between 41 and 10,401), which requires more SCOPs. The study employed several sets of SCOPs of different sizes (10, 20, 30, 50, 100, 300, 500, and 1000 points) to explore the impact of SCOP set size. Although this led to 1000 cases for each image, only feasible cases (i.e., those achieving sub-pixel accuracy) were considered to ensure convergence with respect to the number of SCOPs and unknown parameters.
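The quoted parameter range (41 to 10,401 unknowns) is consistent with counting weights and biases for a network with two inputs (X, Y) and one output (r), as in Equation (3); a quick check:

```python
def mlp_param_count(n_hidden_layers, n_neurons, n_in=2, n_out=1):
    """Weights + biases of an MLP with equal-width hidden layers."""
    sizes = [n_in] + [n_neurons] * n_hidden_layers + [n_out]
    # Each layer transition contributes a weight matrix (a*b) plus a bias vector (b).
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

smallest = mlp_param_count(1, 10)   # one hidden layer, 10 neurons
largest = mlp_param_count(5, 50)    # five hidden layers, 50 neurons each
```

The smallest grid-search topology gives 41 parameters and the largest gives 10,401, matching the range reported in the text.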
Table 3 provides a summary of the statistical results (RMSE and drmax) and computation times (using an Intel Core i7-7500U 2.90 GHz processor) for the ISB image based on MLP models. The evaluations were carried out with five million SCPs, and only the feasible cases that achieved the desired sub-pixel accuracies are displayed in Table 3. The RMSE and drmax values were almost identical (differing by just hundredths and thousandths of a pixel) across all cases, with variations primarily in computation times. Larger MLP models required longer processing times and more SCOPs, resulting in higher complexity. Consequently, MLP models with fewer layers (one), neurons (five to ten), and SCOPs were chosen to minimize unnecessary complexity and reduce the risk of overfitting in subsequent steps [42].
The detailed results for the ISB image are presented here as the outcomes for other images were found to be similar.
The results of the MLP models with one layer and five-to-ten neurons for all the images are shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. Generally, the results indicated that the number of neurons had no significant effect on the RMSE and drmax values, considering the cases that achieved sub-pixel accuracies. To evaluate the performance of the BSD method, five SCOPs sets with 10, 20, 30, 50, and 100 simulated points and five million SCPs per image were used. However, the SCOPs sets with 10 and 20 simulated points failed to establish a transformation between object and image spaces with sub-pixel accuracy, so their results were excluded. This suggests that a limited number of SCOPs cannot be used to relate object and image spaces, even with shallow MLP models. According to the results, even 30 SCOPs were not enough for object-to-image transformation with the desired accuracy. For some images, the obtained RMSE and drmax values were higher than one pixel, which were mapped to one for better representation. When using an MLP with a 5-neuron structure and 30 SCOPs, the ISB, SJB, and SAN images had RMSE values higher than one pixel. This may be due to the lower spatial resolution of these images compared to others. However, the land cover in these images varies, including urban, flat, and mixed areas, so the accuracy is not solely dependent on the land cover. Increasing the number of SCOPs to 100 resolved this issue, providing sub-pixel accuracy in all images. In the case of a 6-neuron structure with 30 or 50 SCOPs, the drmax values of ISB, PMA, PAU, and SJB images were more than one pixel, but using 100 SCOPs achieved the desired sub-pixel accuracy.
Similarly, using a 7-neuron MLP structure resulted in RMSE values of more than one pixel for the SAN, SJB, and ISB images due to their low resolution. The complexity of an 8-neuron MLP network required more SCOPs, and using 30 SCOPs still resulted in RMSE and drmax values of more than one pixel for ISB, PMA, SJB, SAN, and WSU, indicating the insufficiency of this number of SCOPs. Finally, increasing the number of neurons led to convergence issues or very low accuracy due to the increase in unknown parameters and insufficient SCOPs to estimate them. However, the MLP models obtained reasonable RMSE values for the SCB image even when the number of neurons reached ten, possibly due to the smaller dimensions of the image or its coarser spatial resolution. In general, the cases without sub-pixel accuracy increased as the number of neurons increased, because MLP models with more neurons have more unknown parameters, for which 30 SCOPs were insufficient. The issues were resolved when the number of SCOPs reached 50, and almost all MLP models converged with sub-pixel accuracy, except for a few cases in the ISB and PMA images. This might be due to the dimensions of the ISB and PMA images, for which 50 SCOPs were not enough to cover the images appropriately and enable a sub-pixel transformation. The dense buildings in the urban landscape of the ISB image might also introduce additional complexity to the MLP models. The results imply that the desired RMSE and drmax were attained in all images when 100 SCOPs were used in the training phase of the MLP models. In other words, 100 SCOPs were sufficient to achieve sub-pixel accuracy regardless of image dimension, spatial resolution, and landscape characteristics. It should be noted that the MLP models could also attain sub-pixel accuracies when a higher number of SCOPs is used; however, such numbers incur additional computation time and increase the risk of overfitting.
In general, the proposed MLP algorithm can determine the best scanline with very high accuracy on different data with different characteristics by choosing the right network structure and the optimal number of SCOPs.
In this study, the proposed method for BSS/BSD was compared to previous state-of-the-art methods. The comparison was based on several quantitative measures: RMSE, computation time, drmax, and the number of SCOPs (Table 4). The NR method achieved sub-pixel accuracy but had a long computation time due to its iterative root-finding procedure using the CE. The BWS method, despite being an improved version of SS, still required considerable time to determine the best scanline. Additionally, the BWS method could not achieve sub-pixel accuracy according to drmax, although its RMSE values were below one pixel. In contrast, the proposed method achieved sub-pixel accuracy in both RMSE and drmax and required significantly less computation time. The ANN and OGP are two state-of-the-art BSD methods that require SCOPs. Their required number of SCOPs varied between 200 and 700, with 500 being sufficient for most images. The proposed method achieved sub-pixel accuracy with only 100 SCOPs, while the ANN and OGP methods required trial-and-error attempts to identify the optimal number of SCOPs. Moreover, the proposed method required nearly half the computation time of the OGP method. The ANN and the proposed method (both with one hidden layer) obtained similar results, although the proposed method was more efficient, requiring fewer SCOPs and achieving better RMSE and drmax values.

4. Discussion

Efficiency and accuracy are two crucial criteria when evaluating a BSS/BSD method for object-to-image space transformation in linear array pushbroom imagery. Efficiency refers to computation time, while accuracy pertains to the difference between estimated and exact scanline numbers. Sub-pixel accuracy is desired for a BSS/BSD method to derive suitable products in photogrammetric tasks. The proposed BSD method demonstrated superior performance compared to previously well-known and state-of-the-art methods, as shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 and Table 4. Ten images with diverse sensor characteristics (e.g., dimension and spatial resolution) and distinct landscapes and topographic conditions (i.e., urban areas, flat areas, agricultural areas, and a mixture of all) were considered to conduct a robust assessment. Given this variety and the promising results achieved, the proposed method is likely to exhibit broad applicability and generality across other datasets.
This research used the MLP model for the BSD task and also tested other popular ML algorithms, such as Random Forest (RF) [43] and Support Vector Machine (SVM) [44] methods, with various numbers of SCOPs and five million SCPs. However, these methods were unable to establish the relationship between object and image space with sub-pixel precision. The minimum RMSE obtained from these methods was around one hundred pixels, which is unacceptable. This evaluation demonstrates that the MLP method is more effective than the other ML methods.
In this study, the MPC model was used in space resection to compute the EOPs of each scanline for all images. These EOPs were then employed to simulate points. If the EOPs are provided with the images as an auxiliary file, they can be used directly in the BSD method, and the space resection step can be bypassed. The results demonstrated that the number and distribution of SCOPs are crucial for obtaining the desired accuracy: the number of SCOPs should be the minimum required, and they should be distributed across the image in a regular grid of appropriate size. A mismatch between the MLP model and the image and region characteristics, as well as an inappropriate number of SCOPs, are among the factors that produce undesired RMSE and drmax values.
It is worth noting that in pushbroom imagery, the sensor captures the image perpendicular to its flight direction. In theory, the first component of the CE for this kind of sensor should be zero (Equation (1)). In practice, however, a small x value, around 10^-6 or smaller, may be obtained. In such cases, this difference can be ignored, as it will not affect the algorithm.
The ANN and OGP BSD methods developed by [13] performed comparably to the proposed BSD method. However, the proposed method directly determines the best scanline number, making it simpler than the ANN and OGP methods, both of which are three-step algorithms. In particular, the ANN model was used for the refinement of scanline numbers after estimating the initial row and column values of each ground point through an affine transformation and computing its approximate time of exposure using a specific equation. The ANN model then establishes the relation between the initial row and column values of each ground point and the difference between the calculated approximate time and the exact time, which is available from the SCOPs. The ANN method obtained sub-pixel accuracies, but its computation time was, on average, 22% higher than that of the proposed BSD method. The OGP method replaced the ANN with an optimized polynomial expression; the optimization was conducted by a genetic algorithm, which increases the processing time. Although the ANN and MLP methods may seem very similar, the ANN method is part of a three-step BSD algorithm for error modeling, while the MLP method is a single-step neural network whose purpose is to calculate the best scanline directly. Both models have a similar three-layer structure, but due to their different goals, the number of neurons in the hidden layer and the number of SCOPs differ. Given this distinction, we refer to the first method as the ANN method and to the proposed method as the MLP method. Moreover, both the ANN and OGP methods required a higher number of SCOPs to successfully relate object and image spaces with sub-pixel accuracy, while 100 SCOPs were sufficient for the MLP models regardless of image and landscape characteristics. Furthermore, the proposed BSD method makes no specific assumptions and can be employed for linear array pushbroom sensors with varying exposure times.

5. Conclusions

A robust and accurate ground-to-image transformation method is crucial in the geometric processing workflows of linear pushbroom images. Owing to the different imaging geometry of these sensors, transforming coordinates from the ground space to the image space is more complicated than it is for frame images, which is why BSS/BSD methods have been proposed. This study proposes a new algorithm for object-to-image space transformation in linear array pushbroom images. The algorithm is a single-stage, non-iterative approach based on MLP models that achieves sub-pixel accuracy. The study shows that low-complexity MLP models with 100 SCOPs can ensure sub-pixel accuracy across diverse sensors (e.g., image dimensions and spatial resolutions) and different landscape and topographic conditions (urban, flat, and agricultural areas, and mixtures of all three). The proposed method was compared with well-known BSS/BSD methods, namely NR, BWS, ANN, and OGP. The results reveal that the MLP model has low processing complexity, demands less computation time, and delivers significantly improved accuracies compared to previous methods. The proposed method makes no specific assumptions, does not require information about the structure of the detectors, and is applicable to all linear array pushbroom images and subsequent photogrammetric tasks. The obtained accuracies and computation times suggest that the proposed method is suitable for near real-time applications. In future studies, the proposed method will be applied and assessed in terms of efficiency and precision in photogrammetric tasks such as orthophoto generation.
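As a concrete illustration of the accuracy assessment step, the sketch below computes the two measures reported throughout the paper over a set of check points. It assumes drmax denotes the largest per-point image-space residual; the function and variable names are illustrative, not taken from the paper's implementation.

```python
import math

def accuracy_metrics(predicted, reference):
    """RMSE and maximum residual (drmax) in pixels over check points,
    where each point is a (row, column) image-coordinate pair."""
    residuals = [math.hypot(pr - rr, pc - rc)
                 for (pr, pc), (rr, rc) in zip(predicted, reference)]
    rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    return rmse, max(residuals)
```

Sub-pixel accuracy then simply means the returned RMSE is below 1.0 for the evaluated SCPs.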

Author Contributions

Conceptualization, S.S.A.N., M.J.V.Z. and A.G.; methodology, S.S.A.N. and A.G.; software, S.S.A.N.; validation, S.S.A.N., M.J.V.Z. and A.G.; formal analysis, S.S.A.N., M.J.V.Z. and A.G.; investigation, S.S.A.N.; resources, S.S.A.N. and A.G.; data curation, S.S.A.N. and A.G.; writing—original draft preparation, S.S.A.N.; writing—review and editing, M.J.V.Z., K.K., A.G., M.F., S.J., F.Y. and M.G.; visualization, S.S.A.N.; supervision, M.J.V.Z.; project administration, M.J.V.Z.; funding acquisition, F.Y. and M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The satellite images used to develop and evaluate the proposed method were freely downloaded from https://intelligence.airbus.com/ (Airbus Intelligence; accessed on 17 February 2021) and https://apollomapping.com/ (Apollo Mapping; accessed on 22 July 2021).

Acknowledgments

The authors would like to thank Airbus-intelligence and Apollo Mapping for providing pushbroom satellite images. The authors are also grateful for the insightful comments offered by the anonymous peer reviewers.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Shen, X.; Sun, K.; Li, Q. A fast and robust scan-line search algorithm for object-to-image projection of airborne pushbroom images. Photogramm. Eng. Remote Sens. 2015, 81, 565–572. [Google Scholar] [CrossRef]
  2. Zhang, L.; Gruen, A. Multi-image matching for DSM generation from IKONOS imagery. ISPRS J. Photogramm. Remote Sens. 2006, 60, 195–211. [Google Scholar] [CrossRef]
  3. Kang, Y.; Pan, L.; Sun, M.; Liu, X.; Chen, Q. Destriping high-resolution satellite imagery by improved moment matching. Int. J. Remote Sens. 2017, 38, 6346–6365. [Google Scholar] [CrossRef]
  4. Zhang, L.; Gruen, A. Automatic DSM Generation from Linear Array Imagery Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 133–138. [Google Scholar]
  5. Angel, Y.; Turner, D.; Parkes, S.; Malbeteau, Y.; Lucieer, A.; McCabe, M.F. Automated georectification and mosaicking of UAV-based hyperspectral imagery from push-Broom sensors. Remote Sens. 2020, 12, 34. [Google Scholar] [CrossRef]
  6. Mirmazloumi, S.M.; Kakooei, M.; Mohseni, F.; Ghorbanian, A.; Amani, M.; Crosetto, M.; Monserrat, O. ELULC-10, a 10 m European Land Use and Land Cover Map Using Sentinel and Landsat Data in Google Earth Engine. Remote Sens. 2022, 14, 3041. [Google Scholar] [CrossRef]
  7. Jannati, M.; Zoej, M.J.V.; Mokhtarzade, M. Epipolar resampling of cross-track pushbroom satellite imagery using the rigorous sensor model. Sensors 2017, 17, 129. [Google Scholar] [CrossRef]
  8. Poli, D.; Li, Z.; Gruen, A. Spot-5/Hrs Stereo Images Orientation and Automated Dsm Generation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2003, 35, 421–432. [Google Scholar]
  9. Granshaw, S.I. Photogrammetric Terminology: Third Edition. Photogramm. Rec. 2016, 31, 210–252. [Google Scholar] [CrossRef]
  10. Marsetič, A.; Oštir, K.; Fras, M.K. Automatic orthorectification of high-resolution optical satellite images using vector roads. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6035–6047. [Google Scholar] [CrossRef]
  11. Yi, H.; Chen, X.; Wang, D.; Du, S.; Guo, N. Methods for the Epipolarity Analysis of Pushbroom Satellite Images Based on the Rational Function Model. IEEE Access 2020, 8, 103973–103983. [Google Scholar] [CrossRef]
  12. Gong, K. New Methods for 3D Reconstructions Using High Resolution Satellite Data. Ph.D. Thesis, University of Stuttgart, Stuttgart, Germany, 2021. Available online: http://elib.uni-stuttgart.de/bitstream/11682/11470/1/PhD_thesis_Ke_Gong.pdf (accessed on 18 October 2022).
  13. Nezhad, S.S.A.; Zoej, M.J.V.; Ghorbanian, A. A fast non-iterative method for the object to image space best scanline determination of spaceborne linear array pushbroom images. Adv. Space Res. 2021, 68, 3584–3593. [Google Scholar] [CrossRef]
  14. Wang, M.; Hu, F.; Li, J. Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model. ISPRS J. Photogramm. Remote Sens. 2011, 66, 347–355. [Google Scholar] [CrossRef]
  15. Gong, D.; Han, Y.; Zhang, L. Quantitative Assessment of the Projection Trajectory-Based Epipolarity Model and Epipolar Image Resampling for Linear-Array Satellite Images. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 5, 89–94. [Google Scholar] [CrossRef]
  16. Jannati, M.; Zoej, M.J.V.; Mokhtarzade, M. A novel approach for epipolar resampling of cross-track linear pushbroom imagery using orbital parameters model. ISPRS J. Photogramm. Remote Sens. 2018, 137, 1–14. [Google Scholar] [CrossRef]
  17. Habib, A.F.; Bang, K.I.; Kim, C.J.; Shin, S.W. True ortho-photo generation from high resolution satellite imagery. In Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2006; pp. 641–656. [Google Scholar] [CrossRef]
  18. Beyer, R.A.; Alexandrov, O.; McMichael, S. The Ames Stereo Pipeline: NASA’s open source software for deriving and processing terrain data. Earth Space Sci. 2018, 5, 537–548. [Google Scholar] [CrossRef]
  19. Mo, D.; Zhang, Y.; Wang, T.; Yang, G.; Xia, Q. A Back Projection Algorithm for Linear Array Imageries Based on the Constraints of Object-space Relation. Cehui Xuebao/Acta Geod. Cartogr. Sin. 2017, 46, 583–592. [Google Scholar] [CrossRef]
  20. Yavari, S.; Zoej, M.J.V.; Salehi, B. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm. ISPRS J. Photogramm. Remote Sens. 2018, 139, 46–56. [Google Scholar] [CrossRef]
  21. Liu, J.; Wang, D. Efficient orthoimage generation from ADS40 level 0 products. J. Remote Sens. 2007, 11, 247. [Google Scholar]
  22. Chen, L.C.; Rau, J.Y. A Unified Solution for Digital Terrain Model and Orthoimage Generation from SPOT Stereopairs. IEEE Trans. Geosci. Remote Sens. 1993, 31, 1243–1252. [Google Scholar] [CrossRef]
  23. Wang, M.; Hu, F.; Li, J.; Pan, J. A Fast Approach to Best Scanline Search of Airborne Linear Pushbroom Images. Photogramm. Eng. Remote Sens. 2009, 75, 1059–1067. [Google Scholar] [CrossRef]
  24. Geng, X.; Xu, Q.; Lan, C.; Xing, S.; Hou, Y.; Lyu, L. Orthorectification of Planetary Linear Pushbroom Images Based on an Improved Back-Projection Algorithm. IEEE Geosci. Remote Sens. Lett. 2019, 16, 854–858. [Google Scholar] [CrossRef]
  25. Geng, X.; Xu, Q.; Lan, C.; Hou, Y.; Miao, J.; Xing, S. An Efficient Geometric Rectification Method for Planetary Linear Pushbroom Images Based on Fast Back Projection Algorithm. In Proceedings of the 2018 Fifth International Workshop on Earth Observation and Remote Sensing Applications (EORSA), Xi’an, China, 18–20 June 2018. [Google Scholar] [CrossRef]
  26. Geng, X.; Xu, Q.; Xing, S.; Lan, C. A Robust Ground-to-Image Transformation Algorithm and Its Applications in the Geometric Processing of Linear Pushbroom Images. Earth Space Sci. 2019, 6, 1805–1830. [Google Scholar] [CrossRef]
  27. Geng, X.; Xu, Q.; Xing, S.; Lan, C. A Generic Pushbroom Sensor Model for Planetary Photogrammetry. Earth Space Sci. 2020, 7, e2019EA001014. [Google Scholar] [CrossRef]
  28. Huang, R.; Zheng, S.; Hu, K. Registration of aerial optical images with LiDAR data using the closest point principle and collinearity equations. Sensors 2018, 18, 1770. [Google Scholar] [CrossRef]
  29. Seiz, G.; Poli, D.; Gruen, A.; Baltsavias, E.P.; Roditakis, A. Satellite- and ground-based multi-view photogrammetric deter-mination of 3D cloud geometry. Int. Arch. Photogramm. Remote Sens. 2004, 34, 101–107. [Google Scholar]
  30. Safdarinezhad, A.; Zoej, M.J.V. An optimized orbital parameters model for geometric correction of space images. Adv. Space Res. 2015, 55, 1328–1338. [Google Scholar] [CrossRef]
  31. Zoej, M.J.V. Photogrammetric Evaluation of Space Linear Array Imagery for Medium Scale Topographic Mapping. Ph.D. Thesis, University of Glasgow, Glasgow, UK, 1997. Available online: https://theses.gla.ac.uk/4777/1/1997zoejphd1.pdf (accessed on 18 October 2022).
  32. Park, Y.S.; Lek, S. Artificial Neural Networks: Multilayer Perceptron for Ecological Modeling. Dev. Environ. Model. 2016, 28, 123–140. [Google Scholar] [CrossRef]
  33. Warsito, B.; Santoso, R.; Suparti; Yasin, H. Cascade Forward Neural Network for Time Series Prediction. J. Phys. Conf. Ser. 2018, 1025, 012097. [Google Scholar] [CrossRef]
  34. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386–408. [Google Scholar] [CrossRef]
  35. Tosun, E.; Aydin, K.; Bilgili, M. Comparison of linear regression and artificial neural network model of a diesel engine fueled with biodiesel-alcohol mixtures. Alexandria Eng. J. 2016, 55, 3081–3089. [Google Scholar] [CrossRef]
  36. Salgado, C.M.; Dam, R.S.F.; Salgado, W.L.; Werneck, R.R.A.; Pereira, C.M.N.A.; Schirru, R. The comparison of different multilayer perceptron and General Regression Neural Networks for volume fraction prediction using MCNPX code. Appl. Radiat. Isot. 2020, 162, 109170. [Google Scholar] [CrossRef]
  37. Ouma, Y.O.; Okuku, C.O.; Njau, E.N. Use of Artificial Neural Networks and Multiple Linear Regression Model for the Prediction of Dissolved Oxygen in Rivers: Case Study of Hydrographic Basin of River Nyando, Kenya. Complexity 2020, 2020, 9570789. [Google Scholar] [CrossRef]
  38. Jensen, R.R.; Hardin, P.J.; Yu, G. Artificial neural networks and remote sensing. Geogr. Compass 2009, 3, 630–646. [Google Scholar] [CrossRef]
  39. Ali, Z.; Hussain, I.; Faisal, M.; Nazir, H.M.; Hussain, T.; Shad, M.Y.; Shoukry, A.M.; Gani, S.H. Forecasting drought Using Multilayer Perceptron Artificial Neural Network Model. Adv. Meteorol. 2017, 2017, 5681308. [Google Scholar] [CrossRef]
  40. Levenberg, K. A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar] [CrossRef]
  41. Lourakis, M.I.A.; Argyros, A.A. Is Levenberg-Marquardt the most efficient optimization algorithm for implementing bundle adjustment? In Proceedings of the IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005; Volume II, pp. 1526–1531. [Google Scholar] [CrossRef]
  42. Sheela, K.G.; Deepa, S.N. Review on methods to fix number of hidden neurons in neural networks. Math. Probl. Eng. 2013, 2013, 425740. [Google Scholar] [CrossRef]
  43. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  44. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1995. [Google Scholar] [CrossRef]
Figure 1. The diagram of the proposed Best Scanline Determination method.
Figure 2. Linear array pushbroom images used to evaluate the proposed BSD method, including (a) IKONOS image from Sao Paulo, Brazil, (b) Pleiades 1A image from Melbourne, Australia, (c) Pleiades 1B image from Annapolis, USA, (d) QuickBird image from Jaipur, India, (e) SPOT 6 image from Jaicos, Brazil, (f) SPOT 7 image from Amsterdam, the Netherlands, (g) SPOT 7 image from Curitiba, Brazil, (h) WorldView-1 image from Boulder, USA, (i) WorldView-2 image from Sydney, Australia, and (j) WorldView-2 image from San Diego, USA.
Figure 3. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with five neurons.
Figure 4. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with six neurons.
Figure 5. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with seven neurons.
Figure 6. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with eight neurons.
Figure 7. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with nine neurons.
Figure 8. Root Mean Square Error (RMSE) and drmax values calculated using Simulated Check Points (SCPs) for Multilayer Perceptron (MLP) models with ten neurons.
Table 1. The inputs and outputs of the proposed method in each step.
| Phase | Step | Inputs | Outputs |
|---|---|---|---|
| Preprocessing and data preparation | Space resection using the MPC model (Equation (2)) | IOPs; image and object/ground coordinates of GCPs | EOPs of the whole image for each scanline |
| Preprocessing and data preparation | SCOPs and SCPs generation using the CE (Equation (1)) | IOPs; EOPs; mean height of the study area | Image and ground coordinates of SCOPs; image and ground coordinates of SCPs |
| Processing steps of BSD | Training phase of the MLP model (Equation (3)) | Image and ground coordinates of SCOPs | Fitted MLP model for object-to-image transformation |
| Processing steps of BSD | Prediction (testing) phase of the MLP (Equations (3)–(5)) | Ground coordinates of SCPs | Image coordinates of SCPs estimated using the fitted MLP model; accuracy assessment |
Table 2. The characteristics of the used images.
| Image | Sensor | Region | Dimension (Pixel) | Resolution (m) | Central Latitude | Central Longitude |
|---|---|---|---|---|---|---|
| ISB | IKONOS | Sao Paulo, Brazil | 8300 × 8600 | 0.8 | −23.54 | −46.63 |
| PMA | Pleiades 1A | Melbourne, Australia | 6000 × 7000 | 0.5 | −37.77 | 144.86 |
| PAU | Pleiades 1B | Annapolis, USA | 6057 × 5636 | 0.5 | 38.98 | −76.49 |
| QJI | QuickBird | Jaipur, India | 6000 × 6000 | 0.6 | 26.92 | 75.78 |
| SJB | SPOT 6 | Jaicos, Brazil | 6200 × 6600 | 1.5 | −7.26 | −41.27 |
| SAN | SPOT 7 | Amsterdam, the Netherlands | 5824 × 6616 | 1.5 | 52.37 | 4.91 |
| SCB | SPOT 7 | Curitiba, Brazil | 2597 × 1463 | 6 | −24.63 | −49.69 |
| WBU | WorldView-1 | Boulder, USA | 6000 × 6000 | 0.5 | 40.02 | −105.28 |
| WSA | WorldView-2 | Sydney, Australia | 6000 × 6000 | 0.5 | −33.84 | 151.20 |
| WSU | WorldView-2 | San Diego, USA | 3996 × 4015 | 0.5 | 32.72 | −117.16 |
Table 3. The results of ISB image analysis using different MLPs.
| Number of SCOPs | Number of Layers | Number of Neurons (per Layer) | RMSE (pixel) | Computation Time (s) | drmax (pixel) | Number of MLP Parameters |
|---|---|---|---|---|---|---|
| 50 | 1 | 10 | 0.06 | 3.27 | 0.15 | 41 |
| 100 | 1 | 10 | 0.01 | 3.40 | 0.02 | 41 |
| | 1 | 20 | 0.05 | 4.20 | 0.17 | 81 |
| 300 | 1 | 10 | 0.02 | 3.90 | 0.07 | 41 |
| | 1 | 20 | 0.02 | 4.84 | 0.06 | 81 |
| | 1 | 30 | 0.02 | 5.62 | 0.06 | 121 |
| | 1 | 40 | 0.02 | 6.36 | 0.08 | 161 |
| | 1 | 50 | 0.09 | 7.75 | 0.03 | 201 |
| | 2 | 10 | 0.06 | 5.70 | 0.37 | 151 |
| | 2 | 20 | 0.14 | 19.63 | 0.69 | 501 |
| 500 | 1 | 10 | 0.03 | 3.95 | 0.09 | 41 |
| | 1 | 20 | 0.02 | 4.69 | 0.07 | 81 |
| | 1 | 30 | 0.001 | 5.95 | 0.003 | 121 |
| | 1 | 40 | 0.001 | 6.64 | 0.005 | 161 |
| | 1 | 50 | 0.002 | 7.76 | 0.01 | 201 |
| | 2 | 10 | 0.06 | 5.80 | 0.30 | 151 |
| | 2 | 20 | 0.05 | 21.48 | 0.15 | 501 |
| | 3 | 10 | 0.02 | 8.47 | 0.20 | 261 |
| 1000 | 1 | 10 | 0.04 | 4.04 | 0.09 | 41 |
| | 1 | 20 | 0.06 | 5.17 | 0.15 | 81 |
| | 1 | 30 | 0.05 | 6.81 | 0.16 | 121 |
| | 1 | 40 | 0.10 | 8.04 | 0.24 | 161 |
| | 1 | 50 | 0.02 | 9.72 | 0.08 | 201 |
| | 2 | 10 | 0.03 | 6.86 | 0.20 | 151 |
| | 2 | 20 | 0.01 | 26.40 | 0.09 | 501 |
| | 3 | 10 | 0.14 | 10.23 | 0.90 | 261 |
| | 4 | 10 | 0.02 | 17.83 | 0.93 | 371 |
| | 5 | 10 | 0.01 | 26.53 | 0.09 | 481 |
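The parameter counts in the last column of Table 3 are consistent with a fully connected network that has two input features and a single output. Under that assumption (the input dimensionality is inferred here, not stated in the table), they can be reproduced as follows:

```python
def mlp_param_count(layer_sizes):
    """Total number of weights and biases in a fully connected MLP.
    layer_sizes lists all layer widths, input and output included,
    e.g. [2, 10, 1] for a single hidden layer of ten neurons."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

mlp_param_count([2, 10, 1])      # one hidden layer, 10 neurons: 41
mlp_param_count([2, 20, 20, 1])  # two hidden layers, 20 neurons each: 501
```

Each term counts one weight matrix (m × n) plus the bias vector (n) of the following layer, which matches every row of the table under the two-input assumption.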
Table 4. Obtained results of the proposed approaches and other well-known methods for best scanline determination.
| Dataset | Quantitative Measurement | NR [22] | BWS [21] | ANN BSD [13] | OGP BSD [13] | Proposed Method (MLP) |
|---|---|---|---|---|---|---|
| ISB | RMSE (pixel) | 5.840 × 10⁻¹⁰ | 0.57 | 0.29 | 0.29 | 0.015 |
| | Computation time (s) | 511.919 | 1490.505 | 3.295 | 6.812 | 3.29 |
| | drmax (pixel) | 1.727 × 10⁻⁹ | 1 | 0.61 | 0.57 | 0.043 |
| | Number of SCOPs | - | - | 400 | 400 | 100 |
| | Number of neurons | - | - | 10 | - | 9 |
| PMA | RMSE (pixel) | 1.057 × 10⁻⁹ | 0.58 | 0.30 | 0.30 | 0.003 |
| | Computation time (s) | 591.588 | 1413.097 | 3.427 | 7.941 | 3.31 |
| | drmax (pixel) | 2.616 × 10⁻⁹ | 1 | 0.67 | 0.67 | 0.007 |
| | Number of SCOPs | - | - | 500 | 500 | 100 |
| | Number of neurons | - | - | 10 | - | 10 |
| PAU | RMSE (pixel) | 9.561 × 10⁻¹⁰ | 0.58 | 0.30 | 0.30 | 0.003 |
| | Computation time (s) | 520.065 | 1294.618 | 3.398 | 7.640 | 2.92 |
| | drmax (pixel) | 1.203 × 10⁻⁹ | 1 | 0.67 | 0.77 | 0.010 |
| | Number of SCOPs | - | - | 500 | 500 | 50 |
| | Number of neurons | - | - | 10 | - | 10 |
| QJI | RMSE (pixel) | 4.401 × 10⁻¹⁰ | 0.58 | 0.30 | 0.30 | 0.002 |
| | Computation time (s) | 423.808 | 1320.126 | 3.815 | 7.839 | 3.23 |
| | drmax (pixel) | 1.162 × 10⁻⁹ | 1 | 0.72 | 0.69 | 0.006 |
| | Number of SCOPs | - | - | 500 | 500 | 100 |
| | Number of neurons | - | - | 10 | - | 10 |
| SJB | RMSE (pixel) | 6.182 × 10⁻¹⁰ | 0.58 | 0.30 | 0.30 | 0.002 |
| | Computation time (s) | 480.380 | 1335.430 | 3.776 | 9.959 | 3.01 |
| | drmax (pixel) | 2.184 × 10⁻⁹ | 1 | 0.73 | 0.63 | 0.005 |
| | Number of SCOPs | - | - | 500 | 1000 | 100 |
| | Number of neurons | - | - | 10 | - | 7 |
| SAN | RMSE (pixel) | 2.503 × 10⁻¹⁰ | 0.57 | 0.29 | 0.31 | 0.002 |
| | Computation time (s) | 471.247 | 1259.437 | 4.452 | 8.305 | 2.96 |
| | drmax (pixel) | 6.207 × 10⁻¹⁰ | 1 | 0.58 | 0.81 | 0.004 |
| | Number of SCOPs | - | - | 700 | 500 | 100 |
| | Number of neurons | - | - | 10 | - | 5 |
| SCB | RMSE (pixel) | 6.593 × 10⁻¹⁰ | 0.58 | 0.29 | 0.29 | 0.001 |
| | Computation time (s) | 412.186 | 883.381 | 3.387 | 7.070 | 3.13 |
| | drmax (pixel) | 2.379 × 10⁻⁹ | 1 | 0.61 | 0.55 | 0.003 |
| | Number of SCOPs | - | - | 400 | 200 | 100 |
| | Number of neurons | - | - | 10 | - | 9 |
| WBU | RMSE (pixel) | 1.181 × 10⁻⁹ | 0.57 | 0.29 | 0.28 | 0.002 |
| | Computation time (s) | 474.471 | 1376.620 | 3.576 | 7.967 | 2.96 |
| | drmax (pixel) | 2.756 × 10⁻⁹ | 1 | 0.52 | 0.52 | 0.005 |
| | Number of SCOPs | - | - | 400 | 400 | 100 |
| | Number of neurons | - | - | | - | 7 |
| WSA | RMSE (pixel) | 9.931 × 10⁻¹⁰ | 0.57 | 0.32 | 0.32 | 0.003 |
| | Computation time (s) | 470.185 | 1245.763 | 3.310 | 7.803 | 2.09 |
| | drmax (pixel) | 2.461 × 10⁻⁹ | 1 | 0.72 | 0.71 | 0.006 |
| | Number of SCOPs | - | - | 500 | 500 | 100 |
| | Number of neurons | - | - | 10 | - | 5 |
| WSU | RMSE (pixel) | 3.860 × 10⁻¹⁰ | 0.57 | 0.30 | 0.30 | 0.001 |
| | Computation time (s) | 483.562 | 1281.063 | 3.714 | 8.796 | 3.02 |
| | drmax (pixel) | 9.327 × 10⁻¹⁰ | 1 | 0.57 | 0.58 | 0.003 |
| | Number of SCOPs | - | - | 500 | 500 | 100 |
| | Number of neurons | - | - | 10 | - | 6 |
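The roughly 22% average computation-time overhead of the ANN method over the proposed MLP method quoted in the discussion can be checked against the timing values in Table 4. The snippet below is a simple sanity check; the (ANN, MLP) pairs follow the table rows in order, as read from the computation-time entries.

```python
# Computation times (s) per dataset from Table 4: (ANN BSD, proposed MLP).
times = [
    (3.295, 3.29), (3.427, 3.31), (3.398, 2.92), (3.815, 3.23),
    (3.776, 3.01), (4.452, 2.96), (3.387, 3.13), (3.576, 2.96),
    (3.310, 2.09), (3.714, 3.02),
]
mean_ratio = sum(ann / mlp for ann, mlp in times) / len(times)
overhead_pct = 100.0 * (mean_ratio - 1.0)  # roughly 22% on average
```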
