Article

Mission-Driven Inverse Design of Blended Wing Body Aircraft with Machine Learning

Department of Mechanical and Aerospace Engineering, Missouri University of Science and Technology, Rolla, MO 65409, USA
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(2), 137; https://doi.org/10.3390/aerospace11020137
Submission received: 1 December 2023 / Revised: 14 January 2024 / Accepted: 29 January 2024 / Published: 5 February 2024
(This article belongs to the Special Issue Machine Learning for Aeronautics)

Abstract:
The intent of this work was to investigate the feasibility of developing machine learning models for calculating values of airplane configuration design variables when provided with time-series, mission-informed performance data. Shallow artificial neural networks were developed, trained, and tested using data pertaining to the blended wing body (BWB) class of aerospace vehicles. Configuration design parameters were varied using a Latin-hypercube sampling scheme. These data were used by a parametric-based BWB configuration generator to create unique BWBs. Performance for each configuration was obtained via a performance estimation tool. Training and testing of the neural networks were conducted using a K-fold cross-validation scheme. A random forest approach was used to determine the values of predicted configuration design variables when evaluating neural network accuracy across a blended wing body vehicle survey. The results demonstrated the viability of leveraging neural networks in mission-dependent, inverse design of blended wing bodies. In particular, feed-forward, shallow neural network architectures yielded significantly better predictive accuracy than cascade-forward architectures. Furthermore, for both architectures, increasing the number of neurons in the hidden layer increased the prediction accuracy of the configuration design variables by at least 80%.

1. Introduction

Preliminary airplane sizing involves the synthesis of market requirements and objectives (MR&Os) [1] while applying constraints and spatial considerations to configure an integrated vehicle. For the conventional tube-and-wing (TAW) airplane sizing problem, design space exploration is typically leveraged in this early portion of the design cycle to accelerate the assessment of configurations and aid in configuration design decision making [2], such as deciding on a suitable wingspan ($b$) and aspect ratio ($AR$). By doing so, an airplane designer can converge on an optimum configuration that satisfies MR&O targets and proceed to the more detailed design phase. This process has been well studied and understood for TAW aircraft; however, as MR&Os become more stringent around aircraft performance and its environmental impacts [3], exploration of more novel concepts is warranted. A promising concept that has been researched, and is still being studied by both academia and industry, is the blended wing body (BWB). As stated by Wakayama et al. [4], the BWB requires a design approach that departs from the conventional TAW methodology—rather than decomposing the airplane into distinct pieces, the wing, fuselage, engines, and empennage are all tightly integrated to achieve a substantial performance improvement. Furthermore, although design space exploration of novel concepts such as the BWB has been studied previously [5,6], the lack of validated models for these concepts and their application to traditional direct design problems can result in selecting a point-design configuration prematurely, or even incorrectly, as Kallou et al. [7] pointed out. In this context, contrary to typical direct design approaches for BWBs, an examination of the application of inverse design to BWBs is warranted.
The motivation for the work described in this paper is to determine whether machine learning models can be used for the mission-informed inverse design of unconventional airplane configurations while leveraging unique, physics-derived data types to generate credible results. In such a scenario, the neural network models are designed to predict the values of a set of airplane configuration design parameters, such as $b$, wing loading ($W/S$), maximum takeoff weight ($MTOW$), etc., when supplied with specific time-dependent performance metrics, such as lift-to-drag ratio as a function of time ($L/D(t)$). Here, each time step reflects a different point in the vehicle's mission, which is typically composed of taxi, takeoff, climb, cruise, descent, approach, landing, and reserve segments. An advantage of inverse design is that the mission-based objectives are fixed while a suitable design is calculated, whereas with a typical direct optimization scheme, a design is iterated to arrive at performance that may be close, but not an exact match, to those objectives. One reason for exploring the viability of machine learning models is that traditional surrogate models, such as response surface functions, are typically not suitable for mapping multiple multi-dimensional inputs to more than one output, for example, when the inputs are a set of time-series-based variables that correspond to a unique combination of configuration design data. Artificial neural networks (ANNs), in particular, offer an advantage over response surface models because, when architected and trained appropriately, they can robustly handle multi-dimensional, highly non-linear data relationships, especially when enough training data are provided [8].
This implies that if architected correctly, an ANN could accurately generate a set of outputs when provided multiple multi-dimensional inputs, even if the inputs (e.g., time-series aircraft performance data) have highly non-linear relationships with the outputs (e.g., aircraft configuration design data).
The authors' previously published research [9] focused on developing a framework to support the aforementioned objective, albeit within the context of applying it to conventional, TAW, single-aisle, twin-engine, commercial transport category airplane configurations, such as those in the same class as the Boeing 737 Next Generation family. The work outlined in this paper is an expansion and adaptation of the authors' previous work, focusing specifically on application to the inverse design of unconventional airplane configurations, namely BWBs. Compared to the previously published research, changes have been introduced to the framework, namely a larger airplane design and performance database; more robust neural network model generation, training, and testing methods; and an improved approach for extensibility analysis.
Within the context of airplane configuration design, inverse design is the antithesis of a direct design approach. In an inverse-design process, configuration design parameters are instead obtained through a query of a design space from a performance target standpoint. Such a scenario would typically start with a set of MR&Os, and the result would be an airplane configuration that closely approaches, if not fully meets, the MR&Os [10]. In short, it differs from other design methods in that the design parameters are a result of the method, rather than an input into it. Gibbs et al. [11] developed an airplane inverse-design method, where the input was a desired fixed operating cost per passenger, and the output was the vehicle's corresponding geometry—fuselage length, wingspan, horizontal and vertical stabilizer spans, and thrust. However, the use of neural network models for the inverse design of airplane configurations has been very limited.
The use of models based on ANNs has emerged as a potential alternative to response surface models in the context of inverse-design problems. This is in part due to their improved efficiency, in terms of the computational wall-clock time needed for calibration, and their prediction accuracy compared to other surrogate models, as highlighted by Sekar et al. [12]. Rai et al. [13] have also shown that through adjustment of the neural network architecture (number of layers and neurons in each layer) and neural network numerical methods (regularization algorithms, training functions, and optimization routines), overfitting tendencies induced by highly non-linear, multi-dimensional features in the training database can be reduced.
Leveraging neural network models for inverse-design problems has been widely researched, particularly for airfoil design scenarios. Compared to a direct optimization scheme, Barrett et al. [14] illustrated promise in an inverse-design methodology as it applied to airfoil geometry definition. Kharal et al. [15] demonstrated promise in the concept by developing models capable of shape generation based on values of lift coefficient ( C L ), drag coefficient ( C D ), and pitching moment coefficient ( C M ) at various angles of attack ( α ). Extending this research, Yilmaz et al. [16] showed that by leveraging a deep learning model instead, shape parameterization can be avoided thereby operating at a lower level of abstraction by detecting patterns and features in the data themselves. Glaws et al. [17] expanded on previous airfoil inverse-design research by effectively showing that an invertible neural network (INN) could be developed, which is suitable for both direct and inverse airfoil design.
Outside the realm of airfoil inverse design, Yu et al. [18] demonstrated promise in the idea of developing neural network models for the inverse design of rocket nozzles when provided a desired pressure distribution, showcasing its excellent predictive accuracy. Additionally, Oddiraju et al. [19] demonstrated the viability of developing neural network models to aid in the design of metamaterials when using desired bandgap specifications. Li et al. [20] showcased the ability to use deep neural networks for prediction of 3-D wing shape designs when provided C L , C D , C M , and pressure coefficient ( C P ) distributions, and how such a model could be leveraged by a gradient-based optimization framework for various wing design scenarios.
More broadly in the aerospace field, neural networks have been widely implemented in Reynolds-averaged Navier–Stokes (RANS) solutions. While Thuerey et al. [21] focused on neural network applications to RANS solutions, a more extensible neural network development framework was leveraged by Singh et al. [22], who showed merit in using neural networks to augment the Spalart–Allmaras turbulence model, ultimately aiding surface pressure predictions. Li et al. [23] comprehensively summarized how machine learning has addressed challenges in aerodynamic shape optimization (ASO), specifically as it pertains to the geometric design space [24], aerodynamic evaluations—namely, aerodynamic coefficient approximation [25,26,27,28] and flow field modeling [29]—and optimization architecture [30,31]. However, there are only a few previous works detailing the use of neural network models for the analysis of airplane configurations; for example, Secco et al. [32] developed a neural network model that effectively replaced a full-potential code for the prediction of aerodynamic coefficients when provided with configuration design data and a flight condition.
This paper is organized as follows: Section 2 describes the computational approach, including initial geometry parameterization, database generation methodology, neural network algorithm selection and architecture variation, training and testing through a K-fold cross-validation scheme, extensibility analysis of the neural networks across a BWB vehicle survey, and a random forest approach to the prediction of design variables. Section 3 details the results as they pertain to neural network performance and extensibility analysis. Finally, the conclusions of this study, and findings that warrant further investigation, are presented in Section 4.

2. Computational Approach and Methodology

The outline of the computational approach, along with the programming languages used, is illustrated in Figure 1 (the second-segment thrust correction process is explained in Section 2.1.2). First, a baseline BWB vehicle is used to construct a configuration parametrization model. Then, design parameters are varied via a Latin-hypercube sampling scheme. The geometry parametrization model is then used to define a number of new, unique BWB configurations. These configurations are assessed using a physics-informed, Level-0 airplane performance assessment tool. Following this step, a database is created containing design and performance data for each configuration. Numerical scaling is then performed on the database, and different neural network models are generated, each with its own unique architecture. Next, the scaled database is equally partitioned, where different portions are used to train and test the neural network models in a K-fold cross-validation scheme. Here, the neural networks are trained such that their inputs are BWB performance data and their targets are corresponding BWB configuration design parameter values, thus lending themselves towards the inverse-design objective.
To enable parametric-based airplane configuration definition and rapid performance estimation, SUAVE (data available online at https://suave.stanford.edu/ (accessed on 25 November 2022)) [33,34,35,36], a conceptual-level aircraft design environment built with the ability to rapidly configure and analyze both conventional and unconventional designs, was leveraged. SUAVE allows users to define the details of an airplane configuration design programmatically via the specification of different design parameters, such as $b$, $AR$, fuselage length, $MTOW$, sea-level static thrust, etc. These values are stored and ultimately constitute an aircraft configuration design file. This file is then processed through SUAVE's mission solver, where a fixed mission profile is utilized to obtain performance data for a configuration—range, total weight, specific fuel consumption ($SFC$), $C_L$, $C_D$, $L/D$, and so on—all as a function of time steps in the vehicle's mission. SUAVE's comparatively low-fidelity aerodynamics and stability derivative solvers can be augmented with relatively higher-fidelity methods such as Athena Vortex Lattice (AVL (data available online at http://web.mit.edu/drela/Public/web/avl (accessed on 25 November 2022))) or SU2 (data available online at https://su2code.github.io/ (accessed on 25 November 2022)), thereby enabling multi-fidelity analysis. AVL is a tool for aerodynamic and flight-dynamic analysis of rigid aircraft, and can be used to analyze both TAW aircraft and unconventional configurations such as BWBs. It employs an extended vortex lattice model for the lifting surfaces, together with a slender-body model for fuselages and nacelles. The flight-dynamic analysis portion of AVL combines a full linearization of the aerodynamic model about any flight state, together with specified mass properties. For the purpose of this study, the authors have elected to use AVL's combined capability with SUAVE's solvers.
Meanwhile, MATLAB’s Deep Learning Toolbox (data available online at https://www.mathworks.com/help/deeplearning/ (accessed on 25 November 2022)) was used to manage neural network development, training, and testing.
An additional framework is created for assessing the accuracy of developed neural networks across unknown design sites, i.e., configurations not within the training and testing database. This extensibility analysis of the models applied to a more “diverse” set of BWB vehicles is conducted through a random forest approach. Figure 2 depicts an overview of this process. The following subsections describe these processes in more detail: geometry parameterization; database creation; neural network architecture generation; numerical method selection and training; K-fold cross-validation training and testing scheme; and the random forest approach to extensibility analysis.
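As a sketch of how such an extensibility baseline might look, the following hypothetical example fits a multi-output random forest that maps flattened time-series performance features to configuration design variables. The data shapes mirror the databases described in Section 2.1.2 (1152 performance features, 4 design targets), but the values are synthetic placeholders, and scikit-learn is an assumed dependency (the study's own models were built in MATLAB):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 300 configurations, 1152 scaled performance
# features each (8 parameters x 144 time steps), 4 scaled design targets.
rng = np.random.default_rng(0)
P = rng.random((300, 1152))
D = rng.random((300, 4))

P_tr, P_te, D_tr, D_te = train_test_split(P, D, test_size=0.2, random_state=0)

# RandomForestRegressor natively supports multi-output regression, so one
# model predicts all four design variables at once.
forest = RandomForestRegressor(n_estimators=20, max_features="sqrt",
                               random_state=0)
forest.fit(P_tr, D_tr)
pred = forest.predict(P_te)  # one 4-variable prediction per held-out config
```

Because tree predictions are averages of training targets, the outputs stay within the scaled 0-to-1 range of the targets, which is convenient when comparing against scaled neural network outputs.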

2.1. Design Space Creation

2.1.1. Geometry Parameterization

To develop neural network models capable of inverse design, i.e., predicting BWB configuration design parameter values when supplied with mission-dependent BWB performance data, a database was created containing two distinct parts—the BWB configuration definition and its respective performance data. Typically, neural networks require a large amount of training data, placing importance on the ability to generate such data in a rapid and automated manner—this is especially important in the conceptual-level airplane design cycle [37]. To enable the quick creation of BWB configurations, a parametric-based BWB geometry model was developed, where, through the specification of wingspan and wing area, a new BWB vehicle design can be generated.
The design of BWBs is challenging, primarily due to the tight coupling between aerodynamic performance, trim, stability, and propulsion, not to mention the sheer number of design variables involved and the complexities of transonic flow conditions [38]. For this reason, careful consideration was given to choosing a suitable baseline vehicle on which the parametric geometry model was calibrated—Liebeck's BWB450 [39] was used. The baseline tail volume coefficient, $V_{v_{bl}}$, was calculated for the baseline vehicle using Equation (1). The distributions of semi-spans, $b_{bl}$, wing areas, $S_{bl}$, and quarter-chord sweeps, $c_{\lambda_{bl}}$, were calculated by segmenting the BWB450's wing into 7 sections, including the vertical wing tip, as shown in Figure 3. Based on the properties of the BWB450, a few rule-of-thumb relationships were established that constrained the following parameters: the center of gravity was located at the 60% root chord fraction, $CG = 0.6\psi$, and the two engines were positioned at the 90% root chord fraction and 9% wingspan fraction, $\omega_\psi = 0.9\psi$ and $\omega_\eta = 0.09\eta$, respectively. Additionally, the aerodynamic center ($AC$) was constrained to the same location as the center of gravity. This ensures that the vehicle is trimmed longitudinally.
$$ V_{v_{bl}} = \frac{S_{tip}\left(AC_{tip} - CG\right)}{\left(\sum_{n=1}^{6} S_n\right)\left(\sum_{i=1}^{6} b_i\right)} \qquad (1) $$
These values ultimately inform the definition of new BWB configurations when the model is supplied with values of $b$ and wing area, $S$. The overall computational process flow for this model is as follows: First, the model accepts new values of span and area as inputs—$b_{in}$ and $S_{in}$. The span and area are then distributed—$\mathbf{b}$ and $\mathbf{S}$—based on fixed proportions from the baseline design, namely $b_{bl}$ and $S_{bl}$. Chord length distributions are determined using the quarter-chord sweep of each segment, $c_{\lambda_{bl}}$. $\mathbf{S}$ is then used to calculate the tail volume coefficient of the new vehicle, $V_v$. This will inherently differ from $V_{v_{bl}}$, since the new configuration has a different $b$ and $S$ than the baseline, calibrated vehicle, namely the BWB450. Essentially, this indicates that the vertical tail area of the new configuration, $S_7$, is either undersized or oversized relative to $b_{in}$ and $S_{in}$, and relative to the baseline vehicle. This could mean the new configuration has vertical tails that provide either insufficient longitudinal stability or more than needed. Constraining the vehicle to $V_{v_{bl}}$, the difference between the two tail volume coefficients is calculated and used to compute the adjusted vertical tip area and span, $S_{7_a}$ and $b_{7_a}$, respectively. These procedures are highlighted in Equations (2) and (3), where $S_7$ is the unadjusted vertical tail area, and $c_{r_7}$ and $c_{t_7}$ are the root chord and tip chord, respectively, of the vertical tip. Furthermore, here, $(AC_{tip} - CG)$ corresponds to the planar distance between the aerodynamic center of the tip and the aircraft's center of gravity. Note that, depending on the difference between the two tail volume coefficients, the area and span of the tip are either reduced or increased while keeping $c_{r_7}$ and $c_{t_7}$ constant.
$$ S_{7_a} = S_7 \pm \frac{\left|\,V_{v_{bl}} - V_v\,\right|\left(\sum_{n=1}^{6} S_n\right)\left(\sum_{i=1}^{6} b_i\right)}{AC_{tip} - CG} \qquad (2) $$
$$ b_{7_a} = \frac{2\,S_{7_a}}{c_{r_7} + c_{t_7}} \qquad (3) $$
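As an illustration, the tail-sizing adjustment of Equations (1)–(3) can be sketched numerically. All segment areas, semi-spans, chords, moment arm, and the baseline coefficient below are hypothetical placeholders, not the BWB450's actual values:

```python
import numpy as np

# Hypothetical segment properties for sections 1-6 (excluding the tip).
S_seg = np.array([2100.0, 1500.0, 900.0, 700.0, 500.0, 300.0])  # areas, ft^2
b_seg = np.array([18.0, 25.0, 30.0, 35.0, 40.0, 45.0])          # semi-spans, ft
S7, cr7, ct7 = 180.0, 22.0, 10.0  # unadjusted tip area, root/tip chords
arm = 55.0                        # (AC_tip - CG) moment arm, ft
Vv_bl = 0.0025                    # baseline tail volume coefficient (assumed)

# Equation (1): tail volume coefficient of the candidate configuration.
Vv = S7 * arm / (S_seg.sum() * b_seg.sum())

# Equation (2): resize the tip area to recover the baseline coefficient;
# the signed difference handles the +/- choice in the equation.
dVv = Vv_bl - Vv
S7a = S7 + dVv * S_seg.sum() * b_seg.sum() / arm

# Equation (3): trapezoid span consistent with the new area, chords fixed.
b7a = 2.0 * S7a / (cr7 + ct7)
```

Re-evaluating Equation (1) with the adjusted tip area recovers the baseline coefficient exactly, which is the invariant the adjustment enforces.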
Finally, by incorporating $S_{7_a}$ and $b_{7_a}$, the span and area distributions for the new BWB configuration are adjusted—$\mathbf{S}_a$ and $\mathbf{b}_a$. These distributions are used to calculate the adjusted values of span and area, $b_a$ and $S_a$, which differ from $b_{in}$ and $S_{in}$. At this point, the new BWB configuration has been sized; however, it lacks other principal characteristics data, such as appropriate values of $MTOW$ and sea-level static thrust available, $T_A$, which are tightly dependent on geometry for BWBs [40]. For this reason, polynomial response surface functions for $MTOW$ and $T_A$ were developed using a vehicle survey consisting of a variety of configurations beyond the BWB450. The vehicle survey contained three categories of configurations, namely BWBs, hybrid wing bodies (HWBs), and integrated wing bodies (IWBs). The authors elected to leverage a variety of vehicle types closely related to BWBs in order to diversify the dataset and improve neural network generalization across a broader set of BWB vehicles. These vehicles are illustrated in Figure 4.
Using these vehicles and their configuration characteristics data—highlighted in Table 1—polynomial functions of different orders were developed for $MTOW$ and $T_A$, both expressed in pounds. For each function, the coefficient of determination ($R^2$) and root mean squared error (RMSE) were calculated. Ultimately, for both $MTOW$ and $T_A$, the functions with the highest $R^2$ and lowest RMSE were selected. The equation for $MTOW$, as a function of $S_a$ (ft²) and $b_a$ (ft), and the equation for $T_A$, as a function of $MTOW$ and $S_a$ (ft²), are expressed in Equations (4) and (5), respectively. Both exhibit $R^2$ values of 0.999, with RMSE values of 0.33 and 0.21, respectively. From a first-principles BWB vehicle design standpoint, and based on the design variables selected for the design space, wing area and wingspan can fundamentally aid in determining $MTOW$. Similarly, thrust can be approximated using the area and $MTOW$ of the vehicle.
$$ \begin{aligned} MTOW(b_a, S_a) = {} & -4.663\times10^{6} + \left(7.337\times10^{4}\right)b_a - 555.3\,S_a - 233.2\,b_a^{2} - 2.494\,b_a S_a \\ & + \left(8.696\times10^{-2}\right)S_a^{2} + 0.027\,b_a^{2} S_a - \left(5.83\times10^{-4}\right)b_a S_a^{2} + \left(2.249\times10^{-6}\right)S_a^{3} \end{aligned} \qquad (4) $$
$$ \begin{aligned} T_A(S_a, MTOW) = {} & -4.632\times10^{5} + 157.6\,S_a - 0.6734\,MTOW - \left(9.835\times10^{-4}\right)S_a\,MTOW \\ & - \left(1.065\times10^{-5}\right)MTOW^{2} - \left(1.418\times10^{-9}\right)S_a\,MTOW^{2} + \left(1.933\times10^{-11}\right)MTOW^{3} \\ & + \left(6.324\times10^{-16}\right)S_a\,MTOW^{3} - \left(9.181\times10^{-18}\right)MTOW^{4} \end{aligned} \qquad (5) $$
Using these functions, M T O W and T A of the BWB airplane are calculated, and the parametric-based BWB configuration generation process is complete. The output of this model is a SUAVE configuration file, capable of being assessed through SUAVE’s configuration assessment functionality. A process flow of this routine is shown in Figure 5.
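The response-surface construction step can be sketched as an ordinary least-squares fit over the same monomial basis used for $MTOW$ in Equation (4). The 40 survey points below are synthetic stand-ins rather than the Table 1 vehicles, and the simple linear "truth" exists only to give the fit something to recover, so the fitted coefficients are illustrative only:

```python
import numpy as np

# Synthetic survey: wingspans (ft) and wing areas (ft^2) with a toy MTOW.
rng = np.random.default_rng(1)
b = rng.uniform(169.0, 327.0, 40)
S = rng.uniform(4200.0, 22000.0, 40)
mtow = 800.0 * b + 50.0 * S + 1.0e5 + rng.normal(0.0, 1.0e3, 40)

# Scale inputs to ~[0, 1] so the cubic design matrix stays well conditioned.
bn, Sn = b / 327.0, S / 22000.0
X = np.column_stack([np.ones_like(bn), bn, Sn, bn**2, bn * Sn, Sn**2,
                     bn**2 * Sn, bn * Sn**2, Sn**3])
coeffs, *_ = np.linalg.lstsq(X, mtow, rcond=None)

# Coefficient of determination, as reported for Equations (4) and (5).
pred = X @ coeffs
r2 = 1.0 - np.sum((mtow - pred) ** 2) / np.sum((mtow - mtow.mean()) ** 2)
```

In practice, one would repeat this fit for several polynomial orders and retain the one with the highest $R^2$ and lowest RMSE, as described above.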

2.1.2. Database Creation

BWB configurations are initially varied and created using two design variables, namely b i n and S i n , via the parametric-based configuration generation model. This model utilizes these two parameters as inputs to ultimately create a unique BWB configuration defined by four design parameters— b a , S a , M T O W , and T A . These four design variables, along with the vehicle’s corresponding performance data, are used to train neural network models. In this sense, the objective of this work is to develop neural network models such that the aforementioned design parameters for a BWB configuration are calculated when supplied with the configuration’s mission-informed, time-dependent performance data.
Four design parameters, namely $b_a$, $S_a$, $MTOW$, and $T_A$, were selected as they are generally regarded as high-level design parameters and are typically investigated in the preliminary and conceptual sizing portion of a BWB's design cycle. During this phase, design space exploration is conventionally utilized to explore the trade space and understand design sensitivities as they relate to both the MR&Os and the aircraft manufacturer's product and technology capability [50]. The bounds of the $b_{in}$ and $S_{in}$ design space were selected as 169 ft $\leq b_{in} \leq$ 327 ft and 4200 ft² $\leq S_{in} \leq$ 22,000 ft², respectively. In general, referencing the BWB vehicle survey table (Table 1), the N+3 SUGAR-Ray configuration was used to inform the lower bounds of both $b_{in}$ and $S_{in}$, while the VELA-3's configuration characteristics informed the upper limits. Since the $MTOW$ and $T_A$ polynomial response surface functions are also based on vehicles in Table 1, the bounds applied to the design space mitigated errors exhibited by these functions.
The method of numerical variation employed was the Latin-hypercube sampling (LHS) scheme. Compared to other statistical sampling methods, the LHS scheme provides broader coverage of the design space via the distribution of samples in equally spaced probability bins [51]. In the context of neural network training, this is particularly beneficial as it exposes the model to well-distributed data, which in turn can increase overall predictive accuracy and reduce the likelihood of overfitting [52]. Additionally, Loh [53] has shown that for problems characterized by multidimensionality, surrogate model development is typically faster when using data obtained from an LHS scheme as opposed to Monte Carlo sampling methods. Figure 6 depicts a visualization of 100 LHS-derived points of b i n and S i n .
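For illustration, the LHS step over the two input design variables can be reproduced with SciPy's quasi-Monte Carlo module, using the design-space bounds quoted above (the sample count of 100 matches the visualization, not the full study):

```python
import numpy as np
from scipy.stats import qmc

# Latin-hypercube sample in the unit square, then scaled to the
# b_in (ft) and S_in (ft^2) bounds from the text.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=100)             # points in [0, 1)^2
lower, upper = [169.0, 4200.0], [327.0, 22000.0]
samples = qmc.scale(unit_samples, lower, upper)  # (b_in, S_in) pairs
```

Each of the 100 equal-probability bins along each axis receives exactly one sample, which is the stratification property that distinguishes LHS from plain Monte Carlo sampling.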
The LHS sample size, $N_{LHS}$, utilized was 30,000, as this was deemed to provide both adequate design space coverage and a suitable number of training points for each fold in the K-fold cross-validation scheme (explained in Section 2.3). However, determining the optimal value of $N_{LHS}$ from a numerical sensitivity standpoint was not a focus of this study. Each unique combination of $b_{in}$ and $S_{in}$ is used by the BWB parametric configuration generation model, which generates adjusted values of span and area, namely $b_a$ and $S_a$, along with $MTOW$ and $T_A$. In conjunction with spatial integration constraints and assumptions, this model uses the aforementioned design parameters and programmatically generates 30,000 unique SUAVE airplane configuration files. An example of one such configuration is depicted in Figure 7, where $b_a$ is 221.6 ft, $S_a$ is 8998 ft², $MTOW$ is 408,300 lbs, and $T_A$ is 116,000 lbs.
These configuration files are then run through SUAVE's performance analysis routines, which assess every BWB configuration against the same mission constraints and, for that matter, the same mission flight profile. This flight profile represents a typical long-range, transport category, Part 25 mission with constraints on climb and descent gradients, rates of climb and descent, speeds for different segments, and altitude restrictions [54]. The 6500 nautical mile mission is composed of first, second, and third climb segments; a cruise segment; and first, second, third, fourth, and fifth descent segments. The flight profile is visualized in Figure 8. The 6500 nautical mile range was chosen as it represents a notional long-range mission that a vehicle similar to the BWB450 might routinely fly in service.
Coupled with AVL's aerodynamics and stability and control (S&C) analysis capabilities, SUAVE's mission solver assesses each configuration and generates its respective output files in an automated manner. For BWBs, the mission solver uses $b_a$ and $S_a$ for cruise $C_L$, maximum $C_L$, and $L/D$ calculations, as well as for S&C calculations, since both design variables fundamentally inform the shape of the configuration. $MTOW$ is used by SUAVE's in-built weight estimation routines to inform the weight of the vehicle at the start of the mission. $T_A$ represents the total static thrust available for the airplane and primarily determines the engine mass flow and fuel flow at full throttle; adjustments to this quantity are made based on standard-atmosphere fluctuations.
The SUAVE-generated output files contain different time-series airplane performance data, corresponding to the configuration assessed. These output files are processed to obtain the following performance parameters, all as a function of time: coefficient of drag attributed to compressibility effects ( C D c ), induced drag coefficient ( C D i ), miscellaneous drag coefficient ( C D m ), parasitic drag coefficient ( C D p ), L / D , weight (W), specific fuel consumption ( S F C ), and thrust required ( T R ). Each parameter is composed of 144 discrete time steps, essentially meaning that each performance parameter is a row vector consisting of 144 elements. Here, the time steps are not linearly spaced from each other, i.e., the time difference between each time step is unique. This is because SUAVE’s mission solver discretizes time based on the Chebyshev polynomial, which has been shown to improve convergence and data accuracy near transition portions of the mission and mitigate Runge’s phenomenon. This is exhibited by each performance parameter having smaller time steps at the beginning and towards the end of each mission segment.
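The non-uniform time discretization can be illustrated with cosine-spaced (Chebyshev) points on a unit-length mission segment; the spacing shrinks toward both segment ends, unlike a linear grid. The choice of 16 control points per segment below is an assumption for illustration only:

```python
import numpy as np

# Chebyshev (cosine) spacing of control points on [0, 1]: points cluster
# near both ends of the segment, where mission transitions occur.
n = 16
k = np.arange(n)
t_unit = 0.5 * (1.0 - np.cos(np.pi * k / (n - 1)))
dt = np.diff(t_unit)
# dt[0] (end spacing) is roughly ten times smaller than dt at mid-segment.
```

This end-clustering is what mitigates Runge's phenomenon and improves convergence near the transition portions of the mission, as noted above.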
At this stage, design data—$b_a$, $S_a$, $MTOW$, and $T_A$—have been collected for each BWB configuration along with its associated airplane performance data—$C_{D_c}$, $C_{D_i}$, $C_{D_m}$, $C_{D_p}$, $L/D$, $W$, $SFC$, and $T_R$. However, a numerical discrepancy may exist between $T_A$ and the values in $T_R$. Although polynomial response surface models were developed using a BWB vehicle survey to calculate $MTOW$ and $T_A$ as functions of $b_a$ and $S_a$, it is still theoretically possible to calculate a value of $T_A$ that does not represent the optimal value for the vehicle. Here, the optimal value of $T_A$ is defined as the value that provides minimal (i.e., as close to zero as possible) second-segment climb excess thrust. Second-segment climb takes place after takeoff ground roll, rotation, liftoff, and first-segment climb (first-segment climb occurs after liftoff, while the aircraft is still in a takeoff configuration). Under single-engine operation, the aircraft must be able to produce enough thrust to sustain a positive and steady climb gradient at the liftoff speed. Upon achieving this speed, the landing gear is retracted, which signals the completion of the first-segment climb and the start of the second-segment climb. During this phase, a two-engine aircraft is expected to produce enough thrust under single-engine operations (SEO) to achieve a steady gross climb gradient of no less than 2.4% at the takeoff safety speed, $V_2$, until reaching 400 ft above ground level (AGL) [55]. The second-segment climb constraint often becomes a critical engine sizing condition for the entire airplane: insufficient thrust renders the airplane unable to satisfy the 2.4% climb gradient required under single-engine operating conditions, while an excessively positive thrust margin allows the airplane to achieve the necessary climb gradient but likely penalizes $SFC$ for the rest of the airplane's flight profile, since the airplane is utilizing an engine with more thrust than it needs.
For this reason, SUAVE’s mission solver is adjusted to modify T A as needed to achieve a minimal second-segment excess thrust condition. Therefore, rather than using T A for neural network training and testing, the SUAVE-adjusted value of T A is used— T A o p t . This ensures that the neural network models developed are more likely to predict values of T A that are optimal for a BWB configuration.
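A minimal sketch of this second-segment sizing logic, assuming a twin-engine aircraft, the small-angle climb equation, and illustrative weight and $L/D$ values (not the paper's, and not SUAVE's actual solver):

```python
# Hypothetical second-segment climb check for a twin: with one engine
# inoperative, the gross climb gradient must be at least 2.4% (FAR 25.121).
def second_segment_excess_thrust(T_A, W, L_over_D, n_engines=2):
    """Excess thrust (lbf) over the 2.4% gradient with one engine out."""
    T_seo = T_A * (n_engines - 1) / n_engines    # thrust after engine failure
    T_req = W * (0.024 + 1.0 / L_over_D)         # small-angle climb equation
    return T_seo - T_req

# Reduce T_A until the excess thrust is minimal, mimicking the idea of
# driving second-segment excess thrust toward zero.
W, LD = 408_300.0, 14.0      # illustrative takeoff weight (lb) and L/D at V2
T_A = 260_000.0              # illustrative starting thrust (lbf)
while second_segment_excess_thrust(T_A, W, LD) > 1_000.0:
    T_A -= 500.0             # crude fixed-step reduction for illustration
```

The loop terminates with a small positive margin, i.e., an engine sized close to, but not below, the regulatory climb-gradient requirement.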
Next, two discrete databases are created—a BWB design database and a BWB performance database. The BWB design database, matrix D, contains the values of b_a, S_a, MTOW, and T_A,opt for all 30,000 BWB configurations. Therefore, D has dimensionality 30,000 × 4, where each row represents a unique BWB configuration. Similarly, the BWB performance database, matrix P, contains the values of C_Dc, C_Di, C_Dm, C_Dp, L/D, W, SFC, and T_R for the same 30,000 BWB configurations. Here, each performance parameter is a row vector containing 144 elements, representing the time-step values for the entire mission. Since each performance parameter has dimensions 1 × 144 and there are 8 performance parameters, P has dimensionality 30,000 × 1152. Rows of D and P correspond to the same configuration; for example, the values of b_a, S_a, MTOW, and T_A,opt in row 2600 of D yielded the performance in row 2600 of P. Matrices D and P are expressed in Equations (6) and (7).
$$ D = \begin{bmatrix} b_{a_1} & S_{a_1} & MTOW_1 & T_{A_{opt_1}} \\ b_{a_2} & S_{a_2} & MTOW_2 & T_{A_{opt_2}} \\ b_{a_3} & S_{a_3} & MTOW_3 & T_{A_{opt_3}} \\ \vdots & \vdots & \vdots & \vdots \\ b_{a_{N_{LHS}}} & S_{a_{N_{LHS}}} & MTOW_{N_{LHS}} & T_{A_{opt_{N_{LHS}}}} \end{bmatrix} \tag{6} $$
$$ P = \begin{bmatrix} C_{D_{c_1}} & C_{D_{i_1}} & C_{D_{m_1}} & C_{D_{p_1}} & (L/D)_1 & W_1 & SFC_1 & T_{R_1} \\ C_{D_{c_2}} & C_{D_{i_2}} & C_{D_{m_2}} & C_{D_{p_2}} & (L/D)_2 & W_2 & SFC_2 & T_{R_2} \\ C_{D_{c_3}} & C_{D_{i_3}} & C_{D_{m_3}} & C_{D_{p_3}} & (L/D)_3 & W_3 & SFC_3 & T_{R_3} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ C_{D_{c_{N_{LHS}}}} & C_{D_{i_{N_{LHS}}}} & C_{D_{m_{N_{LHS}}}} & C_{D_{p_{N_{LHS}}}} & (L/D)_{N_{LHS}} & W_{N_{LHS}} & SFC_{N_{LHS}} & T_{R_{N_{LHS}}} \end{bmatrix} \tag{7} $$
Once D and P were created, a scaling operation was performed to create D_S and P_S. The motivation for scaling stems from its demonstrated ability to reduce neural network training time, via improved optimization-algorithm convergence, and to reduce prediction errors [56]. Nayak et al. suggested that, to achieve these improvements, the entire input and output databases used for training and testing should be scaled by reference quantities that normalize values between 0 and 1 [57]. Another benefit of scaling, as it applies to this study, is that it reconciles quantities that differ by several orders of magnitude, such as T_R and SFC, or MTOW and b. This ensures that the model’s learning method is not skewed by substantially higher or lower values. Numerically, scaling was conducted by dividing each variable by 1.05 times the maximum value of that variable within its matrix—D or P. Specifically, for D, this entailed dividing the MTOW value in each row by 1.05 times the maximum MTOW appearing in D, i.e., 1.05 · MTOW_max. A new matrix, D_S, was formed by repeating this operation for the other design variables in D, as expressed in Equation (8). The factor of 1.05 applies a 5% growth margin to each reference maximum, which ensures all data points fall strictly between 0 and 1 and reduces the risk of overfitting. Once BWB configuration data were generated by the neural network as outputs, no re-scaling to absolute values was performed before error calculation; keeping predicted and target values in the same 0-to-1 range mitigates numerical sensitivity issues.
$$ D_S = \begin{bmatrix} \dfrac{b_{a_1}}{1.05\, b_{a,max}} & \dfrac{S_{a_1}}{1.05\, S_{a,max}} & \dfrac{MTOW_1}{1.05\, MTOW_{max}} & \dfrac{T_{A_{opt_1}}}{1.05\, T_{A_{opt},max}} \\ \dfrac{b_{a_2}}{1.05\, b_{a,max}} & \dfrac{S_{a_2}}{1.05\, S_{a,max}} & \dfrac{MTOW_2}{1.05\, MTOW_{max}} & \dfrac{T_{A_{opt_2}}}{1.05\, T_{A_{opt},max}} \\ \vdots & \vdots & \vdots & \vdots \\ \dfrac{b_{a_{N_{LHS}}}}{1.05\, b_{a,max}} & \dfrac{S_{a_{N_{LHS}}}}{1.05\, S_{a,max}} & \dfrac{MTOW_{N_{LHS}}}{1.05\, MTOW_{max}} & \dfrac{T_{A_{opt_{N_{LHS}}}}}{1.05\, T_{A_{opt},max}} \end{bmatrix} \tag{8} $$
A similar scaling operation was performed to obtain P_S. Here, each time-step value of every performance parameter was divided by 1.05 times the maximum value of that parameter within P. For example, to scale L/D, each time-step value of L/D in every row was divided by 1.05 times the maximum L/D appearing in P. This operation was repeated for all 8 performance parameters in each row of P to construct P_S, as expressed in Equation (9). At this point, the databases for neural network training and testing have been generated.
$$ P_S = \begin{bmatrix} \dfrac{C_{D_{c_1}}}{1.05\, C_{D_c,max}} & \dfrac{C_{D_{i_1}}}{1.05\, C_{D_i,max}} & \dfrac{C_{D_{m_1}}}{1.05\, C_{D_m,max}} & \cdots & \dfrac{T_{R_1}}{1.05\, T_{R,max}} \\ \dfrac{C_{D_{c_2}}}{1.05\, C_{D_c,max}} & \dfrac{C_{D_{i_2}}}{1.05\, C_{D_i,max}} & \dfrac{C_{D_{m_2}}}{1.05\, C_{D_m,max}} & \cdots & \dfrac{T_{R_2}}{1.05\, T_{R,max}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \dfrac{C_{D_{c_{N_{LHS}}}}}{1.05\, C_{D_c,max}} & \dfrac{C_{D_{i_{N_{LHS}}}}}{1.05\, C_{D_i,max}} & \dfrac{C_{D_{m_{N_{LHS}}}}}{1.05\, C_{D_m,max}} & \cdots & \dfrac{T_{R_{N_{LHS}}}}{1.05\, T_{R,max}} \end{bmatrix} \tag{9} $$
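The scaling in Equations (8) and (9) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors’ code: the matrices below are tiny stand-ins for the 30,000-row D and P databases, and the function names are our own.

```python
import numpy as np

def scale_design(D, growth=1.05):
    """Equation (8): divide each design variable (column of D) by
    `growth` times that variable's maximum within D."""
    ref = growth * D.max(axis=0)          # 1.05 * {b_max, S_max, MTOW_max, T_A_max}
    return D / ref, ref                   # keep `ref` to un-scale predictions later

def scale_performance(P, n_params=8, n_steps=144, growth=1.05):
    """Equation (9): divide every time-step value of a performance
    parameter by `growth` times that parameter's maximum anywhere in P."""
    P3 = P.reshape(P.shape[0], n_params, n_steps)
    ref = growth * P3.max(axis=(0, 2), keepdims=True)  # one maximum per parameter
    return (P3 / ref).reshape(P.shape[0], -1), ref

# Tiny stand-in design database: 3 configurations x (b_a, S_a, MTOW, T_A_opt)
D = np.array([[221.6, 8998.0, 408300.0, 116000.0],
              [229.3, 8048.0, 405000.0, 111400.0],
              [215.0, 8500.0, 400000.0, 113000.0]])
D_S, D_ref = scale_design(D)
```

Because every entry is divided by 1.05 times a maximum, all scaled values stay strictly below 1, consistent with the 5% growth factor described above.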
Outside of neural network training, these matrices can be used to illustrate the numerical sensitivity of a performance parameter to a change in a configuration design parameter, and vice versa. Figure 9 highlights the L/D values across an entire mission for two BWB configurations. The blue line represents a BWB with b_a = 221.6 ft, S_a = 8998 ft², MTOW = 408,300 lbs, and T_A,opt = 116,000 lbs, while the orange line represents a BWB with b_a = 229.3 ft, S_a = 8048 ft², MTOW = 405,000 lbs, and T_A,opt = 111,400 lbs.

2.2. Neural Network Architecture Generation, Numerical Method Selection, and Training

Neural network models, in their most basic form, are governed by a set of algorithms and control variables that map, or correlate, an input to a target. They are composed of multiple layers—input, hidden, and output—each with nodes. Nodes in one layer are linked to nodes in other layers; for example, a node in the first hidden layer is connected to all nodes in the input layer. Each linkage has its own weight value, while each layer has its own bias value. These values determine how data are transformed as they pass from one layer to the next. For example, the input, net_h1, into the first node of the hidden layer, h_1, is the sum of the outputs from each input node multiplied by the weight of each nodal link to h_1, plus the bias value associated with the hidden layer. Throughout training, these weight values are adjusted until the difference between the predicted values and the actual target values is within a specified tolerance.
For this study, the authors investigated shallow ANNs of two different types and of varied architectures. Shallow ANNs are characterized by having only one hidden layer. Typically, shallow neural networks are used to better understand the feasibility of deploying machine learning models for a problem of interest. They are relatively simple, inexpensive models to develop in terms of the computational resources required, and they often serve as better surrogate models than comparatively more complex neural network architectures for problems not involving image or video processing. Theoretically, a shallow ANN with enough neurons in the hidden layer can adequately capture complex features of a database, such as non-linearity and multi-dimensionality [58]. Within the subset of shallow ANNs, two were selected for this study—feed-forward and cascade-forward neural networks. A feed-forward neural network is one of the simplest neural network architectures: each neuron in the input layer is mapped, or linked, to every neuron in the hidden layer, which is then mapped to every neuron in the output layer. Compared to this architecture, a cascade-forward neural network has an additional mapping from each neuron in the input layer directly to every neuron in the output layer. This type of ANN has provided favorable predictive accuracy in scenarios involving time-series data [9]. Figure 10 illustrates the architectures for both shallow feed-forward and shallow cascade-forward ANNs. Within each model type, the number of neurons in the hidden layer, μ, was varied from 1 to 10 to better understand the sensitivity of predictive accuracy to the number of neurons. Increasing the number of neurons can impact training time and computational cost—central processing unit (CPU) speed and random access memory (RAM) usage. The trade-off, however, is the potential for higher predictive accuracy.
With regard to the input and output layers, the number of neurons in each is dependent on the dimensionality of the training database. In this case, since the goal is to develop a neural network suitable for inverse aircraft design, the input for training is BWB performance data, namely P_S, while the output is the corresponding BWB configuration data, i.e., D_S. The dimensionality of P_S is 30,000 × 1152, which implies that each BWB configuration is characterized by 1152 vehicle performance data points—composed of 8 individual performance parameters. For this reason, the input layer is composed of 1152 neurons, where each neuron represents an individual performance data point. For example, the second neuron in the input layer represents the second time step of C_Dc. Meanwhile, since the dimensionality of D_S is 30,000 × 4, the output layer contains 4 neurons, where each one represents a different configuration design variable. For example, the first neuron in the output layer represents b_a for a BWB configuration.
After the neural network models have been architected, training can commence. The training scheme employed was supervised learning, as it involved labeled training data, P_S and D_S, in which each input corresponds to one specific set of outputs [59]. The intent of training is to allow the neural network to “learn” features of the data such that it can predict output values when provided inputs. In the process of doing so, weight values are numerically adjusted to reduce the error between predicted and target values. Figure 11 provides an overview of this process. It involves iteratively passing the input and output data, in their entirety, through the neural network. One complete pass of the data to be learned is referred to as an epoch.
The size of an epoch is dictated by how many training points from P_S and D_S are used for training, which is explained in the following section. Neural network training typically involves multiple epochs before converging to favorable predictive performance. Equation (10) represents the calculation of net_h1 for the first BWB configuration used for training in an epoch, namely rows P_S¹ and D_S¹. Here, (P_S)_n¹ represents the value of the BWB airplane performance parameter in the first row and nth column of the P_S matrix. These values are associated with the neurons of the input layer: (P_S)_1¹, the first time-step value of C_Dc, is the value of the first neuron, while (P_S)_1152¹, the last time-step value of T_R, is the value of the last neuron. Iw_n¹ are the weight values associated with each neuron’s linkage to h_1, and b_h is the bias value for the entire hidden layer.
$$ net_{h_1} = \sum_{n=1}^{1152} \left( Iw_n^1 \, (P_S)_n^1 \right) + b_h \tag{10} $$
How data are passed through each neuron in the hidden layer, whether it is 1 neuron or 10, is governed by a numerical algorithm known as an activation function. Fundamentally, an activation function uses the input value of a neuron to calculate the output value of the neuron. There are many activation functions to choose from, each with its own trade-offs in accuracy, generalization across a wider training set, and computational wall-clock time needed for training, to name a few. For this study, the authors elected to use the tan-sigmoid function, also known as the hyperbolic tangent function, expressed in Equation (11), where o u t h i is the output from the ith neuron in the hidden layer. A benefit of the tan-sigmoid activation function is that its derivative is steeper at most points compared to the derivatives of other activation functions. This enables larger numerical changes in weight values during training, which can significantly reduce training time [60]. The training process will be described in more detail in the following paragraphs.
$$ out_{h_i} = \frac{2}{1 + e^{-2\, net_{h_i}}} - 1 = \tanh(net_{h_i}) \tag{11} $$
Ultimately, this value, in conjunction with the weights associated with the linkages from the hidden-layer neurons to the output-layer neurons, Ow_n, and the bias value of the output layer, b_o, is used to calculate the inputs into the output-layer nodes, as expressed in Equation (12), where net_o1 is the input into the first neuron of the output layer and μ is the number of neurons in the hidden layer. Figure 12 provides a schematic representation of a feed-forward shallow neural network, highlighting some of the aforementioned variables.
$$ net_{o_1} = \sum_{n=1}^{\mu} \left( Ow_n^1 \, out_{h_n} \right) + b_o \tag{12} $$
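Equations (10)–(12) amount to two matrix–vector products with a tanh in between. The sketch below, with hypothetical randomly initialized weights (all names ours), traces the full forward pass of the shallow feed-forward network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, mu, n_out = 1152, 10, 4            # input, hidden, and output neuron counts

# Hypothetical weights and biases; training adjusts these values.
Iw  = rng.normal(scale=0.01, size=(mu, n_in))   # input-to-hidden weights, Iw_n
b_h = np.zeros(mu)                              # hidden-layer bias, b_h
Ow  = rng.normal(scale=0.1,  size=(n_out, mu))  # hidden-to-output weights, Ow_n
b_o = np.zeros(n_out)                           # output-layer bias, b_o

def forward(x):
    net_h = Iw @ x + b_h                 # Equation (10), for every hidden neuron
    out_h = np.tanh(net_h)               # Equation (11), tan-sigmoid activation
    return Ow @ out_h + b_o              # Equation (12), scaled design variables

x = rng.random(n_in)                     # one row of P_S: 8 parameters x 144 steps
y_hat = forward(x)                       # predicted (scaled) b_a, S_a, MTOW, T_A_opt
```

For a cascade-forward network, the return line would gain one extra term, `Dw @ x`, where `Dw` is the additional direct input-to-output weight matrix.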
Using P_S¹, net_o is calculated for each neuron in the output layer. These values can be compared to the target values, D_S¹. There will inherently be a difference between the two quantities, which is defined as the total error. Improving neural network predictive accuracy is based on determining the sensitivity of the total error to each weight value through calculation of the partial derivative of the total error with respect to each weight. This procedure, called backpropagation, traces the model’s outputs back through the neurons that generated them and, ultimately, back to the weights that were applied. For this reason, the derivative of the activation function plays a crucial role in training. Once the partial derivatives have been calculated, an optimization algorithm determines the numerical changes to the weight values that minimize the total error [61]. Based on previous work, the authors elected to use the Levenberg–Marquardt (LM) optimization algorithm with Bayesian regularization. One benefit of applying Bayesian regularization to the LM scheme is that it improves overall predictive performance by adjusting a linear combination of squared errors and weight values [62,63].
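To make the backpropagation step concrete, the sketch below derives the weight updates for a single-hidden-layer network under a squared-error loss. It uses plain gradient descent purely for illustration; the paper’s actual training uses Levenberg–Marquardt with Bayesian regularization, which forms updates from a Jacobian-based approximation rather than this simple rule.

```python
import numpy as np

def backprop_step(Iw, b_h, Ow, b_o, x, t, lr=0.05):
    """One illustrative gradient-descent update for a tanh hidden layer
    and a linear output layer, minimizing E = 0.5 * ||y - t||^2.
    Updates the weight/bias arrays in place; returns the pre-update error."""
    net_h = Iw @ x + b_h
    out_h = np.tanh(net_h)
    y = Ow @ out_h + b_o                       # forward pass

    d_o = y - t                                # dE/d(net_o): linear output layer
    d_h = (Ow.T @ d_o) * (1.0 - out_h ** 2)    # chain rule through tanh'

    Ow  -= lr * np.outer(d_o, out_h)           # dE/dOw
    b_o -= lr * d_o
    Iw  -= lr * np.outer(d_h, x)               # dE/dIw
    b_h -= lr * d_h
    return 0.5 * float(np.sum(d_o ** 2))

# Tiny demonstration: the error shrinks as the weights adapt to one pair.
rng = np.random.default_rng(1)
Iw, b_h = rng.normal(size=(3, 2)), np.zeros(3)
Ow, b_o = rng.normal(size=(1, 3)), np.zeros(1)
x, t = np.array([0.5, -0.2]), np.array([0.3])
errors = [backprop_step(Iw, b_h, Ow, b_o, x, t) for _ in range(50)]
```

The `1.0 - out_h**2` factor is the derivative of tanh, which is why the activation function’s derivative is central to training, as noted above.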
While there are several ways to express neural network performance and training-stoppage criteria, root mean squared error (RMSE) is often used, as it expresses accuracy across a wider set of design points. Compared to net error (NE), which is simply the absolute difference between the target and predicted values, RMSE expresses the standard deviation of the prediction errors across N_s sample points, as shown in Equation (13), where y_i and ŷ_i are the prediction and truth values, respectively. In this sense, a low RMSE value is indicative of low numerical noise as it pertains to predictive accuracy [64]. For training purposes, once the averaged RMSE converged over 3 epochs, i.e., no further improvement in predictive performance [65], training was stopped.
$$ RMSE = \sqrt{\frac{\sum_{i=1}^{N_s} \left( y_i - \hat{y}_i \right)^2}{N_s}} \tag{13} $$
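As a worked illustration of Equation (13), the helper below (function name ours) computes RMSE over a batch of predictions:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Equation (13): square root of the mean squared difference
    over N_s sample points."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```

For instance, predictions of (0, 0) against targets of (3, 4) give RMSE = sqrt((9 + 16) / 2) ≈ 3.54, whereas the individual net errors are simply 3 and 4.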

2.3. K-Fold Cross-Validation Training and Testing Scheme

Neural network training and testing were conducted using a K-fold cross-validation scheme. Cross-validation is a statistical method that partitions data into subsets, training a model on one subset and evaluating its performance on another [66]. Typically, to reduce variability, multiple rounds of cross-validation are performed using different subsets of the same database. The validation results from these rounds are combined to yield an estimate of the model’s overall predictive performance. As it applies to neural network training and testing, cross-validation is a means of minimizing prediction error by training and testing on all data points of a large dataset [67].
K-fold cross-validation involves randomly dividing a database into k subsets, or folds, each of the same size, i.e., containing the same number of data points. For the first batch, the first k − 1 folds are used for training while the kth fold is used for testing. This process is repeated k times, each time using a different fold for testing—for example, the kth batch would use the first fold for testing while the remaining folds, two through k, are used for training [68]. Since the model is trained and tested across all folds, the cross-validation RMSE is computed by averaging the RMSE values across all k folds.
For this study, a 10-fold cross-validation scheme is used. This scheme offers a key advantage in that every data point is used for training nine times and testing once. This in turn reduces bias and reduces the variance in the prediction errors [69]. Its obvious disadvantage is the need to train and test a model ten times, which can be computationally intensive.
P_S and D_S were divided into 10 folds, each containing 3000 data points. Since the BWB configurations and performance metrics were generated using an LHS scheme, the folds were created by simply dividing P_S and D_S uniformly into 10 folds, rather than randomly selecting points to form each fold. Effectively, for a given batch, the neural network models were trained with 9 folds (27,000 data points) and tested with 1 fold (3000 data points), following the ordering of training and testing folds shown in Figure 13.
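The fold bookkeeping described above can be sketched as follows, assuming contiguous, uniform folds as in the paper (function name ours):

```python
import numpy as np

def kfold_batches(n_samples, k=10):
    """Yield (train_idx, test_idx) index arrays for each of the k batches.
    Folds are contiguous, uniform slices, which suffices here because the
    underlying data were generated by Latin-hypercube sampling."""
    fold = n_samples // k
    idx = np.arange(n_samples)
    for b in range(k):
        test = idx[b * fold:(b + 1) * fold]
        train = np.concatenate((idx[:b * fold], idx[(b + 1) * fold:]))
        yield train, test

batches = list(kfold_batches(30000, k=10))
```

Each batch trains one model on 27,000 rows of P_S/D_S and tests on the held-out 3000; averaging the 10 per-batch RMSE values gives the cross-validation RMSE.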
This scheme was applied to both neural network types, namely feed-forward and cascade-forward, and for all architectures, i.e., 1 to 10 neurons in the hidden layer. Therefore, for each neural network type, 100 neural network models were trained and tested. As mentioned in the previous subsection, the averaged RMSE was then calculated for each of these neural network models.

2.4. Random Forest Approach for Extensibility Analysis

Once 10-fold cross-validation neural network training and testing is complete, a neural network with the best predictive performance, across all architectures, can be identified. While the neural network models are trained and tested across parametrically-generated configurations bounded by the limits informed by Table 1, they have not been exposed to the exact configurations listed in this table. Additionally, since the geometry parametrization model was calibrated around the BWB450 vehicle, the configurations generated for training and testing purposes have similar configuration characteristics as the BWB450.
For this reason, the authors wanted to better understand the model’s extensibility characteristics, i.e., its ability to predict the values of configuration design parameters when supplied with BWB performance data for a vehicle that does not resemble the BWB450. This testing was accomplished through a random forest classification (RFC) scheme. At its core, RFC uses a collection of surrogate models working together to classify or solve a problem [70]. Rather than relying on one model, the RFC scheme leverages several models, each trained on its own subspace, which, when combined, can show a monotonic improvement in classification [71].
A result of the 10-fold cross-validation method is that, even after a suitable neural network architecture is identified for each neural network type, there are still 10 trained versions of each model, making the setup well suited to RFC. Leveraging this method, the extensibility analysis was conducted as follows. First, a BWB vehicle was selected from Table 1. This vehicle’s configuration was modeled using SUAVE, and its performance was obtained using SUAVE’s mission solver coupled with AVL. Both the configuration design variables—b, S, MTOW, and T_A—and the performance parameters—C_Dc, C_Di, C_Dm, C_Dp, L/D, W, SFC, and T_R—were then scaled using the same reference scaling quantities applied to generate the P_S and D_S matrices. Next, the scaled performance parameters were fed as inputs to all 10 neural networks of the same type and architecture, and each model generated values for the configuration design variables. Lastly, the average was calculated for each of the four configuration design variables. Once un-scaled using the reference scaling quantities, these averages were compared against the target values, i.e., the actual values of b, S, MTOW, and T_A for the selected BWB.
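The ensemble-averaging step just described can be sketched as below. The ten trained networks are stand-ins here (simple callables), `ref_D` plays the role of the 1.05-times-maximum reference quantities used to build D_S, and all names are ours.

```python
import numpy as np

def rf_average_prediction(models, p_scaled, ref_D):
    """Average the scaled design-variable predictions from all 10
    cross-validation models, then un-scale with the same reference
    quantities used to build D_S."""
    preds = np.stack([m(p_scaled) for m in models])   # shape (10, 4)
    return preds.mean(axis=0) * ref_D                 # absolute b, S, MTOW, T_A

def percent_error(predicted, actual):
    """Error as a percentage of the actual value, as reported in Table 2."""
    return 100.0 * np.abs(predicted - actual) / np.abs(actual)

# Stand-in "networks": each returns a slightly different scaled prediction.
models = [lambda p, c=c: np.full(4, c) for c in np.linspace(0.45, 0.55, 10)]
ref_D = np.array([231.0, 9000.0, 410000.0, 120000.0])  # hypothetical 1.05*max values
estimate = rf_average_prediction(models, np.zeros(1152), ref_D)
```

Averaging across the ten fold-trained models smooths out fold-specific noise in any single network’s prediction before the comparison against the target values.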

3. Inverse-Design Prediction Results

3.1. Performance of Neural Networks

In general, the averaged RMSE value associated with neural network prediction indicates the model’s ability to accurately predict BWB configuration design characteristics when provided with performance data. Note that the RMSE is averaged because the network outputs four BWB design variables; this averaging is performed after rescaling to absolute values. A lower averaged RMSE value indicates precise correlation and accurate predictive performance, while a higher averaged RMSE value indicates poor predictions. Figure 14 and Figure 15 present the averaged RMSE for all 10 folds, for different numbers of neurons in the hidden layer (1 ≤ μ ≤ 10), and for both neural network types—feed-forward and cascade-forward.
To better aid in determining a suitable neural network architecture for the extensibility analysis elaborated on in Section 2.4, it was first important to determine the sensitivity of prediction accuracy to μ. For this reason, the fold-averaged RMSE was calculated across all 10 folds, for each μ value and for both neural network types, as shown in Figure 16.
The feed-forward neural network architectures exhibit fold-averaged RMSE values ranging from approximately 0.042 down to 0.0025, while the cascade-forward architectures range from approximately 0.032 down to 0.006. For both neural network types, increasing μ decreased prediction errors, as indicated by the decreasing averaged RMSE values. This demonstrates the benefit of additional neurons in the hidden layer: additional processing capacity for a highly complex, non-linear data mapping. Note that these results were obtained for N_LHS = 30,000, and different results would likely be obtained with a different number of training points.
When a neural network has too few neurons, it lacks the capacity to learn the underlying patterns of the training data. Increasing the number of neurons allows the model to adequately learn more of the features and trends that may exist in the data. However, increasing μ beyond a certain value no longer has a large effect on predictive performance, as indicated by the asymptotic convergence after μ = 4 for feed-forward architectures and after μ = 8 for cascade-forward architectures.
In general, feed-forward architectures yielded better predictive performance than cascade-forward architectures across almost all values of μ. Furthermore, feed-forward architectures also exhibited significantly lower variance across all μ values, as shown by the green shaded region in Figure 17. On the other hand, cascade-forward neural networks are sensitive to overfitting when trained with too many training samples [72]—recall that each model was trained with 27,000 samples (9 training folds, each with 3000 data points). Low error rates and high variance in predicted outputs are two indicators of overfitting, which the cascade-forward architectures exhibit, as illustrated by the larger green shaded region in Figure 18 [73].

3.2. Extensibility Analysis via Random Forest

Examining Figure 16 more closely, a shallow feed-forward neural network with μ = 10 and a shallow cascade-forward neural network with μ = 10 were chosen for the extensibility analysis via the random forest approach described in Section 2.4. Although convergence in predictive accuracy was observed after μ = 4 for feed-forward models and μ = 8 for cascade-forward models, the most accurate models—μ = 10 for both neural network types—were chosen. Across all 10 folds, these models exhibit the best predictive performance, i.e., the lowest RMSE values.
Table 2 details all of the BWB vehicles that were part of the extensibility analysis: the actual values of b, S, MTOW, and T_A for each configuration; the random-forest-averaged predicted values for both feed-forward and cascade-forward neural network architectures; and the respective errors as a percentage of the actual values. For feed-forward neural networks with μ = 10, the error in b ranged from 1.6 × 10⁻⁵% to 0.06%, with an average across all vehicles of 0.011%. Similarly, the errors for S ranged from 2 × 10⁻⁴% to 0.084%, with an average of 0.04%; for MTOW, from 0.005% to 0.51%, with an average of 0.15%; and for T_A, from 0.005% to 0.42%, with an average of 0.11%. The errors exhibited by cascade-forward neural networks with μ = 10 are comparatively much higher: b—0.45% to 3.61%, with an average of 0.96%; S—0.06% to 3.49%, with an average of 0.73%; MTOW—5.95 × 10⁻⁴% to 3.35%, with an average of 1.89%; and T_A—0.013% to 5.92%, with an average of 2.13%.
In practice, for every BWB configuration, the feed-forward neural network was significantly more accurate than the cascade-forward architecture in all 4 design variables. Applying the feed-forward architecture to a broader set of vehicles—BWBs, HWBs, and IWBs—confirms that the feed-forward neural networks not only adequately learned the features and trends relating b, S, MTOW, and T_A to the vehicle’s performance data, but also generalized effectively across a broader design space. Additionally, vehicles that exhibit low prediction error more closely resemble the training data to which the neural networks were exposed; in this sense, vehicles with lower errors likely share configuration design characteristics with the BWB450. Furthermore, the variability of the cascade-forward results, as exhibited by the standard deviation across each μ value, confirms that this architecture did indeed struggle with overfitting during training. As such, given the number of training data points this neural network type was exposed to, it is not suitable for use in a broader, more diverse design space. Finally, the extensibility analysis shows an increase in error as input points begin to fall outside the range of the data the model was trained on; such points scale to values greater than 1, which influences how the model can be used. Careful consideration must be taken when using it to extrapolate outside of the design space it was trained with.

3.3. Envisioned Usage in Design Space Exploration

The results of this study have showcased the viability of developing machine learning models, namely neural networks, capable of generating BWB configuration design parameters when provided with time-series, mission-informed BWB performance data. The data used to train and test these neural networks were generated via Level-0 airplane configuration definition and performance estimation tools, which are typically leveraged in the preliminary design cycle phase. Traditionally, this phase also involves design space exploration where conventional techniques dictate iteratively exploring the design space while converging to an optimum configuration with desired performance. Here, the aforementioned neural networks can be advantageous over traditional surrogate models in that the neural network is trained to handle performance specified over the entire mission of the airplane. In this sense, a designer can individually modify performance parameters, such as L / D , in specific portions of the flight envelope, such as climb, while leaving L / D values in the cruise segment constant, and instantly observe its effect on the configuration design via the prediction of design parameters from the neural network. Additionally, in the context of optimization within design space exploration, such a model could be advantageous in identifying a suitable starting point—baseline airplane to begin optimizing. This, in turn, can reduce the number of iterations, thereby decreasing computational costs.
Additionally, the developed neural networks could be employed in uncertainty quantification (UQ) scenarios where changes to performance parameters can be viewed through the lens of configuration design changes. UQ problems typically leverage analytical tools of varied fidelity—dependent on the nature of the problem. Conducting airplane performance analysis across an entire flight profile using traditional assessment tools can be computationally intensive. In this sense, leveraging the mission-informed neural network models could allow designers to more rapidly explore the sensitivity of the design space. Furthermore, exposing the model to a larger set of design parameters can aid in understanding more about tightly coupled trends and relationships that may exist between configuration design and performance parameters.
Since the neural networks were trained with data from a BWB configuration parameterization model, seemingly, to adopt these neural networks for a different vehicle class would simply require replacing the configuration parameterization model. For example, a TAW configuration parameterization model could be leveraged for the configuration generation process. This broadly implies that the computational framework can easily be adopted to any vehicle by simply replacing the parameterization model.

4. Conclusions and Future Work

This paper has presented the development of neural networks, the data generated to train them, the prediction accuracy of these models, and an extensibility analysis for BWB inverse-design space exploration, in order to investigate the feasibility of this design approach for unconventional aircraft configurations. The models were developed to take time-dependent, mission-informed BWB airplane performance data—C_Dc, C_Di, C_Dm, C_Dp, L/D, W, SFC, and T_R—as inputs and generate BWB configuration data—b, S, MTOW, and T_A—as outputs. The neural networks were trained with BWB configuration data generated through a BWB configuration parameterization model, which was calibrated using the BWB450 vehicle. These configurations were then run through SUAVE, a low-fidelity airplane performance assessment tool, to obtain the BWB performance data also used for neural network training. Two shallow neural network architectures were tested, namely feed-forward and cascade-forward. Within each type, the number of neurons in the hidden layer, μ, was varied from 1 to 10. Each model leveraged the tan-sigmoid activation function, and the Levenberg–Marquardt optimization algorithm with Bayesian regularization was selected to adjust weight quantities during the training process. A total of 30,000 data points were generated, i.e., 30,000 BWB configurations, each with its own unique performance values. Training and testing of each neural network were conducted using a 10-fold cross-validation scheme, where each fold contained 3000 BWB configuration–performance pairs. By doing so, an optimal neural network type and architecture was identified. The number of data points, 30,000, was chosen to provide adequate design space coverage and a suitable number of training points for each fold; however, it was not the focus of this study.
The results obtained demonstrated the feasibility of developing inverse-design, mission-informed neural networks for an unconventional airplane configuration, namely the BWB. While individual models could have been developed for each of the four design variables, the intent was to investigate the viability of developing a single neural network for the prediction of all four configuration parameters, which was successfully demonstrated. Neural network type and architecture both had a significant influence on prediction accuracy. Ultimately, the 10-fold cross-validation training and testing scheme revealed that a feed-forward neural network with μ = 10 exhibited the highest predictive accuracy of all the neural networks developed. While not nearly as accurate as its feed-forward counterpart, the cascade-forward neural network with μ = 10 had the highest predictive accuracy among the cascade-forward architectures. It is worth noting, however, that prediction accuracy did not improve significantly beyond μ = 4 for feed-forward architectures and μ = 8 for cascade-forward architectures; where minimizing computational cost matters, it may be more suitable to leverage these smaller models instead.
After training, these models were chosen for extensibility analysis via a random forest scheme: rather than using a single neural network to predict configuration values, a collection of neural networks of the same architecture and type cohesively expresses the predicted values. Specifically, all 10 feed-forward neural networks with μ = 10, one from each of the 10 folds, were used to construct an averaged prediction of configuration values for a set of test BWB vehicles. The same scheme was employed for the 10 cascade-forward neural networks with μ = 10.
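The fold-ensemble averaging described above admits a very short sketch (illustrative only; `models` stands in for the 10 fold-trained networks, represented here by toy callables):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the configuration predictions (b, S, MTOW, TA) of all
    fold-trained networks to form the ensemble estimate."""
    preds = np.stack([model(x) for model in models])  # shape (n_models, 4)
    return preds.mean(axis=0)

# Toy stand-ins for 10 fold-trained networks, each returning 4 values:
models = [lambda x, k=k: np.full(4, float(k)) for k in range(10)]
```

Averaging across folds smooths out the idiosyncrasies of any single fold's training subset, which is the motivation for the random-forest-style scheme.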
The feed-forward neural network architecture demonstrated significantly better prediction accuracy than the cascade-forward architecture, with substantially lower variance. This indicates that the cascade-forward architectures suffered from a degree of overfitting, while the feed-forward architecture generalized better across a broader design space. Across the BWB test vehicles, relatively larger errors were observed for vehicles whose configuration characteristics and trends, such as the distribution of wing area along the span, differ drastically from those of the vehicle used to calibrate the models, namely the BWB450.
The framework presented in this paper was geared towards unconventional configurations, and the neural networks were exposed to data derived from low-fidelity, Level-0 configuration and performance assessment tools. A possible avenue to explore, which may reduce some of the errors seen in the extensibility analysis, is exposure to mixed data types, specifically mixed-fidelity data. The effect on overall prediction accuracy of leveraging a higher-fidelity performance estimation tool in conjunction with a lower-order tool warrants further investigation. This could also prove to be a key step in expanding the number of performance and design parameters the models can be trained and tested with, which could, in turn, permit the exploration of larger, higher-dimensional design spaces. It is also worth investigating the effect of incorporating static airplane performance data into the training dataset, for example, performance parameters such as approach speed, V_app, or takeoff field length, TOFL.
A complication of inverse-design methods is that they are often ill-posed, i.e., the same performance output can be produced by more than one configuration input. In this study, the authors mitigated this risk by carefully constraining the training data, leveraging a configuration parameterization model calibrated using the BWB450 vehicle, and keeping the flight profile fixed across the dataset. Making the vehicle's flight profile dynamic rather than fixed warrants further investigation and could ultimately enable the use of such models in airplane design optimization settings. Further examination of which design parameters to vary would also be beneficial and could be achieved via analysis-of-variance tests over a larger set of design parameters. These are all significant areas worthy of investigation and could help pave the road towards robust, mission-informed predictive surrogate models capable of being deployed in airplane inverse-design scenarios.
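The ill-posedness can be seen with a toy example: since aspect ratio satisfies AR = b²/S, distinct (b, S) pairs produce identical AR, so a model asked to recover b and S from AR alone has no unique answer (the numbers below are illustrative only, not vehicles from this study):

```python
def aspect_ratio(b, S):
    """AR = b^2 / S for wingspan b (ft) and wing area S (ft^2)."""
    return b ** 2 / S

# Two different configurations yielding the same aspect ratio:
ar1 = aspect_ratio(200.0, 8000.0)    # b = 200 ft,  S = 8000 ft^2
ar2 = aspect_ratio(300.0, 18000.0)   # b = 300 ft, S = 18,000 ft^2
```

Constraining the parameterization and fixing the mission profile, as done here, shrinks the set of configurations consistent with a given performance history and thereby reduces this ambiguity.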

Author Contributions

Conceptualization, R.S.S. and S.H.; methodology, R.S.S. and S.H.; software, R.S.S. and S.H.; validation, R.S.S. and S.H.; formal analysis, R.S.S. and S.H.; investigation, R.S.S. and S.H.; resources, R.S.S. and S.H.; data curation, R.S.S. and S.H.; writing—original draft preparation, R.S.S. and S.H.; writing—review and editing, R.S.S. and S.H.; visualization, R.S.S. and S.H.; supervision, S.H.; project administration, R.S.S. and S.H.; funding acquisition, R.S.S. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

b = wingspan
S = wing area
AR = aspect ratio
W/L = wing loading
T/W = thrust-to-weight ratio
C_L = lift coefficient
C_D = drag coefficient
C_M = pitching moment coefficient
C_P = pressure coefficient
α = angle of attack
L/D = lift-to-drag ratio
V_v = tail volume coefficient
b = wingspan distribution
S = wing area distribution
c = root chord length distribution
b_bl = wingspan distribution for BWB450
S_bl = wing area distribution for BWB450
Λ_c,bl = quarter-chord sweep distribution for BWB450
c_r = root chord length
AC = aerodynamic center
φ = root chord fraction
η = wingspan fraction
w_φ = position of engine in terms of root chord fraction
w_η = position of engine in terms of wingspan fraction
b_a = adjusted wingspan distribution
S_a = adjusted wing area distribution
T_A = total static thrust available
T_A,opt = 2nd-segment-optimized total static thrust available
R² = coefficient of determination
C_Dc = compressibility effects drag coefficient
C_Di = induced drag coefficient
C_Dm = miscellaneous drag coefficient
C_Dp = parasitic drag coefficient
W = weight
T_R = thrust required
SFC = specific fuel consumption
{C_Dc, C_Di, C_Dm, C_Dp, L/D, W, SFC, T_R} = time-domain airplane performance vectors
D = configuration design data matrix
P = airplane performance matrix
{MTOW_max, b_a,max, S_a,max, T_A,opt,max, C_Dc,max, C_Di,max, C_Dm,max, C_Dp,max, (L/D)_max, W_max, SFC_max, T_R,max} = maximum reference quantities used for scaling
D_S = scaled configuration design data matrix
P_S = scaled airplane performance matrix
N_LHS = Latin-hypercube sample size
net_h1 = net input into the first node of the first hidden layer
h_1 = first hidden layer
μ = number of neurons in the hidden layer of a shallow neural network
I_w = weight values for input-to-hidden-layer nodal connections
b_h = bias value of the hidden layer
out_hi = output from the ith neuron in the hidden layer
O_w = weight values for hidden-to-output-layer nodal connections
b_o = bias value of the output layer
V_app = approach speed

References

  1. Dorsey, A.; Uranga, A. Design Space Exploration of Blended Wing Bodies. In Proceedings of the AIAA AVIATION 2021 Forum, Virtual Event, 2–6 August 2021. [Google Scholar] [CrossRef]
  2. Raymer, D.P. Aircraft Design: A Conceptual Approach. In AIAA Education Series, AIAA; American Institute of Aeronautics and Astronautics, Inc.: Reston, VA, USA, 1992. [Google Scholar] [CrossRef]
  3. Strathoff, P.; Zumegen, C.; Stumpf, E.; Klumpp, C.; Jeschke, P.; Warner, K.L.; Gelleschus, R.; Bocklisch, T.; Portner, B.; Moser, L.; et al. On the Design and Sustainability of Commuter Aircraft with Electrified Propulsion Systems. In Proceedings of the AIAA AVIATION 2022 Forum, Chicago, IL, USA, 27 June–1 July 2022. [Google Scholar] [CrossRef]
  4. Wakayama, S.; Kroo, I. The challenge and promise of blended-wing-body optimization. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, USA, 2–4 September 1998. [Google Scholar] [CrossRef]
  5. Cinar, G.; Cai, Y.; Bendarkar, M.V.; Burrell, A.I.; Denney, R.K.; Mavris, D.N. System Analysis and Design Space Exploration of Regional Aircraft with Electrified Powertrains. J. Aircr. 2022, 60, 1–28. [Google Scholar] [CrossRef]
  6. Biser, S.; Filipenko, M.; Boll, M.; Kastner, N. Design Space Exploration Study and Optimization of a Distributed Turbo-Electric Propulsion System for a Regional Passenger Aircraft. In Proceedings of the AIAA Propulsion and Energy 2020 Forum, Virtual Event, 24–28 August 2020. [Google Scholar] [CrossRef]
  7. Kallou, E.; Sarojini, D.; Mavris, D.N. Application of Set-Based Design Principles on Multi-Level Aircraft Design Space Exploration. In Proceedings of the AIAA AVIATION 2022 Forum, Chicago, IL, USA, 27 June–1 July 2022. [Google Scholar] [CrossRef]
  8. Wang, J.; Wu, J.; Ling, J.; Iaccarino, G.; Xiao, H. Physics-Informed Machine Learning for Predictive Turbulence Modeling: Towards a Complete Framework; Sandia National Lab.: Albuquerque, NM, USA, 2016. [CrossRef]
  9. Sharma, R.S.; Hosder, S. Investigation of Mission-Driven Inverse Aircraft Design Space Exploration with Machine Learning. J. Aerosp. Inf. Syst. 2021, 18, 774–789. [Google Scholar] [CrossRef]
  10. Sun, G.; Sun, Y.; Wang, S. Artificial neural network based inverse design: Airfoils and wings. Aerosp. Sci. Technol. 2015, 42, 415–428. [Google Scholar] [CrossRef]
  11. Gibbs, J.; Gollnick, V. Inverse Aircraft Design. In Proceedings of the AIAA AVIATION 2020 FORUM, Virtual, 15–19 June 2020. [Google Scholar] [CrossRef]
  12. Sekar, V.; Zhang, M.; Shu, C.; Khoo, B.C. Inverse Design of Airfoil Using a Deep Convolutional Neural Network. AIAA J. 2019, 57, 993–1003. [Google Scholar] [CrossRef]
  13. Rai, M.M.; Madavan, N.K. Aerodynamic Design Using Neural Networks. AIAA J. 2000, 38, 173–182. [Google Scholar] [CrossRef]
  14. Barrett, T.R.; Bressloff, N.W.; Keane, A.J. Airfoil Shape Design and Optimization Using Multifidelity Analysis and Embedded Inverse Design. AIAA J. 2006, 44, 2051–2060. [Google Scholar] [CrossRef]
  15. Kharal, A.; Saleem, A. Neural networks based airfoil generation for a given Cp using Bezier–PARSEC parameterization. Aerosp. Sci. Technol. 2012, 23, 330–344. [Google Scholar] [CrossRef]
  16. Yilmaz, E.; German, B. A Deep Learning Approach to an Airfoil Inverse Design Problem. In Proceedings of the 2018 Multidisciplinary Analysis and Optimization Conference, Atlanta, GA, USA, 25–29 June 2018. [Google Scholar] [CrossRef]
  17. Glaws, A.; King, R.N.; Vijayakumar, G.; Ananthan, S. Invertible Neural Networks for Airfoil Design. AIAA J. 2022, 60, 3035–3047. [Google Scholar] [CrossRef]
  18. Yu, K.; Chen, C.; Chen, Y. Inverse Design of Nozzle Using Convolutional Neural Network. J. Spacecr. Rocket. 2022, 59, 1161–1170. [Google Scholar] [CrossRef]
  19. Oddiraju, M.; Behjat, A.; Nouh, M.; Chowdhury, S. Efficient Inverse Design of 2D Elastic Metamaterial Systems using Invertible Neural Networks. In Proceedings of the AIAA AVIATION 2021 FORUM, Virtual Event, 2–6 August 2021. [Google Scholar] [CrossRef]
  20. Li, J.; Zhang, M. Data-based approach for wing shape design optimization. Aerosp. Sci. Technol. 2021, 112, 106639. [Google Scholar] [CrossRef]
  21. Thuerey, N.; Weißenow, K.; Prantl, L.; Hu, X. Deep Learning Methods for Reynolds-Averaged Navier–Stokes Simulations of Airfoil Flows. AIAA J. 2020, 58, 25–36. [Google Scholar] [CrossRef]
  22. Singh, A.P.; Medida, S.; Duraisamy, K. Machine-Learning-Augmented Predictive Modeling of Turbulent Separated Flows over Airfoils. AIAA J. 2017, 55, 2215–2227. [Google Scholar] [CrossRef]
  23. Li, J.; Du, X.; Martins, J. Machine learning in aerodynamic shape optimization. Prog. Aerosp. Sci. 2022, 134, 100849. [Google Scholar] [CrossRef]
  24. Li, J.; Zhang, M. Adjoint-Free Aerodynamic Shape Optimization of the Common Research Model Wing. AIAA J. 2021, 59, 1990–2000. [Google Scholar] [CrossRef]
  25. Bouhlel, M.; He, S.; Martins, J. Scalable gradient-enhanced artificial neural networks for airfoil shape design in the subsonic and transonic regime. Struct. Multidiscip. Optim. 2020, 61, 1363–1376. [Google Scholar] [CrossRef]
  26. Du, X.; He, P.; Martins, J. A B-spline-based generative adversarial network model for fast interactive airfoil aerodynamic optimization. In Proceedings of the AIAA SciTech Forum, AIAA, Orlando, FL, USA, 6–10 January 2020. [Google Scholar] [CrossRef]
  27. Barnhart, S.; Narayanan, B.; Gunasekaran, S. Blown wing aerodynamic coefficient predictions using traditional machine learning and data science approaches. In Proceedings of the AIAA SciTech Forum, Online, 11–15 January 2021. [Google Scholar] [CrossRef]
  28. Karali, H.; Inalhan, G.; Demirezen, M.U.; Yukselen, M.A. A new nonlinear lifting line method for aerodynamic analysis and deep learning modeling of small unmanned aerial vehicles. Int. J. Micro Air Veh. 2021, 13, 1–24. [Google Scholar] [CrossRef]
  29. Cai, S.; Wang, Z.; Fuest, F.; Jeon, Y.; Gray, C.; Karniadakis, G.E. Flow over an espresso cup: Inferring 3-d velocity and pressure fields from tomographic background oriented schlieren via physics-informed neural networks. J. Fluid Mech. 2021, 915, A102. [Google Scholar] [CrossRef]
  30. Yilmaz, E.; German, B. Conditional generative adversarial network framework for airfoil inverse design. In Proceedings of the AIAA AVIATION Forum, Online, 15–19 June 2020. [Google Scholar] [CrossRef]
  31. Achour, G.; Sung, W.J.; Pinon-Fischer, O.; Mavris, D. Development of a conditional generative adversarial network for airfoil shape optimization. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020. [Google Scholar] [CrossRef]
  32. Secco, N.; de Mattos, B. Artificial neural networks to predict aerodynamic coefficients of transport airplanes. Aerosp. Sci. Technol. 2017, 89, 211–230. [Google Scholar] [CrossRef]
  33. Lukaczyk, T.W.; Wendorff, A.D.; Colonno, M.; Economon, T.D.; Alonso, J.J.; Orra, T.H.; Ilario, C. SUAVE: An Open-Source Environment for Multi-Fidelity Conceptual Vehicle Design. In Proceedings of the 16th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Dallas, TX, USA, 22–26 June 2015. [Google Scholar] [CrossRef]
  34. Botero, E.M.; Wendorff, A.; MacDonald, T.; Variyar, A.; Vegh, J.M.; Lukaczyk, T.W.; Alonso, J.J.; Orra, T.H.; Ilario, C. SUAVE: An Open-Source Environment for Conceptual Vehicle Design and Optimization. In Proceedings of the 54th AIAA Aerospace Sciences Meeting, San Diego, CA, USA, 4–8 January 2016. [Google Scholar] [CrossRef]
  35. MacDonald, T.; Botero, E.; Vegh, J.M.; Variyar, A.; Alonso, J.J.; Orra, T.H.; Ilario, C. SUAVE: An Open-Source Environment Enabling Unconventional Vehicle Designs through Higher Fidelity. In Proceedings of the 55th AIAA Aerospace Sciences Meeting, Grapevine, TX, USA, 9–13 January 2017. [Google Scholar] [CrossRef]
  36. MacDonald, T.; Clarke, M.; Botero, E.M.; Vegh, J.M.; Alonso, J.J. SUAVE: An Open-Source Environment Enabling Multi-Fidelity Vehicle Optimization. In Proceedings of the 18th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Denver, CO, USA, 5–9 June 2017. [Google Scholar] [CrossRef]
  37. Tanio, T.; Takeda, K.; Yu, J.; Hashimoto, M. Training Data Reduction using Support Vectors for Neural Networks. In Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, China, 18–21 November 2019. [Google Scholar] [CrossRef]
  38. Lyu, Z.; Martins, J.R.R.A. Aerodynamic Design Optimization Studies of a Blended-Wing-Body Aircraft. J. Aircr. 2014, 51, 1604–1617. [Google Scholar] [CrossRef]
  39. Liebeck, R.H. Design of the Blended Wing Body Subsonic Transport. J. Aircr. 2004, 41, 10–25. [Google Scholar] [CrossRef]
  40. Brown, M.; Vos, R. Conceptual Design and Evaluation of Blended-Wing Body Aircraft. In Proceedings of the 2018 AIAA Aerospace Sciences Meeting, Kissimmee, FL, USA, 8–12 January 2018. [Google Scholar] [CrossRef]
  41. Nickol, C.; Haller, W. Assessment of the Performance Potential of Advanced Subsonic Transport Concepts for NASA’s Environmentally Responsible Aviation Project. In Proceedings of the 54th AIAA Aerospace Sciences Meeting, San Diego, CA, USA, 4–8 January 2016. [Google Scholar] [CrossRef]
  42. Hileman, J.; Spakovszky, Z.; Drela, M.; Sargeant, M.; Jones, A. Airframe Design for Silent Fuel-Efficient Aircraft. AIAA J. Aircr. 2010, 47, 956–969. [Google Scholar] [CrossRef]
  43. Bonet, J.; Schellenger, H.; Rawdon, B.; Elmer, K.; Wakayama, S.; Brown, D. Environmentally Responsible Aviation (ERA) Project—N + 2 Advanced Vehicle Concepts Study and Conceptual Design of Subscale Test Vehicle (STV) Final Report; Report No.: NASA/CR-2011-216519; NASA Dryden Flight Research Center: Edwards, CA, USA, 2011.
  44. Maier, R. ACFA 2020—An FP7 project on active control of flexible fuel efficient aircraft configurations. Prog. Flight Dyn. Guid. Navig. Control. Fault Detect. Avion. 2013, 6, 585–600. [Google Scholar] [CrossRef]
  45. Smith, H. College of Aeronautics Blended Wing Body Development Programme, ICAS-2000-1.1.4. In Proceedings of the 22nd International Congress of the Aeronautical Sciences, Harrogate, UK, 27 August–1 September 2000. [Google Scholar]
  46. Bolsunovsky, A.; Buzoverya, N.; Gurevich, B.; Denisov, V.; Dunaevsky, A.; Shkadov, L. Flying wing—Problems and Decisions. AIAA J. Aircr. Des. 2001, 4, 193–210. [Google Scholar] [CrossRef]
  47. Godard, J. Semi-Buried Engine Installation: The Nacre Project Experience, ICAS-2010-4.4.3. In Proceedings of the 27th International Congress of the Aeronautical Sciences, Nice, France, 19–24 September 2010. [Google Scholar]
  48. Frota, J.; Nicholls, K.; Whurr, J.; Müller, M.; Gall, P.E.; Loerke, J.; Macgregor, K.; Schmollgruber, P.; Russell, J.; Hepperle, M.; et al. Final Activity Report. New Aircraft Concept Research (NACRE); Technical Report; SIXTH FRAMEWORK PROGRAMME PRIORITY 4, Aeronautics and Space, FP6-2003-AERO-1; NACRE Consortium: Blagnac, France, 2010. [Google Scholar]
  49. Hepperle, M. The VELA Project. 2005. Available online: https://www.dlr.de/as/en/Portaldata/5/Resources/dokumente/projekte/vela/The_VELA_Project.pdf (accessed on 27 November 2023).
  50. Fusaro, R.; Viola, N. Influence of High Level Requirements in Aircraft Design: From scratch to sketch. In Proceedings of the 2018 Aviation Technology, Integration, and Operations Conference, Atlanta, GA, USA, 25–29 June 2018. [Google Scholar] [CrossRef]
  51. Chrisman, L. Latin Hypercube vs. Monte Carlo Sampling. March 2020. Available online: https://lumina.com/latin-hypercube-vs-monte-carlo-sampling/ (accessed on 4 January 2024).
  52. Aistleitner, C.; Hofer, M.; Tichy, R. A Central Limit Theorem for Latin Hypercube Sampling with Dependence and Application to Exotic Basket Option Pricing. Int. J. Theor. Appl. Financ. 2012, 15, 1250046. [Google Scholar] [CrossRef]
  53. Loh, W.L. On Latin Hypercube Sampling. Ann. Stat. 1996, 24, 2058–2080. [Google Scholar] [CrossRef]
  54. Raymer, D.P. RDSwin: Seamlessly-Integrated Aircraft Conceptual Design for Students & Professionals. In Proceedings of the 54th AIAA Aerospace Sciences Meeting, San Diego, CA, USA, 4–8 January 2016. [Google Scholar] [CrossRef]
  55. Beard, J.E.; Takahashi, T.T. Revisiting Takeoff Obstacle Clearance Procedures: An Argument for Extended Second Segment Climb. In Proceedings of the 17th AIAA Aviation Technology, Integration, and Operations Conference, Denver, CO, USA, 5–9 June 2017. [Google Scholar] [CrossRef]
  56. Stöttner, T. Why Data Should Be Normalized before Training a Neural Network. May 2019. Available online: https://towardsdatascience.com/why-data-should-be-normalized-before-training-a-neural-network-c626b7f66c7d (accessed on 20 January 2023).
  57. Nayak, S.; Misra, B.; Behera, H. Impact of Data Normalization on Stock Index Forecasting. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2014, 6, 357–369. [Google Scholar]
  58. Kim, D.E.; Gofman, M. Comparison of shallow and deep neural networks for network intrusion detection. In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 204–208. [Google Scholar] [CrossRef]
  59. Deng, L.; Li, X. Machine Learning Paradigms for Speech Recognition: An Overview. IEEE Trans. Audio Speech Lang. Process. 2013, 21, 1060–1089. [Google Scholar] [CrossRef]
  60. Lin, C.W.; Wang, J.S. A digital circuit design of hyperbolic tangent sigmoid function for neural networks. In Proceedings of the 2008 IEEE International Symposium on Circuits and Systems, Seattle, WA, USA, 18–21 May 2008; pp. 856–859. [Google Scholar] [CrossRef]
  61. Nicholson, C. A Beginner’s Guide to Neural Networks and Deep Learning. Available online: https://wiki.pathmind.com/neural-network (accessed on 23 November 2020).
  62. Gill, P.R.; Murray, W.; Wright, M.H. The Levenberg–Marquardt Method. In Practical Optimization; Emerald Group Publishing Limited: Bingley, UK, 1981; Volume 4, pp. 136–137. [Google Scholar]
  63. Aburaed, N.; Atalla, S.; Mukhtar, H.; Al-Saad, M.; Mansoor, W. Scaled Conjugate Gradient Neural Network for Optimizing Indoor Positioning System. In Proceedings of the 2019 International Symposium on Networks, Computers and Communications (ISNCC), Istanbul, Turkey, 18–20 June 2019; pp. 1–4. [Google Scholar] [CrossRef]
  64. Kumar, U.A. Comparison of neural networks and regression analysis: A new insight. Expert Syst. Appl. 2005, 29, 424–430. [Google Scholar] [CrossRef]
  65. Papila, N.; Shyy, W.; Fitz-Coy, N.; Haftka, R. Assessment of neural net and polynomial-based techniques for aerodynamic applications. In Proceedings of the AIAA 17th Applied Aerodynamics Conference, Norfolk, VA, USA, 28 June–1 July 1999. [Google Scholar] [CrossRef]
  66. Campagnini, S.; Liuzzi, P.; Galeri, S.; Montesano, A.; Diverio, M.; Cecchi, F.; Falsini, C.; Langone, E.; Mosca, R.; Germanotta, M.; et al. Cross-Validation of Machine Learning Models for the Functional Outcome Prediction after Post-Stroke Robot-Assisted Rehabilitation. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; pp. 4950–4953. [Google Scholar] [CrossRef]
  67. Powers, D.M.W.; Atyabi, A. The Problem of Cross-Validation: Averaging and Bias, Repetition and Significance. In Proceedings of the 2012 Spring Congress on Engineering and Technology, Xi’an, China, 27–30 May 2012; pp. 1–5. [Google Scholar] [CrossRef]
  68. Yadav, S.; Shukla, S. Analysis of k-Fold Cross-Validation over Hold-Out Validation on Colossal Datasets for Quality Classification. In Proceedings of the 2016 IEEE 6th International Conference on Advanced Computing (IACC), Bhimavaram, India, 27–28 February 2016; pp. 78–83. [Google Scholar] [CrossRef]
  69. Alippi, C.; Roveri, M. Virtual k-fold cross validation: An effective method for accuracy assessment. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–6. [Google Scholar] [CrossRef]
  70. More, A.S.; Rana, D.P. Review of random forest classification techniques to resolve data imbalance. In Proceedings of the 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM), Aurangabad, India, 5–6 October 2017; pp. 72–78. [Google Scholar] [CrossRef]
  71. Kam Ho, T. Random decision forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; Volume 1, pp. 278–282. [Google Scholar] [CrossRef]
  72. Yang, M.; Xie, B.; Dou, Y.; Xue, G. Cascade Forward Artificial Neural Network based Behavioral Predicting Approach for the Integrated Satellite-terrestrial Networks. Mob. Netw. Appl. 2022, 27, 1569–1577. [Google Scholar] [CrossRef]
  73. Zhang, H.; Zhang, L.; Jiang, Y. Overfitting and Underfitting Analysis for Deep Learning Based End-to-end Communication Systems. In Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi’an, China, 23–25 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
Figure 1. Computational approach overview.
Figure 2. Overview of extensibility analysis via a random forest approach.
Figure 3. BWB semi-span segments.
Figure 4. Illustration of BWB configurations in vehicle survey [41,42,43,44,45,46,47,48,49].
Figure 5. BWB parametric configuration creation model overview.
Figure 6. Visualization of 100 LHS-derived span and wing area combinations.
Figure 7. Visualization of a BWB generated using parametric configuration model.
Figure 8. SUAVE flight profile for BWB performance assessment.
Figure 9. L/D variation across two generated BWB configurations.
Figure 10. Feed-forward and cascade-forward neural network architectures.
Figure 11. Neural network training process overview.
Figure 12. Feed-forward neural network calculations schematic.
Figure 13. Ten-fold cross-validation training and testing matrix split by batches and fold numbers.
Figure 14. Feed-forward averaged RMSE for all 10 folds.
Figure 15. Cascade-forward averaged RMSE for all 10 folds.
Figure 16. Feed-forward and cascade-forward fold-averaged RMSE.
Figure 17. Feed-forward fold-averaged RMSE with a standard deviation overlay.
Figure 18. Cascade-forward fold-averaged RMSE with a standard deviation overlay.
Table 1. BWB vehicle survey and surrogate model data [41,42,43,44,45,46,47,48,49].

Vehicle | MTOW [lbs] | S_ref [ft²] | Span [ft] | Total Thrust [lbs]
N + 3 SUGAR-Ray | 181,500 | 4136 | 168.5 | 56,000
HWB216-GTF | 312,500 | 8221 | 220 | 92,000
SAX40 | 330,300 | 8998 | 221.6 | 92,500
ERA-0009A | 411,250 | 8048 | 229.3 | 95,000
HWB301-GTF | 533,000 | 10,169 | 250 | 134,500
HWB400-GTF | 701,000 | 11,471 | 260 | 168,500
ACFA-2020 | 884,000 | 14,291 | 261.9 | 239,000
BW-98 | 1,060,000 | 14,968 | 254.27 | 296,500
IWB-750 | 1,262,000 | 17,093 | 328.08 | 353,000
NACRE-750 | 1,390,000 | 21,453 | 328.08 | 468,500
VELA-3 | 1,542,000 | 22,088 | 326.76 | 432,500
Table 2. Extensibility performance with μ = 10 FF (feed-forward) and CF (cascade-forward) neural networks across BWB vehicle survey.

Design Variable | Actual Value | FF Predicted Value | FF % Error | CF Predicted Value | CF % Error

N + 3 SUGAR-Ray [41]
b, ft | 168.5 | 168.5 | 0.004 | 167.9 | 3.61
S, ft² | 4136 | 4139.5 | 0.084 | 3991.7 | 3.49
MTOW, lbs | 181,500 | 181,446 | 0.03 | 187,580 | 3.35
T_A, lbs | 56,000 | 55,980 | 0.03 | 57,719 | 3.07

HWB216-GTF [42]
b, ft | 220 | 220.01 | 0.007 | 218.7 | 0.61
S, ft² | 8221 | 8221.2 | 0.002 | 8193.9 | 0.33
MTOW, lbs | 312,500 | 314,094 | 0.51 | 320,000 | 2.40
T_A, lbs | 92,000 | 92,005 | 0.005 | 95,404 | 0.37

SAX40 [43]
b, ft | 221.6 | 221.6 | 0.004 | 220.4 | 0.54
S, ft² | 8997.6 | 8997.03 | 0.006 | 8987.7 | 0.11
MTOW, lbs | 330,300 | 330,283 | 0.005 | 338,788 | 2.57
T_A, lbs | 92,500 | 92,505 | 0.005 | 95,414 | 3.15

ERA-009A [44]
b, ft | 229.3 | 229.25 | 0.02 | 228.2 | 0.47
S, ft² | 8048.0 | 8047.9 | 2 × 10⁻⁴ | 8018.2 | 0.37
MTOW, lbs | 411,250 | 411,003 | 0.06 | 424,862 | 3.31
T_A, lbs | 95,000 | 94,905 | 0.10 | 100,624 | 5.92

HWB301-GTF [42]
b, ft | 250 | 250.0 | 0.01 | 248.5 | 0.59
S, ft² | 10,169 | 10,168.8 | 0.002 | 10,182.1 | 0.13
MTOW, lbs | 533,000 | 533,213 | 0.04 | 545,419 | 2.33
T_A, lbs | 134,500 | 134,729 | 0.17 | 137,432 | 2.18

HWB400-GTF [42]
b, ft | 260 | 260.0 | 0.007 | 258.3 | 0.66
S, ft² | 11,471 | 11,467.0 | 0.03 | 11,494.0 | 0.2
MTOW, lbs | 701,000 | 701,561 | 0.08 | 713,197 | 1.74
T_A, lbs | 168,500 | 168,534 | 0.02 | 171,247 | 1.63

ACFA-2020 [45]
b, ft | 261.9 | 261.9 | 1.6 × 10⁻⁵ | 260.7 | 0.45
S, ft² | 14,290.6 | 14,296.4 | 0.04 | 14,299.2 | 0.06
MTOW, lbs | 884,000 | 883,885 | 0.013 | 902,918 | 2.14
T_A, lbs | 239,000 | 238,785 | 0.09 | 248,775 | 4.09

BW-98 [46]
b, ft | 254.3 | 254.4 | 0.06 | 258.6 | 1.7
S, ft² | 14,968.2 | 14,991.7 | 0.16 | 15,200.8 | 1.55
MTOW, lbs | 1,060,000 | 1,064,770 | 0.45 | 1,088,832 | 2.72
T_A, lbs | 296,500 | 297,449 | 0.32 | 301,274 | 1.61

IWB-750 [47]
b, ft | 328.1 | 328.1 | 0.002 | 325.8 | 0.70
S, ft² | 17,093.1 | 17,089.2 | 0.023 | 17,063.4 | 0.17
MTOW, lbs | 1,262,000 | 1,264,145 | 0.17 | 1,262,038 | 0.003
T_A, lbs | 353,000 | 354,483 | 0.42 | 355,259 | 0.64

NACRE-750 [47]
b, ft | 328.1 | 328.1 | 0.005 | 325.7 | 0.71
S, ft² | 21,452.5 | 21,448.9 | 0.02 | 21,269.3 | 0.85
MTOW, lbs | 1,390,000 | 1,393,892 | 0.28 | 1,393,336 | 0.24
T_A, lbs | 468,500 | 468,641 | 0.03 | 472,014 | 0.75

VELA-3 [49]
b, ft | 326.8 | 326.8 | 4 × 10⁻⁴ | 325.0 | 0.55
S, ft² | 22,087.5 | 22,099.3 | 0.05 | 21,907.8 | 0.81
MTOW, lbs | 1,542,000 | 1,542,463 | 0.03 | 1,542,009 | 5.95 × 10⁻⁴
T_A, lbs | 432,500 | 432,760 | 0.06 | 432,556 | 0.013

Share and Cite

Sharma, R.S.; Hosder, S. Mission-Driven Inverse Design of Blended Wing Body Aircraft with Machine Learning. Aerospace 2024, 11, 137. https://doi.org/10.3390/aerospace11020137