1. Introduction
The deployment of modern wireless communication systems, based on spectrally efficient modulation schemes such as orthogonal frequency division multiplexing (OFDM), is perhaps the main driver behind the emergence of new ultra-linear and highly efficient power amplifiers (PAs) [1]. Advanced discrete-time techniques have been employed to mitigate the nonlinear impairments generated by the PA and the IQ modulator, mainly through digital predistorters (DPDs) for transmitter linearization [2,3,4,5], and also by applying post-compensation in the communications receiver [6,7].
The success of baseband signal processing techniques rests on two aspects. First, the implementation of improved black-box models that allow superior accuracy and a better representation of the transmitter and receiver nonlinearities. Both neural networks and Volterra nonlinear filters have been advanced for the associated signal processing [8,9,10], but perhaps the most widely used approach in PA linearization is the baseband Volterra model [11], whose discrete-time version, truncated in nonlinear order and memory length, is denoted as the full Volterra (FV) model. Regrettably, the size of its regressor set can be unsuitably large, and ad hoc models with a reduced set of regressors have been proposed. For example, the memory polynomial (MP) model and its modifications [12,13], together with the generalized memory polynomial (GMP) model [2], have demonstrated satisfactory performance in the design of a DPD. Another approach, based on information available at the circuit level, has also enabled the deduction of a reduced-order structure that contains the GMP lagging envelope terms with even-order envelope powers as a particular regressor type [14].
On the other hand, the adoption of compressed sensing (CS) techniques for the search of active regressors is a way to reduce the order of sparse systems [9,15,16,17,18], and it has been particularized to PA Volterra models [19]. Likewise, the complete structure of the popular GMP model can be sparse, so pruning methods have been applied to discard unimportant terms and reduce the regressor set [20,21,22]. The selection of the nonlinear orders and memory depths is performed by combining a greedy algorithm, the orthogonal matching pursuit (OMP) in [19] or the doubly orthogonal matching pursuit (DOMP) in [21], with a Bayesian information criterion (BIC) for determining the optimum number of coefficients. The proposal in [22] is based on a hill-climbing (HC) algorithm, which provides the best trade-off between modeling accuracy and model complexity by searching within the GMP structure.
Notwithstanding the notable performance of these approaches, there is a general interest in techniques to upgrade and optimize a given model. This interest has motivated recent publications comparing model pruning and model growing techniques [23,24]. In [24], the model growth takes into account only the initial set of GMP regressors, but it may be necessary to upgrade the model with regressors not included in the complete GMP structure. Unfortunately, no results have been published for an HC algorithm applied to a general FV model because its regressor set is unsuitably large. The massive number of regressors can also limit the use of matching pursuits [19,21], and the selection of a manageable model with reduced nonlinear order and memory depth may lead to a selected subset lacking significant terms.
The cited works evidence the need for a procedure to upgrade sub-optimal models in multiple scenarios. In the first one, which can be denoted as the intra-model scenario, the search for new active regressors is circumscribed to the same model, e.g., the GMP in [22,23,24]. In a second, inter-model scenario, the search is extended to a new model with a richer set of regressors, so that the optimal model benefits from a boost provided by the regressors of the second model. Several situations can be foreseen in this inter-model scenario: a memoryless (ML) model enhanced with memory regressors, a GMP model enhanced with FV regressors, or even an FV model enhanced with image signal regressors of the complex-valued Volterra series (CVS) model [25] when modulator impairments are significant, to mention only a few examples.
This communication proposes an algorithm to upgrade a sub-optimal PA baseband model following a regressor pursuit procedure based on compressed sensing and the BIC rule. The approach is valid in both scenarios and is first illustrated with a sub-optimal pruned model resulting from a standard search applied to an FV model with a manageable, though incomplete, raw set of regressors. The smaller size of this pruned subset allows its enrichment with regressors that extend the nonlinear order or the memory depth, and a second search is then performed. The result is a model with a small subset of significant regressors, including those with high nonlinear orders and large memory, that avoids computational constraints. In a second example, an ML sub-optimal model is improved with memory regressors. In Section 2, the perspective of enhancing a sub-optimal model is theoretically established, and modeling results for practical PAs are presented. Section 3 is devoted to the design of DPDs following the proposed upgrading procedure, and its performance improvement is demonstrated. Finally, some concluding remarks are presented in Section 4.
2. A Strategy to Upgrade PA Models
The complex envelope at the output of a PA can be described with a discrete-time version of the baseband Volterra model advanced in [11]. In that case, the amplifier output is described by a linear combination of basis functions given by monomials resulting from the multiplication of delayed samples of the input complex envelope and its conjugate. If the series of these Volterra regressors is truncated to a maximum nonlinear order, the resulting structure is denoted as the FV model.
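To make the construction of the FV regressor stock concrete, the following sketch enumerates the odd-order baseband monomials for a given nonlinear order and memory depth. It is a minimal illustration, not the paper's code: the function and variable names are our own, circular delays are used for simplicity, and the exact counting conventions (and hence regressor counts) may differ from those of the paper.

```python
import itertools
import numpy as np

def fv_regressors(x, order, memory):
    """Enumerate odd-order baseband full-Volterra (FV) regressors.

    A regressor of nonlinear order 2p - 1 is the product of p delayed
    copies of x and p - 1 delayed conjugated copies, with delays in
    0..memory-1.  Delays are taken as multisets because the factors of
    each product commute.  np.roll gives a circular delay, which is
    enough for a sketch.  Returns one column per regressor plus labels.
    """
    N = len(x)
    cols, labels = [], []
    for p in range(1, (order + 1) // 2 + 1):
        for d_lin in itertools.combinations_with_replacement(range(memory), p):
            for d_conj in itertools.combinations_with_replacement(range(memory), p - 1):
                col = np.ones(N, dtype=complex)
                for d in d_lin:
                    col *= np.roll(x, d)
                for d in d_conj:
                    col *= np.conj(np.roll(x, d))
                cols.append(col)
                labels.append((d_lin, d_conj))
    return np.column_stack(cols), labels

# toy stock: 3rd order, 2 taps -> 8 regressors; the count explodes as the
# order and memory grow, which is what makes the full FV set unmanageable
rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
X, labels = fv_regressors(x, order=3, memory=2)
```

The combinatorial growth of the nested loops is precisely the drawback discussed next.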
A severe drawback of the FV model is that its stock of regressors is too large, involving a high computational cost in its regression and output signal generation. Thus, the identification of a pruned set of regressors is the main objective of model-order reduction techniques. The structures in [2,14] are examples of pruned models with reduced sets of regressors. Alternatively, it is possible to search within the whole FV regressor set for those active regressors that guarantee the FV performance.
Gathering $N$ samples of the PA input signal to arrange the column vector $\mathbf{x}$, and defining similarly the Volterra regressor vectors $\mathbf{x}_i$ arranged in an ordered fashion, the FV model can be expressed in matrix form as
$$\mathbf{y} = \mathbf{X}\,\mathbf{h} = \sum_{i=1}^{R} h_i\,\mathbf{x}_i, \qquad (1)$$
where $\mathbf{y}$ is the column vector containing the $N$ PA output samples, $R$ is the number of regressors, $h_i$ are the regression coefficients, $\mathbf{X} = [\mathbf{x}_1 \cdots \mathbf{x}_R]$ is the regressors matrix, and $\mathbf{h}$ is a column vector with the coefficients $h_i$. It is convenient to normalize the regressors in power, so the columns of $\mathbf{X}$ are taken to be unit-norm. As, for a given nonlinear order and memory length, $\mathbf{X}$ contains the complete set of regressors of the FV model (1), here and below we refer to $\mathbf{X}$ as the whole FV-regressors matrix. The coefficients vector of (1) can be estimated with the standard least-squares (LS) algorithm
$$\hat{\mathbf{h}} = \left(\mathbf{X}^H \mathbf{X}\right)^{-1} \mathbf{X}^H \mathbf{y}. \qquad (2)$$
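The LS estimation step above can be sketched as follows, including the unit-norm column normalization described in the text. The helper name is illustrative; `numpy.linalg.lstsq` is used instead of the explicit normal-equations inverse for numerical robustness.

```python
import numpy as np

def ls_fit(X, y):
    """LS estimate of h in y ~= X h.

    Columns of X are normalized to unit norm before the fit, as the text
    suggests; the scaling is folded back into the returned coefficients.
    """
    norms = np.linalg.norm(X, axis=0)
    h_norm, *_ = np.linalg.lstsq(X / norms, y, rcond=None)
    return h_norm / norms  # coefficients of the unnormalized columns

# sanity check on synthetic data: the fit recovers the true coefficients
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5)) + 1j * rng.standard_normal((200, 5))
h_true = np.array([1.0, -0.5, 0.2j, 0.05, 0.0])
y = X @ h_true
h_hat = ls_fit(X, y)
```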
In a realistic scenario, it is possible to exploit the sparsity of the system to reduce the whole FV regressor set by identifying only a small portion of active regressors. A suitable pruning procedure combines [19,21]: (i) the application of a greedy pursuit for the search of the most significant regressors among the whole set of the FV model, and (ii) a criterion, such as the BIC, to stop the algorithm execution and avoid overfitting.
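The two-step pruning procedure can be sketched as a plain OMP with a BIC-style stopping rule. This is an illustrative sketch, not the paper's implementation: the BIC used here is the generic Gaussian form, N·log(MSE) + penalty·k·log(N), with an adjustable penalty weight, which may differ from the exact criterion of the paper; all names are our own.

```python
import numpy as np

def omp_bic(X, y, penalty=4.0, max_terms=None):
    """Greedy pursuit (OMP) stopped at a BIC-style minimum.

    Each step adds the regressor best correlated with the residual and
    re-estimates the coefficients by LS over the support.  The stopping
    rule is a generic Gaussian BIC with a tunable penalty weight; the
    paper's criterion may weight the penalty term differently.
    """
    N, R = X.shape
    max_terms = max_terms or R
    support, residual = [], y.copy()
    best = (np.inf, [], None)
    for k in range(1, max_terms + 1):
        corr = np.abs(X.conj().T @ residual)
        corr[support] = 0.0                      # never reselect a regressor
        support.append(int(np.argmax(corr)))
        h, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ h
        mse = np.mean(np.abs(residual) ** 2)
        bic = N * np.log(mse + 1e-30) + penalty * k * np.log(N)
        if bic < best[0]:
            best = (bic, support.copy(), h)
        else:
            break                                # past the BIC minimum
    return best[1], best[2]

# 3-sparse synthetic system: the pursuit recovers exactly the active set
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 40))
h_true = np.zeros(40)
h_true[[3, 17, 25]] = [2.0, -1.5, 1.0]
y = X @ h_true + 1e-3 * rng.standard_normal(300)
active, coef = omp_bic(X, y)
```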
However, if the number of regressors is so large that the search step exceeds the available computational resources, it is necessary to select a lower nonlinear order or memory to reduce the number of regressors in (1). After determining the best reduced model, the regressors matrix of the sparse model is assembled from the matrix of an FV model with a shortfall in the regressors stock, with the coefficients of all but the S selected regressors set to zero. On the other hand, if the procedure is applied to the GMP, or any other a priori pruned model, we cannot affirm that the identified set is complete: the richness of the initial set of regressors may be insufficient, and there is no guarantee that the selected set of active regressors achieves the best performance. Therefore, a method to complete the model structure and improve its performance would also be beneficial.
The present upgrading procedure is applied to a model with a deficit of active regressors. Assuming that the matrix $\mathbf{X}_{\mathrm{inc}}$ of this incomplete model is known, the output signal predicted with the estimated parameters is
$$\hat{\mathbf{y}} = \mathbf{X}_{\mathrm{inc}}\,\hat{\mathbf{h}}_{\mathrm{inc}}, \qquad (3)$$
and the residual vector is
$$\mathbf{e} = \mathbf{y} - \hat{\mathbf{y}}. \qquad (4)$$
The procedure proposed in this paper upgrades the incomplete model starting with the attachment of a new stock of FV normalized regressors with higher nonlinear order and/or memory depth. The matrix of the additional model is constructed with the new regressors and is attached to the matrix of the incomplete model, thus forming the extended matrix. By way of illustration, if the pruned matrix was determined from a low-order FV model, a possible extension is constituted only by higher-order regressors.
The search for the supplementary active regressors starts with the definition of the first auxiliary basis function. Following a procedure based on the Gram-Schmidt algorithm [20,21], the other Volterra regressors are orthogonally projected onto the line it spans, and the projections are subtracted from the original basis, yielding basis functions orthogonal to it. Repeating the procedure for the remaining regressors of the incomplete model, the FV regressors of the additional model are transformed into a new set of basis functions orthogonal to the FV regressors of the incomplete model. The extended auxiliary matrix gathers the regressors of the incomplete model together with these orthogonalized basis functions.
This extended matrix can be considered the result of the search for the first S active regressors at iteration $t = S$, and the residual vector can be calculated with (4). Continuing the search, at iteration $t$, the residual is expressed as a linear combination of the new set of basis functions. The algorithm then chooses the regressor of the additional model that best predicts the residual at iteration $t$, obtained by maximizing the absolute value of the projection of each regressor onto the residual of the previous iteration. The chosen index is incorporated into the support set of the active coefficients, and a new estimation of the coefficients vector is used to update the signal estimation and the residual. The iteration also computes the projections of the remaining regressors onto the selected one and updates the matrix of orthogonal basis functions. The procedure is summarized in Algorithm 1. Regressors are incorporated until the minimum of the BIC criterion is reached. The stopping indicator was defined as the minimum of the BIC (8), written in terms of the normalized mean square error (NMSE) achieved with a given number of regressors and a penalty term that grows with that number, as in [26].
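For reference, the NMSE entering the stopping rule can be computed with a small helper like the one below (an illustrative sketch; the exact BIC weighting follows [26]).

```python
import numpy as np

def nmse_db(y, y_hat):
    """Normalized mean square error, in dB, of a model prediction y_hat
    with respect to the measured output y."""
    err = np.sum(np.abs(y - y_hat) ** 2)
    ref = np.sum(np.abs(y) ** 2)
    return 10 * np.log10(err / ref)

# a prediction off by a 0.9 scale factor leaves 1% relative power error
y = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 100))
print(round(nmse_db(y, 0.9 * y), 1))  # prints -20.0
```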
Three case studies are analyzed to motivate the upgrading procedure.
2.1. Case 1: A Weakly Nonlinear PA
The first case study is a class AB PA based on Cree's evaluation board for the GaN HEMT CGH40010 power transistor, operated at a carrier frequency of 3.6 GHz. Using experimental data acquired for this PA, the procedure of the previous section is directly applied. The test bench, described in detail in Section 2.3, is here composed only of the signal generator and the vector signal analyzer. The probing signal was designed with an OFDM format and a 15 MHz bandwidth, according to the Long-Term Evolution (LTE) downlink standard. The average power of the input signal was 6 dBm, for which the PA delivers an output average level of 19 dBm. This case, with a moderate output level, is presented only to illustrate the proposed upgrading procedure in a first approximation. Even so, the 11 dB peak-to-average power ratio (PAPR) produces a peak PA output power of about 30 dBm.
Algorithm 1: Upgrading an incomplete model.
Require: the output vector, the S regressors of the incomplete model, and the additional stock of FV regressors.
1: Initialization: support set ← the S incomplete-model regressors; auxiliary basis ← the additional regressors.
2: for t = 1 to S do
3: Project the auxiliary basis functions onto the t-th incomplete-model regressor.
4: Subtract the projections from the auxiliary basis functions.
5: Normalize the resulting basis functions.
6: end for
7: Estimate the coefficients of the incomplete model and compute the predicted output.
8: Compute the residual with (4).
9: for t = S + 1, S + 2, … until the stopping criterion is met do
10: Select the basis function that maximizes the absolute value of its projection onto the residual.
11: Incorporate the chosen index into the support set.
12: Re-estimate the coefficients over the support set with the LS algorithm.
13: Update the predicted output signal.
14: Update the residual.
15: Project the remaining basis functions onto the selected one.
16: Subtract the projections to keep the basis orthogonal.
17: Evaluate the BIC stopping indicator (8).
18: end for
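A compact sketch of the upgrading procedure follows. It is illustrative, not the paper's implementation: a QR factorization replaces the explicit Gram-Schmidt loop of steps 2-6, a fixed budget of new regressors replaces the BIC stopping rule, and the per-iteration re-orthogonalization of steps 15-16 is omitted, so the selection step is OMP-like rather than DOMP-like. All names are our own.

```python
import numpy as np

def upgrade_model(X_inc, X_add, y, max_new):
    """Enrich an incomplete model with regressors from an additional stock.

    The additional regressors are first orthogonalized against the
    incomplete-model columns; the greedy loop then adds, at each
    iteration, the basis function best aligned with the residual and
    re-estimates all coefficients by LS over the support.
    """
    S = X_inc.shape[1]
    Q, _ = np.linalg.qr(X_inc)                 # orthonormal span of X_inc
    B = X_add - Q @ (Q.conj().T @ X_add)       # components orthogonal to it
    B = B / np.linalg.norm(B, axis=0)          # assumes no column lies in span(X_inc)

    X_full = np.hstack([X_inc, X_add])
    support = list(range(S))                   # the incomplete model is kept
    h, *_ = np.linalg.lstsq(X_inc, y, rcond=None)
    residual = y - X_inc @ h
    for _ in range(max_new):
        corr = np.abs(B.conj().T @ residual)
        j = int(np.argmax(corr))               # best-aligned new basis function
        support.append(S + j)
        B[:, j] = 0.0                          # never reselect
        h, *_ = np.linalg.lstsq(X_full[:, support], y, rcond=None)
        residual = y - X_full[:, support] @ h
    return support, h

# the upgrade recovers the one active regressor missing from X_inc
rng = np.random.default_rng(3)
X_inc = rng.standard_normal((200, 3))
X_add = rng.standard_normal((200, 5))
y = X_inc @ np.array([1.0, 2.0, 3.0]) + 4.0 * X_add[:, 2]
support, h = upgrade_model(X_inc, X_add, y, max_new=1)
```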
For practical reasons, memory is considered only for FV regressors with nonlinear order below 7. Even so, to predict the PA output with an FV model of 13th nonlinear order and a memory of 10 samples, it would be necessary to handle 19,617 regressors, a quantity that exceeds average computing capabilities. Therefore, we initially considered a model with only three samples of memory length and, therefore, with a shortfall in the regressors stock. The FV(13,3) model contains a raw stock of 248 regressors. The evolution of the NMSE and the BIC as the greedy search includes more active regressors (coefficients) is shown in Figure 1, indicating a model reduced to eight active regressors, denoted as s-FV(13,3), with an NMSE of dB. The normalized magnitudes of the corresponding estimated coefficients, labeled with the associated regressors, are displayed in the upper plot of Figure 2. Observe that the resulting sparse model is not optimal because of the limited richness of the initial stock of regressors.
As the incomplete set of the s-FV(13,3) model does not embrace regressors with memory larger than three samples, it was upgraded by incorporating the 737 new regressors of the third nonlinear order FV model with a memory of 10 samples, FV(3,10), and the aforementioned upgrading procedure was applied. The resulting model, denoted as upgraded FV, is also displayed in Figure 1, demonstrating an improved NMSE of dB with 14 active regressors. The lower plot of Figure 2 shows the normalized magnitudes of the new estimated coefficients, and Figure 3 compares the error spectra of the s-FV(13,3) model and the upgraded FV model (blue traces). The spectrum of the error between the measured input and output is also plotted as a reference for the distortion generated by the PA. Once the normalized parameters were computed at an input level of 6 dBm, they were straightforwardly scaled to adapt the coefficients to other power levels, and the corresponding NMSE values were evaluated [26]. In the case of the reduced regressor set derived from the FV(13,3) set (a model with a shortfall in the regressors stock), there are eight active regressors. This pruned model delivers NMSE values below dB in a dynamic range of 16 dB, as shown in Figure 4. The pruned model after upgrading contains 14 regressors, and the NMSE improves to values of about dB in the complete range. The stable NMSE over 16 dB of input dynamic range is an indication of the reliability of the procedure in selecting the active regressors successfully.
2.2. Case 2: A PA Near Saturation
Nonlinear distortion and memory effects are significantly noticeable in PAs with output levels near saturation, where the efficiency is markedly high. The polynomial behavior of the truncated Volterra series makes the solution diverge near saturation. Hence, an upgrading-model approach to overcome this drawback is proposed here.
Taking into account that the Volterra series is linear with respect to the kernels, the $n$th-order Volterra operator $H_n\{\cdot\}$ can be split into two components, $H_n\{\cdot\} = H_n^{(1)}\{\cdot\} + H_n^{(2)}\{\cdot\}$, and the PA output can be expressed as the sum of two Volterra series
$$y = \sum_{n}{}' H_n^{(1)}\{x\} + \sum_{n}{}' H_n^{(2)}\{x\}, \qquad (11)$$
where the prime indicates that only odd-order terms are included in the sum. As the only assumption is the linearity of the Volterra operators, this result allows the adoption of additional criteria to select each one of the two Volterra series.
Based on the fact that the nonlinear order has not been truncated yet, and that the Volterra series can be seen as a generalization of the Taylor series, in this subsection the first part of (11) is chosen as a memoryless function expanded with the Taylor series
$$f(x) = \sum_{n}{}' a_n\,|x|^{\,n-1}\,x, \qquad (12)$$
where the prime again restricts the sum to odd orders. Observe that the option of a truncated Taylor series would introduce convergence issues. The replacement of the infinite Taylor series by the memoryless function overcomes this computational instability. Substituting into (11) and truncating the second Volterra series, an expression similar to (1) can be obtained. To make the model (11) unique, it is necessary to adopt a criterion to select the memoryless function. Here, a function that maximizes its correlation with the measured output is selected. After collecting the samples of (12) to form a normalized column vector, the output can be written again as a linear regression in which this vector is a new basis added to the conventional set of Volterra series regressors. Let us remark that this new basis does not have a definite nonlinear order and may not be denoted as a conventional Volterra series regressor, but it has been derived following a Volterra series approach. As this new memoryless basis is derived from the infinite Taylor series (12), it overcomes the inherent instability of a truncated polynomial in the compression region near saturation.
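The construction of this memoryless basis vector can be sketched as follows, representing the static function as an odd-order polynomial in the input envelope fitted to the measured output. This is only an illustrative choice of static function: the text requires maximal correlation with the output but leaves the functional form open, and all names below are our own.

```python
import numpy as np

def memoryless_basis(x, y, degree=4):
    """Build a unit-norm memoryless regressor from input x and output y.

    The static function is represented as g(|x|) * x with even envelope
    powers |x|^k, k = 0, 2, ..., degree (odd overall nonlinear order),
    fitted to y by least squares and then normalized.
    """
    A = np.column_stack([np.abs(x) ** k * x for k in range(0, degree + 1, 2)])
    a, *_ = np.linalg.lstsq(A, y, rcond=None)  # best memoryless fit to y
    g = A @ a
    return g / np.linalg.norm(g)               # unit-norm basis vector

# soft-compression toy PA: the basis captures the memoryless response
rng = np.random.default_rng(5)
x = 0.5 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
y = (1.0 - 0.1 * np.abs(x) ** 2) * x           # memoryless AM/AM model
basis = memoryless_basis(x, y)
```

For this purely memoryless toy PA the fitted basis is maximally correlated with the output; for a real PA the residual left after projecting onto this basis is what the memory regressors of the upgrading step must capture.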
The first step of the proposed procedure is the search for the function presenting the best correlation with the acquired output signal. Once this regressor is given, it can be considered as the incomplete set of a model lacking active regressors with memory. Then, the supplementary matrix is constructed from the stock of regressors of a new FV model, the extended matrix is updated accordingly, and the model can be upgraded following the proposed procedure. The analysis is completed experimentally in Section 3 by applying this technique to linearize a PA near saturation.
2.3. Case 3: A Generic PA
The model performance was also tested with the class AB PA based on the evaluation board of Cree's GaN CGH40010, now operated over a range of output average levels with a maximum of dBm and a gain compression of 6 dB. The experimental acquisition was carried out on the test bench shown in Figure 5. It comprised an SMU200A vector signal generator (VSG) from Rohde & Schwarz (Munich, Germany), followed by two cascaded Mini-Circuits TVA-4W-422A+ preamplifiers. The output signals were acquired using a PXA N9030A vector signal analyzer (VSA) from Keysight Technologies (Santa Rosa, CA, USA).
The probing signal followed a fifth-generation New Radio (5G-NR) format characterized by a total bandwidth of 30 MHz with 30 kHz subcarrier spacing, 16-QAM symbols over all the subcarriers, a PAPR of dB, and a total length of 368,640 samples. A custom Matlab script controlled the settings to modulate the carrier with the 5G-NR waveform in the VSG and to acquire samples of the complex envelope of the output signal in the VSA with an oversampling factor of 6, i.e., a sampling frequency of 92.16 MSa/s. Both the DAC in the VSG and the ADC in the VSA have a resolution of 16 bits. In the VSA, the dynamic range of the measurement was optimized through the equipment settings and by averaging 300 acquisitions of the output signal, thus significantly reducing the noise floor.
The operating point used to obtain the model structure was set to generate an output power level of dBm ( dBm at the input of the two cascaded preamplifiers). To predict the PA output at this level, the model demonstrated in [14], with a ninth nonlinear order and 10 samples of memory length, was selected, giving a raw stock of 365 regressors. After a search with the algorithm in [21], the stopping indicator was the minimum of the BIC (8). The identification procedure, performed over 1% of the measured samples, produced an optimum subset composed of only the active regressors listed in Table 1. The normalized coefficients, estimated with the LS algorithm, are shown in Figure 6, labeled with the corresponding regressors. When validated with the complete length of the signal, this pruned model provides a satisfactory NMSE of dB. The two regressors marked with a dagger (†) do not belong to the GMP set of regressors.
To detect whether any active regressor was missed, a second search with an FV model was desirable. Given that the search is impractical if the pursuit starts directly with the 19,615 regressors of a ninth nonlinear order FV model with a memory of 10 samples, the upgrading procedure was adopted. First, the search proceeded with the 737 regressors of an insufficient model limited to a nonlinear order of 3 and a memory of 10 samples, FV(3,10), yielding nine identified active regressors. Next, this incomplete model was enhanced with the 246 regressors of a second, ninth-order extended model, FV(9,3), and the search procedure was repeated, resulting in a pruned model with a total of 14 regressors. Remarkably, 12 of the most significant regressors are the same for both pruned models. The identified structure of 14 active regressors was re-utilized to estimate the model coefficients for output average powers ranging from 21 to 34 dBm. The computed NMSE of the upgraded FV(9,10) model is shown in Figure 7 for the whole dynamic range. The proposed technique for model upgrading was also implemented to obtain the results published in [27].
4. Conclusions
In this work, a method to upgrade PA models has been proposed. The set of active regressors identified for some conventional models (MP, GMP, etc.) can be insufficient to produce an optimal sparse model; on the other hand, it is sometimes almost impossible to cope with the massive set of FV regressors. An approach that overcomes this difficulty by applying an upgrading procedure to deal with the unmanageable set of FV regressors has been demonstrated. Focusing on a ninth nonlinear order FV model with a memory of 10 samples, the 19,615 regressors were reduced to 14 active regressors for a generic class AB PA under test after the application of compressed sensing techniques, exhibiting an NMSE of approximately dB. Once the active set of regressors was identified, the corresponding coefficients were estimated over a range of output power levels from 21 to 34 dBm, and the model satisfactorily predicts the PA output within a dynamic range of 14 dB. The method was applied to the linearization of the PA by first identifying the DPD structure, i.e., the active DPD regressors, and then estimating the coefficients for the different power levels. The results indicate a satisfactory NMSE of dB and an ACPR below dB in both adjacent channels at all levels of the dynamic range.
It has been demonstrated that a memoryless static function can be used as a legitimate regressor in a non-truncated Volterra model, and this regressor can also be upgraded following the exposed procedure. The approach was applied to the linearization of a class J PA operating near saturation. The proposed method has been experimentally validated with the amplifier driven by a 20-MHz 5G-NR signal. After linearization with the proposed DPD, the results show more than 21 dB of ACPR and NMSE improvement, together with a clear reduction of the EVM.