Article

DecoStrat: Leveraging the Capabilities of Language Models in D2T Generation via Decoding Framework

1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 School of Electrical Engineering and Computing, Adama Science and Technology University, Adama 1888, Ethiopia
3 Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3596; https://doi.org/10.3390/math12223596
Submission received: 18 October 2024 / Revised: 14 November 2024 / Accepted: 15 November 2024 / Published: 17 November 2024

Abstract

Current language models have achieved remarkable success in NLP tasks. Nonetheless, individual decoding methods face difficulties in realizing the immense potential of these models. The challenge is primarily due to the lack of a decoding framework that can integrate language models and decoding methods. We introduce DecoStrat, which bridges the gap between language modeling and the decoding process in D2T generation. By leveraging language models, DecoStrat facilitates the exploration of alternative decoding methods tailored to specific tasks. We fine-tuned the model on the MultiWOZ dataset to meet task-specific requirements and employed it to generate output through multiple interactive modules of the framework. The Director module orchestrates the decoding processes, engaging the Generator to produce output text based on the selected decoding method and input data. The Manager module enforces a selection strategy, integrating the Ranker and Selector to identify the optimal result. Evaluations on this dataset show that DecoStrat effectively produces diverse and accurate output, with MBR variants consistently outperforming other methods. DecoStrat with the T5-small model surpasses some baseline frameworks. Generally, the findings highlight DecoStrat's potential for optimizing decoding methods in diverse real-world applications.

1. Introduction

The task of accurately generating diverse and contextually relevant text from structured data is crucial but challenging, with numerous applications such as dialogue systems [1], healthcare reporting [2,3], weather forecasting [4], and sports summarization [5]. For instance, leveraging D2T generation can empower chatbots to provide personalized responses to customer inquiries. However, this task requires language models to grasp complex relationships and contextual dependencies within the structured input data. Moreover, the input data can be in various formats, such as meaning representations [6], graphs [7], and tables [8], making the process difficult. Despite the importance of D2T generation, leveraging the immense capabilities of transformer models, such as T5 [9] and BART [10], remains a significant challenge. While these models have shown remarkable success in natural language processing (NLP) [11], generating contextually relevant text from structured data is a complex task that requires more than just powerful models [12]. Recent studies have highlighted the limitations of current approaches, including the need for the more effective handling of contextual dependencies in more realistic settings [13,14,15,16].
One significant hurdle in harnessing the potential of language models for D2T generation lies in the lack of a unified and modular framework that can effortlessly integrate multiple decoding methods. The absence of this framework hampers the progress and seamless development, integration, and selection of optimal decoding strategies for specific D2T generation tasks, making it challenging to pinpoint the most effective decoding method for a particular task. Commonly used decoding methods in D2T generation, such as beam search and its variants [17,18,19,20,21], tend to favor high-probability words, resulting in limited diversity in the generated text [13,22]. While stochastic approaches, such as top-k [23] and top-p [13], have shown promise in addressing these limitations in open-ended text generation NLP tasks, their effectiveness in D2T generation remains unexplored. Furthermore, the application of Minimum Bayes Risk (MBR) decoding, which has demonstrated better performance in machine translation [22,24,25], a task similar to the mathematical framework of D2T generation, has not been explored in D2T generation. Consequently, identifying the optimal decoding method remains a long-standing open research problem with significant implications for real-world applications where diverse and contextually relevant text is essential [13,20].
This study presents DecoStrat, a unified and modular framework that provides alternative decoding strategies to bridge the gap between language modeling and decoding processes. By integrating multiple decoding methods, DecoStrat enables the seamless exploration of optimal decoding strategies for specific tasks, as illustrated in Figure 1. We demonstrate its effectiveness on the MultiWOZ dataset, showcasing the effective utilization of language models in D2T generation. DecoStrat’s modular architecture enables the flexible development, integration, and optimization of various decoding methods, leading to advancements in D2T generation. The framework’s main contributions include the following:
  • Modular Decoding Architecture: DecoStrat features a modular design that integrates various decoding strategies through its interactive modules. This architecture allows for the flexible development and optimization of decoding methods tailored to specific tasks.
  • MBR Decoding: The framework includes essential modules—Manager, Ranker, and Selector—specifically for MBR decoding. These components work together to determine candidate selection strategies and optimize the output sequence, enhancing the effectiveness of the decoding process.
  • Evaluation: DecoStrat is designed to be domain-independent, making it applicable to a wide range of language generation tasks. Its effectiveness is showcased through evaluations on the MultiWOZ dataset, demonstrating its capability to produce contextually relevant text across various domains.
The remainder of this study is structured as follows. Section 2 provides a concise overview of recent related works. Section 3 illustrates and describes the research methodology. Section 4 presents the experimental setup, and Section 5 presents the results and discussion. Finally, Section 6 summarizes the main findings and concludes the study.

2. Related Work

This section briefly presents the current state of neural language modeling and decoding methods in D2T generation, focusing on recent advancements in transformer models such as T5 and BART. Moreover, we examine deterministic, stochastic, and MBR decoding methods. Finally, we identify the gaps in existing approaches and show how our DecoStrat framework addresses these limitations, offering a unified and modular solution for D2T generation.

2.1. Neural Language Modeling

Neural language models have revolutionized NLP by learning the complex patterns in language and estimating the probability of the next word in a sequence [11]. By leveraging this approach, researchers have developed powerful language models and applied them to NLP tasks, including D2T generation. These models acquire knowledge through pre-training on large datasets, contributing to their powerful performance on various tasks.
The emergence of the transformer model [26] marked a significant milestone in neural language modeling, leading to notable improvements in the field. Variants of the transformer model, such as T5 [9] and BART [10], have achieved state-of-the-art results in various NLP tasks. However, despite these advancements, generating text from structured data remains challenging. One crucial aspect that has not kept pace with language modeling advancements is the decoding process, which plays a vital role in determining the output of language models [15,27,28]. This disparity highlights the need for further investigation into decoding methods and unlocking the immense potential of neural language models in D2T generation.

2.2. Deterministic Decoding Methods

In D2T generation, researchers often employ greedy search, beam search, and its variants. Greedy search constructs the output sequence by selecting the highest-probability word at each position. Notable works that utilized greedy search include [29], which investigated the effectiveness of the pre-train-then-fine-tune strategy, and [30], which applied it as an alternative decoding method.
Beam search selects multiple tokens for a position based on conditional probability, considering a range of alternatives determined by the beam width. By considering the context of preceding words and their probabilities, beam search makes more informed decisions about the upcoming word in the sequence [31]. Several works have employed beam search, including [17], which introduced the DataTuner D2T system for diverse domains, ref. [18], who developed an end-to-end neural architecture that incorporates content selection and planning, and [19], who proposed an entity-centric neural architecture. Additionally, ref. [30] applied beam search and introduced beam search variants that leverage cross-attention to extract meaningful information and identify attributes mentioned in the generated text. Another variant of beam search is guided beam search, proposed by [21], which aimed to reduce omissions and hallucinations during the decoding process.
These works have contributed to the advancements of D2T generation. However, despite their contributions, deterministic methods can still result in suboptimal output, characterized by unnatural language and repetitive patterns, a problem considerably discussed in previous research [13,27,32].

2.3. Stochastic Decoding Methods

When generating sequences, the standard approach involves using the distributions predicted by a model without modifications. However, this can lead to issues with coherence and diversity in the generated text as low-probability tokens can dominate and decrease the quality of the text [13,28]. To address this, researchers have explored alternative approaches that can modify the generation behavior of the model. One such approach is to use controlled sampling methods, which can help to improve the coherence and diversity of the generated text. The two popular methods that can achieve this are top-k and top-p (Nucleus) sampling methods.
Top-k sampling is a decoding method that selects the top k most likely tokens from a predicted distribution and is suggested to be effective in generating coherent and diverse text. Ref. [23] introduced top-k sampling in their work on hierarchical story generation, where a premise is first generated and then developed into a coherent text passage using model fusion and self-attention mechanisms. This approach maintains balanced coherence and diversity by choosing k tokens from the model's probability distribution while excluding the least likely ones. For example, consider a probability distribution over 10 tokens with k = 5. In this case, the five most probable tokens would be selected: A (0.25), B (0.20), C (0.15), D (0.10), and E (0.08), while the remaining tokens would be discarded. However, using a fixed value for k can be limiting, as it may only cover a small portion of the total probability in uniform distributions or include highly unlikely tokens in distributions with extreme peaks.
Another approach is to use top-p sampling, which involves selecting tokens from the dynamic nucleus of the probability distribution [13]. This approach demonstrated its effectiveness in improving the quality of machine-generated text by reducing the likelihood of bland and repetitive text. Specifically, ref. [13] found that neural language models trained on likelihood objectives tend to generate bland and repetitive text due to distributional differences between human and machine text, and proposed the top-p method as a solution to this problem. The top-p sampling considers the top p% of the probability mass, providing a better balance between coherence and diversity. The model can generate more diverse and coherent text by selecting the top p% of the probability mass. For example, with the same token probability distribution and p = 0.7, we select tokens until the cumulative probability reaches 0.7, which includes A (0.25), B (0.20), C (0.15), and D (0.10). This approach can be more effective than top-k sampling as it dynamically adapts to the shape of the probability distribution.
Combining these techniques can improve the coherence and diversity of generated text. Stochastic methods, which have proven effective in open-ended text generation, overcome the degeneration inherent in deterministic methods [20]. The DecoStrat framework leverages the combined approach to enhance diversity in the D2T domain, yielding promising results.
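To make the combined filtering concrete, the sketch below shows one way top-k followed by top-p (nucleus) truncation of a next-token distribution could be implemented in PyTorch. This is a minimal illustration: the function name and the example logits are hypothetical and not part of DecoStrat's implementation.

import torch

def filter_top_k_top_p(logits: torch.Tensor, k: int = 30, p: float = 0.7) -> torch.Tensor:
    """Truncate next-token logits with top-k, then top-p (nucleus) filtering."""
    # Top-k: mask everything below the k-th largest logit.
    kth_value = torch.topk(logits, k)[0][..., -1, None]
    logits = logits.masked_fill(logits < kth_value, float("-inf"))

    # Top-p: keep the smallest prefix of the sorted distribution whose mass reaches p.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove_sorted = cum_probs > p
    remove_sorted[..., 1:] = remove_sorted[..., :-1].clone()  # always keep the most probable token
    remove_sorted[..., 0] = False
    remove = remove_sorted.scatter(-1, sorted_idx, remove_sorted)
    return logits.masked_fill(remove, float("-inf"))

# Hypothetical next-token logits over a tiny 10-token vocabulary.
logits = torch.tensor([[2.0, 1.5, 1.0, 0.5, 0.2, -1.0, -2.0, -3.0, -4.0, -5.0]])
probs = torch.softmax(filter_top_k_top_p(logits, k=5, p=0.7), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)  # sample one surviving token id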

2.4. MBR Decoding

MBR decoding provides a systematic approach to introducing randomness into the decoding process to generate diverse outputs, each of which has a probability of being valid. This approach enables language models to explore different translation possibilities, providing a better understanding of the input data [25,33]. MBR decoding has demonstrated encouraging results in tasks such as text generation and neural machine translation. Notable works include [34], who proposed MBR to address the trade-off between diversity and quality in text generation tasks; ref. [24], who presented a lattice MBR decoding approach over translation lattices to efficiently encode translation hypotheses and improve translation performance across languages; ref. [35], who combined neural machine translation (NMT) with traditional statistical machine translation (SMT) by minimizing the Bayes risk with respect to syntactic translation lattices; and [22], who argued that issues in neural machine translation systems are not inherent to the model or training algorithm but stem from maximum a posteriori decoding, and proposed considering the full translation distribution rather than focusing solely on the highest-scoring translation. MBR decoding can impose a significant computational burden, requiring careful decisions about the number of candidates to generate and the choice of utility function for hypothesis evaluation. Nevertheless, its potential benefits make it a promising approach worth exploring further, particularly in novel applications such as the D2T generation domain, where we introduce three MBR variants that integrate seamlessly with the modules of the DecoStrat framework.
The review of existing literature highlights several critical limitations in current decoding methods for data-to-text (D2T) generation. First, many individual approaches fail to fully leverage the capabilities of neural models, leading to suboptimal language generation. Deterministic methods often produce degenerate outputs, while stochastic techniques, although they enhance diversity, struggle to maintain coherence. Moreover, the effectiveness of Minimum Bayes Risk (MBR) methods in D2T generation remains unproven. Another significant issue is the lack of modularity in current decoding strategies, hindering the exploration and adaptation of decoding methods tailored to specific tasks. The proposed DecoStrat framework provides a unified and modular solution that integrates multiple decoding methods, including novel MBR variants. With its flexible design, the framework allows for seamless adaptation to diverse D2T generation tasks.

3. Research Methodology

This section identifies the essential elements required for D2T generation and introduces the innovative DecoStrat framework, designed to harness these components and unlock the potential of language models for D2T generation. Additionally, the visual representation of the DecoStrat framework illustrates its core components and their interactions, thereby offering a comprehensive understanding of the framework’s architecture.

3.1. Neural Language Modeling for D2T Generation

D2T generation can be formulated as a sequence-to-sequence problem, where the goal is to learn a mapping from an input sequence X to a target sequence Y. Let $X = (x_1, x_2, \ldots, x_T)$ be the input sequence and $Y = (y_1, y_2, \ldots, y_{T'})$ be the target sequence. The goal is to model the conditional probability $P(Y \mid X)$, which can be viewed as a next-word prediction problem, where the model predicts the next word in the target sequence given the context of the input sequence and the previous words in the target sequence.
A sequence-to-sequence model consists of encoder and decoder components. Traditionally, researchers have used Recurrent Neural Networks (RNNs) for encoder and decoder components. However, RNNs have significant limitations that make them less effective for modeling sequential data. Their sequential processing is slow and inefficient, and they suffer from the vanishing gradient problem, which hinders the training of deep RNNs. In contrast, transformer models such as T5 and BART have become the current state-of-the-art for sequence-to-sequence tasks. They utilize self-attention mechanisms to process input sequences in parallel and capture long-range dependencies [26,36], making them a superior choice for our purposes.
The encoder transforms the input sequence X into a continuous representation $\bar{X}$, and the decoder generates the target sequence Y based on $\bar{X}$ in an autoregressive manner, where each output token is generated based on the previous tokens in the target sequence. The encoder and decoder can be represented as functions f and g, respectively:
$\bar{X} = f(X) = \mathrm{Concat}(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_T)$  (1)
where Concat denotes the concatenation operation, combining the vectors $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_T$ into a single representation $\bar{X}$ of the input sequence X.
The probability of generating each token $y_t$ in the target sequence is computed using the softmax function:
$P(y_t \mid y_{<t}, \bar{X}) = \mathrm{softmax}(g(y_{<t}, \bar{X}))$  (2)
where $y_{<t} = y_{1:t-1} = \{y_1, y_2, \ldots, y_{t-1}\}$ is the sequence of previous target outputs. In the context of the decoder function g, the softmax function is applied to the output of g. Let us denote this vector as $z = g(y_{<t}, \bar{X})$. The softmax function takes the output vector z from the decoder and returns a probability distribution over all possible tokens in the vocabulary. This is done by computing the exponential of each element in z and normalizing the resulting values so that the probabilities add up to 1.
The softmax function can then be defined as
$\mathrm{softmax}(z_i) = \dfrac{e^{z_i}}{\sum_{j=1}^{N} e^{z_j}}$  (3)
where $z_i$ is the i-th element of the vector z, N is the number of elements in z, and e is a mathematical constant approximately equal to 2.718.
The overall probability of generating the target sequence Y given the input sequence X is
$P(Y \mid X) = \prod_{t=1}^{T'} P(y_t \mid y_{<t}, \bar{X})$  (4)
The training objective is to maximize the likelihood of the target sequence Y given the input sequence X:
$\mathcal{L}(\theta) = \max_{\theta} \dfrac{1}{N} \sum_{n=1}^{N} \log p(Y_n \mid X_n; \theta)$  (5)
where $\mathcal{L}$ is the likelihood of the target sequence given the input sequence, $X_n$ and $Y_n$ are the input and target sequences for the n-th example in the training set, N is the number of examples in the training set, and $\theta$ is the set of model parameters.
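As a concrete illustration of the objective in Equation (5), the following sketch shows a minimal teacher-forced training step with a Hugging Face T5 model. It is a sketch under stated assumptions: the "t5-small" checkpoint and the single (MR, reference) pair are placeholders, not the paper's exact training script.

import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Placeholder checkpoint; the paper fine-tunes T5/BART checkpoints from the Hugging Face Hub.
tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One hypothetical (MR, reference) pair in the linearized format described in Section 4.1.
mr = "topic = booking | intent = book | reference number = 85bgkwo4 | length of stay = 1"
ref = "Your booking was successful for 1 night. The reference number is 85bgkwo4."

inputs = tokenizer(mr, return_tensors="pt")
labels = tokenizer(ref, return_tensors="pt").input_ids

# The model returns the cross-entropy loss, i.e., the negative log-likelihood of Equation (5).
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()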

Decoding Methods

The decoder generates the target sequence Y based on the continuous representation $\bar{X}$ in an autoregressive manner, relying on its prediction history to construct the output sequence $\hat{y}$ during inference. The decoding method determines which token to choose at each time step, playing a crucial role in determining the quality of the generated output as it navigates the complex probability space defined by the language model [31]. We explore three distinct decoding scenarios in D2T generation: deterministic methods that produce a single output, stochastic methods that introduce randomness, and Minimum Bayes Risk (MBR) methods that select the best candidate output from the generated options.
Deterministic decoding methods: These methods select the most likely token at each time step, resulting in a single, deterministic output sequence. This is achieved by choosing the token with the highest probability from the vocabulary set V at each time step t, as in Equation (6):
$\hat{y}_t = \arg\max_{y \in V} P(y_t \mid y_{1:t-1}, \bar{X})$  (6)
Equation (6) can represent both greedy search and beam search, which differ in their decoding method d and its parameters $d_\theta$. In greedy search, d iteratively selects the most likely token at each step, with $d_\theta$ including a beam size of $b = 1$ [31,37,38], whereas, in beam search, d maintains a set of top-k hypotheses, with $d_\theta$ including a beam size of $b \geq 2$ [31,39,40,41,42]. We can view both methods as different instantiations of the decoding method d and its parameters $d_\theta$, used to determine the output sequence $\hat{y}_t$.
Stochastic decoding methods: These methods randomly sample from the probability distribution $P(y_t \mid y_{1:t-1}, \bar{X})$ at each time step t, resulting in a stochastic output sequence. This is achieved by introducing randomness into the output generation process, as shown in Equation (7):
$\hat{y}_t \sim P(y_t \mid y_{1:t-1}, \bar{X})$  (7)
Equation (7) indicates that we can obtain the predicted output sequence $\hat{y}_t$ by sampling from the probability distribution $P(y_t \mid y_{1:t-1}, \bar{X})$, where the decoding method d and its parameters $d_\theta$ determine the specific sampling strategy by controlling how the sampling is carried out. The two popular stochastic decoding methods are top-k, which samples from the top-k most probable words at each time step [23], and top-p, which samples from the subset of most probable words whose cumulative probability equals or exceeds a threshold p at each step [13]. We can represent both as $\hat{y}_t \sim P(y_t \mid y_{1:t-1}, \bar{X})$.
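Assuming the fine-tuned model exposes the standard Hugging Face generate API, the decoding method d and its parameters $d_\theta$ in Equations (6) and (7) roughly correspond to the calls below, reusing the model, tokenizer, and inputs from the fine-tuning sketch above. The parameter values mirror those used later in Table 8; this is an illustrative mapping, not the framework's exact code.

# Deterministic decoding (Equation (6)): greedy search (b = 1) and beam search (b >= 2).
greedy_ids = model.generate(**inputs, num_beams=1, do_sample=False, max_new_tokens=64)
beam_ids = model.generate(**inputs, num_beams=5, do_sample=False, max_new_tokens=64)

# Stochastic decoding (Equation (7)): sample from the truncated next-token distribution.
sampled_ids = model.generate(
    **inputs, do_sample=True, top_k=30, top_p=0.7, max_new_tokens=64
)

print(tokenizer.decode(sampled_ids[0], skip_special_tokens=True))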
MBR decoding methods: These methods generate multiple candidate outputs and select the best one based on a utility function such as BLEURT [22,24,34]. This is achieved by first generating a set of candidate outputs $\mathcal{Y}$, as shown in Equation (8):
$\mathcal{Y} = \{y_1, y_2, \ldots, y_m\} \sim P(y_t \mid y_{1:t-1}, \bar{X})$  (8)
Then, the predicted output sequence $\hat{y}_t$ is selected using the candidate selection strategy $S$, as shown in Equation (9):
$\hat{y}_t = \arg\max_{y \in \mathcal{Y}} S(y, \mathcal{Y})$  (9)
We can evaluate the quality of the generated text $\hat{y}$ for all scenarios using conventional automatic metrics such as BLEU [43], ROUGE [44], METEOR [45], CIDEr [46], or any other metric that measures the similarity or relevance of the generated text to the reference text y [47]. Moreover, we can apply human evaluation to examine qualities of the generated text such as coherence, informativeness, relevance, and fluency [48,49].

3.2. Proposed Framework

The proposed DecoStrat framework integrates fine-tuned language models, alternative decoding methods, and candidate selection strategies S through its interacting modules. This section presents the DecoStrat system architecture and the details of each module.

3.2.1. DecoStrat System Architecture

The DecoStrat system architecture integrates fine-tuned language models followed by six interacting modules: Director, Generator, Manager, Ranker, Selector, and DecoDic. We designed this architecture to effectively process input data using trained language models and alternative decoding methods to produce the output sequences. The high-level operation of this system is as follows.
The model training process builds upon the work of [30]. We initialized the parameters of the pre-trained model checkpoints from the Hugging Face Hub [50], which provides a standardized framework for conducting experiments with transformer-based models on NLP tasks. The training and validation datasets were then loaded, and the training parameters were applied. The experiment section provides the detailed implementation of the training procedure, including the model specifications and training parameters. During the inference stage, the Director receives input data, a language model, and a decoding method and then instructs the Generator to produce outputs. The Manager subsequently processes the generated output(s), determining the decoding method category, applying a selection strategy S, and returning the selected candidate s_c. The interactions between these modules are crucial in determining the final output sequence. The DecoStrat system architecture visually depicts the main components and interactions that comprise the framework, as illustrated in Figure 2.

3.2.2. DecoDic

The DecoStrat framework comprises a crucial component, the DecoDic, a comprehensive dictionary of decoding methods that serves as a central hub for decoding methods and their corresponding parameters. The primary function of DecoDic is to validate and configure decoding methods employed by the Director and Manager modules, facilitating seamless integration and optimal performance in the D2T generation process.
As illustrated in Algorithm 1, DecoDic categorizes decoding methods into two main types: single output and multiple outputs. The single output category includes decoding methods, such as greedy search, that produce a single output for a given input sequence. The single output key in the dictionary contains a list of these methods, each represented as a nested dictionary in which the method name serves as the key and the corresponding parameters (parameter-name: parameter-value pairs) are stored as values. Conversely, the multiple outputs category encompasses techniques that generate multiple intermediate outputs, such as MBR decoding. DecoDic plays a crucial role in the D2T generation task by providing a standardized way of representing decoding methods and their parameters. The Director module uses the DecoDic function to validate its chosen decoding method. Similarly, the Manager module applies this function to determine the category of the selected decoding method, enabling informed decisions about how to proceed with the task. By providing this critical functionality, the DecoDic function ensures that the decoding methods used by the Director and Manager modules are valid and correctly configured for the D2T generation task.
Algorithm 1: Decoding methods dictionary
(Pseudocode listing provided as an image in the original article.)
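Since the exact contents of Algorithm 1 are given only as a figure, the dictionary below is a hypothetical sketch of the structure it describes. The method names and parameter values are chosen to match the decoding configurations discussed elsewhere in the paper and are illustrative, not the framework's actual source.

# Hypothetical DecoDic structure: decoding methods grouped by output category,
# each method mapped to its parameter-name: parameter-value pairs.
DECODIC = {
    "single_output": [
        {"greedy_search": {"num_beams": 1}},
        {"beam_search": {"num_beams": 5}},
        {"stochastic_sampling": {"top_p": 0.7, "top_k": 30}},
    ],
    "multiple_outputs": [
        {"mbr": {"top_p": 0.7, "top_k": 30, "k": 10,
                 "selection": "joint", "T": 3, "U": "BLEURT"}},
    ],
}

def decodic():
    """Return the category -> methods mapping consulted by the Director and Manager modules."""
    return DECODIC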

3.2.3. Director

The Director module is one of the main components of the DecoStrat framework that oversees the generation of output sequences from the input data using a fine-tuned language model and decoding method. It coordinates the interactions between the Generator and Manager modules to produce the final output.
The Director module, shown in Algorithm 2, coordinates the generation of output sequences by validating the decoding method using the DecoDic function, generating output sequences using the Generator module, selecting the optimal candidate output sequence using the Manager module, and, finally, returning the predicted output sequence ŷ. Acting as a high-level controller, the Director orchestrates the interactions between the Generator and Manager modules and thereby directs the D2T generation process to produce the predicted output ŷ efficiently.
Algorithm 2: Director module
Input: X: input sequence.
       D = {d_1, d_2, ..., d_n}: decoding methods.
       LM: D2T generation language model.
Output: ŷ: the predicted output sequence.
module Director(X, D, LM):
    candidates ← [ ]
    selected_candidate ← [ ]
    ŷ ← [ ]
    category ← DecoDic()
    if the chosen method d ∈ D is not in category then
        raise a value error
    candidates ← Generator(X, d, LM)            ▹ Algorithm 3
    selected_candidate ← Manager(candidates)    ▹ Algorithm 4
    ŷ ← selected_candidate
    return ŷ

3.2.4. Generator

The Generator module is a crucial lower-level component of the DecoStrat framework, performing the actual D2T generation task. It takes input data, a language model, decoding methods, decoding parameters, and the number of candidate output(s) and produces a list containing the generated output sequence(s).
The Generator module, shown in Algorithm 3, produces output sequence(s) based on the provided inputs and the corresponding processing algorithm. The algorithm accommodates three distinct categories of decoding methods: deterministic, stochastic, and MBR.
Algorithm 3: Generate output sequence(s).
(Pseudocode listing provided as an image in the original article.)
  • Deterministic decoding: When k = 1 and D is deterministic, the algorithm generates a single output sequence by iteratively predicting the next token with the highest probability according to the language model LM.
  • Stochastic decoding: When k = 1 and D is stochastic, the algorithm generates a single output sequence by sampling from the probability distribution predicted by LM.
  • MBR decoding: When k ≥ 2 and D is MBR, the algorithm produces k distinct output sequences by sampling from the probability distribution predicted by LM.
The output is a list containing the generated output sequence(s): a single sequence for deterministic and stochastic decoding, or k sequences for MBR decoding. Overall, the Generator module provides a flexible and efficient way to generate text from input data, supporting a range of decoding methods and associated parameters. A single output from the module represents the final predicted output, whereas multiple outputs are subject to subsequent processing by other components within the DecoStrat framework.
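The three branches above could be sketched as follows, assuming the language model is a Hugging Face seq2seq model and reusing the tokenizer from the earlier sketches. The function signature follows Algorithm 3's description rather than the exact implementation.

def generator(inputs, lm, method: str, params: dict, k: int = 1) -> list[str]:
    """Produce one output (deterministic/stochastic) or k candidates (MBR)."""
    if method in ("greedy_search", "beam_search"):
        out = lm.generate(**inputs, do_sample=False,
                          num_beams=params.get("num_beams", 1), max_new_tokens=64)
    elif method == "stochastic_sampling":
        out = lm.generate(**inputs, do_sample=True,
                          top_k=params["top_k"], top_p=params["top_p"], max_new_tokens=64)
    else:  # MBR: sample k distinct candidate sequences
        out = lm.generate(**inputs, do_sample=True, num_return_sequences=k,
                          top_k=params["top_k"], top_p=params["top_p"], max_new_tokens=64)
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in out]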

3.2.5. Manager

The Manager module is a decision-making algorithm that considers multiple possible scenarios to select a single candidate output from a set of candidates based on the specified decoding method and selection strategy.
The Manager module, shown in Algorithm 4, checks the validity of the decoding method and selection strategy to determine the selection process. The process involves selecting a candidate using the Selector module, ranking candidates using the Ranker module, or applying a joint ranking-and-selection approach. The Manager module integrates the Ranker and Selector modules to provide a flexible approach to choosing the most suitable candidate. If the selection strategy is not recognized, the algorithm raises an error message. Finally, it returns the selected candidate output to the Director.
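The decision logic of Algorithm 4 could be sketched as below. The ranker and selector helpers refer to the illustrative sketches given with the Ranker and Selector modules later in this section, and the strategy names mirror those in Table 8; this is an assumed rendering, not the framework's code.

def manager(candidates: list[str], strategy: str = "joint", top_n: int = 3) -> str:
    """Pick one candidate according to the configured selection strategy."""
    if len(candidates) == 1:          # single-output decoding methods
        return candidates[0]
    if strategy == "select":          # MBRs: utility-based selection only
        return selector(candidates)
    if strategy == "rank":            # MBRr: keep the single highest-ranked candidate
        return ranker(candidates, top_n=1)[0]
    if strategy == "joint":           # MBRj: rank first, then select among the top n
        return selector(ranker(candidates, top_n=top_n))
    raise ValueError(f"Unknown selection strategy: {strategy}")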

3.2.6. Ranker

The Ranker module is a ranking algorithm that adapts the PageRank algorithm [51] to rank candidates based on their pairwise similarity scores (Algorithm 7).

3.2.7. Calculate Similarity Matrix

The Ranker module, outlined in Algorithm 5, takes in a set of candidate outputs, a convergence threshold, a damping factor, a maximum number of iterations, and a similarity matrix calculated by the Calculate Similarity Matrix (CSM) procedure (Algorithm 6).
The Ranker algorithm initializes a score vector with uniform scores for each candidate and iteratively updates the score vector based on the similarity matrix and the damping factor until it converges or reaches the maximum number of iterations. Once the rankings stabilize, the algorithm returns the top n candidates, denoted T. The Ranker operates in two modes: when working individually, it directly selects the best output based on its ranking; when working jointly, it ranks the top T candidate outputs and passes the list to the Selector module. We redesigned the algorithm to suit D2T generation tasks, where the multiple generated outputs require ranking based on similarity. (Note: df = 0.85, threshold = 1 × 10⁻⁵, 100 epochs, and the BERT base model were used during implementation.) The pairwise similarity scores that populate the matrix are computed by the Calculate Similarity Score procedure (Algorithm 7).
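A compact NumPy sketch of the PageRank-style iteration described above is given below, using the stated damping factor, threshold, and iteration limit. The paper computes similarities with a BERT base model; the token-overlap similarity used here is a stand-in solely to keep the sketch self-contained, and the function names are illustrative.

import numpy as np

def csm(candidates: list[str]) -> np.ndarray:
    """Stand-in for the Calculate Similarity Matrix procedure (Algorithm 6).
    Token-overlap (Jaccard) similarity replaces the BERT-based scores used in the paper."""
    sets = [set(c.lower().split()) for c in candidates]
    n = len(sets)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                sim[i, j] = len(sets[i] & sets[j]) / max(len(sets[i] | sets[j]), 1)
    return sim

def ranker(candidates: list[str], top_n: int = 3, df: float = 0.85,
           threshold: float = 1e-5, epochs: int = 100) -> list[str]:
    """Rank candidates with a PageRank-style iteration over their similarity matrix."""
    sim = csm(candidates)
    n = len(candidates)
    col_sums = sim.sum(axis=0, keepdims=True)
    transition = sim / np.where(col_sums == 0, 1.0, col_sums)  # column-stochastic matrix
    scores = np.full(n, 1.0 / n)                               # uniform initialization
    for _ in range(epochs):
        new_scores = (1 - df) / n + df * (transition @ scores)
        if np.abs(new_scores - scores).sum() < threshold:      # convergence check
            scores = new_scores
            break
        scores = new_scores
    order = np.argsort(-scores)                                # highest score first
    return [candidates[i] for i in order[:top_n]]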

3.2.8. Selector

The Selector module selects an optimal output from a list of candidates based on pairwise similarity scores computed using a utility function.
The Selector module, as shown in Algorithm 8, initializes a matrix, calculates scores for each pair of candidates, and determines the sum of scores for each candidate. The algorithm then selects the output with the highest sum, indicating that it is the most relevant or representative of the overall set. The Selector can operate in two modes: receiving candidates directly from the Generator and making the final selection based on utility scores when working individually, or receiving the ranked list of top n candidates from the Ranker and making the final selection based on utility scores when working jointly. The Ranker receives candidates from the Generator, ranks them, and passes the top n candidates to the Selector, reducing the Selector's burden and allowing it to focus on making the final selection from a smaller set of promising candidates. This collaborative approach enables the Ranker and Selector to work together efficiently, leveraging their strengths to produce the best possible output.
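The Selector's pairwise scoring can be sketched as follows. The paper uses BLEURT as the utility function U; this sketch accepts any pairwise utility and falls back to a simple token-overlap score purely for illustration, so the default utility is an assumption rather than the actual metric.

def selector(candidates: list[str], utility=None) -> str:
    """Return the candidate with the highest summed pairwise utility (MBR-style selection)."""
    if utility is None:
        # Placeholder utility; the paper scores candidate pairs with BLEURT.
        utility = lambda hyp, ref: (len(set(hyp.split()) & set(ref.split()))
                                    / max(len(set(ref.split())), 1))
    n = len(candidates)
    totals = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                # Treat every other candidate as a pseudo-reference for candidate i.
                totals[i] += utility(candidates[i], candidates[j])
    return candidates[max(range(n), key=lambda i: totals[i])]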
Algorithm 4: Manager module. (Listing provided as an image in the original article.)
Algorithm 5: Rank candidates. (Listing provided as an image in the original article.)
Algorithm 6: Calculate Similarity Matrix. (Listing provided as an image in the original article.)
Algorithm 7: Calculate Similarity Score. (Listing provided as an image in the original article.)
Algorithm 8: Select a candidate. (Listing provided as an image in the original article.)

3.3. Illustration on DecoStrat Module Interactions

To demonstrate the flexibility and configurability of the DecoStrat framework, we present a scenario that showcases the seamless interaction of all its modules. This scenario highlights how the modules work together to produce the desired output. Using sample data from the MultiWOZ test dataset, we illustrate the operation of the DecoStrat framework in generating text from meaning representation (MR) input data. This example scenario requires the involvement of all modules, allowing us to observe their interactions and demonstrate the framework’s capabilities.
Given the inputs shown in Table 1, the DecoStrat modules interact to generate the output ŷ as follows:
  • Director: The Director module takes in the linearized input data MR, decoding method D, and language model LM. It checks whether the decoding method D is in the category of supported decoding methods using the DecoDic() function, and it initializes an empty list for the selected candidate and the output ŷ.
  • Generator: The Director module calls the Generator, passing in the input data MR, decoding method D, and language model LM. The Generator module retrieves the number of candidates to generate, k, and the decoding parameters d_θ, which consist of top-p and top-k. It generates k candidate outputs, as shown in Table 2, for the given input using the MBR decoding method with top-p = 0.7 and top-k = 30.
  • Manager: The Director module calls the Manager, passing in the set of candidate outputs Y. The Manager module retrieves the decoding method D, selection strategy S, number of top-ranked candidates T, and utility function U. It initializes an empty list for the selected candidate s_c and retrieves the category of decoding methods using the DecoDic() function. Since the selection strategy S is set to "joint", the Manager module integrates the Ranker and Selector modules so that they work together. The Ranker module takes the generated candidates, ranks them, and returns the top 3 candidates, as shown in Table 3, to the Selector module. The Selector module then selects the best candidate from the top 3, as shown in Table 4, using the BLEURT utility function.
  • Output: The Director module returns the selected candidate as the output ŷ.
Table 1. MultiWOZ sample data and configurable inputs to illustrate the interactions of DecoStrat modules.
DecoStrat Inputs    Values
MR                  topic = general | intent = request more | topic = booking | intent = book | reference number = 85bgkwo4 | length of stay = 1
LM                  /checkpt/multiwoz/T5base/epoch_22_step_1749/
D                   MBR
top-p               0.7
top-k               30
k                   10
S                   joint
T                   3
U                   BLEURT
Table 2. Sample generated candidate outputs using the Generator module.
Cand-ID    Candidate Outputs
Cand-1     Your booking was successful. The reference number is 85bgkwo4. You’ll be staying for 1 night. Is there anything else I can help you with?
Cand-2     Your booking for 1 night was successful. The reference number is 85bgkwo4. Do you need anything else?
Cand-3     I was able to book you for 1 night, your reference number is 85bgkwo4. Can I help you with anything else?
Cand-4     I have successfully booked your hotel for 1 night. Your reference number is 85bgkwo4. Can I help with anything else today?
Cand-5     I was able to book you for 1 night. Your reference number is 85bgkwo4. Can I help you with anything else?
Cand-6     Booking was successful for 1 night. Reference number is: 85bgkwo4. Is there anything else I can help you with?
Cand-7     Your booking was successful for 1 night. The reference number is 85bgkwo4. Can I help you with anything else today?
Cand-8     I have made that reservation for 1 night, your reference number is 85bgkwo4. Is there anything else I can help you with today?
Cand-9     Yes, I was able to book 1 night. The reference number is 85bgkwo4. Can I help you with anything else?
Cand-10    Booking was successful for 1 night. The table will be reserved for 15 min. Reference number is: 85bgkwo4. Anything else I can help with?
Table 3. Ranked candidate outputs using the Ranker module.
RC-ID    Ranked Candidate Outputs
RC-1     Your booking for 1 night was successful. The reference number is 85bgkwo4. Do you need anything else? [Cand-2]
RC-2     I was able to book you for 1 night. Your reference number is 85bgkwo4. Can I help you with anything else? [Cand-3]
RC-3     Your booking was successful for 1 night. The reference number is 85bgkwo4. Can I help you with anything else today? [Cand-4]
Table 4. Selected candidate output using the Selector module.
SC-ID    Selected Candidate Output
SC       Your booking was successful for 1 night. The reference number is 85bgkwo4. Can I help you with anything else today? [Cand-4]
In summary, the proposed framework enables the effective utilization of language models in D2T generation. It provides a flexible architecture that accommodates various decoding strategies and language models. At its core, the framework offers three key features: flexibility, separation of concerns, and abstraction. These features enable the building and customization of NLG systems using our framework. DecoStrat’s flexibility allows for the integration of fine-tuned language models, integration and customization of decoding methods, and candidate selection strategies, making it adaptable to diverse applications and evolving requirements. The separation of concerns within the framework allows researchers to focus on designing diverse decoding methods without concerning themselves with the intricate details of other components. DecoStrat abstracts complex implementation details using the PyTorch framework, transformer-based language models, and utilities to make it simple and user-friendly. While DecoStrat has the potential to be a promising solution, its effectiveness relies on careful consideration of factors such as task-specific model training, decoding parameter tuning, and the choice of utility function. With DecoStrat, users can seamlessly integrate and optimize different decoding strategies with language models, achieving better results in D2T generation tasks. Next, we will experimentally evaluate the effectiveness of DecoStrat, considering task-specific model training, integration of proposed decoding methods, and parameter tuning.

4. Experimental Setup

This section presents the experimental data, baselines we compared with, and implementation details, including the system parameters for experimentation, model specifications, training parameters, and decoding strategies with associated parameters, thereby establishing a comprehensive setup for our investigation.

4.1. Experimental Data

The MultiWOZ 2.1 [1] dataset is a large-scale, multi-domain publicly available dialogue dataset consisting of 70,530 dialogues, divided into training (55,951), validation (7286), and test (7293) sets. It covers seven domains: attraction, hotel, taxi, restaurant, train, police, and general. In task-oriented dialog systems, the NLG module converts dialog acts represented in semantic form into natural language responses [52]. The dataset used in this work was an extracted version of MultiWOZ 2.1, provided by [30]. This extracted dataset follows the approach of [29,53]. Our goal was to train a model that takes in a meaning representation (MR) as input data and generates a system turn as output text, a process known as data-to-text (D2T) generation. Then, the trained model should accurately predict system responses across various domains and scenarios, conditioned on the input MR, and generate system responses that are contextually relevant and coherent.
Table 5 illustrates a sample of paired meaning representation (MR) and reference (Ref) data from the MultiWOZ dataset, along with the preprocessed linearized MR. The table showcases the structured input data and their corresponding natural language output for a dialogue system. The MR contains semantic representations of user intents and preferences, such as selecting a restaurant, providing booking information, and specifying food choices. The linearized MR demonstrates a processed version of the MR in a linear format, separating different attributes using specific delimiters. For instance, when presented with an MR indicating a preference for three Chinese restaurants, the model is expected to respond appropriately by acknowledging the request, informing the available options, and suggesting a booking. The MR undergoes a preprocessing step described by [30], where the process separates individual slot–value pairs using “|” and delimits values from slot names using “=” symbols. The reference (Ref) presents a human-readable response generated based on the given MR, offering the user relevant information and prompting further interaction, such as booking a table at a suitable restaurant.
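The linearization step described above could look roughly like the helper below. The slot names and values follow the sample in Table 1, and the function name is hypothetical; the actual preprocessing is the one described by [30].

def linearize_mr(slot_value_pairs: list[tuple[str, str]]) -> str:
    """Join slot-value pairs with '=' and '|' delimiters, as in the preprocessed MultiWOZ MRs."""
    return " | ".join(f"{slot} = {value}" for slot, value in slot_value_pairs)

mr = [("topic", "booking"), ("intent", "book"),
      ("reference number", "85bgkwo4"), ("length of stay", "1")]
print(linearize_mr(mr))
# topic = booking | intent = book | reference number = 85bgkwo4 | length of stay = 1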

4.2. Baselines

Our approach is compared with two recent state-of-the-art baseline studies in D2T generation tasks.
  • The work by [30] examines the attention mechanism in pre-trained language models fine-tuned for D2T generation tasks. Their research revealed that encoder–decoder models with cross-attention can grasp semantic constraints that traditional decoding methods may not fully exploit. They introduced a methodology that extracts pertinent information from the cross-attention mechanism at each decoding step to identify accurately realized slots in the output. They then combined this approach with beam search and rescored beam hypotheses to select those with the fewest missing or incorrect slot mentions. Unlike our approach, their method relies on beam search.
  • The second work, by [29], explores T5 pre-training and fine-tuning for D2T tasks. The DecoStrat framework employs the T5 and BART models with multiple decoding strategies, diverging from their singular use of T5 and greedy search. Despite this distinction, their observations offer valuable insights for our investigation, emphasizing the potential of pre-trained T5 models for D2T tasks.

4.3. System Parameters

The model training and DecoStrat evaluation were conducted on a Linux server equipped with NVIDIA GeForce RTX 3090 GPUs, each featuring 24 GB of memory. We utilized the PyTorch framework along with the Hugging Face Transformers library [50]. Although the server was configured with a total of eight GPUs, our experiments typically employed two or three GPUs, which proved sufficient for effective training and evaluation.

4.4. Model Specifications and Training Parameters

The model training process was built upon the work of [30]. We initialized the parameters of the pre-trained T5 and BART model checkpoints sourced from the Huggingface hub [50], which provides a standardized framework for experimenting with models on various NLP tasks. Subsequently, we loaded the training and validation datasets and employed the AdamW optimizer [54] with a linear learning rate schedule and a warm-up period. During training, the loop iterated over epochs and batches, preparing and processing each batch, performing forward and backward passes, and updating the optimizer and learning rate scheduler. The objective was to maximize the log-likelihood of the target sequence given the input sequence [55], as formulated in Equation (5). The DecoStrat framework leverages pre-trained models and established training techniques to generate text from the input data. The detailed model specifications and training parameters used in our experiments are summarized in Table 6.

4.5. Best Model Checkpoints

We monitored the model’s training performance on the validation set using the BLEU score and updated the best model checkpoints accordingly. The optimal model checkpoints achieved during the training process are presented in Table 7.
During inference, these model checkpoints were effectively utilized with the interaction of different modules to produce an output given the input data and alternative decoding methods, as shown in Figure 2 and the detailed proposed methods.

4.6. DecoStrat Generation Parameters

Table 8 outlines the decoding strategies and associated parameters used in our experiments. The study explores four decoding methods: greedy search (GS) with a beam size of 1, beam search (BS) with a beam size of 5, Stochastic Sampling (SS) with a top_p of 0.7 and a top_k of 30, and Minimum Bayes Risk (MBR) decoding with its three variants. In MBR decoding, we generate multiple candidates and choose the best one using a candidate selection strategy S operating in one of three modes: select, rank, and joint. We refer to the MBR decoding method with the "select" strategy as MBRs; in MBRs, we employ the Selector module to choose the final candidate, using BLEURT as the utility function U. We denote the MBR decoding method with the "rank" selection strategy as MBRr, which configures the Ranker module to use the "rank" strategy and sets the top n value, denoted T, to 1, ensuring that only the highest-ranked candidate is chosen from the multiple options. We denote the MBR decoding method with the joint strategy as MBRj, which sets the candidate selection strategy S to "joint" and T to 3; this strategy involves a joint operation of the Ranker and Selector modules to select the final candidate. We explored the effects of tuning the decoding parameter values and chose values that help attain optimal results. For more information on the candidate selection strategies, we refer readers to Algorithm 4.
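For reference, the decoding configurations just described could be expressed as the following dictionary, with values taken from Table 8 and the Section 3.3 illustration. The key names are illustrative and not part of the released configuration format.

# Illustrative encoding of the decoding strategies in Table 8.
GENERATION_CONFIGS = {
    "GS":   {"num_beams": 1},
    "BS":   {"num_beams": 5},
    "SS":   {"do_sample": True, "top_p": 0.7, "top_k": 30},
    "MBRs": {"do_sample": True, "top_p": 0.7, "top_k": 30, "k": 10,   # k as in the Section 3.3 example
             "selection": "select", "utility": "BLEURT"},
    "MBRr": {"do_sample": True, "top_p": 0.7, "top_k": 30, "k": 10,
             "selection": "rank", "top_n": 1},
    "MBRj": {"do_sample": True, "top_p": 0.7, "top_k": 30, "k": 10,
             "selection": "joint", "top_n": 3, "utility": "BLEURT"},
}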

4.7. DecoStrat Performance Evaluation Metrics

The evaluation of the DecoStrat framework involved a comparison with the currently state-of-the-art baseline works using established automatic metrics: BLEU (B) [43], ROUGE (R) [44], METEOR (M) [45], CIDEr (C) [46] developed by the E2E NLG Challenge [56], and Slot Error Rate (SER) on the MultiWOZ test data. SER measures the proportion of instances where the slot value from the original structured data (MR) is not mentioned or incorrectly mentioned in the predicted response. We calculated SER by checking if all slot values were present in the generated text, regardless of their order or phrasing [30].
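A simplified reading of the SER definition above could be implemented as below. This is an approximation of the slot-matching check described in [30], not the exact evaluation script; the example MR and prediction are hypothetical.

def slot_error_rate(mrs: list[dict[str, str]], predictions: list[str]) -> float:
    """Percentage of slot values missing from the corresponding predicted response."""
    total, errors = 0, 0
    for mr, pred in zip(mrs, predictions):
        pred_lower = pred.lower()
        for slot, value in mr.items():
            total += 1
            if value.lower() not in pred_lower:   # slot value not realized in the output
                errors += 1
    return 100.0 * errors / max(total, 1)

mrs = [{"reference number": "85bgkwo4", "length of stay": "1"}]
preds = ["Your booking was successful for 1 night. The reference number is 85bgkwo4."]
print(slot_error_rate(mrs, preds))  # 0.0 for this single example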

5. Results and Discussion

This section presents the findings from our in-depth evaluation of the DecoStrat framework, providing a breakdown of the experimental outcomes and exploring their implications.

5.1. Results of DecoStrat with T5-Small Model

The results of DecoStrat with the T5-small model and alternative decoding methods on the MultiWOZ dataset, shown in Table 9, demonstrate its effectiveness in dialogue response generation. DecoStrat outperforms the baselines in METEOR, with the highest score of 0.325 achieved by T5-s + MBRs, indicating that it generates responses with relatively better coherence and semantic similarity to the reference texts. It also achieves competitive BLEU results, with the highest score of 0.356 achieved by T5-s + MBRj, and surpasses the baseline K&R in BLEU across all decoding methods, though not in SER. Compared with the SA and SG baselines, which achieve the highest BLEU score of 0.360, our framework shows promising results, especially with the MBR variants; among the DecoStrat variants, MBRj attains the highest BLEU score. However, our framework's SER of 1.63 with T5-s + MBRs is higher than SA's 0.63, indicating that it struggles to preserve slots relative to the strongest baseline. The MBR variants introduce controlled randomness through top-k and top-p sampling, enabling a more diverse exploration of the search space, although their impact on BLEU is relatively marginal compared to the state-of-the-art baseline. These results suggest that DecoStrat with alternative decoding methods can be a valuable approach for enhancing dialogue response generation, and they highlight its potential as a viable strategy for improving the quality of generated responses.

5.2. Results of DecoStrat with T5-Base Model

The DecoStrat framework with the T5-base model and alternative decoding methods achieved improved results, with the highest BLEU score of 0.354 achieved by T5-b + BS, as shown in Table 10. Moreover, our framework outperforms the baseline in terms of METEOR, with the highest score of 0.327 achieved by T5-b + BS. The DecoStrat framework also achieves lower SER scores of 1.28 with T5-b + MBRs, which is competitive compared to the baseline K&R’s score of 1.27, indicating a relatively equivalent performance in slots mentioned in their output text. The framework shows promising results, especially with the BS and MBR variants, which have implications for generating more accurate and informative output in data-to-text generation tasks.

5.3. Results of DecoStrat with BART-Base Model

As shown in Table 11, the DecoStrat framework with the BART-base model and alternative decoding methods achieves competitive results, with the highest BLEU score of 0.359 achieved by Bart + BS and Bart + SS. Additionally, the framework outperforms the baselines in METEOR, with the highest score of 0.329 achieved by Bart+SS. The DecoStrat framework SER score of 1.34 with Bart + BS is higher than the baseline SA’s score of 0.60, indicating that the baseline is better at preserving slots. Overall, results demonstrate the potential of the DecoStrat framework with the BART-base model and alternative decoding methods for generating an accurate output from given input data.

5.4. Performance Comparison Across Models

The results of the DecoStrat D2T generation models across various evaluation criteria reveal notable performance differences, as shown in Figure 3. For the SER measurement, the best-performing method is T5-b + MBRs, while the relatively poor-performing methods are T5-s + GS and T5-s + MBRr. In the BLEU measurement, the Bart model, which employs all variants of decoding methods, achieves the highest performance, with other methods showing comparable results. Similarly, for the METEOR measurement, Bart + SS stands out as the top performer, with the other techniques demonstrating performance levels that are also competitive. Overall, these findings underscore the strengths of the Bart model across different evaluation criteria while highlighting areas for potential improvement in the T5 model variants.

5.5. Implications of the Results

The DecoStrat framework performs competitively with the baselines, showing that its alternative decoding methods can generate text of comparable quality. Among these methods, the MBR variants consistently outperform the others in the METEOR and SER metrics, indicating that they generate diverse and contextually relevant text while maintaining high accuracy in slot preservation, a pivotal objective of D2T generation tasks. Although the BS decoding method demonstrates competitive results in BLEU and METEOR, it exhibits higher SER scores than the MBR variants, implying difficulties in preserving slots effectively. The results also indicate that the T5-small model with MBR variants achieved performance relatively similar to the T5-base model, suggesting that the T5-small model may be a cost-effective option for D2T generation tasks. This finding aligns with previous studies, such as [29], which have also reported that model size does not always correlate with performance. Moreover, the BART-base model with MBR variants demonstrates competitive results compared to the T5 models, although with slightly lower BLEU scores. These results suggest that the BART-base model can produce text of comparable quality to the T5 models but may struggle to generate accurate text. Despite this, the difference in BLEU scores is relatively small, indicating that the BART-base model remains a viable choice for D2T generation tasks.
One of the primary advantages of the DecoStrat framework is its integration of multiple decoding strategies, including MBR variants, which have shown superior performance in generating diverse and contextually relevant text. This flexibility allows practitioners to select the most appropriate method based on the specific requirements of their D2T tasks while balancing the need for slot preservation with the desire for diversity in the output. Furthermore, our findings indicate that the T5-small model with MBR variants can achieve performance comparable to larger models like the T5-base. This scenario suggests that DecoStrat can deliver high-quality D2T outputs without the computational overhead associated with larger models, particularly in resource-constrained environments.
Moreover, the findings of the DecoStrat framework highlight its broad applicability across various domains. By integrating multiple decoding methods and leveraging advanced language models, DecoStrat effectively generates diverse and contextually relevant text, enhancing D2T applications. Its performance, particularly with MBR variants, demonstrates the framework’s potential to improve output quality in real-world scenarios, including customer service automation and customized conversation systems. These applications can enhance user engagement and satisfaction in conversational AI environments. This versatility suggests the DecoStrat framework’s usefulness for various D2T generation tasks.

6. Conclusions

In this paper, we introduced DecoStrat, a unified and modular framework that bridges the gap between language modeling and decoding processes in D2T generation. By integrating multiple decoding methods, DecoStrat enables the seamless exploration of optimal decoding strategies for specific tasks. Our experimental results on the MultiWOZ dataset demonstrate the effectiveness of DecoStrat in generating diverse and contextually relevant text from structured data. Notably, the MBR variants, which have shown encouraging performance in machine translation, consistently outperformed the other decoding methods on the METEOR and SER metrics, indicating their ability to produce diverse, contextually relevant text while maintaining high accuracy in slot preservation. The BS decoding option also achieved competitive results but exhibited higher SER scores than the MBR variants, implying difficulties in preserving slots effectively. Overall, our findings suggest that DecoStrat can make a significant impact on D2T generation: its ability to explore different decoding strategies addresses a key challenge in generating diverse and contextually relevant text from structured data. Future work can build on the framework and explore its potential in real-world scenarios; investigating additional decoding methods, language models, learning-based evaluation metrics, and human evaluation could provide a more comprehensive understanding of its performance.

Author Contributions

E.L.J. designed and built the DecoStrat framework and carried out the experiments, analysis, visualization, and writing. W.C. carried out supervision and resource allocation. M.A.A.-a. was involved in analysis, review and editing, visualization, and funding acquisition. Y.H.G. was involved in analysis, review and editing, and funding acquisition. V.K.A. supported result analysis and editing. W.F. supported result analysis and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-RS-2024-00437191) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2022-00166402 and RS-2023-00256517).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets used in this study are publicly available and freely accessible at https://github.com/jjuraska/data2text-nlg/tree/master/data/multiwoz (accessed on 7 October 2024).

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-RS-2024-00437191) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. RS-2022-00166402 and RS-2023-00256517).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this study.

References

  1. Eric, M.; Goel, R.; Paul, S.; Sethi, A.; Agarwal, S.; Gao, S.; Kumar, A.; Goyal, A.; Ku, P.; Hakkani-Tur, D. MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State Corrections and State Tracking Baselines. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; Calzolari, N., Béchet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mariani, J., et al., Eds.; 2020; pp. 422–428. [Google Scholar]
  2. Portet, F.; Reiter, E.; Gatt, A.; Hunter, J.; Sripada, S.; Freer, Y.; Sykes, C. Automatic generation of textual summaries from neonatal intensive care data. Artif. Intell. 2009, 173, 789–816. [Google Scholar] [CrossRef]
  3. Pauws, S.; Gatt, A.; Krahmer, E.; Reiter, E. Making Effective Use of Healthcare Data Using Data-to-Text Technology. In Data Science for Healthcare: Methodologies and Applications; Consoli, S., Reforgiato Recupero, D., Petković, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 119–145. [Google Scholar] [CrossRef]
  4. Belz, A. Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Nat. Lang. Eng. 2008, 14, 431–455. [Google Scholar] [CrossRef]
  5. Barzilay, R.; Lapata, M. Collective Content Selection for Concept-to-Text Generation. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, BC, Canada, 6–8 October 2005; Mooney, R., Brew, C., Chien, L.F., Kirchhoff, K., Eds.; 2005; pp. 331–338. [Google Scholar]
  6. Juraska, J.; Bowden, K.; Walker, M. ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation. In Proceedings of the 12th International Conference on Natural Language Generation, Tokyo, Japan, 29 October–1 November 2019; van Deemter, K., Lin, C., Takamura, H., Eds.; 2019; pp. 164–172. [Google Scholar] [CrossRef]
  7. Gardent, C.; Shimorina, A.; Narayan, S.; Perez-Beltrachini, L. Creating Training Corpora for NLG Micro-Planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, BC, Canada, 30 July–4 August 2017; Barzilay, R., Kan, M.Y., Eds.; 2017; pp. 179–188. [Google Scholar] [CrossRef]
  8. Parikh, A.; Wang, X.; Gehrmann, S.; Faruqui, M.; Dhingra, B.; Yang, D.; Das, D. ToTTo: A Controlled Table-To-Text Generation Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; Webber, B., Cohn, T., He, Y., Liu, Y., Eds.; 2020; pp. 1173–1186. [Google Scholar] [CrossRef]
  9. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
  10. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J., Eds.; 2020; pp. 7871–7880. [Google Scholar] [CrossRef]
  11. Yu, F.; Xiu, X.; Li, Y. A Survey on Deep Transfer Learning and Beyond. Mathematics 2022, 10, 3619. [Google Scholar] [CrossRef]
  12. Lee, M. A Mathematical Interpretation of Autoregressive Generative Pre-Trained Transformer and Self-Supervised Learning. Mathematics 2023, 11, 2451. [Google Scholar] [CrossRef]
  13. Holtzman, A.; Buys, J.; Du, L.; Forbes, M.; Choi, Y. The Curious Case of Neural Text Degeneration. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  14. Wiher, G.; Meister, C.; Cotterell, R. On Decoding Strategies for Neural Text Generators. Trans. Assoc. Comput. Linguist. 2022, 10, 997–1012. [Google Scholar] [CrossRef]
  15. Zarrieß, S.; Voigt, H.; Schüz, S. Decoding Methods in Neural Language Generation: A Survey. Information 2021, 12, 355. [Google Scholar] [CrossRef]
  16. Welleck, S.; Kulikov, I.; Kim, J.; Pang, R.Y.; Cho, K. Consistency of a Recurrent Language Model with Respect to Incomplete Decoding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; Webber, B., Cohn, T., He, Y., Liu, Y., Eds.; 2020; pp. 5553–5568. [Google Scholar] [CrossRef]
  17. Harkous, H.; Groves, I.; Saffari, A. Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; Scott, D., Bel, N., Zong, C., Eds.; 2020; pp. 2410–2424. [Google Scholar] [CrossRef]
  18. Puduppully, R.; Dong, L.; Lapata, M. Data-to-Text Generation with Content Selection and Planning. Proc. AAAI Conf. Artif. Intell. 2019, 33, 6908–6915. [Google Scholar] [CrossRef]
  19. Puduppully, R.; Dong, L.; Lapata, M. Data-to-text Generation with Entity Modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Korhonen, A., Traum, D., Màrquez, L., Eds.; 2019; pp. 2023–2035. [Google Scholar] [CrossRef]
  20. Welleck, S.; Kulikov, I.; Roller, S.; Dinan, E.; Cho, K.; Weston, J. Neural Text Generation with Unlikelihood Training. arXiv 2019, arXiv:1908.04319. [Google Scholar]
  21. Garneau, N.; Lamontagne, L. Guided Beam Search to Improve Generalization in Low-Resource Data-to-Text Generation. In Proceedings of the 16th International Natural Language Generation Conference, Prague, Czechia, 11–15 September 2023; Keet, C.M., Lee, H.Y., Zarrieß, S., Eds.; 2023; pp. 1–14. [Google Scholar] [CrossRef]
  22. Eikema, B.; Aziz, W. Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; Scott, D., Bel, N., Zong, C., Eds.; 2020; pp. 4506–4520. [Google Scholar] [CrossRef]
  23. Fan, A.; Lewis, M.; Dauphin, Y. Hierarchical Neural Story Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 15–20 July 2018; Gurevych, I., Miyao, Y., Eds.; 2018; pp. 889–898. [Google Scholar] [CrossRef]
  24. Tromble, R.; Kumar, S.; Och, F.; Macherey, W. Lattice Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, USA, 25–27 October 2008; Lapata, M., Ng, H.T., Eds.; 2008; pp. 620–629. [Google Scholar]
  25. Müller, M.; Sennrich, R. Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 1–6 August 2021; Zong, C., Xia, F., Li, W., Navigli, R., Eds.; 2021; pp. 259–272. [Google Scholar] [CrossRef]
  26. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.U.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  27. Gong, N.; Yao, N. A generalized decoding method for neural text generation. Comput. Speech Lang. 2023, 81, 101503. [Google Scholar] [CrossRef]
  28. Ippolito, D.; Kriz, R.; Sedoc, J.; Kustikova, M.; Callison-Burch, C. Comparison of Diverse Decoding Methods from Conditional Language Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Korhonen, A., Traum, D., Màrquez, L., Eds.; 2019; pp. 3752–3762. [Google Scholar] [CrossRef]
  29. Kale, M.; Rastogi, A. Text-to-Text Pre-Training for Data-to-Text Tasks. In Proceedings of the 13th International Conference on Natural Language Generation, Dublin, Ireland, 15–18 December 2020; Davis, B., Graham, Y., Kelleher, J., Sripada, Y., Eds.; 2020; pp. 97–102. [Google Scholar] [CrossRef]
  30. Juraska, J.; Walker, M. Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG. In Proceedings of the 14th International Conference on Natural Language Generation, Aberdeen, UK, 20–24 September 2021; Belz, A., Fan, A., Reiter, E., Sripada, Y., Eds.; 2021; pp. 416–431. [Google Scholar] [CrossRef]
  31. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson: London, UK, 2020. [Google Scholar]
  32. Schulz, P.; Aziz, W.; Cohn, T. A Stochastic Decoder for Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 15–20 July 2018; Gurevych, I., Miyao, Y., Eds.; 2018; pp. 1243–1252. [Google Scholar] [CrossRef]
  33. Kumar, S.; Byrne, W. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, Boston, MA, USA, 22–27 April 2004; pp. 169–176. [Google Scholar]
  34. Suzgun, M.; Melas-Kyriazi, L.; Jurafsky, D. Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding. In Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, Toronto, ON, Canada, 9–14 July 2023; Rogers, A., Boyd-Graber, J., Okazaki, N., Eds.; 2023; pp. 4265–4293. [Google Scholar] [CrossRef]
  35. Stahlberg, F.; de Gispert, A.; Hasler, E.; Byrne, B. Neural Machine Translation by Minimising the Bayes-risk with Respect to Syntactic Translation Lattices. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Valencia, Spain, 3–7 April 2017; Lapata, M., Blunsom, P., Koller, A., Eds.; 2017; pp. 362–368. [Google Scholar]
  36. Zeng, H.; Liu, J.; Wang, M.; Wei, B. A sequence to sequence model for dialogue generation with gated mixture of topics. Neurocomputing 2021, 437, 282–288. [Google Scholar] [CrossRef]
  37. Germann, U.; Jahr, M.; Knight, K.; Marcu, D.; Yamada, K. Fast Decoding and Optimal Decoding for Machine Translation. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, Toulouse, France, 6–11 July 2001; pp. 228–235. [Google Scholar] [CrossRef]
  38. Germann, U. Greedy decoding for statistical machine translation in almost linear time. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology—Volume 1, Stroudsburg, PA, USA, 27 May–1 June 2003; NAACL ’03. pp. 1–8. [Google Scholar] [CrossRef]
  39. Moore, B.; Quirk, C. Faster Beam-Search Decoding for Phrasal Statistical Machine Translation. In Proceedings of the MT Summit XI, Copenhagen, Denmark, 10–14 September 2007; European Association for Machine Translation: Sheffield, UK, 2007. [Google Scholar]
  40. Cohen, E.; Beck, C. Empirical Analysis of Beam Search Performance Degradation in Neural Sequence Models. In Proceedings of the 36th International Conference on Machine Learning; Long Beach, CA, USA, 9–15 June 2019, Chaudhuri, K., Salakhutdinov, R., Eds.; Proceedings of Machine Learning Research (PMLR); 2019; Volume 97, pp. 1290–1299. [Google Scholar]
  41. Yang, Y.; Huang, L.; Ma, M. Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and Stopping Criteria for Neural Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; Riloff, E., Chiang, D., Hockenmaier, J., Tsujii, J., Eds.; 2018; pp. 3054–3059. [Google Scholar] [CrossRef]
  42. Meister, C.; Vieira, T.; Cotterell, R. Best-First Beam Search. Trans. Assoc. Comput. Linguist. 2020, 8, 795–809. [Google Scholar] [CrossRef]
  43. Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.J. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA, 6–12 July 2002; Isabelle, P., Charniak, E., Lin, D., Eds.; 2002; pp. 311–318. [Google Scholar] [CrossRef]
  44. Lin, C.Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Proceedings of the Text Summarization Branches Out, Barcelona, Spain, 25–26 July 2004; pp. 74–81. [Google Scholar]
  45. Banerjee, S.; Lavie, A. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, Ann Arbor, MI, USA, June 2005; Goldstein, J., Lavie, A., Lin, C.Y., Voss, C., Eds.; 2005; pp. 65–72. [Google Scholar]
  46. Vedantam, R.; Zitnick, C.L.; Parikh, D. CIDEr: Consensus-based image description evaluation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4566–4575. [Google Scholar] [CrossRef]
  47. Li, J.; Lan, Y.; Guo, J.; Cheng, X. On the relation between quality-diversity evaluation and distribution-fitting goal in text generation. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020. JMLR.org, ICML’20. [Google Scholar]
  48. van der Lee, C.; Gatt, A.; van Miltenburg, E.; Krahmer, E. Human evaluation of automatically generated text: Current trends and best practice guidelines. Comput. Speech Lang. 2021, 67, 101151. [Google Scholar] [CrossRef]
  49. van der Lee, C.; Gatt, A.; van Miltenburg, E.; Wubben, S.; Krahmer, E. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, Tokyo, Japan, 29 October–1 November 2019; van Deemter, K., Lin, C., Takamura, H., Eds.; 2019; pp. 355–368. [Google Scholar] [CrossRef]
  50. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; Liu, Q., Schlangen, D., Eds.; 2020; pp. 38–45. [Google Scholar] [CrossRef]
  51. Page, L.; Brin, S.; Motwani, R.; Winograd, T. The PageRank Citation Ranking: Bringing Order to the Web; Technical Report 1999-66, Previous number = SIDL-WP-1999-0120; Stanford InfoLab: Stanford, CA, USA, 1999. [Google Scholar]
  52. Wang, W.; Zhang, Z.; Guo, J.; Dai, Y.; Chen, B.; Luo, W. Task-Oriented Dialogue System as Natural Language Generation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’22, New York, NY, USA, 11–15 July 2022; pp. 2698–2703. [Google Scholar] [CrossRef]
  53. Peng, B.; Zhu, C.; Li, C.; Li, X.; Li, J.; Zeng, M.; Gao, J. Few-shot Natural Language Generation for Task-Oriented Dialog. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2020, Online, 16–20 November 2020; Cohn, T., He, Y., Liu, Y., Eds.; 2020; pp. 172–182. [Google Scholar] [CrossRef]
  54. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; Conference Track Proceedings. Bengio, Y., LeCun, Y., Eds.; 2015. [Google Scholar]
  55. Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; Moschitti, A., Pang, B., Daelemans, W., Eds.; 2014; pp. 1724–1734. [Google Scholar] [CrossRef]
  56. Dušek, O.; Novikova, J.; Rieser, V. Findings of the E2E NLG Challenge. In Proceedings of the 11th International Conference on Natural Language Generation, Tilburg, The Netherlands, 5–8 November 2018; Krahmer, E., Gatt, A., Goudbeek, M., Eds.; 2018; pp. 322–328. [Google Scholar] [CrossRef]
Figure 1. The high-level operation of the DecoStrat framework, demonstrated with a T5-base language model trained on the MultiWOZ dataset. Given input data MR describing a booking request, the trained model generates six alternative outputs, each corresponding to one of the six decoding methods D. These outputs are evaluated against a reference text to determine the best output, indicating the optimal decoding method for the specific D2T generation task.
Figure 2. The DecoStrat system architecture, illustrating how a trained language model LM interacts with the main modules at the inference stage to produce an output. The Director calls the Generator with input data X, the LM, and decoding method d, and receives the generated output(s) in return. After receiving the required inputs, the Manager determines the decoding-method category and applies the corresponding selection strategy, sending the output directly to the Director for a single-output method s or involving the Ranker and Selector modules for a multiple-outputs method m. The subsequent sections present a detailed explanation of each module's functionality and notation.
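To complement Figure 2, the following minimal sketch outlines the Director, Generator, Manager, Ranker, and Selector flow in plain Python. The single-output versus multiple-outputs distinction follows the caption, but the concrete signatures and logic are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the inference-time flow in Figure 2, assuming that
# single-output methods bypass ranking while multiple-output methods
# (e.g., the MBR variants) pass through the Ranker and Selector.
from typing import Callable, List

def generator(lm: Callable[[str, str], List[str]], x: str, d: str) -> List[str]:
    """Generator: run the trained language model LM on input X with method d."""
    return lm(x, d)

def ranker(candidates: List[str], utility: Callable[[str, str], float]) -> List[float]:
    """Ranker: score each candidate against the other candidates."""
    return [sum(utility(c, other) for other in candidates if other is not c)
            for c in candidates]

def selector(candidates: List[str], scores: List[float]) -> str:
    """Selector: return the highest-scoring candidate."""
    return max(zip(candidates, scores), key=lambda pair: pair[1])[0]

def manager(candidates: List[str], category: str,
            utility: Callable[[str, str], float]) -> str:
    """Manager: apply the selection strategy for the decoding-method category."""
    if category == "single" or len(candidates) == 1:
        return candidates[0]
    return selector(candidates, ranker(candidates, utility))

def director(lm, x: str, d: str, category: str, utility) -> str:
    """Director: orchestrate Generator and Manager to produce the final output."""
    return manager(generator(lm, x, d), category, utility)
```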
Figure 3. Performance comparison across all models and approaches. Green bars indicate the best performance, while yellow bars indicate relatively poor performance.
Table 5. MultiWOZ sample paired MR and Ref data along with linearized MR.
MR: Restaurant-Select(), Booking-Inform(), Restaurant-Inform(Choice[three], Food[Chinese])
Linearized MR: topic = restaurant | intent = select | topic = booking | intent = inform | topic = restaurant | intent = inform | choice = three | food = chinese
Ref: I found three Chinese restaurants that meet your requests. Would you like for me to book a table at one of them?
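The linearized MR in Table 5 can be produced mechanically from the dialogue acts. The helper below is a sketch of one such linearization, assuming the MR is available as (domain-intent, slots) pairs; it reproduces the Table 5 example but is not necessarily the preprocessing script used in the experiments.

```python
# Sketch of linearizing a MultiWOZ-style MR into the format shown in Table 5.
# The input representation (a list of (domain-intent, slots) pairs) is an
# assumption made for illustration.

def linearize_mr(dialogue_acts):
    """Flatten dialogue acts into 'topic = ... | intent = ... | slot = value' form."""
    parts = []
    for act, slots in dialogue_acts:
        topic, intent = act.lower().split("-")
        parts.append(f"topic = {topic}")
        parts.append(f"intent = {intent}")
        for slot, value in slots.items():
            parts.append(f"{slot.lower()} = {value.lower()}")
    return " | ".join(parts)

print(linearize_mr([("Restaurant-Select", {}),
                    ("Booking-Inform", {}),
                    ("Restaurant-Inform", {"Choice": "three", "Food": "Chinese"})]))
# topic = restaurant | intent = select | topic = booking | intent = inform |
# topic = restaurant | intent = inform | choice = three | food = chinese
```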
Table 6. Model specifications and the training parameters used for experiments.
Models            | BART-Base | T5-Base   | T5-Small
Layers            | 6 + 6     | 12 + 12   | 6 + 6
Heads             | 12        | 12        | 8
Hidden State Size | 768       | 768       | 512
Parameters        | 139 M     | 220 M     | 60 M
Batch Size        | 32        | 64        | 64
Learning Rate     | 1 × 10^−5 | 3 × 10^−5 | 2 × 10^−4
Epochs            | 25        | 30        | 30
Table 7. Best model checkpoints achieved during the training process.
Model Checkpoints | Validation BLEU | Epoch and Step
BART-base         | 0.356           | epoch 21, step 1749
T5-base           | 0.360           | epoch 22, step 1749
T5-small          | 0.360           | epoch 27, step 875
Table 8. DecoStrat decoding methods and the generation parameters used for experiments.
Parameters | GS  | BS  | SS  | MBRs   | MBRr   | MBRj
k          | 1   | 1   | 1   | 10     | 10     | 10
max_length | 160 | 160 | 160 | 160    | 160    | 160
num_beams  | 1   | 5   | NA  | NA     | NA     | NA
top_p      | NA  | NA  | 0.7 | 0.7    | 0.7    | 0.7
top_k      | NA  | NA  | 30  | 30     | 30     | 30
S          | NA  | NA  | NA  | select | rank   | joint
T          | NA  | NA  | NA  | NA     | 1      | 3
U          | NA  | NA  | NA  | BLEURT | BLEURT | BLEURT
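The MBR settings in Table 8 (k = 10 sampled candidates, utility U = BLEURT) amount to sampling k hypotheses and keeping the one with the highest expected utility against the others. The sketch below illustrates that selection step with a pluggable utility function; the toy overlap score merely stands in for BLEURT, and the strategy S and parameter T from Table 8 are not modeled here.

```python
# Sketch of the MBR selection step implied by Table 8: given k sampled
# candidates, pick the one with the highest average utility against the rest.
# The utility is pluggable; Table 8 uses BLEURT, which the toy overlap
# function below only approximates for illustration.
from typing import Callable, List

def mbr_select(candidates: List[str],
               utility: Callable[[str, str], float]) -> str:
    """Return the candidate maximizing expected utility over the candidate pool."""
    def expected_utility(hyp: str) -> float:
        others = [c for c in candidates if c is not hyp]
        return sum(utility(hyp, ref) for ref in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

def overlap(hyp: str, ref: str) -> float:
    """Crude token-overlap utility standing in for BLEURT."""
    h, r = set(hyp.lower().split()), set(ref.lower().split())
    return len(h & r) / max(len(h | r), 1)

samples = ["there are three chinese restaurants available",
           "i found three chinese restaurants for you",
           "three chinese places match your request"]
print(mbr_select(samples, overlap))
```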
Table 9. DecoStrat with T5-small model and alternative decoding methods evaluated on MultiWOZ dataset and compared against K&R [29], beam search with slot aligner reranking (SA) and semantically attention-guided decoding (SG) [30] baselines.
Model       | B     | M     | R     | C    | SER
K&R         | 0.346 | NA    | NA    | NA   | 1.27
SA          | 0.360 | 0.323 | NA    | NA   | 0.63
SG          | 0.360 | 0.323 | NA    | NA   | 0.85
T5-s + GS   | 0.349 | 0.321 | 0.518 | 2.84 | 1.99
T5-s + BS   | 0.353 | 0.325 | 0.529 | 2.89 | 1.94
T5-s + SS   | 0.350 | 0.322 | 0.518 | 2.85 | 1.95
T5-s + MBRs | 0.353 | 0.325 | 0.526 | 2.84 | 1.63
T5-s + MBRr | 0.349 | 0.323 | 0.523 | 2.82 | 1.99
T5-s + MBRj | 0.356 | 0.324 | 0.523 | 2.83 | 1.91
Boldface fonts denote best-performing results.
Table 10. DecoStrat with T5-base model and alternative decoding methods evaluated on MultiWOZ dataset and compared against K&R [29] baseline.
Model       | B     | M     | R     | C    | SER
K&R         | 0.351 | NA    | NA    | NA   | 1.27
T5-b + GS   | 0.351 | 0.322 | 0.519 | 2.85 | 1.98
T5-b + BS   | 0.354 | 0.327 | 0.531 | 2.92 | 1.74
T5-b + SS   | 0.345 | 0.321 | 0.516 | 2.84 | 1.47
T5-b + MBRs | 0.351 | 0.324 | 0.519 | 2.85 | 1.28
T5-b + MBRr | 0.345 | 0.323 | 0.518 | 2.83 | 1.60
T5-b + MBRj | 0.348 | 0.324 | 0.522 | 2.86 | 1.47
Boldface fonts denote best-performing results.
Table 11. DecoStrat with BART-base model and alternative decoding methods evaluated on MultiWOZ dataset and compared against SA and SG [30] baselines.
Model       | B     | M     | R     | C    | SER
SA          | 0.364 | 0.324 | NA    | NA   | 0.60
SG          | 0.363 | 0.323 | NA    | NA   | 0.72
Bart + GS   | 0.354 | 0.324 | 0.526 | 2.92 | 1.61
Bart + BS   | 0.359 | 0.328 | 0.536 | 2.96 | 1.34
Bart + SS   | 0.359 | 0.329 | 0.536 | 2.96 | 1.41
Bart + MBRs | 0.359 | 0.328 | 0.536 | 2.96 | 1.41
Bart + MBRr | 0.359 | 0.328 | 0.536 | 2.96 | 1.41
Bart + MBRj | 0.358 | 0.328 | 0.536 | 2.96 | 1.41
Boldface fonts denote best-performing results.