Article

HFCVO-DMN: Henry Fuzzy Competitive Verse Optimizer-Integrated Deep Maxout Network for Incremental Text Classification

School of Engineering and Science, G. D. Goenka University, Gurugram, Goenka Educational City, Sohna-Gurgaon Rd., Sohna 122103, Haryana, India
*
Author to whom correspondence should be addressed.
Computation 2023, 11(1), 13; https://doi.org/10.3390/computation11010013
Submission received: 22 October 2022 / Revised: 25 November 2022 / Accepted: 28 November 2022 / Published: 11 January 2023

Abstract

One of the effectual text classification approaches for learning extensive information is incremental learning. The big issue that arises is enhancing the accuracy, as the text comprises a large number of terms. In order to address this issue, a new incremental text classification approach is designed using the proposed hybrid optimization algorithm, named the Henry Fuzzy Competitive Multi-verse Optimizer (HFCVO), with a Deep Maxout Network (DMN). Here, the optimal features are selected using Invasive Weed Tunicate Swarm Optimization (IWTSO), which is devised by integrating Invasive Weed Optimization (IWO) and the Tunicate Swarm Algorithm (TSA). The incremental text classification is effectively performed using the DMN, where the classifier is trained utilizing the HFCVO. The developed HFCVO is derived by incorporating the features of Henry Gas Solubility Optimization (HGSO) and the Competitive Multi-verse Optimizer (CMVO) with fuzzy theory. The proposed HFCVO-based DMN achieved a maximum TPR of 0.968, a maximum TNR of 0.941, a low FNR of 0.032, a high precision of 0.954, and a high accuracy of 0.955.

1. Introduction

In text analysis, organizing documents becomes a crucial and challenging task due to the continuous arrival of numerous texts [1]. Specifically, text data exhibit certain characteristics, such as being fuzzy and structure-less, that make the mining procedure somewhat complex for data mining techniques. Text mining is widely utilized in large-scale applications, such as visualization, database technology, text evaluation, clustering [2,3], data retrieval, extraction, classification, and data mining [4,5,6]. Hence, the multi-disciplinary [7] nature of text mining makes the field even more attractive to researchers. Over decades of information advancement, large volumes of essential data have accumulated, and data collection is increasingly accomplished by means of the internet. Generally, the information on the WWW exists as text, and, therefore, the collection of desired knowledge from these data is a challenging process. In addition, the normal processing of the data is also a major hurdle. Such limitations are addressed by introducing a text classification approach [6]. The tremendous development of the internet has greatly increased the number of texts available online. One of the significant areas of data retrieval advancement is text categorization, in which documents are classified into pre-defined types based on their contents [8,9,10].
Text categorization usually eases the task of detecting fake documents, filtering spam emails, evaluating sentiments, and highlighting the contents [11]. Text categorization is employed in the areas of email filtering [12], content categorization [13], spam message filtering, author detection, sentiment evaluation, and web page [14] categorization. The text is further classified depending upon features refined from texts and it has been considered an essential process in supervised ML [10]. In text classification, useful information is refined from diverse online textual documents. Text classification is employed in large-scale applications [15,16], such as spam filtering, news monitoring, authorized email filtering, and data searching, that are utilized on the internet. The main intention of text categorization is to classify the text into a group of types by refining significant data from unstructured textual utilities [17]. The abundant data in text documents mean that text mining is a challenging task. The text mining process makes use of the linguistic properties that are extracted from the text. Different methods [18] are created to satisfy text categorization needs and improve the efficacy of the model [10].
Text classification is a commonly utilized technique for arranging large-scale documents and has numerous applications, such as filtering and text retrieval. To train a high-quality system, text classification typically uses a supervised or semi-supervised approach and requires a sufficient number of annotated texts. Numerous applications [19,20] may call for diverse annotated documents in varied contexts; as a result, domain experts are needed to annotate vast numbers of texts. Nevertheless, text labeling is a process that consumes a lot of time. Hence, deriving annotated documents of a high standard to guide an effective classifier is a challenging process in text categorization [1]. The widely employed ML [21] techniques for text classification are NB, SVM, AC, KNN [22], and RF [10]. In order to represent the documents, an interactive visual analytics system was introduced in [1] for incremental classification. So far, diverse methods have been employed for incremental learning, such as neural networks [23] and decision trees. Incremental learning performs text classification based on knowledge accumulation and management. For newly arriving data, the system updates its hypothesis without re-evaluating previous information. Hence, employing this learning approach for text classification is temporally, as well as spatially, economical. When training data accumulate over a long duration, the utilization of an incremental learner can be cost-effective. The widely utilized text classification approaches are NNs, derived probability classification, K-NN, SVM, boosting classifiers, and so on [6].
The fundamental aspect of this work is to establish an efficient methodology for incremental text classification utilizing an HFCVO-enabled DMN.
The major contribution is given as follows:
Proposed HFCVO-based DMN: An efficacious strategy for incremental text classification is designed using the proposed HFCVO-based DMN. Moreover, the optimal features are selected using IWTSO, such that the performance of incremental text classification is enhanced.
The remaining portion of the study is structured as follows: Section 2 discusses the literature review of traditional methods for incremental text classification, as well as their advantages and shortcomings. In Section 3, the incremental text classification utilizing the suggested HFCVO-based DMN is explained. Section 4 presents the results and discussion of the created HFCVO-based DMN, and Section 5 concludes the research article.

2. Literature Survey

Various techniques associated with incremental classification are described as follows. V. Srilakshmi et al. [10] designed an approach named the SGrC-based DBN for offering the best text categorization outcomes. The developed approach consisted of five steps; here, features were refined from the vector space model. Thereafter, feature selection was performed depending on mutual information. Finally, text classification was carried out using the DBN, where the network classifier was trained using the developed SGrC. The developed SGrC was obtained by integrating the characteristics of SMO with the GCOA algorithm. Moreover, the best weights of the Range degree system were chosen depending upon SGrC. The developed model proved superior to the existing approaches. Nevertheless, the approach failed to extend the model to web page and mail categorization. Guangxu Shan et al. [24] modeled a novel incremental learning model referred to as Learn#, and this model consisted of four modules, namely, a student module, an RL module, a teacher module, and a discriminator module. Here, features were extracted from the texts using the student models, whereas the outcomes of the diverse student models were filtered using the RL module. In order to achieve the final text categories, the teacher module reclassified the filtered results. Based on a similarity measure, the discriminator module filtered the student models. The major advantage of this developed model was that it achieved a shorter training time. However, the method only used LSTMs as student models and failed to utilize other models, such as logistic regression, SVMs, decision trees, and random forests. Mamta Kayest and Sanjay Kumar Jain [25] modeled an incremental learning model by employing an MB–FF-based NN. Here, feature refining was done utilizing TF–IDF from the pre-processed result, whereas holoentropy was used to determine the keywords from the text. Thereafter, cluster-based indexing was carried out utilizing the MB–FF algorithm, and the final task was the matching process, which was done using a modified Bhattacharya distance measure. Furthermore, the weights in the NN were chosen by employing the MB–FF algorithm. The demonstration results proved that the developed MB–FF-based NN was significant for organizations operating at a large scale across the world. The NN utilized in this model was well suited to continuous, as well as discrete, data. However, the computational burden of this method was very high. Joel Jang et al. [26] devised an effective training model named ST, independent of the capabilities of the representation model, that enforced an incremental learning setup by partitioning the text into subsets so that the learner was trained effectively. Meanwhile, the complications within incremental learning were easily solved using elastic weight consolidation. The method offered reliable results in solving data skewness in text classification. However, the ensemble method failed to train multiple weak learners.
Nihar M. Ranjan and Midhun Chakkaravarthy [27] developed an effectual framework of a text document classifier for unstructured data. NN approaches were utilized to update the weights. Furthermore, the COA was employed to reduce errors and improve the accuracy level. In order to minimize the size of the feature space, an entropy system was adopted. The developed system relied purely on an incremental learning approach, where the classifier classified upcoming data without prior knowledge of earlier data. However, the method failed to deal with imbalanced data. Yuyu Yan et al. [1] presented Gibbs MedLDA. Here, the model generated topics as a summary of the text collection, which enabled users to logically explore the text collection and locate labels for creation. A scatter plot and the classifier boundary were included in order to show the classifier’s weights. Gibbs MedLDA still did not meet the demands of real-world implementation; moreover, it failed to arrange contents into a hierarchy and develop novel visual encodings to highlight hierarchical contents. N. Venkata Sailaja et al. [6] devised a novel method for incremental text classification by adopting an SVNN. The generated model had four basic steps. Pre-processing was carried out here using stop word removal and stemming techniques. Following feature refinement, TF–IDF was retrieved together with semantic word-based features. Additionally, the Bhattacharya distance measure was used to select the right features. Finally, the SVNN was used to carry out the classification, and a rough set moth search was used to select the optimal weights. The developed Rough set MS-SVNN failed to improve accuracy, and this should be investigated in future research. V. Srilakshmi et al. [28] designed a novel strategy named the SG–CAV-based DBN for text categorization. To train the DBN, the modeled SG–CAV, created by integrating conditional autoregressive value at risk and stochastic gradient descent, was used. Although the constructed model achieved a high level of accuracy, it did not further enhance the classification performance.

Major Challenges

Some of the challenges confronted by conventional techniques are deliberated below:
  • In [1], the technique required a large amount of time for classifying the text labels. When an unannotated text collection is given, it is very complex for users to identify which labels to produce and how to construct the very first training set for categorization.
  • Most of the neural classifiers failed to account for the possibility of a complex environment. This may cause a sudden failure of trained neural networks, resulting in insufficient classification. Hence, most of the neural networks faced the limitations of inefficient classification and an incapability of learning newly arrived unknown classes.
  • The SGrC-based DBN developed in [10] provided accurate outcomes for text categorization, but it was not capable of performing tasks such as web page classification and email classification.
  • The computational complexity of the Rough set MS-SVNN was high, and its accuracy must be enhanced [6].
  • The connectionist-based categorization method considered a dynamic dataset for categorization purposes, such that the network had enough potential to learn the model upon the arrival of a new database. However, the method had an ineffective similarity measure [29].

3. Proposed Method for Incremental Text Classification Using Henry Fuzzy Competitive Verse Optimizer

The ultimate aim is to design an approach for incremental text classification by exploiting the proposed HFCVO-based DMN. The pre-processing module receives the input text first and uses stop word removal and stemming to eliminate redundant information and increase the precision of text classification. Feature extraction is then accomplished using TF–IDF, wordnet-based features, co-occurrence-based features, and contextual words, so that the necessary features are refined from the pre-processed data. The best features are selected from the extracted features using the IWTSO hybrid optimization algorithm, in which IWO and the TSA are combined. Finally, incremental text categorization is carried out using the newly derived HFCVO optimization method, which is created by integrating HGSO, fuzzy theory, and CMVO. When incremental data arrive, the same process is repeated, and the error is computed for both the original and the incremental data. If the error determined for the incremental data is less than the error of the original data, the weights are bounded based on the fuzzy bound approach, and the optimal weights are updated using the proposed HFCVO. Figure 1 illustrates the schematic structure of the proposed HFCVO-based DMN for incremental text classification.

3.1. Acquisition of Input Text Data

Let us consider the training dataset $D$ with $m$ training samples, expressed as,
$D = \{ D_1, D_2, \ldots, D_i, \ldots, D_m \}$
where $m$ indicates the overall count of text documents and $D_i$ denotes the $i$th input document.

3.2. Pre-Processing Using Stop Word Removal and Stemming

The input text data $D_i$ are forwarded to the pre-processing phase, where redundant words are eliminated from the text data by employing two processes, namely, stop word removal and stemming. The occurrence of noise in the text data is due to their unstructured nature, and it is very necessary to eliminate the noise and redundant information from the input data before performing the classification task. At this phase, data in an unstructured format are transformed into a structured text representation for easy processing.

3.2.1. Stop Word Removal

Stop word removal is a step in the natural language processing pipeline; stop words are words such as articles, prepositions, and pronouns. Generally, words that do not carry any specific meaning are considered stop words. Stop word removal is the technique of eliminating such unwanted or redundant words from a large database. It removes frequently occurring words that have no specific importance.

3.2.2. Stemming

By removing prefixes and suffixes, the stemming mechanism breaks words down to their stem, i.e., the root or base of the word. Stemming is an essential technique used to reduce words to their underlying root forms; in this way, many word variants are mapped to a common base form. The pre-processed output of the input data $D_i$ is denoted as $P_i$, which then undergoes the feature extraction procedure. Once the pre-processing step is finished, the text in the text document is ready for further processing.
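To make the pre-processing step concrete, the following is a minimal Python sketch of stop word removal followed by stemming; the stop-word list and the suffix-stripping rule are illustrative assumptions rather than the exact resources used in this work (a production pipeline would typically use a full stop-word list and a Porter-type stemmer).

```python
# Minimal pre-processing sketch (assumed helpers, not the authors' exact pipeline):
# lowercase the text, drop a small illustrative stop-word list, and apply a naive
# suffix-stripping stemmer in place of a full Porter/Snowball stemmer.
import re

STOP_WORDS = {"a", "an", "the", "is", "am", "are", "of", "in", "on", "and", "to"}
SUFFIXES = ("ing", "edly", "ed", "es", "s", "ly")

def naive_stem(word: str) -> str:
    # Strip the first matching suffix if the remaining stem is long enough.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"[a-z]+", text.lower())          # tokenization
    tokens = [t for t in tokens if t not in STOP_WORDS]   # stop word removal
    return [naive_stem(t) for t in tokens]                # stemming

print(preprocess("The classifiers are trained using the selected features"))
# ['classifier', 'train', 'using', 'select', 'featur']
```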

3.3. Feature Extraction

The pre-processed output $P_i$ is regarded as the input to the feature extraction procedure, through which the important features are discovered. The features comprise wordnet-based features, co-occurrence-based features, TF–IDF-based features, and contextual-based features.

3.3.1. Wordnet-Based Features

Wordnet [30] is a frequently employed lexical resource for NLP tasks such as text categorization, information retrieval, and text summarization. It is a network of concepts in the form of word nodes, arranged using the semantic relations between words according to their meaning. A semantic relationship is a relation between concepts: each node consists of a group of words, known as a synset, that represents a real-world concept, and pointers connect the synsets. WordNet is used here to determine the synsets from the pre-processed text data $P_i$. The feature extracted from wordnet-based features is specified as $F_1$.
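As an illustration of how synsets can be pulled from the pre-processed tokens, the sketch below uses the NLTK interface to WordNet; NLTK and its 'wordnet' corpus are assumed to be installed, and taking only the first synset per token is a simplification.

```python
# A hedged sketch of extracting WordNet-based features (assumes NLTK and its
# 'wordnet' corpus are available: pip install nltk; nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def wordnet_feature(tokens):
    """Return, per token, the lemma names of its first synset (empty if none)."""
    features = {}
    for token in tokens:
        synsets = wn.synsets(token)
        features[token] = synsets[0].lemma_names() if synsets else []
    return features

print(wordnet_feature(["document", "classification"]))
```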

3.3.2. Co-Occurrence-Based Features

The co-occurrence feature is defined over term sets or item sets; it describes how often a set of terms occurs together in the text repository and is computed for pairs of words. It is represented as,
$F_2 = \frac{C_{ab}}{Z_b}$
where $C_{ab}$ implies the co-occurrence frequency of the words $a$ and $b$, and $Z_b$ denotes the frequency of the word $b$. Moreover, $F_2$ represents the feature obtained from the co-occurrence features.
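A small sketch of the co-occurrence feature $F_2 = C_{ab}/Z_b$ is given below; counting co-occurrences inside a fixed-size sliding window is an assumption made for illustration, since the exact counting scheme is not specified above.

```python
# Sketch of the co-occurrence feature F2 = C_ab / Z_b over a token list, counting
# pairs that fall inside a small (overlapping) sliding window; window size is assumed.
from collections import Counter
from itertools import combinations

def cooccurrence_feature(tokens, a, b, window=3):
    pair_counts, word_counts = Counter(), Counter(tokens)
    for start in range(len(tokens)):
        for w1, w2 in combinations(tokens[start:start + window], 2):
            pair_counts[frozenset((w1, w2))] += 1
    c_ab = pair_counts[frozenset((a, b))]                        # co-occurrence frequency of a and b
    return c_ab / word_counts[b] if word_counts[b] else 0.0      # F2

tokens = "deep network classifies text and deep network learns text".split()
print(cooccurrence_feature(tokens, "deep", "network"))
```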

3.3.3. TF–IDF

TF–IDF consists of two parts, TF and IDF, in which TF finds the frequency of individual words, and IDF reflects how many text documents a word appears in. If a word, such as ‘are’, ‘is’, or ‘am’, occurs in many texts, then its IDF value is low. Conversely, if a word occurs in only a small number of texts, then its IDF value is high. Hence, IDF is widely utilized to determine the importance of words [31]. Let TF specify the word frequency; it is represented as
$TF = \frac{N_d}{N}$
where $N_d$ specifies the count of entries of the word $d$ in each class and $N$ indicates the overall count of entries.
The IDF implies the inverse text frequency and it is computed below,
$IDF_d = \log\left( \frac{N + 1}{N_d + 1} \right) + 1$
where $N$ implies the total count of texts available in the corpus and $N_d$ symbolizes the overall count of texts in the repository that contain the word $d$. Accordingly, TF–IDF is given by,
$F_3 = TF\text{-}IDF_d = TF \times IDF_d$
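The following sketch computes the TF–IDF feature exactly as defined above; note that the symbols $N$ and $N_d$ are read here as within-document counts for TF and as corpus-level counts for IDF, which is how the formulas are interpreted in this sketch.

```python
# Sketch of the TF-IDF feature as defined above: TF = N_d / N within a document,
# IDF = log((N + 1) / (N_d + 1)) + 1 over the corpus, and F3 = TF * IDF.
import math

def tf_idf(term, document_tokens, corpus):
    tf = document_tokens.count(term) / len(document_tokens)         # N_d / N (within-document)
    n_docs_with_term = sum(term in doc for doc in corpus)           # documents containing the term
    idf = math.log((len(corpus) + 1) / (n_docs_with_term + 1)) + 1  # smoothed IDF
    return tf * idf                                                 # F3

corpus = [["deep", "maxout", "network"], ["text", "classification"], ["deep", "learning"]]
print(tf_idf("deep", corpus[0], corpus))
```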

3.3.4. Contextual-Based Features

The context-based technique determines the correlated words by isolating them from the non-correlated text in order to achieve efficient categorization results. This requires finding the key terms that carry semantic meaning and the context terms that provide correlative context. The key terms act as indicators of a correlated document, whereas the context terms [29] act as validators. The validators are used to assess whether the determined key terms are genuine indicators or not. Hence, the technique separates the correlated and non-correlated documents in the training dataset. Generally, the key terms are identified utilizing a language modeling technique adopted from data retrieval.
Let us consider $D$ with $D_{rel}$ as the relevant documents and $D_{non\_rel}$ as the non-relevant documents. Let us consider $x_G$ and $x_D$ as the context term and the key term, respectively.
(i) Key term identification: The language model for this approach is specified as $L$ and the key-term score is determined as follows,
$G = L_{rel} - L_{non\_rel}$
where $L_{rel}$ and $L_{non\_rel}$ represent the language models for $D_{rel}$ and $D_{non\_rel}$, respectively.
(ii) Context term identification: After identifying the fundamental terms, the method starts to perform the context term detection for each key term individually.
The procedures followed by the mechanism of context term determination are defined as follows:
Step 1: Enumerate all key term occurrences in both the relevant and non-relevant documents $D_{rel}$ and $D_{non\_rel}$.
Step 2: Apply a sliding window of dimension $M$; the terms close to $x_D$ are refined as context terms. The window dimension $M$ is specified as the context length.
Step 3: The obtained relevant and non-relevant terms are denoted as $d_r$ and $d_{nr}$, respectively. The sets of relevant and non-relevant documents are specified as $Q_{d\_r}$ and $Q_{d\_nr}$.
Step 4: The score is evaluated for the individual term and it is expressed as follows,
$G(x_G) = \frac{L_{Q_{d\_r}}(x_G) - L_{Q_{d\_nr}}(x_G)}{M}$
where $L_{Q_{d\_r}}(x_G)$ denotes the language model for the relevant document set, whereas $L_{Q_{d\_nr}}(x_G)$ represents the language model for the non-relevant document set.
The feature extracted from contextual-based features is $F_4$ and it is given by,
$F_4 = G + G(x_G)$
The features extracted from the text data are collectively indicated as $F_i$, where $F_i = \{ F_1, F_2, F_3, F_4 \}$.
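A hedged sketch of the contextual feature is shown below: the "language models" are plain unigram relative frequencies, key terms are scored by the gap between the relevant and non-relevant models, and context terms are gathered in a window of size $M$ around each key term; the smoothing and scoring details are assumptions based on the reconstruction of the equations above.

```python
# Hedged sketch of key-term and context-term extraction for the contextual feature.
from collections import Counter

def unigram_model(docs):
    counts = Counter(t for doc in docs for t in doc)
    total = sum(counts.values()) or 1
    return {t: c / total for t, c in counts.items()}

def key_terms(rel_docs, nonrel_docs, top_k=3):
    l_rel, l_non = unigram_model(rel_docs), unigram_model(nonrel_docs)
    scores = {t: l_rel[t] - l_non.get(t, 0.0) for t in l_rel}   # score G per term
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def context_terms(doc, key, M=2):
    positions = [i for i, t in enumerate(doc) if t == key]
    return {doc[j] for i in positions
            for j in range(max(0, i - M), min(len(doc), i + M + 1)) if j != i}

rel = [["neural", "network", "classifies", "text"], ["network", "learns", "text"]]
non = [["weather", "report", "text"]]
keys = key_terms(rel, non)
print(keys, context_terms(rel[0], keys[0]))
```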

3.4. Feature Selection

After refining the desired features, the refined feature set $F_i$ is subjected to feature selection, where significant features are chosen utilizing the developed IWTSO algorithm. IWTSO is devised by integrating the features of IWO [32,33] with the TSA [34]. Merging these optimization algorithms helps to enhance the classification accuracy and results in high-quality text features. The IWO algorithm is a metaheuristic population-based algorithm, which is designed to determine the best solution for a mathematical function through randomness and by mimicking the compatibility of a weed colony. Weeds are plants that are resistant to environmental changes, and the exasperating growth of weeds influences crops. Additionally, this algorithm is inspired by the agricultural setting of colonies of invasive weeds. On the other hand, the TSA is a recent bio-inspired algorithm that mimics the swarm behavior and jet propulsion of tunicates throughout the foraging and navigation phases. Tunicates are bioluminescent marine creatures that produce light which may be seen from a great distance. The weed features of IWO are combined with the swarm behavior and jet propulsion of the TSA to improve the rate of optimization convergence and produce the optimum solution to the optimization problem. The feature set selected by the proposed method is denoted as $S_i$. It is noteworthy that the features chosen by processing the Reuter dataset have a size of 19,043 × 5 from a total count of 19,043 documents. The features selected from the 20Newsgroup dataset have a dimension of 19,997 × 5 from a total count of 19,997 documents, whereas the features chosen from the real-time data have a dimension of 5000 × 5 from a total number of 5000 documents.
Solution encoding: Solution encoding is the representation of the solution vector used to select the best-fit features $S_i$, such that $S_i \subset F_i$. The refined feature set $F_i$ is subjected to feature selection, and the feature subset selected by the proposed method is denoted as $S_i$. Figure 2 shows how the solution encoding is done.
Fitness function: The fitness parameter is exploited to identify the best feature among the set of features by considering the accuracy metric. The expression for accuracy is represented as,
$ACC = \frac{\psi_p + \psi_n}{\psi_p + \psi_n + \phi_p + \phi_n}$
where $\psi_p$ specifies true positives, $\psi_n$ denotes true negatives, $\phi_p$ indicates false positives, and $\phi_n$ implies false negatives.
The algorithmic steps in IWTSO are explained below:
Step 1: Initialization
Let us initialize the population of weeds in the dimensional search space as $K$; the best position of the weed is denoted as $K_{best}$.
Step 2: Compute the fitness function
The fitness parameter is utilized to determine the best solution by choosing the best features from a group of features.
Step 3: Update solution
The updated position of the weed in the improved IWO is expressed as follows,
$K_l^{g+1} = \beta_g K_l^g + \left( K_{best} - K_l^g \right)$
$K_l^{g+1} = K_l^g \left( \beta_g - 1 \right) + K_{best}$
The standard expression of the TSA is computed as,
$K_l^{g+1} = H + B \left( H - rand \cdot K_l^g \right)$
Let us assume $H > K_l^g$. Solving for $H$,
$H = \frac{K_l^{g+1} + B \cdot rand \cdot K_l^g}{1 + B}$
As $H$ is the optimal search agent in the TSA, it can be replaced with the $K_{best}$ of the improved IWO:
$K_l^{g+1} = K_l^g \left( \beta_g - 1 \right) + \frac{K_l^{g+1} + B \cdot rand \cdot K_l^g}{1 + B}$
$K_l^{g+1} = \frac{1 + B}{B} K_l^g \left( \beta_g - 1 \right) + rand \cdot K_l^g$
where
$B = \frac{A}{I}$
$A = c_2 + c_3 - R$
$R = 2 \cdot c_1$
$I = f_{min} + c_1 \left( f_{max} - f_{min} \right)$
Here, $B$ and $A$ imply the resultant vector and the gravity force, respectively, $I$ denotes the social force among the search agents, $R$ specifies the water flow advection, and $rand$ indicates a random value that lies in the limit of $[0, 1]$. Moreover, the random numbers $c_1$, $c_2$, and $c_3$ lie within the limit of $[0, 1]$, and the values of $f_{min}$ and $f_{max}$ are set to 1 and 4, respectively.
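The sketch below applies the reconstructed IWTSO position update; the formulas for $B$, $A$, $R$, and $I$ follow the definitions above, and all parameter values in the example are illustrative.

```python
# Compact sketch of the hybrid IWTSO position update under the reconstruction above:
# B = A / I, A = c2 + c3 - R, R = 2*c1, I = f_min + c1*(f_max - f_min), and
# K_l^{g+1} = ((1 + B)/B) * K_l^g * (beta_g - 1) + rand * K_l^g.
import numpy as np

F_MIN, F_MAX = 1.0, 4.0

def iwtso_update(position, beta_g, rng=np.random.default_rng(0)):
    c1, c2, c3 = rng.random(3)                  # random numbers in [0, 1)
    r = 2.0 * c1                                # water flow advection R
    i = F_MIN + c1 * (F_MAX - F_MIN)            # social force I
    a = c2 + c3 - r                             # gravity force A
    b = a / i                                   # resultant vector B
    rand = rng.random()
    return ((1.0 + b) / b) * position * (beta_g - 1.0) + rand * position

print(iwtso_update(np.array([0.4, 0.7, 0.1]), beta_g=1.2))
```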
Step 4: Determine the feasibility
The fitness function is determined for individual solutions and a solution with the optimal fitness measure is considered as the best solution.
Step 5: Termination
The aforementioned steps are repeated until the best solution so far is achieved. Algorithm 1 elucidates the pseudocode of IWTSO.
Algorithm 1. Pseudocode of proposed IWTSO.
1. Input: $\beta_g$, $K_l^g$
2. Output: $K_l^{g+1}$
3. Initialize the weed population
4. Determine the fitness function
5. Update the solution using Equation (14)
6. Determine the feasibility
7. Termination

3.5. Arrival of New Data

When new data arrive, they are subjected to the pre-processing module, then the feature extraction module, followed by the feature selection module; these steps are explained above in Section 3. The fuzzy-based incremental learning is performed by computing the errors of the original data $D_i$ and of the incremental or newly arrived data $D_{i+1}$.

3.6. Incremental Text Classification Using HFCVO-Based DMN

The selected optimal feature set $S_i$ is passed to the incremental text classification phase, where the classification is done using the Deep Maxout Network. The network is trained by exploiting the developed HFCVO algorithm, such that the optimal weights of the classifier are obtained. By doing so, the text classification performance becomes more accurate.

3.6.1. Architecture of Deep Maxout Network

The DMN [35] is a multi-layer architecture built around a trainable maxout activation function. Let us consider an input $S_i$, which is a raw input vector to a hidden layer. Here, the DMN consists of an input layer, convolutional layers, dropout layers, a max-pooling layer, a dense layer, maxout layers, and an output layer. When an input $S_i$ with a dimension of $1 \times 698$ is fed into the neurons of the input layer, it produces an output of $1 \times 698 \times 50$. The process is continued by the dropout and convolutional layers, alternately. The final dropout layer generates an output with a dimension of $1 \times 75$, which is considered the input to the dense layer. Subsequently, the final output of the DMN is denoted as $U_i$ with a dimension of $1 \times 2$.
The activation function of a hidden unit is mathematically computed as,
$k_{u,v}^1 = \max_{v \in [1, j_1]} S^T w_{uv} + \delta_{uv}$
$k_{u,v}^2 = \max_{v \in [1, j_2]} \left( k_{u,v}^1 \right)^T w_{uv} + \delta_{uv}$
$k_{u,v}^{nn} = \max_{v \in [1, j_{nn}]} \left( k_{u,v}^{nn-1} \right)^T w_{uv} + \delta_{uv}$
$h_u = \max_{v \in [1, j_{nn}]} k_{u,v}^{nn}$
where $j_{mm}$ denotes the total count of units present in the $mm$th layer and $nn$ implies the overall count of layers in the DMN. An arbitrary continuous activation function can be approximated arbitrarily well by the DMN activation function. To mimic the DMN structure, the traditional activation functions ReLU and the absolute value rectifier are utilized. The ReLU was initially considered in RBMs and is expressed as,
$z_u = \begin{cases} S_i, & \text{if } S_i \geq 0 \\ 0, & \text{if } S_i < 0 \end{cases}$
where $S_i$ implies the input, whereas $z_u$ is the output.
The maxout is an extended ReLU, which takes the maximum over $j_j$ trainable linear functions. The output achieved by a maxout unit is formulated below,
$h_u(S) = \max_{v \in [1, j_j]} \mu_{uv}$
In a CNN, the activation of a maxout unit is equivalent to computing $j_j$ feature maps. Although the maxout unit resembles the spatial max-pooling generally employed in CNNs, it instead takes the maximum over the $j_j$ trainable functions. Figure 3 portrays the architecture of the DMN.
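To illustrate the maxout activation described above, the following sketch computes $h_u(S) = \max_v \mu_{uv}$ for a small layer; the tensor shapes are assumptions made only for the example.

```python
# Minimal maxout activation sketch: each unit takes the maximum over j trainable
# affine functions of its input, per h_u(S) = max_v mu_uv above. Shapes are assumed.
import numpy as np

def maxout_layer(x, weights, biases):
    """x: (d,), weights: (units, j, d), biases: (units, j) -> output of shape (units,)."""
    affine = np.einsum("ujd,d->uj", weights, x) + biases   # mu_uv for every unit/piece
    return affine.max(axis=1)                              # maximum over the j pieces

rng = np.random.default_rng(1)
x = rng.random(6)                                          # a toy 6-dimensional feature vector
out = maxout_layer(x, rng.normal(size=(4, 3, 6)), rng.normal(size=(4, 3)))
print(out.shape, out)
```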

3.6.2. Error Estimation

Error estimation of the original data utilized for text classification is determined using the following equation,
$MSE = E_i = \frac{1}{m} \sum_{i=1}^{m} \left( O_i - U_i \right)^2$
where $m$ denotes the total count of samples, and $O_i$ and $U_i$ indicate the targeted output and the result gained from the network classifier (DMN), respectively.

3.6.3. Fuzzy Bound-Based Incremental Learning

When newly arrived data D i + 1 is included in the network, error E i + 1 is computed, and the weights are to be upgraded without learning the earlier occurrence. Then, compare the incremental data error E i + 1 with the original data error E i . If the error value of the incremental data is less than that of the original data, then immediately the weights are bounded based on the fuzzy bound approach, and the optimal weights are updated using the proposed HFCVO.
$\text{if } E_{i+1} > E_i$
$W = W_{i+1} + F_b$
$F_b = \alpha \cdot T_f$
$T_f = 50$
$\alpha = \begin{cases} 0, & V < o \\ \frac{V - o}{\omega - o}, & o < V < \omega \\ \frac{A_a - V}{A_a - \omega}, & \omega < V < A_a \\ 0, & V > A_a \end{cases}$
Based on the measurements given to the factors, α can be achieved and the fuzzy bound is computed.
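A hedged sketch of the fuzzy bound weight update is given below; the triangular membership used for $\alpha$ reflects the reconstruction of the piecewise definition above, and the parameter values for $o$, $\omega$, and $A_a$ are illustrative.

```python
# Hedged sketch of the fuzzy-bound weight update: a triangular membership alpha over
# (o, omega, A_a) scales the fixed fuzzy bound T_f = 50, which is added to the
# incremental weights once the error test is satisfied.
T_F = 50.0

def alpha_membership(v, o, omega, a_a):
    if v <= o or v >= a_a:
        return 0.0
    if v < omega:
        return (v - o) / (omega - o)        # rising edge
    return (a_a - v) / (a_a - omega)        # falling edge

def fuzzy_bound_update(w_incremental, v, o=0.0, omega=0.5, a_a=1.0):
    f_b = alpha_membership(v, o, omega, a_a) * T_F   # fuzzy bound F_b
    return w_incremental + f_b

print(fuzzy_bound_update(w_incremental=0.8, v=0.3))
```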

3.6.4. Weight Update Using Proposed HFCVO

In order to bound the weights, the optimal weights are updated using the newly proposed algorithm named HFCVO. This hybrid optimization algorithm is achieved by integrating the features of HGSO [36] and CMVO [37] with a fuzzy concept [29]. HGSO is a recent population-based metaheuristic algorithm based entirely on physics principles; it mimics Henry’s law, which states that, at a constant temperature, the amount of a given gas that dissolves in a given volume of liquid is proportional to the partial pressure of that gas. Due to this characteristic, HGSO is well suited to addressing complex optimization problems with many local optima. On the other hand, CMVO is an effective population-based optimization strategy that augments the MVO with a pair-wise competition mechanism between universes. This mechanism increases the search capability and enhances the exploration and exploitation phases. Integrating these optimization algorithms yields promising results in updating the optimal weights.
Solution encoding: Solution encoding represents the solution as a vector, and here the optimal weights are determined using the solution vector. The solution vector has a dimension of $M_N = 1 \times j$, in which $j$ represents the number of learnable parameters.
The algorithmic procedure involved in this process is deliberated in the below steps:
Step 1: Initialization
Let us initialize the population of gases in the $M_N$-dimensional search space; the locations of the gases are initialized depending upon the below expression,
$Y_p^{t+1} = Y_{min} + r \times \left( Y_{max} - Y_{min} \right)$
where $Y_p^t$ denotes the $p$th solution in the $M_N$-dimensional search space, $r$ is a random measure that lies in the limit of $[0, 1]$, and $Y_{max}$ and $Y_{min}$ are the maximum and minimum bounds, respectively. Moreover, $t$ represents the iteration period. Henry’s constant and the partial pressure of gas $p$ in the $q$th cluster are initialized as follows,
$X_q^t = y_1 \times rand(0, 1)$
$\lambda_{p,q} = y_2 \times rand(0, 1)$
$\sigma_q = y_3 \times rand(0, 1)$
where $y_1$, $y_2$, and $y_3$ are constant values, $\lambda_{p,q}$ is the partial pressure of gas $p$ in cluster $q$, and $\sigma_q$ is the constant of gas type $q$.
Step 2: Fitness function
The fitness function is used to determine the optimal solution using Equation (26).
Step 3: Clustering
The search agents are equally partitioned into a number of gas categories. Every cluster possesses equivalent gases and, hence, it has an equivalent Henry’s constant measure.
Step 4: Evaluation
Every cluster q is estimated to determine the best solution of gas that attains the maximum equilibrium state. After that, the gases are ordered in a hierarchical ranking to achieve the optimal gas.
Step 5: Update Henry’s coefficient
Henry’s coefficient is upgraded based on the below expression,
$X_q^{t+1} = X_q^t \times \exp\left( -\sigma_q \times \left( \frac{1}{TT^t} - \frac{1}{TT^\theta} \right) \right), \quad TT^t = \exp\left( \frac{-t}{iter} \right)$
where $X_q$ implies Henry’s coefficient for cluster $q$, $TT^t$ is the temperature, $TT^\theta$ is a reference temperature, and $iter$ specifies the iteration number.
Step 6: Update the solubility of gas
The solubility of gas is upgraded depending on computing the below equation,
$SO_{p,q}^t = \gamma \times X_q^{t+1} \times \lambda_{p,q}^t$
where $SO_{p,q}$ is the solubility of gas $p$ in cluster $q$, $\lambda_{p,q}^t$ is the partial pressure on gas $p$ in cluster $q$, and $\gamma$ is a constant.
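The two updates above can be sketched as follows; the reference temperature and the constants used are illustrative values, not the tuned settings of the proposed method.

```python
# Sketch of the Henry's-coefficient and gas-solubility updates used inside HFCVO,
# following the reconstructed formulas above; all constants are illustrative.
import math

def henry_coefficient(x_q, sigma_q, t, max_iter, tt_theta=298.15):
    tt_t = math.exp(-t / max_iter)                               # temperature TT^t
    return x_q * math.exp(-sigma_q * (1.0 / tt_t - 1.0 / tt_theta))

def gas_solubility(x_q_next, partial_pressure, gamma=1.0):
    return gamma * x_q_next * partial_pressure                   # SO_{p,q}(t)

x_next = henry_coefficient(x_q=0.05, sigma_q=0.01, t=10, max_iter=100)
print(x_next, gas_solubility(x_next, partial_pressure=0.2))
```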
Step 7: Update the position
The location of Henry gas solubility is updated as follows:
$Y_{p,q}^{t+1} = Y_{p,q}^t + F_f \times r \times \gamma \times \left( Y_{p,best}^t - Y_{p,q}^t \right) + F_f \times r \times \varpi \times \left( SO_{p,q}^t \times Y_{best}^t - Y_{p,q}^t \right)$
$Y_{p,q}^{t+1} = Y_{p,q}^t + F_f r \gamma Y_{p,best}^t - F_f r \gamma Y_{p,q}^t + F_f r \varpi SO_{p,q}^t Y_{best}^t - F_f r \varpi Y_{p,q}^t$
$Y_{p,q}^{t+1} = Y_{p,q}^t \left( 1 - F_f r \gamma - F_f r \varpi \right) + F_f r \left( \gamma Y_{p,best}^t + \varpi SO_{p,q}^t Y_{best}^t \right)$
From CMVO, the position update of the best universe through wormhole tunnels is given by,
$Y_{p,q}^{t+1} = s_1 TDR + s_2 \left( Y_{ww}^t - Y_{p,q}^t \right) + s_3 \left( Y_{kk}^t - Y_{p,q}^t \right)$
Multiplying $s_2$ and $s_3$ inside the parentheses,
$Y_{p,q}^{t+1} = s_1 TDR + s_2 Y_{ww}^t - s_2 Y_{p,q}^t + s_3 Y_{kk}^t - s_3 Y_{p,q}^t$
$Y_{p,q}^{t+1} = s_1 TDR + s_2 Y_{ww}^t - Y_{p,q}^t \left( s_2 + s_3 \right) + s_3 Y_{kk}^t$
$Y_{p,q}^t \left( s_2 + s_3 \right) = s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t - Y_{p,q}^{t+1}$
$Y_{p,q}^t = \frac{s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t - Y_{p,q}^{t+1}}{s_2 + s_3}$
Substituting Equation (43) in Equation (38), the equation is given as,
$Y_{p,q}^{t+1} = \frac{s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t - Y_{p,q}^{t+1}}{s_2 + s_3} \left( 1 - F_f r \gamma - F_f r \varpi \right) + F_f r \left( \gamma Y_{p,best}^t + \varpi SO_{p,q}^t Y_{best}^t \right)$
$Y_{p,q}^{t+1} + Y_{p,q}^{t+1} \frac{1 - F_f r \gamma - F_f r \varpi}{s_2 + s_3} = \frac{\left( s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t \right) \left( 1 - F_f r \gamma - F_f r \varpi \right)}{s_2 + s_3} + F_f r \left( \gamma Y_{p,best}^t + \varpi SO_{p,q}^t Y_{best}^t \right)$
$Y_{p,q}^{t+1} \frac{\left( s_2 + s_3 \right) + \left( 1 - F_f r \gamma - F_f r \varpi \right)}{s_2 + s_3} = \frac{\left( s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t \right) \left( 1 - F_f r \gamma - F_f r \varpi \right)}{s_2 + s_3} + F_f r \left( \gamma Y_{p,best}^t + \varpi SO_{p,q}^t Y_{best}^t \right)$
Hence, the update solution becomes,
$Y_{p,q}^{t+1} = \frac{\left( s_1 TDR + s_2 Y_{ww}^t + s_3 Y_{kk}^t \right) \left( 1 - F_f r \gamma - F_f r \varpi \right) + F_f r \left( \gamma Y_{p,best}^t + \varpi SO_{p,q}^t Y_{best}^t \right) \left( s_2 + s_3 \right)}{\left( s_2 + s_3 \right) + \left( 1 - F_f r \gamma - F_f r \varpi \right)}$
where $Y_{p,q}$ represents the position of gas $p$ in cluster $q$, and $Y_{p,best}$ and $Y_{best}$ imply the best gas $p$ in cluster $q$ and the best gas in the swarm, respectively. $Y_{ww}^t$ denotes the winner universe in the $t$th iteration, and the mean position value of the corresponding universe is expressed as $Y_{kk}^t$. Moreover, $TDR$ implies the traveling distance rate.
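The final HFCVO position update, as reconstructed above, can be sketched as follows; all parameter values in the example are illustrative.

```python
# Hedged sketch of the final HFCVO position update, which blends the HGSO move
# toward the best gases with the CMVO wormhole term per the reconstruction above.
import numpy as np

def hfcvo_update(y_p_best, y_best, y_winner, y_mean, so, f_f, gamma, varpi, tdr, rng):
    s1, s2, s3, r = rng.random(4)
    hgso_mix = 1.0 - f_f * r * gamma - f_f * r * varpi
    numerator = ((s1 * tdr + s2 * y_winner + s3 * y_mean) * hgso_mix
                 + f_f * r * (gamma * y_p_best + varpi * so * y_best) * (s2 + s3))
    return numerator / ((s2 + s3) + hgso_mix)

rng = np.random.default_rng(2)
base = rng.random(4)
print(hfcvo_update(y_p_best=base + 0.1, y_best=base + 0.2, y_winner=base - 0.1,
                   y_mean=float(base.mean()), so=0.3, f_f=1.0, gamma=0.9, varpi=0.5,
                   tdr=0.6, rng=rng))
```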
Step 8: Escape from the local optimum
This step is used to escape from local optima. The number of worst agents to be re-initialized can be described by the following equation:
$\varphi = \aleph \times \left( rand \times \left( \hbar_2 - \hbar_1 \right) + \hbar_1 \right), \quad \hbar_1 = 0.1, \; \hbar_2 = 0.2$
where $\aleph$ is the total count of search agents and $\hbar_1$ and $\hbar_2$ are constants.
Step 9: Update the location of the worst agents
The location of the worst agents is updated as follows,
$Y_{p,q} = Y_{min} + r \times \left( Y_{max} - Y_{min} \right)$
where $Y_{p,q}$ denotes the location of gas $p$ in cluster $q$, and $Y_{min}$ and $Y_{max}$ are the bounds of the problem.
Step 10: Termination
The algorithmic steps are continued until it achieves a suitable solution. Algorithm 2 elucidates the pseudocode of the developed HFCVO algorithm.
Algorithm 2. Pseudocode of proposed HFCVO.
1. Input: $Y_{p,q}^t$, $X_q$, $\lambda_{p,q}$, $\sigma_q$, $y_1$, $y_2$, and $y_3$
2. Output: $Y_{p,q}^{t+1}$
3. Begin
4. Divide the population agents into various gas types using Henry’s constant value $X_q$
5. Evaluate each cluster $q$
6. Obtain the best gas $Y_{p,best}$ in each cluster and the optimal search agent $Y_{best}$
7. for each search agent do
8.     Update all search agents’ positions using Equation (50)
9. end for
10. Update each gas type’s Henry’s coefficient using Equation (34)
11. Update the solubility of gas using Equation (35)
12. Rank and select the number of worst agents using Equation (51)
13. Update the location of the worst agents using Equation (52)
14. Update the best gas $Y_{p,best}$ and the best search agent $Y_{best}$
15. end while
16. $t = t + 1$
17. Return $Y_{best}$
18. Terminate

4. Results and Discussion

This section explains how the created HFCVO-based DMN was evaluated in compliance with evaluation measures.

4.1. Experimental Setup

The HFCVO-based DMN is implemented in Python. Table 1 shows the Python libraries used.

4.2. Dataset Description

The datasets utilized for the implementation purpose are the Reuter dataset [38], 20-Newsgroup dataset [39], and the real-time dataset.
Reuter dataset: There are 21,578 cases in this dataset, and 19,043 documents were picked for the classification job. Depending on the categories of documents, groups and indexes are created here. It has five attributes, 206,136 web hits, and no missing values.
20-Newsgroup dataset: It is well recognized as a benchmark of text applications for machine learning techniques, including text clustering and text categorization, and consists of a collection of newsgroup documents. In this case, duplicate messages are eliminated, leaving only the “from” and “subject” headers of the original messages.
Real-time data: For each of the 20 topics chosen, 250 publications are gathered from the Springer and ScienceDirect websites. The topics include, among others, advancements in data analysis, artificial intelligence, big data, bioinformatics, biometrics, and cloud computing. A total of 5000 documents are therefore included in the text categorization process.

4.3. Performance Analysis

This section describes the performance assessment of the developed HFCVO-based DMN with respect to evaluation metrics using three datasets.

4.3.1. Analysis Using Reuter Dataset

Table 2 illustrates the performance assessment of the HFCVO-based DMN utilizing the Reuter dataset. At a training percentage of 90%, the proposed HFCVO-based DMN obtained TPRs of 0.873, 0.897, 0.901, 0.928, and 0.935 for feature sizes of 100, 200, 300, 400, and 500, respectively. For the same training percentage, it achieved TNRs of 0.858, 0.874, 0.896, 0.902, and 0.925; FNRs of 0.127, 0.103, 0.099, 0.072, and 0.065; precisions of 0.883, 0.909, 0.923, 0.947, and 0.970; and testing accuracies of 0.857, 0.878, 0.898, 0.905, and 0.924 for feature sizes of 100, 200, 300, 400, and 500, respectively.

4.3.2. Analysis Using 20Newsgroup Dataset

Table 3 depicts the assessment of the HFCVO-based DMN utilizing the 20Newsgroup dataset. At a training percentage of 90%, the HFCVO-based DMN yielded TPRs of 0.894, 0.913, 0.935, 0.947, and 0.963 for feature sizes of 100, 200, 300, 400, and 500, respectively. For the same training percentage, it attained TNRs of 0.878, 0.888, 0.909, 0.919, and 0.939; FNRs of 0.106, 0.087, 0.065, 0.053, and 0.037; precisions of 0.891, 0.918, 0.938, 0.955, and 0.974; and testing accuracies of 0.871, 0.899, 0.918, 0.938, and 0.956 for feature sizes of 100, 200, 300, 400, and 500, respectively.

4.3.3. Analysis Using Real-Time Dataset

Table 4 depicts the performance assessment of the developed HFCVO-based DMN utilizing the real-time dataset. At a training percentage of 90%, the proposed HFCVO-based DMN obtained TPRs of 0.869, 0.897, 0.929, 0.949, and 0.968 for feature sizes of 100, 200, 300, 400, and 500, respectively. For the same training percentage, it achieved TNRs of 0.865, 0.886, 0.908, 0.926, and 0.941; FNRs of 0.131, 0.103, 0.071, 0.051, and 0.032; precisions of 0.878, 0.892, 0.919, 0.939, and 0.954; and testing accuracies of 0.884, 0.901, 0.928, 0.944, and 0.955 for feature sizes of 100, 200, 300, 400, and 500, respectively.

4.4. Comparative Methods

The performance enhancement of the HFCVO-based DMN is compared with existing approaches, such as the SGrC-based DBN [10], the MB–FF-based NN [25], the LFNN [29], and the SVNN [6].

4.5. Comparative Analysis

This section deliberates the comparative assessment of the developed HFCVO-based DMN in terms of the evaluation metrics using the three datasets.

4.5.1. Analysis Using Reuter Dataset

Table 5 represents the assessment of the developed method employing the Reuter dataset. When the training percentage is 90%, the TPR obtained by the proposed HFCVO-based DMN is 0.935, a performance increment over the traditional approaches of 14.035% compared with the SGrC-based DBN, 9.652% compared with the MB–FF-based NN, 6.510% compared with the LFNN, and 4.276% compared with the SVNN. At the same training percentage, the TNR obtained by the conventional methods, namely the SGrC-based DBN, MB–FF-based NN, LFNN, and SVNN, is 0.798, 0.814, 0.837, and 0.854, respectively. The FNR attained by the developed method is 0.065, whereas the traditional methods attained FNRs of 0.196 for the SGrC-based DBN, 0.155 for the MB–FF-based NN, 0.125 for the LFNN, and 0.105 for the SVNN. The precision achieved by the proposed method is 0.970, a performance improvement of 20.218% over the SGrC-based DBN, 19.164% over the MB–FF-based NN, 13.915% over the LFNN, and 6.397% over the SVNN. The testing accuracy achieved by the proposed HFCVO-based DMN is 0.924 when the training data = 90%.

4.5.2. Analysis Using 20Newsgroup Dataset

Table 6 represents an assessment of the proposed method utilizing the 20Newsgroup dataset. When the training percentage is 90%, the TPR obtained by the proposed HFCVO-based DMN is 0.963, an improvement over the classical schemes of 14.052% compared with the SGrC-based DBN, 11.038% compared with the MB–FF-based NN, 7.692% compared with the LFNN, and 5.617% compared with the SVNN. At the same training percentage, the TNR obtained by the conventional methods, namely the SGrC-based DBN, MB–FF-based NN, LFNN, and SVNN, is 0.836, 0.860, 0.889, and 0.909, respectively. The FNR achieved by the developed technique is 0.037, while the traditional schemes attained FNRs of 0.173 for the SGrC-based DBN, 0.144 for the MB–FF-based NN, 0.111 for the LFNN, and 0.091 for the SVNN. The precision yielded by the developed strategy is 0.974, a performance enhancement of 19.377% over the SGrC-based DBN, 18.126% over the MB–FF-based NN, 12.203% over the LFNN, and 5.923% over the SVNN. The testing accuracy attained by the proposed HFCVO-based DMN is 0.956 when the training data = 90%.

4.5.3. Analysis Using Real-Time Dataset

Table 7 represents the assessment of the developed method using the real-time dataset. When the training percentage is 90%, the TPR obtained by the proposed HFCVO-based DMN is 0.968, a performance enhancement over the conventional approaches of 13.425% compared with the SGrC-based DBN, 10.761% compared with the MB–FF-based NN, 7.116% compared with the LFNN, and 5.709% compared with the SVNN. At the same training percentage, the TNR obtained by the conventional methods, namely the SGrC-based DBN, MB–FF-based NN, LFNN, and SVNN, is 0.802, 0.824, 0.855, and 0.897, respectively. The FNR achieved by the developed model is 0.032, while the traditional techniques attained FNRs of 0.162 for the SGrC-based DBN, 0.137 for the MB–FF-based NN, 0.101 for the LFNN, and 0.088 for the SVNN. The precision obtained by the proposed method is 0.954, a performance increment of 17.608% over the SGrC-based DBN, 15.868% over the MB–FF-based NN, 11.299% over the LFNN, and 1.846% over the SVNN. The testing accuracy achieved by the proposed HFCVO-based DMN is 0.955 when the training data = 90%.

4.6. Analysis Based on Optimization Techniques

This section deliberates the assessment of the developed HFCVO-based DMN with respect to the optimization techniques using the three datasets. The algorithms utilized in this analysis are TSO + DMN [34], IIWO + DMN [32], IWTSO + DMN, HGSO + DMN [36], and CMVO + DMN [37].

4.6.1. Analysis Using Reuter Dataset

Table 8 shows the assessment of the optimization methodologies in terms of the performance metrics. If the training data is 90%, the TPR obtained by the developed HFCVO + DMN is 0.935, whereas the TPR attained by TSO + DMN is 0.865, IIWO + DMN is 0.875, IWTSO + DMN is 0.887, HGSO + DMN is 0.899, and CMVO + DMN is 0.914. At the same training percentage, the TNR obtained by TSO + DMN, IIWO + DMN, IWTSO + DMN, HGSO + DMN, and CMVO + DMN is 0.835, 0.854, 0.865, 0.887, and 0.905, respectively. The FNR yielded by the proposed HFCVO + DMN is 0.065, whereas the other optimization algorithms obtained FNRs of 0.135 for TSO + DMN, 0.125 for IIWO + DMN, 0.113 for IWTSO + DMN, 0.101 for HGSO + DMN, and 0.086 for CMVO + DMN. The precision obtained by the developed HFCVO + DMN is 0.970. The proposed HFCVO + DMN attained a testing accuracy of 0.924, whereas the other methodologies obtained testing accuracies of 0.854 for TSO + DMN, 0.865 for IIWO + DMN, 0.875 for IWTSO + DMN, 0.898 for HGSO + DMN, and 0.905 for CMVO + DMN.

4.6.2. Analysis Using 20Newsgroup Dataset

Table 9 specifies the assessment of the optimization methodologies in accordance with the performance measures. When the training data = 90%, the TPR yielded by the developed HFCVO + DMN is 0.963, while the TPR obtained by TSO + DMN is 0.887, IIWO + DMN is 0.904, IWTSO + DMN is 0.914, HGSO + DMN is 0.933, and CMVO + DMN is 0.954. At the same training percentage, the TNR achieved by TSO + DMN is 0.841, IIWO + DMN is 0.865, IWTSO + DMN is 0.885, HGSO + DMN is 0.895, and CMVO + DMN is 0.925. The FNR attained by the developed HFCVO + DMN is 0.037, whereas the other optimization techniques achieved FNRs of 0.113 for TSO + DMN, 0.096 for IIWO + DMN, 0.086 for IWTSO + DMN, 0.067 for HGSO + DMN, and 0.046 for CMVO + DMN. The proposed HFCVO + DMN obtained a precision of 0.974. It also attained a testing accuracy of 0.956, whereas the other methodologies obtained testing accuracies of 0.857 for TSO + DMN, 0.875 for IIWO + DMN, 0.885 for IWTSO + DMN, 0.937 for HGSO + DMN, and 0.941 for CMVO + DMN.

4.6.3. Analysis Using Real-Time Dataset

Table 10 represents the assessment of the optimization methodologies in terms of the performance metrics. When the training percentage is 90%, the TPR obtained by HFCVO + DMN is 0.968, whereas the TPR attained by TSO + DMN is 0.875, IIWO + DMN is 0.885, IWTSO + DMN is 0.904, HGSO + DMN is 0.925, and CMVO + DMN is 0.941. At the same training percentage, the TNR obtained by TSO + DMN, IIWO + DMN, IWTSO + DMN, HGSO + DMN, and CMVO + DMN is 0.865, 0.875, 0.895, 0.921, and 0.933, respectively. The FNR yielded by the proposed HFCVO + DMN is 0.032, whereas the other optimization algorithms obtained FNRs of 0.125 for TSO + DMN, 0.115 for IIWO + DMN, 0.075 for IWTSO + DMN, 0.059 for HGSO + DMN, and 0.032 for CMVO + DMN. The precision obtained by the developed HFCVO + DMN is 0.954. The proposed HFCVO + DMN attained a testing accuracy of 0.955, whereas the other methodologies obtained testing accuracies of 0.865 for TSO + DMN, 0.885 for IIWO + DMN, 0.895 for IWTSO + DMN, 0.926 for HGSO + DMN, and 0.935 for CMVO + DMN.

5. Conclusions

Text mining has been considered a significant tool for diverse knowledge discovery-based applications, such as document arrangement, fake email filtering, and news grouping. Nowadays, text mining employs incremental learning, as it is economically cost-effective when handling massive data. However, the major issue that occurs in incremental learning is low accuracy because of the existence of countless terms in the text document. Deep learning is an effectual technique for refining the underlying information in text, but it provides superior results on curated datasets compared with real-world data. Hence, approaches that deal with imbalanced datasets are very significant for addressing such problems. Accordingly, this research proposes an effective incremental text classification strategy using the proposed HFCVO-based DMN. The proposed approach consists of four phases, namely, pre-processing, feature extraction, feature selection, and incremental text categorization. Here, the optimal features are selected using the developed IWTSO algorithm. Moreover, the incremental text classification is done by exploiting the DMN, where the network is trained using HFCVO. When incremental data arrive, the error is computed for both the original data and the incremental data. If the error of the incremental data is less than the error of the original data, then the weights are bounded based on fuzzy theory and updated using the same proposed HFCVO. The proposed algorithm is devised by merging the features of HGSO and CMVO with the fuzzy concept. Meanwhile, the proposed HFCVO-based DMN achieved a maximum TPR of 0.968, a maximum TNR of 0.941, a low FNR of 0.032, a high precision of 0.954, and a high accuracy of 0.955.

Author Contributions

Conceptualization, G.S. and A.N.; methodology, G.S.; software, G.S.; validation, G.S. and A.N.; formal analysis, G.S.; investigation, G.S.; resources, G.S.; data curation, G.S.; writing—original draft preparation, G.S.; writing—review and editing, G.S.; visualization, G.S.; supervision, G.S.; project administration, G.S.; funding acquisition, A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in “https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection” (accessed on 23 January 2022), reference number [38] and “https://www.kaggle.com/crawford/20-newsgroups” (accessed on 23 January 2022), reference number [39].

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

Abbreviations: Descriptions
HFCVO: Henry Fuzzy Competitive Multi-verse Optimizer
DMN: Deep Maxout Network
IWTSO: Invasive Weed Tunicate Swarm Optimization
IWO: Invasive Weed Optimization
NB: Naïve Bayes
TSA: Tunicate Swarm Algorithm
HGSO: Henry Gas Solubility Optimization
CMVO: Competitive Multi-Verse Optimizer
WWW: World Wide Web
ML: Machine Learning
TNR: True Negative Rate
SVM: Support Vector Machine
FNR: False Negative Rate
KNN: K-Nearest Neighbor
AC: Associative Classification
SGrC-based DBN: Spider Grasshopper Crow Optimization Algorithm-based Deep Belief Neural Network
SMO: Spider Monkey Optimization
GCOA: Grasshopper Crow Optimization Algorithm
RL: Reinforcement Learning
SVNN: Support Vector Neural Network
LSTM: Long Short-Term Memory
MB–FF-based NN: Monarch Butterfly optimization–FireFly optimization-based Neural Network
TF–IDF: Term Frequency–Inverse Document Frequency
ST: Sequential Targeting
COA: Cuckoo Optimization Algorithm
Gibbs MedLDA: Gibbs sampling-based Maximum entropy discrimination Latent Dirichlet Allocation, an interactive visual assessment model depending on a semi-supervised topic modeling technique
SG–CAV-based DBN: Stochastic Gradient–CAViaR-based Deep Belief Network
ReLU: Rectified Linear Unit
RBM: Restricted Boltzmann Machine
MVO: Multi-Verse Optimizer algorithm
NN: Neural Network
TPR: True Positive Rate
RF: Random Forest
NLP: Natural Language Processing

References

1. Yan, Y.; Tao, Y.; Jin, S.; Xu, J.; Lin, H. An Interactive Visual Analytics System for Incremental Classification Based on Semi-supervised Topic Modeling. In Proceedings of the IEEE Pacific Visualization Symposium (PacificVis), Bangkok, Thailand, 23–26 April 2019; pp. 148–157.
2. Chander, S.; Vijaya, P.; Dhyani, P. Multi kernel and dynamic fractional lion optimization algorithm for data clustering. Alex. Eng. J. 2018, 57, 267–276.
3. Jadhav, A.N.; Gomathi, N. DIGWO: Hybridization of Dragonfly Algorithm with Improved Grey Wolf Optimization Algorithm for Data Clustering. Multimed. Res. 2019, 2, 1–11.
4. Tan, A.H. Text mining: The state of the art and the challenges. In Proceedings of the PAKDD 1999 Workshop on Knowledge Discovery from Advanced Databases, Beijing, China, 26–28 April 1999; Volume 8, pp. 65–70.
5. Yadav, P. SR-K-Means clustering algorithm for semantic information retrieval. Int. J. Invent. Comput. Sci. Eng. 2014, 1, 17–24.
6. Sailaja, N.V.; Padmasree, L.; Mangathayaru, N. Incremental learning for text categorization using rough set boundary based optimized Support Vector Neural Network. In Data Technologies and Applications; Emerald Publishing Limited: Bingley, UK, 2020.
7. Kaviyaraj, R.; Uma, M. Augmented Reality Application in Classroom: An Immersive Taxonomy. In Proceedings of the 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20 January 2022; pp. 1221–1226.
8. Vidyadhari, C.; Sandhya, N.; Premchand, P. A Semantic Word Processing Using Enhanced Cat Swarm Optimization Algorithm for Automatic Text Clustering. Multimed. Res. 2019, 2, 23–32.
9. Sebastiani, F. Machine learning in automated text categorization. ACM Comput. Surv. 2002, 34, 1–47.
10. Srilakshmi, V.; Anuradha, K.; Bindu, C.S. Incremental text categorization based on hybrid optimization-based deep belief neural network. J. High Speed Netw. 2021, 27, 1–20.
11. Jo, T. K nearest neighbor for text categorization using feature similarity. Adv. Eng. ICT Converg. 2019, 2, 99.
12. Sheu, J.J.; Chu, K.T. An efficient spam filtering method by analyzing e-mail’s header session only. Int. J. Innov. Comput. Inf. Control 2009, 5, 3717–3731.
13. Ghiassi, M.; Olschimke, M.; Moon, B.; Arnaudo, P. Automated text classification using a dynamic artificial neural network model. Expert Syst. Appl. 2012, 39, 10967–10976.
14. Wang, Q.; Fang, Y.; Ravula, A.; Feng, F.; Quan, X.; Liu, D. WebFormer: The Web-page Transformer for Structure Information Extraction. In Proceedings of the ACM Web Conference (WWW ’22), Lyon, France, 25–29 April 2022; pp. 3124–3133.
15. Yan, L.; Ma, S.; Wang, Q.; Chen, Y.; Zhang, X.; Savakis, A.; Liu, D. Video Captioning Using Global-Local Representation. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6642–6656.
16. Liu, D.; Cui, Y.; Tan, W.; Chen, Y. SG-Net: Spatial Granularity Network for One-Stage Video Instance Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 21 June 2021.
17. Al-diabat, M. Arabic text categorization using classification rule mining. Appl. Math. Sci. 2012, 6, 4033–4046.
18. Srinivas, K. Prediction of e-learning efficiency by deep learning in E-khool online portal networks. Multimed. Res. 2020, 3, 12–23.
19. Alzubi, A.; Eladli, A. Mobile Payment Adoption-A Systematic Review. J. Posit. Psychol. Wellbeing 2021, 5, 565–577.
20. Rupapara, V.; Narra, M.; Gunda, N.K.; Gandhi, S.; Thipparthy, K.R. Maintaining Social Distancing in Pandemic Using Smartphones With Acoustic Waves. IEEE Trans. Comput. Soc. Syst. 2022, 9, 605–611.
21. Kosuru, V.S.R.; Venkitaraman, A.K. Integrated framework to identify fault in human-machine interaction systems. Int. Res. J. Mod. Eng. Technol. Sci. 2022, 4, 1685–1692.
22. Gali, V. Tamil Character Recognition Using K-Nearest-Neighbouring Classifier based on Grey Wolf Optimization Algorithm. Multimed. Res. 2021, 4, 1–24.
23. Shirsat, P. Developing Deep Neural Network for Learner Performance Prediction in EKhool Online Learning Platform. Multimed. Res. 2020, 3, 24–31.
24. Shan, G.; Xu, S.; Yang, L.; Jia, S.; Xiang, Y. Learn#: A novel incremental learning method for text classification. Expert Syst. Appl. 2020, 147, 113198.
25. Kayest, M.; Jain, S.K. An Incremental Learning Approach for the Text Categorization Using Hybrid Optimization; Emerald Publishing Limited: Bingley, UK, 2019.
26. Jang, J.; Kim, Y.; Choi, K.; Suh, S. Sequential Targeting: An incremental learning approach for data imbalance in text classification. arXiv 2020, arXiv:2011.10216.
27. Nihar, M.R.; Midhunchakkaravarthy, J. Evolutionary and Incremental Text Document Classifier using Deep Learning. Int. J. Grid Distrib. Comput. 2021, 14, 587–595.
28. Srilakshmi, V.; Anuradha, K.; Bindu, C.S. Stochastic gradient-CAViaR-based deep belief network for text categorization. Evol. Intell. 2020, 14, 1727–1741.
29. Nihar, M.R.; Rajesh, S.P. LFNN: Lion fuzzy neural network-based evolutionary model for text classification using context and sense based features. Appl. Soft Comput. 2018, 71, 994–1008.
30. Liu, Y.; Sun, C.J.; Lin, L.; Wang, X.; Zhao, Y. Computing semantic text similarity using rich features. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, Shanghai, China, 30 October–1 November 2015; pp. 44–52.
31. Wu, D.; Yang, R.; Shen, C. Sentiment word co-occurrence and knowledge pair feature extraction based LDA short text clustering algorithm. J. Intell. Inf. Syst. 2020, 56, 1–23.
32. Zhou, Y.; Luo, Q.; Chen, H.; He, A.; Wu, J. A discrete invasive weed optimization algorithm for solving traveling salesman problem. Neurocomputing 2015, 151, 1227–1236.
33. Sang, H.Y.; Duan, P.Y.; Li, J.Q. An effective invasive weed optimization algorithm for scheduling semiconductor final testing problem. Swarm Evol. Comput. 2018, 38, 42–53.
34. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
35. Sun, W.; Su, F.; Wang, L. Improving deep neural networks with multi-layer maxout networks and a novel initialization method. Neurocomputing 2018, 278, 34–40.
36. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
37. Benmessahel, I.; Xie, K.; Chellal, M. A new competitive multiverse optimization technique for solving single-objective and multi-objective problems. Eng. Rep. 2020, 2, e12124.
38. Reuters-21578 Text Categorization Collection Data Set. Available online: https://archive.ics.uci.edu/ml/datasets/reuters-21578+text+categorization+collection (accessed on 23 January 2022).
39. 20 Newsgroup Dataset. Available online: https://www.kaggle.com/crawford/20-newsgroups (accessed on 23 January 2022).
Figure 1. Schematic view of proposed HFCVO-based DMN for incremental text classification.
Figure 2. Solution encoding.
Figure 3. Structure of the DMN.
Table 1. Python libraries and versions.

Python Library | Version
matplotlib | 3.5.0
numpy | 1.21.4
PySimpleGUI | 4.33.0
pandas | 1.3.4
scikit-learn | 1.0.1
Keras-Applications | 1.0.8
Pillow | 9.2.0
tensorboard | 2.9.1
tensorboard-plugin-wit | 1.8.1
tensorboard-data-server | 0.6.1
tensorflow | 2.9.1
tensorflow-estimator | 2.9.0
Keras | 2.3.1
tensorflow-io-gcs-filesystem | 0.26.0
Keras-Preprocessing | 1.1.2
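
The versions listed in Table 1 pin the software environment. The following snippet is a hypothetical convenience check, not part of the original implementation: it compares a local environment against a subset of the listed versions using Python's standard importlib.metadata module, and the remaining packages can be added analogously.

# Illustrative check that installed packages match the versions in Table 1.
from importlib.metadata import version, PackageNotFoundError

expected = {
    "matplotlib": "3.5.0",
    "numpy": "1.21.4",
    "pandas": "1.3.4",
    "scikit-learn": "1.0.1",
    "tensorflow": "2.9.1",
    "Pillow": "9.2.0",
}

for package, wanted in expected.items():
    try:
        found = version(package)
    except PackageNotFoundError:
        found = "not installed"
    status = "OK" if found == wanted else "MISMATCH"
    print(f"{package}: expected {wanted}, found {found} [{status}]")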
Table 2. Performance analysis of the proposed HFCVO-based DMN using the Reuters dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | Feature Size 100 | Feature Size 200 | Feature Size 300 | Feature Size 400 | Feature Size 500
TPR
60 | 0.824 | 0.837 | 0.846 | 0.868 | 0.885
70 | 0.847 | 0.865 | 0.874 | 0.881 | 0.895
80 | 0.862 | 0.872 | 0.900 | 0.905 | 0.925
90 | 0.873 | 0.897 | 0.901 | 0.928 | 0.935
TNR
60 | 0.808 | 0.812 | 0.835 | 0.842 | 0.854
70 | 0.812 | 0.834 | 0.846 | 0.857 | 0.865
80 | 0.835 | 0.853 | 0.874 | 0.894 | 0.901
90 | 0.858 | 0.874 | 0.896 | 0.902 | 0.925
FNR
60 | 0.176 | 0.163 | 0.154 | 0.132 | 0.115
70 | 0.153 | 0.135 | 0.126 | 0.119 | 0.105
80 | 0.138 | 0.128 | 0.100 | 0.095 | 0.075
90 | 0.127 | 0.103 | 0.099 | 0.072 | 0.065
Precision
60 | 0.868 | 0.886 | 0.895 | 0.904 | 0.929
70 | 0.875 | 0.884 | 0.903 | 0.912 | 0.938
80 | 0.880 | 0.897 | 0.913 | 0.940 | 0.953
90 | 0.883 | 0.909 | 0.923 | 0.947 | 0.970
Accuracy
60 | 0.824 | 0.835 | 0.845 | 0.858 | 0.863
70 | 0.839 | 0.846 | 0.857 | 0.868 | 0.872
80 | 0.846 | 0.858 | 0.877 | 0.895 | 0.907
90 | 0.857 | 0.878 | 0.898 | 0.905 | 0.924
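
The metrics reported in Tables 2, 3, 4, 5, 6, 7, 8, 9 and 10 follow the standard confusion-matrix definitions. The short function below is an illustrative sketch of those formulas for binary counts only; the per-class handling used for the multi-class datasets is not shown here, and the example counts are hypothetical rather than taken from the experiments.

# Illustrative computation of the reported metrics from confusion-matrix counts.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    tpr = tp / (tp + fn)            # true positive rate (sensitivity)
    tnr = tn / (tn + fp)            # true negative rate (specificity)
    fnr = fn / (fn + tp)            # false negative rate = 1 - TPR
    precision = tp / (tp + fp)      # positive predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"TPR": tpr, "TNR": tnr, "FNR": fnr, "precision": precision, "accuracy": accuracy}

# Example with hypothetical counts:
print(classification_metrics(tp=90, tn=85, fp=10, fn=5))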
Table 3. Performance analysis of the proposed HFCVO-based DMN using the 20 Newsgroups dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | Feature Size 100 | Feature Size 200 | Feature Size 300 | Feature Size 400 | Feature Size 500
TPR
60 | 0.867 | 0.884 | 0.909 | 0.928 | 0.946
70 | 0.877 | 0.897 | 0.918 | 0.938 | 0.950
80 | 0.881 | 0.896 | 0.919 | 0.938 | 0.955
90 | 0.894 | 0.913 | 0.935 | 0.947 | 0.963
TNR
60 | 0.846 | 0.868 | 0.877 | 0.888 | 0.898
70 | 0.857 | 0.861 | 0.879 | 0.898 | 0.902
80 | 0.867 | 0.875 | 0.887 | 0.907 | 0.923
90 | 0.878 | 0.888 | 0.909 | 0.919 | 0.939
FNR
60 | 0.133 | 0.116 | 0.091 | 0.072 | 0.054
70 | 0.123 | 0.103 | 0.082 | 0.062 | 0.050
80 | 0.119 | 0.104 | 0.081 | 0.062 | 0.045
90 | 0.106 | 0.087 | 0.065 | 0.053 | 0.037
Precision
60 | 0.855 | 0.877 | 0.895 | 0.919 | 0.936
70 | 0.864 | 0.884 | 0.916 | 0.930 | 0.942
80 | 0.888 | 0.908 | 0.928 | 0.945 | 0.966
90 | 0.891 | 0.918 | 0.938 | 0.955 | 0.974
Accuracy
60 | 0.835 | 0.858 | 0.861 | 0.881 | 0.899
70 | 0.846 | 0.865 | 0.887 | 0.905 | 0.929
80 | 0.862 | 0.885 | 0.904 | 0.901 | 0.944
90 | 0.871 | 0.899 | 0.918 | 0.938 | 0.956
Table 4. Performance analysis of the proposed HFCVO-based DMN using the Real-time dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | Feature Size 100 | Feature Size 200 | Feature Size 300 | Feature Size 400 | Feature Size 500
TPR
60 | 0.834 | 0.854 | 0.876 | 0.896 | 0.918
70 | 0.848 | 0.869 | 0.896 | 0.912 | 0.938
80 | 0.858 | 0.879 | 0.912 | 0.938 | 0.946
90 | 0.869 | 0.897 | 0.929 | 0.949 | 0.968
TNR
60 | 0.824 | 0.846 | 0.868 | 0.877 | 0.897
70 | 0.835 | 0.858 | 0.872 | 0.898 | 0.912
80 | 0.858 | 0.878 | 0.898 | 0.907 | 0.924
90 | 0.865 | 0.886 | 0.908 | 0.926 | 0.941
FNR
60 | 0.166 | 0.146 | 0.124 | 0.104 | 0.082
70 | 0.152 | 0.131 | 0.104 | 0.088 | 0.062
80 | 0.142 | 0.121 | 0.088 | 0.062 | 0.054
90 | 0.131 | 0.103 | 0.071 | 0.051 | 0.032
Precision
60 | 0.845 | 0.868 | 0.887 | 0.908 | 0.927
70 | 0.858 | 0.873 | 0.899 | 0.912 | 0.938
80 | 0.869 | 0.888 | 0.909 | 0.925 | 0.946
90 | 0.878 | 0.892 | 0.919 | 0.939 | 0.954
Accuracy
60 | 0.848 | 0.865 | 0.885 | 0.892 | 0.907
70 | 0.869 | 0.879 | 0.896 | 0.912 | 0.924
80 | 0.875 | 0.895 | 0.912 | 0.929 | 0.935
90 | 0.884 | 0.901 | 0.928 | 0.944 | 0.955
Table 5. Comparative analysis using the Reuters dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | SGrC-Based DBN | MB–FF-Based NN | LFNN | SVNN | Proposed HFCVO-Based DMN
TPR
60 | 0.745 | 0.765 | 0.785 | 0.825 | 0.885
70 | 0.754 | 0.785 | 0.799 | 0.848 | 0.895
80 | 0.775 | 0.804 | 0.825 | 0.854 | 0.925
90 | 0.804 | 0.845 | 0.875 | 0.895 | 0.935
TNR
60 | 0.725 | 0.745 | 0.765 | 0.785 | 0.854
70 | 0.735 | 0.765 | 0.799 | 0.814 | 0.865
80 | 0.765 | 0.785 | 0.814 | 0.835 | 0.901
90 | 0.798 | 0.814 | 0.837 | 0.854 | 0.925
FNR
60 | 0.255 | 0.235 | 0.215 | 0.175 | 0.115
70 | 0.246 | 0.215 | 0.201 | 0.152 | 0.105
80 | 0.225 | 0.196 | 0.175 | 0.146 | 0.075
90 | 0.196 | 0.155 | 0.125 | 0.105 | 0.065
Precision
60 | 0.725 | 0.755 | 0.793 | 0.852 | 0.929
70 | 0.743 | 0.765 | 0.817 | 0.874 | 0.938
80 | 0.765 | 0.782 | 0.824 | 0.888 | 0.953
90 | 0.774 | 0.784 | 0.835 | 0.908 | 0.970
Accuracy
60 | 0.724 | 0.743 | 0.764 | 0.804 | 0.863
70 | 0.745 | 0.764 | 0.784 | 0.835 | 0.872
80 | 0.763 | 0.794 | 0.815 | 0.843 | 0.907
90 | 0.785 | 0.831 | 0.854 | 0.882 | 0.924
Table 6. Comparative analysis using the 20 Newsgroups dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | SGrC-Based DBN | MB–FF-Based NN | LFNN | SVNN | Proposed HFCVO-Based DMN
TPR
60 | 0.757 | 0.785 | 0.816 | 0.883 | 0.946
70 | 0.767 | 0.797 | 0.829 | 0.898 | 0.950
80 | 0.775 | 0.808 | 0.835 | 0.905 | 0.955
90 | 0.827 | 0.856 | 0.889 | 0.909 | 0.963
TNR
60 | 0.738 | 0.764 | 0.784 | 0.835 | 0.898
70 | 0.754 | 0.773 | 0.805 | 0.846 | 0.902
80 | 0.802 | 0.823 | 0.842 | 0.870 | 0.923
90 | 0.836 | 0.860 | 0.889 | 0.909 | 0.939
FNR
60 | 0.243 | 0.215 | 0.184 | 0.117 | 0.054
70 | 0.233 | 0.203 | 0.171 | 0.102 | 0.050
80 | 0.225 | 0.192 | 0.165 | 0.095 | 0.045
90 | 0.173 | 0.144 | 0.111 | 0.091 | 0.037
Precision
60 | 0.736 | 0.765 | 0.805 | 0.879 | 0.936
70 | 0.759 | 0.773 | 0.825 | 0.886 | 0.942
80 | 0.772 | 0.804 | 0.836 | 0.895 | 0.966
90 | 0.785 | 0.798 | 0.855 | 0.917 | 0.974
Accuracy
60 | 0.749 | 0.775 | 0.798 | 0.844 | 0.899
70 | 0.752 | 0.798 | 0.813 | 0.857 | 0.929
80 | 0.799 | 0.839 | 0.852 | 0.872 | 0.944
90 | 0.818 | 0.844 | 0.878 | 0.898 | 0.956
Table 7. Comparative analysis using the Real-time dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | SGrC-Based DBN | MB–FF-Based NN | LFNN | SVNN | Proposed HFCVO-Based DMN
TPR
60 | 0.768 | 0.798 | 0.827 | 0.868 | 0.918
70 | 0.798 | 0.813 | 0.850 | 0.879 | 0.938
80 | 0.819 | 0.835 | 0.856 | 0.889 | 0.946
90 | 0.838 | 0.863 | 0.899 | 0.912 | 0.968
TNR
60 | 0.744 | 0.777 | 0.786 | 0.844 | 0.897
70 | 0.774 | 0.797 | 0.825 | 0.855 | 0.912
80 | 0.794 | 0.818 | 0.838 | 0.868 | 0.924
90 | 0.802 | 0.824 | 0.855 | 0.897 | 0.941
FNR
60 | 0.232 | 0.202 | 0.173 | 0.132 | 0.082
70 | 0.202 | 0.187 | 0.150 | 0.121 | 0.062
80 | 0.181 | 0.165 | 0.144 | 0.111 | 0.054
90 | 0.162 | 0.137 | 0.101 | 0.088 | 0.032
Precision
60 | 0.744 | 0.776 | 0.818 | 0.909 | 0.927
70 | 0.752 | 0.783 | 0.824 | 0.912 | 0.938
80 | 0.775 | 0.798 | 0.838 | 0.928 | 0.946
90 | 0.786 | 0.803 | 0.846 | 0.937 | 0.954
Accuracy
60 | 0.755 | 0.786 | 0.812 | 0.856 | 0.907
70 | 0.788 | 0.809 | 0.836 | 0.867 | 0.924
80 | 0.809 | 0.824 | 0.848 | 0.873 | 0.935
90 | 0.824 | 0.854 | 0.886 | 0.906 | 0.955
Table 8. Analysis based on optimization using the Reuters dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | TSO+DMN | IIWO+DMN | IWTSO+DMN | HGSO+DMN | CMVO+DMN | HFCVO+DMN
TPR
60 | 0.814 | 0.835 | 0.854 | 0.865 | 0.875 | 0.885
70 | 0.825 | 0.845 | 0.865 | 0.875 | 0.887 | 0.895
80 | 0.841 | 0.854 | 0.875 | 0.887 | 0.905 | 0.925
90 | 0.865 | 0.875 | 0.887 | 0.899 | 0.914 | 0.935
TNR
60 | 0.785 | 0.799 | 0.814 | 0.835 | 0.854 | 0.854
70 | 0.799 | 0.814 | 0.825 | 0.841 | 0.854 | 0.865
80 | 0.825 | 0.837 | 0.848 | 0.865 | 0.886 | 0.901
90 | 0.835 | 0.854 | 0.865 | 0.887 | 0.905 | 0.925
FNR
60 | 0.186 | 0.165 | 0.146 | 0.135 | 0.125 | 0.115
70 | 0.175 | 0.155 | 0.135 | 0.125 | 0.113 | 0.105
80 | 0.159 | 0.146 | 0.125 | 0.113 | 0.095 | 0.075
90 | 0.135 | 0.125 | 0.113 | 0.101 | 0.086 | 0.065
Precision
60 | 0.833 | 0.854 | 0.875 | 0.895 | 0.905 | 0.929
70 | 0.841 | 0.865 | 0.887 | 0.914 | 0.925 | 0.938
80 | 0.854 | 0.875 | 0.895 | 0.920 | 0.927 | 0.953
90 | 0.865 | 0.887 | 0.905 | 0.925 | 0.948 | 0.970
Accuracy
60 | 0.804 | 0.814 | 0.825 | 0.837 | 0.848 | 0.863
70 | 0.814 | 0.825 | 0.837 | 0.847 | 0.854 | 0.872
80 | 0.833 | 0.845 | 0.854 | 0.876 | 0.887 | 0.907
90 | 0.854 | 0.865 | 0.875 | 0.898 | 0.905 | 0.924
Table 9. Analysis based on optimization using the 20 Newsgroups dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | TSO+DMN | IIWO+DMN | IWTSO+DMN | HGSO+DMN | CMVO+DMN | HFCVO+DMN
TPR
60 | 0.841 | 0.865 | 0.875 | 0.897 | 0.925 | 0.946
70 | 0.854 | 0.870 | 0.899 | 0.914 | 0.925 | 0.950
80 | 0.865 | 0.887 | 0.905 | 0.924 | 0.937 | 0.955
90 | 0.887 | 0.904 | 0.914 | 0.933 | 0.954 | 0.963
TNR
60 | 0.802 | 0.825 | 0.847 | 0.865 | 0.885 | 0.898
70 | 0.814 | 0.837 | 0.854 | 0.875 | 0.895 | 0.902
80 | 0.824 | 0.848 | 0.865 | 0.887 | 0.905 | 0.923
90 | 0.841 | 0.865 | 0.885 | 0.895 | 0.925 | 0.939
FNR
60 | 0.159 | 0.135 | 0.125 | 0.103 | 0.075 | 0.054
70 | 0.146 | 0.130 | 0.101 | 0.086 | 0.075 | 0.050
80 | 0.135 | 0.113 | 0.095 | 0.076 | 0.063 | 0.045
90 | 0.113 | 0.096 | 0.086 | 0.067 | 0.046 | 0.037
Precision
60 | 0.854 | 0.865 | 0.887 | 0.905 | 0.914 | 0.936
70 | 0.865 | 0.875 | 0.895 | 0.925 | 0.937 | 0.942
80 | 0.887 | 0.905 | 0.925 | 0.933 | 0.954 | 0.966
90 | 0.897 | 0.925 | 0.941 | 0.951 | 0.962 | 0.974
Accuracy
60 | 0.825 | 0.845 | 0.854 | 0.865 | 0.887 | 0.899
70 | 0.835 | 0.854 | 0.865 | 0.895 | 0.905 | 0.929
80 | 0.841 | 0.865 | 0.875 | 0.924 | 0.935 | 0.944
90 | 0.857 | 0.875 | 0.885 | 0.937 | 0.941 | 0.956
Table 10. Analysis based on optimization using the Real-time dataset for TPR, TNR, FNR, precision, and testing accuracy.

Training Data (%) | TSO+DMN | IIWO+DMN | IWTSO+DMN | HGSO+DMN | CMVO+DMN | HFCVO+DMN
TPR
60 | 0.824 | 0.841 | 0.861 | 0.887 | 0.901 | 0.918
70 | 0.835 | 0.854 | 0.875 | 0.895 | 0.914 | 0.938
80 | 0.854 | 0.865 | 0.887 | 0.905 | 0.925 | 0.946
90 | 0.875 | 0.885 | 0.904 | 0.925 | 0.941 | 0.968
TNR
60 | 0.802 | 0.833 | 0.854 | 0.875 | 0.885 | 0.897
70 | 0.821 | 0.854 | 0.875 | 0.895 | 0.902 | 0.912
80 | 0.841 | 0.865 | 0.887 | 0.905 | 0.914 | 0.924
90 | 0.865 | 0.875 | 0.895 | 0.921 | 0.933 | 0.941
FNR
60 | 0.176 | 0.159 | 0.139 | 0.113 | 0.099 | 0.082
70 | 0.165 | 0.146 | 0.125 | 0.105 | 0.086 | 0.062
80 | 0.146 | 0.135 | 0.113 | 0.095 | 0.075 | 0.054
90 | 0.125 | 0.115 | 0.075 | 0.059 | 0.032 | 0.032
Precision
60 | 0.833 | 0.854 | 0.875 | 0.895 | 0.905 | 0.927
70 | 0.854 | 0.865 | 0.895 | 0.914 | 0.925 | 0.938
80 | 0.875 | 0.885 | 0.905 | 0.925 | 0.935 | 0.946
90 | 0.897 | 0.905 | 0.924 | 0.935 | 0.941 | 0.954
Accuracy
60 | 0.814 | 0.837 | 0.854 | 0.875 | 0.887 | 0.907
70 | 0.825 | 0.841 | 0.865 | 0.899 | 0.904 | 0.924
80 | 0.841 | 0.865 | 0.887 | 0.914 | 0.925 | 0.935
90 | 0.865 | 0.885 | 0.895 | 0.926 | 0.935 | 0.955
