Article

Gynecological Healthcare: Unveiling Pelvic Masses Classification through Evolutionary Gravitational Neocognitron Neural Network Optimized with Nomadic People Optimizer

1 Department of Biomedical Engineering, Hindusthan College of Engineering and Technology, Coimbatore 641032, India
2 Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore 641407, India
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(19), 3131; https://doi.org/10.3390/diagnostics13193131
Submission received: 24 June 2023 / Revised: 1 September 2023 / Accepted: 4 September 2023 / Published: 5 October 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Accurate and early detection of a malignant pelvic mass is important for suitable referral, triage, and further care of women diagnosed with a pelvic mass. Several deep learning (DL) methods have been proposed to detect pelvic masses, but existing methods cannot provide sufficient accuracy and increase the computational time while classifying the pelvic mass. To overcome these issues, in this manuscript, an evolutionary gravitational neocognitron neural network optimized with the nomadic people optimizer is proposed for gynecological abdominal pelvic mass classification (EGNNN-NPOA-PM-UI). The real-time ultrasound pelvic mass images are augmented using random transformations. The augmented images are then given to a 3D Tsallis entropy-based multilevel thresholding technique for extraction of the ROI region, and its features are further extracted with the help of the fast discrete curvelet transform with wrapping (FDCT-WRP) method. The EGNNN optimized with the nomadic people optimizer (NPOA) is then utilized for classifying the gynecological abdominal pelvic masses. The method was executed in PYTHON and its efficiency analyzed under several performance metrics. The proposed EGNNN-NPOA-PM-UI method attained an accuracy of 99.8%. Ultrasound image analysis using the proposed EGNNN-NPOA-PM-UI method can accurately predict pelvic masses compared with the existing methods.

1. Introduction

Gynecological abdominal pelvic mass is a deadly gynecological cancer with a 5-year survival rate of only 45% worldwide [1]. Around 10% of asymptomatic postmenopausal women have a gynecological abdominal pelvic mass, often detected incidentally, of which only 1% are malignant [2,3,4]. More than 50% of gynecological abdominal pelvic masses are found in fertile women, who may experience fertility loss as a result of unnecessary or extensive surgery [5,6]. Therefore, accurate assessment of the risk of malignancy is required to personalize and improve treatment [7]. While preserving fertility, benign masses can be treated conservatively with ultrasound monitoring or minimally invasive laparoscopy [8,9,10]. Women with suspected gynecological abdominal or pelvic masses must be referred immediately to a gyneoncology treatment facility, because such patients are more likely to have their tumors completely removed and have better survival rates after undergoing surgical treatment from gynecological oncologists [11]. Expert ultrasound examination is a vital imaging technique for examining gynecological abdominal pelvic masses [12,13,14]. Although there is a dearth of expertise, ultrasound has higher diagnostic accuracy in the hands of experts than in those of less experienced medical professionals [15,16,17,18]. Amongst the various research works on pelvic mass classification, some of the latest investigations are assessed here.
Christiansen et al. [19] presented ultrasound image analysis using deep neural networks for discriminating between benign and malignant ovarian tumors, with comparison against expert subjective assessment. In this work, transfer learning on three pre-trained DNNs (VGG16, ResNet50, MobileNet) was utilized. The DNN ensemble classified tumors as benign or malignant, or as benign, inconclusive, or malignant. It offers high accuracy but a low F-score.
Hsu et al. [20] suggested an automatic ovarian tumor identification scheme based on an ensemble convolutional neural network with ultrasound imaging, in which three well-known CNN models (AlexNet, GoogLeNet, ResNet) were trained ten times via transfer learning. They repeated the random sampling of training and validation data ten times to ensure stability and robustness, taking the mean of the ten test results as the final evaluation. Following training, the three models were ensembled by weighting classification accuracy against classification time. This attains high precision but high computation time.
Chiappa et al. [21] showed that the adoption of radiomics and machine learning improves the diagnostic procedures for women with ovarian masses. The radiomics method was applied to US images per the International Biomarker Standardization Initiative guidelines. Ovarian masses were divided into three groups: solid, cystic, and mixed. A fully automatic radiomics workflow was obtained with the TRACE4 radiomics platform. It provides low computation time but low accuracy.
Arezzo et al. [22] presented a machine learning method for gynecological ultrasound to predict progression-free survival (PFS) in ovarian cancer patients. Epithelial ovarian cancer (EOC) patients monitored in a tertiary center from 2018 to 2019 were examined in this retrospective observational study, gathering information on patient demographics, clinical traits, the procedure, and post-operative histopathology, together with US inspection data classified using the International Ovarian Tumor Analysis (IOTA) system. The aim was to develop a tool that assesses gynecological ultrasound data with an ML algorithm to predict 12-month PFS in OC patients. An attribute core set was established using proper feature selection, and logistic regression, random forest, and KNN were trained with five-fold cross-validation to predict 12-month PFS. It provides a high F-score but low AUC.
Ravishankar et al. [23] suggested a deep learning model for ovarian cyst identification and classification with the help of a fuzzy rule-based convolutional neural network (OCD-FCNN). Automatic OCD and classification were implemented by the FCNN. It provides high accuracy but low precision.
Akter and Akhter [24] presented ovarian cancer forecasting from ovarian cysts based on TVUS under machine learning strategies, applying random forest, KNN, and XGBoost to the PLCO dataset with TVUS screening across three target variables. It provides high AUC but high computation time.
Athithan et al. [25] presented ultrasound-based ovarian cyst identification with improved machine learning and stage classification depending on enhanced classifiers. Artificial neural networks, discriminant classifiers, and support vector machines were utilized. It provides high precision but low AUC.
Narmatha et al. [26] presented ovarian cyst categorization utilizing a deep reinforcement learning and Harris Hawks optimization (DRL-HHO) approach. Initially, the input ultrasound image was pre-processed by eliminating noise, followed by categorization under the DRL-HHO classifier. It provides higher accuracy but a lower F-score.
EGNNN, a set of deep learning approaches that learns complex representations of images from configurations of several simple non-linear units, has powered recent advances in computerized diagnostics. This method represents a paradigm shift because it uses raw image input rather than hand-designed features, as was the case in the past. In computed tomography (lung cancer), photographic imagery (skin cancer), and mammography (breast cancer), it has been demonstrated that such networks can distinguish between benign and malignant tumors with a performance comparable to that of experienced radiologists. Even though the field is still unfamiliar when it comes to gynecological abdominal pelvic masses, deep networks have demonstrated promising results in the diagnosis of thyroid and breast tumors using ultrasound images. Generally, the EGNNN classifier does not adopt any optimization strategy to determine the optimum parameters that assure an accurate gynecological abdominal pelvic mass classification system. To optimize the EGNNN classifier so that it exactly classifies the pelvic mass type, a nomadic people optimizer (NPOA) is proposed.
The key contributions of this work are abridged below:
  • To find the gynecological abdominal pelvic masses at an early stage.
  • To present a computer aided diagnosis (CAD) system based on the evolutionary gravitational neocognitron neural network (EGNNN) optimized with the nomadic people optimizer (NPOA) using ultrasound images.
  • To acquire better classification accuracy by extracting the optimal radiomic features using the efficient fast discrete curvelet transform with wrapping (FDCT-WRP) method.
  • To lessen the error during classification process.
  • To increase a high area under curve value.
The remaining manuscript is arranged as follows: Section 2 describes the proposed technique, Section 3 presents the outcomes, Section 4 divulges the discussion, and Section 5 concludes the manuscript with references.

2. Materials and Methods

In this manuscript, an evolutionary gravitational neocognitron neural network optimized with nomadic people optimizer is proposed for gynecological abdominal pelvic masses classification (EGNNN-NPOA-PM-UI). The process begins with augmenting the pelvic mass ultrasound image using random transformation methods: simple image rotations and flipping operations such as rotate right 90 degrees, rotate left 90 degrees, flip vertical, flip horizontal, and rotate 180 degrees. Then the ultrasound images are segmented by utilizing 3D Tsallis entropy-based multilevel thresholding for fine segmentation. From the segmented images, the radiomic features are extracted by FDCT-WRP for further processing. Using those features, the pelvic masses are classified by using EGNNN. Several hyperparameters have a considerable influence on the performance of the EGNNN classifier, and proper hyperparameters are necessary to reach better results. Since the trial-and-error model for hyperparameter tuning is a tedious and error-prone process, metaheuristic approaches are employed. Therefore, the nomadic people optimizer (NPOA) is applied for the hyperparameter tuning of the EGNNN classifier. Figure 1 represents the block diagram of the EGNNN-NPOA-PM-UI method. The detailed explanation of the proposed EGNNN-NPOA-PM-UI method is given below.

2.1. Image Acquisition

Retrospectively, 3077 ultrasound images (grayscale, n = 1927; power Doppler, n = 1150) of 758 women with pelvic masses were acquired. In Stockholm, Sweden, between 2010 and 2019, all women underwent a structured expert ultrasound examination before surgery at the gynecological ultrasound divisions of Karolinska University Hospital (tertiary referral center) and Sodersjukhuset (secondary/tertiary referral center). The investigations were performed by one of six investigators with 7–23 years of experience in the evaluation of adnexal lesions. Every examiner was certified as a second-opinion expert sonographer (i.e., expert analyzer) by the Swedish Society of Obstetrics and Gynecology [19]. One examiner evaluated every single case. The local ethics committee granted ethical approval (DNR 2010/145, 2011/343). Surgery had to be performed within 120 days of the ultrasound examination (n = 634). The input ultrasound images were then given to the data augmentation process.

2.2. Image Augmentation Phase

The data augmentation process reduces overfitting and increases the generalization ability of the gynecological abdominal pelvic mass classification model. All 634 ultrasound images were taken from the gynecological ultrasound department of Karolinska University Hospital. Data augmentation increases the training data through random transformations. The random transformation methods involve simple image rotations and flipping operations applied to all ultrasound images: rotate right 90°, rotate left 90°, flip vertical, flip horizontal, and rotate 180° [27]. The image count is thereby raised sixty-three-fold, including the ultrasound images on which augmentation is applied. Since there are more augmented ultrasound images, the chances for the network to learn the suitable features are raised. The augmented ultrasound images are then given as input to the segmentation process. The histological outcomes of the pelvic mass dataset, together with the image augmentation results, are given in Table 1.
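The fixed rotations and flips listed above can be sketched in a few lines of NumPy; the `augment` helper and the toy array are illustrative, not from the paper's implementation:

```python
import numpy as np

def augment(image):
    """Return the original image plus the five transformation
    outputs described above: rotate right 90 deg, rotate left
    90 deg, flip vertical, flip horizontal, rotate 180 deg."""
    return [
        image,
        np.rot90(image, k=-1),   # rotate right 90 deg (clockwise)
        np.rot90(image, k=1),    # rotate left 90 deg (counter-clockwise)
        np.flipud(image),        # flip vertical
        np.fliplr(image),        # flip horizontal
        np.rot90(image, k=2),    # rotate 180 deg
    ]

# toy 2x3 "ultrasound" patch
img = np.arange(6).reshape(2, 3)
variants = augment(img)
```

Applying the same set to every training image multiplies the dataset size, which is the mechanism behind the sixty-three-fold increase reported above.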

2.3. Segmentation Using 3D Tsallis Entropy-Based Multilevel Thresholding

This method considers the mean and median values of neighboring pixels together with the pixel intensity at the 3D histogram level, and has good noise resistance with edge-preservation ability [28]. For a pelvic mass ultrasound image of size $G \times H$ with $K_{int}$ intensity levels in the range $[0, K_{int}-1]$, let $p(a,b)$ denote the intensity of the pixel at $(a,b)$, $p_{mean}(a,b)$ its local mean, and $p_{median}(a,b)$ its local median over a $g \times g$ neighborhood; the three axes of the 3D histogram are indexed by intensity, local mean, and local median. The local mean and median at coordinates $(a,b)$ are given in Equations (1) and (2):

$$p_{mean}(a,b) = \frac{1}{g \times g} \sum_{v=-\frac{g-1}{2}}^{\frac{g-1}{2}} \sum_{u=-\frac{g-1}{2}}^{\frac{g-1}{2}} p(a+v,\, b+u) \tag{1}$$

$$p_{median}(a,b) = \mathrm{median}\left\{\, p(a+v,\, b+u) \;;\; v,u = -\tfrac{g-1}{2}, \ldots, \tfrac{g-1}{2} \,\right\} \tag{2}$$
Here $g$ is fixed at 3 for all pelvic mass ultrasound images at all levels of segmentation. The pixel intensity $p(a,b) = l$ together with the corresponding mean $p_{mean}(a,b) = m$ and median $p_{median}(a,b) = n$ values forms a gray-level triple $(l, m, n)$. Every possible triple in the 3D histogram is represented through its joint probability within a cube of volume $K \times K \times K$, as expressed in Equation (3):

$$Prob(l,m,n) = \frac{\chi_{lmn}}{G \times H} \tag{3}$$

where $\chi_{lmn}$ is the count of occurrences of the triple $(l,m,n)$ and $0 \le l,m,n \le K-1$. Consider an arbitrary threshold point $(t_1, t_2, a_1, a_2, b_1, b_2)$ in the 3D histogram for tri-level thresholding, where $(t_1, t_2)$ are thresholds on the pixel gray levels, $(a_1, a_2)$ on the local mean, and $(b_1, b_2)$ on the local median. The probability mass of an object class (here the class above the upper thresholds) is expressed in Equations (4) and (5):

$$Prob(t,a,b) = P_{cube}(t,a,b) = \sum_{l=t_2+1}^{K-1} \sum_{m=a_2+1}^{K-1} \sum_{n=b_2+1}^{K-1} Prob(l,m,n) \tag{4}$$

where

$$t = (t_1, t_2), \quad a = (a_1, a_2), \quad b = (b_1, b_2) \tag{5}$$
The 3D Tsallis entropy-based multilevel thresholds ($ML$ levels) for gynecological abdominal pelvic mass segmentation are determined from Equation (6):

$$(t^*, a^*, b^*) = \arg\max \left( \sum_{z=1}^{ML+1} En_z^{\lambda} + (1-\lambda) \prod_{z=1}^{ML+1} En_z^{\lambda} \right) \tag{6}$$

where the Tsallis entropy of class $z$ is

$$En_z^{\lambda}(t,a,b) = \frac{1}{\lambda - 1} \left( 1 - \sum_{l=t_z+1}^{t_{z+1}} \sum_{m=a_z+1}^{a_{z+1}} \sum_{n=b_z+1}^{b_{z+1}} \left( \frac{Prob(l,m,n)}{P_{cube}^{\,z}(t,a,b)} \right)^{\lambda} \right)$$

and $\lambda$ denotes the Tsallis entropy index. Through this, the RoI region of the gynecological abdominal pelvic masses is extracted, and the outputs are given to the feature extraction phase.
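As an illustration of the Tsallis criterion, the sketch below applies a 1-D, bi-level simplification of the objective in Equation (6) to a gray-level histogram; the full method operates on the 3-D (intensity, mean, median) histogram with multiple thresholds, so this is only a minimal sketch and the function name is illustrative:

```python
import numpy as np

def tsallis_threshold(hist, lam=0.8):
    """Pick the gray level t that maximizes the bi-level Tsallis
    objective S1 + S2 + (1 - lam) * S1 * S2 for a histogram
    (1-D simplification of the multilevel criterion)."""
    p = hist / hist.sum()                      # normalize to probabilities
    best_t, best_val = 1, -np.inf
    for t in range(1, len(p) - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()      # class probability masses
        if p1 == 0 or p2 == 0:
            continue
        # Tsallis entropy of each class with index lam
        s1 = (1 - ((p[:t] / p1) ** lam).sum()) / (lam - 1)
        s2 = (1 - ((p[t:] / p2) ** lam).sum()) / (lam - 1)
        val = s1 + s2 + (1 - lam) * s1 * s2
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```

For a cleanly bimodal histogram the maximizer falls between the two modes, which is the behavior the segmentation stage relies on.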

2.4. Feature Extraction Using Fast Discrete Curvelet Transform with Wrapping Method

The feature extraction process plays a vital part in the discovery of gynecological abdominal pelvic masses. Using the fast discrete curvelet transform with wrapping method, various kinds of features are extracted from the segmented pelvic mass ultrasound image. For a segmented image $R$, the curvelet transform $CL(a,b,c)$ is computed as the inner product of $R$ and the curvelet basis function $\Psi_{a,b,c}$, as expressed in Equation (7):

$$CL(a,b,c) = \langle R, \Psi_{a,b,c} \rangle \tag{7}$$

where $a$, $b$, $c$ signify scale, position, and direction, respectively. Through the curvelet transform, every segmented pelvic mass ultrasound image is divided into several windows at different scales and directions [29]. The discrete curvelet transform of the input image $R(c_1, d_1)$, with $0 \le c_1, d_1 < n$, is expressed in Equation (8):

$$CL^{\hat{d}}(a,b,c) = \sum_{0 \le c_1, d_1 < n} R(c_1, d_1)\, \overline{\Psi_{a,b,c}^{\hat{d}}(c_1, d_1)} \tag{8}$$

where $\Psi_{a,b,c}^{\hat{d}}$ is the digital curvelet waveform. The second-generation curvelet transform has two implementations: wrapping (WRP) and unequally spaced fast Fourier transforms (USFFT). Both are less redundant, faster, and simpler than the first-generation curvelet; between the two, FDCT-WRP is easier and faster than USFFT. At every scale and orientation, the FDCT coefficients are assembled using the WRP to form a feature vector. With $n_c \times n_s$ the size of the segmented pelvic mass ultrasound image, the number of decomposition scales for the texture $T_e$ features is obtained from Equation (9):

$$\hat{R}(T_e) = \log_2\!\big(\min(n_c, n_s)\big) - 3 \tag{9}$$

Since every segmented pelvic mass ultrasound image is of size $128 \times 128$, $\hat{R} = 4$, i.e., every image is decomposed into 4 levels of the curvelet transform. The radiomic features extracted from the FDCT-WRP coefficient matrix are delineated in Equations (10)–(20):
$$mean_x = \frac{1}{i \times j} \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} a \times F(a,b) \tag{10}$$

$$mean_y = \frac{1}{i \times j} \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} b \times F(a,b) \tag{11}$$

where $F(a,b)$ denotes the FDCT-WRP coefficient matrix and $i$ and $j$ are its height and width in pixels.

$$SD_x = \sqrt{ \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b)\,(a - mean_x)^2 } \tag{12}$$

$$SD_y = \sqrt{ \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b)\,(b - mean_y)^2 } \tag{13}$$

$$Skewness = \frac{1}{i \times j} \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} \left( \frac{F(a,b) - mean}{SD} \right)^3 \tag{14}$$

$$Contrast = \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b)\,(a-b)^2 \tag{15}$$

$$Energy = \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b)^2 \tag{16}$$

$$Homogeneity = \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} \frac{F(a,b)}{1 + |a-b|} \tag{17}$$

$$Entropy = -\sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b) \log_e F(a,b) \tag{18}$$

$$Correlation = \frac{ \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} F(a,b)\,a\,b \;-\; mean_x \times mean_y }{ SD_x \times SD_y } \tag{19}$$

$$Dissimilarity = \sum_{a=0}^{i-1} \sum_{b=0}^{j-1} |a-b|\, F(a,b) \tag{20}$$
Then, the extracted radiomic features are given to the classification phase.
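A few of the co-occurrence-style descriptors in Equations (15)–(20) can be sketched directly from a coefficient matrix; the `texture_features` helper and the toy matrix below are illustrative, assuming a normalized matrix `F`:

```python
import numpy as np

def texture_features(F):
    """Compute contrast, energy, homogeneity, and dissimilarity
    (Eqs. (15)-(17), (20)) from a coefficient matrix F."""
    i, j = F.shape
    # index grids: a = row index, b = column index of each entry
    a, b = np.meshgrid(np.arange(i), np.arange(j), indexing="ij")
    return {
        "contrast": float((F * (a - b) ** 2).sum()),
        "energy": float((F ** 2).sum()),
        "homogeneity": float((F / (1 + np.abs(a - b))).sum()),
        "dissimilarity": float((np.abs(a - b) * F).sum()),
    }

# toy diagonal matrix: all weight on a == b, so contrast
# and dissimilarity vanish
feats = texture_features(np.eye(2))
```

The same pattern extends to the remaining statistics (means, standard deviations, skewness, entropy, correlation) by reusing the `a`, `b` index grids.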

2.5. Classification Utilizing EGNNN

In this work, EGNNN is utilized for classifying the gynecological abdominal pelvic masses. EGNNN operates two units, a complex cell and a simple cell, for examining the pelvic mass information. When processing the chosen features, the neocognitron neural network makes use of its higher layers, which are made up of weights and bias values. Utilizing the evolutionary gravitational process, the weights and bias values are optimized to produce the desired results. Initially, the weight vector of every layer in the search space is computed in Equation (21):

$$Z_q = (z_q^1, \ldots, z_q^e, \ldots, z_q^m), \quad q = 1, 2, \ldots \tag{21}$$
where $z_q^e$ represents the weight of the $q$th candidate in dimension $e$, lying in $[0, 1]$. The mass $M_q(a)$ of the feature-extracted image is evaluated using the activation function value, as given in Equations (22) and (23):

$$M_q(a) = \frac{active_q(a) - poor(a)}{good(a) - poor(a)} \tag{22}$$

$$Q_q(a) = \frac{M_q(a)}{\sum_{i=1}^{n} M_i(a)} \tag{23}$$
where the mass $M_q(a)$ is calculated along the force direction and $active_q(a)$ represents the activation function value at time $a$. The terms $good(a)$ and $poor(a)$ are depicted in Equations (24) and (25):

$$good(a) = \min_{q \in \{1, \ldots, n\}} active_q(a) \tag{24}$$

$$poor(a) = \max_{q \in \{1, \ldots, n\}} active_q(a) \tag{25}$$
Then the feature distance is given in Equation (26):

$$D_{qi}^{f} = C(a)\, \frac{Q_q(a) \times Q_i(a)}{(L_{qi}(a))^{b} + \rho} \tag{26}$$

where $D_{qi}^{f}$ states the feature mass with the dimension, $C(a)$ denotes the force of the image, $Q_q(a)$ and $Q_i(a)$ are the two mass values, and $L_{qi}(a)$ is the distance measurement. Equation (27) gives the value of $C(a)$:

$$C(a) = C(C_0 \cdot a) \tag{27}$$
where $C_0$ denotes the gravitational weight [30]. Finally, the comparisons are evaluated using the position and velocity values for the pre-processed image, as mentioned in Equation (28):

$$y_q^{f}(a+1) = y_q^{f}(a) + \alpha_q^{f}(a+1) \tag{28}$$

The above result is refreshed accordingly during testing, and the registered output is passed through the logistic function of Equation (29):

$$\gamma(g) = \frac{1}{1 + e^{-g}} \tag{29}$$
By using EGNNN, three types of pelvic masses are classified, namely benign, malignant, and borderline (serous and mucinous). Generally, EGNNN does not adopt any optimization mode for scaling the optimum parameters that assure accurate categorization of pelvic masses. Hence, NPOA is employed to optimize the weight parameters of EGNNN.

Optimize the Parameters of EGNNN Utilizing Nomadic People Optimizer

Nomadic people optimizer (NPOA) is a swarm-based meta-heuristic algorithm. The NPOA algorithm contains various clans, and every clan searches for the best place or solution based on its leader's position. Here, the step-by-step method to attain the optimum EGNNN values using NPOA is described. NPOA achieves a seamless transition from exploration to exploitation and reaches global optima more quickly, allowing it to arrive at the ideal fitness solution faster. The NPOA approach is chosen for its good performance in solving high-dimensional complex problems. The NPOA algorithm tunes the weight parameters $M_q(a)$ and $Q_q(a)$ of EGNNN to obtain accurate pelvic mass classification. The flowchart of the nomadic people optimizer for optimizing the EGNNN classifier is given in Figure 2. The detailed process of NPOA is described below.
Step 1: Initialization
A set of leaders $\upsilon$, where $\upsilon_j = \upsilon_1, \upsilon_2, \ldots, \#Clans$, is initialized for optimizing the EGNNN weight parameters, as given in Equation (30):

$$\upsilon_d = (ub - lb) \times RD + lb \tag{30}$$

where $ub$ and $lb$ denote the upper and lower bounds, $RD$ represents a random value between 0 and 1, and $\upsilon_d$ indicates the leader position of clan $d$.
Step 2: Random generation
With the help of the nomadic people optimizer, the input parameters of the evolutionary gravitational neocognitron neural network classifier are generated randomly after initialization.
Step 3: Fitness Function
The fitness function is determined based on Equation (31):

$$Fitness = optimize\big(M_q(a) \text{ and } Q_q(a)\big) \tag{31}$$
Step 4: Exploitation behavior of local search for optimizing $M_q(a)$
A set of families $Y$, where $Y_j = Y_1, Y_2, \ldots, \#families$, is assigned to the corresponding leader $\upsilon$. Since solutions are stated directly in the search space, the problem does not need $(x, y)$ coordinates; the depiction of a solution is therefore unary (single-dimensional) rather than two-dimensional [31]. The distribution of tents around the leader's tent only requires a random $x$ coordinate, eliminating the unneeded $Y$ coordinate. It is expressed in Equation (32):

$$Y_d = \upsilon_d \times RD \times \cos\theta \tag{32}$$

where $Y_d$ indicates the family position, $\upsilon_d$ the leader position of the clan, and $RD$ a random number in the $[0,1]$ range.
Step 5: Exploration behavior of global search for optimizing $Q_q(a)$
If the swarm does not contain any new local best solution, exploration is executed. In this case, the families search for superior positions far away from the present local best. The directions are generated using the Lévy flight formula expressed in Equation (33):

$$Y_j^{new} = Y_j^{old} + N_d \left| \upsilon_d - Y_j^{old} \right| \otimes Le \tag{33}$$

where $Y_j^{new}$ and $Y_j^{old}$ denote the current family's new and old positions, $N_d$ denotes the area of the clan, and $Le$ is the Lévy-flight step.
Step 6: Termination
At termination, the optimum hyperparameters $M_q(a)$ and $Q_q(a)$ of EGNNN have been optimized by the NPOA algorithm, iterating $i = i + 1$ until the halting criterion is fulfilled. Finally, the EGNNN-NPOA classifier precisely classifies the pelvic masses as benign, malignant, or borderline (serous and mucinous) with higher accuracy.
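Under stated assumptions, the NPOA loop can be sketched as follows; this toy version scatters families around each leader (an additive variant of the placement in Equation (32)) and uses a Gaussian step in place of the Lévy flight of Equation (33), with all names and parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def npoa_minimize(fitness, lb, ub, n_clans=3, n_families=5, iters=50):
    """Toy NPOA sketch: leaders initialized per Eq. (30); families
    scattered around each leader (exploitation); a random jump
    stands in for Levy-flight exploration when no family improves."""
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    leaders = (ub - lb) * rng.random((n_clans, dim)) + lb      # Eq. (30)
    for _ in range(iters):
        for c in range(n_clans):
            theta = rng.random((n_families, dim)) * 2 * np.pi
            # exploitation: tents scattered around the leader's tent
            fams = leaders[c] + rng.random((n_families, dim)) * np.cos(theta)
            fams = np.clip(fams, lb, ub)
            vals = np.array([fitness(f) for f in fams])
            if vals.min() < fitness(leaders[c]):
                leaders[c] = fams[vals.argmin()]
            else:
                # exploration: jump away from the stale local best
                step = np.abs(leaders[c] - fams[0]) * rng.standard_normal(dim)
                cand = np.clip(fams[0] + step, lb, ub)
                if fitness(cand) < fitness(leaders[c]):
                    leaders[c] = cand
    best = min(leaders, key=fitness)
    return best, fitness(best)

def sphere(x):
    """Stand-in fitness; the paper's fitness (Eq. (31)) evaluates
    the EGNNN weight parameters instead."""
    return float((x ** 2).sum())

best, fb = npoa_minimize(sphere, [-5, -5], [5, 5])
```

The accept-only-if-better update mirrors how the leader of each clan is replaced only when a family finds a strictly fitter position.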

3. Results

The proposed evolutionary gravitational neocognitron neural network optimized with the nomadic people optimizer for gynecological abdominal pelvic mass classification (EGNNN-NPOA-PM-UI) is implemented in PYTHON. The implementation runs on a PC with an Intel Core i7 7th Gen processor at 3.2 GHz, 8 GB RAM, and Windows 7. The effectiveness of the EGNNN-NPOA-PM-UI method is evaluated under several performance metrics. The obtained results are analyzed against existing models, namely ultrasound image examination utilizing DNNs for differentiating between benign and malignant ovarian tumors (DNN-VGG16-ResNet50-MobileNet-PM-UI) [19]; an automatic ovarian tumor identification scheme based on an ensemble convolutional neural network and ultrasound imaging (CNN-Grad-CAM-PM-UI) [20]; adoption of radiomics and machine learning to upgrade the diagnostic process for women with ovarian masses (SVM-PM-UI) [21]; a machine learning method used with gynecological ultrasound to forecast progression-free survival in ovarian tumor patients (LR-RFF-KNN-PM-UI) [22]; a deep learning method for ovarian cyst identification and categorization (OCD-FCNN) using a fuzzy convolutional neural network (FCNN-PM-UI) [23]; ovarian cancer estimation from ovarian cysts based on TVUS under machine learning strategies (RF-KNN-XGBoost-PM-UI) [24]; ultrasound-based ovarian cyst detection with improved machine-learning strategies and stage classification under enhanced classifiers (ANN-DC-SVM-PM-UI) [25]; and ovarian cyst categorization utilizing deep reinforcement learning and Harris Hawks optimization (DQN-HHOA-PM-UI) [26]. K-fold cross-validation is considered. First, the data are split into two parts, training and testing, with a randomly sampled 3:2 training/testing ratio. The training part is further separated into five equal folds and the method is trained for five runs; a single fold is used for validation during each run, with the remaining four folds training the method.
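The five-fold split described above can be sketched as follows (the helper name and seed are illustrative):

```python
import numpy as np

def five_fold_indices(n, seed=0):
    """Split n training samples into five equal folds, as described
    above; yields (train_idx, val_idx) pairs, one fold held out
    for validation per run."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, 5)
    for k in range(5):
        val = folds[k]
        train = np.concatenate([folds[m] for m in range(5) if m != k])
        yield train, val

splits = list(five_fold_indices(100))
```

Each of the five runs trains on four folds and validates on the held-out fold, so every training sample is used for validation exactly once.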
The final model parameters are determined from the run with the best testing accuracy. In this work, 23,965 ultrasound pelvic mass images were taken for training and 15,977 ultrasound pelvic mass images for testing. Data augmentation methods were used to prevent overfitting. The hyperparameters are as follows: batch size 12; initial learning rate $1.32 \times 10^{-3}$; the learning rate is multiplied by 0.1 every 10 epochs. The confusion matrix of the proposed EGNNN-NPOA-PM-UI method for the testing ultrasound pelvic mass images is represented in Table 2. The output of the EGNNN-NPOA-PM-UI method is given in Figure 3: the input ultrasound images in Figure 3a, the segmented outputs in Figure 3b, and the classification output in Figure 3c.
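The stated schedule (multiply the learning rate by 0.1 every 10 epochs) can be sketched as a step-decay function; the negative exponent on the initial rate is assumed, since learning rates of that magnitude are conventional:

```python
def step_decay_lr(epoch, initial_lr=1.32e-3, drop=0.1, every=10):
    """Learning rate after `epoch` epochs under the stated schedule:
    start at initial_lr and multiply by `drop` every `every` epochs."""
    return initial_lr * (drop ** (epoch // every))
```

For example, the rate stays at 1.32e-3 for epochs 0-9, drops to 1.32e-4 at epoch 10, and to 1.32e-5 at epoch 20.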

Performance Analysis

Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 and Figure 4 depict the simulation results of the proposed EGNNN-NPOA-PM-UI method. Then the proposed EGNNN-NPOA-PM-UI method is likened with existing systems, namely, DNN-VGG16-ResNet50-MobileNet-PM-UI [19]; CNN-Grad-CAM-PM-UI [20]; SVM-PM-UI [21]; LR-RFF-KNN-PM-UI [22]; FCNN-PM-UI [23]; RF-KNN-XGBoost-PM-UI [24]; ANN-DC-SVM-PM-UI [25] and DQN-HHOA-PM-UI [26], respectively.
Table 3 represents the accuracy analysis. Here, the EGNNN-NPOA-PM-UI method attains 32.39%, 26.21%, 22.65%, 17.87%, 29.14%, 15.82%, 17.6%, and 13.84% higher accuracy for benign; 26.84%, 21.15%, 18.14%, 23.09%, 26.19%, 14.62%, 31.51%, and 16.22% higher accuracy for borderline (serous and mucinous); and 22.07%, 26.07%, 19.14%, 25.14%, 18.44%, 20.59%, 25.93%, and 16.63% higher accuracy for malignant than the existing methods DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, and DQN-HHOA-PM-UI, respectively.
Table 4 tabulates precision analysis. Here, the EGNNN-NPOA-PM-UI method attains 23.10%, 26.21%, 33.28%, 17.87%, 29.14%, 16.23%, 13.84% and 17.6% higher precision for benign; 18.43%, 21.15%, 27.81%, 23.81%, 26.20%, 20.71%, 13.58%, and 31.51% higher precision for borderline (serous and mucinous); 19.84%, 27.32%, 22.93%, 26.51%, 18.98%, 21.29%, 17.17%, and 26.51% higher precision for malignant with existing methods such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, DQN-HHOA-PM-UI, respectively.
Table 5 depicts the specificity analysis. Here, the EGNNN-NPOA-PM-UI method attains 29.10%, 16.19%, 14.67%, 17.67%, 22.67%, 32.45%, 25.90% and 18.82% higher specificity for benign; 26.45%, 14.70%, 20.19%, 17.56%, 31.89%, 12.90%, 21.10%, and 23.04% higher specificity for borderline (serous and mucinous); 12.34%, 20.78%, 17.89%, 14.89%, 20.78%, 24.87%, 30.89%, and 25.76% higher specificity for malignant with existing methods such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, DQN-HHOA-PM-UI, respectively.
Table 6 depicts the sensitivity analysis. Here, the EGNNN-NPOA-PM-UI method attains 24.78%, 30.12%, 15.87%, 17.90%, 31.78%, 12.89%, 11.90% and 18.90% higher sensitivity for benign; 17.9%, 12.28%, 13.89%, 32.09%, 13.78%, 18.90%, 13.87% and 16.6% higher sensitivity for borderline (serous and mucinous); 13.90%, 15.78%, 11.89%, 20.89%, 21.89%, 17.89%, 12.90%, and 11.89% higher sensitivity for malignant with existing methods such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, DQN-HHOA-PM-UI, respectively.
Table 7 displays F1-score analysis. Here, the EGNNN-NPOA-PM-UI method attains 36.93%, 33.28%, 23.41%, 25.90%, 19%, 17.60%, 28.16%, and 21.90% higher F1-score for benign; 33.28%, 25.73%, 21.90%, 18.01%, 17.32%, 14.37%, 25.73%, and 14.89% higher F1-score for borderline (serous and mucinous); 29.47%, 33.62%, 18.28%, 22.04%, 13.07%, 11.93%, 19.70% and 16.22% higher F1-Score for Malignant with existing DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, DQN-HHOA-PM-UI models, respectively.
Table 8 depicts computation time analysis. Here, the EGNNN-NPOA-PM-UI method attains 67.94%, 65.28%, 60.85%, 63.34%, 59.11%, 40.64%, 52.82%, and 48.31% lower computation time with existing methods such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, DQN-HHOA-PM-UI, respectively.
Figure 4 depicts the ROC curve for detection of gynecological abdominal pelvic masses. Then, the ROC of the proposed EGNNN-NPOA-PM-UI method provides 16.78%, 13.71%, 11.04%, 9.94%, 6.53%, 8.98%, 7.45%, and 5.73% higher area under curve (AUC) than the existing methods, like DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI and DQN-HHOA-PM-UI, respectively.

4. Discussion

Ultrasound image analysis using the proposed EGNNN-NPOA-PM-UI method can predict pelvic masses with diagnostic accuracy comparable to that of the existing methods, such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, and DQN-HHOA-PM-UI. Selecting an ROI requires considerably less operator involvement and domain expertise. The key to the effectiveness of the proposed EGNNN-NPOA-PM-UI method is its capacity to learn highly representative features directly from large datasets of raw images, at various scales and levels of abstraction. As a result, the learned features are more discriminative than traditional handcrafted descriptors. The proposed method is simple to deploy, because any center could upload a collection of anonymized images directly from its workstation to a cloud platform that hosts the model, without first having to evaluate the images or provide additional patient data. The proposed EGNNN-NPOA-PM-UI method for classifying pelvic masses can be used by non-expert investigators, and it is also helpful to specialists as a second reader. Because they have limited access to a second opinion from ultrasound specialists, many clinics and private practitioners may apply simple rules to label cases that are inconclusive for malignancy, or use the simple-rules risk (SRR) score, to achieve acceptable sensitivity while following up all patients.
High specificity is as important as high sensitivity, because an increasing number of tumors are discovered incidentally. Unnecessary surgery wastes healthcare resources and can cause morbidity, sometimes even loss of fertility. In a large randomized screening study, one-third of women with false-positive pelvic mass results underwent adnexal surgery, and 15% of these women experienced at least one major complication, which highlights the importance of reducing false-positive diagnoses. The proposed EGNNN-NPOA-PM-UI method achieves better results, with an accuracy of 99.8%. It effectively classifies pelvic mass ultrasound images as borderline (serous and mucinous), benign, or malignant, showing performance equivalent to, or even above, that of clinicians. It has the potential to enable the automatic classification of pelvic mass types as borderline (serous and mucinous), benign, and malignant from ultrasound images worldwide, which is both intellectually intriguing and clinically significant. Based on these findings, the proposed EGNNN-NPOA-PM-UI technique is considered a strong choice for pelvic mass classification.

5. Conclusions

In this work, the evolutionary gravitational neocognitron neural network optimized with the nomadic people optimizer was successfully implemented for classifying gynecological abdominal pelvic masses as benign, malignant, or borderline (serous and mucinous). The simulation was conducted in Python, and its effectiveness was examined with the above-mentioned performance metrics. The proposed EGNNN-NPOA-PM-UI method attains a 16.78%, 13.71%, 11.04%, 9.94%, 6.53%, 8.98%, 7.45%, and 5.73% higher area under the curve (AUC); 67.94%, 65.28%, 60.85%, 63.34%, 59.11%, 40.64%, 52.82%, and 48.31% lower computation time; 31.47%, 32.62%, 19.28%, 21.04%, 14.07%, 13.93%, 21.70%, and 18.22% higher F1-score; and 25.84%, 23.15%, 19.14%, 24.09%, 25.19%, 16.62%, 29.51%, and 17.22% higher accuracy compared with the existing methods, such as DNN-VGG16-ResNet50-MobileNet-PM-UI, CNN-Grad-CAM-PM-UI, SVM-PM-UI, LR-RFF-KNN-PM-UI, FCNN-PM-UI, RF-KNN-XGBoost-PM-UI, ANN-DC-SVM-PM-UI, and DQN-HHOA-PM-UI, respectively. The proposed EGNNN-NPOA-PM-UI technique is accurate, simple to implement, and can be easily adapted to other medical imaging tasks.
This study exhibits notable strengths in its attempt to enhance gynecological healthcare through the classification of pelvic masses; however, certain limitations must be acknowledged. One primary constraint is the relatively small sample size used for the analysis, which may limit the generalizability of the findings and potentially introduce sampling bias. Further validation on a larger and more diverse dataset is needed to ensure the robustness and reliability of the proposed classification model. In addition, exploring alternative optimization algorithms is a promising direction for future research, as it may further improve model performance. Addressing these limitations will contribute to the advancement of pelvic mass classification in gynecological healthcare.

Author Contributions

Writing—original draft, M.K.; Supervision, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Proposed EGNNN-NPOA-PM-UI technique.
Figure 2. Flowchart representation of nomadic people optimizer for optimizing EGNNN classifier.
Figure 3. Output image of proposed EGNNN-NPOA-PM-UI method.
Figure 4. Performance of ROC analysis.
Table 1. Histological output of all women with pelvic masses dataset details with image augmentation result.
Histological Output | All Cases (n = 634) | After Augmentation
Benign | 325 | 20,475
  Endometrioma | 46 | 2898
  Dermoid | 74 | 4662
  Simple/functional cyst | 31 | 1953
  Paraovarian cyst | 12 | 756
  Rare benign | 9 | 567
  (Hydro-)pyosalpinx | 14 | 882
  Fibroma/myoma | 25 | 1575
  Cystadenoma/cystadenofibroma | 108 | 6804
  Peritoneal/inclusion cyst | 6 | 378
Borderline | 55 | 3465
  Serous | 35 | 2205
  Mucinous | 20 | 1260
Malignant | 254 | 16,002
  Epithelial ovarian cancer | 169 | 10,647
  Non-epithelial ovarian cancer | 28 | 1764
  Metastatic ovarian tumor | 57 | 3591
Total | 634 | 39,942
Table 2. Confusion matrix for testing ultrasound pelvic mass image.
Actual \ Predicted | Benign | Borderline (Serous and Mucinous) | Malignant
Benign | 7977 | 2 | 1
Borderline (Serous and Mucinous) | 1 | 2076 | 0
Malignant | 1 | 2 | 5917
Table 3. Performance of accuracy analysis.
Methods | Benign (%) | Borderline (Serous and Mucinous) (%) | Malignant (%)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 75.5 | 78.8 | 81.5
CNN-Grad-CAM-PM-UI | 79.2 | 82.5 | 78.5
SVM-PM-UI | 81.5 | 84.6 | 83.5
LR-RFF-KNN-PM-UI | 84.8 | 81.2 | 79.5
FCNN-PM-UI | 77.4 | 79.2 | 84
RF-KNN-XGBoost-PM-UI | 86.3 | 87.2 | 82.5
ANN-DC-SVM-PM-UI | 85 | 76 | 79
DQN-HHOA-PM-UI | 87.8 | 86 | 85.3
EGNNN-NPOA-PM-UI (Proposed) | 99.96 | 99.95 | 99.49
Table 4. Performance of precision analysis.
Methods | Benign (%) | Borderline (Serous and Mucinous) (%) | Malignant (%)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 81.2 | 84.4 | 83.4
CNN-Grad-CAM-PM-UI | 79.2 | 82.5 | 78.5
SVM-PM-UI | 75 | 78.2 | 81.3
LR-RFF-KNN-PM-UI | 84.8 | 81.2 | 79
FCNN-PM-UI | 77.4 | 79.2 | 84
RF-KNN-XGBoost-PM-UI | 86 | 82.8 | 82.4
ANN-DC-SVM-PM-UI | 87.8 | 88 | 85.3
DQN-HHOA-PM-UI | 85 | 76 | 79
EGNNN-NPOA-PM-UI (Proposed) | 99.96 | 99.955 | 99.95
Table 5. Performance of specificity analysis.
Methods | Benign (%) | Borderline (Serous and Mucinous) (%) | Malignant (%)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 77.4 | 79.2 | 84
CNN-Grad-CAM-PM-UI | 86 | 88 | 82.5
SVM-PM-UI | 87.2 | 86.8 | 85.3
LR-RFF-KNN-PM-UI | 85 | 76 | 79
FCNN-PM-UI | 81.4 | 84.4 | 83.1
RF-KNN-XGBoost-PM-UI | 75.3 | 78.2 | 82.3
ANN-DC-SVM-PM-UI | 79.5 | 82.5 | 79.5
DQN-HHOA-PM-UI | 84.1 | 81.2 | 79
EGNNN-NPOA-PM-UI (Proposed) | 99.93 | 99.91 | 99.94
Table 6. Performance of sensitivity analysis.
Methods | Benign (%) | Borderline (Serous and Mucinous) (%) | Malignant (%)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 77.5 | 79.5 | 83.5
CNN-Grad-CAM-PM-UI | 85.5 | 83.2 | 81.5
SVM-PM-UI | 75.5 | 76.2 | 79.3
LR-RFF-KNN-PM-UI | 83.2 | 81.2 | 78.1
FCNN-PM-UI | 81.3 | 82.5 | 84.5
RF-KNN-XGBoost-PM-UI | 87.2 | 86 | 88
ANN-DC-SVM-PM-UI | 79 | 83.5 | 81.1
DQN-HHOA-PM-UI | 85.5 | 80.5 | 82.8
EGNNN-NPOA-PM-UI (Proposed) | 99.91 | 99.91 | 99.43
Table 7. Performance of F1-score analysis.
Methods | Benign (%) | Borderline (Serous and Mucinous) (%) | Malignant (%)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 73 | 75 | 77.2
CNN-Grad-CAM-PM-UI | 75 | 79.5 | 74.8
SVM-PM-UI | 81 | 82 | 84.5
LR-RFF-KNN-PM-UI | 79.4 | 84.7 | 81.9
FCNN-PM-UI | 84 | 85.2 | 88.4
RF-KNN-XGBoost-PM-UI | 85 | 87.4 | 89.3
ANN-DC-SVM-PM-UI | 78 | 79.5 | 83.5
DQN-HHOA-PM-UI | 82 | 87 | 86
EGNNN-NPOA-PM-UI (Proposed) | 99.965 | 99.96 | 99.955
Table 8. Performance of computation time analysis.
Methods | Computation Time (ms)
DNN-VGG16-ResNet50-MobileNet-PM-UI | 287
CNN-Grad-CAM-PM-UI | 265
SVM-PM-UI | 235
LR-RFF-KNN-PM-UI | 251
FCNN-PM-UI | 225
RF-KNN-XGBoost-PM-UI | 155
ANN-DC-SVM-PM-UI | 195
DQN-HHOA-PM-UI | 178
EGNNN-NPOA-PM-UI (Proposed) | 92

Share and Cite

MDPI and ACS Style

Deeparani, M.; Kalamani, M. Gynecological Healthcare: Unveiling Pelvic Masses Classification through Evolutionary Gravitational Neocognitron Neural Network Optimized with Nomadic People Optimizer. Diagnostics 2023, 13, 3131. https://doi.org/10.3390/diagnostics13193131
