Article

Human Hepatocellular Carcinoma Classification from H&E Stained Histopathology Images with 3D Convolutional Neural Networks and Focal Loss Function

by Umut Cinar *, Rengul Cetin Atalay and Yasemin Yardimci Cetin
Graduate School of Informatics, Middle East Technical University, Ankara 06800, Turkey
*
Author to whom correspondence should be addressed.
J. Imaging 2023, 9(2), 25; https://doi.org/10.3390/jimaging9020025
Submission received: 20 December 2022 / Revised: 13 January 2023 / Accepted: 18 January 2023 / Published: 21 January 2023

Abstract

This paper proposes a new Hepatocellular Carcinoma (HCC) classification method utilizing a hyperspectral imaging system (HSI) integrated with a light microscope. Using our custom imaging system, we have captured 270 bands of hyperspectral images of healthy and cancer tissue samples with HCC diagnosis from a liver microarray slide. Convolutional Neural Networks with 3D convolutions (3D-CNN) have been used to build an accurate classification model. With the help of 3D convolutions, spectral and spatial features within the hyperspectral cube are incorporated to train a strong classifier. Unlike 2D convolutions, 3D convolutions take the spectral dimension into account while automatically collecting distinctive features during the CNN training stage. As a result, we have avoided manual feature engineering on hyperspectral data and proposed a compact method for HSI medical applications. Moreover, the focal loss function, utilized as a CNN cost function, enables our model to tackle the class imbalance problem residing in the dataset effectively. The focal loss function emphasizes the hard examples to learn and prevents overfitting due to the lack of inter-class balancing. Our empirical results demonstrate the superiority of hyperspectral data over RGB data for liver cancer tissue classification. We have observed that increased spectral dimension results in higher classification accuracy. Both spectral and spatial features are essential in training an accurate learner for cancer tissue classification.

1. Introduction

In 2020, approximately 960,000 new liver cancer cases were diagnosed, and 830,000 deaths due to liver cancer were reported. Liver cancer is the sixth most commonly diagnosed cancer and the third leading cause of cancer death globally. Hepatocellular carcinoma (HCC) is the most common type of liver cancer, accounting for roughly 80% of cases [1]. In 70–90% of HCC patients, the risk factors include hepatitis B virus, hepatitis C virus, exposure to aflatoxin B1, cirrhosis, and excessive alcohol consumption. Tumor nodules can be monitored with ultrasonography at the early stages, and resection is considered the primary treatment method for patients with sufficient liver functionality and small, solitary tumors. For more complex cases, treatment procedures might include liver transplantation, chemoembolization, and molecular-targeted therapies [2]. Pathology is essential for diagnosing HCC, grading the disease, detecting the risk of recurrence after surgery, and developing new treatment techniques, including medicines. A variety of genetic mutations accumulate within cells during cancer progression, and expert pathologists can differentiate cancerous tissues in samples obtained by biopsy. However, the tissue examination process is tedious and time-consuming for expert pathologists [3].
A significant number of researchers have worked on computer-aided diagnosis (CAD), including the tumor detection problem, for many years [4], and various methods have been proposed for detecting cancerous tumors in pathological images. Using RGB data, the study [5] suggests using Atrous Spatial Pyramid Pooling (ASPP) blocks to obtain multi-scale texture features from Hematoxylin and Eosin (H&E) stained HCC histopathology images. ASPP blocks are placed after each max-pool layer to generate a multi-scale sample space. With this approach, texture features in images are effectively utilized by the deep neural network, and the experimental results showed 90.93% accuracy in the four-category classification of HCC images. Another study [6] addresses HCC grade differentiation from multiphoton microscopy images. The researchers adapt the VGG-16 CNN topology to train a classifier on a dataset consisting of three grades of HCC disease: well-differentiated, moderately-differentiated, and poorly-differentiated groups. Over 90% HCC differentiation accuracy was obtained, and the results show the validity of deep learning approaches with multiphoton fluorescence imagery. Alternatively, hyperspectral imaging is a powerful tool for describing the subject matter down to its chemical properties, and it has many applications in remote sensing [7], agriculture [8], food safety [9], the environment [10], and more [11]. Hyperspectral imaging technology provides promising results for CAD researchers [12]. A 2001 study shows an early example of hyperspectral imaging integrated with a light microscope [13]. A reference hardware system is devised to capture hyperspectral cubes from microscopy tissue slides. The proposed approach combines an imaging spectrograph with an epi-fluorescence microscope. Light sources of different wavelengths have been used to illuminate the samples, such as a 532 nm solid-state laser, a helium-neon laser, an argon ion laser, and a pulsed-doubled nitrogen dye laser. The sample slide is moved with a motorized mover during data capture. A Charge-Coupled Device (CCD) camera captures the reflected light, and custom software is developed to visualize and store hyperspectral data. The study [14] proposes a new approach based on hyperspectral image analysis to address the Anaplastic Lymphoma Kinase (ALK) positive and negative tumor identification problem. Sixty-channel hyperspectral data from lung cancer tissues are captured by an Acousto-Optic Tunable Filter (AOTF) based hyperspectral imaging system. Using a Support Vector Machine (SVM) based segmentation algorithm, the lung tissue images are segmented into three regions: cell nucleus, cytoplasm, and blank area. The accuracy of the segmentation model is calculated with the help of manual ground truth data provided by a lung cancer expert. The segmentation accuracy for each class is evaluated to conclude a treatment prescription focusing on ALK-positive and ALK-negative tumor diversity. In another study [15], a similar hyperspectral imaging system powered by AOTF is employed to collect 30-channel hyperspectral data from bile duct tissue samples. Deep convolutional neural networks (CNN) with the Inception-V3 [16] and ResNet50 [17] architectures are deployed for building a prediction model. A spectral interval convolution method is proposed to adapt hyperspectral data to deep learning architectures. CNN experiments have been conducted by feeding image patches to the network.
A random forest-based approach is utilized to provide scene-level predictions by combining image patch predictions from the same scene. The authors have reported a tumor detection accuracy of 0.93 with hyperspectral data and 0.92 with RGB data.
This paper proposes a new HCC tumor detection framework based on hyperspectral imaging and 3D Convolutional Neural Networks. We have built a microscopy biological tissue image-capturing system in-house by integrating a push-broom VNIR hyperspectral camera with a light microscope. We collected a wide range of spectral data between 400 and 800 nm from liver tissue samples. The captured images from each sample are divided into smaller patches and fed into a custom 3D convolution-based CNN learner to generate a strong cancer tissue prediction model. With the 3D convolution operation, both spectral and spatial features are considered while training the classification model, and local spectral features within the hyperspectral cube can be captured. Furthermore, we have employed the focal loss function as the CNN cost function to overcome the class imbalance problem [18]. We have empirically demonstrated that 3D convolutions significantly improve classification accuracy compared to 2D convolutions operated on the same dataset. During our experiments, we demonstrated the superiority of hyperspectral data over its RGB counterpart.
Compared to the existing literature, we employ more spectral bands for the tissue classification task in this study. Although AOTF is widely used equipment in tissue classification tasks, there are studies reporting that AOTF may not be reliable enough for radiometric measurements due to the lack of homogeneity of the diffraction efficiency [19,20,21]. Unlike the existing studies, we employ a hyperspectral VNIR camera as the primary imaging equipment to obtain reliable data. We introduce a solid classification framework based on a new deep-learning topology and 3D convolutions. Additionally, in contrast to other studies, we directly compare the classification performance obtained with hyperspectral data, Principal Component Analysis (PCA) projections of hyperspectral data, and RGB data. The contributions of this paper can be summarized as follows. Firstly, we have built a biological tissue image capture system in our laboratory by integrating a hyperspectral camera and a light microscope with a 3D-printed motorized stepper. Secondly, we have demonstrated that hyperspectral data considerably improves classification performance. RGB data can represent spatial features of tumor tissues in fine detail, whereas hyperspectral imaging captures both spatial and spectral features of tumor tissues, improving the deep neural network's classification accuracy. Thirdly, our proposed method takes advantage of the hyperspectral cube by utilizing 3D convolutional neural networks. 3D kernels enable the learner to extract voxel information with a compact approach. The use of a 3D convolution operator in a CNN can generate both spectral and spatial features via the same single convolution operation. Additionally, our method does not require manual feature engineering as pre-processing or post-processing stages in the classification pipeline. Thus, 3D convolutional neural networks deliver better generalization performance with a simpler network topology. Finally, our paper tackles the class imbalance problem, a common challenge for most medical image analysis studies. We have employed the focal loss function within our classification model. The focal loss method compensates for class imbalance by using a focusing parameter in the cross-entropy function, so the learner's sensitivity to misclassified samples is boosted. Furthermore, the focal loss function is capable of increasing model generalization without causing overfitting.
The rest of the paper is organized as follows. In Section 2, we give details of our methodology, including data capture and deep learning steps. Section 3 presents our experimental results by comparing different sets of parameters and learner configurations. Finally, in Section 4, we provide discussions with a brief conclusion of the study, including its limitations and suggestions for future research.

2. Materials and Methods

2.1. Data Acquisition

In this study, we have developed a hyperspectral microscopy image-capturing system by integrating a Headwall A-series VNIR model push-broom hyperspectral camera (Headwall Photonics Inc., Bolton, MA, USA) and a Euromex Oxion light microscope (Euromex, Arnhem, The Netherlands) in our laboratory. A sample photograph from our data acquisition system is presented in Figure 1. The light microscope's objective lens is configured to display the samples with 40× magnification. The hyperspectral camera is capable of capturing 408 spectral bands between 400 and 1000 nm. We calibrated and verified our imaging setup using a microscope stage calibration slide for optimum image quality. In this regard, our imaging system's spatial resolution is 0.55 microns. Under this setup, a liver cell nucleus measures around 12–18 pixels (6.6 to 9.9 microns) in diameter, which is consistent with clinical measurements of human liver cell size [22]. In addition to hyperspectral images, the camera simultaneously captures RGB images of the same scene. To capture data with the proper geometry from our hyperspectral camera, we have devised a motorized moving table hardware solution that gradually moves tissue samples while the camera is in capture mode. The motor speed is controlled by a small Arduino device (Atmel Corporation, San Jose, CA, USA), which is optimized to capture tissue sample images with the highest resolution along the track direction. The tissue samples are illuminated from the bottom by a 12 V, 100 W halogen light source (Thorlabs, Newton, NJ, USA). All images are captured in a dark room without light sources other than the halogen lamp placed at the bottom of the tissue slide. For radiometric calibration, we have collected white references from the empty glass slide illuminated by the halogen lamp as in the regular capture mode. In addition, we have collected dark references by blinding the camera sensor with its lens cap. Examples of captured data from the healthy and unhealthy classes and the corresponding tissue components, including cell and background, can be seen in Figure 2. The spectra sketches in Figure 2 are obtained by averaging over the selected regions of the sample image captured with 40× lens magnification. It can be inferred that normal and tumor cell samples transmit different spectral signatures for their particular components.
By morphologically inspecting the tissue spectra in Figure 2, we see two noticeable dips around 540 nm and 650 nm. Eosin in H&E staining has a very characteristic dip around 540 nm [23,24]. In fact, those two dips are consistent with findings from earlier studies [25,26] working on hyperspectral data of liver tissue samples.
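The exact correction formula is not spelled out above, so the following NumPy sketch shows the conventional flat-field correction computed from the white and dark references; the array names are hypothetical placeholders for the cubes produced by our capture software.

```python
import numpy as np

def flat_field_correct(raw, white, dark, eps=1e-6):
    """Convert a raw hyperspectral cube to relative transmittance.

    raw:   (H, W, B) cube of a tissue scene
    white: (H, W, B) or (B,) reference captured from the empty glass slide
    dark:  (H, W, B) or (B,) reference captured with the lens cap on
    """
    corrected = (raw - dark) / np.maximum(white - dark, eps)
    return np.clip(corrected, 0.0, 1.0)  # relative transmittance in [0, 1]
```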

2.2. Classification

Hyperspectral imaging provides high potential for classification tasks when both spectral and spatial data are fused inside a machine learning model. However, machine learning applications built on hyperspectral imaging can be prone to overfitting the training data due to its high dimensionality. In fact, for small datasets, complex classifiers like CNN and SVM tend to overfit by learning random noise in the data instead of extracting generative relations between the classes [27]. In addition, manual feature engineering operations on the dataset can significantly reduce the trained model's generalization capability. Manually crafted features restrict the feature space for the classifier, whereas deep learning models can automatically find optimal features and extract indirect and nonlinear relationships between features. Therefore, in this study, we aim to develop a fully automatic classification model with high generalization capability on the HCC detection problem.
To fully exploit the effectiveness of automatic feature learning in deep learning, we employed a CNN-based learner using 3D convolutions. 3D-CNN models are commonly used in 3D object recognition [28], video action recognition [29], and medical image recognition [30] studies. 3D-CNN learners enable effective utilization of spatial-spectral data and achieve high generalization performance on hyperspectral data. Thus, the spectral signature information encoded within a 3D hyperspectral cube is extracted together with the textural information available on the spatial plane.
The main difference between traditional 2D-CNN and 3D-CNN is the mechanics of convolution operation applied at the convolution layers. The kernel slides along two dimensions (x and y) on the data in 2D-CNN classifiers while, in 3D-CNN classifiers, the kernel slides along three dimensions (x, y, and z) on the data. 3D-shaped kernels used in convolutions can describe the features in spatial and spectral directions. In addition to spatial features like texture and shape attributes, the spectral dimension can be embedded in the final classification model to capture radiometric information. We employed the 3D convolution operation proposed in the study [29].
$$v_{ij}^{xyz} = f\left( \sum_{m} \sum_{p=0}^{P_i - 1} \sum_{q=0}^{Q_i - 1} \sum_{r=0}^{R_i - 1} w_{ijm}^{pqr} \, v_{(i-1)m}^{(x+p)(y+q)(z+r)} + b_{ij} \right) \quad (1)$$
Mathematically, 3D kernels can be formulated as in Equation (1), where $v_{ij}^{xyz}$ represents the value at position $(x, y, z)$ in the $j$th feature map of the $i$th layer, $m$ indexes the feature maps of the $(i-1)$th layer connected to the current feature map, $P_i$, $Q_i$, and $R_i$ are the height, width, and depth of the kernel, respectively, $w_{ijm}^{pqr}$ is the kernel value at position $(p, q, r)$ for the $m$th feature map of the previous layer, $b_{ij}$ is the bias, and $f(\cdot)$ is the activation function.
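To make the index bookkeeping of Equation (1) concrete, the following NumPy sketch evaluates one output feature map of a naive 3D convolution (single input feature map, no padding, unit stride); it is for illustration only and is far slower than the optimized kernels used in practice.

```python
import numpy as np

def conv3d_single(v_prev, w, b, f=lambda v: np.maximum(0.0, v)):
    """Naive 3D convolution of Equation (1) for one input/output feature map.

    v_prev: (X, Y, Z) input feature map from layer i-1
    w:      (P, Q, R) kernel; b: scalar bias; f: activation, ReLU by default
    """
    X, Y, Z = v_prev.shape
    P, Q, R = w.shape
    out = np.zeros((X - P + 1, Y - Q + 1, Z - R + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                # inner sums over kernel offsets (p, q, r) in Equation (1)
                window = v_prev[x:x + P, y:y + Q, z:z + R]
                out[x, y, z] = f((w * window).sum() + b)
    return out
```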
As an activation function, the non-saturating Rectified Linear Unit (ReLU) is used, as proposed in [31]. The ReLU activation function is given as
$$f(v) = \max(0, v) \quad (2)$$
We have designed a custom CNN topology according to the details given in Table 1 and illustrated in Figure 3. In the network, max-pooling layers are placed between consecutive convolution layers to decrease the number of parameters and the complexity of the model [32]. Furthermore, a batch normalization layer follows each max-pooling layer to reduce internal covariate shift. The batch normalization layer also helps to speed up training by normalizing activations so that their mean is around 0 and their standard deviation is 1. Hence, the learner can utilize a larger learning rate in the optimizer algorithm. Furthermore, instead of a conventional fully connected layer, we have employed a global average pooling layer to collapse the feature maps into a compact feature vector before feeding the final dense layers. As elaborated in the paper [33], the global average pooling layer is not prone to overfitting since it has no parameters to optimize. It is also invariant to spatial translations in the input since it amounts to spatial averaging. In this way, we can simultaneously tackle overfitting due to the structure of texture features in our training set and eliminate the effect of noise caused by tiny vibrations in the stepper motor.
For the convolution kernel size, we have selected 3 × 3 × 3, following the best practice suggested by [34]. For CNN training, we used the Adam optimizer, as proposed in [35], with default parameters (β_1 = 0.9 and β_2 = 0.999) and a learning rate of 0.001. We set the batch size to 128, trained the models for 100 epochs, and used a dropout rate of 10%.
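As a concrete reference, the following tf.keras sketch reproduces the topology of Table 1 under the training settings stated above; the channel axis appended to the input shape and the single sigmoid output unit are implementation details assumed here rather than taken from the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3d_cnn(input_shape=(100, 100, 270, 1)):
    """3D-CNN of Table 1: four Conv3D/MaxPooling3D/BatchNormalization
    stages, global average pooling, and a binary classification head."""
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=input_shape))
    for n_filters in (4, 8, 16, 32):
        model.add(layers.Conv3D(n_filters, kernel_size=(3, 3, 3),
                                padding="same", activation="relu"))
        model.add(layers.MaxPooling3D(pool_size=(3, 3, 3),
                                      strides=(2, 2, 2), padding="same"))
        model.add(layers.BatchNormalization())
    model.add(layers.GlobalAveragePooling3D())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dropout(0.1))
    model.add(layers.Dense(1, activation="sigmoid"))  # tumor probability
    return model

model = build_3d_cnn()
# The focal loss defined later in this section replaces binary cross-entropy.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy")
```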
As in most medical studies [36], our dataset is imbalanced, containing very few healthy samples compared to tumor samples. Therefore, CNN classifiers can be biased towards the majority class and may produce false positives in medical applications. We have employed the focal loss (FL) function to overcome the class imbalance problem in our dataset. Traditionally, the cross entropy (CE) function is employed in most deep learning models.
$$\mathrm{CE}(p_t) = -\log(p_t) \quad (3)$$
where $p_t$ is given by
$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases} \quad (4)$$
where $y \in \{-1, +1\}$ is the ground truth class and $p \in [0, 1]$ is the classifier's output probability for the class $y = 1$.
Nevertheless, in the case of extreme class imbalance, the loss contribution of well-classified majority examples in cross-entropy-based models can easily dominate that of the minority class. The balanced cross entropy (BCE) function, as defined in Equation (5), is employed to deal with the class imbalance problem in the traditional CE function.
$$\mathrm{BCE}(p_t) = -\alpha_t \log(p_t) \quad (5)$$
where $\alpha_t$ is a weighting factor hyperparameter defined as
$$\alpha_t = \begin{cases} \alpha & \text{if } y = 1 \\ 1 - \alpha & \text{otherwise} \end{cases} \quad (6)$$
where $\alpha \in [0, 1]$. The BCE function helps to balance the contributions of the minority and majority classes during training. However, it does not change the balance between easy and hard examples. Since our dataset contains an extreme imbalance, the easy positives (tumor samples with high $p_t$) can dominate the training and attract too much focus. The focal loss function, however, can down-weight the loss contribution of easy examples and relatively increase the loss contribution of hard examples. Focal loss is derived from the cross-entropy loss function (3) by introducing a modulating factor $(1 - p_t)^{\gamma}$ to the cross-entropy loss. In this study, we have employed a balanced version of the focal loss function, defined in Equation (7).
$$\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t) \quad (7)$$
where $\gamma \ge 0$ is the focusing parameter and $\alpha_t$ is the weighting factor of Equation (6). The weighting factor, $\alpha$, steers the training procedure so that the learner concentrates on the minority class instead of treating the classes with equal importance. At the same time, the focusing parameter, $\gamma$, imposes focusing on the examples resulting in large errors, namely hard examples [18].
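A minimal tf.keras implementation of the balanced focal loss of Equation (7) might look as follows; it assumes binary ground-truth labels encoded as {0, 1} (rather than the {−1, +1} convention above) and sigmoid outputs.

```python
import tensorflow as tf

def binary_focal_loss(alpha=0.5, gamma=2.0):
    """Balanced focal loss, Eq. (7): -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        # p_t and alpha_t follow Equations (4) and (6)
        p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
        alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
        return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma)
                               * tf.math.log(p_t))
    return loss

# Usage: model.compile(optimizer="adam", loss=binary_focal_loss(0.5, 2.0))
```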

3. Results

3.1. Dataset

In this study, we employed the Biomax LV962 liver tissue microarray (TissueArray.Com LLC, Derwood, MD, USA), a commercially available H&E-stained liver tissue slide. The tissue microarray contains both healthy and unhealthy cases: 3 normal liver tissues, 1 cancer adjacent liver tissue, 1 each of metastatic adenocarcinoma and cavernous hemangioma, 4 liver cirrhosis, 3 cholangiocarcinoma, and 32 hepatocellular carcinoma. From each case, our dataset contains two tissue samples. We have employed the normal (healthy) and hepatocellular carcinoma (unhealthy) classes from the tissue microarray. As depicted in Table 2, there are 6 healthy and 54 unhealthy tissue samples in our dataset.
We have evenly divided the dataset into training, validation, and testing subsets at the patient level, so all three sets include distinct patients and there is no overlap between them.
In the dataset, each sample image is captured at a resolution of 1000 × 2000 pixels with 40× microscopy lens magnification. As shown in Figure 4, for visualization purposes, we have generated an RGB representation from the hyperspectral cube by fitting three normal distributions synthesizing the red, green, and blue bands, with a standard deviation of 25 nm and mean values of 630, 540, and 480 nm, respectively. Sample images are divided into smaller patch images of size S × S pixels, where we took S as a parameter. Some patches contained blank areas without any tissue sample. Image patches with more than 50% blank area were automatically removed to obtain a reliable dataset; 4% of the data was eliminated in this way. Our hyperspectral imaging system can output 408 bands between 400 and 1000 nm. However, by manually inspecting the samples, we observed that the bands above 800 nm have a low signal-to-noise ratio. Therefore, we have only used the first 270 bands, between 400 and 800 nm, to reduce computational cost and prevent flawed information from being presented to the classifier.
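The RGB visualization described above can be reproduced with a short NumPy sketch: each display channel is a weighted average of the hyperspectral bands under a Gaussian with the stated means (630, 540, 480 nm) and standard deviation (25 nm); scaling the result to [0, 1] is an assumption made here for display purposes.

```python
import numpy as np

def cube_to_rgb(cube, wavelengths, centers=(630.0, 540.0, 480.0), sigma=25.0):
    """Synthesize an RGB view of a hyperspectral cube.

    cube:        (H, W, B) hyperspectral data
    wavelengths: (B,) band centers in nm
    """
    rgb = np.zeros(cube.shape[:2] + (3,))
    for channel, mu in enumerate(centers):
        weights = np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)
        weights /= weights.sum()            # normal distribution over bands
        rgb[..., channel] = cube @ weights  # weighted band average
    lo, hi = rgb.min(), rgb.max()
    return (rgb - lo) / (hi - lo + 1e-12)   # scale to [0, 1] for display
```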

3.2. Hardware and Software Configuration

We have employed an AI server with eight NVIDIA V100 Tensor Core 32 GB GPUs, offering 5,120 Tensor Cores in total and delivering up to 1 petaflop of AI computing performance. The server machine has a dual Intel Xeon E5-2620 v3 CPU and 128 GB of DDR4 memory. Using this server, eight distinct models can be trained simultaneously. The software stack used in our study includes Python 3.8, Keras 2.3.1 with TensorFlow 2.0 for deep learning programming, CUDA for GPU acceleration, and Ubuntu 18.04 as the operating system.

3.3. Evaluation Metrics

We have employed the accuracy, precision, recall, and F1 score metrics, formulated in Equations (8) to (11), to evaluate the classification performance. Moreover, we have used the Matthews Correlation Coefficient (MCC) metric, formulated in Equation (12), which is generally suggested for classifiers focusing on class imbalance problems in medical studies [37]. The output value of the MCC metric varies between −1 and 1, such that 1 represents a perfect prediction, 0 means a random prediction, and −1 implies total disagreement between prediction and observation. All metrics are calculated from the classifier output counts, including True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (8)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (9)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (10)$$
$$F_1\ \mathrm{Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (11)$$
$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \quad (12)$$
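For reference, all five metrics can be computed directly from binary label and prediction arrays with scikit-learn; the sketch below assumes labels encoded as {0, 1}, with 1 denoting the tumor class.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

def evaluate(y_true, y_pred):
    """Compute the metrics of Equations (8)-(12) for binary predictions."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
    }
```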

3.4. Experimental Results and Discussion

To evaluate the performance of the proposed method, we trained distinct CNN classifiers with different configurations but the same topologies as depicted in Table 1. We have empirically found the optimal values for image patch size S . As stated in [18], the hyperparameters γ and α in the focal loss function are dataset-specific and need to be tuned for different model and dataset configurations. Therefore, in order to provide a fair evaluation, we optimized focal loss hyperparameters for each configuration. Furthermore, we have compared the classification performances of different spectral resolutions such as hyperspectral, sampled hyperspectral, PCA of hyperspectral, and RGB. Afterward, to reveal the effect of kernel dimensionality (2D vs. 3D kernels), we experimented with the implications of convolution operation by comparing 2D and 3D convolution-based CNN results. Finally, we have conducted another experiment by rotating our dataset splits between training, validation, and testing subsets to ensure that our models are not overfitting on the data.
In the first experiment, we explored the impact of the patch size (S) on classification performance. The patch size is an important parameter for our classification method since it determines the amount of variation in textural features on a single patch image. The textural features are composed of different components such as the cell nucleus, cytoplasm, and blank area in the tissue sample. Therefore, the cropped patches should not be so small that important textural features are missed. Conversely, the classifier might tend to focus only on dense areas when the size parameter is too large. For this purpose, we have conducted experiments using four different values of the patch size parameter, as given in Table 3. We obtained the best classification accuracy and MCC value with a 100 × 100 pixels patch size. For the remaining experiments, we have fixed the patch size parameter to 100.
In the second phase of the experiments, we identified the optimal focal loss hyperparameters, γ and α, for the HSI dataset and compared the classification performance of the balanced cross-entropy function with the focal loss function. In the focal loss cost function, the weighting factor, α, enables the loss function to output differentiated loss values for the minority (healthy) and majority (tumor) classes. It balances the influence of negative and positive examples on the loss. Meanwhile, the focusing parameter, γ, effectively reduces the loss contribution from well-classified, namely easy, examples while keeping the high loss contribution of hard examples. In this way, the focusing parameter, γ, adjusts the level of focus on the hard examples during the training stage. The focusing parameter value should be tuned to deal with the misclassified hard examples while maintaining the overall classification accuracy and MCC score. The optimal values for the focal loss hyperparameters, α and γ, depend on the severity of the imbalance and the existence of hard and easy examples in the dataset. Hence, the optimal values depend on the dataset. As stated in the paper that first introduced the focal loss function [18], the gain from tuning the focusing parameter, γ, is much larger than that from tuning the weighting factor, α. The optimal α values are found in the range [0.25, 0.75], and α = 0.5 performs well in most cases. Consistent with those findings, as shown in Table 4, we have empirically found that the hyperparameter values γ = 2 and α = 0.5 produce the best classification performance for our HSI dataset. Although there is an extreme imbalance in the dataset, the same α value, 0.5, is selected for both positive and negative classes. The reason is that the easy positives are down-weighted with the help of γ, and the negatives require less focus from the loss function. As a result, the model training concentrates on the hard examples rather than deliberately focusing on the minority class. The other α values, 0.25 and 0.75, still perform similarly for the same γ values. Therefore, we can conclude that the value of γ is the critical factor in the loss function, while the α parameter should be optimized for each γ value. In the CE configuration, the classifier yields the lowest precision since its false positive rate is relatively high. When the α value is set to 0.25 in BCE, we see a significant improvement in precision thanks to the drop in false positives. We can confirm that α plays an important role in shaping the cost function behavior in its BCE form. Nevertheless, the FL function performs much better than the BCE function since it can force the learner to focus on hard examples independently of the class label.
A further experiment compared the classification performance for different spectral resolutions. For this purpose, we compared the hyperspectral dataset (HSI), sampled hyperspectral datasets (HSI-90, HSI-30, and HSI-10), two PCA-based versions of the hyperspectral dataset (PCA-9 and PCA-3), and the RGB version of our dataset. The initial HSI dataset consists of 270 bands. By sampling individual bands from the HSI dataset at a constant frequency, we generated 90-band (HSI-90), 30-band (HSI-30), and 10-band (HSI-10) versions of the initial dataset. Additionally, we applied dimensionality reduction to our HSI dataset with the PCA method, utilizing the algorithm presented in [38], an incremental technique for calculating the PCA of large datasets. We selected the first nine principal components using a variance threshold value of 0.1%. We also formed another PCA dataset using the first three principal components to allow a three-band performance comparison with the RGB dataset. The first three principal components (PCA-3) explained a total variance of 93.46%, and the first nine principal components (PCA-9) explained a total variance of 98.60%. In addition to the hyperspectral datasets, we employed the RGB data captured by our hyperspectral camera simultaneously with the HSI data. The RGB data captured by the camera contains three individual bands taken at the 630, 540, and 480 nm wavelengths, respectively. We used the RGB images to train another 3D-CNN model with the same topology. To support three-channel input to our 3D-CNN learner, we set the convolution kernel depth and stride parameters accordingly and kept the other parameters the same as in the original version. As shown in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, the focal loss function hyperparameters of the 3D-CNN are fine-tuned empirically for each dataset. According to the experimental results, HSI performs best, with the highest accuracy and MCC score. The results of the sampled HSI datasets clearly show that more hyperspectral bands result in higher classification performance. For the sampled hyperspectral datasets, the classification accuracy is directly proportional to the number of bands contained in the dataset. The PCA-9 dataset has the second-best classification accuracy since it holds most of the variance of the original HSI dataset. The PCA-3 dataset has a lower accuracy than PCA-9, but a higher accuracy and MCC score than the RGB dataset.
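A sketch of this dimensionality reduction step is given below, using scikit-learn's IncrementalPCA as a practical stand-in for the incremental algorithm of [38]; the batch size is an arbitrary choice made here.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

def reduce_bands(cube, n_components=9, batch_size=4096):
    """Project a (H, W, 270) hyperspectral cube onto its leading
    principal components, returning a (H, W, n_components) cube."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)  # one spectrum per row
    ipca = IncrementalPCA(n_components=n_components, batch_size=batch_size)
    reduced = ipca.fit_transform(pixels)
    print(f"total explained variance: {ipca.explained_variance_ratio_.sum():.4f}")
    return reduced.reshape(h, w, n_components)
```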
In our fourth experiment, we compared the implications of the convolution operation on classification performance. For this purpose, instead of a 3D convolution operation, we trained another classification model with a 2D convolution operation and the same network topology. We fine-tuned the focal loss hyperparameters for the 2D convolution case as given in Table 11. We found that the hyperparameter set γ = 2 and α = 0.5, which was best with the 3D-CNN, was also best for the 2D-CNN model. Two-dimensional convolution operates in two directions of the image data, whereas 3D convolution slides in three directions of the hyperspectral cube. Therefore, the descriptive power of the feature sets collected by the 2D and 3D convolution operations is different. As depicted in Table 12, the 3D convolution operator performed better than the 2D version. From this analysis, we infer that the 3D convolution operator can utilize the full potential of hyperspectral data, while the 2D convolution operator causes a deterioration of classification performance on hyperspectral data.
In our fifth experiment, we showed that our model is free from overfitting by rotating the training, validation, and testing splits among each other. There are 3 healthy and 27 unhealthy patients in our whole dataset. We created three different data-splitting configurations by putting one healthy and nine unhealthy patients in each of the training, validation, and testing sets. We rotated the sets and repeated model training for each configuration. As shown in Table 13, we obtained similar classification performance results for all three configurations. From this experiment, we empirically show that our 3D-CNN model is capable of learning descriptive features from hyperspectral space without overfitting the given training data.

4. Discussions and Conclusions

In this study, we have proposed a new HCC tumor detection method utilizing hyperspectral imaging and a custom deep-learning model. We have built a biological tissue imaging system by integrating a VNIR hyperspectral camera with a light microscope. We collected hyperspectral images of tumor and healthy liver tissues with the help of our imaging system. We have designed a custom 3D-CNN classification topology to utilize the full potential of HSI data. In our CNN topology, we have included four convolution layers with max-pooling layers between them. The max-pooling layers down-sample the data, halving each feature map dimension at every stage and effectively reducing model complexity. The use of 3D convolution layers enables us to leverage both textural and spectral features in a single training pipeline. Moreover, our method does not require separate feature extraction operations on the dataset, and the learner can automatically extract useful features from the training set. In addition to the 3D convolutions employed in the deep learning model, we have optimized the network topology by replacing the traditional cross-entropy cost function with the focal loss cost function. In this way, we have largely overcome the class imbalance problem residing in our dataset. The focal loss function made the classification model less biased towards the majority class (unhealthy). Well-classified easy examples are down-weighted with the help of the focal loss function; thus, the training procedure concentrates on learning hard examples. We have empirically optimized the hyperparameters of the focal loss function, γ and α, for each experiment configuration. Notably, the γ parameter in the focal loss function has a critical impact on the classification performance, whereas α has a minor effect on the results.
The majority of computer-aided histopathology studies rely on RGB data captured with Complementary Metal-Oxide Semiconductor (CMOS) or CCD sensors [39,40,41]. Our study utilizes a much wider range of the electromagnetic spectrum. The hyperspectral dataset used in our study includes 270 contiguous bands between 400 and 800 nm, whereas the RGB dataset includes three individual bands taken at 630, 540, and 480 nm. With the help of hyperspectral imaging, the subject material's chemical composition can be analyzed in addition to conventional spatial attributes such as size, shape, and texture. The hyperspectral cube is versatile for our classification task since it can represent the variation of material properties in fine detail as spectral signatures. Unlike with an RGB dataset, the descriptive features along the spectral dimension can be effectively captured by a 3D convolution operation. Our 3D-CNN-based supervised learner can describe the nonlinear relationships between the features in both spectral and spatial dimensions. That is, features such as corners, edges, and textures in the spatial plane can be associated with features such as peaks, dips, slopes, and valleys in the spectral signatures of pixels. The large amount of information within the hyperspectral cube enables the deep learning model to build a strong classifier with highly descriptive feature extraction competency. Moreover, by sampling the original hyperspectral dataset into lower-dimensional datasets such as HSI-90, HSI-30, and HSI-10, we observe the advantage of having more bands in classification. In other words, the deep learning model's prediction power is enhanced by introducing more spectral bands to the learner. Additionally, we have used PCA for dimensionality reduction on the original hyperspectral data with 270 bands, generating two further datasets with nine principal components (PCA-9) and three principal components (PCA-3). The PCA method significantly reduces data complexity and improves the signal-to-noise ratio; hence, it becomes easier for the learner to converge. However, the CNN models trained with PCA data yielded lower classification accuracy than the CNN model trained with HSI data. The PCA-9 dataset, retaining 98.60% of the total variance, performed almost as well as the HSI dataset. Considering the simplicity of PCA-9 compared to the original HSI dataset, PCA provides a cost-effective way of utilizing hyperspectral data for our task. The PCA-3 dataset performed better than the RGB dataset, indicating that hyperspectral data compressed into three bands contains more useful information for classifying tissue samples than RGB. The experimental results validate the resourcefulness of the HSI dataset over its RGB and PCA counterparts in classification accuracy.
Although we have proposed a 3D-CNN classification model with promising results, there are limitations to our study. Firstly, the dataset employed in the study needs to be extended by adding more tissue samples. With more data fed into the training stage, the resulting classifier is expected to be more resistant to overfitting and have better generalization capability. It is desirable to further validate our model on a larger tissue sample dataset collected from various laboratories and labeled by different pathologists. In this way, the dataset's sample variation can be boosted, and the classification model can span a larger area in feature space.
In summary, the model can be used for supporting pathologists’ examination or initial screening. Our methodology can serve as a decision support tool for novice pathologists even though the model does not provide a holistic tissue examination, including inspection of inflammation, necrosis, and blood vessels, as pathologists do.

Author Contributions

Conceptualization, U.C., R.C.A. and Y.Y.C.; methodology, U.C. and Y.Y.C.; software, U.C.; validation, U.C.; writing—review and editing, U.C., R.C.A. and Y.Y.C.; supervision, R.C.A. and Y.Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Anonymized human liver tissue microarrays were obtained from US Biomax Inc., with the following ethics statement: "All tissue is collected under the highest ethical standards with the donor being informed completely and with their consent. We make sure we follow standard medical care and protect the donors' privacy. All human tissues are collected under HIPAA-approved protocols. All samples have been tested negative for HIV and Hepatitis B or their counterparts in animals, and approved for commercial product development".

Informed Consent Statement

Patient consent was waived as the liver tissue samples are obtained from a third-party vendor, US Biomax Inc, with anonymized specifications.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available as the intellectual property rights of the original tissue samples are owned by the microarray vendor.

Acknowledgments

For the design and manufacturing of the motorized stepper component employed in our imaging system, we thank Musa Ataş of Siirt University, Siirt, Turkey.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Topology of 3D-CNN utilized for HSI data.
Figure A2. Topology of 3D-CNN utilized for Three-Channel data.
Figure A3. Topology of 2D-CNN utilized for HSI data.

References

  1. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA A Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  2. Villanueva, A.; Minguez, B.; Forner, A.; Reig, M.; Llovet, J.M. Hepatocellular Carcinoma: Novel Molecular Approaches for Diagnosis, Prognosis, and Therapy. Annu. Rev. Med. 2010, 61, 317–328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Vij, M.; Calderaro, J. Pathologic and Molecular Features of Hepatocellular Carcinoma: An Update. World J. Hepatol. 2021, 13, 393–410. [Google Scholar] [CrossRef] [PubMed]
  4. Fujita, H. AI-Based Computer-Aided Diagnosis (AI-CAD): The Latest Review to Read First. Radiol. Phys. Technol. 2020, 13, 6–19. [Google Scholar] [CrossRef]
  5. Aatresh, A.A.; Alabhya, K.; Lal, S.; Kini, J.; Saxena, P.P. LiverNet: Efficient and Robust Deep Learning Model for Automatic Diagnosis of Sub-Types of Liver Hepatocellular Carcinoma Cancer from H&E Stained Liver Histopathology Images. Int. J. CARS 2021, 16, 1549–1563. [Google Scholar] [CrossRef]
  6. Lin, H.; Wei, C.; Wang, G.; Chen, H.; Lin, L.; Ni, M.; Chen, J.; Zhuo, S. Automated Classification of Hepatocellular Carcinoma Differentiation Using Multiphoton Microscopy and Deep Learning. J. Biophotonics 2019, 12, e201800435. [Google Scholar] [CrossRef]
  7. Goetz, A.F.H. Three Decades of Hyperspectral Remote Sensing of the Earth: A Personal View. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  8. Park, B.; Lu, R. (Eds.) Hyperspectral Imaging Technology in Food and Agriculture; Food Engineering Series; Springer: New York, NY, USA, 2015; ISBN 978-1-4939-2835-4. [Google Scholar]
  9. Huang, H.; Liu, L.; Ngadi, M. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety. Sensors 2014, 14, 7248–7276. [Google Scholar] [CrossRef] [Green Version]
  10. Stuffler, T.; Förster, K.; Hofer, S.; Leipold, M.; Sang, B.; Kaufmann, H.; Penné, B.; Mueller, A.; Chlebek, C. Hyperspectral Imaging—An Advanced Instrument Concept for the EnMAP Mission (Environmental Mapping and Analysis Programme). Acta Astronaut. 2009, 65, 1107–1112. [Google Scholar] [CrossRef]
  11. Khan, M.J.; Khan, H.S.; Yousaf, A.; Khurshid, K.; Abbas, A. Modern Trends in Hyperspectral Image Analysis: A Review. IEEE Access 2018, 6, 14118–14129. [Google Scholar] [CrossRef]
  12. ul Rehman, A.; Qureshi, S.A. A Review of the Medical Hyperspectral Imaging Systems and Unmixing Algorithms’ in Biological Tissues. Photodiagn. Photodyn. Ther. 2021, 33, 102165. [Google Scholar] [CrossRef] [PubMed]
  13. Schultz, R.A.; Nielsen, T.; Zavaleta, J.R.; Ruch, R.; Wyatt, R.; Garner, H.R. Hyperspectral Imaging: A Novel Approach for Microscopic Analysis. Cytometry 2001, 43, 239–247. [Google Scholar] [CrossRef] [PubMed]
  14. Song, J.; Hu, M.; Wang, J.; Zhou, M.; Sun, L.; Qiu, S.; Li, Q.; Sun, Z.; Wang, Y. ALK Positive Lung Cancer Identification and Targeted Drugs Evaluation Using Microscopic Hyperspectral Imaging Technique. Infrared Phys. Technol. 2019, 96, 267–275. [Google Scholar] [CrossRef]
  15. Sun, L.; Zhou, M.; Li, Q.; Hu, M.; Wen, Y.; Zhang, J.; Lu, Y.; Chu, J. Diagnosis of Cholangiocarcinoma from Microscopic Hyperspectral Pathological Dataset by Deep Convolution Neural Networks. Methods 2022, 202, 22–30. [Google Scholar] [CrossRef] [PubMed]
  16. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  18. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Vila-Francés, J. Analysis of Acousto-Optic Tunable Filter Performance for Imaging Applications. Opt. Eng. 2010, 49, 113203. [Google Scholar] [CrossRef]
  20. Calpe-Maravilla, J. 400– to 1000–Nm Imaging Spectrometer Based on Acousto-Optic Tunable Filters. J. Electron. Imaging 2006, 15, 023001. [Google Scholar] [CrossRef]
  21. Xu, Z.; Zhao, H.; Jia, G.; Sun, S.; Wang, X. Optical Schemes of Super-Angular AOTF-Based Imagers and System Response Analysis. Opt. Commun. 2021, 498, 127204. [Google Scholar] [CrossRef]
  22. Budinger, T.F. Absorbed Radiation Dose Assessment from Radionuclides. In Comprehensive Biomedical Physics; Elsevier: Amsterdam, The Netherlands, 2014; pp. 253–269. ISBN 978-0-444-53633-4. [Google Scholar]
  23. Abe, T.; Murakami, Y.; Yamaguchi, M.; Ohyama, N.; Yagi, Y. Color Correction of Pathological Images Based on Dye Amount Quantification. OPT REV 2005, 12, 293–300. [Google Scholar] [CrossRef]
  24. Tuer, A.E.; Tokarz, D.; Prent, N.; Cisek, R.; Alami, J.; Dumont, D.J.; Bakueva, L.; Rowlands, J.A.; Barzda, V. Nonlinear Multicontrast Microscopy of Hematoxylin-and-Eosin-Stained Histological Sections. J. Biomed. Opt. 2010, 15, 026018. [Google Scholar] [CrossRef]
  25. Wang, R.; He, Y.; Yao, C.; Wang, S.; Xue, Y.; Zhang, Z.; Wang, J.; Liu, X. Classification and Segmentation of Hyperspectral Data of Hepatocellular Carcinoma Samples Using 1-D Convolutional Neural Network. Cytometry 2020, 97, 31–38. [Google Scholar] [CrossRef] [PubMed]
  26. Aref, M.H.; Aboughaleb, I.H.; El-Sharkawy, Y.H. Tissue Characterization Utilizing Hyperspectral Imaging for Liver Thermal Ablation. Photodiagnosis Photodyn. Ther. 2020, 31, 101899. [Google Scholar] [CrossRef] [PubMed]
  27. Rocha, A.D.; Groen, T.A.; Skidmore, A.K.; Darvishzadeh, R.; Willemen, L. The Naïve Overfitting Index Selection (NOIS): A New Method to Optimize Model Complexity for Hyperspectral Data. ISPRS J. Photogramm. Remote Sens. 2017, 133, 61–74. [Google Scholar] [CrossRef]
  28. Xiang, Y.; Kim, W.; Chen, W.; Ji, J.; Choy, C.; Su, H.; Mottaghi, R.; Guibas, L.; Savarese, S. ObjectNet3D: A Large Scale Database for 3D Object Recognition. In Computer Vision—ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; Volume 9912, pp. 160–176. ISBN 978-3-319-46483-1. [Google Scholar]
  29. Ji, S.; Xu, W.; Yang, M.; Yu, K. 3D Convolutional Neural Networks for Human Action Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 221–231. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Kleesiek, J.; Urban, G.; Hubert, A.; Schwarz, D.; Maier-Hein, K.; Bendszus, M.; Biller, A. Deep MRI Brain Extraction: A 3D Convolutional Neural Network for Skull Stripping. NeuroImage 2016, 129, 460–469. [Google Scholar] [CrossRef]
  31. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  32. O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015. [Google Scholar] [CrossRef]
  33. Lin, M.; Chen, Q.; Yan, S. Network in Network. arXiv 2013. [Google Scholar] [CrossRef]
  34. Zunair, H.; Rahman, A.; Mohammed, N.; Cohen, J.P. Uniformizing Techniques to Process CT Scans with 3D CNNs for Tuberculosis Prediction. arXiv 2020. [Google Scholar] [CrossRef]
  35. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014. [Google Scholar] [CrossRef]
  36. Li, D.-C.; Liu, C.-W.; Hu, S.C. A Learning Method for the Class Imbalance Problem with Medical Data Sets. Comput. Biol. Med. 2010, 40, 509–518. [Google Scholar] [CrossRef] [PubMed]
  37. Boughorbel, S.; Jarray, F.; El-Anbari, M. Optimal Classifier for Imbalanced Data Using Matthews Correlation Coefficient Metric. PLoS ONE 2017, 12, e0177678. [Google Scholar] [CrossRef] [PubMed]
  38. Ross, D.A.; Lim, J.; Lin, R.-S.; Yang, M.-H. Incremental Learning for Robust Visual Tracking. Int. J. Comput. Vis. 2008, 77, 125–141. [Google Scholar] [CrossRef]
  39. Mosquera-Lopez, C.; Agaian, S.; Velez-Hoyos, A.; Thompson, I. Computer-Aided Prostate Cancer Diagnosis From Digitized Histopathology: A Review on Texture-Based Systems. IEEE Rev. Biomed. Eng. 2015, 8, 98–113. [Google Scholar] [CrossRef]
  40. Chen, J.-M.; Li, Y.; Xu, J.; Gong, L.; Wang, L.-W.; Liu, W.-L.; Liu, J. Computer-Aided Prognosis on Breast Cancer with Hematoxylin and Eosin Histopathology Images: A Review. Tumour. Biol. 2017, 39, 101042831769455. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Saxena, S.; Gyanchandani, M. Machine Learning Methods for Computer-Aided Breast Cancer Diagnosis Using Histopathology: A Narrative Review. J. Med. Imaging Radiat. Sci. 2020, 51, 182–193. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Data Acquisition system. A, Light microscope; B, VNIR Camera; C, Motorized Stepper; D, Light Source.
Figure 2. Hyperspectral tissue samples dataset. (a) Tumor (hepatocellular carcinoma, HCC) tissue sample, tumor cells (red), tumor background tissue (yellow); (b) Normal (Healthy) tissue sample, normal cells (green), normal background tissue (blue); (c) Spectra comparison plotting of the given components.
Figure 3. 3D-CNN topology sketch.
Figure 4. Sample patch images taken with 40× magnification; the image size is 100 × 100 pixels. (a) Tumor sample patches; (b) Healthy sample patches.
Table 1. 3D-CNN Topology with parameters.

| Layer | Parameters | Output Size |
|---|---|---|
| Input | — | 100 × 100 × 270 |
| Convolution3D | Kernel size: 3 × 3 × 3; filters: 4; activation: ReLU; padding: same | 100 × 100 × 270 × 4 |
| Max-pooling3D | Pool size: 3 × 3 × 3; strides: 2 × 2 × 2; padding: same | 50 × 50 × 135 × 4 |
| BatchNormalization | — | 50 × 50 × 135 × 4 |
| Convolution3D | Kernel size: 3 × 3 × 3; filters: 8; activation: ReLU; padding: same | 50 × 50 × 135 × 8 |
| Max-pooling3D | Pool size: 3 × 3 × 3; strides: 2 × 2 × 2; padding: same | 25 × 25 × 68 × 8 |
| BatchNormalization | — | 25 × 25 × 68 × 8 |
| Convolution3D | Kernel size: 3 × 3 × 3; filters: 16; activation: ReLU; padding: same | 25 × 25 × 68 × 16 |
| Max-pooling3D | Pool size: 3 × 3 × 3; strides: 2 × 2 × 2; padding: same | 13 × 13 × 34 × 16 |
| BatchNormalization | — | 13 × 13 × 34 × 16 |
| Convolution3D | Kernel size: 3 × 3 × 3; filters: 32; activation: ReLU; padding: same | 13 × 13 × 34 × 32 |
| Max-pooling3D | Pool size: 3 × 3 × 3; strides: 2 × 2 × 2; padding: same | 7 × 7 × 17 × 32 |
| BatchNormalization | — | 7 × 7 × 17 × 32 |
| GlobalAveragePooling3D | — | 32 |
| Dense | Units: 512; activation: ReLU | 512 |
| Dropout | Drop rate: 0.1 | 512 |
| Dense (Classification) | Units: 1 | 1 |
Table 2. Dataset statistics with the class distribution.

| | Healthy | Unhealthy | Total |
|---|---|---|---|
| Training Samples | 2 | 18 | 20 |
| Training Cases | 1 | 9 | 10 |
| Validation Samples | 2 | 18 | 20 |
| Validation Cases | 1 | 9 | 10 |
| Testing Samples | 2 | 18 | 20 |
| Testing Cases | 1 | 9 | 10 |
| Total Samples | 6 | 54 | 60 |
| Total Cases | 3 | 27 | 30 |
Table 3. Classification results for varying patch size parameter (focusing parameter γ = 2, weighting factor α = 0.50, 3D convolutions, see Figure A1 for network topology).

| Patch Size (S) | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| 50 × 50 | 0.929 | 0.996 | 0.924 | 0.961 | 0.722 |
| 100 × 100 | 0.970 | 0.999 | 0.968 | 0.984 | 0.860 |
| 150 × 150 | 0.961 | 0.973 | 0.983 | 0.967 | 0.774 |
| 200 × 200 | 0.924 | 0.968 | 0.947 | 0.945 | 0.615 |
Table 4. Classification results with the HSI dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions, see Figure A1 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| CE | 0.891 | 0.921 | 0.961 | 0.906 | 0.277 |
| BCE (γ = 0, α = 0.25) | 0.901 | 0.938 | 0.953 | 0.919 | 0.412 |
| BCE (γ = 0, α = 0.75) | 0.902 | 0.916 | 0.981 | 0.909 | 0.278 |
| FL (γ = 1.5, α = 0.25) | 0.918 | 0.982 | 0.926 | 0.949 | 0.646 |
| FL (γ = 1.5, α = 0.50) | 0.922 | 0.978 | 0.935 | 0.949 | 0.643 |
| FL (γ = 1.5, α = 0.75) | 0.930 | 0.969 | 0.952 | 0.949 | 0.638 |
| FL (γ = 2.0, α = 0.25) | 0.960 | 0.993 | 0.962 | 0.976 | 0.811 |
| FL (γ = 2.0, α = 0.50) | 0.970 | 0.999 | 0.968 | 0.984 | 0.860 |
| FL (γ = 2.0, α = 0.75) | 0.955 | 0.983 | 0.966 | 0.969 | 0.766 |
| FL (γ = 2.5, α = 0.25) | 0.948 | 0.998 | 0.943 | 0.972 | 0.782 |
| FL (γ = 2.5, α = 0.50) | 0.958 | 0.999 | 0.955 | 0.978 | 0.818 |
| FL (γ = 2.5, α = 0.75) | 0.953 | 0.986 | 0.962 | 0.969 | 0.768 |
Table 5. Classification results with the HSI-90 dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions, see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.885 | 0.973 | 0.897 | 0.927 | 0.539 |
| FL (γ = 1.5, α = 0.50) | 0.892 | 0.975 | 0.903 | 0.932 | 0.559 |
| FL (γ = 1.5, α = 0.75) | 0.897 | 0.971 | 0.913 | 0.933 | 0.554 |
| FL (γ = 2.0, α = 0.25) | 0.919 | 0.980 | 0.929 | 0.949 | 0.645 |
| FL (γ = 2.0, α = 0.50) | 0.950 | 0.990 | 0.955 | 0.970 | 0.767 |
| FL (γ = 2.0, α = 0.75) | 0.924 | 0.986 | 0.929 | 0.954 | 0.679 |
| FL (γ = 2.5, α = 0.25) | 0.914 | 0.977 | 0.926 | 0.944 | 0.619 |
| FL (γ = 2.5, α = 0.50) | 0.885 | 0.977 | 0.894 | 0.929 | 0.552 |
| FL (γ = 2.5, α = 0.75) | 0.857 | 0.970 | 0.868 | 0.910 | 0.473 |
Table 6. Classification results with the HSI-30 dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions, see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.877 | 0.970 | 0.890 | 0.921 | 0.510 |
| FL (γ = 1.5, α = 0.50) | 0.885 | 0.973 | 0.897 | 0.927 | 0.537 |
| FL (γ = 1.5, α = 0.75) | 0.885 | 0.971 | 0.900 | 0.926 | 0.530 |
| FL (γ = 2.0, α = 0.25) | 0.889 | 0.974 | 0.900 | 0.930 | 0.550 |
| FL (γ = 2.0, α = 0.50) | 0.931 | 0.985 | 0.937 | 0.957 | 0.692 |
| FL (γ = 2.0, α = 0.75) | 0.909 | 0.975 | 0.923 | 0.941 | 0.598 |
| FL (γ = 2.5, α = 0.25) | 0.883 | 0.971 | 0.897 | 0.925 | 0.524 |
| FL (γ = 2.5, α = 0.50) | 0.861 | 0.973 | 0.870 | 0.914 | 0.495 |
| FL (γ = 2.5, α = 0.75) | 0.857 | 0.970 | 0.868 | 0.910 | 0.473 |
Table 7. Classification results with the HSI-10 dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions; see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.857 | 0.967 | 0.871 | 0.909 | 0.46 |
| FL (γ = 1.5, α = 0.50) | 0.857 | 0.970 | 0.868 | 0.910 | 0.472 |
| FL (γ = 1.5, α = 0.75) | 0.859 | 0.966 | 0.874 | 0.909 | 0.46 |
| FL (γ = 2.0, α = 0.25) | 0.914 | 0.978 | 0.926 | 0.945 | 0.622 |
| FL (γ = 2.0, α = 0.50) | 0.910 | 0.977 | 0.922 | 0.942 | 0.61 |
| FL (γ = 2.0, α = 0.75) | 0.898 | 0.973 | 0.911 | 0.934 | 0.565 |
| FL (γ = 2.5, α = 0.25) | 0.867 | 0.973 | 0.876 | 0.917 | 0.505 |
| FL (γ = 2.5, α = 0.50) | 0.862 | 0.973 | 0.871 | 0.914 | 0.498 |
| FL (γ = 2.5, α = 0.75) | 0.890 | 0.970 | 0.905 | 0.928 | 0.534 |
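Tables 5 through 7 evaluate reduced-band variants of the 270-band cube. How the HSI-90, HSI-30, and HSI-10 subsets were derived is not specified in these tables; the sketch below assumes uniform subsampling along the spectral axis purely for illustration.

```python
# Illustrative band reduction (uniform subsampling is an assumption,
# not necessarily how the datasets of Tables 5-7 were produced).
import numpy as np

def subsample_bands(cube, n_bands):
    """Keep n_bands bands spread evenly across the spectral axis."""
    idx = np.linspace(0, cube.shape[-1] - 1, n_bands).round().astype(int)
    return cube[..., idx]

cube = np.random.rand(100, 100, 270)  # stand-in for an (H, W, 270) HSI patch
hsi_90, hsi_30, hsi_10 = (subsample_bands(cube, n) for n in (90, 30, 10))
```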
Table 8. Classification results with the PCA-9 dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions; see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.897 | 0.975 | 0.909 | 0.934 | 0.571 |
| FL (γ = 1.5, α = 0.50) | 0.911 | 0.977 | 0.923 | 0.943 | 0.614 |
| FL (γ = 1.5, α = 0.75) | 0.911 | 0.974 | 0.926 | 0.941 | 0.602 |
| FL (γ = 2.0, α = 0.25) | 0.945 | 0.987 | 0.952 | 0.966 | 0.743 |
| FL (γ = 2.0, α = 0.50) | 0.957 | 0.988 | 0.964 | 0.972 | 0.788 |
| FL (γ = 2.0, α = 0.75) | 0.927 | 0.987 | 0.932 | 0.956 | 0.687 |
| FL (γ = 2.5, α = 0.25) | 0.930 | 0.980 | 0.941 | 0.954 | 0.674 |
| FL (γ = 2.5, α = 0.50) | 0.892 | 0.981 | 0.897 | 0.934 | 0.583 |
| FL (γ = 2.5, α = 0.75) | 0.882 | 0.976 | 0.891 | 0.927 | 0.544 |
Table 9. Classification results with the PCA-3 dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions; see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.873 | 0.971 | 0.885 | 0.919 | 0.503 |
| FL (γ = 1.5, α = 0.50) | 0.879 | 0.970 | 0.894 | 0.922 | 0.511 |
| FL (γ = 1.5, α = 0.75) | 0.857 | 0.941 | 0.897 | 0.897 | 0.336 |
| FL (γ = 2.0, α = 0.25) | 0.899 | 0.977 | 0.910 | 0.936 | 0.584 |
| FL (γ = 2.0, α = 0.50) | 0.913 | 0.983 | 0.919 | 0.947 | 0.638 |
| FL (γ = 2.0, α = 0.75) | 0.906 | 0.975 | 0.920 | 0.939 | 0.590 |
| FL (γ = 2.5, α = 0.25) | 0.888 | 0.976 | 0.897 | 0.930 | 0.556 |
| FL (γ = 2.5, α = 0.50) | 0.889 | 0.974 | 0.900 | 0.930 | 0.550 |
| FL (γ = 2.5, α = 0.75) | 0.923 | 0.972 | 0.942 | 0.947 | 0.626 |
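The PCA-9 and PCA-3 datasets of Tables 8 and 9 compress the spectral dimension with principal component analysis. A minimal scikit-learn sketch of such per-pixel spectral reduction follows; the helper and its parameters are illustrative, not the authors' exact preprocessing.

```python
# Sketch of spectral PCA reduction for the PCA-9/PCA-3 variants:
# treat each pixel's 270-band spectrum as one sample and keep the
# leading components.
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands_pca(cube, n_components):
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)           # one spectrum per pixel
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(pixels)    # (h*w, n_components)
    return reduced.reshape(h, w, n_components)

pca_9 = reduce_bands_pca(np.random.rand(100, 100, 270), 9)
pca_3 = reduce_bands_pca(np.random.rand(100, 100, 270), 3)
```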
Table 10. Classification results with the RGB dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 3D convolutions; see Figure A2 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.852 | 0.964 | 0.868 | 0.905 | 0.440 |
| FL (γ = 1.5, α = 0.50) | 0.851 | 0.966 | 0.865 | 0.905 | 0.449 |
| FL (γ = 1.5, α = 0.75) | 0.828 | 0.936 | 0.869 | 0.879 | 0.269 |
| FL (γ = 2.0, α = 0.25) | 0.900 | 0.974 | 0.914 | 0.936 | 0.573 |
| FL (γ = 2.0, α = 0.50) | 0.891 | 0.972 | 0.905 | 0.930 | 0.544 |
| FL (γ = 2.0, α = 0.75) | 0.882 | 0.970 | 0.897 | 0.924 | 0.520 |
| FL (γ = 2.5, α = 0.25) | 0.859 | 0.970 | 0.871 | 0.911 | 0.479 |
| FL (γ = 2.5, α = 0.50) | 0.853 | 0.968 | 0.865 | 0.907 | 0.458 |
| FL (γ = 2.5, α = 0.75) | 0.878 | 0.964 | 0.897 | 0.919 | 0.486 |
Table 11. Classification results with the HSI dataset for varying cost functions and respective parameter sets (patch size parameter S = 100, 2D convolutions; see Figure A3 for network topology).

| Loss Function | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| FL (γ = 1.5, α = 0.25) | 0.891 | 0.974 | 0.903 | 0.931 | 0.554 |
| FL (γ = 1.5, α = 0.50) | 0.885 | 0.972 | 0.897 | 0.926 | 0.534 |
| FL (γ = 1.5, α = 0.75) | 0.885 | 0.970 | 0.900 | 0.926 | 0.527 |
| FL (γ = 2.0, α = 0.25) | 0.922 | 0.983 | 0.929 | 0.952 | 0.663 |
| FL (γ = 2.0, α = 0.50) | 0.934 | 0.986 | 0.940 | 0.959 | 0.706 |
| FL (γ = 2.0, α = 0.75) | 0.920 | 0.980 | 0.930 | 0.949 | 0.646 |
| FL (γ = 2.5, α = 0.25) | 0.932 | 0.983 | 0.94 | 0.957 | 0.689 |
| FL (γ = 2.5, α = 0.50) | 0.923 | 0.982 | 0.931 | 0.952 | 0.658 |
| FL (γ = 2.5, α = 0.75) | 0.915 | 0.975 | 0.929 | 0.944 | 0.615 |
Table 12. Comparison of classification results of the 3D-CNN and 2D-CNN models trained on HSI data.

| Model | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| HSI-3D-CNN | 0.970 | 0.999 | 0.968 | 0.984 | 0.860 |
| HSI-2D-CNN | 0.934 | 0.986 | 0.940 | 0.959 | 0.706 |
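The gap in Table 12 reflects how the two convolution types treat the spectral axis. The sketch below shows the shape-level difference; the filter counts and kernel sizes are illustrative, not the exact topologies of Figures A1 and A3.

```python
# Shape-level contrast behind Table 12 (illustrative parameters).
import tensorflow as tf
from tensorflow.keras import layers

patch = tf.random.normal([1, 100, 100, 270])  # S x S patch with 270 bands

# 2D-CNN: bands act as input channels; each kernel mixes all bands at once,
# so local spectral structure is collapsed in the first layer.
feat2d = layers.Conv2D(16, (3, 3))(patch)

# 3D-CNN: bands form a third convolved axis; kernels also slide along the
# spectral dimension, extracting joint spatial-spectral features.
patch3d = tf.expand_dims(patch, -1)           # (1, 100, 100, 270, 1)
feat3d = layers.Conv3D(16, (3, 3, 7))(patch3d)
```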
Table 13. Comparison of classification results of the 3D-CNN when the training, validation, and testing sets are rotated among each other (patch size parameter S = 100, 3D convolutions; see Figure A1 for network topology).

| Model | Accuracy | Precision | Recall | F1-Score | MCC |
|---|---|---|---|---|---|
| HSI-3D-CNN (configuration-1) | 0.970 | 0.999 | 0.968 | 0.984 | 0.860 |
| HSI-3D-CNN (configuration-2) | 0.965 | 0.997 | 0.963 | 0.981 | 0.836 |
| HSI-3D-CNN (configuration-3) | 0.968 | 0.996 | 0.968 | 0.982 | 0.846 |
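The three configurations of Table 13 cycle the roles of the train, validation, and test case groups, so each group is tested on once. A trivial sketch of that rotation; the group names are hypothetical placeholders for the case groups of Table 2.

```python
# Cycling the split roles to obtain the three configurations of Table 13.
from collections import deque

groups = deque(["group_A", "group_B", "group_C"])  # hypothetical case groups
for config in range(1, 4):
    train, val, test = groups
    print(f"configuration-{config}: train={train}, val={val}, test={test}")
    groups.rotate(-1)  # shift every group into the next role
```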