Article

Mulvernet: Nucleus Segmentation and Classification of Pathology Images Using the HoVer-Net and Multiple Filter Units

Department of AI Convergence, Chonnam National University, Gwangju 61186, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2023, 12(2), 355; https://doi.org/10.3390/electronics12020355
Submission received: 16 December 2022 / Revised: 5 January 2023 / Accepted: 8 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Image Segmentation)

Abstract

Nucleus segmentation and classification are crucial in pathology image analysis. Automated nuclear classification and segmentation methods support the analysis and understanding of cell characteristics and functions, and allow the analysis of large-scale nuclear forms in the diagnosis and treatment of diseases. Common problems in these tasks arise from the inconsistent sizes and shapes of the cells in each pathology image. This study aims to develop a new method to address these problems based primarily on the horizontal and vertical distance network (HoVer-Net), multiple filter units, and attention gate mechanisms. The results of the study show that a multiple filter unit improves the performance of the original HoVer-Net model. In addition, our experimental results show that Mulvernet outperforms several existing methods in both nucleus segmentation and classification. The ability to segment and classify different types of nuclei automatically has a direct influence on further pathological analysis, offering great potential not only to accelerate the diagnostic process in clinics but also to enhance our understanding of tissue and cell properties to improve patient care and management.

1. Introduction

Pathology or histology images are stained with hematoxylin and eosin (H&E), allowing the efficient processing of tissues for analysis and management. Each pathology image contains tens of thousands of nuclei of different types, such as epithelial cells, inflammatory cells, and neutrophils. These nuclei can be further analyzed to support clinical outcome prediction, disease diagnosis, and prognosis. For example, nuclear features can be used to predict survival [1] and to diagnose disease type and grade [2]. Furthermore, efficient and accurate nucleus detection and segmentation can improve the quality of tissue segmentation [3,4] which, in turn, not only facilitates the quantification of pathology images, but also serves as an important step in understanding how each component of the tissue contributes to the disease. This necessitates the tasks of segmenting and classifying the nuclei in pathology analysis.
In previous work, the segmentation and the classification of nuclei were treated as separate processes. Available nucleus-segmentation methods are based on thresholds, image gradients, and morphological operations [5,6]. Recently, deep-learning methods have been widely used for segmenting nuclei [7,8] by predicting nucleus boundaries and using them for instance segmentation. Some have proposed using a nucleus distance map [9] or combining the nucleus distance map and the nucleus boundaries for nucleus segmentation [10]. In addition, ref. [11] integrated dense steerable filters into a convolutional neural network (CNN) to obtain rotation equivariance in nuclei segmentation.
The classification of nuclei has been studied in conjunction with the segmentation of nuclei. Many previous methods first segment individual nuclei and then classify them into appropriate classes through quantitative features, such as intensity [12], morphology [12,13,14], and texture features [12], using machine-learning algorithms [12,14] and CNNs [15,16]. Recently, ref. [17] proposed an end-to-end CNN, the horizontal and vertical distance network (HoVer-Net), for simultaneous nuclear instance segmentation and classification in multiple tissues.
However, nucleus segmentation and classification are challenging for several reasons. First, there are many nuclei of different sizes and shapes in a pathology image, and it is difficult to analyze them manually. Second, tumor nuclei tend to cluster, resulting in complex contexts and many overlapping instances.
Our main contributions are summarized as follows: (1) we propose a variant of HoVer-Net for improved simultaneous cell segmentation and classification; (2) we introduce a strategy that combines multiple filter units and attention gates with the original HoVer-Net in order to improve the performance of nucleus segmentation and classification; and (3) we demonstrate that the proposed method achieves superior performance on both nucleus segmentation and classification for diverse multi-tissue datasets.
The remainder of this paper is organized as follows. Section 2 describes the proposed Mulvernet and how it solves the problem of simultaneous cell segmentation and classification. The performance of cell segmentation and classification on various datasets is reported in Section 3. Finally, we discuss our experimental results in Section 4 and conclude the paper in Section 5.

2. Materials and Methods

This section introduces the details of the proposed nucleus segmentation and classification model, named Mulvernet, as shown in Figure 1. The input dimension is 256 × 256 and the output dimension of each branch is 164 × 164. First, the input is normalized and mapped to a 3-channel form. Residual units (RUs), motivated by the excellent performance of ResNet50 in recent computer vision tasks, are then used to extract a strong and representative set of features; they are applied throughout the network at different downsampling levels. The encoder consists of three RUs at downsampling level 1, four RUs at level 2, six RUs at level 4, and three RUs at level 8. The skip concatenation of the original HoVer-Net is replaced by an attention gate that adjusts the gain of the feature map to remove irrelevant and noisy responses in the skip connections.
Following the encoder path, the network branches into three decoder paths for three subtasks: nucleus classification, prediction of the horizontal and vertical distances of each pixel to its nucleus center, and nucleus segmentation. All decoder paths use the same architectural design, consisting of a series of upsampling operations and multiple filter units. We use multiple filter units to incorporate both low-level and high-level features, which is particularly important here, where we aim for parallel nucleus segmentation and classification. The details of the residual unit and the multiple filter unit are also shown in Figure 1.
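To make the overall layout concrete, the following is a minimal structural sketch of a shared encoder feeding three parallel decoder branches. It is not the authors' implementation: the stub modules, channel sizes, and branch names are illustrative assumptions, the real encoder is Preact-ResNet50 with attention-gated skips and multiple filter units in the decoders, and the real model's valid convolutions shrink the output to 164 × 164 rather than the padded 256 × 256 used here.

```python
import torch
import torch.nn as nn

class DecoderBranch(nn.Module):
    """Stand-in for one decoder path (upsampling + multiple filter units)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.head(self.up(x))

class MulvernetSketch(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Stand-in for the Preact-ResNet50 encoder (3/4/6/3 residual units
        # at downsampling levels 1, 2, 4, and 8).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, 3, stride=4, padding=1), nn.ReLU(inplace=True),
        )
        # Three parallel decoders: nucleus segmentation (NP), horizontal/
        # vertical distance maps (HV), and nucleus classification (NC).
        self.np_branch = DecoderBranch(256, 2)
        self.hv_branch = DecoderBranch(256, 2)
        self.nc_branch = DecoderBranch(256, num_classes)

    def forward(self, x):
        f = self.encoder(x)
        return {"np": self.np_branch(f), "hv": self.hv_branch(f),
                "nc": self.nc_branch(f)}

out = MulvernetSketch()(torch.randn(1, 3, 256, 256))
print({k: tuple(v.shape) for k, v in out.items()})
# {'np': (1, 2, 256, 256), 'hv': (1, 2, 256, 256), 'nc': (1, 5, 256, 256)}
```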

2.1. Multiple Filter Unit

The multiple filter unit [18] addresses the problem of inconsistent nucleus sizes and shapes by varying the filter size rather than iteratively reducing the image size. A multiple filter unit comprises three convolutional layers with different kernel sizes: 1 × 1, 3 × 3, and 5 × 5. Each filter learns different features: while small kernels extract small, complex features, large kernels extract simpler features. The first convolution kernel is therefore of size 1 × 1, reducing the size of the input vector and extracting local features. The next convolutional layer uses a 3 × 3 kernel with a downsampling factor of 2 to obtain global features. The final convolutional layer has a 5 × 5 kernel, also with a downsampling factor of 2. All features are then concatenated before proceeding to the next steps. Given an input X, the process of the multiple filter unit (MF unit) can be written as follows:
$$x = \max\left(0,\ F(X, f_{1\times1}) \otimes F(X, f_{3\times3}) \otimes F(X, f_{5\times5})\right), \quad X \in \mathbb{R}^{3\times256\times256}$$
where F is the convolutional layer, f is a filter of size 1 × 1, 3 × 3, or 5 × 5, X is the feature-map input of the multiple filter unit, x is the output of the multiple filter unit, and ⊗ represents the concatenation operation.
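The following is a minimal PyTorch sketch of the MF unit as written in the equation above: three parallel convolutions whose outputs are concatenated and passed through a ReLU. For the concatenation to be well-defined, all branches here keep the same spatial size (stride 1 with matching padding), so the stride-2 downsampling mentioned above is omitted; the per-branch channel count is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MultipleFilterUnit(nn.Module):
    """Three parallel convolutions (1x1, 3x3, 5x5) -> concatenate -> ReLU."""
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.conv3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x_out = max(0, F(X, f1x1) ⊗ F(X, f3x3) ⊗ F(X, f5x5)), ⊗ = concat
        return self.relu(torch.cat(
            [self.conv1(x), self.conv3(x), self.conv5(x)], dim=1))

mf = MultipleFilterUnit(in_ch=3, branch_ch=16)
print(mf(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 48, 256, 256])
```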

2.2. Attention Gate

Attention gates are integrated into the standard HoVer-Net architecture to focus on certain parts of the image and highlight relevant features that pass through skip connections. Given the feature map X as an input and a gating signal G ∈ ℝ^(3×256×256) collected at a coarser scale that contains contextual information, the attention gate uses additive attention to obtain the gating coefficient. The input X and the gating signal are first linearly mapped to an ℝ^(3×256×256) space, and the output is then squeezed in the channel domain to produce a spatial attention weight map S ∈ ℝ^(3×256×256). The overall process can be written as follows:
$$S = \sigma\left(\varphi\left(\delta\left(\phi_x(X) + \phi_g(G)\right)\right)\right)$$
$$Y = S \odot X$$
where φ, φ_x, and φ_g are linear transformations implemented as 1 × 1 convolutions, δ is an element-wise nonlinearity, σ is an activation function, ⊙ denotes element-wise multiplication, and Y is the output of the attention gate.
This attention gate is applied before the concatenation operation so that only relevant activations are combined, removing irrelevant and noisy responses in skip connections. In essence, this enables the model parameters in shallow layers to be updated based mainly on spatial regions related to a specific task. It reduces the computational resources wasted on unnecessary activations and improves the network's generalization power.
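The following is a minimal sketch of such an additive attention gate. Following the common attention-gate design, δ is taken to be a ReLU and σ a sigmoid, with a 1 × 1 convolution squeezing the joint features into a one-channel weight map S that rescales the skip-connection features X; these concrete choices and the channel sizes are assumptions, and X and G are assumed to share the same spatial size for simplicity.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: S = sigmoid(psi(ReLU(phi_x(X) + phi_g(G))))."""
    def __init__(self, x_ch: int, g_ch: int, inter_ch: int):
        super().__init__()
        self.phi_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)  # squeeze channels

    def forward(self, x, g):
        # Spatial attention weight map S, then Y = S ⊙ X.
        s = torch.sigmoid(self.psi(torch.relu(self.phi_x(x) + self.phi_g(g))))
        return s * x

gate = AttentionGate(x_ch=64, g_ch=64, inter_ch=32)
y = gate(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(y.shape)  # torch.Size([1, 64, 128, 128])
```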
Furthermore, we use Preact-ResNet50 [19] as the backbone.

3. Results

3.1. Datasets

We used three publicly available datasets to evaluate our method: the MoNuSAC, GlySAC, and CoNSeP datasets.
The MoNuSAC (multi-organ nuclei segmentation and classification) [20] dataset contains images from various organs, such as the breast, lung, kidney, and prostate. There are 209 images with 31,411 nuclei of four types: epithelial nuclei, lymphocytes, macrophages, and neutrophils. The training set contains 168 images, and the test set contains 41 images. Figure 2 shows examples from the MoNuSAC dataset with the ground truth and the predictions of our proposed method.
The GlySAC (gastric lymphocyte segmentation and classification) [21] dataset contains 59 images with various types of nuclei, such as lymphocytes, cancerous and normal epithelial nuclei, stromal nuclei, and endothelial nuclei. Sets of 34 and 25 images are used as the training and test sets, respectively. Figure 3 presents examples from the GlySAC dataset with the ground truth and the predictions of Mulvernet.
The CoNSeP (colorectal nuclear segmentation and phenotypes) [17] dataset contains 24,319 nuclei from 41 images. These nuclei comprise four types: miscellaneous, inflammatory, epithelial, and spindle. The CoNSeP images are divided into a training set of 27 images and a test set of 14 images. Figure 4 shows examples from the CoNSeP dataset with the ground truth and the predictions of Mulvernet.

3.2. Evaluation Metrics

We employed five evaluation metrics: the Dice score, the aggregated Jaccard index [22], detection quality, segmentation quality, and panoptic quality [23]. Given the ground truth x and the prediction y, TP denotes the number of true positives, FP the number of false positives, FN the number of false negatives, and IoU the intersection over union of x and y.
The Dice score for measuring the separation of all nuclei from the background is defined as:
$$Dice = \frac{2 \times TP}{(TP + FP) + (TP + FN)}$$
Detection quality (DQ) measures instance detection, while segmentation quality (SQ) evaluates how closely the matched predictions align with their ground truths. We formally define DQ and SQ as:
$$DQ = \frac{|TP|}{|TP| + \frac{1}{2}|FP| + \frac{1}{2}|FN|}$$
$$SQ = \frac{\sum_{(x,y) \in TP} IoU(x,y)}{|TP|}$$
Panoptic quality (PQ) was proposed for instance segmentation [23] and is aggregated from the DQ and SQ components as follows:
$$PQ = DQ \times SQ$$
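As a quick reference, the following Python sketch computes the Dice, DQ, SQ, and PQ values defined above from instance-matching results; the matching step that pairs ground-truth and predicted instances (and hence yields TP, FP, FN, and the matched IoUs) is assumed to be done upstream.

```python
def dice(tp: int, fp: int, fn: int) -> float:
    # Dice = 2*TP / ((TP+FP) + (TP+FN))
    return 2 * tp / ((tp + fp) + (tp + fn))

def dq(tp: int, fp: int, fn: int) -> float:
    # Detection quality: |TP| / (|TP| + 0.5|FP| + 0.5|FN|)
    return tp / (tp + 0.5 * fp + 0.5 * fn)

def sq(matched_ious: list) -> float:
    # Segmentation quality: mean IoU over the matched (TP) pairs.
    return sum(matched_ious) / len(matched_ious)

def pq(tp: int, fp: int, fn: int, matched_ious: list) -> float:
    # Panoptic quality: PQ = DQ × SQ.
    return dq(tp, fp, fn) * sq(matched_ious)

# Example: 80 matched pairs with IoU 0.8 each, 10 FPs, 10 FNs.
ious = [0.8] * 80
print(round(dq(80, 10, 10), 3), sq(ious), round(pq(80, 10, 10, ious), 3))
# 0.889 0.8 0.711
```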
The aggregated Jaccard index (AJI) is used to compute the nucleus-segmentation performance as the ratio of an aggregated intersection cardinality to an aggregated union cardinality between x and y:
$$AJI = \frac{\sum_{i=1}^{N} |x_i \cap y_M^i|}{\sum_{i=1}^{N} |x_i \cup y_M^i| + \sum_{F \in U} |P_F|}$$
where N is the number of nuclei, $x_i$ is the ith ground-truth nucleus, $y_M^i$ denotes the prediction with the largest Jaccard index with $x_i$, and U is the set of predicted connected components $P_F$ without a corresponding ground truth.
AJI+ is an extension of AJI that avoids over-penalization.
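A rough sketch of the AJI computation over integer-labelled instance maps (0 = background) is shown below; each ground-truth nucleus is matched to the overlapping prediction with the largest Jaccard index, and unmatched predictions enlarge the denominator. This simplified greedy matching is for illustration only.

```python
import numpy as np

def aji(gt: np.ndarray, pred: np.ndarray) -> float:
    """Aggregated Jaccard index over integer-labelled maps (0 = background)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used, inter_sum, union_sum = set(), 0, 0
    for i in gt_ids:
        g = gt == i
        best_j, best_jac, best = None, 0.0, (0, int(g.sum()))
        for j in pred_ids:
            p = pred == j
            inter = int(np.logical_and(g, p).sum())
            union = int(np.logical_or(g, p).sum())
            if inter > 0 and inter / union > best_jac:
                best_j, best_jac, best = j, inter / union, (inter, union)
        inter_sum += best[0]   # |x_i ∩ y_M^i| (0 if no overlapping prediction)
        union_sum += best[1]   # |x_i ∪ y_M^i| (just |x_i| if unmatched)
        if best_j is not None:
            used.add(best_j)
    for j in pred_ids:         # unmatched predictions P_F enlarge denominator
        if j not in used:
            union_sum += int((pred == j).sum())
    return inter_sum / union_sum if union_sum else 0.0

gt = np.zeros((8, 8), int); gt[1:4, 1:4] = 1
pred = np.zeros((8, 8), int); pred[1:4, 2:5] = 1
print(round(aji(gt, pred), 3))  # intersection 6 / union 12 = 0.5
```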

3.3. Comparative Experiments

Inspired by HoVer-Net, we propose a new method named Mulvernet. We compared our method through experiments with four representative models: NucleiSegNet [24], Triple U-Net [25], Mask-RCNN [26], and HoVer-Net [17].
NucleiSegNet [24] is based on the U-Net architecture with residual blocks and attention mechanisms, addressing the nucleus-segmentation problem without any post-processing step. Triple U-Net [25] uses the hematoxylin component to segment nuclei. While NucleiSegNet [24] and Triple U-Net [25] were originally built for nucleus segmentation only, Mask-RCNN [26] was originally built for object localization and instance segmentation. HoVer-Net [17] is a horizontal and vertical distance network built for both nucleus segmentation and classification.
Table 1 presents the cell-classification results for the five methods on the three datasets (MoNuSAC, GlySAC, and CoNSeP). Our proposed method achieved the highest Fd score on all three datasets, outperforming HoVer-Net and the other competing models.
Table 2 compares the cell-segmentation performance of the five methods using five evaluation metrics and the three datasets. As with the cell-classification results, our proposed model was generally superior to the competing models across the datasets, achieving Dice scores of 0.766, 0.835, and 0.833 on the MoNuSAC, GlySAC, and CoNSeP datasets, respectively. The lowest performance among the five methods was that of Triple U-Net. Figure 5 compares the original HoVer-Net and Mulvernet across five evaluation metrics on the three datasets. Overall, Mulvernet achieved better performance than HoVer-Net, demonstrating the effectiveness of the multiple filter unit in improving the performance of HoVer-Net.

3.4. Ablation Experiments

The purpose of these ablation experiments was to assess the effectiveness of multiple filters. Table 3 and Table 4 present the results of the ablation experiments on cell classification and segmentation, respectively. In addition, Figure 6 compares the total loss on the validation sets of the three datasets between the original HoVer-Net and Mulvernet. The effectiveness of multiple filters for classification is apparent: by combining three different filters, our proposed method improved the overall performance.
Similar results were seen for cell segmentation. The use of three filters boosted the segmentation performance. Therefore, by combining multiple filters, our proposed model facilitated the improved classification and segmentation of cells across all three datasets.
Table 3 shows the results of the cell-classification ablation experiments for the three filter-size combinations. Overall, the multiple filter unit with three sizes (1 × 1, 3 × 3, and 5 × 5) performed best of the three combinations, with an Fd of 0.841 on the MoNuSAC dataset, 0.864 on the GlySAC dataset, and 0.736 on the CoNSeP dataset.
Table 4 compares the cell-segmentation results using six evaluation metrics for the three combinations of multiple filters on the MoNuSAC, GlySAC, and CoNSeP datasets. On the MoNuSAC dataset, the MF unit with filter sizes of 1 × 1, 3 × 3, and 5 × 5 achieved the highest Dice score of 0.766, while the MF unit with filter sizes of 1 × 1 and 3 × 3 scored 0.745. On the GlySAC and CoNSeP datasets, the Dice scores of the MF unit with filter sizes of 1 × 1, 3 × 3, and 5 × 5 were 0.835 and 0.833, respectively, matching or exceeding those of the other filter-size combinations. Thus, the ablation experiments demonstrate that the MF unit with filter sizes of 1 × 1, 3 × 3, and 5 × 5 is the best choice for our proposed method.

3.5. Implementation

For network training, we used input patches of 256 × 256 pixels and a batch size of 4. Each input image was normalized to the range 0-1 before being fed into the network, and no data-augmentation technique was applied in this experiment. Our method is an end-to-end model. We trained the network using the Adam optimizer [29] with a learning rate of $10^{-4}$ for 100 epochs, until convergence. For the loss, we combined multiple terms, including the mean squared-error loss, the cross-entropy loss [27], and the Dice loss [28], following the formulation for simultaneous nucleus segmentation and classification presented in [17]. Model selection was guided by the highest performance on the validation set. All models were implemented using the PyTorch framework [30] and trained on an NVIDIA GeForce 3090 Ti GPU (NVIDIA Corporation, Santa Clara, CA, USA).
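The following is a hedged sketch of this loss configuration, combining mean squared error for the distance-map branch with cross-entropy plus Dice for the segmentation and classification branches; the equal branch weighting and tensor key names are assumptions for illustration, not the exact weighting used in [17].

```python
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Soft Dice loss over class probabilities vs. integer targets."""
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    return 1 - (2 * inter / (denom + eps)).mean()

def total_loss(out: dict, gt: dict) -> torch.Tensor:
    l_hv = F.mse_loss(out["hv"], gt["hv"])                 # distance maps
    l_np = F.cross_entropy(out["np"], gt["np"]) + dice_loss(out["np"], gt["np"])
    l_nc = F.cross_entropy(out["nc"], gt["nc"]) + dice_loss(out["nc"], gt["nc"])
    return l_hv + l_np + l_nc

# Training setup as described above (model assumed to be defined elsewhere):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```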

4. Discussion

This study proposed a solution to the problem of parallel cell segmentation and classification in pathology images. We created a new simultaneous segmentation and classification model based on HoVer-Net combined with multiple filter units and attention mechanisms. We tested this method on three datasets, assessed the segmentation and classification performance, and compared it with several available methods. The experimental results showed that our method achieved the best overall result. We also designed ablation experiments in which the proposed module was combined with the original HoVer-Net to analyze its effectiveness. The ablation results showed that the proposed multiple filters improved the performance of HoVer-Net to some extent. These experimental results show that our method has some advantages in the automatic segmentation and classification of nuclei.

5. Conclusions

This paper presented a method for nucleus segmentation and classification in pathology images. Our method integrates multiple filter units and attention gates into HoVer-Net. The experimental results show the effectiveness of the multiple filter unit in improving the performance of the original HoVer-Net model, as well as in outperforming other models. The ability to segment and classify nuclei of different types automatically directly supports subsequent pathological analysis. It not only offers an excellent opportunity to speed up the diagnostic process in the clinic but also increases our understanding of tissue characteristics, leading to improved patient care and management. Our nucleus segmentation and classification model allows the identification of morphological characteristics and the quantification of the different types of nuclei and, thus, can provide additional diagnostic and predictive value. We observed low classification scores for nuclei with fewer samples and high variability. Future work will involve improving the class balance of the data in simultaneous learning.

Author Contributions

Conceptualization, V.T.-T.V. and S.-H.K.; Methodology, V.T.-T.V. and S.-H.K.; Software, V.T.-T.V.; Validation, V.T.-T.V. and S.-H.K.; Formal analysis, V.T.-T.V. and S.-H.K.; Investigation, V.T.-T.V. and S.-H.K.; Resources, V.T.-T.V. and S.-H.K.; Data curation, V.T.-T.V. and S.-H.K.; Writing—original draft preparation, V.T.-T.V.; Writing—review and editing, S.-H.K.; Visualization, V.T.-T.V. and S.-H.K.; Supervision, S.-H.K.; Project administration, S.-H.K.; Funding acquisition, S.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Bio & Medical Technology Development Program of the National Research Foundation (NRF) and funded by the Korean government (MSIT) (NRF-2019M3E5D1A02067961) and also supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, C.; Romo-Bucheli, D.; Wang, X.; Janowczyk, A.; Ganesan, S.; Gilmore, H.; Rimm, D.; Madabhushi, A. Nuclear shape and orientation features from H&E images predict survival in early-stage estrogen receptor-positive breast cancers. Lab. Investig. 2018, 98, 1438–1448.
  2. Alsubaie, N.; Sirinukunwattana, K.; Raza, S.E.A.; Snead, D.; Rajpoot, N. A bottom-up approach for tumour differentiation in whole slide images of lung adenocarcinoma. In Proceedings of the SPIE Medical Imaging—Medical Imaging 2018: Digital Pathology, Houston, TX, USA, 10–15 February 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10581, pp. 104–113.
  3. Sirinukunwattana, K.; Snead, D.; Epstein, D.; Aftab, Z.; Mujeeb, I.; Tsang, Y.W.; Cree, I.; Rajpoot, N. Novel digital signatures of tissue phenotypes for predicting distant metastasis in colorectal cancer. Sci. Rep. 2018, 8, 13692.
  4. Javed, S.; Mahmood, A.; Fraz, M.M.; Koohbanani, N.A.; Benes, K.; Tsang, Y.W.; Hewitt, K.; Epstein, D.; Snead, D.; Rajpoot, N. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med. Image Anal. 2020, 63, 101696.
  5. Veta, M.; Van Diest, P.J.; Kornegoor, R.; Huisman, A.; Viergever, M.A.; Pluim, J.P. Automatic nuclei segmentation in H&E stained breast cancer histopathology images. PLoS ONE 2013, 8, e70221.
  6. Chang, C.S.; Ding, J.J.; Wu, Y.F.; Lin, S.J. Cell segmentation algorithm using double thresholding with morphology-based techniques. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Taichung, Taiwan, 19–21 May 2018; pp. 1–5.
  7. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
  8. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221.
  9. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Segmentation of nuclei in histopathology images by deep regression of the distance map. IEEE Trans. Med. Imaging 2018, 38, 448–459.
  10. Liu, X.; Guo, Z.; Li, B.; Cao, J. Nuclei segmentation by using convolutional network with distance map and contour information. In Proceedings of the Eleventh Asian Conference on Machine Learning, Nagoya, Japan, 17–19 November 2019; PMLR: London, UK, 2019; pp. 972–986.
  11. Graham, S.; Epstein, D.; Rajpoot, N. Dense steerable filter CNNs for exploiting rotational symmetry in histology images. IEEE Trans. Med. Imaging 2020, 39, 4124–4136.
  12. Wienert, S.; Heim, D.; Saeger, K.; Stenzinger, A.; Beil, M.; Hufnagl, P.; Dietel, M.; Denkert, C.; Klauschen, F. Detection and segmentation of cell nuclei in virtual microscopy images: A minimum-model approach. Sci. Rep. 2012, 2, 503.
  13. Nguyen, K.; Jain, A.K.; Sabata, B. Prostate cancer detection: Fusion of cytological and textural features. J. Pathol. Inform. 2011, 2, 3.
  14. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Process. 2016, 122, 1–13.
  15. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206.
  16. Basha, S.S.; Ghosh, S.; Babu, K.K.; Dubey, S.R.; Pulabaigari, V.; Mukherjee, S. RCCNet: An efficient convolutional neural network for histological routine colon cancer nuclei classification. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1222–1227.
  17. Graham, S.; Vu, Q.D.; Raza, S.E.A.; Azam, A.; Tsang, Y.W.; Kwak, J.T.; Rajpoot, N. HoVer-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med. Image Anal. 2019, 58, 101563.
  18. Vo, V.T.T.; Yang, H.J.; Lee, G.S.; Kang, S.R.; Kim, S.H. Effects of multiple filters on liver tumor segmentation from CT images. Front. Oncol. 2021, 11, 697178.
  19. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the 14th European Conference on Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 630–645.
  20. Verma, R.; Kumar, N.; Patil, A.; Kurian, N.C.; Rane, S.; Graham, S.; Vu, G.D.; Zwager, M.; Raza, S.E.A.; Rajpoot, N.; et al. MoNuSAC2020: A multi-organ nuclei segmentation and classification challenge. IEEE Trans. Med. Imaging 2021, 40, 3413–3423.
  21. Doan, T.N.; Song, B.; Vuong, T.T.; Kim, K.; Kwak, J.T. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images. IEEE J. Biomed. Health Inform. 2022, 26, 3218–3228.
  22. Kumar, N.; Verma, R.; Sharma, S.; Bhargava, S.; Vahadane, A.; Sethi, A. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans. Med. Imaging 2017, 36, 1550–1560.
  23. Kirillov, A.; He, K.; Girshick, R.; Rother, C.; Dollár, P. Panoptic segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9404–9413.
  24. Lal, S.; Das, D.; Alabhya, K.; Kanfade, A.; Kumar, A.; Kini, J. NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images. Comput. Biol. Med. 2021, 128, 104075.
  25. Zhao, B.; Chen, X.; Li, Z.; Yu, Z.; Yao, S.; Yan, L.; Wang, Y.; Liu, Z.; Liang, C.; Han, C. Triple U-net: Hematoxylin-aware nuclei segmentation with progressive dense feature aggregation. Med. Image Anal. 2020, 65, 101786.
  26. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2961–2969.
  27. Zhang, Z.; Sabuncu, M. Generalized cross entropy loss for training deep neural networks with noisy labels. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; Volume 31.
  28. Li, X.; Sun, X.; Meng, Y.; Liang, J.; Wu, F.; Li, J. Dice loss for data-imbalanced NLP tasks. arXiv 2019, arXiv:1911.02855.
  29. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  30. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. 2017. Available online: https://openreview.net/forum?id=BJJsrmfCZ (accessed on 15 December 2022).
Figure 1. Schematic of our proposed method. The proposed method adopted to solve the problem, combining the HoVer-Net model [17] and a multiple filter unit [18], is named Mulvernet.
Figure 2. Sample tissue images from the MoNuSAC dataset with ground-truth annotations and prediction of Mulvernet model.
Figure 3. Sample tissue images from the GlySAC dataset with ground-truth annotations and prediction of Mulvernet model.
Figure 4. Sample tissue images from the CoNSeP dataset with ground-truth annotations and prediction of Mulvernet model.
Figure 5. The comparison of five evaluation metrics in three datasets between original HoVer-Net and Mulvernet. (A) The comparison on MoNuSAC dataset. (B) The comparison on GlySAC dataset. (C) The comparison on CoNSeP dataset.
Figure 6. The comparison of total loss in the validation set of three datasets between original HoVer-Net and Mulvernet. (A) The comparison on MoNuSAC dataset. (B) The comparison on GlySAC dataset. (C) The comparison on CoNSeP dataset.
Table 1. Cell-classification results on the three datasets (MoNuSAC, GlySAC, and CoNSeP) for the five methods.

MoNuSAC:

| Method | Fd | F-Epithelial | F-Lymphocyte | F-Macrophages | F-Neutrophil |
|---|---|---|---|---|---|
| NucleiSegNet [24] | 0.338 | 0.341 | 0.445 | 0.091 | 0.228 |
| Triple U-net [25] | 0.638 | 0.556 | 0.649 | 0.237 | 0.324 |
| Mask-RCNN [26] | 0.839 | 0.801 | 0.804 | 0.451 | 0.472 |
| HoVer-Net [17] | 0.825 | 0.754 | 0.803 | 0.382 | 0.387 |
| Proposed method | 0.841 | 0.764 | 0.829 | 0.371 | 0.435 |

GlySAC:

| Method | Fd | F-Epithelial | F-Lymphocyte | F-Miscellaneous |
|---|---|---|---|---|
| NucleiSegNet [24] | 0.712 | 0.369 | 0.429 | 0.115 |
| Triple U-net [25] | 0.728 | 0.401 | 0.463 | 0.106 |
| Mask-RCNN [26] | 0.818 | 0.513 | 0.535 | 0.279 |
| HoVer-Net [17] | 0.861 | 0.555 | 0.517 | 0.352 |
| Proposed method | 0.864 | 0.575 | 0.568 | 0.310 |

CoNSeP:

| Method | Fd | F-Epithelial | F-Inflammatory | F-Miscellaneous | F-Spindle |
|---|---|---|---|---|---|
| NucleiSegNet [24] | 0.418 | 0.310 | 0.216 | 0.098 | 0.288 |
| Triple U-net [25] | 0.632 | 0.358 | 0.561 | 0.102 | 0.438 |
| Mask-RCNN [26] | 0.731 | 0.608 | 0.598 | 0.099 | 0.516 |
| HoVer-Net [17] | 0.719 | 0.599 | 0.508 | 0.200 | 0.479 |
| Proposed method | 0.736 | 0.813 | 0.340 | 0.248 | 0.517 |
Table 2. Cell-segmentation results on the three datasets (MoNuSAC, GlySAC, and CoNSeP) using the Dice score.

| Datasets | NucleiSegNet [24] | Triple U-Net [25] | Mask-RCNN [26] | HoVer-Net [17] | Proposed Method |
|---|---|---|---|---|---|
| MoNuSAC | 0.537 | 0.512 | 0.767 | 0.753 | 0.766 |
| GlySAC | 0.651 | 0.677 | 0.781 | 0.823 | 0.835 |
| CoNSeP | 0.744 | 0.512 | 0.767 | 0.828 | 0.833 |
Table 3. Results of ablation experiments on cell classification.

MoNuSAC:

| Combinations | Fd | ACC | F-Epithelial | F-Lymphocyte | F-Macrophages | F-Neutrophil |
|---|---|---|---|---|---|---|
| 1 × 1 and 3 × 3 | 0.838 | 0.944 | 0.741 | 0.809 | 0.353 | 0.330 |
| 1 × 1 and 5 × 5 | 0.833 | 0.953 | 0.754 | 0.813 | 0.340 | 0.517 |
| 1 × 1, 3 × 3, and 5 × 5 | 0.841 | 0.959 | 0.764 | 0.829 | 0.370 | 0.435 |

GlySAC:

| Combinations | Fd | ACC | F-Miscellaneous | F-Epithelial | F-Lymphocyte |
|---|---|---|---|---|---|
| 1 × 1 and 3 × 3 | 0.861 | 0.709 | 0.297 | 0.552 | 0.549 |
| 1 × 1 and 5 × 5 | 0.863 | 0.713 | 0.285 | 0.564 | 0.556 |
| 1 × 1, 3 × 3, and 5 × 5 | 0.864 | 0.725 | 0.310 | 0.575 | 0.568 |

CoNSeP:

| Combinations | Fd | ACC | F-Miscellaneous | F-Inflammatory | F-Epithelial | F-Spindle |
|---|---|---|---|---|---|---|
| 1 × 1 and 3 × 3 | 0.733 | 0.768 | 0.204 | 0.459 | 0.571 | 0.430 |
| 1 × 1 and 5 × 5 | 0.731 | 0.781 | 0.228 | 0.459 | 0.581 | 0.454 |
| 1 × 1, 3 × 3, and 5 × 5 | 0.736 | 0.784 | 0.248 | 0.340 | 0.813 | 0.517 |
Table 4. Results of ablation experiments on cell segmentation.

| Datasets | Filter Sizes | Dice | AJI | DQ | SQ | PQ | AJI+ |
|---|---|---|---|---|---|---|---|
| MoNuSAC | 1 × 1 and 3 × 3 | 0.745 | 0.589 | 0.717 | 0.779 | 0.579 | 0.593 |
| | 1 × 1 and 5 × 5 | 0.763 | 0.608 | 0.742 | 0.784 | 0.601 | 0.613 |
| | 1 × 1, 3 × 3, and 5 × 5 | 0.766 | 0.608 | 0.737 | 0.789 | 0.601 | 0.613 |
| GlySAC | 1 × 1 and 3 × 3 | 0.835 | 0.647 | 0.799 | 0.786 | 0.629 | 0.661 |
| | 1 × 1 and 5 × 5 | 0.835 | 0.651 | 0.800 | 0.787 | 0.632 | 0.665 |
| | 1 × 1, 3 × 3, and 5 × 5 | 0.835 | 0.650 | 0.804 | 0.786 | 0.634 | 0.666 |
| CoNSeP | 1 × 1 and 3 × 3 | 0.826 | 0.507 | 0.623 | 0.751 | 0.469 | 0.541 |
| | 1 × 1 and 5 × 5 | 0.825 | 0.486 | 0.610 | 0.746 | 0.456 | 0.514 |
| | 1 × 1, 3 × 3, and 5 × 5 | 0.833 | 0.515 | 0.635 | 0.757 | 0.482 | 0.542 |