Article

Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei Segmentation in Histopathology Images

Haotian Wang, Aleksandar Vakanski, Changfa Shi and Min Xian *

1 Castell Health Inc., Salt Lake City, UT 84111, USA
2 Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
3 School of Intelligent Engineering and Intelligent Manufacturing, Hunan University of Technology and Business, Changsha 410205, China
* Author to whom correspondence should be addressed.
Information 2024, 15(7), 417; https://doi.org/10.3390/info15070417
Submission received: 7 June 2024 / Revised: 13 July 2024 / Accepted: 16 July 2024 / Published: 18 July 2024

Abstract

Separating overlapped nuclei is a significant challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on nuclei segmentation; however, their performance on separating overlapped nuclei is limited. To address this issue, we propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately. The newly proposed multitask learning architecture enhances generalization by learning shared representations from the following three tasks: instance segmentation, nuclei distance map prediction, and overlapped nuclei distance map prediction. The proposed bending loss assigns high penalties to concave contour points with large curvatures and small penalties to convex contour points with small curvatures, so that minimizing the bending loss avoids generating contours that encompass multiple nuclei. In addition, two new quantitative metrics, the Aggregated Jaccard Index of overlapped nuclei (AJIO) and the accuracy of overlapped nuclei (ACCO), are designed to evaluate overlapped nuclei segmentation. We validate the proposed approach on the CoNSeP and MoNuSegv1 data sets using the following seven quantitative metrics: Aggregate Jaccard Index, Dice, Segmentation Quality, Recognition Quality, Panoptic Quality, AJIO, and ACCO. Extensive experiments demonstrate that the proposed Bend-Net outperforms eight state-of-the-art approaches.

1. Introduction

Histopathology nuclei segmentation aims to extract all nuclei from histopathology images, and it provides reliable evidence for cancer evaluation. Conventionally, pathologists examine the shapes and distributions of nuclei under microscopes to determine the carcinoma and its malignancy level [1]. However, the large number of nuclei makes the whole process time-consuming, low-throughput, and prone to human error. Automated nuclei segmentation is therefore highly desirable in clinical practice. Recently, with the growing interest in digital pathology, whole-slide scanners have provided a solution that digitizes glass slides into whole-slide images [2].
In an H&E-stained histopathology image, nuclei are the first and most visible structures among tissues. Accurate nuclei segmentation is essential for further quantitative analysis [3], e.g., movement tracking, morphological change analysis, and nuclei counting. Many computational approaches have been proposed for automatic nuclei segmentation in histopathology images. Some conventional approaches [4,5,6] utilized thresholding and watershed algorithms to segment nuclei, but these approaches are not robust in handling images with various nucleus types, fat tissue, and staining procedures. In recent years, deep learning-based approaches have been thriving in numerous biomedical image processing tasks [7,8,9] and have achieved promising results in nuclei segmentation [10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Xing et al. [10] proposed a convolutional neural network (CNN) to produce probability maps, and improved the robustness by using postprocessing, e.g., distance transformation, H-minima thresholding, and a region growing algorithm. Kumar et al. [11] demonstrated a three-class (instance, boundary, and background) CNN that computes a label for each pixel to segment nuclei. Naylor et al. [12] constructed DIST, which regresses nuclei distance maps for accurate nuclei segmentation. Although these methods achieved better results than conventional approaches, it is still challenging to segment nuclei accurately due to the existence of a large number of overlapped nuclei.
Overlapped nuclei segmentation is challenging because of the lack of clear boundaries between nuclei, similar background textures, and large variations in size and morphology. In recently published deep learning-based approaches, three main strategies have been proposed to address this challenge. The first strategy uses the neural network to split overlapped nuclei by generating both nuclei regions and boundaries. For example, Kumar et al. [11] proposed a three-class CNN with instance, boundary, and background classes to segment overlapped nuclei. Chen et al. [14] proposed a multitask learning framework that outputs instance and boundary maps in separate branches. Vu et al. [15] constructed a multiscale deep residual network with instance and boundary classes to segment nuclei. The second strategy integrates features from overlapped nuclei to improve overall segmentation performance. Zhou et al. [17] proposed CIA-Net, which utilizes spatial and texture dependencies between nuclei and contours to improve the robustness of nuclei segmentation. Koohbanani et al. [18] proposed SpaNet, which captures spatial features in a multiscale neural network. Graham et al. [19] proposed a weighted cross-entropy loss that is sensitive to the Hematoxylin stain. Qu et al. [20] constructed a new cross-entropy loss that learns spatial features to improve localization accuracy. The third strategy uses the watershed algorithm to segment overlapped nuclei. Naylor et al. [12] constructed a regression network that generates markers for the watershed algorithm to segment overlapped nuclei. Graham et al. [21] proposed the HoVer-Net architecture, which outputs an instance map and horizontal and vertical nuclei distance maps to obtain the markers for the watershed algorithm. Gudhe et al. [22] applied Bayesian approximation with Monte Carlo dropout during inference to estimate the uncertainty of a deep learning model for nuclei segmentation. According to the reported results, these approaches achieved better overall performance than conventional methods, but their ability to separate overlapped nuclei is still limited (Figure 1).
To address the above challenges, we propose a novel bending loss regularized deep multitask network for nuclei segmentation. First, the proposed multitask network consists of the following three decoder branches: (1) an instance segmentation branch; (2) a boundary-distance branch for all nuclei; and (3) a boundary-distance branch for overlapped nuclei. The third branch is designed to identify overlapped nuclei. Second, we propose a bending energy-based regularizer to penalize large curvatures of nuclei contours. In histopathology images, the curvature along a nucleus contour changes smoothly; however, if one contour encloses two or more overlapped or touching nuclei, the touching points on the contour exhibit sharp curvature changes (Figure 2). Inspired by this observation, we develop the bending loss to generate large penalties for contour points with large curvatures. Third, we propose two new metrics to evaluate overlapped nuclei segmentation; previous approaches evaluate overlapped nuclei segmentation using metrics for overall segmentation performance, which hides the real performance on overlapped nuclei. The closest work is HoVer-Net [21]: both the proposed approach and HoVer-Net follow a multitask learning architecture and use ResNet-50 as the building block. There are two major differences between the two approaches: (1) the proposed method introduces a new decoder branch that focuses on overlapped nuclei; and (2) we propose the bending loss to penalize large curvatures of nuclei contours.
The rest of the paper is organized as follows. Section 2 describes the proposed method, including the bending loss, the multitask learning network, and the loss function of the proposed architecture. Section 3 first describes the data sets and evaluation metrics used in our experiments and then presents the implementation and training process; the results are discussed in Sections 3.3–3.7. Section 4 concludes the paper and outlines future work.

2. Methods

The proposed method, namely Bend-Net, consists of the following two key components: the bending loss and the multitask learning architecture. First, we propose a bending energy-based regularizer that penalizes touching points between nuclei. Second, we propose a multitask learning network with three decoder branches, one of which focuses on overlapped nuclei contours. The final loss function consists of the regular segmentation loss [21], the overlapped nuclei loss, and the bending loss.

2.1. Bending Loss

Bending energy has been widely applied to measure the shapes of biological structures, e.g., blood cells [24], the heart [25], vesicle membranes [26], and blood vessels [27]. Young et al. [28] used chain-code representations to model bending energy. Verbeek et al. [29] used derivative-of-Gaussian filters to model bending energy in grayscale images for motion tracking. Bergou et al. [30] modeled discrete curvature and bending energy in both kinematic and dynamic treatments to address the smoothness problem.
For 2D digital images, a contour is composed of discrete pixels, and the curvature at a contour point is computed using the vectors formed by its neighboring points on the contour. In histopathology images, a nucleus usually has a smooth contour, and the points on the contour have small curvature changes; points with large curvature have a high probability of being the touching points of two or more nuclei (Figure 2). To split touching nuclei, we define the bending loss, which gives high penalties to contour points with large curvatures and small penalties to points with small curvatures. The proposed total loss is given by
L = L_0 + \alpha \cdot L_{be},    (1)

where L_0 refers to the conventional segmentation loss (Section 2.3); L_be denotes the proposed bending loss; and the parameter α controls the contribution of the bending loss. Let C = \{c_i\}_{i=1}^{m} be the set of contour points of nuclei in an image; L_be is defined by
L_{be}(C) = \frac{1}{m} \sum_{i=1}^{m} BE(i),    (2)

where BE(i) is the discrete bending energy at point c_i:

BE(i) = \frac{\kappa_i^2 \left[ (1 - \delta(c_i)) + \delta(c_i) \cdot \mu \right]}{|v(i, i+1)| + |v(i-1, i)|},    (3)

\kappa_i = \frac{2 \left| v(i-1, i) \times v(i, i+1) \right|}{|v(i-1, i)| \, |v(i, i+1)| + v(i-1, i) \cdot v(i, i+1)}.    (4)
In Equation (3), δ(c_i) is 1 if c_i is a concave contour point and 0 if c_i is a convex point; κ_i is the curvature at c_i. For three consecutive pixels on a nucleus boundary with coordinates x_{i-1}, x_i, and x_{i+1}, v(i-1, i) is the edge vector from point i-1 to i, such that v(i-1, i) = x_i - x_{i-1}; and v(i, i+1) is the edge vector from i to i+1, such that v(i, i+1) = x_{i+1} - x_i. The operator |·| denotes the length of a vector, and μ defines the weight for concave contour points.
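To make Equations (3) and (4) concrete, the following NumPy sketch computes the discrete bending energy at a single contour point. The function names and the worked corner example are ours; the concavity flag corresponds to δ(c_i) and is obtained with the midpoint test described below.

```python
import numpy as np

def discrete_curvature(p_prev, p, p_next):
    # Equation (4): curvature at c_i from the two edge vectors around it.
    v1 = p - p_prev                              # v(i-1, i)
    v2 = p_next - p                              # v(i, i+1)
    cross = abs(v1[0] * v2[1] - v1[1] * v2[0])   # |v1 x v2| in 2D
    denom = np.linalg.norm(v1) * np.linalg.norm(v2) + np.dot(v1, v2)
    return 2.0 * cross / denom

def bending_energy(p_prev, p, p_next, concave, mu=20.0):
    # Equation (3): concave points (delta = 1) are weighted by mu.
    kappa = discrete_curvature(p_prev, p, p_next)
    weight = mu if concave else 1.0
    v1, v2 = p - p_prev, p_next - p
    return kappa ** 2 * weight / (np.linalg.norm(v1) + np.linalg.norm(v2))

# A right-angle corner between two diagonal edges, as in Figure 4:
a, b, c = np.array([0, 0]), np.array([1, 1]), np.array([0, 2])
print(bending_energy(a, b, c, concave=False))  # ~1.41 (convex)
print(bending_energy(a, b, c, concave=True))   # ~28.28 (concave)
```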
The 8-neighborhood system is applied to search for the neighbors of contour points. Ideally, a contour point has only two neighboring points, and their coordinates are used to calculate the edge vectors in Equations (3) and (4). As shown in Figure 3, a point with eight possible neighbors yields 28 combinations of curve patterns. All curve patterns are divided into five groups; in each group, the concave points and the convex points have different discrete bending loss values. In the first group, the four patterns form straight-line segments, and their bending losses are all 0. The second group shows patterns with a 3π/4 angle between edge vectors, and their bending losses are relatively small. In the last group, the eight patterns have large curvatures, and their bending losses are the largest among all patterns. The third and fourth groups illustrate patterns with the same angle between edge vectors, but they have different bending losses due to the different vector lengths.
To determine whether a point is concave or convex, the midpoint of its two extended neighboring points is calculated. If the midpoint lies outside the predicted nucleus, we define the point as concave; otherwise, the point is convex. Concave points are more likely to be contour points of overlapped nuclei, while convex points are usually regular contour points.
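A minimal sketch of this midpoint test, assuming `mask` is the predicted binary nucleus mask and using the two neighboring contour points directly in place of the extended neighbors:

```python
import numpy as np

def is_concave(p_prev, p_next, mask):
    # Midpoint of the two neighbors, rounded to the nearest pixel.
    mid = np.round((np.asarray(p_prev) + np.asarray(p_next)) / 2.0).astype(int)
    r, c = mid
    inside = (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
              and mask[r, c] > 0)
    return not inside  # midpoint outside the predicted nucleus -> concave
```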
Equation (3) gives a larger penalty to concave points. A previous approach [31] calculated the bending loss using curvature directly; however, points with the same curvature can be either convex or concave, and convex points are more likely to be regular contour points, whereas concave points are more likely to belong to overlapped contours. Because the previous approach cannot distinguish convex from concave contour points, it tends to over-segment nuclei.
A sample contour of overlapped nuclei is shown in Figure 4. The red dots highlight the concave points, whose bending loss values are 28.28. The green dots highlight the convex points; their midpoints are inside the predicted nucleus, and their bending loss values are at most 1.41. In Figure 4, the concave points with a bending loss of 28.28 and the convex points with a bending loss of 1.41 share the same curve pattern; however, the concave points produce 20 times as much loss as the convex points.
The proposed bending loss is rotation-invariant, since all patterns with the same angle between the two edge vectors have the same bending loss. In practice, if two nuclei contours share some contour segments, one contour point may have more than two neighbors. In this scenario, we calculate the bending loss for all possible neighbor combinations and choose the smallest value as the discrete bending loss for the point.
As shown in Figure 5, for poorly segmented nuclei contours, all the touching contour points (red and green points) have relatively high bending losses. If the touching nuclei are well separated (Figure 5c), the bending loss values of all contour points are at most 9.66.

2.2. Multitask Learning Network

The proposed multitask learning architecture is shown in Figure 6. The network follows an encoder-decoder design and has three decoder branches. The encoder employs ResNet-50 [32] as the feature extractor. The first convolutional layer applies 64 kernels of size 7 × 7 with a stride of 1, and the following max-pooling layer is removed to preserve more information. The network has three decoders/tasks: the first predicts the nuclei instance map (INST); the second produces the horizontal and vertical boundary-distance maps of each nucleus (HV); and the third outputs the horizontal and vertical boundary-distance maps of the overlapped nuclei (OHV). The decoders in the three branches have the same sub-architectures and dense units [33]. The OHV and HV branches share weights through skip connections.
The weight-sharing among decoders is designed to reuse features learned from similar tasks. In traditional multitask learning networks, different branches typically address different tasks; in the proposed network, however, the HV and OHV branches produce closely related outputs, one for all nuclei and the other for overlapped nuclei only. To take advantage of the features from the two similar tasks, we design skip connections between the two branches to share weights. Specifically, the network first learns the distance maps of overlapped nuclei and aggregates them, through skip connections, into the distance maps of the HV branch.
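The sketch below illustrates this branch layout and the OHV-to-HV aggregation in PyTorch. It is a structural illustration only: placeholder convolution stacks stand in for the ResNet-50 features and dense decoder units of the actual network, and all module names are ours.

```python
import torch
import torch.nn as nn

class ThreeBranchHead(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        def decoder(out_ch):   # placeholder for the paper's dense decoders
            return nn.Sequential(
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat_ch, out_ch, 1))
        self.inst = decoder(2)   # INST: binary instance map
        self.ohv = decoder(2)    # OHV: distance maps of overlapped nuclei
        self.hv_trunk = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Skip connection: OHV outputs are aggregated into the HV branch.
        self.hv_head = nn.Conv2d(feat_ch + 2, 2, 1)

    def forward(self, feats):          # feats: shared encoder features
        inst = self.inst(feats)
        ohv = self.ohv(feats)
        hv = self.hv_head(torch.cat([self.hv_trunk(feats), ohv], dim=1))
        return inst, hv, ohv
```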

2.3. Loss Function

As shown in Figure 6, the loss function of the proposed network has the following four terms: the losses from the three decoders and the proposed bending loss. Let L_INST denote the loss of the binary instance map; L_HV denote the loss of the horizontal and vertical distance maps from the HV branch; L_OHV denote the loss of the horizontal and vertical distance maps from the OHV branch; and L_be denote the bending loss. The proposed loss function can also be split into the segmentation loss (L_0) and the bending loss regularizer (Equation (1)). The total loss is given by
L = \underbrace{L_{INST} + L_{HV} + L_{OHV}}_{L_0} + \alpha \cdot L_{be},    (5)

where α is the weight of the bending loss. Following the design in [21], the losses of the three branches contribute equally to the total loss.
Loss of the INST branch. To segment nuclei instances, we perform binary classification for each image pixel. Let I and I* be the predicted and ground truth instance maps for all nuclei. The loss L_INST is the sum of the cross-entropy loss (L_CE) and the Dice loss (L_Dice):
L_{INST}(I, I^*) = L_{CE}(I, I^*) + L_{Dice}(I, I^*),    (6)

L_{CE}(I, I^*) = -\frac{1}{n} \sum_{i}^{n} I_i^* \log I_i,    (7)

L_{Dice}(I, I^*) = 1 - \frac{2 \sum_{i}^{n} I_i I_i^*}{\sum_{i}^{n} I_i + \sum_{i}^{n} I_i^*},    (8)
where I_i is the class prediction at pixel i, and n denotes the number of pixels in an image patch. The INST branch separates nuclei instances from the background.
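A PyTorch sketch of Equations (6)–(8), assuming `pred` holds per-pixel foreground probabilities and `gt` the binary ground truth:

```python
import torch

def inst_loss(pred, gt, eps=1e-6):
    ce = -(gt * torch.log(pred + eps)).mean()                         # Eq. (7)
    dice = 1 - 2 * (pred * gt).sum() / (pred.sum() + gt.sum() + eps)  # Eq. (8)
    return ce + dice                                                  # Eq. (6)
```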
Loss of the HV branch. This loss compares the predicted distance maps (D) with the ground truth distance maps (D*) for all nuclei. We employ the distance loss function from [21], defined by
L_{dist}(D, D^*) = L_{Mse}(D, D^*) + 2 \cdot L_{Msge}(D, D^*),    (9)

L_{Mse}(D, D^*) = \frac{1}{n} \sum_{i}^{n} d_i^2,    (10)

L_{Msge}(D, D^*) = \frac{1}{n} \sum_{i}^{n} (\nabla d_i)^2,    (11)
where L_Mse is the mean square error loss and L_Msge is the mean square gradient error loss; d = D − D*, and ∇ denotes the gradient operator.
Loss of the OHV branch. L_OHV is also defined using the mean square error and the mean square gradient error (Equation (9)), but it is calculated using the predicted distance maps and the ground truth distance maps of overlapped nuclei only.
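A PyTorch sketch of Equations (9)–(11), used for both the HV and OHV branches; finite differences stand in for the gradient operator, which is an implementation choice of ours rather than a detail taken from [21]:

```python
import torch

def dist_loss(pred, gt):
    d = pred - gt
    mse = (d ** 2).mean()                                # Eq. (10)
    # Mean square gradient error via horizontal/vertical differences.
    gx = d[..., :, 1:] - d[..., :, :-1]
    gy = d[..., 1:, :] - d[..., :-1, :]
    msge = (gx ** 2).mean() + (gy ** 2).mean()           # Eq. (11)
    return mse + 2.0 * msge                              # Eq. (9)
```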

3. Experimental Results and Discussion

3.1. Data Sets and Evaluation Metrics

In this paper, we validate the proposed method using the following two histopathology nuclei data sets: CoNSeP [21] and MoNuSegv1 [11]. CoNSeP is provided by the University of Warwick and contains 41 H&E-stained images from 16 colorectal adenocarcinoma (CRA) WSIs collected with an Omnyx VL120 scanner. Six types of nuclei (normal epithelial, tumor epithelial, inflammatory, necrotic, muscle, and fibroblast) exist in the data set, which contains 24,319 manually annotated nuclei (13,256 overlapped). The image size is 1000 × 1000 pixels at 40× magnification. In the experiments, 27 images are used for training and validation and 14 images for testing; the training and validation sets have 15,582 nuclei, and the test set has 8791 nuclei.
MoNuSegv1 contains 30 images from the TCGA (The Cancer Genome Atlas) data set. The original image size is 1000 × 1000 pixels at 40× magnification, and there are more than 21,000 manually annotated nuclei from the breast, liver, kidney, prostate, bladder, colon, and stomach. In the experiments, 16 images (4 breast, 4 liver, 4 kidney, 4 prostate) are used for training and validation, and 14 images for testing. The training and validation sets contain over 13,000 nuclei (4431 overlapped), and the test set has 6000 nuclei (2436 overlapped). The authors recently extended the data set [34]; however, the extension was not adopted in this study because its new test set contains far fewer overlapped nuclei than the previous test set [11].
Multifold cross-validation is not applied in this work because the two data sets have large numbers of annotated nuclei, and a simple setting with randomly selected training, validation, and test sets is sufficient for a reliable evaluation. We employ the following five quantitative metrics to evaluate the overall performance of nuclei segmentation approaches: Aggregate Jaccard Index (AJI) [11], Dice coefficient [35], Recognition Quality (RQ) [36], Segmentation Quality (SQ) [36], and Panoptic Quality (PQ) [36]. We also propose the following two new metrics to evaluate overlapped nuclei segmentation: the Aggregated Jaccard Index of overlapped nuclei and the accuracy for overlapped nuclei.
Let G = \{G_i\}_{i=1}^{N} be the nuclei ground truth of an image, where N denotes the total number of segments in G; and let S = \{S_k\}_{k=1}^{M} be the predicted segments of the corresponding image, where M denotes the total number of segments in S. AJI is an aggregated version of the Jaccard Index and is defined by
AJI = \frac{\sum_{i=1}^{N} |G_i \cap S_j|}{\sum_{i=1}^{N} |G_i \cup S_j| + \sum_{S_k \in U} |S_k|},    (12)
where S_j is the predicted segment that produces the largest Jaccard Index with G_i, and U denotes the set of unmatched predicted segments, whose total number is M − N.
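A NumPy sketch of the AJI computation, simplified so that each ground-truth nucleus greedily takes its best-overlapping prediction; instance maps are label images in which 0 is background and each nucleus has a unique id:

```python
import numpy as np

def aji(gt, pred):
    inter_sum, union_sum, used = 0, 0, set()
    for i in np.unique(gt):
        if i == 0:
            continue
        g = gt == i
        best_j, best_k = -1.0, None
        for k in np.unique(pred[g]):          # predictions overlapping G_i
            if k == 0:
                continue
            s = pred == k
            j = np.logical_and(g, s).sum() / np.logical_or(g, s).sum()
            if j > best_j:
                best_j, best_k = j, k
        if best_k is None:                    # no prediction overlaps G_i
            union_sum += g.sum()
            continue
        s = pred == best_k
        inter_sum += np.logical_and(g, s).sum()
        union_sum += np.logical_or(g, s).sum()
        used.add(best_k)
    # Unmatched predicted segments enlarge the denominator.
    for k in np.unique(pred):
        if k != 0 and k not in used:
            union_sum += (pred == k).sum()
    return inter_sum / union_sum
```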
The Dice coefficient (DICE) is used to evaluate overall segmentation performance and is given by
DICE = \frac{2 |G \cap S|}{|G| + |S|},    (13)
where operator |·| denotes the cardinalities of the segments.
PQ estimates both detection and segmentation quality. RQ is the familiar F1-score, and SQ is the average Jaccard Index of matched pairs. RQ, SQ, and PQ are defined as
RQ = \frac{TP}{TP + \frac{1}{2} FP + \frac{1}{2} FN},    (14)

SQ = \frac{\sum_{(p,g) \in TP} IoU(p,g)}{|TP|},    (15)

PQ = RQ \times SQ,    (16)
where p refers to a prediction and g to a ground truth segment. Matched pairs (p, g) with IoU(p, g) > 0.5 are mathematically proven to form a unique matching [36]. This unique matching splits the predictions and ground truths into the following three sets: matched pairs (TP), unmatched predictions (FP), and unmatched ground truths (FN).
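Given the IoU values of the uniquely matched pairs and the unmatched counts, RQ, SQ, and PQ follow directly, as in this short sketch:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    # matched_ious: IoU(p, g) for each uniquely matched pair (IoU > 0.5).
    tp = len(matched_ious)
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn) if tp else 0.0  # Eq. (14)
    sq = sum(matched_ious) / tp if tp else 0.0                   # Eq. (15)
    return rq, sq, rq * sq                                       # PQ, Eq. (16)
```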
Metrics for Overlapped Nuclei Segmentation. We adapted the Aggregated Jaccard Index (AJI) and accuracy metrics and propose the following two new metrics to evaluate overlapped nuclei segmentation: the AJI of overlapped nuclei (AJIO) and the accuracy for overlapped nuclei (ACCO). Because many nonoverlapped nuclei exist in an image, traditional evaluation metrics cannot accurately reflect the performance of overlapped nuclei segmentation. The two proposed metrics exclude all nonoverlapped nuclei and focus the evaluation on overlapped nuclei. Let G = \{G_i\}_{i=1}^{N} be the overlapped nuclei in a ground truth image and S = \{S_k\}_{k=1}^{M} be the nuclei in the output image. AJIO is defined by
AJIO = \frac{\sum_{i=1}^{N} |G_i \cap S_j|}{\sum_{i=1}^{N} |G_i \cup S_j|},    (17)
where S_j is the matched nucleus in S that produces the largest Jaccard Index with G_i.
Let M be the number of matched nuclei pairs between the segmentation and the ground truth, and let O denote the total number of overlapped nuclei in an image. For each overlapped nucleus, we iterate over all predicted segments and count a pair as matched if its Jaccard Index is larger than a threshold τ (0.5). ACCO is given by
ACCO = \frac{M}{O}.    (18)
The two metrics (AJIO and ACCO) are generic and can be applied to other overlapped-object segmentation tasks.
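A sketch of the two proposed metrics under the same conventions as the AJI sketch above; `gt_overlap` is assumed to be a label image containing only the overlapped nuclei:

```python
import numpy as np

def ajio_acco(gt_overlap, pred, tau=0.5):
    inter_sum, union_sum, matched = 0, 0, 0
    gt_ids = [i for i in np.unique(gt_overlap) if i != 0]
    for i in gt_ids:
        g = gt_overlap == i
        best_j, best_s = 0.0, None
        for k in np.unique(pred[g]):
            if k == 0:
                continue
            s = pred == k
            j = np.logical_and(g, s).sum() / np.logical_or(g, s).sum()
            if j > best_j:
                best_j, best_s = j, s
        if best_s is not None:                # Equation (17) accumulators
            inter_sum += np.logical_and(g, best_s).sum()
            union_sum += np.logical_or(g, best_s).sum()
        else:
            union_sum += g.sum()
        matched += int(best_j > tau)          # Equation (18) counter
    ajio = inter_sum / union_sum if union_sum else 0.0
    acco = matched / len(gt_ids) if gt_ids else 0.0
    return ajio, acco
```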

3.2. Implementation and Training

The proposed approach is trained on an NVIDIA Titan Xp GPU. The encoder was pretrained on ImageNet, and we first trained the decoders for 100 epochs to obtain initial parameters for the decoder branches; the network was then fine-tuned for 100 epochs on the nuclei training set. The size of the final output images is 80 × 80 pixels, and these outputs are merged to form images of the same size (1000 × 1000) as the original images. In the experiments, the initial learning rate is 10^{-4} and is reduced to 10^{-5} after 50 epochs. The batch size is eight for training the decoders and two for fine-tuning the network. Processing an image of size 1000 × 1000 with our architecture takes about one second.
The input dimensionality of the network is 270 × 270 × 3. We prepare the training, validation, and test sets by extracting patches of size 270 × 270 from the images. During the training stage, data augmentation strategies (i.e., rotation, Gaussian blur, and median blur) are used to generate more images. The ground truth of overlapped nuclei consists of two or more individual nuclei that form a single connected component under connected-component labeling. An example histopathology image, the ground truth of all nuclei, and the ground truth of the overlapped nuclei are shown in Figure 7.
The proposed scheme comprises the following three stages: (1) preprocessing, (2) training of the proposed multitask learning network, and (3) postprocessing. The preprocessing performs color normalization [37] to reduce the impact of variations from the H&E staining and scanning processes. The postprocessing described in [21] is employed in this study: Sobel operators are applied to the distance maps to generate an initial contour map; the difference between the initial contour map and the nuclei instance map is then used to generate markers; and, finally, the watershed algorithm is applied to generate the nuclei regions.
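A rough sketch of this postprocessing pipeline with scikit-image and SciPy; the marker-extraction threshold below is our placeholder, and the exact rules in [21] may differ:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.segmentation import watershed

def postprocess(inst_prob, h_map, v_map, thresh=0.5):
    binary = inst_prob > thresh
    # Sobel responses of the distance maps highlight nucleus boundaries.
    edges = np.maximum(sobel(h_map), sobel(v_map))
    # Markers: instance regions minus the strong boundary responses.
    markers, _ = ndimage.label(binary & (edges < edges.mean()))
    # Flood from the markers over the boundary-strength landscape.
    return watershed(edges, markers, mask=binary)
```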

3.3. Effectiveness of the Network Architecture

The proposed multitask learning architecture uses HoVer-Net as the backbone and integrates the newly proposed overlapped nuclei (OHV) branch and skip connections (Figure 6). To demonstrate the effectiveness of the proposed architecture, we compare the proposed network with a single-task network (Instance-Net) and a two-task network (HoVer-Net). For a fair comparison, the proposed bending loss is not used. The approaches are evaluated on the CoNSeP data set using the AJI, Dice, RQ, SQ, and PQ scores. As shown in Table 1, Instance-Net does not apply any strategy to separate overlapped nuclei and achieves limited performance, e.g., an AJI of only 0.371. The proposed network with the OHV branch ("Ours-OHV") achieves better average performance than Instance-Net and HoVer-Net. With the new skip connections between the HV and OHV branches, the AJI, RQ, and PQ scores of the proposed approach ("Ours-skip") increased by 3.54%, 3.30%, and 4.04%, respectively. The AJIO scores demonstrate that the proposed approach outperforms HoVer-Net in separating overlapped nuclei.

3.4. Effectiveness of the Bending Loss

First, we compare the proposed multitask learning network without any bending loss, with the bending loss (Lbe v1) in [31], and with the newly proposed bending loss (Lbe v2) in Equations (2)–(4). Lbe v1 calculates the bending loss based only on curvature changes and cannot distinguish convex and concave points. The newly proposed bending loss Lbe v2 improves Lbe v1 by characterizing the difference between concave and convex contour points and assigning larger penalties to concave points. Second, we demonstrate the effectiveness of the bending loss by adding it to HoVer-Net. The CoNSeP data set and the AJI, Dice, RQ, SQ, and PQ scores are used in these experiments. As shown in Table 2, the proposed architecture with the v1 bending loss achieves better performance than the network without any bending loss, and the proposed architecture with the newly proposed v2 bending loss outperforms the network with the v1 bending loss. The results demonstrate that the v2 bending loss can improve the overall performance (AJI: from 0.565 to 0.578) of nuclei segmentation. Meanwhile, adding the v1 or v2 bending loss to HoVer-Net improves its overall performance, which demonstrates the potential of applying the bending loss to improve other approaches. In addition, the AJIO scores demonstrate that the proposed approach with the v2 bending loss outperforms all other approaches in separating overlapped nuclei.

3.5. Parameter Tuning

Two hyperparameters, α and μ, exist in the proposed loss function. α balances the bending loss against all other losses (Equation (5)), and μ (Equation (3)) assigns different weights to concave and convex contour points when calculating the bending loss. We conducted a grid search for the two parameters on the CoNSeP data set using the AJI score. Figure 8 shows the AJI results of nine parameter combinations (μ: 10, 20, 40; α: 0.5, 1.0, 2.0). As shown in Figure 8, the proposed approach achieves the best performance when μ is 20 and α is 1.0. Therefore, the bending loss of a concave curve pattern is 20 times that of the same convex curve pattern; refer to Figure 3 for the bending losses of different curve patterns.

3.6. Performance Comparison of State-of-the-Art Approaches

We compared the proposed approach with eight deep learning-based approaches, including three widely used biomedical segmentation architectures (FCN8 [7], U-Net [8], and SegNet [9]) and five state-of-the-art nuclei segmentation approaches (DCAN [14], DIST [12], Micro-Net [38], HoVer-Net [21], and BEND [31]). Table 3 shows the overall performance of the nine approaches on two public data sets (CoNSeP and MoNuSegv1) using five metrics (AJI, Dice, RQ, SQ, and PQ). Note that all approaches are tested using the described experiment settings; therefore, the values in Table 3 may not be identical to those reported in the original publications. The watershed algorithm is applied to FCN8, U-Net, and SegNet for postprocessing, whereas the remaining approaches are implemented following the same strategies as in the original papers. As shown in Table 3, the proposed method outperforms the other eight approaches on all five metrics. Among the three general biomedical segmentation architectures, U-Net achieves the highest AJI and RQ scores, but it has lower Dice and SQ scores than FCN8. DCAN and DIST are built upon FCN8 and U-Net, respectively; DCAN outperforms FCN8 and DIST outperforms U-Net on most metrics, but their overall segmentation performance is still limited. Micro-Net and HoVer-Net achieve comparable segmentation results. The proposed Bend-Net achieves better results than all other approaches on both data sets across all five metrics.

3.7. Overlapped Nuclei Segmentation

We propose two new metrics, AJIO and ACCO, to evaluate overlapped nuclei segmentation. Table 4 shows the overlapped nuclei segmentation performance on the CoNSeP and MoNuSegv1 data sets in terms of the AJIO and ACCO scores. DIST, Micro-Net, HoVer-Net, and the proposed method apply strategies to separate overlapped nuclei; therefore, their performance is significantly better than that of FCN8, U-Net, and SegNet. Our method achieves the best AJIO and ACCO scores on both data sets. Figure 9 shows segmentation examples of six image regions with overlapped nuclei from the CoNSeP and MoNuSegv1 data sets. In the ground truth images, overlapped nuclei are represented using different colors; in the result images, if an approach separates two overlapped nuclei, the two nuclei appear in two different colors. As shown in Figure 9, FCN8, SegNet, and U-Net tend to miss small nuclei and cannot separate overlapped nuclei. DCAN handles overlapped nuclei slightly better than FCN8, SegNet, and U-Net, but it tends to miss nuclei that are not small. DIST separates overlapped nuclei better than the previous four approaches, but it tends to over-segment nuclei and generate many small regions. Micro-Net performs well in segmenting overlapped nuclei, but it produces smaller nuclei regions than the ground truth and also tends to over-segment nuclei. HoVer-Net shows better segmentation results than the other six approaches: it segments small nuclei, and the sizes of the resulting nuclei regions are close to those of the ground truth; however, it has difficulty separating closely touching nuclei. In Figure 9, the proposed method achieves the most accurate results on all six images; it not only segments small nuclei but also separates closely touching nuclei accurately.

4. Conclusions

In this paper, we propose a novel deep multitask learning network to address the challenge of segmenting overlapped nuclei in histopathology images. First, we propose the bending loss regularizer, which defines different losses for the concave and convex points of nuclei contours. Experimental results demonstrate that the bending loss effectively improves the overall performance of nuclei segmentation, and it can also be integrated into other deep learning-based segmentation approaches. Second, the proposed multitask learning network integrates the OHV branch to learn from overlapped nuclei, which enhances the segmentation of touching nuclei. Third, we propose two quantitative metrics, AJIO and ACCO, to evaluate overlapped nuclei segmentation. Extensive experimental results on two public data sets demonstrate that the proposed Bend-Net achieves state-of-the-art performance for nuclei segmentation. In the future, we will extend the proposed approach to more challenging tasks, such as gland segmentation and semantic image segmentation.

Author Contributions

Conceptualization, H.W. and M.X.; Methodology, H.W., A.V., C.S. and M.X.; Software, H.W.; Validation, H.W., C.S. and M.X.; Formal analysis, H.W., A.V., C.S. and M.X.; Investigation, H.W. and M.X.; Resources, C.S. and M.X.; Data curation, H.W., C.S. and M.X.; Writing—original draft, H.W. and M.X.; Writing—review & editing, H.W., A.V., C.S. and M.X.; Visualization, H.W.; Supervision, C.S. and M.X.; Project administration, M.X.; Funding acquisition, A.V. and M.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the Institute for Modeling Collaboration and Innovation (IMCI) at the University of Idaho through NIH Award #P20GM104420.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available in [CoNSeP] at [https://paperswithcode.com/dataset/consep], reference number [21].

Conflicts of Interest

Author Haotian Wang was employed by the company Castell Health Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Histology Image Analysis for Carcinoma Detection and Grading. Comput. Methods Programs Biomed. 2012, 107, 538–556.
  2. Pantanowitz, L. Digital Images and the Future of Digital Pathology. J. Pathol. Inform. 2010, 1, 15.
  3. Aeffner, F.; Zarella, M.D.; Buchbinder, N.; Bui, M.M.; Goodman, M.R.; Hartman, D.J.; Lujan, G.M.; Molani, M.A.; Parwani, A.V.; Lillard, K. Introduction to Digital Image Analysis in Whole-Slide Imaging: A White Paper from the Digital Pathology Association. J. Pathol. Inform. 2019, 10, 9.
  4. Yang, X.; Li, H.; Zhou, X. Nuclei Segmentation Using Marker-Controlled Watershed, Tracking Using Mean-Shift, and Kalman Filter in Time-Lapse Microscopy. IEEE Trans. Circuits Syst. I Regul. Pap. 2006, 53, 2405–2414.
  5. Cheng, J.; Rajapakse, J.C. Segmentation of Clustered Nuclei With Shape Markers and Marking Function. IEEE Trans. Biomed. Eng. 2009, 56, 741–748.
  6. Ali, S.; Madabhushi, A. An Integrated Region-, Boundary-, Shape-Based Active Contour for Multiple Object Overlap Resolution in Histological Imagery. IEEE Trans. Med. Imaging 2012, 31, 1448–1460.
  7. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015.
  8. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 234–241. ISBN 9783319245744.
  9. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  10. Xing, F.; Xie, Y.; Yang, L. An Automatic Learning-Based Framework for Robust Nucleus Segmentation. IEEE Trans. Med. Imaging 2016, 35, 550–566.
  11. Kumar, N.; Verma, R.; Sharma, S.; Bhargava, S.; Vahadane, A.; Sethi, A. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology. IEEE Trans. Med. Imaging 2017, 36, 1550–1560.
  12. Naylor, P.; Laé, M.; Reyal, F.; Walter, T. Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map. IEEE Trans. Med. Imaging 2019, 38, 448–459.
  13. Oda, H.; Roth, H.R.; Chiba, K.; Sokolić, J.; Kitasaka, T.; Oda, M.; Hinoki, A.; Uchida, H.; Schnabel, J.A.; Mori, K. BESNet: Boundary-Enhanced Segmentation of Cells in Histopathological Images. In Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 228–236. ISBN 9783030009342.
  14. Chen, H.; Qi, X.; Yu, L.; Heng, P.-A. DCAN: Deep Contour-Aware Networks for Accurate Gland Segmentation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016.
  15. Vu, Q.D.; Graham, S.; Kurc, T.; To, M.N.N.; Shaban, M.; Qaiser, T.; Koohbanani, N.A.; Khurram, S.A.; Kalpathy-Cramer, J.; Zhao, T.; et al. Methods for Segmentation and Classification of Digital Microscopy Tissue Images. Front. Bioeng. Biotechnol. 2019, 7, 433738.
  16. Zeng, Z.; Xie, W.; Zhang, Y.; Lu, Y. RIC-Unet: An Improved Neural Network Based on Unet for Nuclei Segmentation in Histology Images. IEEE Access 2019, 7, 21420–21428.
  17. Zhou, Y.; Onder, O.F.; Dou, Q.; Tsougenis, E.; Chen, H.; Heng, P.-A. CIA-Net: Robust Nuclei Instance Segmentation with Contour-Aware Information Aggregation. In Information Processing in Medical Imaging; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 682–693. ISBN 9783030203511.
  18. Alemi Koohbanani, N.; Jahanifar, M.; Gooya, A.; Rajpoot, N. Nuclear Instance Segmentation Using a Proposal-Free Spatially Aware Deep Learning Framework. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 622–630. ISBN 9783030322397.
  19. Graham, S.; Rajpoot, N.M. SAMS-NET: Stain-Aware Multi-Scale Network for Instance-Based Nuclei Segmentation in Histology Images. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Piscataway, NJ, USA, 2018.
  20. Qu, H.; Yan, Z.; Riedlinger, G.M.; De, S.; Metaxas, D.N. Improving Nuclei/Gland Instance Segmentation in Histopathology Images by Full Resolution Neural Network and Spatial Constrained Loss. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 378–386. ISBN 9783030322397.
  21. Graham, S.; Vu, Q.D.; Raza, S.E.A.; Azam, A.; Tsang, Y.W.; Kwak, J.T.; Rajpoot, N. Hover-Net: Simultaneous Segmentation and Classification of Nuclei in Multi-Tissue Histology Images. Med. Image Anal. 2019, 58, 101563.
  22. Gudhe, N.R.; Kosma, V.-M.; Behravan, H.; Mannermaa, A. Nuclei Instance Segmentation from Histopathology Images Using Bayesian Dropout Based Deep Learning. BMC Med. Imaging 2023, 23, 162.
  23. Kadaskar, M.; Patil, N. Image Analysis of Nuclei Histopathology Using Deep Learning: A Review of Segmentation, Detection, and Classification. SN Comput. Sci. 2023, 4, 698.
  24. Canham, P.B. The Minimum Energy of Bending as a Possible Explanation of the Biconcave Shape of the Human Red Blood Cell. J. Theor. Biol. 1970, 26, 61–81.
  25. Duncan, J.S.; Lee, F.A.; Smeulders, A.W.M.; Zaret, B.L. A Bending Energy Model for Measurement of Cardiac Shape Deformity. IEEE Trans. Med. Imaging 1991, 10, 307–320.
  26. Du, Q.; Liu, C.; Wang, X. Simulating the Deformation of Vesicle Membranes under Elastic Bending Energy in Three Dimensions. J. Comput. Phys. 2006, 212, 757–777.
  27. Stuhmer, J.; Schroder, P.; Cremers, D. Tree Shape Priors with Connectivity Constraints Using Convex Relaxation on General Graphs. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; IEEE: Piscataway, NJ, USA, 2013.
  28. Young, I.T.; Walker, J.E.; Bowie, J.E. An Analysis Technique for Biological Shape. Inf. Control 1974, 25, 357–370.
  29. Verbeek, P.W.; Van Vliet, L.J. Curvature and Bending Energy in Digitized 2D and 3D Images. In Proceedings of the 8th Scandinavian Conference on Image Analysis, Tromso, Norway, 25–28 May 1993.
  30. Bergou, M.; Wardetzky, M.; Robinson, S.; Audoly, B.; Grinspun, E. Discrete Elastic Rods. ACM Trans. Graph. 2008, 27, 1–12.
  31. Wang, H.; Xian, M.; Vakanski, A. Bending Loss Regularized Network for Nuclei Segmentation in Histopathology Images. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; IEEE: Piscataway, NJ, USA, 2020.
  32. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016.
  33. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017.
  34. Kumar, N.; Verma, R.; Anand, D.; Zhou, Y.; Onder, O.F.; Tsougenis, E.; Chen, H.; Heng, P.-A.; Li, J.; Hu, Z. A Multi-Organ Nucleus Segmentation Challenge. IEEE Trans. Med. Imaging 2019, 39, 1380–1391.
  35. Dice, L.R. Measures of the Amount of Ecologic Association between Species. Ecology 1945, 26, 297–302.
  36. Kirillov, A.; He, K.; Girshick, R.; Rother, C.; Dollar, P. Panoptic Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019.
  37. Vahadane, A.; Peng, T.; Sethi, A.; Albarqouni, S.; Wang, L.; Baust, M.; Steiger, K.; Schlitter, A.M.; Esposito, I.; Navab, N. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images. IEEE Trans. Med. Imaging 2016, 35, 1962–1971.
  38. Raza, S.E.A.; Cheung, L.; Shaban, M.; Graham, S.; Epstein, D.; Pelengaris, S.; Khan, M.; Rajpoot, N.M. Micro-Net: A Unified Model for Segmentation of Various Objects in Microscopy Images. Med. Image Anal. 2019, 52, 160–173.
Figure 1. Examples of state-of-the-art approaches in segmenting overlapped nuclei.
Figure 2. Two contours: (a) an ideal nucleus contour; and (b) a contour containing two nuclei. Red rectangles highlight the touching points on the contour.
Figure 3. Discrete bending losses for different curve patterns. In each value pair "A/B", "A" represents the convex bending loss for the center point, and "B" denotes the concave bending loss. Values are rounded to two decimal places.
Figure 4. A contour with both concave and convex points. Red dots highlight the concave points, and green dots highlight the convex points.
Figure 5. Different bending losses of different segmentation results: (a) ground truth of eight nuclei contours; (b) bending losses of contour points of poorly segmented nuclei; and (c) bending losses of well-segmented nuclei. Red: BE = 193.14; green: BE = 28.28 and BE = 40.0; blue: BE ≤ 9.66; gray: BE = 0.
Figure 6. Overview of the proposed Bend-Net. ⊕ denotes summation; © denotes concatenation; the red arrows represent the skip connections; the numbers in red circles denote the connected positions of the skip connections.
Figure 7. Ground truth of an example image. From left to right: original image, ground truth of all nuclei, and ground truth of overlapped nuclei.
Figure 8. Fine-tuning parameters using AJI scores.
Figure 9. Samples of comparative segmentation results for state-of-the-art models.
Table 1. Effectiveness of the proposed multitask learning architecture using the CoNSeP data set.

Methods        AJI    Dice   RQ     SQ     PQ     AJIO
Instance-Net   0.371  0.841  0.603  0.771  0.471  0.296
HoVer-Net      0.545  0.840  0.674  0.773  0.522  0.520
Ours-OHV *     0.559  0.847  0.692  0.774  0.537  0.531
Ours-skip *    0.565  0.850  0.697  0.779  0.544  0.537

* Ours-OHV denotes the proposed approach with the OHV branch; Ours-skip has additional skip connections between the HV and OHV branches.
Table 2. Performance of HoVer-Net and the proposed approach with two different versions of the bending loss (Lbe v1 and Lbe v2) on the CoNSeP data set.

Methods      Bending Loss   AJI    Dice   RQ     SQ     PQ     AJIO
HoVer-Net    none           0.545  0.840  0.674  0.773  0.522  0.520
HoVer-Net    Lbe v1 *       0.552  0.844  0.683  0.774  0.530  0.523
HoVer-Net    Lbe v2 *       0.559  0.846  0.690  0.776  0.537  0.528
Ours         none           0.565  0.850  0.697  0.779  0.544  0.537
Ours         Lbe v1 *       0.570  0.847  0.701  0.777  0.547  0.541
Ours         Lbe v2 *       0.578  0.851  0.709  0.781  0.555  0.552

* Lbe v1 and Lbe v2 refer to our previous bending loss [31] and the newly proposed bending loss (Equations (2)–(4)), respectively.
Table 3. Overall test performance on the CoNSeP and MoNuSegv1 data sets.

                          CoNSeP                            MoNuSegv1
Methods      AJI    Dice   RQ     SQ     PQ      AJI    Dice   RQ     SQ     PQ
FCN8         0.289  0.782  0.426  0.697  0.297   0.426  0.779  0.592  0.708  0.421
U-Net        0.482  0.719  0.490  0.668  0.328   0.520  0.722  0.635  0.675  0.431
SegNet       0.461  0.699  0.482  0.667  0.322   0.508  0.797  0.672  0.742  0.500
DCAN         0.408  0.748  0.492  0.697  0.342   0.515  0.778  0.659  0.718  0.473
DIST         0.489  0.788  0.500  0.723  0.363   0.560  0.793  0.618  0.724  0.449
Micro-Net    0.531  0.784  0.613  0.751  0.461   0.581  0.785  0.700  0.737  0.517
HoVer-Net    0.545  0.840  0.674  0.773  0.522   0.606  0.818  0.765  0.767  0.588
BEND         0.553  0.846  0.683  0.776  0.530   0.627  0.827  0.770  0.766  0.590
Bend-Net     0.578  0.851  0.709  0.781  0.555   0.635  0.832  0.780  0.771  0.601
Table 4. Overlapped nuclei segmentation performance on the CoNSeP and MoNuSegv1 data sets.

                  CoNSeP           MoNuSegv1
Methods      AJIO    ACCO      AJIO    ACCO
FCN8         0.350   0.328     0.337   0.358
U-Net        0.486   0.395     0.472   0.464
SegNet       0.411   0.262     0.407   0.406
DCAN         0.417   0.293     0.427   0.423
DIST         0.542   0.476     0.543   0.536
Micro-Net    0.513   0.495     0.513   0.504
HoVer-Net    0.520   0.558     0.542   0.613
BEND         0.529   0.561     0.553   0.627
Bend-Net     0.552   0.586     0.570   0.656
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
