Review

Review of In Situ Hybridization (ISH) Stain Images Using Computational Techniques

by Zaka Ur Rehman 1, Mohammad Faizal Ahmad Fauzi 1,*, Wan Siti Halimatul Munirah Wan Ahmad 1,2, Fazly Salleh Abas 3, Phaik Leng Cheah 4, Seow Fan Chiew 4 and Lai-Meng Looi 4

1 Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
2 Institute for Research, Development and Innovation (IRDI), IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
3 Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
4 Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(18), 2089; https://doi.org/10.3390/diagnostics14182089
Submission received: 30 July 2024 / Revised: 10 September 2024 / Accepted: 17 September 2024 / Published: 21 September 2024

Abstract

Recent advancements in medical imaging have greatly enhanced the application of computational techniques in digital pathology, particularly for the classification of breast cancer using in situ hybridization (ISH) imaging. HER2 amplification, a key prognostic marker in 20–25% of breast cancers, can be assessed through alterations in gene copy number or protein expression. However, challenges persist due to the heterogeneity of nuclear regions and complexities in cancer biomarker detection. This review examines semi-automated and fully automated computational methods for analyzing ISH images with a focus on HER2 gene amplification. Literature from 1997 to 2023 is analyzed, emphasizing silver-enhanced in situ hybridization (SISH) and its integration with image processing and machine learning techniques. Both conventional machine learning approaches and recent advances in deep learning are compared. The review reveals that automated ISH analysis in combination with bright-field microscopy provides a cost-effective and scalable solution for routine pathology. The integration of deep learning techniques shows promise in improving accuracy over conventional methods, although there are limitations related to data variability and computational demands. Automated ISH analysis can reduce manual labor and increase diagnostic accuracy. Future research should focus on refining these computational methods, particularly in handling the complex nature of HER2 status evaluation, and on integrating best practices to further enhance clinical adoption of these techniques.

1. Introduction

Breast cancer remains the most prevalent malignancy among women worldwide, with over 2 million new cases and nearly 630,000 deaths reported in 2018 alone [1]. The high morbidity and mortality rates associated with breast cancer have propelled research aimed at improving histopathologic image-based computational techniques. These techniques have become essential for identifying cancer subtypes, which are critical for clinical decision making and personalized treatment strategies. Digital pathology, powered by advancements in imaging and computational capabilities, has emerged as a promising field to support the precise and efficient classification of breast cancer.
The development of whole-slide digital imaging, combined with the growing importance of tissue-based biomarkers for therapy stratification, has greatly expanded the applications of image analysis in digital pathology. Notably, techniques such as hematoxylin and eosin staining (H&E), immunohistochemistry (IHC), and in situ hybridization (ISH) are regularly employed for visualizing and analyzing tissue samples. H&E remains the cornerstone of histopathology, providing detailed cellular and tissue architecture, while IHC targets specific proteins, aiding in the functional interrogation of tissues. ISH, which focuses on the detection of nucleic acid sequences, offers deeper insights into gene expression, especially in cases where protein expression is insufficient or ambiguous [2].
In breast cancer diagnostics, ISH is particularly valuable for detecting HER2 gene amplification, a critical prognostic and predictive marker for about 20–25% of breast cancers. This amplification is commonly assessed through HER2 and CEP17 (centromere enumeration probe for chromosome 17) signals, providing a quantitative measure for determining HER2 status [3]. Although fluorescence in situ hybridization (FISH) is considered the gold standard for HER2 testing, alternative methods such as chromogenic ISH (CISH) and silver-enhanced ISH (SISH) have been developed to offer cost-effective solutions compatible with bright-field microscopy (Figure 1 and Table 1).
Despite these advancements, the interpretation of ISH images remains a complex and time-consuming task, often requiring manual analysis by experienced pathologists. The emergence of computational techniques—ranging from traditional image processing algorithms to deep learning models—has the potential to automate this process, improving diagnostic accuracy, efficiency, and reproducibility [6]. The availability of large, digitized histopathology datasets has accelerated the application of these computational methods, yet challenges such as data annotation and variability in staining techniques persist.
This review aims to provide a comprehensive overview of semi-automated and fully automated ISH-based computational methods, with a particular focus on breast cancer classification. We review the literature from 1997 to 2023, with an emphasis on image processing techniques and machine learning models, particularly deep learning, as they apply to HER2 gene amplification detection. Figure 2 provides a high-level breakdown of the computational methods commonly applied in the field. Through this review, we aim to clarify the relationships between different computational approaches, highlight key advancements, and discuss the potential for integrating these methods into routine pathology workflows.

1.1. Inclusion and Exclusion Criteria

To ensure the relevance and specificity of this review, the following criteria were used to include or exclude studies:
  • Inclusion Criteria: Studies that applied artificial intelligence (AI), machine learning (ML), or deep learning (DL) techniques to the analysis of ISH images, specifically for HER2 gene amplification detection.
  • Exclusion Criteria: Papers that focused on other pathology stains (e.g., H&E, IHC) or did not involve computational techniques.
The rest of this paper is structured as follows: Section 1.3 explores key challenges in ISH image analysis. Section 1.6 reviews the state-of-the-art methodologies in computational ISH analysis. Section 2 outlines common computational techniques for image processing in pathology. Finally, Section 5 summarizes the findings and outlines recommendations for future research directions.

1.2. In Situ Hybridization (ISH)

In situ hybridization (ISH) is a cytogenetic technique that allows for the detection, quantification, and localization of specific nucleic acid sequences within cells or tissues at high resolution. This method plays a pivotal role in understanding the organization, regulation, and function of genes by revealing the physical positions of DNA or RNA sequences on chromosomes or within tissues. ISH works by hybridizing a labeled probe—complementary to the target nucleic acid—with the DNA or RNA of the tissue or chromosome under examination. The types of probes used for DNA and RNA have been comprehensively described in earlier studies [7]. Probes can be labeled chemically or radioactively, and this labeling allows for the precise detection of hybridization signals.
In the context of HER2 gene amplification, ISH techniques such as FISH, CISH, and SISH are routinely used to determine gene copy number alterations, which are critical for evaluating HER2 status in breast cancer (as shown in Table 1). Each method offers varying advantages in terms of sensitivity, ease of use, and cost-effectiveness, with SISH being particularly suitable for bright-field microscopy applications.

1.3. Challenges of In Situ Hybridization

The ISH process presents numerous challenges, both biological and technical, that complicate the analysis of the resulting images. These challenges impact the accuracy, reproducibility, and scalability of computational approaches used for automated analysis.

1.3.1. Technical Challenges

  • Signal variability: ISH images often exhibit significant variability in signal intensities, not only between the target and non-target cells but also within different regions of the same tissue sample. This inconsistency complicates accurate signal detection and segmentation [8].
  • Complex tissue structures: ISH images often include a mixture of cell populations and complex tissue architectures, making it difficult to isolate and analyze regions of interest. Overlapping or closely spaced signals, particularly in multi-probe ISH experiments, further add to this complexity.
  • Large image size: Whole-slide ISH images can be very large, requiring significant computational power for storage, processing, and analysis. Multi-channel ISH images with multiple probes introduce additional layers of complexity to the segmentation and classification tasks [9].
  • Tissue preparation: The requirement for very thin tissue sections (typically 3–7 μm in thickness) introduces potential artifacts to the images, such as tearing or folding, which can distort the analysis [9].

1.3.2. Biological Challenges

  • Heterogeneous tissue samples: The biological complexity of tissues introduces variability in cell types, gene expression patterns, and tissue structures [10]. This heterogeneity can lead to uneven distribution of hybridization signals, further complicating segmentation and quantification tasks.
  • Overlapping signals: In biological samples, signals from adjacent cells or closely located genes often overlap, making it difficult to accurately assign signals to specific cells or chromosomes [11].
  • Non-specific staining: Background noise and non-specific staining are common in ISH images, reducing the contrast between the target signal and the background. This interferes with the ability of automated systems to distinguish true signals from artifacts, especially in low-signal regions [11].

1.4. Data Acquisition

Accurate and reproducible ISH analysis requires well-optimized protocols for data acquisition, starting from tissue preparation to probe hybridization. In our study, the INFORM HER2 DNA and CEN17 probes were replaced with the Ventana HER2 silver ISH Probe Cocktail, applied using the Ventana Benchmark automated device [12]. This method streamlines the process, reducing manual intervention and improving consistency across samples.
The data acquisition process for SISH can be outlined as follows:
1. Sample preparation: Tissue samples are baked at 60 °C for 20 min to ensure proper adhesion to the slides.
2. Probe hybridization: The HER2 DNA and chromosome 17 probes are denatured at different temperatures and hybridized with the target sequences.
3. Stringency washes: Stringent washing is performed to remove any unbound probes, ensuring high specificity of the hybridization signals.
4. Signal detection: The ultraView SISH Detection Kit is used for visualizing the HER2 and CEP17 probes, with silver deposition providing contrast for bright-field microscopy analysis.
5. Counterstaining: Hematoxylin is applied as a counterstain to enhance visualization under a light microscope.
Compared to traditional FISH methods, the use of SISH significantly reduces the overall time required for analysis (from 12–16 h to 6 h) and can be performed using a standard light microscope, making it more accessible for routine pathology laboratories.

1.5. Probe Design and Labeling Techniques

The sensitivity and specificity of ISH rely heavily on the design of the probe and its labeling technique. Probes can be classified based on their labeling method:
  • Radiolabeled probes: These probes use radioactive isotopes to tag the nucleic acid sequence of interest, offering high sensitivity but requiring specialized equipment for detection and posing health risks [13].
  • Non-radioactive probes: Modern techniques such as biotin or digoxigenin labeling have become more popular, offering safer alternatives that use colorimetric or fluorescent detection methods [14].
  • Direct enzyme labeling: Enzyme-conjugated probes catalyze colorimetric reactions, offering a straightforward way to visualize hybridization signals without the need for secondary detection steps [15].
The development and selection of appropriate probes are critical for ensuring accurate HER2 gene amplification analysis. Ongoing research is focused on improving the sensitivity of these probes to detect smaller genetic aberrations, expanding the potential clinical applications of ISH techniques.

1.6. From Glass Slide to Whole-Slide Image

The transformation of traditional glass slides into whole-slide images (WSIs) has revolutionized the field of digital pathology, providing pathologists with the ability to view, analyze, and share high-resolution tissue images. However, this transition requires the precise control of the entire image acquisition pipeline, from tissue processing and staining to scanning and image quality assurance. Table 2 summarizes some of the key challenges in standardizing this process for clinical and research applications.
The digitization of histological slides enables advanced computational analysis, including segmentation, feature extraction, and classification tasks that form the backbone of AI-based ISH image analysis.

1.7. HER2 Status Evaluation

The amplification or overexpression of the HER2 oncogene is observed in approximately 20% of invasive breast carcinomas [16]. This amplification is associated with poor prognosis, necessitating targeted therapies such as trastuzumab [17]. The accurate assessment of the HER2 status is critical for making personalized therapeutic decisions in breast cancer patients [18]. When IHC results are equivocal, such as a 2+ expression level [19], further analysis using ISH is performed to confirm HER2 amplification. Pathologists commonly follow ASCO/CAP guidelines, using methods such as FISH, CISH, and SISH to compare HER2 signals with CEP17 (chromosome 17 centromere) signals [20].
In the DISH procedure, pathologists manually count HER2 (black) and CEP17 (red) signals under a microscope. A total of 20 cells are typically counted, and if the HER2/CEP17 ratio is borderline, an additional 20 cells are counted for more accurate results [21]. However, challenges arise in interpreting borderline and heterogeneous tumors, where HER2-amplified cells may be concentrated in specific areas or mixed with non-amplified cells [22,23]. The presence of CEP17 polysomy further complicates interpretation, as it can inflate the HER2/CEP17 ratio without indicating true amplification [24]. Subjectivity in cell selection and technical variability across laboratories also impact the consistency of the results [25,26].

1.8. Current Evaluation Practice

Manual HER2 ISH evaluation is a time-consuming and subjective process, heavily reliant on the pathologists’ selection of representative cells, which may introduce selection bias [27,28]. Various clinical guidelines, including cutoffs for signal counts, ratios, and the fraction of amplified cells, contribute to the complexity of assessing equivocal or heterogeneous cases. Despite these challenges, the manual evaluation process remains widespread, though it is not yet adequately standardized. Investigations into automated ISH evaluation have revealed discrepancies in the sampling methods, from small TMA cores to full tissue sections [29], with sample sizes ranging from a few fields of view to 20–60 nuclei per case [30,31].

1.9. Toward Computational Digital Pathology

Efforts to automate HER2 ISH testing using computational methods have gained traction in recent years [29,32], with digital image analysis offering the potential to reduce the pathologist’s workload and enhance diagnostic precision. While good-quality samples and standardized procedures are still necessary, image analysis can serve as a valuable decision-support tool [29,30]. The computer-assisted quantification of FISH signals has shown promise, particularly in improving the evaluation of equivocal and heterogeneous cases through large-scale sampling and unbiased analysis [29,31].
Despite the advancements in digital pathology for FISH and CISH stains, limited research has been conducted on SISH stains for HER2 scoring and amplification using computational methods. Our research aimed to explore the potential of high-resolution digital HER2 SISH images to generate objective, statistically derived indicators of intratumoral heterogeneity in the HER2 status. Such efforts are crucial for improving the accuracy and scalability of HER2 amplification evaluation in clinical settings.

2. Computational Digital Pathology

While many healthcare and life sciences organizations recognize the potential of using artificial intelligence to analyze whole-slide images (WSIs), developing an automated slide analysis pipeline presents significant challenges. A functional WSI pipeline must handle a high volume of digitized slides at low cost and with high efficiency. Computational image analysis generally involves several key steps, which are discussed in this section. Figure 3 illustrates our proposed scheme for these steps.
Digital pathology has become central to both research and clinical diagnostics, driven by advancements in imaging technology and the availability of efficient computational tools. WSIs have been instrumental in this transformation, allowing for the rapid digitization of pathology slides into high-resolution images.

2.1. Data Preprocessing

Image preprocessing typically involves the following steps:
  • Noise reduction and artifact elimination: Removing irrelevant or non-informative data, such as slide backgrounds, dust particles, or scanning artifacts.
  • Dataset consistency: Ensuring the creation of a standardized and consistent dataset by eliminating variations across different samples.
  • Tiling for deep learning models: Most deep learning models cannot process gigapixel images directly. Therefore, WSIs are split into smaller tiles, which are processed in batches during downstream modeling.
Preprocessing is critical for using computational resources efficiently and minimizing errors caused by noise or artifacts in the images. Tissue segmentation algorithms often rely on effective preprocessing, as irrelevant variations can disrupt accurate image analysis. Morphological transformations, frequently used in image postprocessing, are also employed during preprocessing to detect and remove artifacts.
Automated image analysis in digital pathology depends on the visual quantification of image features. Pathologists use tissue segmentation algorithms based on this initial preprocessing step [33]. Signal saturation is a common optical effect that occurs when certain pixel values exceed the scanner software’s detection range, for example when imaging highly overexpressed genes. Yang et al. [34] proposed a mixture-based model for spot segmentation that addresses this issue by estimating dense pixel values with a censored component. Table 3 provides an overview of commonly used preprocessing techniques, their applications, and constraints.
Data preprocessing plays a critical role in ensuring the success of downstream modeling for whole-slide image analysis. Preprocessing not only reduces noise but also prepares the images for feature extraction, segmentation, and classification tasks. Understanding the properties and limitations of each technique is essential for developing robust, scalable pipelines in computational digital pathology.
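To make the tiling and background-filtering steps above concrete, the following is a minimal Python sketch (not part of the original study; it assumes the slide region has already been loaded as a NumPy RGB array, for example via a WSI reader such as OpenSlide) that splits an image into fixed-size tiles and discards tiles that are mostly blank glass.

```python
import numpy as np

def tile_image(image, tile_size=512, white_thresh=220, max_white_frac=0.8):
    """Split an RGB image array into non-overlapping tiles, skipping
    tiles that are mostly background (near-white pixels).

    image: H x W x 3 uint8 array (e.g., a region read from a WSI).
    Returns a list of ((row, col), tile) pairs.
    """
    tiles = []
    h, w = image.shape[:2]
    for r in range(0, h - tile_size + 1, tile_size):
        for c in range(0, w - tile_size + 1, tile_size):
            tile = image[r:r + tile_size, c:c + tile_size]
            # Fraction of pixels whose three channels all exceed the
            # background threshold (slide glass appears near-white).
            white_frac = np.mean(np.all(tile > white_thresh, axis=-1))
            if white_frac < max_white_frac:
                tiles.append(((r, c), tile))
    return tiles
```

In practice, the tile size and background threshold depend on the scanner magnification and stain intensity and would need tuning per dataset.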

2.2. Feature Extraction

Table 4 lists various feature extraction methods used in image analysis. Feature extraction for histopathology images is often guided by pathologists’ clinical experience, so property-based (handcrafted) features are used as a foundation. This section covers three main types of features: shape-based, texture-based, and color-based. Each feature type is detailed in the following subsections, and Table 4 provides brief descriptions of the feature-wise performances of some methods.

2.2.1. Shape-Based Features

The classification of pathology images often relies on the morphology of nuclei and cell sections. Shape and size (morphology) play a crucial role in diagnosing lesions and cancers. Spherical or quasi-spherical shapes are easier to characterize as feature vectors than more complex, naturally occurring cell shapes. Shape features can quantify the cell or nucleus region by calculating attributes such as size, area, and perimeter.
For example, Ref. [56] represented color distribution in 3D cervical cancer images using intensity data and shape details. A large annotated dataset of histological images related to the cervix, vagina, and uterus was used in [57] to assess the quality of shape features, such as rotation-invariant features. Shape-based features were also applied in [58] for cervical cancer detection using unsupervised k-means clustering and geometric feature extraction from spanning tree graphs.
Shape features have also been applied in cancer cell detection using FISH spots [39], where automated detection and classification rules were employed to identify and count FISH spots accurately. Similarly, Ref. [42] extracted lymphocytes from plasma and used shape and contour features for leukemia diagnosis, while [43] introduced contour signature and fractal features for classifying lymphocyte nuclei in leukemia cases.
Various shape-based extraction methods used for image analysis are described in Table 4.
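As an illustration of such morphological descriptors, the sketch below (a hypothetical example using scikit-image, not code from the cited studies) computes area, perimeter, eccentricity, and circularity for each connected component in a binary nuclei mask.

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_features(binary_mask):
    """Compute simple morphological descriptors for each connected
    component (e.g., segmented nuclei) in a binary mask."""
    features = []
    for region in regionprops(label(binary_mask)):
        # Circularity is 1.0 for a perfect circle and smaller otherwise.
        circularity = (4.0 * np.pi * region.area / (region.perimeter ** 2)
                       if region.perimeter > 0 else 0.0)
        features.append({
            "area": region.area,
            "perimeter": region.perimeter,
            "eccentricity": region.eccentricity,
            "circularity": circularity,
        })
    return features
```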

2.2.2. Texture-Based Features

Texture is a crucial feature for analyzing spatial patterns and tissue organization in pathology images. Texture analysis has been widely employed for classification tasks, particularly in pathological image analysis [59]. Texture patterns can range from pixel-level patterns to larger structures that capture spatial relationships.
For example, Ref. [60] proposed using fractal texture features based on optical density surface areas for analyzing cervical cell images. Texture features were also effective for detecting developmental phases in ISH images of gene expression patterns in [41], where texture factors provided insights into Drosophila gene patterns.
Additionally, local binary patterns (LBPs) were used in [10] to analyze ISH images and train gene classifiers for different layers of the cerebellum. Texture features have also been applied for HER2 2+ status evaluation, where 279 texture features were extracted from FISH images [52], and hyperspectral image compression techniques were explored in [47] for texture-based segmentation and classification.
Table 4 lists several texture-based feature extraction methods applied in image analysis.
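The sketch below illustrates two of the texture descriptors mentioned above, uniform LBP histograms and Haralick-style GLCM statistics, using scikit-image; it is a generic example under the assumption of an 8-bit grayscale patch, not a reimplementation of any cited method.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def texture_features(gray, lbp_points=8, lbp_radius=1):
    """Extract a uniform-LBP histogram and GLCM statistics from a
    grayscale (uint8) image patch."""
    # Uniform LBP: P + 2 distinct codes for P sampling points.
    lbp = local_binary_pattern(gray, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2
    lbp_hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)

    # GLCM at one pixel distance and four angles, averaged per property.
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    glcm_stats = {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")}
    return lbp_hist, glcm_stats
```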

2.2.3. Color-Based Features

Color is one of the most widely used features in digital pathology for selecting or rejecting cell sections. Color features are extracted in various color spaces, including RGB, to analyze images. However, color representation varies across devices, and standardization is essential for accurate analysis [61]. In pathology, color features help distinguish cells, tissues, and other structures.
For example, Ref. [43] used color segmentation to analyze blood cells in leukemia diagnosis. Blood and bone marrow smears from patients with acute lymphoblastic leukemia were analyzed in [48] using a k-means clustering approach, and the resulting color features were used for classification.
In M-FISH image analysis, color features are employed for chromosome classification [45], and SVM classifiers have been used for distinguishing leukemic white blood cells based on color features [53]. Various color-based extraction methods are summarized in Table 4.
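A simple color descriptor of the kind used in these studies can be built from per-channel histograms in more than one color space; the sketch below (illustrative only, assuming an 8-bit RGB patch and using scikit-image for the HSV conversion) concatenates RGB and HSV histograms into a single feature vector.

```python
import numpy as np
from skimage.color import rgb2hsv

def color_histogram(rgb, bins=16):
    """Concatenated per-channel histograms in RGB and HSV, a simple
    color descriptor for distinguishing stain classes."""
    hsv = rgb2hsv(rgb)  # returns floats in [0, 1]
    feats = []
    for img, lo, hi in ((rgb, 0, 255), (hsv, 0.0, 1.0)):
        for ch in range(3):
            hist, _ = np.histogram(img[..., ch], bins=bins,
                                   range=(lo, hi), density=True)
            feats.append(hist)
    return np.concatenate(feats)
```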

2.3. Segmentation

Segmentation is a critical task in image analysis, used to isolate regions of interest (ROIs) such as cell nuclei, tissues, or tumor areas. Segmentation techniques can vary from threshold-based methods to more advanced approaches like region-based or machine learning techniques. Effective segmentation is vital for accurate feature extraction and classification.
Table 5 summarizes various segmentation techniques, categorized by application (e.g., nuclei segmentation, cancer cell detection, tumor area detection).

2.3.1. Thresholding-Based Segmentation

Thresholding is one of the most common segmentation techniques, particularly for grayscale and RGB images. In threshold-based segmentation, pixel intensity is used to partition the image into regions, which are then analyzed based on intensity differences. Automatic threshold selection methods, such as Otsu’s method [69], are widely used to enhance segmentation accuracy.
For example, Ref. [70] proposed an intelligent framework for FISH data analysis using a hybrid nuclei segmentation technique. Threshold-based segmentation methods have also been applied for nuclei segmentation in HER2 status detection [62], where contrast enhancement and thresholding were used for improving image quality.
Threshold-based segmentation techniques for ISH images, including examples of CISH, FISH, and SISH images, are illustrated in Figure 4.
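As a minimal example of threshold-based nuclei segmentation on a bright-field ISH patch (an illustrative sketch using scikit-image, not the pipeline of any cited work), Otsu's threshold can be applied to the grayscale intensity, keeping the darker nuclear regions and applying light morphological clean-up:

```python
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk, remove_small_objects

def segment_nuclei_otsu(rgb, min_size=200):
    """Rough nuclei mask from a bright-field ISH patch: nuclei are
    darker than the background, so pixels below Otsu's threshold are
    taken as foreground."""
    gray = rgb2gray(rgb)                 # floats in [0, 1]
    thresh = threshold_otsu(gray)
    mask = gray < thresh                 # dark regions -> foreground
    mask = binary_opening(mask, disk(2))          # remove thin artifacts
    mask = remove_small_objects(mask, min_size)   # drop small specks
    return mask
```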

2.3.2. Region-Based Segmentation

In region-based segmentation, pixels are grouped based on intensity and spatial connectivity. This method works well for images with distinct regions, but multisegment images may require more processing power. Clustering methods like fuzzy c-means are often used for the soft clustering of pixels into multiple regions [71].
In [72], clustering-based segmentation was applied for identifying tumor regions, and machine learning techniques have also been integrated for region-based segmentation tasks [73,74]. Examples of region-based segmentation techniques applied to nuclei and tumor detection are listed in Table 5.
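A hard-clustering analogue of the fuzzy c-means approach described above can be sketched with k-means over per-pixel color values (illustrative only, using scikit-learn; fuzzy c-means would additionally return soft membership maps rather than a single label per pixel):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_segmentation(rgb, n_clusters=3, seed=0):
    """Partition a patch into n_clusters color classes (e.g., nuclei,
    stroma, background) by k-means on per-pixel RGB values."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)
```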

2.4. Classification

This section discusses the methods used for classifying ISH pathology images. These methods are categorized into two main subcategories: conventional classification, discussed in Section 2.4.1, and deep learning methods, covered in Section 2.4.2.

2.4.1. Classification through Conventional Methods

The first developments in computer vision date back to the 1960s, and the field has since become an essential part of intelligent systems in industries such as security, robotics, autonomous vehicles, and medical imaging [75]. In digital pathology, the task of classifying pathology images involves assigning biomarkers to different classes based on image input. Conventional computer vision methods leverage features such as color, shape, texture, and size to perform classification, making use of RGB images to detect disease-specific patterns [76].
Table 6 provides a comparison of conventional and deep learning methods for ISH image classification, highlighting their respective pros and cons.
Machine learning models such as decision trees, neural networks (NNs), K-nearest neighbors (KNN), and support vector machines (SVMs) have been widely applied in pathology classification tasks [77,78,79]. Each method offers distinct advantages: SVMs handle linear and non-linear data mapping using kernel functions, decision trees yield an interpretable tree structure with class probabilities at the leaves that extends naturally to multi-class problems, and KNN is a non-parametric, instance-based method that defers generalization until classification time.
For example, Cao et al. [45] used sparse representation-based classifiers to enhance chromosome analysis for cancer and genetic disease diagnostics using M-FISH images. Improved segmentation techniques and ensemble classifiers have also been applied in the diagnosis of acute lymphoblastic leukemia (ALL) [46].
Table 6. A comparison of conventional and deep learning methods used in digital pathology for ISH image classification.
Year | ISH Stain | ML/DL | Pros and Cons | Ref.
2012 | M-FISH | ML | Pros: Effective for small datasets, interpretable models. Cons: Limited scalability and feature extraction capability. | [45]
2014 | Leukemia | ML | Pros: Simple, computationally efficient for screening. Cons: Handcrafted features may miss complex patterns. | [46]
2016 | FISH | DL | Pros: Automated feature extraction, scalable. Cons: Requires large datasets and computational power. | [80]
2017 | ISH | DL | Pros: Learns hierarchical features from raw images. Cons: Black-box models, high computational requirements. | [81]
2018 | Monoclonal antibody WSIs | DL | Pros: High accuracy, effective for complex features like cell membranes. Cons: Training requires large amounts of annotated data. | [82]
2019 | ISH | DL | Pros: Learns from raw pixel data. Cons: Struggles to interpret feature representations. | [83]
2020 | CISH | ML | Pros: Cost-effective, interpretable. Cons: Lower accuracy than DL methods for complex data. | [84]
2021 | ISH | DL | Pros: Can handle large image datasets. Cons: Black-box model, interpretability challenges. | [85]
Note: ML stands for machine learning, and DL for deep learning. The table highlights the advantages (pros) and limitations (cons) of both approaches in terms of scalability, interpretability, and computational cost.
Liew et al. [80] applied classification-based methods for FISH image analysis, while [84] explored CISH image classification using Haralick texture features and principal component analysis (PCA) for dimensionality reduction. Table 6 summarizes recent studies on ISH image-based pathology disease classification.
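To illustrate the conventional pipeline described above, handcrafted features followed by a classical classifier, the sketch below trains an RBF-kernel SVM with scikit-learn; the feature matrix is assumed to come from the shape, texture, and color extractors discussed in Section 2.2, and the example is not drawn from any specific cited study.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm_classifier(features, labels):
    """Conventional pipeline: handcrafted features (shape, texture,
    color) -> standardization -> RBF-kernel SVM, evaluated with
    5-fold cross-validation before fitting on all data."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    cv_accuracy = cross_val_score(model, features, labels, cv=5).mean()
    model.fit(features, labels)
    return model, cv_accuracy
```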

2.4.2. Classification through Deep Learning

Deep learning has revolutionized image analysis in recent years, particularly with the use of convolutional neural networks (CNNs) for medical imaging tasks [86,87]. CNNs apply convolutional filters to input images, learning hierarchical feature representations automatically, without the need for manual feature extraction. This makes CNNs especially powerful for tasks such as pathology image classification.
CNNs are trained on large datasets, allowing them to learn from raw pixel data and optimize for high-level semantic features such as cell boundaries and biomarker signals [88,89]. For example, Ref. [81] proposed a deep convolutional denoising autoencoder (CDAE) for constructing compact ISH image representations, while [82] introduced Her2Net, a deep learning framework for HER2-stained breast cancer image analysis, which includes cell membrane and nucleus detection, segmentation, and classification.
Similarly, Ref. [83] employed autoencoders and convolutional neural networks for learning feature representations directly from image pixels, demonstrating the superiority of these methods over traditional feature extraction techniques. Transfer learning strategies were also explored to adapt pretrained models to ISH images, improving accuracy in biomarker detection and disease classification.
The reference work for ISH image-based pathology disease classification is summarized in Table 6.
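For contrast with the conventional pipeline, the sketch below defines a small patch-level CNN in PyTorch; it is a deliberately minimal, hypothetical architecture for tile-level ISH classification (published work typically fine-tunes a pretrained backbone such as a ResNet rather than training a network of this size from scratch).

```python
import torch
import torch.nn as nn

class SmallPatchCNN(nn.Module):
    """Minimal CNN for tile-level ISH classification (e.g., amplified
    vs. non-amplified); shown only to illustrate hierarchical feature
    learning from raw pixels."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),       # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (N, 3, H, W) float tensor
        return self.classifier(self.features(x).flatten(1))

# Example: logits = SmallPatchCNN()(torch.randn(4, 3, 224, 224))
```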

3. Image Analysis on SISH

In the realm of HER2 determination, SISH has emerged as a viable alternative to traditional methods like FISH and CISH [90]. SISH [91,92] represents a novel approach that leverages bright-field imaging, similar to CISH, and has been significantly enhanced by advancements in automation. Ventana Medical Systems (Tucson, AZ, USA) has developed a fully automated system that improves the efficiency and consistency of bright-field in situ hybridization, thereby reducing the risk of human error. This system allows for the automated detection of chromogenic signals, enabling the simultaneous running of HER2 and CEP17 assays on related tissue slides.
In line with the ASCO/CAP guidelines, the evaluation of HER2 gene amplification status using SISH was conducted in a blinded fashion. The analysis involved examining 20 non-overlapping nuclei for HER2/CEP17 signals and calculating the HER2/CEP17 ratio. A ratio greater than 2.2 indicates HER2 gene amplification, while a ratio of 1.8 or less suggests a lack of amplification. Ratios between 1.8 and 2.2 are deemed equivocal, necessitating the counting of signals from an additional 20 tumor nuclei in a second target area to compute a new ratio. Benign breast epithelial cells and other adjacent benign cells served as internal controls throughout the process.
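The ratio-based decision rule described above maps directly to a few lines of code; the sketch below (illustrative only, using the 2.2/1.8 cut-offs quoted in this section) turns per-nucleus HER2 and CEP17 counts into an amplification call.

```python
def her2_sish_status(her2_counts, cep17_counts):
    """Apply the HER2/CEP17 ratio cut-offs quoted above: >2.2 amplified,
    <=1.8 not amplified, otherwise equivocal (score 20 more nuclei)."""
    ratio = sum(her2_counts) / sum(cep17_counts)
    if ratio > 2.2:
        status = "amplified"
    elif ratio <= 1.8:
        status = "not amplified"
    else:
        status = "equivocal: count 20 additional nuclei and recompute"
    return ratio, status

# Example with counts from 20 scored nuclei:
# her2_sish_status([6] * 20, [2] * 20)  ->  (3.0, "amplified")
```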
This study’s focus on SISH was not only to validate its efficacy in HER2 status assessment but also to lay the groundwork for more advanced computational analysis. Moreover, the combination of SISH with automated image analysis presents an opportunity to create scalable solutions capable of analyzing high-throughput HER2 assays, minimizing the variability often observed in manual counting techniques. As discussed throughout this review, SISH offers a promising platform for integrating computational techniques to enhance the accuracy and scalability of HER2 analysis. The adoption of SISH in computational pathology is particularly significant, given its compatibility with automated image analysis systems, which are essential for handling the increasing volume and complexity of digital pathology data.
Software Information: Image analysis was performed using custom algorithms developed in MATLAB R2021b (MathWorks, Natick, MA, USA). MATLAB was obtained from its official website at https://www.mathworks.com/products/matlab.html, accessed on 29 July 2024.

4. Limitations of the Research Work

This study faced several limitations, including variability in tissue samples and heterogeneity in ISH images, which may lead to inconsistencies in image analysis. The reliance on SISH, a newer and less widely adopted staining technique, could limit the generalizability of the results. Additionally, the computational demands required for processing large whole-slide images (WSIs) pose scalability challenges. Future work could focus on improving algorithms for tissue heterogeneity and utilizing federated learning across clinical institutions to address the variability in datasets. Further refinement is needed to address complex cases of HER2 heterogeneity and to validate the findings across diverse datasets and clinical settings.

5. Conclusions

In this paper, we have provided a comprehensive overview of the advancements in machine learning and in situ hybridization (ISH) image analysis using computational methods. We began with an introduction to ISH images and the associated challenges, followed by a discussion of ISH-related work and the application of computational methods in ISH stain pathology image analysis. The computational image analysis section encompasses image data acquisition, preprocessing, segmentation, feature extraction, and classification. We assessed relevant works based on their specific technical categories under each application goal from a computational pathology perspective. By reviewing all related studies on ISH stains using computational image analysis methods, we identified the most popular image feature extraction, segmentation techniques, and classification approaches.
Machine vision techniques in this sector have demonstrated a consistent overall development trend, albeit with a cautious approach. The most cutting-edge technologies in this field typically emerge three to five years later compared to other domains. This “slow starter” phenomenon is primarily due to the interdisciplinary nature of the research, where machine vision scientists often have limited knowledge of ISH stain pathology. However, as more biomedical engineering students are educated, we expect the progress of machine vision techniques in this field to synchronize with those in other domains.
Machine vision techniques have evolved significantly in ISH image analysis, though the interdisciplinary nature of pathology and AI often leads to a slower adoption of these technologies. As these fields converge, we anticipate that digital pathology, with integrated computational methods, will enhance the accuracy and reproducibility of HER2 assessments, ultimately improving patient outcomes through personalized cancer treatment.
Furthermore, the machine vision approaches discussed in this paper can be applied to various microscopic image analysis disciplines beyond ISH digital pathology. Recent rapid advancements in this field have shifted the debate around digital pathology, enabling greater accuracy and efficiency through computational pathology. While the potential of powerful new models to support clinicians in decision making is promising, translating these models into medical practice remains challenging. Digital pathology, distinguished by its comprehensive image acquisition process, often involves subsampling or selecting small tiles from a large whole-slide image (WSI), either systematically or randomly.
Figure 5 illustrates an example of computer-based nuclei and HER2 detection from a SISH pathology image. Detecting breast cancer using the HER2 ratio with SISH stains is complex and poses significant challenges for the automatic localization of tumor regions in SISH WSI images. HER2 scoring follows a specified procedure, which includes several key points and challenges:
  • Selecting appropriate regions with more red and black signals from the SISH WSI stain image.
  • Localizing the nuclei region, which is difficult due to the fusion of nuclei in many areas of the WSI.
  • Choosing 20 nuclei with signals and discarding faint nuclei.
  • After selecting 20 nuclei, separating the red and black signals, ensuring that two adjacent signals of the same type are not fused and counted as one.
Due to these challenges, manually identifying HER2 scoring from SISH stains is not easy. Computational techniques are necessary to automatically compute the HER2 score from SISH stains.
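As a sketch of how the signal-separation step in this procedure might be automated (an illustrative example using scikit-image blob detection, not the method validated in this review; all thresholds are placeholders), dark HER2 dots and red CEP17 dots can be located inside a single-nucleus mask and counted:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import blob_log

def count_sish_signals(rgb, nucleus_mask):
    """Count candidate HER2 (black) and CEP17 (red) dots inside a
    single-nucleus boolean mask using Laplacian-of-Gaussian blob
    detection; thresholds would need tuning per scanner and stain."""
    gray = rgb2gray(rgb)                       # floats in [0, 1]
    rgbf = rgb.astype(float) / 255.0

    # Dark dots appear as bright blobs in the inverted intensity image.
    darkness = (1.0 - gray) * nucleus_mask
    her2_blobs = blob_log(darkness, min_sigma=1, max_sigma=4, threshold=0.2)

    # Red dots: redness = R minus the mean of G and B, clipped at zero.
    redness = (rgbf[..., 0] - rgbf[..., 1:].mean(axis=-1)).clip(0) * nucleus_mask
    cep17_blobs = blob_log(redness, min_sigma=1, max_sigma=4, threshold=0.1)

    return len(her2_blobs), len(cep17_blobs)
```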

Future Directions

ISH is a unique tissue image-based molecular analysis method used for the precise microscopic detection and localization of DNA, mRNA, and microRNA in metaphase spreads as well as in cell and tissue preparations. In comparison, IHC (immunohistochemistry) is invaluable for the localization, detection, and quantification of antigens, including HER2 signals. Thus, deploying automated machine learning techniques with ISH holds significant promise for the future. Artificial intelligence (AI) provides a powerful tool for extracting information from ISH digitized whole-slide images (WSIs). Numerous techniques have been developed to address diverse tasks related to HER2 scoring from ISH stain images using machine learning methods.
In the future, pathological imaging and machine vision technologies should be developed together organically, with features such as real-time pathological image processing under a microscope or endomicroscopy (e.g., virtual staining and class labeling of pathological images). Emerging AI models, such as transformers and self-supervised learning techniques, offer significant promise in overcoming current challenges in real-time image processing and live diagnosis, moving pathology closer to integrated, fully automated solutions. Microscopes equipped with apps or software for the image analysis of diseased samples can be fitted with small, high-performance processors. Pathologists will be able to monitor cells or tissue types within their actual field of view and decide whether they are normal or abnormal in real time through these systems. They could also view a number of virtually stained images generated from the basic stain under the lens. Simultaneously, the related data analysis report and virtual staining image could be transmitted to a specified mobile phone, computer, or mailbox, enabling real-time scoring. There are numerous effective and novel strategies that could be used to attain these objectives.

Author Contributions

Conceptualization, Z.U.R., M.F.A.F. and F.S.A.; Methodology, Z.U.R., W.S.H.M.W.A., M.F.A.F. and F.S.A.; Validation on medical terminologies and review on medical part, L.-M.L., P.L.C. and S.F.C.; Formal writing and Analysis, Z.U.R. and W.S.H.M.W.A.; Investigation, Z.U.R. and W.S.H.M.W.A.; Writing—Original Draft Preparation, Z.U.R. and W.S.H.M.W.A.; Writing—Review & Editing, Z.U.R., M.F.A.F., F.S.A. and L.-M.L.; Supervision, M.F.A.F. and F.S.A.; Funding Acquisition, M.F.A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Higher Education (MOHE), Malaysia, under the Fundamental Research Grant Scheme (FRGS), grant number FRGS/1/2020/ICT02/MMU/02/10.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article as it is a review based on previously published studies and publicly available datasets.

Acknowledgments

The authors would like to thank University Malaya Medical Center (UMMC) for providing access to resources and for their ongoing partnership. We also appreciate the valuable insights from all contributors to the literature we reviewed.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nicolas, E.; Bertucci, F.; Sabatier, R.; Gonçalves, A. Targeting BRCA deficiency in breast cancer: What are the clinical evidences and the next perspectives? Cancers 2018, 10, 506. [Google Scholar] [CrossRef] [PubMed]
  2. Coulton, G.R.; De Belleroche, J. In Situ Hybridization: Medical Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  3. Koh, Y.W.; Lee, H.J.; Lee, J.W.; Kang, J.; Gong, G. Dual-color silver-enhanced in situ hybridization for assessing HER2 gene amplification in breast cancer. Mod. Pathol. 2011, 24, 794–800. [Google Scholar] [CrossRef] [PubMed]
  4. Di Palma, S.; Collins, N.; Faulkes, C.; Ping, B.; Ferns, G.; Haagsma, B.; Layer, G.; Kissin, M.; Cook, M. Chromogenic in situ hybridisation (CISH) should be an accepted method in the routine diagnostic evaluation of HER2 status in breast cancer. J. Clin. Pathol. 2007, 60, 1067–1068. [Google Scholar] [CrossRef]
  5. Shousha, S.; Peston, D.; Amo-Takyi, B.; Morgan, M.; Jasani, B. Evaluation of automated silver-enhanced in situ hybridization (SISH) for detection of HER2 gene amplification in breast carcinoma excision and core biopsy specimens. Histopathology 2009, 54, 248–253. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, P.; Wang, L.; Li, Y.; Song, Q.; Lv, S.; Hu, X. Automatic cell nuclei segmentation and classification of cervical Pap smear images. Biomed. Signal Process. Control. 2019, 48, 93–103. [Google Scholar] [CrossRef]
  7. Forster, A.C.; McInnes, J.L.; Skingle, D.C.; Symons, R.H. Non-radioactive hybridization probes prepared by the chemical labelling of DNA and RNA with a novel reagent, photobiotin. Nucleic Acids Res. 1985, 13, 745–761. [Google Scholar] [CrossRef] [PubMed]
  8. Theodosiou, Z.; Kasampalidis, I.N.; Livanos, G.; Zervakis, M.; Pitas, I.; Lyroudia, K. Automated analysis of FISH and immunohistochemistry images: A review. Cytom. Part A J. Int. Soc. Anal. Cytol. 2007, 71, 439–450. [Google Scholar] [CrossRef]
  9. Mondal, S.K. Manual of Histological Techniques; Jaypee Brothers Medical Publishers (P) Ltd.: New Delhi, India, 2017. [Google Scholar]
  10. Kirsch, L.; Liscovitch, N.; Chechik, G. Localizing genes to cerebellar layers by classifying ISH images. PLoS Comput. Biol. 2012, 8, e1002790. [Google Scholar] [CrossRef]
  11. Ben-Dor, A.; Bruhn, L.; Friedman, N.; Nachman, I.; Schummer, M.; Yakhini, Z. Tissue classification with gene expression profiles. In Proceedings of the Fourth Annual International Conference on Computational Molecular Biology, Tokyo, Japan, 8–11 April 2000; pp. 54–64. [Google Scholar]
  12. Ventana Medical Systems, Inc. Ventana HER2 Dual ISH DNA Probe Cocktail. Available online: https://www.accessdata.fda.gov (accessed on 29 July 2024).
  13. Huber, D.; von Voithenberg, L.V.; Kaigala, G. Fluorescence in situ hybridization (FISH): History, limitations and what to expect from micro-scale FISH? Micro Nano Eng. 2018, 1, 15–24. [Google Scholar] [CrossRef]
  14. Farrell, R.E. Chapter 4 - RNA isolation strategies. In RNA Methodologies, 6th ed.; Farrell, R.E., Ed.; Academic Press: Cambridge, MA, USA, 2023; pp. 77–120. [Google Scholar] [CrossRef]
  15. Rapley, R. Basic techniques in molecular biology. In Medical Biomethods Handbook; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–12. [Google Scholar]
  16. Iqbal, N.; Iqbal, N. Human epidermal growth factor receptor 2 (HER2) in cancers: Overexpression and therapeutic implications. Mol. Biol. Int. 2014, 2014, 852748. [Google Scholar] [CrossRef]
  17. Sasso, M.; Bianchi, F.; Ciravolo, V.; Tagliabue, E.; Campiglio, M. HER2 splice variants and their relevance in breast cancer. J. Nucleic Acids Investig. 2011, 2, e9. [Google Scholar] [CrossRef]
  18. Nohe, A. Long-Term Trends in Phytoplankton Biomass, Composition and Dynamics in the Belgian Part of the North Sea. Ph.D. Thesis, Ghent University, Gent, Belgium, 2019. [Google Scholar]
  19. Masmoudi, H.; Hewitt, S.M.; Petrick, N.; Myers, K.J.; Gavrielides, M.A. Automated quantitative assessment of HER-2/neu immunohistochemical expression in breast cancer. IEEE Trans. Med. Imaging 2009, 28, 916–925. [Google Scholar] [CrossRef] [PubMed]
  20. Ciesielski, M.; Szajewski, M.; Walczak, J.; Pęksa, R.; Lenckowski, R.; Supeł, M.; Zieliński, J.; Kruszewski, W.J. Impact of chromosome 17 centromere copy number increase on patient survival and human epidermal growth factor receptor 2 expression in gastric adenocarcinoma. Oncol. Lett. 2021, 21, 142. [Google Scholar] [CrossRef]
  21. Wolff, A.C.; Somerfield, M.R.; Dowsett, M.; Hammond, M.E.H.; Hayes, D.F.; McShane, L.M.; Saphner, T.J.; Spears, P.A.; Allison, K.H. Human Epidermal Growth Factor Receptor 2 Testing in Breast Cancer: American Society of Clinical Oncology–College of American Pathologists Guideline Update. Arch. Pathol. Lab. Med. 2023, 147, 993–1000. [Google Scholar] [CrossRef] [PubMed]
  22. Nitta, H.; Li, Z. Breast HER2 Intratumoral Heterogeneity as a Biomarker for Improving HER2-Targeted Therapy. Crit. Rev.™ Oncog. 2020, 25. [Google Scholar] [CrossRef] [PubMed]
  23. Marchiò, C.; Annaratone, L.; Marques, A.; Casorzo, L.; Berrino, E.; Sapino, A. Evolving concepts in HER2 evaluation in breast cancer: Heterogeneity, HER2-low carcinomas and beyond. Semin. Cancer Biol. 2021, 72, 123–135. [Google Scholar] [CrossRef]
  24. Yeh, I.T.; Martin, M.A.; Robetorye, R.S.; Bolla, A.R.; McCaskill, C.; Shah, R.K.; Gorre, M.E.; Mohammed, M.S.; Gunn, S.R. Clinical validation of an array CGH test for HER2 status in breast cancer reveals that polysomy 17 is a rare event. Mod. Pathol. 2009, 22, 1169–1175. [Google Scholar] [CrossRef]
  25. Hanna, W.M.; Rüschoff, J.; Bilous, M.; Coudry, R.A.; Dowsett, M.; Osamura, R.Y.; Penault-Llorca, F.; Van De Vijver, M.; Viale, G. HER2 in situ hybridization in breast cancer: Clinical implications of polysomy 17 and genetic heterogeneity. Mod. Pathol. 2014, 27, 4–18. [Google Scholar] [CrossRef]
  26. Chang, M.C.; Malowany, J.I.; Mazurkiewicz, J.; Wood, M. ‘Genetic heterogeneity’ in HER2/neu testing by fluorescence in situ hybridization: A study of 2522 cases. Mod. Pathol. 2012, 25, 683–688. [Google Scholar] [CrossRef]
  27. Robertson, S. Improving Biomarker Assessment in Breast Pathology. Ph.D. Thesis, Karolinska Institutet, Solna, Sweden, 2020. [Google Scholar]
  28. Huclier-Markai, S.; Alliot, C.; Battu, S. Nanoparticles in radiopharmaceutical sciences: Review of the fundamentals, characterization techniques and future challenges. J. Mater. NanoSci. 2020, 7, 36–61. [Google Scholar]
  29. Prins, M.J.; Ruurda, J.P.; van Diest, P.J.; Van Hillegersberg, R.; ten Kate, F.J. Evaluation of the HER2 amplification status in oesophageal adenocarcinoma by conventional and automated FISH: A tissue microarray study. J. Clin. Pathol. 2014, 67, 26–32. [Google Scholar] [CrossRef] [PubMed]
  30. Furrer, D.; Jacob, S.; Caron, C.; Sanschagrin, F.; Provencher, L.; Diorio, C. Validation of a new classifier for the automated analysis of the human epidermal growth factor receptor 2 (HER2) gene amplification in breast cancer specimens. Diagn. Pathol. 2013, 8, 17. [Google Scholar] [CrossRef] [PubMed]
  31. López, C.; Tomás, B.; Korzynska, A.; Bosch, R.; Salvadó, M.T.; Llobera, M.; Garcia-Rojo, M.; Alvaro, T.; Jaén, J.; Lejeune, M. Is it necessary to evaluate nuclei in HER2 FISH evaluation? Am. J. Clin. Pathol. 2013, 139, 47–54. [Google Scholar] [CrossRef]
  32. Reljin, B.; Paskas, M.; Reljin, I.; Konstanty, K. Breast cancer evaluation by fluorescent dot detection using combined mathematical morphology and multifractal techniques. Diagn. Pathol. 2011, 6, S21. [Google Scholar] [CrossRef] [PubMed]
  33. Bouzin, C.; Saini, M.L.; Khaing, K.K.; Ambroise, J.; Marbaix, E.; Grégoire, V.; Bol, V. Digital pathology: Elementary, rapid and reliable automated image analysis. Histopathology 2016, 68, 888–896. [Google Scholar] [CrossRef]
  34. Yang, Y.; Stafford, P.; Kim, Y. Segmentation and intensity estimation for microarray images with saturated pixels. BMC Bioinform. 2011, 12, 462. [Google Scholar] [CrossRef]
  35. Janani, P.; Premaladha, J.; Ravichandran, K. Image enhancement techniques: A study. Indian J. Sci. Technol. 2015, 8, 1–12. [Google Scholar] [CrossRef]
  36. Förstner, W. Image preprocessing for feature extraction in digital intensity, color and range images. In Geomatic Method for the Analysis of Data in the Earth Sciences; Springer: Berlin/Heidelberg, Germany, 2000; pp. 165–189. [Google Scholar]
  37. El-Hakim, S.F.; Boulanger, P.; Blais, F.; Beraldin, J.A. System for indoor 3D mapping and virtual environments. In Proceedings of the Videometrics V, International Society for Optics and Photonics, San Diego, CA, USA, 7 July 1997; SPIE: Bellingham, WA, USA; 1997; Volume 3174, pp. 21–35. [Google Scholar] [CrossRef]
  38. Lagendijk, R.L.; Biemond, J. Basic methods for image restoration and identification. In The Essential Guide to Image Processing; Elsevier: Amsterdam, The Netherlands, 2009; pp. 323–348. [Google Scholar]
  39. Wang, X.; Zheng, B.; Li, S.; Zhang, R.; Mulvihill, J.J.; Chen, W.R.; Liu, H. Automated detection and analysis of fluorescent in situ hybridization spots depicted in digital microscopic images of Pap-smear specimens. J. Biomed. Opt. 2009, 14, 021002. [Google Scholar] [CrossRef]
  40. Schinko, J.; Posnien, N.; Kittelmann, S.; Koniszewski, N.; Bucher, G. Single and double whole-mount in situ hybridization in red flour beetle (Tribolium) embryos. Cold Spring Harb. Protoc. 2009, 2009, pdb-prot5258. [Google Scholar] [CrossRef]
  41. Zhang, W.; Li, R.; Zeng, T.; Sun, Q.; Kumar, S.; Ye, J.; Ji, S. Deep model based transfer and multi-task learning for biological image analysis. IEEE Trans. Big Data 2016, 6, 322–333. [Google Scholar] [CrossRef]
  42. Mohapatra, S.; Patra, D. Automated cell nucleus segmentation and acute leukemia detection in blood microscopic images. In Proceedings of the 2010 International Conference on Systems in Medicine and Biology, Kharagpur, India, 16–18 December 2010; pp. 49–54. [Google Scholar]
  43. Mohapatra, S.; Samanta, S.S.; Patra, D.; Satpathi, S. Fuzzy based blood image segmentation for automated leukemia detection. In Proceedings of the 2011 International Conference on Devices and Communications (ICDeCom), Mesra, India, 24–25 February 2011; pp. 1–5. [Google Scholar]
  44. Kimura, Y.; Arakawa, F.; Kiyasu, J.; Miyoshi, H.; Yoshida, M.; Ichikawa, A.; Nakashima, S.; Ishibashi, Y.; Niino, D.; Sugita, Y.; et al. A spindle cell variant of diffuse large B-cell lymphoma is characterized by T-cell/myofibrohistio-rich stromal alterations: Analysis of 10 cases and a review of the literature. Eur. J. Haematol. 2012, 89, 302–310. [Google Scholar] [CrossRef]
  45. Cao, H.; Deng, H.W.; Li, M.; Wang, Y.P. Classification of multicolor fluorescence in situ hybridization (M-FISH) images with sparse representation. IEEE Trans. Nanobiosci. 2012, 11, 111–118. [Google Scholar]
  46. Mohapatra, S.; Patra, D.; Satpathy, S. An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images. Neural Comput. Appl. 2014, 24, 1887–1904. [Google Scholar] [CrossRef]
  47. Kala, S.; Vasuki, S. Feature correlation based parallel hyper spectral image compression using a hybridization of FCM and subtractive clustering. J. Commun. Technol. Electron. 2014, 59, 1378–1389. [Google Scholar] [CrossRef]
  48. Amin, M.M.; Kermani, S.; Talebi, A.; Oghli, M.G. Recognition of acute lymphoblastic leukemia cells in microscopic images using k-means clustering and support vector machine classifier. J. Med Signals Sensors 2015, 5, 49. [Google Scholar]
  49. Xie, C.; Shao, Y.; Li, X.; He, Y. Detection of early blight and late blight diseases on tomato leaves using hyperspectral imaging. Sci. Rep. 2015, 5, 16564. [Google Scholar] [CrossRef] [PubMed]
  50. Vahadane, A.; Peng, T.; Sethi, A.; Albarqouni, S.; Wang, L.; Baust, M.; Steiger, K.; Schlitter, A.M.; Esposito, I.; Navab, N. Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans. Med Imaging 2016, 35, 1962–1971. [Google Scholar] [CrossRef]
  51. Jiang, M.; Yang, P.; Mackin, D.; Elhalawani, H.; Zhang, Z.; Peng, W.; Shi, Y.; Wang, H.; Jin, H.; Mohamed, A.; et al. An imaging/biology correlation study between radiomics features and anaplastic lymphoma kinase (ALK) mutational status in a uniform Chinese cohort of locally advanced lung adenocarcinomas. J. Clin. Oncol. 2018, 36, 15. [Google Scholar] [CrossRef]
  52. Jiang, Z.; Song, L.; Lu, H.; Yin, J. The potential use of DCE-MRI texture analysis to predict HER2 2+ status. Front. Oncol. 2019, 9, 242. [Google Scholar] [CrossRef]
  53. Hegde, R.B.; Prasad, K.; Hebbar, H.; Singh, B.M.K.; Sandhya, I. Automated decision support system for detection of leukemia from peripheral blood smear images. J. Digit. Imaging 2020, 33, 361–374. [Google Scholar] [CrossRef]
  54. Song, L.; Lu, H.; Yin, J. Preliminary study on discriminating HER2 2+ amplification status of breast cancers based on texture features semi-automatically derived from pre-, post-contrast, and subtraction images of DCE-MRI. PLoS ONE 2020, 15, e0234800. [Google Scholar] [CrossRef] [PubMed]
  55. Park, M.; Jin, J.S.; Xu, M.; Wong, W.F.; Luo, S.; Cui, Y. Microscopic image segmentation based on color pixels classification. In Proceedings of the First International Conference on Internet Multimedia Computing and Service, Kunming, China, 23–25 November 2009; pp. 53–59. [Google Scholar]
  56. Loukas, C.G.; Linney, A. A survey on histological image analysis-based assessment of three major biological factors influencing radiotherapy: Proliferation, hypoxia and vasculature. Comput. Methods Programs Biomed. 2004, 74, 183–199. [Google Scholar] [CrossRef]
  57. Schaumberg, A.J.; Juarez, W.; Choudhury, S.J.; Pastrián, L.G.; Pritt, B.S.; Pozuelo, M.P.; Sánchez, R.S.; Ho, K.; Zahra, N.; Sener, B.D.; et al. Large-scale annotation of histopathology images from social media. BioRxiv 2018, 1, 396663. [Google Scholar]
  58. Li, C.; Hu, Z.; Chen, H.; Xue, D.; Xu, N.; Zhang, Y.; Li, X.; Wang, Q.; Ma, H. Cervical histopathology image clustering using graph based unsupervised learning. In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019), Tianjin, China, 13–15 July 2019; Springer: Singapore, 2019; pp. 141–152. [Google Scholar]
  59. Castellano, G.; Bonilha, L.; Li, L.; Cendes, F. Texture analysis of medical images. Clin. Radiol. 2004, 59, 1061–1069. [Google Scholar] [CrossRef] [PubMed]
  60. MacAulay, C.; Palcic, B. Fractal texture features based on optical density surface area. Use in image analysis of cervical cells. Anal. Quant. Cytol. Histol. 1990, 12, 394–398. [Google Scholar]
  61. Garcia-Lamont, F.; Cervantes, J.; López, A.; Rodriguez, L. Segmentation of images by color features: A survey. Neurocomputing 2018, 292, 1–27. [Google Scholar] [CrossRef]
  62. Del Bimbo, A.; Meoni, M.; Pala, P. Accurate evaluation of HER-2 amplification in FISH images. In Proceedings of the 2010 IEEE International Conference on Imaging Systems and Techniques, Thessaloniki, Greece, 1–2 July 2010; pp. 407–410. [Google Scholar]
  63. Slavković-Ilić, M.S.; Paskaš, M.P.; Reljin, B.D. Nuclei segmentation from contrast enhanced FISH images. In Proceedings of the 2016 13th Symposium on Neural Networks and Applications (NEUREL), Belgrade, Serbia, 22–24 November 2016; pp. 1–5. [Google Scholar]
  64. Çetin, Ş.B.; Khameneh, F.D.; Serteli, E.A.; Çayır, S.; Hatipoğlu, G.; Kamasak, M.; Ayaltı, S.; Razavi, S.; Budancamanak, Y.; Özsoy, G. Automated cell segmentation and spot detection in fluorescence in situ hybridization staining to assess HER2 status in breast cancer. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar]
  65. Zakrzewski, F.; de Back, W.; Weigert, M.; Wenke, T.; Zeugner, S.; Mantey, R.; Sperling, C.; Friedrich, K.; Roeder, I.; Aust, D.; et al. Automated detection of the HER2 gene amplification status in Fluorescence in situ hybridization images for the diagnostics of cancer tissues. Sci. Rep. 2019, 9, 8231. [Google Scholar] [CrossRef]
  66. Kromp, F.; Bozsaky, E.; Rifatbegovic, F.; Fischer, L.; Ambros, M.; Berneder, M.; Weiss, T.; Lazic, D.; Dörr, W.; Hanbury, A.; et al. An annotated fluorescence image dataset for training nuclear segmentation methods. Sci. Data 2020, 7, 262. [Google Scholar] [CrossRef]
  67. Goudas, T.; Maglogiannis, I. An advanced image analysis tool for the quantification and characterization of breast cancer in microscopy images. J. Med. Syst. 2015, 39, 31. [Google Scholar] [CrossRef]
  68. Frankenstein, Z.; Uraoka, N.; Aypar, U.; Aryeequaye, R.; Rao, M.; Hameed, M.; Zhang, Y.; Yagi, Y. Automated 3D scoring of fluorescence in situ hybridization (FISH) using a confocal whole slide imaging scanner. Appl. Microsc. 2021, 51, 4. [Google Scholar] [CrossRef]
  69. Fan, H.; Xie, F.; Li, Y.; Jiang, Z.; Liu, J. Automatic segmentation of dermoscopy images using saliency combined with Otsu threshold. Comput. Biol. Med. 2017, 85, 75–85. [Google Scholar] [CrossRef] [PubMed]
  70. Nandy, K.; Gudla, P.R.; Meaburn, K.J.; Misteli, T.; Lockett, S.J. Automatic nuclei segmentation and spatial FISH analysis for cancer detection. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 6718–6721. [Google Scholar]
  71. Dirami, A.; Hammouche, K.; Diaf, M.; Siarry, P. Fast multilevel thresholding for image segmentation through a multiphase level set method. Signal Process. 2013, 93, 139–153. [Google Scholar] [CrossRef]
  72. Gofer, S.; Haik, O.; Bardin, R.; Gilboa, Y.; Perlman, S. Machine Learning Algorithms for Classification of First-Trimester Fetal Brain Ultrasound Images. J. Ultrasound Med. 2021, 41, 1773–1779. [Google Scholar] [CrossRef] [PubMed]
  73. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv 2014, arXiv:1412.7062. [Google Scholar]
  74. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A review of semantic segmentation using deep neural networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef]
  75. Caldera, S.; Rassau, A.; Chai, D. Review of deep learning methods in robotic grasp detection. Multimodal Technol. Interact. 2018, 2, 57. [Google Scholar] [CrossRef]
  76. Habib, M.T.; Majumder, A.; Jakaria, A.; Akter, M.; Uddin, M.S.; Ahmed, F. Machine vision based papaya disease recognition. J. King Saud Univ.-Comput. Inf. Sci. 2020, 32, 300–309. [Google Scholar] [CrossRef]
  77. Nasirahmadi, A.; Ashtiani, S.H.M. Bag-of-Feature model for sweet and bitter almond classification. Biosyst. Eng. 2017, 156, 51–60. [Google Scholar] [CrossRef]
  78. Suresha, M.; Kumar, K.; Kumar, G.S. Texture features and decision trees based vegetables classification. Int. J. Comput. Appl. 2012, 975, 8878. [Google Scholar]
  79. Ashok, V.; Vinod, D. Automatic quality evaluation of fruits using Probabilistic Neural Network approach. In Proceedings of the 2014 International Conference on Contemporary Computing and Informatics (IC3I), Mysore, India, 27–29 November 2014; pp. 308–311. [Google Scholar]
  80. Liew, M.; Rowe, L.; Clement, P.W.; Miles, R.R.; Salama, M.E. Validation of break-apart and fusion MYC probes using a digital fluorescence in situ hybridization capture and imaging system. J. Pathol. Inform. 2016, 7, 20. [Google Scholar] [CrossRef]
  81. Cohen, I.; David, E.O.; Netanyahu, N.S.; Liscovitch, N.; Chechik, G. Deepbrain: Functional representation of neural in-situ hybridization images for gene ontology classification using deep convolutional autoencoders. In International Conference on Artificial Neural Networks; Springer: Cham, Switzerland, 2017; pp. 287–296. [Google Scholar]
  82. Saha, M.; Chakraborty, C. Her2Net: A deep framework for semantic segmentation and classification of cell membranes and nuclei in breast cancer evaluation. IEEE Trans. Image Process. 2018, 27, 2189–2200. [Google Scholar] [CrossRef] [PubMed]
  83. Cohen, I.; David, E.O.; Netanyahu, N.S. Supervised and Unsupervised End-to-End Deep Learning for Gene Ontology Classification of Neural In Situ Hybridization Images. Entropy 2019, 21, 221. [Google Scholar] [CrossRef] [PubMed]
  84. Pavlov, S.; Momcheva, G.; Burlakova, P.; Atanasov, S.; Stoyanov, D.; Ivanov, M.; Tonchev, A. Feasibility of Haralick’s Texture Features for the Classification of Chromogenic In-situ Hybridization Images. In Proceedings of the 2020 International Conference on Biomedical Innovations and Applications (BIA), Varna, Bulgaria, 24–27 September 2020; pp. 65–68. [Google Scholar]
  85. Abed-Esfahani, P.; Darwin, B.C.; Howard, D.; Wang, N.; Kim, E.; Lerch, J.; French, L. Evaluation of deep convolutional neural networks for in situ hybridization gene expression image representation. PLoS ONE 2021, 17, e0262717. [Google Scholar] [CrossRef] [PubMed]
  86. Janowczyk, A.; Madabhushi, A. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. J. Pathol. Inform. 2016, 7, 29. [Google Scholar] [CrossRef]
  87. Alom, M.Z.; Yakopcic, C.; Nasrin, M.S.; Taha, T.M.; Asari, V.K. Breast cancer classification from histopathological images with inception recurrent residual convolutional neural network. J. Digit. Imaging 2019, 32, 605–617. [Google Scholar] [CrossRef]
  88. Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
  89. Litjens, G.; Sánchez, C.I.; Timofeeva, N.; Hermsen, M.; Nagtegaal, I.; Kovacs, I.; Hulsbergen-Van De Kaa, C.; Bult, P.; Van Ginneken, B.; Van Der Laak, J. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 2016, 6, 26286. [Google Scholar] [CrossRef]
  90. Powell, R.D.; Pettay, J.D.; Powell, W.C.; Roche, P.C.; Grogan, T.M.; Hainfeld, J.F.; Tubbs, R.R. Metallographic in situ hybridization. Hum. Pathol. 2007, 38, 1145–1159. [Google Scholar] [CrossRef]
  91. Rehman, Z.U.; Wan Ahmad, W.S.H.M.; Ahmad Fauzi, M.F.; Abas, F.S.; Cheah, P.L.; Looi, L.M.; Toh, Y.F. Comprehensive analysis of color normalization methods for HER2-SISH histopathology images. J. Eng. Sci. Technol. 2024, 19, 146–159. [Google Scholar]
  92. Rehman, Z.U.; Fauzi, M.F.A.; Wan Ahmad, W.S.H.M.; Cheah, P.L.; Looi, L.M.; Toh, Y.F.; Abas, F.S. Detection and histo-scoring of HER2/CEN17 biomarkers in SISH images. In Proceedings of the 2022 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Penang, Malaysia, 22–25 November 2022; pp. 1–4. [Google Scholar] [CrossRef]
Figure 1. Different types of cytogenetic images resulting from ISH: (a) FISH at 20× magnification, (b) CISH at 20× magnification, and (c) SISH at 40× magnification.
Figure 2. Breakdown of computational methods commonly used for histopathology image analysis.
Figure 3. A machine vision-based approach used in digital pathology image analysis. The red squares in subfigure (A) indicate selected regions for machine vision analysis. The whole slide image (WSI) is at a magnification level of 40×.
Figure 4. Examples of how digital photographs have been altered using grayscale-based contrast enhancement and thresholding for different cytogenetic types of ISH: (a) scale variation in CISH at 20× magnification, (b) scale variation in FISH at 20× magnification, and (c) scale variation in SISH at 40× magnification.
Figure 5. Automated image processing-based system demonstration at 40× magnification: (a) original SISH images, (b) preprocessed for ground truth generation, (c) nuclei-labeled ground truth images, (d) marked labeled nuclei on the original image, and (e) marked labeled nuclei and HER2 signals. More precise signal detection refines nuclei segmentation.
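Figure 5 walks through an automated SISH workflow from preprocessing to marking HER2 signals inside labeled nuclei. As a rough, generic illustration of the per-nucleus signal counting step (not the pipeline used to produce the figure), the sketch below applies Laplacian-of-Gaussian blob detection from scikit-image to an inverted grayscale tile and tallies the blobs that fall inside an existing nuclei label mask; the file names, blob radii, and detection threshold are assumptions made for the example.

```python
# Minimal sketch (not the pipeline from Figure 5): count dark SISH-like dots
# inside already-labeled nuclei using Laplacian-of-Gaussian blob detection.
# File names, thresholds and blob radii below are illustrative assumptions.
import numpy as np
from skimage import io, color, feature

rgb = io.imread("sish_tile.png")            # hypothetical 40x SISH tile
labels = io.imread("nuclei_labels.png")     # hypothetical integer label mask (0 = background)

gray = color.rgb2gray(rgb)
inverted = 1.0 - gray                       # dark silver dots become bright blobs

# LoG blob detection; the sigma range roughly matches dot size in pixels (assumed).
blobs = feature.blob_log(inverted, min_sigma=1, max_sigma=4, threshold=0.08)

counts = {}
for row, col, sigma in blobs:
    lab = labels[int(row), int(col)]
    if lab != 0:                            # keep only dots falling inside a nucleus
        counts[lab] = counts.get(lab, 0) + 1

for lab, n in sorted(counts.items()):
    print(f"nucleus {lab}: {n} candidate HER2 signals")
```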
Table 1. ISH applications for HER2 amplification based on different chromogenic systems.

| Technique | Target | Explanation | Ref. |
|---|---|---|---|
| FISH | HER2 gene/CEP17 | Fluorescence in situ hybridization (FISH) uses fluorescent probes to detect HER2 gene amplification and the chromosome 17 centromere (CEP17) in tumor cells. | [3] |
| CISH | HER2 gene/CEP17 | Chromogenic in situ hybridization (CISH) uses chromogenic probes that produce a colorimetric reaction, making it easier to view HER2 gene amplification and CEP17 under a regular microscope. | [4] |
| SISH | HER2 gene | Silver-enhanced in situ hybridization (SISH) is similar to CISH but uses silver deposition to visualize HER2 gene amplification, allowing the use of standard bright-field microscopy. | [5] |
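Table 1 distinguishes the probe systems by how HER2 and CEP17 signals are visualized, but in all cases scoring ultimately reduces to per-nucleus signal counts and a HER2/CEP17 ratio. The sketch below illustrates that arithmetic on made-up counts, using the commonly quoted ratio cutoff of 2.0; it is a toy scoring example, not a clinical implementation of any guideline.

```python
# Illustrative HER2/CEP17 ratio calculation from per-nucleus signal counts.
# The counts are fabricated example data and the 2.0 cutoff is only the
# commonly quoted guideline threshold; real reporting involves more criteria.
from statistics import mean

# (her2_signals, cep17_signals) per scored nucleus -- example values only
nuclei = [(6, 2), (8, 2), (5, 1), (7, 2), (9, 3)]

mean_her2 = mean(h for h, _ in nuclei)
mean_cep17 = mean(c for _, c in nuclei)
ratio = mean_her2 / mean_cep17

status = "amplified" if ratio >= 2.0 else "not amplified"
print(f"mean HER2 = {mean_her2:.2f}, mean CEP17 = {mean_cep17:.2f}, "
      f"ratio = {ratio:.2f} -> {status}")
```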
Table 2. Current computer-based image analysis limitations and potential solutions.

| Issue | Problem | Proposed Solution |
|---|---|---|
| Tissue analysis and its standardization | Variability in tissue processing and harvesting | Revision of histology techniques across centers to improve downstream quantitative analysis, with subspecialty societies involved |
| Image analytics | Variability in scanners and image quality problems | Standard quality assurance and calibration methods to check image linearity, uniformity, and reproducibility |
| Data integration | Extraction, representation, and fusion of data spanning multiple length scales | Standardized data gathering and storage, development of ontologies, and new data fusion methods |
Table 3. Descriptions of common image preprocessing techniques.

| Technique | Description | Applications | Constraints |
|---|---|---|---|
| Elementary processing [33,35] | Signal processing filters are used to process a group of adjacent pixels | Smoothing and gradient analysis for better edge detection | Limited for complex and non-linear signal processing |
| Intensity estimation [34,36] | The estimation of missing pixel values using spatial and non-spatial analysis | Noisy pixel value determination in grayscale and RGB images | Non-uniform object lighting may require prior knowledge |
| Geometric estimation [37] | Geometric distortion estimation using relative motion, angle, speed, and 2D-to-3D representation | Geometric detail determination in mobile robotics and remote sensing applications | The sensor and object angle, location, and relative speed must be known |
| Holistic processing [38] | A set of filters is used for convolution-based image restoration | Identifying holistic image features | Requires complex stochastic analysis and prior knowledge |
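Several operations in Table 3, together with the grayscale contrast enhancement and thresholding illustrated in Figure 4, can be prototyped with standard scikit-image routines. The snippet below is a minimal sketch assuming an RGB ISH tile as input; the Gaussian sigma, CLAHE clip limit, and the choice of Otsu thresholding are illustrative defaults rather than settings reported in the reviewed studies.

```python
# Minimal preprocessing sketch: smoothing, grayscale contrast enhancement and
# global thresholding, in the spirit of Table 3 and Figure 4. Parameter values
# are illustrative assumptions, not settings taken from the reviewed papers.
from skimage import io, color, filters, exposure

rgb = io.imread("ish_tile.png")                 # hypothetical ISH image tile
gray = color.rgb2gray(rgb)

smoothed = filters.gaussian(gray, sigma=1.0)    # suppress pixel-level noise
enhanced = exposure.equalize_adapthist(smoothed, clip_limit=0.02)  # CLAHE

threshold = filters.threshold_otsu(enhanced)    # global Otsu threshold
mask = enhanced < threshold                     # dark, stained structures -> foreground

print(f"Otsu threshold: {threshold:.3f}, foreground fraction: {mask.mean():.2%}")
```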
Table 4. Properties of feature descriptors.

| Year | Ref. | Image & Stain Type | SF | CF | TF | Feature Description | Accuracy |
|---|---|---|---|---|---|---|---|
| 2009 | [39] | FISH | | | | Size, circularity, and compactness were computed | 96.90% |
| 2009 | [40] | ISH | | | | Anti-digoxigenin (DIG) and fluorescein-labeled riboprobes | — |
| 2009 | [41] | ISH | | | | In Drosophila gene patterns, texture features are effective | 81.90% |
| 2010 | [42] | FISH | | | | Discriminative features, i.e., nucleus shape and texture, are used for the final detection of leukemia | 95.00% |
| 2011 | [43] | FISH | | | | The contour signature and Hausdorff dimension are used for classifying lymphocytic cells | 93.00% |
| 2012 | [44] | FISH | | | | Spindle-shaped features are extracted for the classification of FISH cells | — |
| 2012 | [45] | M-FISH | | | | Multicolor sparse imaging representation approach based on L1-norm minimization | 90.00% |
| 2012 | [10] | ISH | | | | Local binary patterns or histograms are used to train the gene classifiers based on four cerebellum layers | 94.00% |
| 2014 | [46] | Stained blood images | | | | A quantitative microscopic method is used for determination of lymphoblasts | 90.00% |
| 2014 | [47] | Hyperspectral images | | | | GLCM texture features are used for hyperspectral images (HSIs) | — |
| 2015 | [48] | ISH | | | | Nuclei are segmented using k-means; then, statistical and geometric features are used for cell classification using an SVM | 98.00% |
| 2015 | [49] | Hyperspectral images | | | | Eight texture statistical features based on the gray-level co-occurrence matrix (GLCM) | 71.80% |
| 2016 | [50] | Tissue images | | | | Patch samples are selected based on stains on density maps with stain color | — |
| 2016 | [41] | ISH | | | | Image pixel-based DCNN is used for feature extraction | 81.00% |
| 2018 | [51] | DICOM files | | | | The shape, gray-level co-occurrence matrix, gray-level run-length matrix, and neighborhood intensity difference were used to extract 386 texture features | 80.39% |
| 2019 | [52] | FISH | | | | In total, 279 textural features and a machine learning classifier-based method were used | 86.00% |
| 2020 | [53] | Blood smear images | | | | Different shades of color and brightness levels are computed from blood smears, and then classifiers are applied | 98.80% |
| 2020 | [54] | FISH | | | | In total, 488 texture features were extracted from precontrast, postcontrast, and subtraction images | 83.00% |
| 2021 | [55] | Microscopy | | | | Homogeneous regions were segmented using clustering techniques in the RGB color space | 90.00% |

Note: SF stands for shape features, CF for color features, and TF for texture features. Check and cross symbols indicate whether a feature type was used by the corresponding method and reference.
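Gray-level co-occurrence matrix (GLCM) statistics recur throughout Table 4 [47,49,51,54], usually feeding a conventional classifier such as an SVM [48]. The sketch below shows one way to compute such features with scikit-image and train a classifier with scikit-learn; the patch file names, class labels, GLCM offsets, and kernel choice are assumptions for illustration and do not reproduce any specific study in the table.

```python
# Sketch of GLCM texture features followed by an SVM classifier, in the spirit
# of the texture-feature entries in Table 4. Offsets, angles and the toy
# labels are illustrative assumptions, not the settings of any cited study.
import numpy as np
from skimage import io, color, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_u8):
    """Contrast, homogeneity, energy and correlation averaged over 4 angles."""
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, prop).mean()
                      for prop in ("contrast", "homogeneity", "energy", "correlation")])

# Hypothetical image patches and class labels (e.g., amplified vs. not amplified)
paths = ["patch_0.png", "patch_1.png", "patch_2.png", "patch_3.png"]
labels = [0, 0, 1, 1]

X = np.array([glcm_features(img_as_ubyte(color.rgb2gray(io.imread(p)))) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))   # sanity check on the training patches themselves
```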
Table 5. Explanations of segmentation methods used in digital pathology for nuclei and cell segmentation.

| Year | Pathology Image Type | Application | Segmentation Technique | Ref. |
|---|---|---|---|---|
| Nuclei segmentation | | | | |
| 2009 | Cervical tissue | Region-based segmentation | Clustering method is used in RGB color space for nuclei segmentation | [55] |
| 2010 | FISH | Nuclei segmentation | Morphological image enhancement and watershed technique | [62] |
| 2012 | SISH | HER2 gene status | Number of cells, genes, number of genes per cell (average), superimposed contour cell image, gene image, and processing time | [55] |
| 2016 | FISH | HER2 gene status | A method for nuclei segmentation from the blue channel of the contrast-enhanced image | [63] |
| 2018 | FISH | Segmentation and detection of signals | Enhanced nucleus segmentation and signal detection from tile-based processing using adaptive thresholding | [64] |
| 2019 | FISH | Segmentation and classification | Two RetinaNet networks for the detection and classification of nuclei into distinct classes and classifying FISH signals as HER2 or CEN17 | [65] |
| 2020 | IHC | Machine learning-based segmentation | Annotated dataset for training machine learning techniques, which includes densely packed nuclei from several tissues | [66] |
| Cancer cell detection | | | | |
| 2015 | Microscopy images | Fast characterization of apoptotic cells | Adaptive thresholding, a support vector machine, a majority vote, and the watershed technique are used | [67] |
| Tumor area detection | | | | |
| 2021 | FISH | Three-dimensional scoring of fluorescence | Three-dimensional FISH scoring is established for automated z-stack images from a confocal WSI scanner | [68] |
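Several nuclei segmentation entries in Table 5 combine a global or adaptive threshold with watershed-based splitting of touching nuclei [62,64,67]. The sketch below is a generic version of that recipe built from scikit-image and SciPy primitives; the Otsu threshold, the minimum peak distance, and the assumption that nuclei are darker than the background are illustrative choices, not parameters taken from the cited works.

```python
# Generic threshold + distance transform + watershed nuclei segmentation,
# similar in spirit to several entries in Table 5. Parameter values are
# illustrative assumptions, not those of the cited studies.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, measure, segmentation, feature

rgb = io.imread("ish_tile.png")                 # hypothetical ISH image tile
gray = color.rgb2gray(rgb)

mask = gray < filters.threshold_otsu(gray)      # nuclei assumed darker than background
mask = ndi.binary_fill_holes(mask)

distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=10, labels=mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

labels = segmentation.watershed(-distance, markers, mask=mask)
areas = [r.area for r in measure.regionprops(labels)]
print(f"{labels.max()} nuclei candidates (mean area: {np.mean(areas):.1f} px)")
```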