Review

Artificial Intelligence for Cell Segmentation, Event Detection, and Tracking for Label-Free Microscopy Imaging

1 Institute for High-Performance Computing and Networking, National Research Council, 80131 Naples, Italy
2 Department of Economics and Law, University of Cassino and Southern Lazio, 03043 Cassino, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Algorithms 2022, 15(9), 313; https://doi.org/10.3390/a15090313
Submission received: 4 August 2022 / Revised: 26 August 2022 / Accepted: 29 August 2022 / Published: 31 August 2022
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)

Abstract

Background: Time-lapse microscopy imaging is a key approach for an increasing number of biological and biomedical studies to observe the dynamic behavior of cells over time, which helps quantify important data such as the number of cells and their sizes, shapes, and dynamic interactions across time. Label-free imaging is an essential strategy for such studies as it ensures that native cell behavior remains uninfluenced by the recording process. Computer vision and machine/deep learning approaches have made significant progress in this area. Methods: In this review, we present an overview of methods, software, data, and evaluation metrics for the automatic analysis of label-free microscopy imaging. We aim to provide the interested reader with a unique source of information, with links for further detailed information. Results: We review the most recent methods for cell segmentation, event detection, and tracking. Moreover, we provide lists of publicly available software and datasets. Finally, we summarize the metrics most frequently adopted for evaluating the reviewed methods. Conclusions: We provide hints on open challenges and future research directions.

1. Introduction

Microscopy is a fundamental research pillar enabling scientists to discover the structures and dynamics of cells and subcellular components. Most of these components are phase objects, which means they are transparent and colorless and cannot be visualized under a light microscope. To overcome this limitation, one solution is to stain the components with dyes, also known as fluorophores or fluorochromes. Such molecules absorb short-wavelength light, generally UV, and emit fluorescence at a longer wavelength. This mechanism is the basic premise of fluorescence microscopy techniques, and fluorescence images show the specimen bright on a dark background. Staining techniques use different dyes to highlight specific cellular components. Depending on the dye, the same specimen can thus appear red, blue, or green, as depicted in Figure 1a,b.
As molecular genetic methodologies and tools advance, fluorescence techniques become more specific and can be applied to a larger set of model organisms [1,2]. However, to ensure high-quality images, they require laborious and expensive sample preparation. Furthermore, the dyes lose luminance over time due to a light-induced degradation process called photobleaching [3]. Finally, fluorescence techniques are invasive and interfere with biological processes, causing phototoxicity.
On the contrary, Label-free Imaging (LI) techniques can visualize many cellular structures simultaneously with minimal sample preparation, minimal phototoxicity, and no photobleaching, making them particularly suitable for live-cell imaging. Thus, LI provides measurements complementary to fluorescence imaging for several biological studies. Based on optical principles, these techniques measure the phase change of light passing through the specimen (related to its refractive index) and convert it into intensity modulations, producing qualitative phase contrast images, as exemplified in Figure 1c (the phase change information contained in LI images is non-linearly coupled with the luminance intensity and cannot be retrieved quantitatively. An image produced by LI techniques is a map of path-length shifts associated with the specimen, containing information about both the thickness and the refractive index of its structure; for further details and explanations, see [4]). Among the traditional phase-change imaging techniques are Phase Contrast (PhC) [5] and Differential Interference Contrast (DIC) [6], which measure the refractive index based on the phase gradient and on differential interference, respectively. Another LI technique very similar to DIC is Hoffman Modulation Contrast (HMC) [7]. Due to intrinsic limitations of the numerical conversion methods, phase contrast images contain artifacts [8], such as a bright halo surrounding cell contours and the "shade-off effect", which produces low contrast inside the cells, with an intensity very similar to the background. Although several methods have been developed to overcome these artifacts, automatic processing of label-free images is still challenging, especially the segmentation task for separating cells from the background.
Figure 1. Example of differences in appearance of fluorescence and phase contrast microscopy images (culture of human lymphocyte cells [9]): (a) fluorescence image of nuclear envelope; (b) fluorescence image of interior nuclei (DNA); (c) phase contrast image of whole cells.
Unlike the previous techniques, quantitative LI provides higher contrast and reduces artifacts. Among these, Quantitative Phase Imaging (QPI) refers to microscopy techniques that provide quantitative phase information. Details concerning QPI techniques can be found in [10].
A further alternative among noninvasive techniques is Bright-Field (BF) microscopy. It represents the most straightforward configuration of the light microscope, which is not only cheaper but also requires no sample preparation [11]. BF images provide information about the cellular organization, and they are preferred for visualizing specimens with low contrast against the background (unlike fluorescence) or for low-resolution, low-magnification visualization of thin cellular components (unlike phase contrast). Several studies describe the use of the BF channel in cell detection and automated image analysis of cell populations [12,13].
With the aim of observing the dynamic behavior of living cells over time, LI microscopes are also equipped with a real-time imaging tool named time-lapse. Broadly speaking, time-lapse is a speed-up technique to observe events changing over time. This is usually realized by taking images at regular time intervals and merging them into a video. One of the most significant applications of time-lapse microscopy is cell population monitoring to study single-cell behavior in response to physiological or external stimuli and understand the underlying mechanisms. For example, in drug discovery and cancer research [14], time-lapse microscopy is used to look at cell response to anti-mitotic drugs in terms of cell division and cell death. To achieve this goal, quantitative information on cell behavior needs to be obtained and analyzed [15]. Cell proliferation, lineage, and fate are of primary importance among the various cellular events. Image analysis of these biological processes is usually performed manually with suitable protocols [16]. As manual analysis of a large volume of light microscopy images is slow, tedious, and prone to observer subjectivity, biological studies see an increasing demand for reliable automatic imaging tools. As in many scientific disciplines, Artificial Intelligence (AI) has been changing how imaging data are processed and analyzed and how experiments are carried out. AI refers to artificial systems aiming to adapt previous knowledge to new situations and recognize meaning in data patterns. Machine Learning (ML) [17] is a subset of AI methods that extracts valuable features from large data sets to make predictions or decisions on unseen data. An ML algorithm is not designed to solve a specific problem but rather to train a computer to solve problems; the training is data-driven. Deep Learning (DL) is a set of ML algorithms using multi-layer "neural networks" to progressively extract higher-level features from data (the number of layers is the depth of the model, hence the terminology "deep learning"; for a quick overview of DL key concepts for microscopy image data, the reader can refer to [18]).
This review mainly focuses on AI methods for the most used traditional LI microscopy techniques, i.e., PhC, DIC, and BF, to investigate fundamental biological events. A comprehensive description of the state-of-the-art methods using QPI data and AI approaches, which are outside the scope of this review, can be found in [19] and references therein. A few recent surveys in the literature cover methods for these types of microscopy images and videos. The study presented by Vicar et al. [20] performs a comprehensive comparison of cell image segmentation methods for the most common label-free microscopy techniques, including PhC, DIC, HMC, and QPI. The review covers traditional methods, providing only hints on DL-based segmentation methods. The authors identify an effective image segmentation pipeline composed of four main steps: image reconstruction, foreground-background segmentation, seed-point extraction, and cell segmentation. They discuss and assess the most effective combination of the above steps for the specific microscopy techniques based on software and tools available in the state-of-the-art literature. Furthermore, they compare the accuracy and efficiency of tools containing all the above four steps (named "all-in-one" tools) as well as software implementing only one of them. Software and data were made publicly available (see Section 3 and Section 4). In [21], Emami et al. present a review of methods and tools for cell tracking. Following the traditional object tracking literature, the methods are subdivided into three groups, according to whether tracking is achieved by detection, model evaluation, or filtering, with limited space given to DL approaches. Well-known commercial and open-source cell tracking tools are summarized, and typical challenges are highlighted. Ulman et al. [22] present a comparison of 21 cell-tracking algorithms participating in three editions of the Cell Tracking Challenge (CTC), an initiative promoting the development and objective evaluation of cell segmentation and tracking algorithms. The compared methods are summarized based on common principles, features, and methodologies, as well as pre- and post-processing strategies. They are evaluated for both the segmentation and tracking tasks (see Section 5), and their overall average performance is used to compile the final ranking. Started in 2013, this challenge is still ongoing, and since 2019, it has been organized into two different challenges, the Cell Tracking Benchmark (CTB) and the Cell Segmentation Benchmark (CSB), sharing the same dataset (see Section 4).
Although the described surveys provide extremely useful insights on specific tasks or specific datasets, the landscape of scientific research on the subject still appears fragmented. The aim of our review is to provide a broad and up-to-date view of AI methods for the analysis of label-free images and videos acquired by traditional LI microscopy techniques, together with all the ingredients needed to address this analysis. Thus, it covers the most recent methods, especially for cellular segmentation, event detection, and tracking over time-lapse videos, as well as available datasets, software, and evaluation metrics.
The review is organized as follows. In Section 2, we introduce the considered microscopy analysis tasks and provide brief descriptions of the reviewed literature methods for each of them. Section 3 and Section 4 provide brief descriptions and links to the publicly available software and data. Section 5 introduces the most frequently used metrics to evaluate the AI algorithms for the considered tasks. Section 6 summarizes the open problems, providing hints on possible future research directions.

2. Literature

2.1. Cell Segmentation

Image segmentation is the main task for producing numerical data from live-cell imaging experiments, thus providing direct insight into the living system from quantitative cell information [23,24]. Cell segmentation is the process of splitting a microscopy image into "segments", i.e., Regions Of Interest (ROIs), and produces an image where cells are separated from the image background by cell contours or cell labels. Accurate cell segmentation is crucial for many challenges involved in cellular analysis, including but not limited to cell tracking [25], cellular feature quantification, proliferation, morphology, migration, interactions, and counting [26,27,28,29].
Several steps are considered in the literature for achieving cell segmentation. Some authors [8,20,30] consider it crucial to initially perform an image reconstruction step, which produces images with higher contrast between foreground and background, increasing the success of the subsequent image processing tasks. The image formation model in phase contrast microscopy can be studied to reduce artifacts and solve the inverse problem through a regularization approach [8]. Some ML-based methods are reviewed by Vicar et al. [20] for PhC and DIC images, while De Haan et al. [30] present an overview of how DL-based frameworks solve these inverse problems in optical microscopy.
Cell detection (or identification) is also frequently adopted prior to segmentation [20,22], with the aim of locating the cells in the image (e.g., via bounding boxes, as exemplified in Figure 2a).
Figure 2. Results of different image processing tasks: (a) cell detection; (b) cell semantic segmentation; (c) cell instance segmentation.
Some specific types of segmentation are frequently considered for cellular images [20,31,32,33]. Semantic segmentation identifies the object (i.e., the cell) category of each pixel for every known object within an image, as exemplified in Figure 2b. Instance segmentation, instead, identifies the object instance (i.e., the cell with specific features) of each pixel for every known object within an image [34]; an example is given in Figure 2c. It should be observed that the problem of cell instance segmentation is sometimes intended as joint cell detection and segmentation (e.g., see [32]), while at other times (e.g., see [20]) instance segmentation is used as a synonym for single-cell segmentation.
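To make the distinction concrete, the following minimal sketch shows how the two outputs are typically encoded as arrays (toy values, not taken from any of the cited datasets): a semantic mask only separates cell pixels from background, while an instance mask assigns a distinct integer label to each cell.

```python
import numpy as np

# Semantic mask: 1 = cell, 0 = background (both cells share the same class label).
semantic = np.array([[0, 1, 1, 0, 1],
                     [0, 1, 1, 0, 1]])

# Instance mask: each cell receives its own integer identifier.
instance = np.array([[0, 1, 1, 0, 2],
                     [0, 1, 1, 0, 2]])

print(np.unique(semantic))  # [0 1]
print(np.unique(instance))  # [0 1 2]
```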
Some of the most recent segmentation algorithms are described in the following. In [23], Van Valen et al. adopt deep convolutional neural networks for cell segmentation in their software DeepCell, covering various types of microscopy images (PhC, Fluo, and PhC coupled with images of a fluorescent nuclear marker) and also providing hints on design rules for training CNNs for this task (image normalization, data augmentation, hyper-parameter tuning, and segmentation refinement). They also extend the deep convolutional neural networks to perform semantic segmentation (i.e., not only image segmentation but also cell type prediction). The software and the adopted data are publicly available (see Section 3 and Section 4).
Hilsenbeck et al. [25] present fastER, a fast and trainable tool for cell segmentation that extracts texture and shape features from candidate regions, estimates their likelihood of being a cell with a support vector machine (SVM) algorithm, and calculates an optimal set of non-overlapping candidate regions using a divide-and-conquer approach. Candidate regions are chosen as the so-called extremal regions (regions of maximal size containing only pixels whose intensities are no greater than a specific threshold), similarly to CellDetect [35]. For each candidate region, a feature vector is extracted to train the SVM model, including typical shape and intensity information (size, major/minor axis lengths, eccentricity, average intensity inside the region and in its neighborhood, average and standard deviation of the gradient, and average heterogeneity) [36]. Pre-processing of the images consists of denoising with bilateral filtering, while post-processing of the resulting masks includes hole-filling and size filtering. The software, made publicly available (see Section 3), is shown to be robust against common cell segmentation challenges but still suffers from high cell densities and blurring. Compared with other state-of-the-art methods (e.g., U-Net [37], ilastik [38], CellProfiler [39], and CellDetect), it is shown to be more efficient on various types of data made publicly available (see Section 4) while achieving similarly accurate results.
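The following minimal sketch illustrates the general idea of scoring candidate regions with an SVM trained on shape and intensity features; it is a generic illustration under the assumption that candidate regions are available as a labeled image, not fastER's actual implementation.

```python
import numpy as np
from skimage import measure
from sklearn.svm import SVC

def region_features(image, candidate_labels):
    """Simple shape/intensity features for each labeled candidate region."""
    feats = []
    for prop in measure.regionprops(candidate_labels, intensity_image=image):
        feats.append([
            prop.area,                 # region size
            prop.major_axis_length,
            prop.minor_axis_length,
            prop.eccentricity,
            prop.mean_intensity,       # average intensity inside the region
        ])
    return np.asarray(feats)

# Hypothetical training data: feature vectors of candidate regions labeled
# as cell (1) or non-cell (0) by a human annotator.
clf = SVC(probability=True)  # probability estimates act as a cell likelihood
# clf.fit(region_features(train_img, train_candidates), train_labels)
# scores = clf.predict_proba(region_features(test_img, test_candidates))[:, 1]
```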
Yi et al. [32] propose ANCIS, an Attentive Neural Cell Instance Segmentation method, to predict each cell's bounding box and its segmentation mask simultaneously. The method builds on a joint network that combines the single-shot multi-box detector (SSD) one-stage object detector [40] and U-Net [37] for cell segmentation. Attention mechanisms are adopted in the detection and segmentation modules to focus the model on useful features while suppressing irrelevant information. The software, tested on DIC images of neural cells, is publicly available (see Section 3).
In [24], Lux and Matula use a marker-based watershed [41] approach with two convolutional neural networks (CNNs) of hour-glass architecture to segment clustered cells in five datasets, three of which originate from the Cell Tracking Challenge [22] (DIC-C2DH-HeLa, Fluo-N2DH-SIM+, and PhC-C2DL-PSC). They use normalization by histogram equalization and median scaling as a pre-processing step. Afterwards, they augment the data by randomized rigid geometric transformations and scaling. One CNN predicts cell marker pixels and the other predicts the image foreground. These outputs are used to compute the marker function, which provides the segmentation seeds, and the segmentation function, which defines the cell regions further used in the marker-controlled watershed segmentation.
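As an illustration of the final post-processing step, the sketch below applies a marker-controlled watershed to two hypothetical probability maps (cell markers and foreground) such as those produced by the two CNNs; the thresholds are illustrative assumptions, not the authors' values.

```python
import numpy as np
from skimage.measure import label
from skimage.segmentation import watershed

def marker_controlled_watershed(marker_prob, foreground_prob,
                                marker_thr=0.7, fg_thr=0.5):
    """Turn marker/foreground probability maps into an instance segmentation."""
    markers = label(marker_prob > marker_thr)      # segmentation seeds
    foreground = foreground_prob > fg_thr          # region to be partitioned
    # Flood from the seeds over the inverted foreground probability,
    # restricted to the predicted foreground mask.
    return watershed(-foreground_prob, markers=markers, mask=foreground)

# Example with random maps standing in for CNN outputs.
labels = marker_controlled_watershed(np.random.rand(128, 128),
                                     np.random.rand(128, 128))
```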
Scherr et al. [42] present a method for segmenting touching cells in BF images from the Cell Tracking Challenge [22] by using a novel representation of cell borders, inspired by distance maps. The proposed method uses an adapted U-Net with two decoder paths, one for prediction of cell distance and another for prediction of neighbor distance. These distances are then used for the watershed-based post-processing to obtain segmentations. Results are evaluated using the SEG, DET, and CSB metrics (see Section 5) and show accurate performances, with average SEG and DET scores of 0.726 and 0.975, respectively.
Nishimura et al. [43] propose a weakly supervised cell instance segmentation method that recognizes each cell region by only using weak labels, i.e., point-level (cell centroid positions) rather than pixel-level annotations, as training data. This approach strongly reduces the annotation cost compared with the standard annotation method required for supervised segmentation. They train a cell detection CNN (U-net) and then use it to estimate rough cell positions. The rough cell shapes are extracted from the detection network by backpropagating the activation from output to input, obtaining a relevance map that shows how each pixel in the input image is relevant to the output. The final cell shapes are estimated by graph-cut [44] using the estimated relevance map as a seed. Results on different datasets show that the method works well with different types of microscopy and different contrasts.
Stringer and Pachitariu introduce Cellpose [45], a software library for the instance segmentation of cell images. It implements a CNN in the U-Net architecture style. Cellpose predicts the probability of a pixel being inside a cell and the flows of pixels, in x and y coordinates, toward the cell center. The flows are then used to construct the cell ROIs. Several results confirm its reliability on a wide range of label-free images without model retraining or parameter adjustment. The authors also propose a 3D extension of the library that does not use 3D-labeled data but works on the 2D model. The most recent extension of Cellpose [46] can adapt CNN segmentation models to new microscopy images with very little training data. Code and data are publicly available (see Section 3 and Section 4).
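A minimal usage sketch of the Cellpose library is shown below; it follows the interface of the 1.x/2.x releases (the exact API may differ between versions), and the random array stands in for an actual label-free frame.

```python
import numpy as np
from cellpose import models

img = np.random.rand(256, 256)               # placeholder for a label-free image
model = models.Cellpose(model_type="cyto")   # generic cytoplasm model

# masks: labeled instance segmentation; flows: predicted pixel flows toward
# cell centers; diams: estimated cell diameter.
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
```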

2.2. Event Detection and Classification

Even though segmentation remains at the core of the subsequent imaging tasks, automated analysis of microscopy image sequences often bypasses it: the detection of the cell events under investigation is frequently performed directly from heuristically generated ROIs. Detecting changes in cellular behavior plays a central role in different studies, where the focus is on identifying changes in cellular growth, mitosis, and death. Such changes may be related to cell shape, division, and movement. They cannot be detected in a single image but require the analysis of video or time-lapse sequences. The difficulty primarily lies in the wide spatial-temporal variability of such phenomena, which requires suitable methods to handle time-varying processes. Moreover, the wide spectrum of possible events faces an inherent shortage of labeled data.
Automatic and robust approaches to detecting the time and location of cell events from image sequences often make use of the classification task. In microscopy, classification refers to identifying and distinguishing different cell types or states. Typical applications include classification between different cell types, tumor types (benign or malignant), cell states (e.g., mitosis detection, alive-dead classification), and types of Cell-In-Cell (CIC) structures [47].
Some recent AI approaches for cell event detection are presented here. Su et al. [48] and Mao and Yin [49] propose a convolutional long short-term memory (CNN-LSTM) network and a two-stream bidirectional CNN-LSTM network, respectively, operating on sequences of single-cell image patches and exploiting both spatial and temporal information to detect mitosis events. They report an average precision of 0.96 and 0.98, respectively. However, these models need a large amount of manually annotated data to train on, and both papers also report a sharp decrease in accuracy when testing the model on other cell datasets.
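To fix ideas, the sketch below shows a minimal PyTorch CNN-LSTM that classifies a sequence of single-cell patches as containing a mitosis event or not; architectural details (channel sizes, hidden dimension, patch size) are illustrative assumptions and do not reproduce the cited networks.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)                  # mitosis vs. no mitosis

    def forward(self, x):                               # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                       # aggregate temporal context
        return self.fc(out[:, -1])                      # classify the whole sequence

logits = CNNLSTM()(torch.randn(2, 10, 1, 64, 64))       # 2 sequences of 10 patches
```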
A CNN-LSTM model that learns spatial and temporal locations of the cells from a detection map in a semi-supervised manner is proposed by Phan et al. [50] for the detection of mitosis in PhC videos. The method needs only 1050 annotated frames to achieve an F1 score of 0.544-0.822, depending on the video. However, it also shows a decrease in performance as the input sequence length increases, which is not ideal for practical situations where the video sequences of time-lapse experiments may contain thousands of frames. Moreover, the method can only detect a single event at a time, such as a mitosis, whereas these events can occur at multiple places in a single frame.
Nishimura and Bise [51] propose a method for multiple mitosis event detection and localization by estimating a spatial-temporal likelihood map using the 3D CNN architecture V-Net [52]. In the likelihood map, a mitosis position is represented as an intensity peak with a Gaussian distribution, so that multiple mitoses are represented as multiple peaks. The method has an average precision of 0.862 on a private dataset. While the method takes spatial and temporal information into account, it is limited to detecting mitosis events and cannot detect other events that may be associated with mitosis. In order to identify other events as well, multiple models based on this method would be needed. Furthermore, the use of this method for other datasets or cell lines requires the generation of laborious manual annotations in the form of Gaussian-distributed likelihood maps.
Su et al. [53] present a deep reinforcement learning-based progressive sequence saliency discovery network (PSSD) for mitosis detection in time-lapse PhC images. The discovery of these salient frames is formulated as a Markov Decision Process that progressively adjusts the selection positions of salient frames in the sequence. Then, the pipeline leverages deep reinforcement learning to learn the policy in the salient frame discovery process. The method consists of two parts: (1) the saliency discovery module, which selects the salient frames from the input cell image sequence by progressively adjusting the selection positions of salient frames; (2) the mitosis identification module, which takes a sequence of salient frames and performs temporal information fusion for mitotic sequence classification. The method is evaluated on the C2C12-16 mitosis detection dataset [54] (see Section 4), and is found to outperform the previous state-of-the-art methods, including CNN-LSTM and 3D-CNN among the others.
Theagarajan and Bhanu [55] present DeephESC 2.0, an ML method to detect and classify human embryonic stem cells (hESC) in PhC images. Firstly, they use a mixture of Gaussians to detect the cells [56], where two Gaussian distributions model the intensity distributions of the foreground (cells) and the background (substrate). Then, Generative Multi Adversarial Networks (GMANs) [57] augment data with new synthetic images and improve the performance of the classification step. To classify the images into six different classes, they implement a hierarchical classifier consisting of a CNN and two Triplet CNNs. The software and dataset are publicly available (see Section 3 and Section 4).
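As a generic illustration of the first detection step described above (and not DeephESC 2.0 code), the sketch below separates foreground and background pixels with a two-component Gaussian mixture fitted to the intensity values; assigning the brighter component to the foreground is an assumption that depends on the imaging modality.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_foreground(image):
    """Split pixels into two intensity populations and return a foreground mask."""
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    components = gmm.predict(pixels).reshape(image.shape)
    # Assumption: the component with the larger mean intensity is the foreground.
    foreground_component = int(np.argmax(gmm.means_.ravel()))
    return components == foreground_component

mask = gmm_foreground(np.random.rand(64, 64))   # placeholder image
```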
In the celldeath software, La Greca et al. [58] use classical DL architectures such as ResNet [59] to classify cells as dead or alive, using complete frames as input images. On images containing both alive and dead cells, the model can predict the dead ones, which are localized by heat map-like visualizations merging the information provided by the last convolutional layer and the model predictions. These predictions are compared with human performance and are found to largely outperform it. The software is publicly available (see Section 3 and Section 4).

2.3. Cell Tracking

Object tracking consists in locating and monitoring one or more objects of interest and their behavior over time [21]. The image sequence containing cells can be acquired at specific time intervals using the time-lapse technique. When discussing cell tracking, it is generally assumed that segmentation or detection and classification have been performed.
Some recent tracking methods are reviewed here. Magnusson et al. [60] propose a global track-linking algorithm, which links cell outlines generated by a segmentation algorithm into tracks. It is a batch algorithm that uses the entire image sequence to decide the links. Starting with the hypothesis that there are no cells in the image sequence, it adds one cell track at a time in a greedy way, choosing the one maximizing a suitable scoring function by means of the Viterbi algorithm. The algorithm can handle cell mitosis, apoptosis, and migration in and out of the imaged area, and can also deal with false positives, missed detections, and clusters of jointly segmented cells. It has been tested on BF sequences, but in principle, it can be applied to any type of sequence, given a suitable segmentation algorithm to outline the cells. The algorithm has been implemented in several cell trackers; see, for example, the Baxter Algorithms package (see Section 3).
Grah et al. [61] propose MitosisAnalyser, a framework for detecting, classifying, and tracking mitotic cells in live-cell phase contrast imaging based on mathematical imaging methods. As pre-processing, denoising by Gaussian filter smoothing is applied, followed by rescaling. In the workflow, each mitosis is detected by using the circular Hough transform. The obtained circular contours are used for initializing the tracking algorithm, which is based on variational methods. Backward tracking is used to establish the beginning of mitosis by detecting a change in the cell morphology. This step is followed by forward tracking until the end of mitosis. The output provides the duration of mitosis and information on cell fates (e.g., number of daughter cells, cell death). The Matlab code is publicly available (see Section 3).
In [62], Rea et al. propose a Graphics Processing Unit (GPU)-based algorithm for tracking yeast cells in PhC microscopy images in real time. The tracking-by-detection approach determines a minimum-cost configuration for each pair of frames, given by the solution of a linear programming (LP) problem. The GPU-parallel software, based on the simplex method, a common tool for solving LP problems, is obtained by exploiting parallelization strategies that maximize the overall throughput and minimize memory transfers between host and device, thus exploiting data locality. The software is publicly available (see Section 3).
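The sketch below illustrates the underlying tracking-by-detection idea of solving a minimum-cost assignment between detections in consecutive frames; it uses SciPy's Hungarian-style solver on a Euclidean cost matrix as a stand-in for the full LP formulation and GPU simplex solver of [62], and the distance gate is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(centroids_t, centroids_t1, max_dist=30.0):
    """Return index pairs (i, j) linking detections in frame t to frame t+1."""
    cost = cdist(centroids_t, centroids_t1)      # Euclidean distance cost matrix
    rows, cols = linear_sum_assignment(cost)     # minimum-cost one-to-one matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

links = link_frames(np.array([[10.0, 20.0], [50.0, 80.0]]),
                    np.array([[12.0, 21.0], [52.0, 79.0]]))
```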
Tsai et al. [63] introduce Usiigaci, a semi-automated pipeline to segment, track, and visualize cells in PhC sequences. Segmentation is based on a mask regional convolutional neural network (Mask R-CNN) [64], while the tracking module relies on the Trackpy library [65]. A graphical user interface allows the user to verify the results. The software and annotated data are publicly available (see Section 3 and Section 4).
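For the linking step, a minimal usage sketch of the Trackpy library adopted by Usiigaci is shown below; the column names follow Trackpy's convention, while the centroid values and search range are illustrative assumptions.

```python
import pandas as pd
import trackpy as tp

# Hypothetical per-frame centroids obtained from a segmentation step.
detections = pd.DataFrame({
    "x":     [10.0, 50.0, 12.0, 52.0],
    "y":     [20.0, 80.0, 21.0, 79.0],
    "frame": [0, 0, 1, 1],
})

# Link detections across frames; Trackpy adds a 'particle' column with track IDs.
tracks = tp.link(detections, search_range=15)
print(tracks)
```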
Scherr et al. [42], in the same paper as for segmentation, also propose a graph-based cell tracking algorithm for touching cells in BF microscopy images (BF-C2DL-HSC and BF-C2DL-MuSC datasets) from the Cell Tracking Challenge [22]. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Their algorithm can track all segmented cells in an image sequence or only a subset, e.g., a selection of manually marked cells. Results for cell tracking are evaluated using the TRA and CTB metrics (see Section 5) and show very good performance, with TRA scores of 0.929 and 0.967 for the BF-C2DL-HSC and BF-C2DL-MuSC images, respectively.

3. Software

As already seen in Section 2, it is increasingly common for newly proposed methods to make their implementations publicly available, in light of the recent trend toward open science. In Table 1, we provide links to existing publicly available software, subdivided by task. Moreover, we provide links to software platforms offering more diverse functionalities for analyzing microscopy images and videos. Besides the software already described in Section 2, here we briefly summarize the remaining ones.
TWS (Trainable Weka Segmentation) is a Fiji plugin that combines ML algorithms with a set of selected image features to produce pixel-based segmentations. Weka (Waikato Environment for Knowledge Analysis) [66] can itself be called from the plugin.
Baxter Algorithms is a software package for tracking and analyzing cells in microscope images, providing an implementation of the global track-linking algorithm in [60]. The software can handle images produced using both 2D transmission microscopy and 2D or 3D fluorescence microscopy.
CellProfiler is a commonly used program designed for biologists with minimal programming knowledge to measure biological phenotypes quantitatively [39]. Algorithms for image analysis are available as individual modules that can be placed in sequential order to create a pipeline. Several commonly used pipelines are available for download and can be used to detect and measure various properties of biological objects.
ilastik [38] is an interactive machine learning tool based on a random forest classifier [67] for image analysis and is widely used by biologists, since it does not require specific ML knowledge. It provides pipelines for segmentation, classification, tracking, and lineage, operating on multidimensional data (including 3D space, time, and channels). A friendly user interface enables users to interactively implement their image analysis through a supervised machine learning workflow. ilastik classifies pixels and objects by learning from annotations to predict the class of each unannotated pixel and object. It provides an automatic selection of image features based on a first optimization step. Users can introduce sparse annotations, use labeled data, or provide training examples, and then correct the classifier precisely at the positions where it is wrong. Once a classifier has been trained, new data can be processed in batch mode.
Table 1. Software: name (Name); reference ([Ref]); year of publication (Year); url (Link); programming language or environment (Language). All links were accessed on 28 August 2022.
Name | [Ref] | Year | Link | Language

Cell segmentation
DeepCell | [23] | 2016 | https://simtk.org/projects/deepcell | Python, C, Ruby
fastER | [25] | 2017 | https://bsse.ethz.ch/csd/software/faster.html | C++
TWS | [68] | 2017 | https://imagej.net/plugins/tws/ | Java
ANCIS | [32] | 2019 | https://github.com/yijingru/ANCIS-Pytorch | Python
Vicar et al. | [20] | 2019 | https://github.com/tomasvicar/Cell-segmentation-methods-comparison | Matlab
Cellpose | [45,46] | 2022 | https://github.com/MouseLand/cellpose | Python

Cell classification
DeephESC 2.0 | [55] | 2019 | https://www.vislab.ucr.edu/SOFTWARE/software.php | Python
celldeath | [58] | 2021 | https://github.com/miriukaLab/celldeath | Python

Cell tracking
Baxter Algorithms | [60] | 2015 | https://github.com/klasma/BaxterAlgorithms | Matlab/C
Rea et al. | [62] | 2019 | https://dibernardo.tigem.it/software-data | Matlab/C

Software platforms
CellProfiler | [39] | 2006 | http://cellprofiler.org | Python
MitosisAnalyser | [61] | 2017 | https://github.com/JoanaGrah/MitosisAnalyser | Matlab
ilastik | [38] | 2019 | https://www.ilastik.org/index.html | Python
Usiigaci | [63] | 2019 | https://github.com/ElsevierSoftwareX/SOFTX_2018_158 | Python
ZeroCostDL4Mic | [69] | 2020 | https://github.com/HenriquesLab/ZeroCostDL4Mic | Python
DeepImageJ | [70] | 2021 | https://deepimagej.github.io/deepimagej | Python
BioImage Model Zoo | [71] | 2022 | https://bioimage.io | Python
LIM Tracker | [72] | 2022 | https://github.com/LIMT34/LIM-Tracker | Python/Java
TrackMate 7 | [73] | 2022 | https://imagej.net/plugins/trackmate/trackmate-v7-detectors | Java
ZeroCostDL4Mic is a cloud-based platform proposed by von Chamier et al. [69] aiming to simplify the use of DL architectures for various microscopy tasks. It is a collection of Jupyter Notebooks that can efficiently and interactively run Python code, leveraging the free, cloud-based computational resources of Google Colab. Concerning our focus, the tasks covered by ZeroCostDL4Mic include object detection, for which it implements YOLOv2, and cell segmentation, where it implements both the U-net and StarDist [74,75] networks. The outputs generated by StarDist are directly compatible with the TrackMate tracking software, enabling also automated cell tracking.
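As an example of the kind of workflow these notebooks wrap, the sketch below runs one of StarDist's published pre-trained 2D models directly from Python; the choice of pre-trained model and the random input array are placeholders, and in ZeroCostDL4Mic the same steps are executed inside a Colab notebook, typically after training on the user's own data.

```python
import numpy as np
from csbdeep.utils import normalize
from stardist.models import StarDist2D

img = np.random.rand(256, 256)                            # placeholder input frame
model = StarDist2D.from_pretrained("2D_versatile_fluo")   # published pre-trained model

# 'labels' is a labeled instance mask; 'details' contains the star-convex polygons.
labels, details = model.predict_instances(normalize(img))
```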
DeepImageJ [70] is a plugin for ImageJ and Fiji to facilitate the usage of DL models. It aims to offer user-friendly access to pre-trained models designed for various image modalities, including PhC and DIC. Currently, for the two previously mentioned modalities, the DL models are designed for segmentation.
BioImage Model Zoo [71] is an online repository for AI models to facilitate the usage of these pre-trained models by the bioimaging community. They provide a standard and tutorials to upload new models. The users can either download the projects in community partners’ format or in user-friendly Python notebooks that can be used by anyone with the user’s own dataset to perform bioimage analysis tasks. The current community partners are ilastik, ImJoy [76], Fiji [77], deepImageJ, ZeroCostDL4Mic, and HPA [78].
LIM Tracker is a Fiji plugin for cell tracking and analysis expressly aimed at advanced interactivity, usability, and versatility. Three tracking methods are implemented, suitable for fluorescence or PhC microscopy sequences. In the link-type tracking (tracking by detection), cells are first detected based on a Laplacian of Gaussian filter and watershed segmentation. Their ROIs are then linked by the Linear Assignment Problem algorithm [79]. In the sequential search-type tracking method, based on the particle filter framework, a user-specified ROI is tracked by sequentially searching for its corresponding ROI in subsequent frames by pattern matching. The third type of tracking is manual tracking, which allows users to specify the position of ROIs while moving along sequence frames. Several additional functions allow interactive visualization and error correction. A plugin mechanism is provided for using different segmentation modules, including user-defined algorithms or DL algorithms (e.g., StarDist, Cellpose [45,46], YOLACT++ [80], Matterport MaskR-CNN [81], and Detectron2 MaskR-CNN [82]).
TrackMate 7 [73] is an extension of the TrackMate tracking software [83] distributed as a Fiji plugin. It integrates into tracking pipelines (based on five possible particle-linking algorithms) ten segmentation algorithms (including ilastik, Weka, StarDist, and Cellpose), besides any mask or label images computed with any other segmentation algorithm. It can handle fluorescence or label-free microscopy images, both 2D and 3D. The additional TrackMate helper facilitates choosing an optimal combination of segmentation and tracking modules, also allowing a systematic optimization of the tracking parameters for a whole dataset.
For an extended list of commercial and open source tools for tracking, the interested reader can also refer to [21]. A list of publicly available executable versions of 19 algorithms participating in the 2013–2015 CTC challenges is provided in Table 3 of the Supplementary Material of [22]; further links can also be found through the CTC web pages. Open-source DL software for bioimage segmentation is nicely surveyed in [84], where tools in different forms, such as web applications, plug-ins for existing imaging analysis software, and preconfigured interactive notebooks and pipelines are reviewed. Finally, further suggestions can come from the review by Smith et al. [85]. Indeed, even though their survey focuses on phenotypic image analysis, some of the referred software includes cell segmentation and time-lapse analysis tools.

4. Data

Publicly available datasets can be broadly subdivided into those devoted solely to segmentation (see Table 2), to event detection and classification (and possibly also tracking, see Table 3), or to tracking (and possibly also segmentation, see Table 4). Observe that the numbers reported in these tables refer solely to traditional label-free images/image sequences, which are the focus of this review; nonetheless, many of the reported datasets also contain data from other microscopy types. The reported numbers include only images for which annotations exist (additional images without annotations may be available).
Allen Cell Explorer [86] includes a massive collection of light microscopy cell images with manually curated segmentation masks for 12 cellular components, as reported in [88].
BU-BIL (Boston University-Biomedical Image Library) [87] includes six datasets, three of which consist of PhC images from different cell lines. The main aim of [87] is to evaluate and compare the performance of biomedical image segmentation made by trained experts, non-experts, and automated segmentation algorithms. Therefore, for each image, only one cell is annotated and provided as binary masks obtained in those three different ways. The gold-standard annotation is obtained by majority voting over the annotations created by the ten trained experts.
CTC (Cell Tracking Challenge) is a time-lapse cell segmentation and tracking benchmark on publicly available data, launched in 2012 to objectively compare and evaluate state-of-the-art whole-cell and nucleus segmentation and tracking methods [22,89]. The datasets consist of 2D and 3D time-lapse video sequences of fluorescent counterstained nuclei or cells moving on top or immersed in a substrate, along with 2D PhC and DIC microscopy videos of cells moving on a flat substrate. The videos cover a wide range of cell types and quality (spatial and temporal resolution, noise levels, etc.). The ground truth consists of manually annotated cell masks (for segmentation) and cell markers interlinked between frames to form cell lineage trees (for tracking).
DeepCell comes from the supporting material of [23]. It consists of a PhC image sequence of HeLa-S3 cells. Annotations for each image are given in terms of cell and nuclei segmentation masks.
EVICAN (Expert VIsual Cell ANnotation) [88] includes partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms, and magnifications. For each image, a subset of cells and nuclei is annotated and provided both as json annotation files and as binary masks. An example is shown in Figure 3a,b. To reduce the influence of unannotated cells on the background class, in their experiments, the authors pre-processed the dataset by blurring (with a Gaussian filter) the images but leaving unchanged the annotated instances. The pre-processed images are also provided with the dataset (see Figure 3c).
fastER [25] includes PhC, BF, and synthetic Fluo images of three different cell lines. For each image, the annotations consist of binary masks that provide the segmentation of most of the cells and just the centroid for the remaining cells.
LIVEcell [26] is a recently proposed large-scale, manually annotated, and expert-validated dataset of PhC images for benchmarking cell segmentation. It consists of over 5 thousand images, including over 1.6 million cells of seven cell types (human and mouse) having different cell morphologies and culture densities. Annotations are provided as json files.
Usiigaci [63] includes 37 PhC images of T98G cells. Annotations consist of indexed masks, with an index for each cell that is kept consistent over time (see Figure 4). Thus, they can be used for both segmentation and tracking. A spreadsheet file is also enclosed, providing tracking information and various features for each tracked cell.
Figure 4. Example data from the Usiigaci dataset [63]: (a) original image (20180101ef002xy01t01.tif); (b) corresponding indexed mask, where each color indicates a different cell in all sequence images.
Table 3. Details of annotated data for cellular event detection and classification in traditional label-free sequences: dataset name (Name); reference ([Ref]); application of the data (Task); type of microscopy data (Content); url (Link); number of annotated images (# imgs), and number of annotated events (# events). All links were accessed on 28 August 2022.
Name | [Ref] | Task | Content | Link | # Imgs | # Events
C2C12-16 | [54] | Mitosis Detection | DIC | https://www.iti-tju.org/mitosisdetection/download/ 1 | 16,208 | 7159
CTMC | [90] | Mitosis Detection | DIC | https://ivc.ischool.utexas.edu/ctmc/ 1 | 80,389 | 1616
DeephESC | [55] | Classification | PhC | https://www.vislab.ucr.edu/SOFTWARE/software.php 2 | 785 | NA
1 Unavailable at the time of writing. 2 Accessed on 30 August 2022.
C2C12-16 [54] was released as a large-scale time-lapse phase-contrast microscopy image dataset for the mitosis detection task at the first international contest on mitosis detection in phase-contrast microscopy image sequences, held with the workshop on computer vision for microscopy image analysis (CVMI) at CVPR 2019. It is an extension of the Ker et al. dataset [91] with manual annotations of mitosis. The complete dataset contains 16 sequences with 1013 frames per sequence and a total of 7159 mitosis events within the images.
Cell Tracking with Mitosis Detection Challenge (CTMC) is a benchmark challenge that provides DIC images for 14 cell lines [90]. The data comprise 86 live-cell imaging videos consisting of 152,584 frames in total. In addition to the images, the challenge provides bounding box-based detection and tracking ground truths for each cell line, in the form of csv files for each video, including, for each frame and each cell, the cell ID and its bounding box coordinates. Recently, the dataset has also been adopted for the CTMC-v1 Challenge at CVPR 2022 (https://motchallenge.net/data/CTMC-v1/, accessed on 30 August 2022).
DeephESC [55] consists of 785 PhC hESC images subdivided into six classes (cell clusters, debris, unattached cells, attached cells, dynamically blebbing cells, and apoptotically blebbing cells).
Table 4. Details of available annotated data for cell tracking in traditional label-free sequences: dataset name (Name); reference ([Ref]); type of microscopy data (Content); url (Link); number of annotated images (# imgs), annotated cells/tracks (# cells/tracks), and cell lines (#cell lines). All links were accessed on 28 August 2022.
Name | [Ref] | Content | Link | # Imgs | # Cells/Tracks | # Cell Lines
CTC | [22] | PhC, DIC, BF | http://www.celltrackingchallenge.net 1 | 213 | 1980/2944 | 5
CTMC | [90] | DIC | https://ivc.ischool.utexas.edu/ctmc/ 2 | 80,389 | 1,097,223 3/1616 | 14
Ker et al. | [91] | PhC | https://osf.io/ysaq2/ 1 | 19,134 | NA 4/2011 | 1
Usiigaci | [63] | PhC | https://github.com/ElsevierSoftwareX/SOFTX_2018_158 1 | 37 | 2641/105 | 1
1 Accessed on 30 August 2022. 2 Unavailable at the time of writing. 3 Only bounding boxes are provided. 4 Only centroids are provided.
The dataset by Ker et al. [91] includes 48 PhC image sequences of mouse C2C12 cells under various treatments. Annotations consist of manually tagged centroids and state (e.g., newborn, divided, or mitotic) for 10% of the cells for all the sequences; only for one of the sequences, all the cells are manually annotated. The dataset is also provided with annotations automatically generated for all the cells using in-house software based on segmentation, mitosis detection, and association.
Other annotated microscopy image sets can be downloaded from the Broad Bioimage Benchmark Collection (BBBC) [92]. It is a publicly available collection of microscopy images intended as a resource for testing and validating automated image-analysis algorithms. Being contributed by many different research groups and for various applications, annotations are provided in varying forms (e.g., cell counts, masks, outlines, or bounding boxes).

5. Metrics

Below, we present some metrics commonly adopted to evaluate the results quantitatively. Some of these metrics are directly used from the computer vision and ML/DL domains, while others are more specific to cellular image analysis.

5.1. Metrics for Pixel-Wise Cell Segmentation

Many different metrics are adopted in the literature to evaluate the performance of (cell) segmentation algorithms. The most frequently used is the one adopted for the CTC [22], generally denoted as SEG. Given the ground truth cell segmentation $GT$ and the corresponding segmentation $S$ computed with any segmentation algorithm, the Jaccard similarity index, also known as Intersection over Union (IoU), evaluates the degree of overlap between the true and the computed results and is defined as

$$\mathrm{IoU}(GT, S) = \frac{|GT \cap S|}{|GT \cup S|}, \qquad (1)$$

where $|\cdot|$ indicates the cardinality of a set (i.e., the number of pixels) and $\cap$ and $\cup$ indicate set intersection and union, respectively. This metric [63] is sometimes equivalently expressed in terms of the number of true positive pixels $TP$ ($TP = |GT \cap S|$), false negative pixels $FN$ ($FN = |GT \setminus S|$), and false positive pixels $FP$ ($FP = |S \setminus GT|$) as

$$\mathrm{IoU}(GT, S) = \frac{TP}{FN + TP + FP}.$$
The SEG metric adopted in the CTC for a particular video is then computed as the mean IoU over all the GT cells of the video. It should be observed that, although many authors refer to this metric as SEG [72,93], others just refer to it as AP [45,46,74,75,88]. Further metrics frequently adopted [23,25,63] include:
  • the Recall, also known as Sensitivity or True Positive Rate,
    $$\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad (2)$$
    which gives the percentage of detected true positive pixels with respect to the total number of true positive pixels in the ground truth;
  • the Precision, also known as Positive Predictive Value,
    $$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad (3)$$
    which gives the percentage of detected true positive pixels with respect to the total number of pixels detected by the algorithm, indicating the degree of exactness of the algorithm in identifying only relevant pixels;
  • the F-score, also known as F-measure or Figure of Merit,
    $$F_1 = \frac{2 \cdot \mathrm{Recall} \cdot \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN}, \qquad (4)$$
    which is the harmonic mean of Precision and Recall.
All the above metrics assume values in [0, 1], and higher values indicate better results.
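The pixel-wise metrics above can be computed directly from binary masks, as in the minimal sketch below; the CTC's SEG score would additionally match each GT cell to a computed region and average the IoU over all GT cells.

```python
import numpy as np

def pixel_metrics(gt, pred):
    """gt, pred: boolean masks of the same shape."""
    tp = np.logical_and(gt, pred).sum()
    fp = np.logical_and(~gt, pred).sum()
    fn = np.logical_and(gt, ~pred).sum()
    return {
        "iou":       tp / (tp + fp + fn),
        "recall":    tp / (tp + fn),
        "precision": tp / (tp + fp),
        "f1":        2 * tp / (2 * tp + fp + fn),
    }

gt = np.zeros((4, 4), bool);   gt[1:3, 1:3] = True     # toy ground truth cell
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True   # toy computed mask
print(pixel_metrics(gt, pred))
```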

5.2. Metrics for Object-Wise Cell Detection

Generally, in the case of cell detection, the ground truth is given in terms of bounding boxes of the cells contained in the images. Here, the IoU metric of Equation (1) can be adapted to evaluate the degree of overlap between the ground truth bounding boxes ($GT$) and the predicted bounding boxes ($S$). IoU thresholding can then be used to decide whether a detection is correct or not. For a given IoU threshold $\alpha$, a true positive ($TP$), i.e., a correct positive prediction, is a detection for which $\mathrm{IoU}(GT, S) \geq \alpha$, and a false positive ($FP$), i.e., a wrong positive detection, is a detection for which $\mathrm{IoU}(GT, S) < \alpha$. A false negative ($FN$) is an actual instance that is not detected.
Given these adapted concepts, the Recall, Precision, and F-score metrics defined in Equations (2)-(4) can be used to evaluate cell detection algorithms. These are also used to compute the Average Precision at a given IoU threshold $\alpha$, denoted as AP@$\alpha$, defined as the Area Under the Precision-Recall Curve (AUC-PR) evaluated at the IoU threshold $\alpha$:

$$\mathrm{AP}@\alpha = \int_0^1 p(r)\, dr,$$

where $p(r)$ denotes the precision as a function of the recall $r$. According to the Common Objects in Context (COCO) [94] evaluation protocol (https://cocodataset.org/#detection-eval, accessed on 28 August 2022), single values of $\alpha$ can be chosen for thresholding IoU (generally 0.5 or 0.75). Moreover, a set of thresholds can be chosen and the mean Average Precision (mAP) over these IoU thresholds considered for cell detection evaluation. With the usual choice [26,74,75] of values for $\alpha$ from 0.5 to 0.95 with a step size of 0.05, mAP is thus given by

$$\mathrm{mAP} = \frac{\mathrm{AP}@0.5 + \mathrm{AP}@0.55 + \dots + \mathrm{AP}@0.95}{10}.$$
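A minimal sketch of the object-wise evaluation pipeline is given below: a box IoU, the thresholded TP decision, and the COCO-style averaging of AP over IoU thresholds. Computing AP@α itself (the area under the precision-recall curve) is left to an evaluation library such as pycocotools, so the `average_precision` call is only a placeholder.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def is_true_positive(gt_box, pred_box, alpha=0.5):
    """A prediction counts as TP when its IoU with the GT box reaches alpha."""
    return box_iou(gt_box, pred_box) >= alpha

print(is_true_positive((0, 0, 10, 10), (1, 1, 11, 11)))   # IoU ~ 0.68 -> True

# COCO-style mAP over thresholds 0.5, 0.55, ..., 0.95 (average_precision is a
# placeholder for a proper AUC-PR computation, e.g., via pycocotools):
# mAP = np.mean([average_precision(alpha) for alpha in np.arange(0.5, 1.0, 0.05)])
```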
In the CTC [22], the detection accuracy of the methods, denoted as DET, is adopted to estimate how accurately each given object has been identified (http://celltrackingchallenge.net/evaluation-methodology/, accessed on 28 August 2022). It is based on the comparison of the nodes of the acyclic oriented graphs representing the objects in both the ground truth and the computed object detection result. Exploiting the Acyclic Oriented Graph Matching measure for detection (AOGM-D) [95], that gives the cost of transforming the set of nodes of the computed objects into the set of ground truth nodes, DET is defined as

$$\mathrm{DET} = 1 - \frac{\min(\mathrm{AOGM\text{-}D}, \mathrm{AOGM\text{-}D}_0)}{\mathrm{AOGM\text{-}D}_0},$$

where $\mathrm{AOGM\text{-}D}_0$ is the cost of creating the set of ground truth nodes from scratch. DET always falls in the [0, 1] interval, with higher values corresponding to better detection performance. The DET metric is averaged with the SEG metric described in Section 5.1 to provide the overall performance for the CSB:

$$\mathrm{OP}_{CSB} = \frac{1}{2}\left(\mathrm{DET} + \mathrm{SEG}\right).$$

5.3. Metrics for Cell Event Detection

For mitosis detection, Ref. [54] represents each detected mitosis as a triple $(x, y, t)$ giving the spatial and temporal position of the event. A detection is considered a true positive ($TP$) if its distance from the corresponding ground truth triple is below preset spatial and temporal thresholds. Otherwise, it is considered a false positive ($FP$). Undetected ground truth mitotic events are counted as false negatives ($FN$). Having so defined $TP$, $FP$, and $FN$, Ref. [54] adopts the Precision, Recall, and F-score metrics defined in Equations (2)-(4), respectively, to evaluate the performance of mitosis detection algorithms. The same metrics are also adopted in [60], where they are extended to apoptotic events as well.
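A minimal sketch of this spatio-temporal matching is shown below: a detected triple (x, y, t) is matched to an unmatched ground truth event if it falls within the spatial and temporal tolerances, and the resulting TP/FP/FN counts feed Equations (2)-(4); the threshold values are illustrative assumptions.

```python
import numpy as np

def match_mitosis(detections, ground_truth, spatial_thr=15.0, temporal_thr=6):
    """Greedy matching of detected (x, y, t) triples to ground truth events."""
    matched, tp = set(), 0
    for (x, y, t) in detections:
        for k, (gx, gy, gt) in enumerate(ground_truth):
            if k in matched:
                continue
            if np.hypot(x - gx, y - gy) <= spatial_thr and abs(t - gt) <= temporal_thr:
                matched.add(k)
                tp += 1
                break
    fp = len(detections) - tp          # detections with no matching GT event
    fn = len(ground_truth) - tp        # GT events that were never detected
    return tp, fp, fn

print(match_mitosis([(100, 120, 42), (30, 40, 10)], [(102, 118, 44)]))  # (1, 1, 0)
```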
In [23], DeepCell is also used to perform semantic segmentation, i.e., to both segment individual cells and predict their cell type. For evaluating the obtained results, the authors consider the Cellular Classification Score ($\mathrm{CCS}_c$) for each class $c$, defined as

$$\mathrm{CCS}_c = \frac{\sum_{i \in Cells} s_{i,c}}{\sum_{j \in Classes} \sum_{i \in Cells} s_{i,j}},$$

where $s_{i,j}$ indicates the classification score of pixel $i$ for class $j$. The authors showed that the closer $\mathrm{CCS}_c$ is to 1, the more likely the prediction is correct.

5.4. Metrics for Cell Tracking

The metrics most frequently adopted for evaluating cell tracking are those introduced for the Multiple Object Tracking (MOT) challenges [96]. The Multiple Object Tracking Accuracy (MOTA) [97] is a MOT tracking metric that represents the object coverage [90], also used for example in [63]. It can be defined as

$$\mathrm{MOTA} = 1 - \frac{FN + FP + IDSW}{T},$$

where $FN$ is the sum over the entire video of all missed cells (number of ground truth bounding boxes not covered by any computed bounding box), $FP$ is the sum over the entire video of all false positives (number of bounding boxes not covering any ground truth bounding box), $IDSW$ is the number of object identities switched from one frame to the next (number of bounding boxes covering a ground truth bounding box from a track different from that of the previous frame), and $T$ is the total number of detections in the ground truth.
Multiple Object Tracking Precision (MOTP) [98] is the average dissimilarity between all correctly assigned detections (true positives) and their ground truths, defined as

$$\mathrm{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t},$$

where $c_t$ indicates the number of matches in frame $t$ and $d_{t,i}$ is the bounding box overlap of detection $i$ with its ground truth. This MOT tracking metric shows the ability of the tracker to estimate precise object positions, independent of its skill at recognizing object configurations, keeping consistent trajectories, and so forth [97].
More recently, the MOT challenge introduced another tracking metric, named IDF1, that quantifies the preservation of object identities across the frames of a sequence [90] and represents the ratio of the detections that were properly identified over the average number of ground-truth and computed detections [99]. It is an F-score as in Equation (4):

$$\mathrm{IDF1} = \frac{2\, IDTP}{2\, IDTP + IDFP + IDFN},$$

where $IDTP$, $IDFP$, and $IDFN$ indicate the number of true positive, false positive, and false negative IDs, respectively.
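Both MOTA and IDF1 reduce to simple ratios once the per-video counts above are available; the sketch below computes them from such counts, which in practice would come from a MOT evaluation library such as py-motmetrics.

```python
def mota(fn, fp, id_switches, gt_detections):
    """Multiple Object Tracking Accuracy from whole-video error counts."""
    return 1.0 - (fn + fp + id_switches) / gt_detections

def idf1(idtp, idfp, idfn):
    """Identity F-score from identity-level TP/FP/FN counts."""
    return 2 * idtp / (2 * idtp + idfp + idfn)

# Toy counts for illustration only.
print(mota(fn=12, fp=8, id_switches=3, gt_detections=500))   # 0.954
print(idf1(idtp=470, idfp=25, idfn=30))                      # ~0.945
```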
Many other metrics introduced for evaluating MOT challenges could also be applied to the case of cell tracking, such as the Higher Order Tracking Accuracy (HOTA) [100]. Focusing more specifically on cellular microscopy, the metric adopted in CTC [22] for evaluating cell tracking results is the Tracking Accuracy, denoted as TRA, used for example in [72,90,93]. It is a normalized weighted distance between the tracking ground truth and the result of the algorithm, with weights chosen to reflect the effort it takes a human curator to manually carry out the edits needed for matching the two. Tracking results are first represented as acyclic oriented graphs providing the cells lineage. Then the difficulty in transforming a computed tracking graph into the corresponding ground truth graph is estimated as
$$\mathrm{TRA} = 1 - \frac{\min(\mathrm{AOGM}, \mathrm{AOGM}_0)}{\mathrm{AOGM}_0},$$
where AOGM is the Acyclic Oriented Graph Matching (AOGM) measure [95] and $\mathrm{AOGM}_0$ is the AOGM value required for creating the ground-truth graph from scratch. TRA assumes values in [0, 1], with higher values corresponding to better tracking performance. The overall performance for the CTB is calculated as the average of the SEG (see Section 5.1) and TRA metrics:
$$\mathrm{OP}_{\mathrm{CTB}} = \frac{1}{2}\left(\mathrm{SEG} + \mathrm{TRA}\right).$$
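As an illustration, given the AOGM and AOGM_0 values produced by the graph-matching procedure of [95], TRA and the overall CTB score could be obtained as follows; the numeric values are invented for the example.

```python
def tra(aogm, aogm_0):
    """Tracking Accuracy as a normalized AOGM distance (illustrative sketch)."""
    return 1.0 - min(aogm, aogm_0) / aogm_0

def op_ctb(seg, tra_value):
    """Overall CTB performance as the mean of the SEG and TRA metrics (sketch)."""
    return 0.5 * (seg + tra_value)

t = tra(aogm=120.0, aogm_0=1000.0)          # 0.88
print(t, op_ctb(seg=0.75, tra_value=t))     # 0.88 0.815
```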

6. Open Problems and Future Research Directions

Common challenges in microscopy image processing include increasingly large image sizes, image artifacts, and batch effects, especially in the presence of object crowding and overlapping. Moreover, insufficient, imbalanced, and inconsistent data annotations [101] hinder the effective use of data analysis methods. The image size affects computation and storage time, which might prevent the use of modern image processing techniques on standard hardware. The batch acquisition of images also poses problems, as small variations might be present in successive applications of the same technical procedure. This implies that the hypothesis that all images are independent and identically distributed statistical units sampled from the same population (i.e., that they all describe the same process, apart from its natural variability) might no longer hold. As a consequence, the training of AI/ML/DL methods can be biased and produce results that do not reflect the probability distribution of the original phenomena. Despite the results reviewed here, these problems require further investigation, as the number of images required for training the parameters of a multi-layer architecture is on the order of tens of thousands.
The performance of AI systems and their generalization capabilities strongly depend on the quality of annotations in the available datasets. Although many unsupervised algorithms may not require annotated data, annotations remain crucial for understanding and interpreting such systems. As evident from Table 2 and Table 4, which list the data collections available at publication time, there is a lack of large-scale curated and annotated datasets of light microscopy images. In particular, this is true for adherent cells and their suspension-cultured counterparts, where the lack of annotated datasets makes segmentation difficult for different LI techniques [20]. This difficulty is exacerbated by the natural variability of the observed phenomena, which often show high cell densities, cell-to-cell variability, complex cellular shapes and textures, cell shapes varying over time (e.g., due to drug treatments [26,93]), varying image illumination, and low signal-to-noise ratios (SNRs) [24,25].
These problems are also common in cell detection and tracking, making them even more challenging, especially when using AI approaches. An extreme example is neural cell instance segmentation in neuroscience applications [32], which aims at detecting and segmenting every neural cell in a microscopy image. In these experiments, further limiting factors include cell distortion, unclear cell contours, low-contrast cell protrusion structures, and background impurities. However, accurate detection of objects is crucial for the tracking process, as aberrant object detection leads to missing links and to tracks that end prematurely, with multiple short tracks representing the same individual object over time as different entities. Most detection algorithms treat tightly packed objects (e.g., touching and overlapping cells) as a single entity, resulting in breaks in tracks or in single tracks linking groups of objects. In addition, cell tracking experiments can produce terabyte-scale movies, as they often require multi-day monitoring [93]. In such experiments, rapid cell migrations, high cell density, and multiple rounds of mitosis result in multiple neighboring cells being mis-tracked.
Extensive 3D data present additional challenges due not only to the size of the image data itself but also to the very high cell densities they show toward the end of the videos [22].
Other key factors that affect tracking results [21] include noise, occlusions, difficult object motion, complex object structures, and background subtraction. Further work is also needed to handle scenarios with low SNR or contrast ratio [22]. These challenges can explain why few cell tracking platforms have been developed for label-free microscopy images. The benchmarked ranking of the Cell Tracking Challenge [22] confirms the difficulty of processing these images and highlights the need for further research, particularly for DIC images [90]. All these factors currently limit the possibility of assessing and comparing the capabilities of different data analysis methods. Perspectives and future work in LI also include handling big data of continuously growing size, improving the quality and completeness of annotated datasets, continuous modeling of biological processes using regression rather than classification, and improving the interpretability of ML and DL algorithms.
Finally, investigations should be devoted to integrating multiple microscopy techniques on the same sample to overcome the intrinsic limitations of each technique and the lack of training data. We believe that a holistic view of biological processes and functions might be attained through omics imaging [36], which consists in integrating and jointly analyzing next-generation sequencing data and images to provide more insight into the available data.

Author Contributions

Conceptualization, L.M., L.A. and M.R.G.; methodology, all; software, A.A. and A.H.; validation, A.A. and A.H.; formal analysis, L.M., L.A. and M.R.G.; investigation, all; resources, all; data curation, L.M., A.A. and A.H.; writing—original draft preparation, L.M. and L.A.; writing—review and editing, all; visualization, all; supervision, M.R.G.; project administration, M.R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work has been partially funded by the BiBiNet project (H35F21000430002) within POR-Lazio FESR 2014–2020. It was carried out also within the activities of L.M., L.A. and M.R.G. as members of the ICAR-CNR INdAM Research Unit and partially supported by the INdAM research project “Computational Intelligence methods for Digital Health”. The work of M.R.G. was conducted within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI	Artificial Intelligence
ANCIS	Attentive Neural Cell Instance Segmentation
AOGM-D	Acyclic Oriented Graph Matching measure for Detection
BBBC	Broad Bioimage Benchmark Collection
BF	Bright-Field
BU-BIL	Boston University - Biomedical Image Library
CCS	Cellular Classification Score
CIC	Cell-In-Cell
CNN	Convolutional Neural Network
COCO	Common Objects in Context
CSB	Cell Segmentation Benchmark
CTB	Cell Tracking Benchmark
CTC	Cell Tracking Challenge
CTMC	Cell Tracking with Mitosis Detection Challenge
CVMI	Computer Vision for Microscopy Image Analysis
DIC	Differential Interference Contrast
DL	Deep Learning
DNA	Deoxyribonucleic Acid
EVICAN	Expert VIsual Cell ANnotation
FN	False Negative
FNA	False Negative Association
FP	False Positive
GMAN	Generative Multi Adversarial Networks
GPU	Graphics Processing Unit
GT	Ground Truth
HMC	Hoffman Modulation Contrast
HOTA	Higher Order Tracking Accuracy
IoU	Intersection over Union
LI	Label-free Imaging
LP	Linear Programming
LSTM	Long Short-Term Memory
ML	Machine Learning
MOT	Multiple Object Tracking
MOTA	Multiple Object Tracking Accuracy
MOTP	Multiple Object Tracking Precision
PhC	Phase Contrast
PSSD	Progressive Sequence Saliency Discovery Network
QLI	Quantitative Label-free Imaging
QPI	Quantitative Phase Imaging
ROI	Regions of Interest
SSD	Single Shot MultiBox Detector
SNR	Signal-to-Noise Ratio
SVM	Support Vector Machine
TP	True Positive
TWS	Trainable Weka Segmentation
UV	Ultraviolet
Weka	Waikato Environment for Knowledge Analysis

References

  1. Sebestyén, E.; Marullo, F.; Lucini, F.; Petrini, C.; Bianchi, A.; Valsoni, S.; Olivieri, I.; Antonelli, L.; Gregoretti, F.; Oliva, G.; et al. SAMMY-seq reveals early alteration of heterochromatin and deregulation of bivalent genes in Hutchinson-Gilford Progeria Syndrome. Nat. Commun. 2020, 11, 1–16. [Google Scholar] [CrossRef] [PubMed]
  2. Marullo, F.; Cesarini, E.; Antonelli, L.; Gregoretti, F.; Oliva, G.; Lanzuolo, C. Nucleoplasmic Lamin A/C and Polycomb group of proteins: An evolutionarily conserved interplay. Nucleus 2016, 7, 103–111. [Google Scholar] [CrossRef] [PubMed]
  3. Song, L.; Hennink, E.; Young, I.; Tanke, H. Photobleaching kinetics of fluorescein in quantitative fluorescence microscopy. Biophys. J. 1995, 68, 2588–2600. [Google Scholar] [CrossRef]
  4. Mir, M.; Bhaduri, B.; Wang, R.; Zhu, R.; Popescu, G. Quantitative Phase Imaging. Prog. Opt. 2012, 57, 133–217. [Google Scholar] [CrossRef]
  5. Zernike, F. How I Discovered Phase Contrast. Science 1955, 121, 345–349. [Google Scholar] [CrossRef] [PubMed]
  6. Nomarski, G. Differential microinterferometer with polarized waves. J. Phys. Radium Paris 1955, 16, 9S. [Google Scholar]
  7. Hoffman, R.; Gross, L. Modulation Contrast Microscope. Appl. Opt. 1975, 14, 1169–1176. [Google Scholar] [CrossRef]
  8. Yin, Z.; Kanade, T.; Chen, M. Understanding the phase contrast optics to restore artifact-free microscopy images for segmentation. Med. Image Anal. 2012, 16, 1047–1062. [Google Scholar] [CrossRef]
  9. Gregoretti, F.; Lucini, F.; Cesarini, E.; Oliva, G.; Lanzuolo, C.; Antonelli, L. Segmentation, 3D reconstruction and analysis of PcG proteins in fluorescence microscopy images in different cell culture conditions. In Methods in Molecular Biology; Springer: New York, NY, USA, 2022. [Google Scholar]
  10. Popescu, G. Quantitative Phase Imaging of Cells and Tissues; McGraw-Hill: New York, NY, USA, 2011. [Google Scholar]
  11. Helgadottir, S.; Midtvedt, B.; Pineda, J.; Sabirsh, A.; Adiels, C.B.; Romeo, S.; Midtvedt, D.; Volpe, G. Extracting quantitative biological information from bright-field cell images using deep learning. Biophys. Rev. 2021, 2, 031401. [Google Scholar] [CrossRef]
  12. Buggenthin, F.; Marr, C.; Schwarzfischer, M.; Hoppe, P.S.; Hilsenbeck, O.; Schroeder, T.; Theis, F.J. An automatic method for robust and fast cell detection in bright field images from high-throughput microscopy. BMC Bioinform. 2013, 14, 297. [Google Scholar] [CrossRef]
  13. Selinummi, J.; Ruusuvuori, P.; Podolsky, I.; Ozinsky, A.; Gold, E.; Yli-Harja, O.; Aderem, A.; Shmulevich, I. Bright Field Microscopy as an Alternative to Whole Cell Fluorescence in Automated Analysis of Macrophage Images. PLoS ONE 2009, 4, 1–9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Naso, F.D.; Sterbini, V.; Crecca, E.; Asteriti, I.A.; Russo, A.D.; Giubettini, M.; Cundari, E.; Lindon, C.; Rosa, A.; Guarguaglini, G. Excess TPX2 interferes with microtubule disassembly and nuclei reformation at mitotic exit. Cells 2020, 9, 374. [Google Scholar] [CrossRef] [PubMed]
  15. Jiang, Q.; Sudalagunta, P.; Meads, M.B.; Ahmed, K.T.; Rutkowski, T.; Shain, K.; Silva, A.S.; Zhang, W. An Advanced Framework for Time-lapse Microscopy Image Analysis. bioRxiv 2020. [Google Scholar] [CrossRef]
  16. Caldon, C.E.; Burgess, A. Label free, quantitative single-cell fate tracking of time-lapse movies. MethodsX 2019, 6, 2468–2475. [Google Scholar] [CrossRef]
  17. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  18. Gupta, A.; Harrison, P.J.; Wieslander, H.; Pielawski, N.; Kartasalo, K.; Partel, G.; Solorzano, L.; Suveer, A.; Klemm, A.H.; Spjuth, O.; et al. Deep Learning in Image Cytometry: A Review. Cytometry Part A 2019, 95, 366–380. [Google Scholar] [CrossRef] [PubMed]
  19. Jo, Y.; Cho, H.; Lee, S.Y.; Choi, G.; Kim, G.; Min, H.s.; Park, Y. Quantitative Phase Imaging and Artificial Intelligence: A Review. IEEE J. Sel. Top. Quantum Electron. 2019, 25, 1–14. [Google Scholar] [CrossRef]
  20. Vicar, T.; Balvan, J.; Jaros, J.; Jug, F.; Kolar, R.; Masarik, M.; Gumulec, J. Cell segmentation methods for label-free contrast microscopy: Review and comprehensive comparison. BMC Bioinform. 2019, 20, 1–25. [Google Scholar] [CrossRef]
  21. Emami, N.; Sedaei, Z.; Ferdousi, R. Computerized cell tracking: Current methods, tools and challenges. Visual Inform. 2021, 5, 1–13. [Google Scholar] [CrossRef]
  22. Ulman, V.; Maška, M.; Magnusson, K.E.G.; Ronneberger, O.; Haubold, C.; Harder, N.; Matula, P.; Matula, P.; Svoboda, D.; Radojevic, M.; et al. An objective comparison of cell-tracking algorithms. Nat. Methods 2017, 14, 1141–1152. [Google Scholar] [CrossRef]
  23. Van Valen, D.A.; Kudo, T.; Lane, K.M.; Macklin, D.N.; Quach, N.T.; DeFelice, M.M.; Maayan, I.; Tanouchi, Y.; Ashley, E.A.; Covert, M.W. Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments. PLoS Comput. Biol. 2016, 12, e1005177. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Lux, F.; Matula, P. Cell segmentation by combining marker-controlled watershed and deep learning. arXiv 2020, arXiv:2004.01607. [Google Scholar]
  25. Hilsenbeck, O.; Schwarzfischer, M.; Loeffler, D.; Dimopoulos, S.; Hastreiter, S.; Marr, C.; Theis, F.J.; Schroeder, T. fastER: A user-friendly tool for ultrafast and robust cell segmentation in large-scale microscopy. Bioinformatics 2017, 33, 2020–2028. [Google Scholar] [CrossRef] [PubMed]
  26. Edlund, C.; Jackson, T.R.; Khalid, N.; Bevan, N.; Dale, T.; Dengel, A.; Ahmed, S.; Trygg, J.; Sjögren, R. LIVECell: A large-scale dataset for label-free live cell segmentation. Nat. Methods 2021, 18, 1038–1045. [Google Scholar] [CrossRef]
  27. Caicedo, J.; Goodman, A.; Karhohs, K.; Cimini, B.; Ackerman, J.; Haghighi, M.; Heng, C.; Becker, T.; Doan, M.; McQuin, C.; et al. Nucleus segmentation across imaging experiments: The 2018 Data Science Bowl. Nat. Methods 2019, 16, 1247–1253. [Google Scholar] [CrossRef]
  28. Casalino, L.; D’Ambra, P.; Guarracino, M.R.; Irpino, A.; Maddalena, L.; Maiorano, F.; Minchiotti, G.; Jorge Patriarca, E. Image Analysis and Classification for High-Throughput Screening of Embryonic Stem Cells. In Proceedings of the Mathematical Models in Biology: Bringing Mathematics to Life; Zazzu, V., Ferraro, M.B., Guarracino, M.R., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 17–31. [Google Scholar] [CrossRef]
  29. Casalino, L.; Guarracino, M.R.; Maddalena, L. Imaging for High-Throughput Screening of Pluripotent Stem Cells, SIAM Conference on Imaging Science—IS18. 2018. Available online: https://www.siam-is18.dm.unibo.it/presentations/811.html (accessed on 3 August 2022).
  30. de Haan, K.; Rivenson, Y.; Wu, Y.; Ozcan, A. Deep-Learning-Based Image Reconstruction and Enhancement in Optical Microscopy. Proc. IEEE 2020, 108, 30–50. [Google Scholar] [CrossRef]
  31. Gregoretti, F.; Cesarini, E.; Lanzuolo, C.; Oliva, G.; Antonelli, L. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteinsin High-Throughput Microscopy Images. In Polycomb Group Proteins: Methods and Protocols; Lanzuolo, C., Bodega, B., Eds.; Springer New York: New York, NY, USA, 2016; pp. 181–197. [Google Scholar] [CrossRef]
  32. Yi, J.; Wu, P.; Jiang, M.; Huang, Q.; Hoeppner, D.J.; Metaxas, D.N. Attentive neural cell instance segmentation. Med. Image Anal. 2019, 55, 228–240. [Google Scholar] [CrossRef]
  33. Gregoretti, F.; Cortesi, A.; Oliva, G.; Bodega, B.; Antonelli, L. An Algorithm for the Analysis of the 3D Spatial Organization of the Genome. In Capturing Chromosome Conformation: Methods and Protocols; Bodega, B., Lanzuolo, C., Eds.; Springer US: New York, NY, USA, 2021; pp. 299–320. [Google Scholar] [CrossRef]
  34. Antonelli, L.; De Simone, V.; di Serafino, D. A view of computational models for image segmentation. In Annali dell’Universitá di Ferrara; Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  35. Arteta, C.; Lempitsky, V.S.; Noble, J.A.; Zisserman, A. Learning to Detect Cells Using Non-overlapping Extremal Regions. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2012—15th International Conference, Nice, France, 1–5 October 2012; Proceedings, Part I. Ayache, N., Delingette, H., Golland, P., Mori, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 751, pp. 348–356. [Google Scholar] [CrossRef] [Green Version]
  36. Antonelli, L.; Guarracino, M.R.; Maddalena, L.; Sangiovanni, M. Integrating imaging and omics data: A review. Biomed. Signal Process. Control. 2019, 52, 264–280. [Google Scholar] [CrossRef]
  37. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015—18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III. Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. [Google Scholar] [CrossRef]
  38. Berg, S.; Kutra, D.; Kroeger, T.; Straehle, C.N.; Kausler, B.X.; Haubold, C.; Schiegg, M.; Ales, J.; Beier, T.; Rudy, M.; et al. ilastik: Interactive machine learning for (bio)image analysis. Nat. Methods 2019, 16, 1226–1232. [Google Scholar] [CrossRef]
  39. Carpenter, A.; Jones, T.; Lamprecht, M.; Clarke, C.; Kang, I.; Friman, O.; Guertin, D.A.; Chang, J.H.; Lindquist, R.A.; Moffat, J.; et al. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 2006, 7, R100. [Google Scholar] [CrossRef]
  40. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016—14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I. Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 21–37. [Google Scholar] [CrossRef]
  41. Beucher, S.; Meyer, F. The Morphological Approach to Segmentation: The Watershed Transformation. In Mathematical Morphology in Image Processing; Thompson, B.J., Dougherty, E., Eds.; CRC Press: Boca Raton, FL, USA, 1993; p. 49. [Google Scholar] [CrossRef]
  42. Scherr, T.; Löffler, K.; Böhland, M.; Mikut, R. Cell segmentation and tracking using CNN-based distance predictions and a graph-based matching strategy. PLoS ONE 2020, 15, e0243219. [Google Scholar] [CrossRef] [PubMed]
  43. Nishimura, K.; Wang, C.; Watanabe, K.; Fei Elmer Ker, D.; Bise, R. Weakly supervised cell instance segmentation under various conditions. Med. Image Anal. 2021, 73, 102182. [Google Scholar] [CrossRef] [PubMed]
  44. Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max- flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1124–1137. [Google Scholar] [CrossRef] [PubMed]
  45. Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 2021, 18, 100–106. [Google Scholar] [CrossRef]
  46. Stringer, C.; Pachitariu, M. Cellpose 2.0: How to train your own model. bioRxiv 2022. [Google Scholar] [CrossRef]
  47. Borensztejn, K.; Tyrna, P.; Gaweł, A.M.; Dziuba, I.; Wojcik, C.; Bialy, L.P.; Mlynarczuk-Bialy, I. Classification of Cell-in-Cell Structures: Different Phenomena with Similar Appearance. Cells 2021, 10, 2569. [Google Scholar] [CrossRef]
  48. Su, Y.T.; Lu, Y.; Chen, M.; Liu, A.A. Spatiotemporal joint mitosis detection using CNN-LSTM network in time-lapse phase contrast microscopy images. IEEE Access 2017, 5, 18033–18041. [Google Scholar] [CrossRef]
  49. Mao, Y.; Yin, Z. Two-Stream Bidirectional Long Short-Term Memory for Mitosis Event Detection and Stage Localization in Phase-Contrast Microscopy Images. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2017; Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S., Eds.; Springer: Cham, Switzerland, 2017; pp. 56–64. [Google Scholar] [CrossRef]
  50. Phan, H.T.H.; Kumar, A.; Feng, D.; Fulham, M.; Kim, J. Semi-supervised estimation of event temporal length for cell event detection. arXiv 2019, arXiv:1909.09946. [Google Scholar] [CrossRef]
  51. Nishimura, K.; Bise, R. Spatial-Temporal Mitosis Detection in Phase-Contrast Microscopy via Likelihood Map Estimation by 3DCNN. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1811–1815. [Google Scholar] [CrossRef]
  52. Milletari, F.; Navab, N.; Ahmadi, S. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef]
  53. Su, Y.; Lu, Y.; Chen, M.; Liu, A. Deep Reinforcement Learning-Based Progressive Sequence Saliency Discovery Network for Mitosis Detection In Time-Lapse Phase-Contrast Microscopy Images. IEEE ACM Trans. Comput. Biol. Bioinform. 2022, 19, 854–865. [Google Scholar] [CrossRef] [PubMed]
  54. Su, Y.T.; Lu, Y.; Liu, J.; Chen, M.; Liu, A.A. Spatio-Temporal Mitosis Detection in Time-Lapse Phase-Contrast Microscopy Image Sequences: A Benchmark. IEEE Trans. Med. Imaging 2021, 40, 1319–1328. [Google Scholar] [CrossRef]
  55. Theagarajan, R.; Bhanu, B. DeephESC 2.0: Deep Generative Multi Adversarial Networks for improving the classification of hESC. PLoS ONE 2019, 14, 1–28. [Google Scholar] [CrossRef] [PubMed]
  56. Guan, B.X.; Bhanu, B.; Talbot, P.; Lin, S. Bio-Driven Cell Region Detection in Human Embryonic Stem Cell Assay. IEEE/ACM Trans. Comput. Biol. Bioinform. 2014, 11, 604–611. [Google Scholar] [CrossRef] [PubMed]
  57. Durugkar, I.; Gemp, I.M.; Mahadevan, S. Generative Multi-Adversarial Networks. arXiv 2017, arXiv:1611.01673. [Google Scholar]
  58. La Greca, A.D.; Pérez, N.; Castañeda, S.; Milone, P.M.; Scarafía, M.A.; Möbbs, A.M.; Waisman, A.; Moro, L.N.; Sevlever, G.E.; Luzzani, C.D.; et al. celldeath: A tool for detection of cell death in transmitted light microscopy images by deep learning-based visual recognition. PLoS ONE 2021, 16, e0253666. [Google Scholar] [CrossRef]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  60. Magnusson, K.E.G.; Jaldén, J.; Gilbert, P.M.; Blau, H.M. Global Linking of Cell Tracks Using the Viterbi Algorithm. IEEE Trans. Med. Imaging 2015, 34, 911–929. [Google Scholar] [CrossRef]
  61. Grah, J.S.; Harrington, J.A.; Koh, S.B.; Pike, J.A.; Schreiner, A.; Burger, M.; Schönlieb, C.B.; Reichelt, S. Mathematical imaging methods for mitosis analysis in live-cell phase contrast microscopy. Methods 2017, 115, 91–99, Image Processing for Biologists. [Google Scholar] [CrossRef]
  62. Rea, D.; Perrino, G.; di Bernardo, D.; Marcellino, L.; Romano, D. A GPU algorithm for tracking yeast cells in phase-contrast microscopy images. Int. J. High Perform. Comput. Appl. 2019, 33. [Google Scholar] [CrossRef]
  63. Tsai, H.F.; Gajda, J.; Sloan, T.F.; Rares, A.; Shen, A.Q. Usiigaci: Instance-aware cell tracking in stain-free phase contrast microscopy enabled by machine learning. SoftwareX 2019, 9, 230–237. [Google Scholar] [CrossRef]
  64. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
  65. Allan, D.B.; Caswell, T.; Keim, N.C.; van der Wel, C.M. trackpy: Trackpy v0.4.1; Zenodo. 2018, 1226458. [Google Scholar] [CrossRef]
  66. Frank, E.; Hall, M.A.; Witten, I.H. The WEKA Workbench. Online Appendix for Data Mining: Practical Machine Learning Tools and Techniques, 3rd ed.; Morgan Kaufmann Series in Data Management Systems, Morgan Kaufmann: Amsterdam, The Netherlands, 2011. [Google Scholar]
  67. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  68. Arganda-Carreras, I.; Kaynig, V.; Rueden, C.; Eliceiri, K.W.; Schindelin, J.; Cardona, A.; Sebastian Seung, H. Trainable Weka Segmentation: A machine learning tool for microscopy pixel classification. Bioinformatics 2017, 33, 2424–2426. [Google Scholar] [CrossRef] [PubMed]
  69. Von Chamier, L.; Laine, R.F.; Jukkala, J.; Spahn, C.; Krentzel, D.; Nehme, E.; Lerche, M.; Hernández-Pérez, S.; Mattila, P.K.; Karinou, E.; et al. ZeroCostDL4Mic: An open platform to use Deep-Learning in Microscopy. BioRxiv 2020. [Google Scholar] [CrossRef]
  70. Gómez-de Mariscal, E.; García-López-de Haro, C.; Ouyang, W.; Donati, L.; Lundberg, E.; Unser, M.; Muñoz-Barrutia, A.; Sage, D. DeepImageJ: A user-friendly environment to run deep learning models in ImageJ. bioRxiv 2021. [Google Scholar] [CrossRef]
  71. Ouyang, W.; Beuttenmueller, F.; Gómez-de Mariscal, E.; Pape, C.; Burke, T.; Garcia-López-de Haro, C.; Russell, C.; Moya-Sans, L.; de-la Torre-Gutiérrez, C.; Schmidt, D.; et al. BioImage Model Zoo: A Community-Driven Resource for Accessible Deep Learning in BioImage Analysis. bioRxiv 2022. [Google Scholar] [CrossRef]
  72. Aragaki, H.; Ogoh, K.; Kondo, Y.; Aoki, K. LIM Tracker: A software package for cell tracking and analysis with advanced interactivity. Sci. Rep. 2022, 12, 2702. [Google Scholar] [CrossRef] [PubMed]
  73. Ershov, D.; Phan, M.; Pylvänäinen, J.; Rigaud, S.; Le Blanc, L.; Charles-Orszag, A.; Conway, J.; Laine, R.; Roy, N.; Bonazzi, D.; et al. TrackMate 7: Integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat. Methods 2022, 19, 829–832. [Google Scholar] [CrossRef]
  74. Schmidt, U.; Weigert, M.; Broaddus, C.; Myers, G. Cell Detection with Star-Convex Polygons. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2018—21st International Conference, Granada, Spain, 16–20 September 2018; Proceedings, Part II. pp. 265–273. [Google Scholar] [CrossRef]
  75. Weigert, M.; Schmidt, U.; Haase, R.; Sugawara, K.; Myers, G. Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy. In Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 2–5 March 2020; pp. 3655–3662. [Google Scholar] [CrossRef]
  76. Ouyang, W.; Mueller, F.; Hjelmare, M.; Lundberg, E.; Zimmer, C. ImJoy: An open-source computational platform for the deep learning era. Nat. Methods 2019, 16, 1199–1200. [Google Scholar] [CrossRef]
  77. Schindelin, J.; Arganda-Carreras, I.; Frise, E.; Kaynig, V.; Longair, M.; Pietzsch, T.; Preibisch, S.; Rueden, C.; Saalfeld, S.; Schmid, B.; et al. Fiji: An open-source platform for biological-image analysis. Nat. Methods 2012, 9, 676–682. [Google Scholar] [CrossRef] [Green Version]
  78. Ouyang, W.; Winsnes, C.F.; Hjelmare, M.; Åkesson, L.; Xu, H.; Sullivan, D.P.; Lundberg, E. Analysis of the Human Protein Atlas Image Classification competition. Nat. Methods 2019, 16, 1254. [Google Scholar] [CrossRef]
  79. Jaqaman, K.; Loerke, D.; Mettlen, M.; Kuwata, H.; Grinstein, S.; Schmid, S.; Danuser, G. Robust single-particle tracking in live-cell time-lapse sequences. Nat. Methods 2008, 5, 695–702. [Google Scholar] [CrossRef]
  80. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. YOLACT++ Better Real-Time Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1108–1121. [Google Scholar] [CrossRef] [PubMed]
  81. Abdulla, W. Mask R-CNN for Object Detection and Instance Segmentation on Keras and TensorFlow. 2017. Available online: https://github.com/matterport/Mask_RCNN (accessed on 3 August 2022).
  82. Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.Y.; Girshick, R. Detectron2. 2019. Available online: https://github.com/facebookresearch/detectron2 (accessed on 3 August 2022).
  83. Tinevez, J.Y.; Perry, N.; Schindelin, J.; Hoopes, G.M.; Reynolds, G.D.; Laplantine, E.; Bednarek, S.Y.; Shorte, S.L.; Eliceiri, K.W. TrackMate: An open and extensible platform for single-particle tracking. Methods 2017, 115, 80–90, Image Processing for Biologists. [Google Scholar] [CrossRef] [PubMed]
  84. Lucas, A.M.; Ryder, P.V.; Li, B.; Cimini, B.A.; Eliceiri, K.W.; Carpenter, A.E. Open-source deep-learning software for bioimage segmentation. Mol. Biol. Cell 2021, 32, 823–829. [Google Scholar] [CrossRef] [PubMed]
  85. Smith, K.; Piccinini, F.; Balassa, T.; Koos, K.; Danka, T.; Azizpour, H.; Horvath, P. Phenotypic Image Analysis Software Tools for Exploring and Understanding Big Image Data from Cell-Based Assays. Cell Syst. 2018, 6, 636–653. [Google Scholar] [CrossRef] [PubMed]
  86. Roberts, B.; Haupt, A.; Tucker, A.; Grancharova, T.; Arakaki, J.; Fuqua, M.A.; Nelson, A.; Hookway, C.; Ludmann, S.A.; Mueller, I.A.; et al. Systematic gene tagging using CRISPR/Cas9 in human stem cells to illuminate cell organization. Mol. Biol. Cell 2017, 28, 2854–2874. [Google Scholar] [CrossRef]
  87. Gurari, D.; Theriault, D.; Sameki, M.; Isenberg, B.; Pham, T.A.; Purwada, A.; Solski, P.; Walker, M.; Zhang, C.; Wong, J.Y.; et al. How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-experts, and Algorithms. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 1169–1176. [Google Scholar] [CrossRef]
  88. Schwendy, M.; Unger, R.E.; Parekh, S.H. EVICAN—A balanced dataset for algorithm development in cell and nucleus segmentation. Bioinformatics 2020, 36, 3863–3870. [Google Scholar] [CrossRef]
  89. Maska, M.; Ulman, V.; Svoboda, D.; Matula, P.; Matula, P.; Ederra, C.; Urbiola, A.; España, T.; Venkatesan, S.; Balak, D.M.W.; et al. A benchmark for comparison of cell tracking algorithms. Bioinformatics 2014, 30, 1609–1617. [Google Scholar] [CrossRef] [Green Version]
  90. Anjum, S.; Gurari, D. CTMC: Cell Tracking with Mitosis Detection Dataset Challenge. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 4228–4237. [Google Scholar] [CrossRef]
  91. Ker, D.; Eom, S.; Sanami, S.; Bise, R.; Pascale, C.; Yin, Z.; Huh, S.; Osuna-Highley, E.; Junkers, S.; Helfrich, C.; et al. Phase contrast time-lapse microscopy datasets with automated and manual cell tracking annotations. Sci. Data 2018, 5. [Google Scholar] [CrossRef]
  92. Ljosa, V.; Sokolnicki, K.; Carpenter, A. Annotated high-throughput microscopy image sets for validation. Nat. Methods 2012, 9, 637. [Google Scholar] [CrossRef]
  93. Tian, C.; Yang, C.; Spencer, S.L. EllipTrack: A Global-Local Cell-Tracking Pipeline for 2D Fluorescence Time-Lapse Microscopy. Cell Rep. 2020, 32, 107984. [Google Scholar] [CrossRef]
  94. Lin, T.; Maire, M.; Belongie, S.J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the Computer Vision—ECCV 2014—13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V. Fleet, D.J., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8693, pp. 740–755. [Google Scholar] [CrossRef]
  95. Matula, P.; Maška, M.; Sorokin, D.V.; Matula, P.; de Solórzano, C.O.; Kozubek, M. Cell tracking accuracy measurement based on comparison of acyclic oriented graphs. PLoS ONE 2015, 10, e0144959. [Google Scholar] [CrossRef]
  96. Dendorfer, P.; Ošep, A.; Milan, A.; Schindler, K.; Cremers, D.; Reid, I.; Roth, S.; Leal-Taixé, L. MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking. arXiv 2020, arXiv:2010.07548. [Google Scholar] [CrossRef]
  97. Bernardin, K.; Stiefelhagen, R. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP J. Image Video Process. 2008, 2008. [Google Scholar] [CrossRef]
  98. Milan, A.; Leal-Taixe, L.; Reid, I.; Roth, S.; Schindler, K. MOT16: A Benchmark for Multi-Object Tracking. arXiv 2016, arXiv:1603.00831. [Google Scholar] [CrossRef]
  99. Ristani, E.; Solera, F.; Zou, R.S.; Cucchiara, R.; Tomasi, C. Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking. arXiv 2016, arXiv:1609.01775. [Google Scholar] [CrossRef]
  100. Luiten, J.; Osep, A.; Dendorfer, P.; Torr, P.; Geiger, A.; Leal-Taixé, L.; Leibe, B. HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. Int. J. Comput. Vis. 2020, 1–31. [Google Scholar] [CrossRef]
  101. Xing, F.; Xie, Y.; Su, H.; Liu, F.; Yang, L. Deep Learning in Microscopy Image Analysis: A Survey. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 4550–4568. [Google Scholar] [CrossRef]
Figure 3. Example data from the EVICAN dataset [88]: (a) original image (ID 92_ACHN); (b) image with annotated cells (red) and nuclei (blue); (c) image where non-annotated areas have been blurred.
Table 2. Details of available annotated data for cell segmentation in traditional label-free images: dataset name (Name); reference ([Ref]); type of microscopy data (Content); url (Link); number of annotated images (# imgs), annotated cells (# cells), and cell lines (#cell lines). All links were accessed on 28 August 2022.
Name | [Ref] | Content | Link | # Imgs | # Cells | # Cell Lines
Allen Cell Explorer | [86] | 3D Label-Free | https://www.allencell.org/data-downloading.html/#sectionLabelFreeTrainingData | ~18,000 | ~39,000 | 1
BU-BIL | [87] | PhC | https://www.cs.bu.edu/fac/betke/BiomedicalImageSegmentation/ | 151 | 151 | 3
CTC | [22] | PhC, DIC, BF | http://www.celltrackingchallenge.net | 213 | 1980 | 5
DeepCell | [23] | PhC | https://doi.org/10.1371/journal.pcbi.1005177.s021, https://doi.org/10.1371/journal.pcbi.1005177.s022, https://doi.org/10.1371/journal.pcbi.1005177.s023 | 45 | ~4300 | 1
EVICAN | [88] | PhC, BF | https://edmond.mpdl.mpg.de/dataset.xhtml?persistentId=doi:10.17617/3.AJBV1S | 4640 | 26,428 | 30
fastER | [25] | PhC, BF | https://bsse.ethz.ch/csd/software/faster.html | 39 | 1653 (+953) 1 | 2
LIVECell | [26] | PhC | https://sartorius-research.github.io/LIVECell/ | 5239 | 1,686,352 | 8
Usiigaci | [63] | PhC | https://github.com/ElsevierSoftwareX/SOFTX_2018_158 | 37 | 2641 | 1
Vicar et al. | [20] | PhC, DIC, HMC | https://zenodo.org/record/1250729 | 32 | 4546 | 1
1 For the other 953 cells, only centroids are provided.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
