J. Imaging, Volume 4, Issue 7 (July 2018) – 13 articles

Cover Story: The automated detection of glomeruli in Whole Slide Images (WSIs) of kidney specimens is an important step toward computerized image analysis in the field of renal pathology. This paper describes an approach that combines a sliding window with Faster R-CNN for the fully automated detection of glomeruli in multistained human WSIs. The evaluation results on a dataset consisting of more than 33,000 annotated glomeruli obtained from 800 WSIs showed that the approach produces F-measures with different stains that are comparable to or higher than those of other recently published approaches.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 357 KiB  
Article
An Ensemble SSL Algorithm for Efficient Chest X-Ray Image Classification
by Ioannis E. Livieris, Andreas Kanavos, Vassilis Tampakas and Panagiotis Pintelas
J. Imaging 2018, 4(7), 95; https://doi.org/10.3390/jimaging4070095 - 20 Jul 2018
Cited by 34 | Viewed by 5715
Abstract
A critical component in the computer-aided medical diagnosis of digital chest X-rays is the automatic detection of lung abnormalities, since effective identification at an initial stage is a significant and crucial factor in a patient's treatment. The vigorous advances in computer and digital technologies have ultimately led to the development of large repositories of labeled and unlabeled images. Due to the effort and expense involved in labeling data, training datasets are of limited size, while, in contrast, electronic medical record systems contain a significant number of unlabeled images. Semi-supervised learning algorithms have become a hot topic of research as an alternative to traditional classification methods, exploiting the explicit classification information of labeled data together with the knowledge hidden in the unlabeled data to build powerful and effective classifiers. In the present work, we evaluate the performance of an ensemble semi-supervised learning algorithm for the classification of chest X-rays of tuberculosis. The efficacy of the presented algorithm is demonstrated by several experiments and confirmed by statistical nonparametric tests, illustrating that reliable and robust prediction models can be developed utilizing a few labeled and many unlabeled data.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)
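To make the ensemble semi-supervised idea concrete, here is a minimal self-training-with-voting sketch in Python with scikit-learn: several base classifiers are fit on the labeled pool, unlabeled samples that the averaged ensemble labels with high confidence are pseudo-labeled and moved into the pool, and the models are refit. The base learners, confidence threshold, and synthetic feature vectors are illustrative assumptions, not the paper's exact algorithm or data.

```python
# Generic self-training-with-voting sketch (not the paper's exact algorithm): three
# base classifiers are fit on the labeled pool, unlabeled samples receiving a
# confident, averaged ensemble prediction are pseudo-labeled and added to the pool,
# and the ensemble is refit. The classifiers, threshold, and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def ensemble_self_train(X_lab, y_lab, X_unlab, rounds=5, threshold=0.9):
    models = [LogisticRegression(max_iter=1000), GaussianNB(),
              RandomForestClassifier(n_estimators=100, random_state=0)]
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        for m in models:
            m.fit(X_pool, y_pool)
        probas = np.mean([m.predict_proba(X_unlab) for m in models], axis=0)
        conf, pred = probas.max(axis=1), probas.argmax(axis=1)
        keep = conf >= threshold             # pseudo-label only confident samples
        if not keep.any():
            break
        X_pool = np.vstack([X_pool, X_unlab[keep]])
        y_pool = np.concatenate([y_pool, pred[keep]])
        X_unlab = X_unlab[~keep]
    return models

# toy demonstration with synthetic feature vectors standing in for X-ray features
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
ensemble = ensemble_self_train(X[:100], y[:100], X[100:])
```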
10 pages, 1614 KiB  
Article
ECG Electrode Placements for Magnetohydrodynamic Voltage Suppression
by T. Stan Gregory, John N. Oshinski and Zion Tsz Ho Tse
J. Imaging 2018, 4(7), 94; https://doi.org/10.3390/jimaging4070094 - 17 Jul 2018
Cited by 4 | Viewed by 6157
Abstract
This study aims to investigate a set of electrocardiogram (ECG) electrode lead locations to improve the quality of four-lead ECG signals acquired during magnetic resonance imaging (MRI). This was achieved by identifying electrode placements that minimized the induced magnetohydrodynamic voltages (VMHD) in the ECG signals. Reducing VMHD can improve the accuracy of QRS complex detection in the ECG as well as heartbeat synchronization between MRI and ECG during the acquisition of cardiac cine images. A vector model based on thoracic geometry was developed to predict induced VMHD and to optimize four-lead ECG electrode placement for improved MRI gating. Four human subjects were recruited for vector model establishment (Group 1), and five human subjects were recruited for validation of the VMHD reduction in the proposed four-lead ECG (Group 2). The vector model was established using 12-lead ECG data recorded from the four healthy subjects of Group 1 at 3 Tesla, and a gradient descent optimization routine was used to predict the optimal four-lead ECG placement based on VMHD vector alignment. The optimized four-lead ECG was then validated in the five healthy subjects of Group 2 by comparing the standard and proposed lead placements. A 43.41% reduction in VMHD was observed in ECGs using the proposed electrode placement, and the QRS complex was preserved. A VMHD-minimized electrode placement for four-lead ECG gating was presented and shown to reduce induced magnetohydrodynamic (MHD) signals, potentially allowing for improved physiological monitoring during cardiac MRI.
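The optimization idea (reducing the projection of the MHD disturbance onto the lead vectors) can be sketched as follows. This is a hypothetical toy formulation: the MHD vector `v_mhd`, the spherical-angle lead parameterization, and the unconstrained objective are all assumptions made for illustration, and a realistic version would also constrain placements to preserve the QRS signal, as the paper does.

```python
# Hypothetical toy formulation: each ECG lead is modeled as a unit vector on the
# torso, the induced MHD voltage is taken as the projection of an assumed MHD
# disturbance vector onto the lead vectors, and the lead angles are optimized to
# minimize that projection. A realistic formulation would also constrain the leads
# so that the cardiac (QRS) signal is preserved.
import numpy as np
from scipy.optimize import minimize

v_mhd = np.array([0.8, 0.3, 0.1])  # assumed MHD disturbance vector (arbitrary units)

def lead_vector(theta, phi):
    """Unit lead vector parameterized by spherical angles."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def total_vmhd(angles):
    # angles holds (theta, phi) for each of the four leads
    pairs = angles.reshape(-1, 2)
    return sum(abs(lead_vector(t, p) @ v_mhd) for t, p in pairs)

x0 = np.random.default_rng(0).uniform(0.0, np.pi, size=8)  # initial placements
result = minimize(total_vmhd, x0, method="Nelder-Mead")
print("residual MHD projection:", result.fun)
```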
2 pages, 130 KiB  
Editorial
Detection of Moving Objects
by Thierry Bouwmans
J. Imaging 2018, 4(7), 93; https://doi.org/10.3390/jimaging4070093 - 13 Jul 2018
Viewed by 3410
(This article belongs to the Special Issue Detection of Moving Objects)
28 pages, 1047 KiB  
Article
Background Subtraction Based on a New Fuzzy Mixture of Gaussians for Moving Object Detection
by Ali Darwich, Pierre-Alexandre Hébert, André Bigand and Yasser Mohanna
J. Imaging 2018, 4(7), 92; https://doi.org/10.3390/jimaging4070092 - 10 Jul 2018
Cited by 21 | Viewed by 7616
Abstract
Moving foreground detection is a very important step for many applications, such as human behavior analysis for visual surveillance, model-based action recognition, and road traffic monitoring. Background subtraction is a very popular approach, but it is difficult to apply given that it must overcome many obstacles, such as dynamic background changes, lighting variations, occlusions, and so on. In the presented work, we focus on this foreground/background segmentation problem, using type-2 fuzzy modeling to manage the uncertainty of the video process and of the data. The proposed method models the state of each pixel using an imprecise and adjustable Gaussian mixture model, which is exploited by several fuzzy classifiers to ultimately estimate the pixel class for each frame. More precisely, this decision takes into account not only the history of the pixel's evolution, but also its spatial neighborhood and its possible displacements in the previous frames. We then compare the proposed method with other closely related methods, including methods based on a Gaussian mixture model or on fuzzy sets. This comparison allows us to assess the method's performance and to propose some perspectives for future work.
(This article belongs to the Special Issue Detection of Moving Objects)
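As a point of reference for the classical (crisp) Gaussian mixture model that the paper extends with type-2 fuzzy modeling, the sketch below runs OpenCV's MOG2 background subtractor on a synthetic grayscale sequence; the fuzzy classifiers and neighborhood reasoning of the proposed method are not reproduced, and the synthetic frames are illustrative assumptions.

```python
# Baseline (crisp) Gaussian-mixture background subtraction with OpenCV's MOG2 on a
# synthetic grayscale sequence: a noisy static background with a bright square that
# moves across the frame. The type-2 fuzzy classifiers of the proposed method are
# not reproduced here.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                                detectShadows=False)
rng = np.random.default_rng(0)
mask = None
for t in range(100):
    frame = (100 + 20 * rng.random((120, 160))).astype(np.uint8)  # noisy background
    x = (10 + t) % 140
    frame[40:60, x:x + 20] = 220                                  # moving object
    mask = subtractor.apply(frame)  # 255 = foreground, 0 = background

cv2.imwrite("foreground_mask.png", mask)  # mask for the last frame
```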
19 pages, 5364 KiB  
Article
Faster R-CNN-Based Glomerular Detection in Multistained Human Whole Slide Images
by Yoshimasa Kawazoe, Kiminori Shimamoto, Ryohei Yamaguchi, Yukako Shintani-Domoto, Hiroshi Uozaki, Masashi Fukayama and Kazuhiko Ohe
J. Imaging 2018, 4(7), 91; https://doi.org/10.3390/jimaging4070091 - 4 Jul 2018
Cited by 61 | Viewed by 10140
Abstract
The detection of objects of interest in high-resolution digital pathological images is a key part of diagnosis and is a labor-intensive task for pathologists. In this paper, we describe a Faster R-CNN-based approach for the detection of glomeruli in multistained whole slide images (WSIs) of human renal tissue sections. Faster R-CNN is a state-of-the-art general object detection method based on a convolutional neural network that simultaneously proposes object bounds and objectness scores at each point in an image. The method takes an image obtained from a WSI with a sliding window and classifies and localizes every glomerulus in the image by drawing bounding boxes. We configured Faster R-CNN with a pretrained Inception-ResNet model and retrained it to adapt it to our task, then evaluated it on a large dataset consisting of more than 33,000 annotated glomeruli obtained from 800 WSIs. The results showed that the approach produces average F-measures with different stains that are comparable to or higher than those of other recently published approaches. This approach could have practical application in hospitals and laboratories for the quantitative analysis of glomeruli in WSIs and could, potentially, lead to a better understanding of chronic glomerulonephritis.
(This article belongs to the Special Issue Medical Image Analysis)
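A minimal sketch of the sliding-window detection pipeline is given below, assuming a torchvision Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the paper's Inception-ResNet configuration; the weights are untrained placeholders, and the tile size, stride, and score threshold are illustrative assumptions.

```python
# Sliding-window detection sketch: tile a large image, run a Faster R-CNN detector
# on each tile, and map the detected boxes back to whole-slide coordinates. The
# torchvision ResNet-50 FPN model stands in for the paper's Inception-ResNet
# backbone, and its weights here are untrained placeholders; a real system would
# load weights fine-tuned on annotated glomeruli.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)  # classes: background + glomerulus
model.eval()

def detect_tiles(wsi, tile=800, stride=600, score_thr=0.5):
    """wsi: float tensor (3, H, W) in [0, 1]; returns boxes in WSI coordinates."""
    detections = []
    _, H, W = wsi.shape
    for y in range(0, max(H - tile, 0) + 1, stride):
        for x in range(0, max(W - tile, 0) + 1, stride):
            patch = wsi[:, y:y + tile, x:x + tile]
            with torch.no_grad():
                out = model([patch])[0]
            for box, score in zip(out["boxes"], out["scores"]):
                if score >= score_thr:
                    x1, y1, x2, y2 = box.tolist()
                    detections.append((x1 + x, y1 + y, x2 + x, y2 + y, float(score)))
    return detections

# toy example on a random image standing in for a region extracted from a WSI
boxes = detect_tiles(torch.rand(3, 1600, 1600))
```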
23 pages, 3290 KiB  
Article
Compressive Online Video Background–Foreground Separation Using Multiple Prior Information and Optical Flow
by Srivatsa Prativadibhayankaram, Huynh Van Luong, Thanh Ha Le and André Kaup
J. Imaging 2018, 4(7), 90; https://doi.org/10.3390/jimaging4070090 - 3 Jul 2018
Cited by 15 | Viewed by 6446
Abstract
In the context of video background–foreground separation, we propose a compressive online Robust Principal Component Analysis (RPCA) with optical flow that recursively separates a sequence of video frames into foreground (sparse) and background (low-rank) components. This separation method operates on a small set of measurements taken per frame, in contrast to conventional batch-based RPCA, which processes the full data. The proposed method also leverages multiple prior information by incorporating previously separated background and foreground frames in an n-ℓ1 minimization problem. Moreover, optical flow is utilized to estimate motion between the previous foreground frames and then compensate for the motion to obtain higher-quality prior foregrounds that improve the separation. Our method is tested on several video sequences in different scenarios for online background–foreground separation given compressive measurements. The visual and quantitative results show that the proposed method outperforms other existing methods.
(This article belongs to the Special Issue Detection of Moving Objects)
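For orientation, the sketch below implements plain batch RPCA (principal component pursuit) on a matrix of vectorized frames via alternating singular-value thresholding and soft thresholding; the paper's compressive online variant with n-ℓ1 priors and optical flow is not reproduced, and the regularization constants are the usual textbook defaults.

```python
# Reference batch RPCA (principal component pursuit) via an inexact augmented
# Lagrangian scheme: a data matrix whose columns are vectorized frames is split into
# a low-rank background L and a sparse foreground S. The compressive online method
# of the paper is not reproduced here.
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, iters=100, tol=1e-6):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        # singular-value thresholding for the low-rank component
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # entrywise soft thresholding for the sparse component
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) / (np.linalg.norm(M) + 1e-12) < tol:
            break
    return L, S

# toy data: columns are vectorized frames of a static background plus sparse changes
rng = np.random.default_rng(0)
frames = rng.random((400, 1)) @ np.ones((1, 60))
frames[rng.integers(0, 400, 120), rng.integers(0, 60, 120)] += 1.0
L, S = rpca_pcp(frames)
```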
34 pages, 7516 KiB  
Article
Digital Comics Image Indexing Based on Deep Learning
by Nhu-Van Nguyen, Christophe Rigaud and Jean-Christophe Burie
J. Imaging 2018, 4(7), 89; https://doi.org/10.3390/jimaging4070089 - 2 Jul 2018
Cited by 44 | Viewed by 18060
Abstract
The digital comic book market is now growing every year, mixing digitized and digital-born comics. Digitized comics suffer from limited automatic content understanding, which restricts online content search and reading applications. This study shows how to combine state-of-the-art image analysis methods to encode and index images into an XML-like text file. The content description file can then be used to automatically split comic book images into sub-images corresponding to panels that are easily indexable with relevant information about their respective content. This allows advanced searches, such as keyword searches over the text spoken by specific comic characters, and action and scene retrieval using natural language processing. We address panel, balloon, text, comic character, and face detection using both traditional approaches and breakthrough deep learning models, as well as text recognition using an LSTM model. Evaluations on a dataset composed of online library content are presented, and a new public dataset is also proposed.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)
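The indexing side can be illustrated with a short sketch that writes hypothetical panel and balloon detections into an XML description file using Python's standard library; the element and attribute names below are an assumed schema, not the paper's actual format.

```python
# Illustration of the indexing step only: detected panels, balloons, and recognized
# text are written to an XML description file that a search engine could query.
# The element and attribute names are an assumed schema, not the paper's format.
import xml.etree.ElementTree as ET

detections = {
    "page001.jpg": {
        "panels": [(12, 20, 410, 300), (12, 320, 410, 610)],
        "balloons": [{"bbox": (60, 40, 220, 120), "speaker": "hero",
                      "text": "We meet again!"}],
    }
}

root = ET.Element("comicbook", title="example-album")
for page, content in detections.items():
    page_el = ET.SubElement(root, "page", image=page)
    for x1, y1, x2, y2 in content["panels"]:
        ET.SubElement(page_el, "panel", bbox=f"{x1},{y1},{x2},{y2}")
    for balloon in content["balloons"]:
        el = ET.SubElement(page_el, "balloon", speaker=balloon["speaker"],
                           bbox=",".join(map(str, balloon["bbox"])))
        el.text = balloon["text"]

ET.ElementTree(root).write("comic_index.xml", encoding="utf-8", xml_declaration=True)
```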
21 pages, 4819 KiB  
Article
Imaging with a Commercial Electron Backscatter Diffraction (EBSD) Camera in a Scanning Electron Microscope: A Review
by Nicolas Brodusch, Hendrix Demers and Raynald Gauvin
J. Imaging 2018, 4(7), 88; https://doi.org/10.3390/jimaging4070088 - 1 Jul 2018
Cited by 28 | Viewed by 8950
Abstract
Scanning electron microscopy is widespread in the field of materials science and research, especially because of its high surface sensitivity, due to the increased interactions of electrons with the target material's atoms compared to X-ray-oriented methods. Among the techniques available in scanning electron microscopy (SEM), electron backscatter diffraction (EBSD) is used to gather information regarding the crystallinity and the chemistry of crystalline and amorphous regions of a specimen. By post-processing the diffraction patterns, or the images captured by the EBSD detector screen in this manner, specific imaging contrasts are generated that can be used to understand some of the mechanisms involved in several imaging modes. In this manuscript, we review the benefits of this procedure regarding topographic, compositional, diffraction, and magnetic domain contrasts. This work shows preliminary and encouraging results regarding this non-conventional use of the EBSD detector. The method is becoming viable with the advent of new EBSD camera technologies, which allow acquisition speeds close to imaging rates. This method, named dark-field electron backscatter diffraction imaging, is described in detail, and several application examples are given in reflection as well as in transmission modes.
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
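The virtual-aperture principle behind dark-field EBSD imaging can be sketched as follows: for every beam position, the intensity inside a chosen region of the recorded EBSD pattern is integrated, and scanning the beam yields one image per aperture. The array shapes and the synthetic patterns below are assumptions for illustration; real patterns would come from the EBSD camera.

```python
# Virtual-aperture sketch: for each beam position, integrate the intensity inside a
# chosen region of the EBSD detector pattern; scanning the beam yields one image per
# aperture position. The patterns below are synthetic stand-ins for camera data.
import numpy as np

rng = np.random.default_rng(0)
scan_h, scan_w, det_h, det_w = 32, 32, 60, 60
patterns = rng.random((scan_h, scan_w, det_h, det_w))  # one pattern per scan point

def virtual_image(patterns, row, col, radius):
    """Sum detector intensity inside a circular aperture centred at (row, col)."""
    yy, xx = np.mgrid[0:patterns.shape[2], 0:patterns.shape[3]]
    aperture = (yy - row) ** 2 + (xx - col) ** 2 <= radius ** 2
    return patterns[..., aperture].sum(axis=-1)

dark_field = virtual_image(patterns, row=10, col=48, radius=6)    # off-centre aperture
bright_field = virtual_image(patterns, row=30, col=30, radius=6)  # central aperture
```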
19 pages, 3130 KiB  
Article
A Survey of Comics Research in Computer Science
by Olivier Augereau, Motoi Iwata and Koichi Kise
J. Imaging 2018, 4(7), 87; https://doi.org/10.3390/jimaging4070087 - 26 Jun 2018
Cited by 37 | Viewed by 12234
Abstract
Graphic novels such as comic books and mangas are well known all over the world. The digital transition has started to change the way people read comics: more and more on smartphones and tablets, and less and less on paper. In recent years, a wide variety of research about comics has been proposed and might change the way comics are created, distributed, and read in the future. Early work focused on low-level document image analysis, as comic books are complex documents containing text, drawings, balloons, panels, onomatopoeia, etc. Different fields of computer science, such as multimedia, artificial intelligence, and human–computer interaction, have since covered research about user interaction and content generation, each with different sets of values. We review the previous research about comics in computer science to state what has been done and to give some insights about the main outlooks.
22 pages, 10901 KiB  
Article
LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation
by Benjamin Laugraud, Sébastien Piérard and Marc Van Droogenbroeck
J. Imaging 2018, 4(7), 86; https://doi.org/10.3390/jimaging4070086 - 25 Jun 2018
Cited by 21 | Viewed by 5381
Abstract
Given a video sequence acquired with a fixed camera, the stationary background generation problem consists of generating a unique image estimating the stationary background of the sequence. During the IEEE Scene Background Modeling Contest (SBMC) organized in 2016, we presented the LaBGen-P method. In short, this method relies on a motion detection algorithm to select, for each pixel location, a given number of pixel intensities that are most likely static, by keeping the ones with the smallest quantities of motion. These quantities are estimated by aggregating the motion scores returned by the motion detection algorithm in the spatial neighborhood of the pixel. After this selection process, the background image is generated by blending the selected intensities with a median filter. In our previous works, we showed that using a temporally memoryless motion detection, i.e., detecting motion between two frames without relying on additional temporal information, leads our method to achieve the best performance. In this work, we go one step further by developing LaBGen-P-Semantic, a variant of LaBGen-P whose motion detection step is built on the current frame only by using semantic segmentation. For this purpose, two intra-frame motion detection algorithms, detecting motion from a unique frame, are presented and compared. Our experiments, carried out on the Scene Background Initialization (SBI) and SceneBackgroundModeling.NET (SBMnet) datasets, show that leveraging semantic segmentation improves the robustness against intermittent motions, background motions, and very short video sequences, which are among the main challenges in the background generation field. Moreover, our results confirm that using an intra-frame motion detection is an appropriate choice for our method and paves the way for more techniques based on semantic segmentation.
(This article belongs to the Special Issue Detection of Moving Objects)
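A minimal background-generation sketch in the spirit of LaBGen-P is shown below: per pixel, the frames with the smallest motion scores are kept and blended with a median. Simple frame differencing stands in for the paper's intra-frame, semantic-segmentation-based motion detection, and the frame count and selection size are illustrative assumptions.

```python
# Background-generation sketch in the spirit of LaBGen-P: per pixel, keep the S
# frames with the smallest motion scores and blend them with a median. Simple frame
# differencing stands in for the paper's intra-frame, semantic-segmentation-based
# motion detection.
import numpy as np

def generate_background(frames, S=10):
    """frames: array (T, H, W) of grayscale frames; returns an (H, W) background."""
    frames = frames.astype(np.float32)
    motion = np.abs(np.diff(frames, axis=0))       # (T-1, H, W) motion scores
    motion = np.concatenate([motion[:1], motion])  # pad so every frame has a score
    order = np.argsort(motion, axis=0)[:S]         # indices of the least-motion frames
    selected = np.take_along_axis(frames, order, axis=0)
    return np.median(selected, axis=0)

rng = np.random.default_rng(0)
video = np.full((50, 60, 80), 120.0) + rng.normal(0.0, 2.0, (50, 60, 80))
video[20:30, 20:40, 30:50] = 250.0                 # transient foreground object
background = generate_background(video)
```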
28 pages, 3976 KiB  
Article
Fisher Vector Coding for Covariance Matrix Descriptors Based on the Log-Euclidean and Affine Invariant Riemannian Metrics
by Ioana Ilea, Lionel Bombrun, Salem Said and Yannick Berthoumieu
J. Imaging 2018, 4(7), 85; https://doi.org/10.3390/jimaging4070085 - 22 Jun 2018
Cited by 8 | Viewed by 4785
Abstract
This paper presents an overview of coding methods used to encode a set of covariance matrices. Starting from a Gaussian mixture model (GMM) adapted to the Log-Euclidean (LE) or affine invariant Riemannian metric, we propose a Fisher Vector (FV) descriptor adapted to each of these metrics: the Log-Euclidean Fisher Vectors (LE FV) and the Riemannian Fisher Vectors (RFV). Experiments on texture and head pose image classification are conducted to compare these two metrics and to illustrate the potential of these FV-based descriptors compared to state-of-the-art BoW- and VLAD-based descriptors. Particular attention is paid to illustrating the advantage of using the Fisher information matrix during the derivation of the FV. Finally, some experiments are conducted in order to provide a fairer comparison between the different coding strategies. This includes comparisons between anisotropic and isotropic models, and an analysis of the estimation performance of the GMM dispersion parameter for covariance matrices of large dimension.
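The Log-Euclidean embedding at the heart of the LE FV can be sketched as follows: each SPD covariance matrix is mapped through the matrix logarithm and vectorized, after which a Euclidean GMM is fit and mean-gradient Fisher-vector statistics are computed. The synthetic matrices, the diagonal GMM, and the restriction to mean gradients are simplifying assumptions; the paper's full LE FV and RFV derivations are not reproduced.

```python
# Log-Euclidean embedding sketch: each SPD covariance matrix is mapped through the
# matrix logarithm and vectorized, after which a Euclidean (here diagonal) GMM is
# fit and mean-gradient Fisher-vector statistics are computed.
import numpy as np
from scipy.linalg import logm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def log_euclidean_vector(C):
    """Vectorize the matrix log of an SPD matrix (upper triangle, off-diagonal scaled)."""
    L = logm(C).real
    i, j = np.triu_indices_from(L)
    w = np.where(i == j, 1.0, np.sqrt(2.0))  # preserves the Log-Euclidean inner product
    return w * L[i, j]

def random_spd(d=5):
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)

X = np.array([log_euclidean_vector(random_spd()) for _ in range(200)])
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X)

# Fisher-vector-style statistics w.r.t. the component means for this descriptor set
gamma = gmm.predict_proba(X)                      # (N, K) posteriors
diff = (X[:, None, :] - gmm.means_[None]) / np.sqrt(gmm.covariances_[None])
fv_means = (gamma[..., None] * diff).sum(axis=0) / (len(X) * np.sqrt(gmm.weights_)[:, None])
print(fv_means.ravel().shape)
```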
2 pages, 150 KiB  
Editorial
Document Image Processing
by Laurence Likforman-Sulem and Ergina Kavallieratou
J. Imaging 2018, 4(7), 84; https://doi.org/10.3390/jimaging4070084 - 22 Jun 2018
Cited by 2 | Viewed by 4047
(This article belongs to the Special Issue Document Image Processing)
16 pages, 2484 KiB  
Article
Evaluation of 3D/2D Imaging and Image Processing Techniques for the Monitoring of Seed Imbibition
by Etienne Belin, Clément Douarre, Nicolas Gillard, Florence Franconi, Julio Rojas-Varela, François Chapeau-Blondeau, Didier Demilly, Jérôme Adrien, Eric Maire and David Rousseau
J. Imaging 2018, 4(7), 83; https://doi.org/10.3390/jimaging4070083 - 21 Jun 2018
Cited by 12 | Viewed by 6468
Abstract
Seed imbibition is a very important process in plant biology by which, thanks to a simple intake of water, a dry seed may turn into a developing organism. In natural conditions, this process occurs in the soil, where direct observation is difficult. Monitoring seed imbibition with non-invasive imaging techniques is therefore an important, and possibly challenging, task if one tries to perform it in natural conditions. In this report, we describe a set of four different imaging techniques that enable addressing this task either in 3D or in 2D. For each technique, the following items are proposed: a detailed experimental protocol to acquire images of the imbibition process; an illustration with real data of the significance of the physical quantities measured, in terms of their relation to the intake of water by the seed; and complete image analysis pipelines to extract dynamic information on the imbibition process from such monitoring experiments. A final discussion compares the advantages and current limitations of each technique, in addition to elements concerning the associated throughput and cost. These are criteria especially relevant to the field of plant phenotyping, where large populations of plants are imaged to produce quantitatively significant traits after image processing.
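One simple 2D analysis of the kind such pipelines perform is tracking the projected seed area over time as a crude proxy for the swelling that accompanies water uptake; the sketch below does this on a synthetic thresholded sequence, and the threshold and synthetic frames are assumptions for illustration.

```python
# Simple 2D monitoring sketch: threshold each frame of a time-lapse sequence and
# track the projected seed area over time, a crude proxy for the swelling that
# accompanies water uptake. The frames are synthetic; any of the paper's 2D
# modalities would supply real image stacks.
import numpy as np

def seed_area_over_time(frames, threshold=0.5):
    """frames: (T, H, W) array with the seed brighter than the background."""
    return (frames > threshold).reshape(len(frames), -1).sum(axis=1)

# synthetic sequence: a disc whose radius grows as the "seed" takes up water
T, H, W = 30, 100, 100
yy, xx = np.mgrid[0:H, 0:W]
frames = np.zeros((T, H, W))
for t in range(T):
    radius = 10 + 0.5 * t
    frames[t] = ((yy - 50) ** 2 + (xx - 50) ** 2 <= radius ** 2).astype(float)

areas = seed_area_over_time(frames)
print(areas[0], areas[-1])  # projected area at the start and end of imbibition
```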