J. Imaging, Volume 4, Issue 6 (June 2018) – 10 articles

Cover Story: Deep neural network-based background subtraction (DNN-BS) has demonstrated excellent performance for change detection. However, previous works fail to detail why DNN-BSs work well; such a discussion helps to improve them. To investigate DNN-BSs, we directly observed the feature maps in all layers of a DNN-BS and identified the filters important for detection accuracy by removing specific filters from the network. From this analysis, we found that the DNN-BS performs subtraction operations in its convolutional layers and thresholding operations in its bias layers, and that scene-specific filters are generated to suppress false positives from dynamic backgrounds. (See the full paper in this issue.)
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version of record. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
24 pages, 6895 KiB  
Article
Full Reference Objective Quality Assessment for Reconstructed Background Images
by Aditee Shrotre and Lina J. Karam
J. Imaging 2018, 4(6), 82; https://doi.org/10.3390/jimaging4060082 - 19 Jun 2018
Cited by 1 | Viewed by 4938
Abstract
With an increased interest in applications that require a clean background image, such as video surveillance, object tracking, street view imaging and location-based services on web-based maps, multiple algorithms have been developed to reconstruct a background image from cluttered scenes. Traditionally, statistical measures and existing image quality techniques have been applied to evaluate the quality of the reconstructed background images. Though these quality assessment methods have been widely used in the past, their performance in evaluating the perceived quality of the reconstructed background image has not been verified. In this work, we discuss the shortcomings of existing metrics and propose a full-reference Reconstructed Background image Quality Index (RBQI) that combines color and structural information at multiple scales using a probability summation model to predict the perceived quality of the reconstructed background image given a reference image. To compare the performance of the proposed quality index with existing image quality assessment measures, we construct two different datasets consisting of reconstructed background images and corresponding subjective scores. The quality assessment measures are evaluated by correlating their objective scores with human subjective ratings. The correlation results show that the proposed RBQI outperforms all the existing approaches. Additionally, the constructed datasets and the corresponding subjective scores provide a benchmark for evaluating the performance of future metrics developed to assess the perceived quality of reconstructed background images.
(This article belongs to the Special Issue Detection of Moving Objects)
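As an illustration of the probability-summation pooling the abstract describes, here is a minimal Python sketch: per-scale distortion maps (which in RBQI would encode color and structural differences) are pooled, mapped to detection probabilities, and combined across scales under an independence assumption. The probability mapping and the exponent `beta` are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def probability_summation(per_scale_distortions, beta=3.0):
    """Combine per-scale distortion maps into one score via probability
    summation: the distortion is perceived overall unless it goes
    undetected at every scale (independence assumption)."""
    p_no_detect = 1.0
    for d in per_scale_distortions:
        pooled = np.mean(np.abs(d) ** beta) ** (1.0 / beta)  # Minkowski pooling
        p_scale = 1.0 - np.exp(-pooled)   # visibility probability at this scale
        p_no_detect *= 1.0 - p_scale
    return 1.0 - p_no_detect              # higher = more visible distortion

# Toy usage: three scales of a hypothetical distortion map.
maps = [np.random.rand(64, 64) * 0.2 for _ in range(3)]
print(probability_summation(maps))
```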

13 pages, 3124 KiB  
Article
Automated Analysis of Spatially Resolved X-ray Scattering and Micro Computed Tomography of Artificial and Natural Enamel Carious Lesions
by Hans Deyhle, Shane N. White, Lea Botta, Marianne Liebi, Manuel Guizar-Sicairos, Oliver Bunk and Bert Müller
J. Imaging 2018, 4(6), 81; https://doi.org/10.3390/jimaging4060081 - 15 Jun 2018
Cited by 6 | Viewed by 5360
Abstract
Radiography has long been the standard approach to characterize carious lesions. Spatially resolved X-ray diffraction, specifically small-angle X-ray scattering (SAXS), has recently been applied to caries research. The aims of this combined SAXS and micro computed tomography (µCT) study were to locally characterize and compare the micro- and nanostructures of one natural carious lesion and of one artificially induced enamel lesion, and to demonstrate the feasibility of an automated approach to combined SAXS and µCT data for segmenting affected and unaffected enamel. Enamel demineralized by natural or artificial caries exhibits a significantly reduced X-ray attenuation compared to sound enamel and gives rise to a drastically increased small-angle scattering signal associated with the presence of nanometer-size pores. In addition, X-ray scattering allows the assessment of the overall orientation and the degree of anisotropy of the nanostructures present. Subsequent to the characterization with µCT, specimens were analyzed using synchrotron radiation-based SAXS in transmission raster mode. The bivariate histogram plot of the projected data combined the local scattering signal intensity with the related X-ray attenuation from µCT measurements. These histograms permitted the segmentation of anatomical features, including the lesions, with micrometer precision. The natural and artificial lesions showed comparable features, but they also exhibited size and shape differences. The clear identification of the affected regions and the characterization of their nanostructure allow the artificially induced lesions to be verified against selected natural carious lesions, offering the potential to optimize artificial demineralization protocols. Analysis of joint SAXS and µCT histograms objectively segmented sound and affected enamel.
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
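To make the joint SAXS/µCT segmentation step concrete, the following Python sketch builds the bivariate histogram of attenuation versus scattering intensity from two registered volumes and labels voxels via simple threshold-defined regions. The thresholds and bin count are placeholders; the paper derives the regions from the structure of the joint histogram itself.

```python
import numpy as np

def joint_histogram_segment(mu_ct, saxs, mu_thresh, saxs_thresh, bins=128):
    """Bivariate histogram of µCT attenuation vs. SAXS scattering intensity,
    plus a toy segmentation: carious enamel shows reduced attenuation
    together with strongly increased small-angle scattering."""
    hist, mu_edges, saxs_edges = np.histogram2d(mu_ct.ravel(), saxs.ravel(),
                                                bins=bins)
    affected = (mu_ct < mu_thresh) & (saxs > saxs_thresh)   # lesion candidates
    sound = (mu_ct >= mu_thresh) & (saxs <= saxs_thresh)    # sound enamel
    return hist, (mu_edges, saxs_edges), affected, sound
```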

16 pages, 6387 KiB  
Article
Slant Removal Technique for Historical Document Images
by Ergina Kavallieratou, Laurence Likforman-Sulem and Nikos Vasilopoulos
J. Imaging 2018, 4(6), 80; https://doi.org/10.3390/jimaging4060080 - 12 Jun 2018
Cited by 3 | Viewed by 9297
Abstract
Slanted text has been demonstrated to be a salient feature of handwriting. Its estimation is a necessary preprocessing task in many document image processing systems, as it improves subsequent training. This paper describes and evaluates a new technique for removing the slant from historical document pages that avoids segmentation into text lines and words. The proposed technique first relies on slant angle detection from an accurate selection of fragments; then, a slant removal technique is applied. The presented slant removal technique may, however, be combined with any other slant detection algorithm. Experimental results are provided for four document image databases: two historical document databases, the TrigraphSlant database (the only database dedicated to slant removal), and a printed database used to check the precision of the proposed technique.
(This article belongs to the Special Issue Document Image Processing)
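Once a global slant angle has been detected, deslanting amounts to a horizontal shear. Below is a minimal Python sketch under simple assumptions (a single global angle, dark ink on a light background); the paper's actual contribution is the fragment-based angle detection that precedes this step.

```python
import numpy as np
from scipy.ndimage import affine_transform

def remove_slant(image, slant_deg, background=255):
    """Shear a (grayscale) page image horizontally so that strokes slanted
    by `slant_deg` become vertical. Rows keep their position; each row is
    shifted in x proportionally to its y coordinate."""
    t = np.tan(np.radians(slant_deg))
    # affine_transform maps output coords to input coords:
    # input (row, col) = matrix @ output (row, col)
    matrix = np.array([[1.0, 0.0],
                       [t,   1.0]])
    return affine_transform(image, matrix, order=1, cval=background)
```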

17 pages, 8886 KiB  
Article
Deep Learning with a Spatiotemporal Descriptor of Appearance and Motion Estimation for Video Anomaly Detection
by Kishanprasad G. Gunale and Prachi Mukherji
J. Imaging 2018, 4(6), 79; https://doi.org/10.3390/jimaging4060079 - 8 Jun 2018
Cited by 20 | Viewed by 6999
Abstract
The automatic detection and recognition of anomalous events in crowded and complex video scenes are the research objectives of this paper. The main challenge is to create models for detecting such events, given their variability and their dependence on the context of the scene. To address these challenges, this paper proposes a novel HOME FAST (Histogram of Orientation, Magnitude, and Entropy with Fast Accelerated Segment Test) spatiotemporal feature extraction approach based on optical flow information to capture anomalies. The descriptor performs video analysis within the smart surveillance domain and detects anomalies. In the deep learning stage, training learns all the normal patterns from the high-level and low-level information. Events are described at testing time and, if they differ from the normal pattern, are considered anomalous. The overall proposed system robustly identifies both local and global abnormal events in complex scenes and solves the detection problem under various transformations, compared with state-of-the-art approaches. Performance assessment of the simulation results validated that the proposed model could handle different anomalous events in a crowded scene and automatically recognize them successfully.
(This article belongs to the Special Issue Detection of Moving Objects)
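The descriptor's ingredients, optical-flow orientation and magnitude, an entropy term, and FAST keypoints, can be sketched per frame pair in Python with OpenCV. Bin counts, flow parameters, and normalization here are illustrative, not the authors' exact design; 8-bit grayscale frames are assumed.

```python
import cv2
import numpy as np

def home_fast_style_descriptor(prev_gray, curr_gray, bins=8):
    """Toy spatiotemporal descriptor: magnitude-weighted histogram of dense
    optical-flow orientations, flow entropy as a disorder measure, and the
    FAST keypoint count as an appearance cue."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-9)
    entropy = -np.sum(hist * np.log2(hist + 1e-9))       # flow disorder
    keypoints = cv2.FastFeatureDetector_create().detect(curr_gray)
    return np.concatenate([hist, [mag.mean(), entropy, len(keypoints)]])
```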

19 pages, 34950 KiB  
Article
Analytics of Deep Neural Network-Based Background Subtraction
by Tsubasa Minematsu, Atsushi Shimada, Hideaki Uchiyama and Rin-ichiro Taniguchi
J. Imaging 2018, 4(6), 78; https://doi.org/10.3390/jimaging4060078 - 8 Jun 2018
Cited by 46 | Viewed by 6522
Abstract
Deep neural network-based (DNN-based) background subtraction has demonstrated excellent performance for moving object detection. DNN-based background subtraction automatically learns background features from training images and outperforms conventional background modeling based on handcrafted features. However, previous works fail to detail why DNNs work well for change detection; such a discussion helps in understanding the potential of DNNs in background subtraction and in improving them. In this paper, we directly observe the feature maps in all layers of the DNN used in our investigation. The DNN provides feature maps with the same resolution as the input image, which aids the analysis of DNN behavior because the feature maps and the input image can be compared simultaneously. Furthermore, we analyze which filters are important for detection accuracy by removing specific filters from the trained DNN. From the experiments, we found that the DNN performs subtraction operations in its convolutional layers and thresholding operations in its bias layers, and that scene-specific filters are generated to suppress false positives from dynamic backgrounds. In addition, we discuss the characteristics and issues of the DNN based on our observations.
(This article belongs to the Special Issue Detection of Moving Objects)
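The filter-removal analysis can be sketched generically in Python with PyTorch: zero out one convolutional filter so its feature map vanishes, then re-evaluate detection accuracy; the drop measures that filter's importance. `model` and `layer` are placeholders for whatever background-subtraction network is under study.

```python
import torch

@torch.no_grad()
def ablate_filter(model, layer, filter_idx):
    """Remove one filter from a trained conv layer by zeroing its weights
    (and bias), so its output feature map is all zeros; the model can then
    be re-evaluated to measure that filter's contribution."""
    layer.weight[filter_idx].zero_()
    if layer.bias is not None:
        layer.bias[filter_idx].zero_()
    return model

# Usage sketch: evaluate accuracy before and after ablating filter k of a
# chosen layer, on a change-detection benchmark of your choice.
```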

18 pages, 3347 KiB  
Article
Optimization Based Evaluation of Grating Interferometric Phase Stepping Series and Analysis of Mechanical Setup Instabilities
by Jonas Dittmann, Andreas Balles and Simon Zabler
J. Imaging 2018, 4(6), 77; https://doi.org/10.3390/jimaging4060077 - 7 Jun 2018
Cited by 16 | Viewed by 4929
Abstract
The diffraction contrast modalities accessible by X-ray grating interferometers are not imaged directly but have to be inferred from sine-like signal variations occurring in a series of images acquired at varying relative positions of the interferometer's gratings. The absolute spatial translations involved in the acquisition of these phase stepping series usually lie in the range of only a few hundred nanometers, so that positioning errors as small as 10 nm will already translate into signal uncertainties of 1–10% in the final images if not accounted for. Classically, the relative grating positions in the phase stepping series are considered input parameters to the analysis and, for the Fast Fourier Transform that is typically employed, are required to be equidistantly distributed over multiples of the gratings' period. In the following, a fast-converging optimization scheme is presented that simultaneously determines the phase stepping curves' parameters as well as the actually performed motions of the stepped grating, including erroneous rotational motions, which are commonly neglected. While correcting only the translational errors along the stepping direction is found to be sufficient with regard to the reduction of image artifacts, the ability to also detect minute rotations about all axes proves to be a valuable tool for system calibration and monitoring. The simplicity of the provided algorithm, in particular when considering only translational errors, makes it well suited as a standard evaluation procedure, also for large image series.
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
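For a single pixel, the joint fit the abstract describes can be sketched in Python: fit I_k = a + b·cos(x_k + φ) while letting the step positions x_k float around their nominal values. This toy version ignores the global phase/position degeneracy, which the full scheme resolves by fitting many pixels that share the same grating positions, and it omits rotational errors.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_stepping_curve(intensities, nominal_steps):
    """Jointly fit the sinusoidal phase-stepping model
        I_k = a + b * cos(x_k + phi)
    with the actual step positions x_k (in radians of the grating period)
    as free parameters, so translational stepping errors are recovered
    along with the curve parameters."""
    def residual(p):
        a, b, phi, x = p[0], p[1], p[2], p[3:]
        return a + b * np.cos(x + phi) - intensities
    p0 = np.concatenate(([intensities.mean(),
                          np.ptp(intensities) / 2.0, 0.0], nominal_steps))
    fit = least_squares(residual, p0)
    a, b, phi, steps = fit.x[0], fit.x[1], fit.x[2], fit.x[3:]
    return a, b, phi, steps  # transmission, visibility, phase, real positions
```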

6 pages, 644 KiB  
Article
Single-Shot X-ray Phase Retrieval through Hierarchical Data Analysis and a Multi-Aperture Analyser
by Marco Endrizzi, Fabio A. Vittoria and Alessandro Olivo
J. Imaging 2018, 4(6), 76; https://doi.org/10.3390/jimaging4060076 - 6 Jun 2018
Cited by 1 | Viewed by 3831
Abstract
A multi-aperture analyser set-up was recently developed for X-ray phase contrast imaging and tomography, simultaneously attaining a high sensitivity and wide dynamic range. We present a single-shot image retrieval algorithm in which differential phase and dark-field images are extracted from a single intensity projection. Scanning of the object is required to build a two-dimensional image, because only one pre-sample aperture is used in the experiment reported here. A pure-phase object approximation and a hierarchical approach to the data analysis are used in order to overcome numerical instabilities. The single-shot capability reduces the exposure times by a factor of five with respect to the standard implementation and significantly simplifies the acquisition procedure by only requiring sample scanning during data collection.
(This article belongs to the Special Issue Phase-Contrast and Dark-Field Imaging)
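The principle of extracting two signals from a single intensity projection can be illustrated per beamlet in Python: fit the reference and sample profiles of one aperture, then read a refraction (differential phase) signal from the peak shift and a dark-field signal from the relative broadening. This only illustrates the underlying idea; the paper's hierarchical, pure-phase retrieval is more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + offset

def beamlet_signals(x, profile_ref, profile_sample):
    """Fit Gaussians to one aperture's reference and sample intensity
    profiles: the peak shift gives refraction, the broadening dark-field."""
    p_ref, _ = curve_fit(gaussian, x, profile_ref,
                         p0=[profile_ref.max(), x[np.argmax(profile_ref)], 1.0, 0.0])
    p_sam, _ = curve_fit(gaussian, x, profile_sample,
                         p0=[profile_sample.max(), x[np.argmax(profile_sample)], 1.0, 0.0])
    refraction = p_sam[1] - p_ref[1]          # peak shift
    dark_field = p_sam[2] / p_ref[2] - 1.0    # relative broadening
    return refraction, dark_field
```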

20 pages, 9427 KiB  
Article
Stochastic Capsule Endoscopy Image Enhancement
by Ahmed Mohammed, Ivar Farup, Marius Pedersen, Øistein Hovde and Sule Yildirim Yayilgan
J. Imaging 2018, 4(6), 75; https://doi.org/10.3390/jimaging4060075 - 6 Jun 2018
Cited by 15 | Viewed by 6237
Abstract
Capsule endoscopy, which uses a wireless camera to take images of the digestive tract, is emerging as an alternative to traditional colonoscopy. The diagnostic value of these images depends on the quality of the revealed underlying tissue surfaces. In this paper, we consider the problem of enhancing the visibility of detail and of shadowed tissue surfaces in capsule endoscopy images. Using concentric circles at each pixel for random walks, combined with stochastic sampling, the proposed method enhances the details of vessel and tissue surfaces. The framework decomposes the image into two detail layers that contain shadowed tissue surfaces and detail features. For the smooth layer, the target pixel value is recalculated from the similarity of the target pixel to neighboring pixels, weighting against the total gradient variation and intensity differences. To evaluate the diagnostic image quality of the proposed method, we used a clinical subjective evaluation with rank ordering on a selection from the KID image database and compared the method to state-of-the-art enhancement methods. The results showed that the proposed method performs better in terms of diagnostic image quality, objective contrast metrics, and the structural similarity index.
(This article belongs to the Special Issue Computational Colour Imaging)
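A rough Python sketch of the smooth-layer recalculation: sample neighbours on concentric circles around the target pixel and average them with weights that fall off with intensity difference and gradient difference. The weighting is a bilateral-style stand-in for the paper's random-walk formulation, and all parameter names are ours.

```python
import numpy as np

def smooth_layer_pixel(img, y, x, radius=5, n_samples=32,
                       sigma_i=0.1, sigma_g=0.1, rng=None):
    """Re-estimate one pixel of the smooth layer from stochastically sampled
    neighbours on concentric circles, weighted against intensity and
    gradient differences (img: 2-D float array in [0, 1])."""
    rng = np.random.default_rng() if rng is None else rng
    gy, gx = np.gradient(img)
    grad = np.hypot(gy, gx)
    h, w = img.shape
    acc, wsum = 0.0, 0.0
    for r in range(1, radius + 1):                   # concentric circles
        theta = rng.uniform(0, 2 * np.pi, n_samples) # stochastic sampling
        yy = np.clip((y + r * np.sin(theta)).astype(int), 0, h - 1)
        xx = np.clip((x + r * np.cos(theta)).astype(int), 0, w - 1)
        wi = np.exp(-(img[yy, xx] - img[y, x]) ** 2 / (2 * sigma_i ** 2))
        wg = np.exp(-(grad[yy, xx] - grad[y, x]) ** 2 / (2 * sigma_g ** 2))
        acc += np.sum(wi * wg * img[yy, xx])
        wsum += np.sum(wi * wg)
    return acc / max(wsum, 1e-9)
```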

61 pages, 16557 KiB  
Article
A Review of Supervised Edge Detection Evaluation Methods and an Objective Comparison of Filtering Gradient Computations Using Hysteresis Thresholds
by Baptiste Magnier, Hasan Abdulrahman and Philippe Montesinos
J. Imaging 2018, 4(6), 74; https://doi.org/10.3390/jimaging4060074 - 31 May 2018
Cited by 33 | Viewed by 8356
Abstract
Useful for human visual perception, edge detection remains a crucial stage in numerous image processing applications. One of the most challenging goals in contour detection is to operate algorithms that can process visual information as humans require. To ensure that an edge detection technique is reliable, it needs to be rigorously assessed before being used in a computer vision tool. This assessment corresponds to a supervised evaluation process that quantifies the differences between a reference edge map and a candidate via a performance measure/criterion: a supervised evaluation computes a score between a ground truth edge map and a candidate image. This paper presents a survey of supervised edge detection evaluation methods. Considering a ground truth edge map, various methods have been developed to assess a desired contour. Several techniques are based on the number of false positive, false negative, true positive and/or true negative points. Other methods strongly penalize misplaced points when they fall outside a window centered on a true or false point. In addition, many approaches compute the distance from the position where a contour point should be located. Most of these edge detection assessment methods are detailed, and their drawbacks highlighted using several examples. In this study, a new supervised edge map quality measure is proposed. The new measure provides an overall evaluation of the quality of a contour map by taking into account the number of false positives and false negatives and the degrees of shifting. Numerous examples and experiments show the importance of penalizing false negative points differently than false positive pixels, because some false points may not necessarily disturb the visibility of desired objects, whereas false negative points can significantly change the aspect of an object. Finally, an objective assessment is performed by varying the hysteresis thresholds on contours of real images obtained by filtering techniques. Theoretically, by varying the hysteresis thresholds of the thin edges obtained by filtering gradient computations, the minimum score of the measure corresponds to the best edge map compared to the ground truth. Twenty-eight measures are compared using different edge detectors that are either robust or not robust to noise. The scores of the different measures and different edge detectors are recorded and plotted as a function of the noise level in the original image. The plotted curve of a reliable edge detection measure must increase monotonically with the noise level, and a reliable edge detector must be less penalized than a poor detector. In addition, whether the edge map tied to the minimum score of a considered measure is visually closer to the ground truth or not exposes the reliability of that evaluation measure. Hence, the experiments illustrate that the desired objects are not always completely visible when an ill-suited evaluation measure is used.
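The kind of measure the paper advocates, counting false positives and false negatives but weighting each by its spatial shift and penalizing false negatives more heavily, can be sketched in Python as follows. The weights `k_fp` and `k_fn` are illustrative, not the paper's calibrated values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_eval_score(gt, candidate, k_fp=1.0, k_fn=2.0):
    """Distance-weighted supervised edge measure (lower is better): false
    positives are penalized by their distance to the nearest ground-truth
    edge, false negatives by their distance to the nearest detected edge,
    with a heavier weight k_fn on false negatives."""
    gt = gt.astype(bool)
    cand = candidate.astype(bool)
    dist_to_gt = distance_transform_edt(~gt)      # distance to nearest GT edge
    dist_to_cand = distance_transform_edt(~cand)  # distance to nearest detection
    fp = cand & ~gt
    fn = gt & ~cand
    score = k_fp * dist_to_gt[fp].sum() + k_fn * dist_to_cand[fn].sum()
    return score / max(gt.sum(), 1)               # normalize by edge count
```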

21 pages, 11031 KiB  
Article
Efficient Implementation of Gaussian and Laplacian Kernels for Feature Extraction from IP Fisheye Cameras
by Konstantinos K. Delibasis
J. Imaging 2018, 4(6), 73; https://doi.org/10.3390/jimaging4060073 - 24 May 2018
Cited by 6 | Viewed by 6061
Abstract
The Gaussian kernel, its partial derivatives and the Laplacian kernel, applied at different image scales, play a very important role in image processing and in feature extraction from images. Although they have been extensively studied for images acquired by projective cameras, this is not the case for cameras with fisheye lenses. This type of camera is becoming very popular, since it exhibits a field of view of 180 degrees. The model of fisheye image formation differs substantially from the simple projective transformation, causing straight lines to be imaged as curves. Thus, the traditional kernels used for processing images acquired by projective cameras are not optimal for fisheye images. This work uses the calibration of the acquiring fisheye camera to define a geodesic metric for the distance between pixels in fisheye images and subsequently redefines the Gaussian kernel, its partial derivatives, and the Laplacian kernel. Finally, efficient algorithms for applying these kernels, as well as the Harris corner detector, in the spatial domain are proposed. Comparative results are shown in terms of correctness of image processing, efficiency for multi-scale processing, and salient point extraction. We thus conclude that the proposed algorithms allow the efficient application of standard processing and analysis techniques to fisheye images in the spatial domain, once the calibration of the specific camera is available.
(This article belongs to the Special Issue Image Based Information Retrieval from the Web)
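The core idea, replacing image-plane distance with a geodesic distance derived from the calibration, can be sketched in Python: given each pixel's calibrated viewing direction on the unit sphere, the Gaussian weight depends on the angular distance to the window's central pixel. The `pixel_dirs` array is assumed to come from the camera calibration; this is an illustration of the principle, not the paper's optimized implementation.

```python
import numpy as np

def geodesic_gaussian_kernel(center, half_window, pixel_dirs, sigma):
    """Gaussian weights over a fisheye-image window, with distance measured
    as the angle (geodesic distance on the unit sphere) between calibrated
    viewing directions. pixel_dirs: (H, W, 3) array of unit vectors."""
    cy, cx = center
    c = pixel_dirs[cy, cx]
    patch = pixel_dirs[cy - half_window:cy + half_window + 1,
                       cx - half_window:cx + half_window + 1]
    cosang = np.clip(patch @ c, -1.0, 1.0)
    geo = np.arccos(cosang)                 # angular (geodesic) distance
    kernel = np.exp(-geo ** 2 / (2 * sigma ** 2))
    return kernel / kernel.sum()
```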
