J. Imaging, Volume 3, Issue 1 (March 2017) – 12 articles

Cover Story: In this article, the Laserscanner Multi-Fisheye Camera Dataset (LaFiDa) for benchmarking is presented. A head-mounted multi-fisheye camera system combined with a mobile laserscanner was utilized to capture the benchmark datasets. In addition, accurate six degrees of freedom (6 DoF) ground truth poses were obtained from a motion capture system with a sampling rate of 360 Hz. Multiple sequences were recorded in indoor and outdoor environments, comprising different motion characteristics, lighting conditions, and scene dynamics. The benchmark dataset is released online under the Creative Commons Attribution License (CC BY 4.0) and contains the raw sensor data together with timestamps, calibration data, and evaluation scripts. The provided dataset can be used for multi-fisheye camera and/or laserscanner simultaneous localization and mapping (SLAM). View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Dense Descriptors for Optical Flow Estimation: A Comparative Study
by Ahmadreza Baghaie, Roshan M. D’Souza and Zeyun Yu
J. Imaging 2017, 3(1), 12; https://doi.org/10.3390/jimaging3010012 - 25 Feb 2017
Cited by 8 | Viewed by 7612
Abstract
Estimating the displacements of intensity patterns between sequential frames is a well-studied problem, usually referred to as optical flow estimation. The first assumption of many methods in the field is brightness constancy during the movement of pixels between frames. This assumption has been shown not to hold in general, and therefore the use of photometrically invariant constraints has been studied in the past. Another solution is the use of structural descriptors rather than raw pixel intensities for estimating the optical flow. Unlike sparse feature detection/description techniques, and since optical flow estimation seeks a dense flow field, a dense structural representation of individual pixels and their neighbors is computed and then used for matching and optical flow estimation. Here, a comparative study is carried out by extending the framework of SIFT-flow to include more dense descriptors, and comprehensive comparisons are given. Overall, the work can be considered a baseline for stimulating further interest in the use of dense descriptors for optical flow estimation.
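
Below is a minimal illustrative sketch of the core idea of dense-descriptor matching, not the SIFT-flow optimization itself; here the per-pixel "descriptor" is simply the raw intensity patch, standing in for the richer dense descriptors compared in the paper:

```python
import numpy as np

def dense_patch_descriptors(img, radius=3):
    """Stack each pixel's (2r+1)^2 intensity neighborhood into a vector."""
    pad = np.pad(img.astype(np.float32), radius, mode='edge')
    h, w = img.shape
    k = 2 * radius + 1
    desc = np.empty((h, w, k * k), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            desc[..., dy * k + dx] = pad[dy:dy + h, dx:dx + w]
    return desc

def match_flow(desc1, desc2, search=5):
    """Brute-force nearest-neighbor matching of dense descriptors within a
    small search window (ignores the wrap-around at image borders)."""
    h, w, _ = desc1.shape
    flow = np.zeros((h, w, 2), dtype=np.int32)   # flow[y, x] = (u, v)
    best = np.full((h, w), np.inf, dtype=np.float32)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            shifted = np.roll(desc2, shift=(-v, -u), axis=(0, 1))
            cost = np.sum((desc1 - shifted) ** 2, axis=-1)
            better = cost < best
            best[better] = cost[better]
            flow[better] = (u, v)
    return flow
```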

Article
Computer Assisted Examination of Infrared and Near Infrared Spectra to Assess Structural and Molecular Changes in Biological Samples Exposed to Pollutants: A Case of Study
by Mauro Mecozzi and Elena Sturchio
J. Imaging 2017, 3(1), 11; https://doi.org/10.3390/jimaging3010011 - 16 Feb 2017
Cited by 36 | Viewed by 6631
Abstract
We present a computer-assisted method for examining the structural changes present in the probe organism Vicia faba exposed to inorganic arsenic, detected by means of Fourier transform infrared (FTIR) and Fourier transform near-infrared (FTNIR) spectroscopy. Like common ecotoxicological tests, the method is based on comparing control and exposed sample spectra to detect structural changes caused by pollutants. Using FTIR spectroscopy, we measured and plotted the spectral changes related to the unsaturated-to-saturated lipid ratio (USL), the lipid-to-protein ratio (LPR), fatty and ester fatty acid content (FA), protein oxidation (PO) and denaturation, and DNA and RNA changes (DNA-RNA). Using FTNIR spectroscopy, we measured two spectral ranges belonging to hydrogen bond interactions and aliphatic lipid chains, called IntHCONH and Met1overt, respectively. The FTIR results showed that arsenic modified the DNA-RNA ratio and also caused partial protein denaturation in the Vicia faba samples; the FTNIR results supported the FTIR results. The main advantage of the proposed computational method is that it does not require a skilled infrared or near-infrared operator, lending support to conventional studies performed by toxicological testing.
(This article belongs to the Special Issue The World in Infrared Imaging)
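
As a rough illustration of the kind of band-ratio computation such spectral comparisons rely on, the sketch below integrates absorbance over two wavenumber windows and takes their ratio; the band limits are common literature choices for lipid and amide I bands, not necessarily the authors':

```python
import numpy as np

def band_area(wavenumbers, absorbance, lo, hi):
    """Trapezoidal integral of absorbance over the [lo, hi] cm^-1 window
    (abs() makes the result insensitive to wavenumber ordering)."""
    m = (wavenumbers >= lo) & (wavenumbers <= hi)
    x, y = wavenumbers[m], absorbance[m]
    return abs(float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0))

def lipid_to_protein_ratio(wavenumbers, absorbance):
    lipid = band_area(wavenumbers, absorbance, 2800.0, 3000.0)    # CH2/CH3 stretching
    protein = band_area(wavenumbers, absorbance, 1600.0, 1700.0)  # amide I
    return lipid / protein

# Control vs. exposed comparison then reduces to comparing ratios, e.g.:
# shift = lipid_to_protein_ratio(wn, a_exposed) - lipid_to_protein_ratio(wn, a_control)
```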

Article
Effect of Soil Use and Coverage on the Spectral Response of an Oxisol in the VIS-NIR-MIR Region
by Javier M. Martín-López and Giovanna Quintero-Arias
J. Imaging 2017, 3(1), 10; https://doi.org/10.3390/jimaging3010010 - 16 Feb 2017
Cited by 1 | Viewed by 5928
Abstract
In this study, the spectral responses obtained from a Typic Red Hapludox (oxisol) were analyzed under different uses and coverages: Ficus elastica cultivation, Citrus + Arachis association cultivation, transitional crops, forest, Mangifera indica, Anacardium occidentale, Elaeis guineensis (18 years), Brachiaria decumbens, Brachiaria brizantha, and Musa × paradisiaca + Zea mays at the La Libertad Research Center in the department of Meta, Colombia (4°04′ North latitude, 73°30′ West longitude, 330 MAMSL). Sampling was performed with four random replicates of the A and B horizons to determine the contents of organic carbon (CO), pH, exchangeable acidity (Ac. I), cation exchange capacity (Cc), P, Ca, Mg, K, Na, sand, silt, and clay, and spectral responses were obtained in the visible (VIS), near-infrared (NIR), and mid-infrared (MIR) bands for each sample under laboratory conditions. The obtained spectra were compared to determine the main changes in soil properties due to use and coverage. Variations in soil characteristics such as color, organic carbon content, presence of ferrous compounds, sand, silt, and clay content, and mineralogy allow the identification of the main spectral changes of soils, demonstrating the importance of reflectance spectroscopy as a tool for comparison and estimation of the physicochemical properties of soils.

Article
3D Imaging with a Sonar Sensor and an Automated 3-Axes Frame for Selective Spraying in Controlled Conditions
by David Reiser, Javier M. Martín-López, Emir Memic, Manuel Vázquez-Arellano, Steffen Brandner and Hans W. Griepentrog
J. Imaging 2017, 3(1), 9; https://doi.org/10.3390/jimaging3010009 - 8 Feb 2017
Cited by 17 | Viewed by 8241
Abstract
Autonomous selective spraying could be a way for agriculture to reduce production costs, save resources, protect the environment, and help to fulfill specific pesticide regulations. The objective of this paper was to investigate the use of a low-cost sonar sensor for autonomous selective spraying of single plants. For this, a belt-driven autonomous robot was used with an attached 3-axes frame with three degrees of freedom. In the tool center point (TCP) of the 3-axes frame, a sonar sensor and a spray valve were attached to create a point cloud representation of the surface, detect plants in the area, and perform selective spraying. The autonomous robot was tested on replicates of artificial crop plants. The location of each plant was identified from the acquired point cloud with the help of Euclidean clustering. The obtained plant positions were spatially transformed from the coordinates of the sonar sensor to the valve location to determine the exact irrigation points. The results showed that the robot was able to automatically detect the position of each plant with an accuracy of 2.7 cm and could spray on these selected points. This selective spraying reduced liquid use by 72% compared with a conventional spraying method under the same conditions.
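
A minimal sketch, under assumed data layouts, of the two steps described: Euclidean clustering of the sonar point cloud to isolate individual plants, then a rigid transform of each cluster centroid from the sensor frame to the spray-valve frame (the clustering radius and the transform values are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=0.05, min_size=10):
    """Group 3D points whose chained point-to-point distance stays below
    `radius` (a simple flood fill on a k-d tree)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, cluster = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if nb in unvisited:
                    unvisited.remove(nb)
                    frontier.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(points[cluster])
    return clusters

def plant_positions_in_valve_frame(points, R_sv, t_sv):
    """Cluster the cloud and map each cluster centroid from the sonar
    frame into the valve frame via the rigid transform (R_sv, t_sv)."""
    return [R_sv @ c.mean(axis=0) + t_sv for c in euclidean_clusters(points)]
```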

Article
Radial Distortion from Epipolar Constraint for Rectilinear Cameras
by Ville V. Lehtola, Matti Kurkela and Petri Rönnholm
J. Imaging 2017, 3(1), 8; https://doi.org/10.3390/jimaging3010008 - 24 Jan 2017
Cited by 3 | Viewed by 9830
Abstract
Lens distortion causes difficulties for 3D reconstruction when uncalibrated image sets with weak geometry are used. We show that the largest part of lens distortion, known as radial distortion, can be estimated along with the center of distortion from the epipolar constraint separately and before bundle adjustment, without any calibration rig. The estimate converges as more image pairs are added. Descriptor-matched scale-invariant feature transform (SIFT) point pairs that contain false matches can readily be given to our algorithm, EPOS (EpiPOlar-based Solver), as input. The processing is automated to the point where EPOS solves the distortion whether it is of barrel or pincushion type, or reports that no correction is needed.
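
The sketch below illustrates the underlying idea rather than the authors' EPOS algorithm: scan a one-parameter division-model coefficient and keep the value whose undistorted correspondences best satisfy the epipolar constraint (RANSAC absorbs the false SIFT matches). The center of distortion is fixed at an assumed image center here, whereas the paper also estimates it:

```python
import numpy as np
import cv2

def undistort_division(pts, lam, center):
    """One-parameter division model: x_u = c + (x_d - c) / (1 + lam * r^2)."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

def epipolar_residual(pts1, pts2):
    """Median Sampson error under a RANSAC-estimated fundamental matrix."""
    F, _ = cv2.findFundamentalMat(pts1.astype(np.float64),
                                  pts2.astype(np.float64), cv2.FM_RANSAC)
    if F is None:
        return np.inf
    x1 = np.column_stack([pts1, np.ones(len(pts1))])
    x2 = np.column_stack([pts2, np.ones(len(pts2))])
    Fx1, Ftx2 = x1 @ F.T, x2 @ F              # epipolar lines in images 2 and 1
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return float(np.median(num / den))

def estimate_lambda(pts1, pts2, center, lams=np.linspace(-1e-6, 1e-6, 41)):
    """Pick the distortion coefficient minimizing the epipolar residual."""
    residuals = [epipolar_residual(undistort_division(pts1, l, center),
                                   undistort_division(pts2, l, center))
                 for l in lams]
    return lams[int(np.argmin(residuals))]
```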

Article
Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study
by Javier Eduardo Diaz Zamboni and Víctor Hugo Casco
J. Imaging 2017, 3(1), 7; https://doi.org/10.3390/jimaging3010007 - 24 Jan 2017
Cited by 4 | Viewed by 7869
Abstract
Precise knowledge of the point spread function is central to any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. As a contribution to this subject, a comparative study of three parameter estimation methods is reported, namely I-divergence minimization (MIDIV), maximum likelihood (ML), and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images, considering success percentage, iteration number, computation time, accuracy, and precision. The main results showed that axial position estimation requires a high SNR to achieve an acceptable success level, and a higher SNR still to approach the lower bound of the estimation error. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods reached the error lower bound, and only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but for any given method no difference was found between noise sources.
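
A toy sketch of the least-squares variant of such an estimator: fitting the axial position z0 of a point source to an observed image under a simple Gaussian defocus model. The paper fits a physical PSF model instead, but the estimation machinery is analogous:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_psf(z0, grid, sigma0=1.5, dz_scale=0.5, z_plane=0.0):
    """Toy defocus model: lateral width grows with |z_plane - z0|."""
    yy, xx = grid
    sigma = sigma0 + dz_scale * abs(z_plane - z0)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def estimate_axial_position(observed, grid, z_init=0.5):
    """Non-linear least-squares fit of the axial position only."""
    res = least_squares(lambda p: (gaussian_psf(p[0], grid) - observed).ravel(),
                        x0=[z_init])
    return res.x[0]

# Synthetic check: image generated at z0 = 1.2 plus weak Gaussian noise.
yy, xx = np.mgrid[-8:9, -8:9].astype(float)
rng = np.random.default_rng(0)
noisy = gaussian_psf(1.2, (yy, xx)) + rng.normal(0.0, 1e-4, (17, 17))
print(estimate_axial_position(noisy, (yy, xx)))   # should be close to 1.2
```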

Article
Early Yield Prediction Using Image Analysis of Apple Fruit and Tree Canopy Features with Neural Networks
by Hong Cheng, Lutz Damerow, Yurui Sun and Michael Blanke
J. Imaging 2017, 3(1), 6; https://doi.org/10.3390/jimaging3010006 - 19 Jan 2017
Cited by 93 | Viewed by 13819
Abstract
(1) Background: Since early yield prediction is relevant for resource requirements of harvesting and marketing in the whole fruit industry, this paper presents a new approach of using image analysis and tree canopy features to predict early yield with artificial neural networks (ANN); (2) Methods: Two back-propagation neural network (BPNN) models were developed for the early period after natural fruit drop in June and for the ripening period, respectively. Within the same periods, images of apple cv. "Gala" trees were captured from an orchard near Bonn, Germany. Two sample sets were developed to train and test the models; each set included 150 samples from the 2009 and 2010 growing seasons. For each sample (each canopy image), pixels were segmented into fruit, foliage, and background using image segmentation. The four features extracted from the data set for the canopy, used as inputs, were: total cross-sectional area of fruits, fruit number, total cross-sectional area of small fruits, and cross-sectional area of foliage. With the actual weighted yield per tree as a target, the BPNN was employed to learn their mutual relationship as a prerequisite to developing the prediction; (3) Results: For the BPNN model of the early period after June drop, the correlation coefficient (R²) between the estimated and the actual weighted yield, mean forecast error (MFE), mean absolute percentage error (MAPE), and root mean square error (RMSE) were 0.81, −0.05, 10.7%, and 2.34 kg/tree, respectively. For the model of the ripening period, these measures were 0.83, −0.03, 8.9%, and 2.3 kg/tree, respectively. In 2011, the two previously developed models were used to predict apple yield. The RMSE and R² values between the estimated and harvested apple yield were 2.6 kg/tree and 0.62 for the early period (small, green fruit) and improved near harvest (red, large fruit) to 2.5 kg/tree and 0.75 for a tree with ca. 18 kg yield per tree. For further method verification, cv. "Pinova" apple trees were used as another variety in 2012 to develop the BPNN prediction model for the early period after June drop. The model was used in 2013 and gave results similar to those found with cv. "Gala"; (4) Conclusion: Overall, the results of this research showed that the proposed estimation models performed accurately, using canopy and fruit features extracted by image analysis algorithms.
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
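
A minimal sketch of the modeling step, using scikit-learn in place of the authors' BPNN implementation: a small back-propagation network mapping the four image-derived canopy features to yield per tree (the training data below are placeholders, not the paper's measurements):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Feature columns: total fruit cross-sectional area, fruit number,
# small-fruit cross-sectional area, foliage cross-sectional area.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(150, 4))                    # placeholder features
y_train = X_train @ np.array([12.0, 4.0, -2.0, 1.0])    # placeholder yield (kg/tree)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print(model.predict(X_train[:3]))   # predicted yield for the first three trees
```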

Article
LaFiDa—A Laserscanner Multi-Fisheye Camera Dataset
by Steffen Urban and Boris Jutzi
J. Imaging 2017, 3(1), 5; https://doi.org/10.3390/jimaging3010005 - 17 Jan 2017
Cited by 15 | Viewed by 10949
Abstract
In this article, the Laserscanner Multi-Fisheye Camera Dataset (LaFiDa) for benchmarking is presented. A head-mounted multi-fisheye camera system combined with a mobile laserscanner was utilized to capture the benchmark datasets. In addition, accurate six degrees of freedom (6 DoF) ground truth poses were obtained from a motion capture system with a sampling rate of 360 Hz. Multiple sequences were recorded in indoor and outdoor environments, comprising different motion characteristics, lighting conditions, and scene dynamics. The provided sequences consist of images from three fisheye cameras, fully synchronized by hardware trigger, combined with a mobile laserscanner on the same platform. In total, six trajectories are provided, each comprising intrinsic and extrinsic calibration parameters and related measurements for all sensors. Furthermore, we generalize the most common toolbox for extrinsic laserscanner-to-camera calibration to work with arbitrary central cameras, such as omnidirectional or fisheye projections. The benchmark dataset is released online under the Creative Commons Attribution License (CC BY 4.0) and contains the raw sensor data together with timestamps, calibration data, and evaluation scripts. The provided dataset can be used for multi-fisheye camera and/or laserscanner simultaneous localization and mapping (SLAM).
(This article belongs to the Special Issue 3D Imaging)
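
One typical first step when working with such a benchmark is associating each camera frame with the nearest 360 Hz ground-truth pose by timestamp. The sketch below assumes plain sorted timestamp arrays, not the published file format:

```python
import numpy as np

def associate_by_timestamp(frame_times, pose_times, max_dt=0.005):
    """For each frame time, return the index of the nearest pose time,
    or -1 if the gap exceeds max_dt seconds. Both arrays must be sorted."""
    idx = np.clip(np.searchsorted(pose_times, frame_times), 1, len(pose_times) - 1)
    left, right = pose_times[idx - 1], pose_times[idx]
    nearest = np.where(frame_times - left < right - frame_times, idx - 1, idx)
    dt = np.abs(pose_times[nearest] - frame_times)
    return np.where(dt <= max_dt, nearest, -1)
```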

Article
Comparison of Small Unmanned Aerial Vehicles Performance Using Image Processing
by Esteban Cano, Ryan Horton, Chase Liljegren and Duke M. Bulanon
J. Imaging 2017, 3(1), 4; https://doi.org/10.3390/jimaging3010004 - 11 Jan 2017
Cited by 22 | Viewed by 6821
Abstract
Precision agriculture is a farm management technology that involves sensing and then responding to the observed variability in the field. Remote sensing is one of the tools of precision agriculture. The emergence of small unmanned aerial vehicles (sUAVs) has paved the way to accessible remote sensing tools for farmers. This paper describes the development of an image processing approach to compare two popular off-the-shelf sUAVs: the 3DR Iris+ and the DJI Phantom 2. Both units were equipped with a gimbal-mounted GoPro camera. The comparison of the two sUAVs involved a hovering test and a rectilinear motion test. In the hovering test, each sUAV was allowed to hover over a known object and images were taken every quarter of a second for two minutes. For the image processing evaluation, the position of the object in the images was measured and used to assess the stability of the sUAV while hovering. In the rectilinear test, the sUAV was made to follow a straight path while images of a lined track were acquired. The lines in the images were then measured to determine how accurately the sUAV followed the path. The hovering test results show that the 3DR Iris+ had a maximum position deviation of 0.64 m (0.126 m root mean square (RMS) displacement), while the DJI Phantom 2 had a maximum deviation of 0.79 m (0.150 m RMS displacement). In the rectilinear motion test, the maximum displacements for the 3DR Iris+ and the DJI Phantom 2 were 0.85 m (0.134 m RMS displacement) and 0.73 m (0.372 m RMS displacement), respectively. These results demonstrate that the two sUAVs performed well in both tests and can thus be used for civilian applications such as agricultural monitoring. They also show that the developed image processing approach can be used to evaluate the performance of an sUAV and has the potential to serve as another feedback control parameter for autonomous navigation.
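
The two hovering-test figures quoted above (maximum deviation and RMS displacement) can be computed from the tracked object positions as in the sketch below, assuming the positions have already been converted from pixels to metres via the known object size:

```python
import numpy as np

def hover_metrics(positions):
    """positions: (N, 2) array of the tracked object position per frame.
    Returns (maximum deviation, RMS displacement) about the mean position."""
    offsets = positions - positions.mean(axis=0)
    dist = np.linalg.norm(offsets, axis=1)
    return dist.max(), float(np.sqrt(np.mean(dist ** 2)))
```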

Editorial
Acknowledgement to Reviewers of Journal of Imaging in 2016
by J. Imaging Editorial Office
J. Imaging 2017, 3(1), 3; https://doi.org/10.3390/jimaging3010003 - 11 Jan 2017
Cited by 1 | Viewed by 3658
Abstract
The editors of Journal of Imaging would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.
Article
Peach Flower Monitoring Using Aerial Multispectral Imaging
by Ryan Horton, Esteban Cano, Duke Bulanon and Esmaeil Fallahi
J. Imaging 2017, 3(1), 2; https://doi.org/10.3390/jimaging3010002 - 6 Jan 2017
Cited by 39 | Viewed by 10118
Abstract
One of the tools for optimal crop production is regular monitoring and assessment of crops. During the growing season of fruit trees, the bloom period shows increased photosynthetic rates that correlate with the fruiting process. This paper presents the development of an image processing algorithm to detect peach blossoms on trees. Aerial images of peach (Prunus persica) trees were acquired from both experimental and commercial peach orchards in southwestern Idaho using an off-the-shelf unmanned aerial system (UAS) equipped with a multispectral camera (near-infrared, green, blue). The image processing algorithm included contrast stretching of the three bands to enhance the images and a thresholding segmentation method to detect the peach blossoms. Initial results showed that the algorithm could detect peach blossoms with an average detection rate of 84.3%, demonstrating good potential as a monitoring tool for orchard management.
(This article belongs to the Special Issue Image Processing in Agriculture and Forestry)
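
A minimal sketch of the two stages named in the abstract, per-band contrast stretching followed by threshold segmentation; the band-combination rule and the threshold value are illustrative, not the authors' calibrated choices:

```python
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Linearly rescale a band so the given percentiles map to [0, 1]."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    return np.clip((band.astype(np.float32) - lo) / (hi - lo + 1e-9), 0.0, 1.0)

def blossom_mask(nir, green, blue, threshold=0.8):
    """Blossoms appear bright in all three bands, so threshold the
    pixel-wise minimum of the stretched bands."""
    stretched = np.minimum.reduce([contrast_stretch(b) for b in (nir, green, blue)])
    return stretched > threshold
```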

Review
Polyp Detection and Segmentation from Video Capsule Endoscopy: A Review
by V. B. Surya Prasath
J. Imaging 2017, 3(1), 1; https://doi.org/10.3390/jimaging3010001 - 23 Dec 2016
Cited by 50 | Viewed by 15067
Abstract
Video capsule endoscopy (VCE) is widely used nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are usually prescribed as an additional monitoring mechanism and can help in identifying polyps, bleeding, etc. To analyze the large-scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Although polyp detection in images from colonoscopy and other traditional endoscopy procedures is becoming a mature field, automatically detecting polyps in VCE remains a hard problem due to its unique imaging characteristics. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.
(This article belongs to the Special Issue Image and Video Processing in Medicine)
