Journal Description
Journal of Imaging
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access— free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: CiteScore - Q1 (Computer Graphics and Computer-Aided Design)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.9 days after submission; acceptance to publication is undertaken in 3.4 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.7 (2023); 5-Year Impact Factor: 3.0 (2023)
Latest Articles
Deep Learning-Based Method for Detecting Traffic Flow Parameters Under Snowfall
J. Imaging 2024, 10(12), 301; https://doi.org/10.3390/jimaging10120301 - 22 Nov 2024
Abstract
In recent years, advancements in computer vision have yielded new prospects for intelligent transportation applications, specifically in the realm of automated traffic flow data collection. Within this emerging trend, the ability to swiftly and accurately detect vehicles and extract traffic flow parameters from videos captured during snowfall conditions has become imperative for numerous future applications. This paper proposes a new analytical framework designed to extract traffic flow parameters from traffic flow videos recorded under snowfall conditions. The framework encompasses four distinct stages aimed at addressing the challenges posed by image degradation and the diminished accuracy of traffic flow parameter recognition caused by snowfall. The initial two stages propose a deep learning network for removing snow particles and snow streaks, resulting in an 8.6% enhancement in vehicle recognition accuracy after snow removal, specifically under moderate snow conditions. Additionally, the operation speed is significantly enhanced. Subsequently, the latter two stages encompass YOLOv5-based vehicle recognition and the use of the virtual coil method for traffic flow parameter estimation. Following rigorous testing, the accuracy of traffic flow parameter estimation reaches 97.2% under moderate snow conditions.
Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
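As a hedged illustration of the final two stages described above (vehicle recognition plus the virtual coil method), and not the authors' implementation, the sketch below loads a public YOLOv5 model via torch.hub and counts a vehicle each time a detection centre enters an assumed coil region; the video path, coil coordinates, and confidence threshold are placeholders.

```python
# Hedged sketch: YOLOv5 detection + virtual-coil vehicle counting (illustrative, not the paper's code).
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # public YOLOv5 weights
VEHICLE_CLASSES = {2, 3, 5, 7}            # COCO ids: car, motorcycle, bus, truck
COIL = (400, 520, 100, 1180)              # (y_top, y_bottom, x_left, x_right), assumed coil region

def centre_in_coil(x1, y1, x2, y2, coil):
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    y_top, y_bot, x_l, x_r = coil
    return y_top <= cy <= y_bot and x_l <= cx <= x_r

cap = cv2.VideoCapture("traffic_snow.mp4")                # assumed (de-snowed) input video
count, occupied_prev = 0, False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    det = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    boxes = det.xyxy[0].cpu().numpy()                     # columns: x1, y1, x2, y2, conf, class
    occupied = any(
        conf > 0.4 and int(cls) in VEHICLE_CLASSES and centre_in_coil(x1, y1, x2, y2, COIL)
        for x1, y1, x2, y2, conf, cls in boxes
    )
    if occupied and not occupied_prev:                    # rising edge: one vehicle entering the coil
        count += 1
    occupied_prev = occupied
cap.release()
print("vehicles counted:", count)
```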
Open Access Article
Learning More May Not Be Better: Knowledge Transferability in Vision-and-Language Tasks
by
Tianwei Chen, Noa Garcia, Mayu Otani, Chenhui Chu, Yuta Nakashima and Hajime Nagahara
J. Imaging 2024, 10(12), 300; https://doi.org/10.3390/jimaging10120300 - 22 Nov 2024
Abstract
Is learning more knowledge always better for vision-and-language models? In this paper, we study knowledge transferability in multi-modal tasks. The current tendency in machine learning is to assume that by joining multiple datasets from different tasks, their overall performance improves. However, we show that not all knowledge transfers well or has a positive impact on related tasks, even when they share a common goal. We conducted an exhaustive analysis based on hundreds of cross-experiments on twelve vision-and-language tasks categorized into four groups. While tasks in the same group are prone to improve each other, results show that this is not always the case. In addition, other factors, such as dataset size or the pre-training stage, may have a great impact on how well the knowledge is transferred.
Full article
(This article belongs to the Special Issue Deep Learning in Image Analysis: Progress and Challenges)
Open Access Article
Enhanced Atrous Spatial Pyramid Pooling Feature Fusion for Small Ship Instance Segmentation
by
Rabi Sharma, Muhammad Saqib, C. T. Lin and Michael Blumenstein
J. Imaging 2024, 10(12), 299; https://doi.org/10.3390/jimaging10120299 - 21 Nov 2024
Abstract
In the maritime environment, the instance segmentation of small ships is crucial. Small ships are characterized by their limited visual appearance, small size, and distant locations in marine scenes. However, existing instance segmentation algorithms often fail to detect and segment them, resulting in inaccurate ship segmentation. To address this, we propose a novel solution called enhanced Atrous Spatial Pyramid Pooling (ASPP) feature fusion for small ship instance segmentation. The enhanced ASPP feature fusion module focuses on small objects by refining them and fusing important features. The framework consistently outperforms state-of-the-art models, including Mask R-CNN, Cascade Mask R-CNN, YOLACT, SOLO, and SOLOv2, on three diverse datasets, achieving an average precision (mask AP) score of 75.8% for ShipSG, 69.5% for ShipInsSeg, and 54.5% for the MariBoats datasets.
Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
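The atrous spatial pyramid pooling idea behind the fusion module can be illustrated with a minimal PyTorch sketch of a generic ASPP block (not the authors' enhanced variant); the dilation rates and channel sizes here are assumptions.

```python
# Minimal generic ASPP block in PyTorch (illustrative sketch, not the paper's enhanced module).
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        # Parallel atrous convolutions with increasing dilation rates capture multi-scale context.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                          padding=r if r > 1 else 0, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # A 1x1 convolution fuses the concatenated multi-scale features back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 256, 64, 64)      # dummy backbone feature map
print(ASPP()(x).shape)               # torch.Size([1, 256, 64, 64])
```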
Open Access Review
Image Processing Hardware Acceleration—A Review of Operations Involved and Current Hardware Approaches
by
Costin-Emanuel Vasile, Andrei-Alexandru Ulmămei and Călin Bîră
J. Imaging 2024, 10(12), 298; https://doi.org/10.3390/jimaging10120298 - 21 Nov 2024
Abstract
This review provides an in-depth analysis of current hardware acceleration approaches for image processing and neural network inference, focusing on key operations involved in these applications and the hardware platforms used to deploy them. We examine various solutions, including traditional CPU–GPU systems, custom ASIC designs, and FPGA implementations, while also considering emerging low-power, resource-constrained devices.
Full article
(This article belongs to the Section Image and Video Processing)
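For orientation, the dominant workload that the surveyed CPU, GPU, FPGA, and ASIC platforms accelerate is a dense multiply-accumulate kernel such as 2D convolution; the naive NumPy sketch below is illustrative only (not taken from the review) and shows the nested loops that dedicated hardware parallelizes.

```python
# Illustrative sketch of the 2D convolution workload targeted by hardware accelerators.
import numpy as np

def conv2d_naive(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):                        # these nested loops are the hot spot that
        for j in range(ow):                    # GPUs, FPGAs, and ASICs parallelize
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.rand(64, 64).astype(np.float32)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
print(conv2d_naive(img, sobel_x).shape)        # (62, 62)
```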
Open Access Article
Multimodal Machine Learning for Predicting Post-Surgery Quality of Life in Colorectal Cancer Patients
by
Maryem Rhanoui, Mounia Mikram, Kamelia Amazian, Abderrahim Ait-Abderrahim, Siham Yousfi and Imane Toughrai
J. Imaging 2024, 10(12), 297; https://doi.org/10.3390/jimaging10120297 - 21 Nov 2024
Abstract
Colorectal cancer is a major public health issue, causing significant morbidity and mortality worldwide. Treatment for colorectal cancer often has a significant impact on patients’ quality of life, which can vary over time and across individuals. The application of artificial intelligence and machine learning techniques has great potential for optimizing patient outcomes by providing valuable insights. In this paper, we propose a multimodal machine learning framework for the prediction of quality of life indicators in colorectal cancer patients at various temporal stages, leveraging both clinical data and computed tomography scan images. Additionally, we identify key predictive factors for each quality of life indicator, thereby enabling clinicians to make more informed treatment decisions and ultimately enhance patient outcomes. Our approach integrates data from multiple sources, enhancing the performance of our predictive models. The analysis demonstrates a notable improvement in accuracy for some indicators, with results for the Wexner score increasing from 24% to 48% and for the Anorectal Ultrasound score from 88% to 96% after integrating data from different modalities. These results highlight the potential of multimodal learning to provide valuable insights and improve patient care in real-world applications.
Full article
(This article belongs to the Section Medical Imaging)
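A hedged sketch of the kind of late-fusion architecture the abstract describes, combining a CNN image branch with a tabular clinical branch; the backbone choice, layer sizes, and feature dimensions are assumptions rather than the authors' model.

```python
# Hedged sketch: clinical-data + CT-image fusion classifier (not the authors' architecture).
import torch
import torch.nn as nn
from torchvision import models

class MultimodalQoLNet(nn.Module):
    def __init__(self, n_clinical_features=20, n_classes=3):
        super().__init__()
        backbone = models.resnet18(weights=None)       # image branch (pretrained weights optional)
        backbone.fc = nn.Identity()                    # expose the 512-d image embedding
        self.image_branch = backbone
        self.clinical_branch = nn.Sequential(          # tabular branch for clinical variables
            nn.Linear(n_clinical_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 32, n_classes)     # fused prediction of a QoL indicator

    def forward(self, ct_image, clinical):
        fused = torch.cat([self.image_branch(ct_image), self.clinical_branch(clinical)], dim=1)
        return self.head(fused)

model = MultimodalQoLNet()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 20))
print(logits.shape)   # torch.Size([4, 3])
```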
Open Access Article
Evaluating Brain Tumor Detection with Deep Learning Convolutional Neural Networks Across Multiple MRI Modalities
by
Ioannis Stathopoulos, Luigi Serio, Efstratios Karavasilis, Maria Anthi Kouri, Georgios Velonakis, Nikolaos Kelekis and Efstathios Efstathopoulos
J. Imaging 2024, 10(12), 296; https://doi.org/10.3390/jimaging10120296 - 21 Nov 2024
Abstract
Central Nervous System (CNS) tumors represent a significant public health concern due to their high morbidity and mortality rates. Magnetic Resonance Imaging (MRI) has emerged as a critical non-invasive modality for the detection, diagnosis, and management of brain tumors, offering high-resolution visualization of anatomical structures. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have shown potential in augmenting MRI-based diagnostic accuracy for brain tumor detection. In this study, we evaluate the diagnostic performance of six fundamental MRI sequences in detecting tumor-involved brain slices using four distinct CNN architectures enhanced with transfer learning techniques. Our dataset comprises 1646 MRI slices from the examinations of 62 patients, encompassing both tumor-bearing and normal findings. With our approach, we achieved a classification accuracy of 98.6%, underscoring the high potential of CNN-based models in this context. Additionally, we assessed the performance of each MRI sequence across the different CNN models, identifying optimal combinations of MRI modalities and neural networks to meet radiologists’ screening requirements effectively. This study offers critical insights into the integration of deep learning with MRI for brain tumor detection, with implications for improving diagnostic workflows in clinical settings.
Full article
(This article belongs to the Special Issue Image Segmentation Techniques: Current Status and Future Directions (2nd Edition))
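A minimal transfer-learning sketch in the spirit of the study, fine-tuning a pretrained torchvision CNN for binary tumor/normal slice classification; the choice of ResNet18, the frozen-backbone strategy, and the hyperparameters are assumptions.

```python
# Hedged sketch: transfer learning for binary MRI-slice classification (not the study's exact setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # ImageNet-pretrained backbone
for p in model.parameters():
    p.requires_grad = False                        # freeze convolutional features (transfer learning)
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: tumor-bearing vs. normal slice

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of MRI slices replicated to three channels.
slices = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(slices), labels)
loss.backward()
optimizer.step()
print(float(loss))
```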
Open Access Article
AQSA—Algorithm for Automatic Quantification of Spheres Derived from Cancer Cells in Microfluidic Devices
by
Ana Belén Peñaherrera-Pazmiño, Ramiro Fernando Isa-Jara, Elsa Hincapié-Arias, Silvia Gómez, Denise Belgorosky, Eduardo Imanol Agüero, Matías Tellado, Ana María Eiján, Betiana Lerner and Maximiliano Pérez
J. Imaging 2024, 10(11), 295; https://doi.org/10.3390/jimaging10110295 - 20 Nov 2024
Abstract
Sphere formation assay is an accepted cancer stem cell (CSC) enrichment method. CSCs play a crucial role in chemoresistance and cancer recurrence. Therefore, CSC growth is studied in plates and microdevices to develop prediction chemotherapy assays in cancer. As counting spheres cultured in devices is laborious, time-consuming, and operator-dependent, a computational program called the Automatic Quantification of Spheres Algorithm (AQSA) that detects, identifies, counts, and measures spheres automatically was developed. The algorithm and manual counts were compared, and there was no statistically significant difference (p = 0.167). The performance of the AQSA is better when the input image has a uniform background, whereas, with a nonuniform background, artifacts can be interpreted as spheres according to image characteristics. The areas of spheres derived from LN229 cells and CSCs from primary cultures were measured. For images with one sphere, area measurements obtained with the AQSA and SpheroidJ were compared, and there was no statistically significant difference between them (p = 0.173). Notably, the AQSA detects more than one sphere, compared to other approaches available in the literature, and computes the sphere area automatically, which enables the observation of treatment response in the sphere derived from the human glioblastoma LN229 cell line. In addition, the algorithm labels spheres with numbers to identify each one over time. The AQSA analyzes many images in 0.3 s per image with a low computational cost, enabling laboratories from developing countries to perform sphere counts and area measurements without needing a powerful computer. Consequently, it can be a useful tool for automated CSC quantification from cancer cell lines, and it can be adjusted to quantify CSCs from primary culture cells. CSC-derived sphere detection is highly relevant as it avoids expensive treatments and unnecessary toxicity.
Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
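The detection, counting, and area-measurement steps can be illustrated with a short OpenCV sketch (threshold, find contours, filter by area, number each object); this is a generic stand-in rather than the AQSA pipeline, and the file name, threshold polarity, and minimum-area value are assumptions.

```python
# Hedged sketch: automatic sphere counting and area measurement with OpenCV (not the AQSA code).
import cv2

img = cv2.imread("spheres.png", cv2.IMREAD_GRAYSCALE)       # assumed microscopy image
blur = cv2.GaussianBlur(img, (5, 5), 0)
# Otsu threshold; THRESH_BINARY_INV assumes dark spheres on a brighter, uniform background.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

MIN_AREA_PX = 200                                            # assumed minimum sphere area
spheres = [c for c in contours if cv2.contourArea(c) >= MIN_AREA_PX]
for i, c in enumerate(spheres, start=1):
    x, y, w, h = cv2.boundingRect(c)
    cv2.putText(img, str(i), (x, y - 3), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 255, 1)  # number each sphere
print("sphere count:", len(spheres))
print("areas (px^2):", [cv2.contourArea(c) for c in spheres])
```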
Open Access Article
Spatially Localized Visual Perception Estimation by Means of Prosthetic Vision Simulation
by
Diego Luján Villarreal and Wolfgang Krautschneider
J. Imaging 2024, 10(11), 294; https://doi.org/10.3390/jimaging10110294 - 18 Nov 2024
Abstract
Retinal prosthetic devices aim to restore some vision in visually impaired patients by electrically stimulating neural cells in the visual system. Although there have been several notable advancements in the creation of electrically stimulated small dot-like perceptions, a deeper comprehension of the physical properties of phosphenes is still necessary. This study analyzes the influence of two independent electrode array topologies to achieve single-localized stimulation while the retina is electrically stimulated: a two-dimensional (2D) hexagon-shaped array reported in clinical studies and a patented three-dimensional (3D) linear electrode carrier. For both, cell stimulation is verified in COMSOL Multiphysics by developing a lifelike 3D computational model that includes the relevant retinal interface elements and dynamics of the voltage-gated ionic channels. The evoked percepts previously described in clinical studies using the 2D array are strongly associated with our simulation-based findings, allowing for the development of analytical models of the evoked percepts. Moreover, our findings identify differences between visual sensations induced by the arrays. The 2D array showed drawbacks during stimulation; similarly, state-of-the-art 2D visual prostheses provide only dot-like visual sensations in close proximity to the electrode. The 3D design could offer a technique for improving cell selectivity because it requires low-intensity threshold activation, which results in volumes of stimulation similar to the volume surrounded by a solitary RGC. Our research establishes a proof-of-concept technique for determining the utility of the 3D electrode array for selectively activating individual RGCs at the highest density via small-sized electrodes while maintaining electrochemical safety.
Full article
(This article belongs to the Special Issue Visual and Physiological Optics: Optical Design, Image Processing and Machine Learning Algorithms)
Open Access Editorial
Editorial on the Special Issue “Fluorescence Imaging and Analysis of Cellular Systems”
by
Ashutosh Sharma
J. Imaging 2024, 10(11), 293; https://doi.org/10.3390/jimaging10110293 - 18 Nov 2024
Abstract
Fluorescence imaging has indeed become a cornerstone in modern cell biology due to its ability to offer highly sensitive, specific, and real-time visualization of cellular structures and dynamic processes [...]
Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular Systems)
Open Access Technical Note
MOTH: Memory-Efficient On-the-Fly Tiling of Histological Image Annotations Using QuPath
by
Thomas Kauer, Jannik Sehring, Kai Schmid, Marek Bartkuhn, Benedikt Wiebach, Slaven Crnkovic, Grazyna Kwapiszewska, Till Acker and Daniel Amsel
J. Imaging 2024, 10(11), 292; https://doi.org/10.3390/jimaging10110292 - 15 Nov 2024
Abstract
The growing use of digitized histopathological images opens up new possibilities for data analysis. With the help of artificial intelligence algorithms, it is now possible to detect certain structures and morphological features on whole slide images automatically. This enables algorithms to count, measure, or evaluate those areas when trained properly. To achieve suitable training, datasets must be annotated and curated by users in programs like QuPath. Extracting these data for artificial intelligence algorithms is still rather tedious, and the extracted tiles need to be saved on a local hard drive. We developed a toolkit for integration into existing pipelines and tools, like U-net, for the on-the-fly extraction of annotation tiles from existing QuPath projects. The tiles can be directly used as input for artificial intelligence algorithms, and the results are directly transferred back to QuPath for visual inspection. With the toolkit, we created a convenient way to incorporate QuPath into existing AI workflows.
Full article
(This article belongs to the Special Issue Advances in Biomedical Image Processing and Artificial Intelligence for Computer-Aided Diagnosis in Medicine)
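The on-the-fly idea can be sketched as a memory-efficient Python generator that yields annotation tiles lazily instead of writing them to disk; this is a generic illustration rather than the MOTH implementation, and the tile size, coverage cut-off, and in-memory image/mask arrays are assumptions.

```python
# Hedged sketch: lazy tile extraction from an annotated slide region (not the MOTH code).
import numpy as np

def iter_annotation_tiles(image, mask, tile=256, min_coverage=0.05):
    """Yield (tile_image, tile_mask) pairs on the fly, skipping tiles with too little annotation."""
    h, w = mask.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            m = mask[y:y + tile, x:x + tile]
            if m.mean() >= min_coverage:                # keep tiles containing enough annotation
                yield image[y:y + tile, x:x + tile], m  # nothing is written to the hard drive

# Dummy region standing in for a QuPath-annotated slide area.
region = np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)
annotation = (np.random.rand(2048, 2048) > 0.9).astype(np.float32)
n_tiles = sum(1 for _ in iter_annotation_tiles(region, annotation))
print("tiles yielded on the fly:", n_tiles)
```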
Open Access Article
Anatomical Characteristics of Cervicomedullary Compression on MRI Scans in Children with Achondroplasia
by
Isabella Trautwein, Daniel Behme, Philip Kunkel, Jasper Gerdes and Klaus Mohnike
J. Imaging 2024, 10(11), 291; https://doi.org/10.3390/jimaging10110291 - 14 Nov 2024
Abstract
This retrospective study assessed anatomical characteristics of cervicomedullary compression in children with achondroplasia. Twelve anatomical parameters were analyzed (foramen magnum diameter and area; myelon area; clivus length; tentorium and occipital angles; brainstem volume outside the posterior fossa; and posterior fossa, cerebellum, supratentorial ventricular system, intracranial cerebrospinal fluid, and fourth ventricle volumes) from sagittal and transversal T1- and T2-weighted magnetic resonance imaging (MRI) scans from 37 children with achondroplasia aged ≤ 4 years (median [range] 0.8 [0.1–3.6] years) and compared with scans from 37 children without achondroplasia (median age 1.5 [0–3.9] years). Mann–Whitney U testing was used for between-group comparisons. Foramen magnum diameter and area were significantly smaller in children with achondroplasia compared with the reference group (mean 10.0 vs. 16.1 mm [p < 0.001] and 109.0 vs. 160.8 mm2 [p = 0.005], respectively). The tentorial angle was also steeper in children with achondroplasia (mean 47.6 vs. 38.1 degrees; p < 0.001), while the clivus was significantly shorter (mean 23.5 vs. 30.3 mm; p < 0.001). Significant differences were also observed in myelon area, occipital angle, fourth ventricle, intracranial cerebrospinal fluid and supratentorial ventricular volumes, and the volume of brainstem protruding beyond the posterior fossa (all p < 0.05). MRI analysis of brain structures may provide a standardized value to indicate decompression surgery in children with achondroplasia.
Full article
(This article belongs to the Special Issue Deep Learning in Computer Vision)
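The between-group comparison reported above follows a standard Mann-Whitney U workflow; the SciPy sketch below uses made-up example measurements, not the study data.

```python
# Hedged sketch: Mann-Whitney U comparison of an anatomical parameter between two groups.
# The numbers are illustrative placeholders, not values from the study.
from scipy.stats import mannwhitneyu

foramen_diameter_achondroplasia = [9.1, 10.4, 9.8, 10.9, 9.5, 10.2]     # mm, hypothetical
foramen_diameter_reference = [15.2, 16.8, 15.9, 16.3, 17.0, 15.5]       # mm, hypothetical

stat, p = mannwhitneyu(foramen_diameter_achondroplasia,
                       foramen_diameter_reference, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")   # p < 0.05 indicates a significant group difference
```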
Open Access Article
Preclinical Implementation of matRadiomics: A Case Study for Early Malformation Prediction in Zebrafish Model
by
Fabiano Bini, Elisa Missori, Gaia Pucci, Giovanni Pasini, Franco Marinozzi, Giusi Irma Forte, Giorgio Russo and Alessandro Stefano
J. Imaging 2024, 10(11), 290; https://doi.org/10.3390/jimaging10110290 - 14 Nov 2024
Abstract
Radiomics provides a structured approach to support clinical decision-making through key steps; however, users often face difficulties when switching between various software platforms to complete the workflow. To streamline this process, matRadiomics integrates the entire radiomics workflow within a single platform. This study extends matRadiomics to preclinical settings and validates it through a case study focused on early malformation differentiation in a zebrafish model. The proposed plugin incorporates Pyradiomics and streamlines feature extraction, selection, and classification using machine learning models (linear discriminant analysis—LDA; k-nearest neighbors—KNNs; and support vector machines—SVMs) with k-fold cross-validation for model validation. Classifier performance is evaluated using the area under the ROC curve (AUC) and accuracy. The case study highlighted how critical the long feature-extraction time is for preclinical images, which generally have a higher resolution than clinical images. To address this, a feature analysis was conducted to optimize settings, reducing extraction time while maintaining similarity to the original features. As a result, SVM exhibited the best performance for early malformation differentiation in zebrafish (AUC = 0.723; accuracy = 0.72). This case study underscores the plugin's versatility and effectiveness in early biological outcome prediction, emphasizing its applicability across biomedical research fields.
Full article
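A hedged sketch of the workflow the plugin automates, PyRadiomics feature extraction followed by an SVM with k-fold cross-validation; the file names, extractor settings, and the synthetic feature matrix are assumptions, not the matRadiomics internals.

```python
# Hedged sketch: PyRadiomics feature extraction + SVM with k-fold cross-validation.
import numpy as np
from radiomics import featureextractor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = featureextractor.RadiomicsFeatureExtractor()     # default PyRadiomics settings

def features_for(image_path, mask_path):
    """Extract the 'original' feature classes for one image/ROI pair (paths are hypothetical)."""
    result = extractor.execute(image_path, mask_path)
    return [float(v) for k, v in result.items() if k.startswith("original_")]

# In practice X would be built by calling features_for() on every zebrafish image/mask pair;
# a synthetic matrix stands in here so the classification step can be demonstrated end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))             # 40 cases x 100 radiomic features (synthetic)
y = rng.integers(0, 2, size=40)            # 1 = malformation, 0 = normal (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")  # k-fold cross-validated AUC
print("mean cross-validated AUC:", round(scores.mean(), 3))
```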
Open Access Article
The Methodology of Adaptive Levels of Interval for Laser Speckle Imaging
by
Ali A. Al-Temeemy
J. Imaging 2024, 10(11), 289; https://doi.org/10.3390/jimaging10110289 - 11 Nov 2024
Abstract
A methodology is proposed for use in the laser speckle imaging field. It modifies the graphical and numerical speckle pattern imaging methods to improve their extraction and discrimination capabilities when processing the temporal activity embedded in images of laser speckle patterns, by enabling these methods to adapt the levels of the speckle-image interval during processing. This speeds up the process and overcomes the lack of discrimination when dealing with a complex scattering medium containing regions with different scales of activity. The impact of the new methodology on the performance of the imaging methods was evaluated using graphical and numerical evaluation tests; in addition, a dedicated laser speckle imaging system was designed and implemented to run a series of experimental validation tests on the methodology. The evaluation and experimental validation tests show the effectiveness of this methodology for the extraction and discrimination capabilities of the standard speckle pattern imaging methods and prove its ability to provide high performance with real images of speckle patterns. The results also show an improvement in processing speed for both the graphical and the numerical methods when the adaptive levels methodology is applied to them.
Full article
(This article belongs to the Section Image and Video Processing)
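For context, the temporal activity that such methods process is commonly summarized by a speckle contrast statistic computed over an interval of frames; the sketch below computes the per-pixel temporal contrast K = std/mean over a fixed interval and is a generic illustration of the quantity whose interval levels the paper adapts, not the proposed adaptive-levels methodology itself.

```python
# Hedged sketch: per-pixel temporal speckle contrast over an interval of frames
# (generic laser-speckle statistic, not the paper's adaptive-levels methodology).
import numpy as np

def temporal_speckle_contrast(frames):
    """frames: array of shape (T, H, W); returns K = std/mean computed along time."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    return std / (mean + 1e-12)             # low K indicates high activity (fast decorrelation)

stack = np.random.rand(64, 128, 128).astype(np.float32)   # dummy speckle image stack
interval = 16                                              # assumed interval length (frames)
K = temporal_speckle_contrast(stack[:interval])
print(K.shape, float(K.mean()))
```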
Open Access Article
Iris Recognition System Using Advanced Segmentation Techniques and Fuzzy Clustering Methods for Robotic Control
by
Slim Ben Chaabane, Rafika Harrabi and Hassene Seddik
J. Imaging 2024, 10(11), 288; https://doi.org/10.3390/jimaging10110288 - 8 Nov 2024
Abstract
The idea of developing a robot controlled by iris movement to assist physically disabled individuals is, indeed, innovative and has the potential to significantly improve their quality of life. This technology can empower individuals with limited mobility and enhance their ability to interact with their environment. Disability of movement has a huge impact on the lives of physically disabled people; therefore, there is a need to develop a robot that can be controlled using iris movement. The main idea of this work revolves around iris recognition from an eye image, specifically identifying the centroid of the iris. The centroid's position is then utilized to issue commands to control the robot. This innovative approach leverages iris movement as a means of communication and control, offering a potential breakthrough in assisting individuals with physical disabilities. The proposed method aims to improve the precision and effectiveness of iris recognition by incorporating advanced segmentation techniques and fuzzy clustering methods. Fast gradient filters using a fuzzy inference system (FIS) are employed to separate the iris from its surroundings. Then, the bald eagle search (BES) algorithm is employed to locate and isolate the iris region. Subsequently, the fuzzy KNN algorithm is applied for the matching process. This combined methodology aims to improve the overall performance of iris recognition systems by leveraging advanced segmentation, search, and classification techniques. The results of the proposed model are validated using the true success rate (TSR) and compared to those of other existing models. These results highlight the effectiveness of the proposed method for the 400 tested images representing 40 people.
Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
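The core control idea, locating the iris centroid and mapping its offset from the image centre to a robot command, can be sketched with OpenCV image moments; the fixed threshold below is a simple placeholder for the FIS/BES segmentation, and the command mapping, file name, and dead-zone value are assumptions.

```python
# Hedged sketch: iris-centroid estimation and a simple command mapping
# (not the paper's FIS/BES/fuzzy-KNN pipeline).
import cv2

def iris_command(eye_gray, dead_zone=10):
    # Placeholder segmentation: isolate the dark iris/pupil region with a fixed threshold.
    _, mask = cv2.threshold(eye_gray, 60, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return "STOP"                                     # no iris region found
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]     # iris centroid
    dx = cx - eye_gray.shape[1] / 2                       # horizontal offset from image centre
    dy = cy - eye_gray.shape[0] / 2                       # vertical offset from image centre
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "FORWARD"
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "BACKWARD" if dy > 0 else "FORWARD"

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)         # assumed eye image
print(iris_command(eye))
```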
Open Access Article
Precision Ice Detection on Power Transmission Lines: A Novel Approach with Multi-Scale Retinex and Advanced Morphological Edge Detection Monitoring
by
Nalini Rizkyta Nusantika, Jin Xiao and Xiaoguang Hu
J. Imaging 2024, 10(11), 287; https://doi.org/10.3390/jimaging10110287 - 8 Nov 2024
Abstract
Ice accretion on power transmission lines is a serious risk that can lead to structural damage or power outages. Current techniques for identifying ice have certain drawbacks, particularly when used in complex environments. This paper aims to detect the top and bottom boundary lines in power transmission line icing (PTLI) images with low illumination and complex backgrounds. The proposed method integrates multistage image processing techniques, including image enhancement, filtering, thresholding, object isolation, edge detection, and line identification. A binocular camera is used to capture images of PTLI. The effectiveness of the method is evaluated through a series of metrics, including accuracy, sensitivity, specificity, and precision, and compared with existing methods. The proposed method significantly outperforms existing methods of ice detection and thickness measurement, achieving an average accuracy of 98.35% for the detection and isolation of ice formations under various conditions, a sensitivity of 91.63%, a specificity of 99.42%, and a precision of 96.03%. Furthermore, ice thickness measurements are considerably more accurate, with an RMSE of 1.20 mm, an MAE of 1.10 mm, and an R-squared value of 0.95. The proposed scheme for ice detection provides a more accurate and reliable method for monitoring ice formation on power transmission lines.
Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
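A hedged sketch of the edge-detection and line-identification stages at the end of the pipeline; CLAHE is used here as a simple stand-in for multi-scale Retinex enhancement, and the file name and Canny/Hough parameters are assumptions.

```python
# Hedged sketch: enhancement, edge detection, and line identification for an iced-line image
# (generic OpenCV stages standing in for the paper's multi-scale Retinex + morphological pipeline).
import cv2
import numpy as np

img = cv2.imread("ptli_frame.png", cv2.IMREAD_GRAYSCALE)        # assumed low-illumination image

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))      # contrast-enhancement stand-in
enhanced = clahe.apply(img)
denoised = cv2.medianBlur(enhanced, 5)                           # filtering stage
edges = cv2.Canny(denoised, 50, 150)                             # edge-detection stage
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))  # close small gaps

# Line identification: keep long, near-horizontal segments as candidate top/bottom ice boundaries.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=120, maxLineGap=10)
candidates = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 15 or angle > 165:                            # near-horizontal lines only
            candidates.append((int(x1), int(y1), int(x2), int(y2)))
print("candidate boundary lines:", len(candidates))
```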
Open Access Article
ssc-cdi: A Memory-Efficient, Multi-GPU Package for Ptychography with Extreme Data
by
Yuri Rossi Tonin, Alan Zanoni Peixinho, Mauro Luiz Brandao-Junior, Paola Ferraz and Eduardo Xavier Miqueles
J. Imaging 2024, 10(11), 286; https://doi.org/10.3390/jimaging10110286 - 7 Nov 2024
Abstract
We introduce ssc-cdi, an open-source software package from the Sirius Scientific Computing family, designed for memory-efficient, single-node multi-GPU ptychography reconstruction. ssc-cdi offers a range of reconstruction engines in Python version 3.9.2 and C++/CUDA. It aims at developing local expertise and customized solutions to meet the specific needs of the beamlines and user community of the Brazilian Synchrotron Light Laboratory (LNLS). We demonstrate ptychographic reconstruction of beamline data and present benchmarks for the package. Results show that ssc-cdi effectively handles extreme datasets typical of modern X-ray facilities without significantly compromising performance, offering a complementary approach to well-established packages of the community and serving as a robust tool for high-resolution imaging applications.
Full article
(This article belongs to the Special Issue Recent Advances in X-ray Imaging)
Open Access Article
A Mathematical Model for Wind Velocity Field Reconstruction and Visualization Taking into Account the Topography Influence
by
Guzel Khayretdinova and Christian Gout
J. Imaging 2024, 10(11), 285; https://doi.org/10.3390/jimaging10110285 - 7 Nov 2024
Abstract
In this paper, we propose a global modelling approach for vector field approximation from a given finite set of vectors (corresponding to a wind velocity field or marine currents). The model minimizes, over a Hilbert space, an energy functional that combines a fidelity criterion on the data with a smoothing term. We discretize the continuous problem using a finite element method. We then take into account the topographic effects on the wind velocity field, and visualization using a free library is also proposed, which constitutes an added value compared with other vector field approximation models.
Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
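A hedged sketch of the kind of energy functional described in the abstract; the notation and the exact smoothing seminorm are assumptions rather than the authors' formulation. Given sample vectors $u_i$ at points $x_i$, one seeks the field $v$ in a Hilbert space $V$ minimizing a data-fidelity term plus a weighted smoothing term, and the minimizer is then discretized with finite elements.

```latex
% Hedged sketch of a fidelity-plus-smoothing energy for vector field approximation.
J_\varepsilon(v) \;=\; \underbrace{\sum_{i=1}^{N} \bigl\| v(x_i) - u_i \bigr\|^2}_{\text{fidelity to the data}}
\;+\; \varepsilon \, \underbrace{|v|_{V}^{2}}_{\text{smoothing term}},
\qquad v_\varepsilon \;=\; \operatorname*{arg\,min}_{v \in V} J_\varepsilon(v).
```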
Open Access Article
Strabismus Detection in Monocular Eye Images for Telemedicine Applications
by
Wattanapong Kurdthongmee, Lunla Udomvej, Arsanchai Sukkuea, Piyadhida Kurdthongmee, Chitchanok Sangeamwong and Chayanid Chanakarn
J. Imaging 2024, 10(11), 284; https://doi.org/10.3390/jimaging10110284 - 7 Nov 2024
Abstract
This study presents a novel method for the early detection of strabismus, a common eye misalignment disorder, with an emphasis on its application in telemedicine. The technique leverages synchronized eye movements to estimate the pupil location of one eye based on the other, achieving close alignment in non-strabismic cases. Regression models for each eye are developed using advanced machine learning algorithms, and significant discrepancies between estimated and actual pupil positions indicate the presence of strabismus. This approach provides a non-invasive, efficient solution for early detection and bridges the gap between basic research and clinical care by offering an accessible, machine learning-based tool that facilitates timely intervention and improved outcomes in diverse healthcare settings. The potential for pediatric screening is discussed as a possible direction for future research.
Full article
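A hedged sketch of the detection logic the abstract describes: fit a regression model that predicts one eye's pupil position from the other's, then flag a case when the prediction error exceeds a threshold; the synthetic data, model choice, and threshold are assumptions, not the study's trained models.

```python
# Hedged sketch: flag possible strabismus when one pupil's position is poorly predicted from the other.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
left_pupils = rng.uniform(0, 1, size=(200, 2))               # normalized (x, y) of the left pupil
right_pupils = left_pupils + rng.normal(0, 0.01, (200, 2))   # aligned eyes move together (synthetic)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(left_pupils, right_pupils)                          # learn the right-from-left mapping

def strabismus_flag(left_xy, right_xy, threshold=0.05):
    predicted_right = model.predict(np.asarray(left_xy).reshape(1, -1))[0]
    discrepancy = np.linalg.norm(predicted_right - np.asarray(right_xy))
    return discrepancy > threshold                            # large discrepancy suggests misalignment

print(strabismus_flag([0.5, 0.5], [0.51, 0.50]))   # near-aligned pupils -> expected False
print(strabismus_flag([0.5, 0.5], [0.70, 0.45]))   # large offset       -> expected True
```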
Open Access Article
Bright Luminal Sign on High b-Value Diffusion-Weighted Magnetic Resonance Enterography Imaging as a New Biomarker to Predict Fibrotic Strictures in Crohn’s Disease Patients: A Retrospective Preliminary Study
by
Luca Pio Stoppino, Stefano Piscone, Ottavia Quarta Colosso, Sara Saccone, Paola Milillo, Nicola Della Valle, Rodolfo Sacco, Alfonso Reginelli, Luca Macarini and Roberta Vinci
J. Imaging 2024, 10(11), 283; https://doi.org/10.3390/jimaging10110283 - 7 Nov 2024
Abstract
A retrospective analysis was conducted to investigate how a bright luminal sign on high b-value diffusion-weighted imaging (DWI) could be considered as a new biomarker for identifying fibrotic strictures in Crohn’s disease (CD). Fibrotic strictures, due to excessive deposition of extracellular matrix following chronic inflammatory processes, can be difficult to distinguish from inflammatory strictures using endoscopy. This study was performed on 65 patients with CD who underwent MRE, and among them 32 patients showed the bright luminal sign on high b-value DWI. DWI findings were compared to pre- and post-contrast MRE data. Luminal bright sign performance results were calculated using a confusion matrix, the relationship between categorical variables was assessed by the χ2 test of independence, and the Kruskal–Wallis test (ANOVA) was used for the assessment of statistical significance of differences between groups. The results indicated a high sensitivity (90%) and specificity (85%) of the bright luminal sign for fibro-stenotic CD and a significant correlation between DWI luminal brightness and markers such as the homogeneous enhancement pattern (p < 0.001), increase in enhancement percentage from 70 s to 7 min after gadolinium injection (p < 0.001), and submucosal fat penetration (p = 0.05). These findings indicate that DWI hyperintensity can be considered as a good non-invasive indicator for the detection of severe intestinal fibrosis and may provide an efficient and accurate method for assessing fibrotic strictures. This new non-invasive biomarker could allow an early diagnosis of fibrotic stricture, delaying the onset of complications and subsequent surgery. Moreover, further evaluations through larger prospective trials with histopathological correlation are needed to confirm these results and completely determine the clinical benefits of DWI in treating CD.
Full article
(This article belongs to the Special Issue New Perspectives in Medical Image Analysis)
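The evaluation described above (confusion-matrix metrics plus a chi-squared test of independence) can be sketched in a few lines; the 2x2 counts below are placeholders, not the study's data.

```python
# Hedged sketch: sensitivity/specificity from a confusion matrix and a chi-squared test of independence.
import numpy as np
from scipy.stats import chi2_contingency

#                     fibrotic  non-fibrotic   (reference standard; placeholder counts)
confusion = np.array([[27,       3],            # bright luminal sign present
                      [ 5,      30]])           # bright luminal sign absent
tp, fn = confusion[0, 0], confusion[1, 0]
fp, tn = confusion[0, 1], confusion[1, 1]

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
chi2, p, _, _ = chi2_contingency(confusion)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, chi-squared p = {p:.4g}")
```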
Open Access Article
Considerations for a Micromirror Array Optimized for Compressive Sensing (VIS to MIR) in Space Applications
by
Ulrike Dauderstädt, Peter Dürr, Detlef Kunze, Sara Francés González, Donato Borrelli, Lorenzo Palombi, Valentina Raimondi and Michael Wagner
J. Imaging 2024, 10(11), 282; https://doi.org/10.3390/jimaging10110282 - 5 Nov 2024
Abstract
Earth observation (EO) is crucial for addressing environmental and societal challenges, but it struggles with revisit times and spatial resolution. The EU-funded SURPRISE project aims to improve EO capabilities by studying space instrumentation using compressive sensing (CS) implemented through spatial light modulators (SLMs) based on micromirror arrays (MMAs) to improve the ground sampling distance. In the SURPRISE project, we studied the development of an MMA that meets the requirements of a CS-based geostationary instrument working in the visible (VIS) and mid-infrared (MIR) spectral ranges. This paper describes the optical simulation procedure and the results obtained for analyzing the performance of such an MMA with the goal of identifying a mirror design that would allow the device to meet the optical requirements of this specific application.
Full article
News
21 November 2024
Meet Us at the IEEE Region 10 Conference 2024 (TENCON 2024), 1–4 December 2024, Marina Bay, Singapore
13 November 2024
Meet Us at the 2024 International Conference on Science and Engineering of Electronics (ICSEE 2024), 22–26 November 2024, Wuhan, China
Topics
Topic in
Applied Sciences, Electronics, J. Imaging, MAKE, Remote Sensing
Computational Intelligence in Remote Sensing: 2nd Edition
Topic Editors: Yue Wu, Kai Qin, Maoguo Gong, Qiguang Miao. Deadline: 31 December 2024
Topic in
Future Internet, Information, J. Imaging, Mathematics, Symmetry
Research on Deep Neural Networks for Video Motion Recognition
Topic Editors: Hamad Naeem, Hong Su, Amjad Alsirhani, Muhammad Shoaib Bhutta. Deadline: 31 January 2025
Topic in
Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang. Deadline: 30 March 2025
Topic in
Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo. Deadline: 30 July 2025
Special Issues
Special Issue in
J. Imaging
Learning and Optimization for Medical Imaging
Guest Editors: Simona Moldovanu, Elena Morotti. Deadline: 30 November 2024
Special Issue in
J. Imaging
Clinical and Pathological Imaging in the Era of Artificial Intelligence: New Insights and Perspectives
Guest Editors: Gerardo Cazzato, Francesca Arezzo. Deadline: 30 November 2024
Special Issue in
J. Imaging
Advances in Retinal Image Processing
Guest Editors: P. Jidesh, Vasudevan (Vengu) Lakshminarayanan. Deadline: 30 November 2024
Special Issue in
J. Imaging
Color in Image Processing and Computer Vision
Guest Editors: Alain Tremeau, Marco Buzzelli, Shoji Tominaga. Deadline: 30 November 2024