J. Imaging, Volume 8, Issue 11 (November 2022) – 19 articles

Cover Story: Clinicians managing patients with structural and congenital heart disease rely on noninvasive imaging data such as echocardiography, computed tomography and magnetic resonance imaging to plan cardiac procedures. There has been increasing interest in novel 3D imaging techniques, such as virtual reality, which enable intuitive data interaction and realistic depth perception. Measurement accuracy in such planning tools is critical; however, there is currently a paucity of such data for similar applications. In this study, we evaluate the accuracy and reliability of a VR linear measurement tool for multimodality imaging data, using industry-standard phantoms to provide ground truth.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 1271 KiB  
Article
Perception and Quantization Model for Periodic Contour Modifications
by Dmitri Presnov and Andreas Kolb
J. Imaging 2022, 8(11), 311; https://doi.org/10.3390/jimaging8110311 - 21 Nov 2022
Cited by 2 | Viewed by 1436
Abstract
Periodic, wave-like modifications of 2D shape contours are often applied to convey quantitative data via images. However, to the best of our knowledge, there has been no in-depth investigation of the perceptual uniformity and legibility of this kind of approach. In this paper, we design and perform a user study to evaluate the perception of periodic contour modifications with respect to their geometry and colour. Based on the study results, we statistically derive a perceptual model, which demonstrates a mainly linear stimulus-to-perception relationship for geometric and colour amplitude and a close-to-quadratic relationship for the respective frequencies, with a rather negligible dependency on the waveform. Furthermore, analyzing the distribution of perceived magnitudes and the overlap of the respective 50% confidence intervals, we extract distinguishable, visually equidistant quantization levels for each contour-related visual variable. Moreover, we provide initial insights into the perceptual dependency between amplitude and frequency, and propose a scheme for transferring our model to glyphs of different sizes that preserves distinguishability and visual equidistance. This work is a first step towards a comprehensive understanding of the perception of periodic contour modifications in image-based visualizations.
(This article belongs to the Special Issue Geometry Reconstruction from Images)
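A minimal sketch of the reported stimulus-to-perception relationships, assuming a linear amplitude response and a quadratic frequency response; the scaling coefficient k is a hypothetical placeholder, not a fitted value from the study. Inverting the quadratic mapping on a uniform perceptual grid yields visually equidistant frequency levels in the spirit of the derived quantization scheme:

```python
import numpy as np

# Illustrative stimulus-to-perception mappings: roughly linear in
# (geometric or colour) amplitude, close to quadratic in frequency.
# k is a hypothetical coefficient, not a value fitted in the study.

def perceived_amplitude(a, k=1.0):
    return k * a            # ~linear amplitude response

def perceived_frequency(f, k=1.0):
    return k * f ** 2       # ~quadratic frequency response

def equidistant_frequency_levels(f_min, f_max, n, k=1.0):
    """Frequencies whose *perceived* magnitudes are equally spaced:
    lay a uniform grid in perceptual space, then invert the mapping."""
    p = np.linspace(perceived_frequency(f_min, k),
                    perceived_frequency(f_max, k), n)
    return np.sqrt(p / k)

# Equal perceptual steps require smaller stimulus steps at high frequency.
print(equidistant_frequency_levels(1.0, 8.0, 5))
```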
33 pages, 2318 KiB  
Review
A Review of Synthetic Image Data and Its Use in Computer Vision
by Keith Man and Javaan Chahl
J. Imaging 2022, 8(11), 310; https://doi.org/10.3390/jimaging8110310 - 21 Nov 2022
Cited by 37 | Viewed by 6619
Abstract
The development of computer vision algorithms using convolutional neural networks and deep learning has necessitated ever greater amounts of annotated and labelled data to produce high-performance models. Large, public datasets have been instrumental in pushing forward computer vision by providing the data necessary for training. However, many computer vision applications cannot rely on general image data provided in the available public datasets to train models, instead requiring labelled image data that is not readily available in the public domain on a large scale. At the same time, acquiring such data from the real world can be difficult, costly, and labour-intensive to label in large quantities. Because of this, synthetic image data has been pushed to the forefront as a potentially faster and cheaper alternative to collecting and annotating real data. This review provides a general overview of the types of synthetic image data, categorised by synthesised output; common methods of synthesising different types of image data; existing applications and logical extensions; the performance of synthetic image data in different applications and the associated difficulties in assessing it; and areas for further research.
(This article belongs to the Topic Computer Vision and Image Processing)
23 pages, 6937 KiB  
Article
Robust Measures of Image-Registration-Derived Lung Biomechanics in SPIROMICS
by Yue Pan, Di Wang, Muhammad F. A. Chaudhary, Wei Shao, Sarah E. Gerard, Oguz C. Durumeric, Surya P. Bhatt, R. Graham Barr, Eric A. Hoffman, Joseph M. Reinhardt and Gary E. Christensen
J. Imaging 2022, 8(11), 309; https://doi.org/10.3390/jimaging8110309 - 16 Nov 2022
Cited by 2 | Viewed by 2386
Abstract
Chronic obstructive pulmonary disease (COPD) is an umbrella term for a collection of inflammatory lung diseases that cause airflow obstruction and severe damage to the lung parenchyma. This study investigated the robustness of image-registration-based local biomechanical properties of the lung in individuals with COPD as a function of Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage. Image registration was used to estimate the pointwise correspondences between the inspiration (total lung capacity) and expiration (residual volume) computed tomography (CT) images of the lung for each subject. Three biomechanical measures were computed from the correspondence map: the Jacobian determinant, the anisotropic deformation index (ADI), and the slab-rod index (SRI). CT scans from 245 subjects with varying GOLD stages were analyzed from the SubPopulations and InteRmediate Outcome Measures In COPD Study (SPIROMICS). Results show monotonically increasing or decreasing trends in the three biomechanical measures as a function of GOLD stage, for the entire lung and on a lobe-by-lobe basis. Furthermore, these trends held across all five image registration algorithms. The consistency of the five image registration algorithms on a per-individual basis is shown using Bland–Altman plots.
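As a rough illustration of the first of these measures, the sketch below computes the Jacobian determinant of a 3D displacement field with NumPy; the array `disp`, its shape, and its units are hypothetical stand-ins, not the SPIROMICS registration pipeline:

```python
import numpy as np

# Minimal sketch: Jacobian determinant of a 3D displacement field,
# the local volume-change measure named in the abstract. `disp` is a
# hypothetical (X, Y, Z, 3) array holding the displacement (in voxels)
# mapping inspiration to expiration.

def jacobian_determinant(disp):
    grads = [np.gradient(disp[..., i]) for i in range(3)]   # d(u_i)/d(x_j)
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    J += np.eye(3)                # deformation gradient = I + grad(u)
    return np.linalg.det(J)       # det > 1: expansion; det < 1: contraction

disp = np.zeros((32, 32, 32, 3))  # identity transform -> det = 1 everywhere
print(jacobian_determinant(disp).mean())
```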
34 pages, 8144 KiB  
Article
3D Pose Estimation and Tracking in Handball Actions Using a Monocular Camera
by Romeo Šajina and Marina Ivašić-Kos
J. Imaging 2022, 8(11), 308; https://doi.org/10.3390/jimaging8110308 - 10 Nov 2022
Cited by 8 | Viewed by 5831
Abstract
Player pose estimation is particularly important for sports because it provides more accurate monitoring of athlete movements and performance, recognition of player actions, analysis of techniques, and evaluation of action execution accuracy. All of these tasks are extremely demanding and challenging in sports that involve rapid movements of athletes with inconsistent speed and position changes, at varying distances from the camera and with frequent occlusions, especially in team sports with many players on the field. A prerequisite for recognizing a player's actions in video footage and comparing their poses during the execution of an action is the detection of the player's pose in each element of an action or technique. First, the player's 2D pose is determined in each video frame and converted into a 3D pose; then, using a tracking method, all the player's poses are grouped into a sequence to construct the series of elements of a particular action. Considering that action recognition and comparison depend significantly on the accuracy of the methods used to estimate and track player pose in real-world conditions, this paper provides an overview and analysis of the methods that can be used for player pose estimation and tracking with a monocular camera, along with evaluation metrics, using handball scenarios as an example. We have evaluated the applicability and robustness of 12 selected two-stage deep learning methods for 3D pose estimation on a public and a custom dataset of handball jump shots, for which they have not been trained and in which never-before-seen poses may occur. Furthermore, this paper proposes methods for retargeting and smoothing the 3D sequence of poses that have experimentally shown a performance improvement for all tested models. Additionally, we evaluated the applicability and robustness of five state-of-the-art tracking methods on a public and a custom dataset of handball training recorded with a monocular camera. The paper ends with a discussion of the shortcomings of the pose estimation and tracking methods, reflected in the problems of locating key skeletal points and in the generation of poses that are not anatomically possible, which consequently reduce the overall accuracy of action recognition.
(This article belongs to the Section Computer Vision and Pattern Recognition)
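A minimal sketch of the kind of temporal smoothing applied to 3D pose sequences; the array shape, joint count, and moving-average window are illustrative assumptions rather than the paper's exact retargeting and smoothing procedure:

```python
import numpy as np

# Minimal sketch of temporal smoothing for a 3D pose sequence. `poses`
# is a hypothetical (frames, joints, 3) array; a simple moving average
# over a small window suppresses frame-to-frame jitter.

def smooth_pose_sequence(poses, window=5):
    pad = window // 2
    padded = np.pad(poses, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="valid"), 0, padded)

poses = np.random.rand(100, 17, 3)        # 100 frames, 17 joints
print(smooth_pose_sequence(poses).shape)  # (100, 17, 3)
```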
17 pages, 3859 KiB  
Article
Periocular Data Fusion for Age and Gender Classification
by Carmen Bisogni, Lucia Cascone and Fabio Narducci
J. Imaging 2022, 8(11), 307; https://doi.org/10.3390/jimaging8110307 - 9 Nov 2022
Cited by 2 | Viewed by 1638
Abstract
In recent years, the study of soft biometrics has gained increasing interest in the security and business sectors. These characteristics provide limited biometric information about the individual; hence, it is possible to increase performance by combining numerous data sources to overcome the accuracy limitations of a single trait. In this research, we provide a study on the fusion of periocular features taken from pupils, fixations, and blinks to achieve demographic classification, i.e., by age and gender. A data fusion approach is implemented for this purpose. To build a trust evaluation of the selected biometric traits, we first employ a concatenation scheme for fusion at the feature level and, at the score level, transformation- and classifier-based score fusion approaches (e.g., weighted sum, weighted product, Bayesian rule, etc.). Data fusion enables improved performance and the synthesis of the acquired information, as well as its secure storage and the protection of the multi-biometric system's original biometric models. The combination of these soft biometric characteristics seamlessly reconciles the need to protect individual privacy with the need for a strong discriminatory element. The results are quite encouraging, with an age classification accuracy of 84.45% and a gender classification accuracy of 84.62%. They encourage further study of the periocular area for detecting soft biometrics when the lower part of the face is not visible.
(This article belongs to the Special Issue Multi-Biometric and Multi-Modal Authentication)
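As a rough sketch of the score-level fusion rules named above, the snippet below applies weighted-sum and weighted-product fusion to match scores from three periocular traits; the scores and weights are invented placeholders:

```python
import numpy as np

# Minimal sketch of transformation-based score-level fusion (weighted
# sum and weighted product), two of the rules listed in the abstract.
# All values below are illustrative placeholders.

def weighted_sum(scores, weights):
    return np.dot(weights, scores)

def weighted_product(scores, weights):
    return np.prod(np.power(scores, weights))

# Match scores (in [0, 1]) from three periocular traits:
# pupil, fixation and blink features.
scores = np.array([0.72, 0.64, 0.81])
weights = np.array([0.5, 0.2, 0.3])       # weights sum to 1

print(weighted_sum(scores, weights))      # fused score, sum rule
print(weighted_product(scores, weights))  # fused score, product rule
```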
27 pages, 10384 KiB  
Article
Analysis of Thermal Imaging Performance under Extreme Foggy Conditions: Applications to Autonomous Driving
by Josué Manuel Rivera Velázquez, Louahdi Khoudour, Guillaume Saint Pierre, Pierre Duthon, Sébastien Liandrat, Frédéric Bernardin, Sharon Fiss, Igor Ivanov and Raz Peleg
J. Imaging 2022, 8(11), 306; https://doi.org/10.3390/jimaging8110306 - 9 Nov 2022
Cited by 10 | Viewed by 3415
Abstract
Object detection is recognized as one of the most critical research areas for the perception of self-driving cars. Current vision systems combine visible imaging, LIDAR, and/or RADAR technology, allowing perception of the vehicle's surroundings. However, harsh weather conditions degrade the performance of these systems. Under these circumstances, thermal imaging becomes a complementary solution to current systems, not only because it makes it possible to detect and recognize the environment in the most extreme conditions, but also because thermal images are compatible with detection and recognition algorithms, such as those based on artificial neural networks. In this paper, an analysis of the resilience of thermal sensors in very unfavorable fog conditions is presented. The goal was to study the operational limits, i.e., the very degraded fog situation beyond which a thermal camera becomes unreliable. For the analysis, the mean pixel intensity and the contrast were used as indicators. Results showed that the angle of view (AOV) of a thermal camera is a determining parameter for object detection in foggy conditions. Additionally, results show that cameras with AOVs of 18° and 30° are suitable for object detection, even under thick fog conditions (from a 13 m meteorological optical range). These results were extended using object detection software, which showed that, for pedestrians, a detection rate of ≥90% was achieved using the images from the 18° and 30° cameras.
(This article belongs to the Section Computer Vision and Pattern Recognition)
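A minimal sketch of the two image indicators used in the analysis, computed here as the mean pixel intensity and RMS contrast of a frame; RMS contrast is one common definition and may differ from the paper's exact contrast formula:

```python
import numpy as np

# Minimal sketch of the two indicators: mean pixel intensity and
# contrast. RMS contrast (std of intensities) is used here as one
# common contrast definition; the paper's formula may differ.

def mean_intensity(img):
    return float(img.mean())

def rms_contrast(img):
    return float(img.std())

thermal = np.random.rand(480, 640)    # placeholder thermal frame in [0, 1]
print(mean_intensity(thermal), rms_contrast(thermal))
```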
12 pages, 1925 KiB  
Article
Towards a Low-Cost Monitor-Based Augmented Reality Training Platform for At-Home Ultrasound Skill Development
by Marine Y. Shao, Tamara Vagg, Matthias Seibold and Mitchell Doughty
J. Imaging 2022, 8(11), 305; https://doi.org/10.3390/jimaging8110305 - 9 Nov 2022
Cited by 5 | Viewed by 2260
Abstract
Ultrasound education traditionally involves theoretical and practical training on patients or on simulators; however, difficulty accessing training equipment during the COVID-19 pandemic has highlighted the need for home-based training systems. Due to the prohibitive cost of ultrasound probes, few medical students have access to the equipment required for at-home training. Our proof-of-concept study focused on the development and assessment of the technical feasibility and training performance of an at-home training solution to teach the basics of interpreting and generating ultrasound data. The training solution relies on monitor-based augmented reality for displaying virtual content and requires only a marker printed on paper and a computer with a webcam. From the input webcam video, we performed body pose estimation to track the student's limbs and used surface tracking of printed fiducials to track the position of a simulated ultrasound probe. The novelty of our work lies in its combination of printed markers with marker-free body pose tracking. In a small user study, four ultrasound lecturers evaluated the training quality with a questionnaire and indicated the potential of our system. The strength of our method is that it allows students to learn the manipulation of an ultrasound probe through the simulated probe combined with the tracking system, and to learn how to read ultrasounds in B-mode and Doppler mode.
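One plausible way to implement printed-fiducial tracking of this kind is with OpenCV's ArUco markers; the sketch below (assuming opencv-python ≥ 4.7, whose `cv2.aruco.ArucoDetector` API is used) detects markers in a single webcam frame. The marker dictionary and camera index are assumptions, and the paper's actual fiducial system may differ:

```python
import cv2

# Minimal sketch of printed-fiducial tracking with OpenCV ArUco
# markers, one plausible stand-in for the probe tracking described.
# Requires opencv-python >= 4.7 for the ArucoDetector class.

cap = cv2.VideoCapture(0)                      # webcam input (index assumed)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

ret, frame = cap.read()
if ret:
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        print("probe marker(s) found:", ids.ravel())
cap.release()
```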
11 pages, 1600 KiB  
Article
Evaluation of a Linear Measurement Tool in Virtual Reality for Assessment of Multimodality Imaging Data—A Phantom Study
by Natasha Stephenson, Kuberan Pushparajah, Gavin Wheeler, Shujie Deng, Julia A. Schnabel and John M. Simpson
J. Imaging 2022, 8(11), 304; https://doi.org/10.3390/jimaging8110304 - 8 Nov 2022
Viewed by 1903
Abstract
This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms, both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by the intraclass correlation coefficient (ICC) and the coefficient of variation (CV). Measurement accuracy was analysed using Bland–Altman plots and the mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. The MAPE for VR measurements compared to ground truth was 1.6%, 1.6% and 7.7% in the MRI, CT and 3DE datasets, respectively. Bland–Altman analysis demonstrated no systematic measurement bias in CT or MRI data in VR compared to ground truth. A small bias toward smaller measurements in 3DE data was seen in both VR (mean −0.52 mm [−0.16 to −0.88]) and the standard platform (mean −0.22 mm [−0.03 to −0.40]) when compared to ground truth. Limits of agreement for measurements across all modalities were similar in VR and the standard software. This study has shown good measurement accuracy and reliability of VR in CT and MRI data, with a higher MAPE for 3DE data. This may relate to the overall smaller measurement dimensions within the 3DE phantom. Further evaluation is required for all modalities for the assessment of measurements <10 mm.
(This article belongs to the Topic Extended Reality (XR): AR, VR, MR and Beyond)
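A minimal sketch of the two agreement statistics reported: MAPE against ground truth, and Bland–Altman bias with 95% limits of agreement. The measurement values below are invented placeholders, not data from the study:

```python
import numpy as np

# Minimal sketch: mean absolute percentage error against ground truth,
# and Bland-Altman bias with 95% limits of agreement (bias +/- 1.96*SD).

def mape(measured, truth):
    return 100 * np.mean(np.abs((measured - truth) / truth))

def bland_altman(a, b):
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

truth = np.array([10.0, 20.0, 30.0, 40.0])   # phantom ground truth (mm)
vr = np.array([10.2, 19.6, 30.5, 39.4])      # placeholder VR measurements

print(f"MAPE: {mape(vr, truth):.2f}%")
print("bias, lower LoA, upper LoA:", bland_altman(vr, truth))
```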
22 pages, 991 KiB  
Review
Harmonization Strategies in Multicenter MRI-Based Radiomics
by Elisavet Stamoulou, Constantinos Spanakis, Georgios C. Manikis, Georgia Karanasiou, Grigoris Grigoriadis, Theodoros Foukakis, Manolis Tsiknakis, Dimitrios I. Fotiadis and Kostas Marias
J. Imaging 2022, 8(11), 303; https://doi.org/10.3390/jimaging8110303 - 7 Nov 2022
Cited by 26 | Viewed by 4199
Abstract
Radiomics analysis is a powerful tool aiming to provide diagnostic and prognostic patient information directly from images that are decoded into handcrafted features, comprising descriptors of shape, size and textural patterns. Although radiomics is gaining momentum because it holds great promise for accelerating digital diagnostics, it is susceptible to bias and variation due to numerous inter-patient factors (e.g., patient age and gender) as well as inter-scanner ones (different acquisition protocols depending on the scanning center). A variety of image- and feature-based harmonization methods have been developed to compensate for these effects; however, to the best of our knowledge, none of these techniques has so far been established as the most effective in the analysis pipeline. To this end, this review provides an overview of the challenges in optimizing radiomics analysis, and a concise summary of the most relevant harmonization techniques, aiming to provide a thorough guide to the radiomics harmonization process.
(This article belongs to the Special Issue Radiomics and Texture Analysis in Medical Imaging)
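As a rough illustration of the simplest feature-level strategy in this family, the sketch below z-scores radiomic features per acquisition center; this is an illustrative baseline only, and the review covers more elaborate methods (e.g., ComBat). Array shapes and site labels are hypothetical:

```python
import numpy as np

# Minimal sketch of per-center z-scoring, a simple feature-level
# harmonization baseline for multicenter radiomics.

def zscore_per_center(features, centers):
    """features: (n_samples, n_features); centers: (n_samples,) labels."""
    out = np.empty_like(features, dtype=float)
    for c in np.unique(centers):
        idx = centers == c
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0) + 1e-12   # avoid division by zero
        out[idx] = (features[idx] - mu) / sd
    return out

X = np.random.rand(6, 4)                         # toy radiomic features
sites = np.array(["A", "A", "A", "B", "B", "B"]) # hypothetical center labels
print(zscore_per_center(X, sites).mean(axis=0))  # ~0 per feature
```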
13 pages, 8871 KiB  
Article
HAPPY: Hip Arthroscopy Portal Placement Using Augmented Reality
by Tianyu Song, Michael Sommersperger, The Anh Baran, Matthias Seibold and Nassir Navab
J. Imaging 2022, 8(11), 302; https://doi.org/10.3390/jimaging8110302 - 6 Nov 2022
Cited by 4 | Viewed by 2307
Abstract
Correct positioning of the endoscope is crucial for successful hip arthroscopy. Only with adequate alignment can the anatomical target area be visualized and the procedure be successfully performed. Conventionally, surgeons rely on anatomical landmarks, such as bone structure, and on intraoperative X-ray imaging to correctly place the surgical trocar and insert the endoscope to gain access to the surgical site. One factor complicating the placement is deformable soft tissue, as it can obscure important anatomical landmarks. In addition, the commonly used endoscopes with an angled camera complicate hand–eye coordination and, thus, navigation to the target area. Adjusting for an incorrectly positioned endoscope prolongs surgery time, requires a further incision and increases the radiation exposure as well as the risk of infection. In this work, we propose an augmented reality system to support endoscope placement during arthroscopy. Our method comprises the augmentation of a tracked endoscope with a virtual frustum to indicate the reachable working volume. This is further combined with an in situ visualization of the patient anatomy to improve perception of the target area. For this purpose, we highlight the anatomy that is visible in the endoscopic camera frustum and use an automatic colorization method to improve spatial perception. Our system was implemented and visualized on a head-mounted display. The results of our user study indicate the benefit of the proposed system compared to baseline positioning without additional support: increased alignment speed, reduced positioning error and reduced mental effort. The proposed approach might aid in the positioning of an angled endoscope, and may result in better access to the surgical area, reduced surgery time, less patient trauma, and less X-ray exposure during surgery.
14 pages, 2660 KiB  
Article
Z2-γ: An Application of Zienkiewicz-Zhu Error Estimator to Brain Tumor Detection in MR Images
by Antonella Falini
J. Imaging 2022, 8(11), 301; https://doi.org/10.3390/jimaging8110301 - 5 Nov 2022
Viewed by 1591
Abstract
Brain tumors are abnormal cell growths in brain tissue that may or may not be cancerous. In either case, they can be a very aggressive disease that should be detected as early as possible. Usually, magnetic resonance imaging (MRI) is the main tool adopted by neurologists and radiologists to identify and classify any possible anomalies present in the brain anatomy. In the present work, an automatic unsupervised method called Z2-γ, based on the use of adaptive finite elements and suitable pre-processing and post-processing techniques, is introduced. The adaptive process, driven by a Zienkiewicz-Zhu type error estimator (Z2), is carried out on isotropic triangulations, while the given input images are pre-processed via nonlinear transformations (γ corrections) to enhance the ability of the error estimator to detect any relevant anomaly. The proposed methodology is able to automatically classify whether a given MR image represents a healthy or a diseased brain and, in the latter case, is able to locate the tumor area, which can be easily delineated by removing any redundancy with post-processing techniques based on morphological transformations. The method is tested on a freely available dataset, achieving an accuracy of 0.846 and an F1 score of 0.88.
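A minimal sketch of the γ-correction pre-processing step that the method's name refers to: a nonlinear intensity transform applied to the normalized MR image before the adaptive finite-element pass. The γ value here is an arbitrary placeholder, not the paper's tuned setting:

```python
import numpy as np

# Minimal sketch of gamma correction as a pre-processing transform.
# gamma is an arbitrary placeholder value.

def gamma_correct(img, gamma=0.5):
    """img: float array scaled to [0, 1]."""
    return np.power(img, gamma)

mri = np.random.rand(256, 256)             # placeholder MR slice in [0, 1]
enhanced = gamma_correct(mri, gamma=0.5)   # gamma < 1 brightens dark regions
print(enhanced.min(), enhanced.max())
```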
19 pages, 2986 KiB  
Review
Role of Cardiovascular Magnetic Resonance in the Management of Atrial Fibrillation: A Review
by Davide Tore, Riccardo Faletti, Andrea Biondo, Andrea Carisio, Fabio Giorgino, Ilenia Landolfi, Katia Rocco, Sara Salto, Ambra Santonocito, Federica Ullo, Matteo Anselmino, Paolo Fonio and Marco Gatti
J. Imaging 2022, 8(11), 300; https://doi.org/10.3390/jimaging8110300 - 4 Nov 2022
Cited by 5 | Viewed by 2658
Abstract
Atrial fibrillation (AF) is the most common arrhythmia, and its prevalence is growing with time. Since the introduction of catheter ablation procedures for the treatment of AF, cardiovascular magnetic resonance (CMR) has had an increasingly important role in the treatment of this pathology, both in clinical practice and as a research tool providing insight into the arrhythmic substrate. The most common applications of CMR for AF catheter ablation are the angiographic study of the pulmonary veins, the sizing of the left atrium (LA), and the evaluation of the left atrial appendage (LAA) for stroke risk assessment. Moreover, CMR may provide useful information about the anatomical relationship of the esophagus to the LA to prevent thermal injuries during ablation procedures. The use of late gadolinium enhancement (LGE) imaging allows evaluation of the burden of atrial fibrosis before the ablation procedure and assessment of procedurally induced scarring. Recently, the possibility of assessing atrial function, strain, and the burden of cardiac adipose tissue with CMR has provided further elements for risk stratification and clinical decision making in the setting of catheter ablation planning for AF. The purpose of this review is to provide a comprehensive overview of the potential applications of CMR in the workup of ablation procedures for atrial fibrillation.
(This article belongs to the Section Medical Imaging)
10 pages, 2001 KiB  
Article
Putamen Atrophy Is a Possible Clinical Evaluation Index for Parkinson’s Disease Using Human Brain Magnetic Resonance Imaging
by Keisuke Kinoshita, Takehito Kuge, Yoshie Hara and Kojiro Mekata
J. Imaging 2022, 8(11), 299; https://doi.org/10.3390/jimaging8110299 - 2 Nov 2022
Cited by 2 | Viewed by 4184
Abstract
Parkinson’s disease is characterized by motor dysfunction caused by functional deterioration of the substantia nigra. Lower putamen volume (i.e., putamen atrophy) may be an important clinical indicator of motor dysfunction and neurological symptoms, such as autonomic dysfunction, in patients with Parkinson’s disease. We proposed and applied a new evaluation method for putamen volume measurement on 31 high-resolution T2-weighted magnetic resonance images from 16 patients with Parkinson’s disease (age, 80.3 ± 7.30 years; seven men, nine women) and 30 such images from 19 control participants (age, 75.1 ± 7.85 years; eleven men, eight women). Putamen atrophy was expressed using a ratio based on the thalamus. The obtained values were used to assess differences between the groups using the Wilcoxon rank-sum test. The intraclass correlation coefficient showed sufficient intra-rater reliability and validity of this method. The Parkinson’s disease group had a significantly lower mean change ratio in the putamen (0.633) than the control group (0.719), suggesting that putamen atrophy may be identified using two-dimensional images. The evaluation method presented in this study may indicate the appearance of motor dysfunction and cognitive decline and could serve as a clinical evaluation index for Parkinson’s disease.
(This article belongs to the Topic Medical Image Analysis)
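As a rough sketch of the statistical comparison described, the snippet below expresses putamen size as a ratio to the thalamus and compares groups with the Wilcoxon rank-sum test via SciPy; all measurement values are invented placeholders, not study data:

```python
import numpy as np
from scipy.stats import ranksums

# Minimal sketch: putamen size as a ratio to the thalamus, compared
# between groups with the Wilcoxon rank-sum test. Values are invented.

def putamen_ratio(putamen_area, thalamus_area):
    return putamen_area / thalamus_area

pd_ratios = np.array([0.61, 0.65, 0.60, 0.67, 0.63])    # Parkinson's group
ctrl_ratios = np.array([0.70, 0.74, 0.69, 0.73, 0.72])  # control group

stat, p = ranksums(pd_ratios, ctrl_ratios)
print(f"rank-sum statistic={stat:.3f}, p={p:.4f}")
```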
15 pages, 420 KiB  
Article
A Multimodal Ensemble Driven by Multiobjective Optimisation to Predict Overall Survival in Non-Small-Cell Lung Cancer
by Camillo Maria Caruso, Valerio Guarrasi, Ermanno Cordelli, Rosa Sicilia, Silvia Gentile, Laura Messina, Michele Fiore, Claudia Piccolo, Bruno Beomonte Zobel, Giulio Iannello, Sara Ramella and Paolo Soda
J. Imaging 2022, 8(11), 298; https://doi.org/10.3390/jimaging8110298 - 2 Nov 2022
Cited by 7 | Viewed by 2466
Abstract
Lung cancer accounts for more deaths worldwide than any other cancer. In order to provide patients with the most effective treatment for these aggressive tumours, multimodal learning is emerging as a new and promising field of research that aims to extract complementary information from the data of different modalities for prognostic and predictive purposes. This knowledge could be used to optimise current treatments and maximise their effectiveness. To predict overall survival, in this work we investigate the use of multimodal learning on the CLARO dataset, which includes CT images and clinical data collected from a cohort of non-small-cell lung cancer patients. Our method allows the identification of the optimal set of classifiers to be included in the ensemble in a late fusion approach. Specifically, after training unimodal models on each modality, it selects the best ensemble by solving a multiobjective optimisation problem that maximises both the recognition performance and the diversity of the predictions. In the ensemble, the label of each sample is assigned using the majority voting rule. As further validation, we show that the proposed ensemble outperforms the models learning a single modality, obtaining state-of-the-art results on the task at hand.
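A minimal sketch of the majority voting rule used for late fusion: each selected unimodal classifier votes a label per sample, and the most frequent label wins. The predictions below are illustrative placeholders:

```python
import numpy as np

# Minimal sketch of majority voting over an ensemble of classifiers.

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) integer labels."""
    votes = np.apply_along_axis(np.bincount, 0, predictions,
                                minlength=predictions.max() + 1)
    return votes.argmax(axis=0)   # per-sample majority label

# Three classifiers (e.g., image-based and clinical-data-based models),
# five patients, binary survival labels.
preds = np.array([[1, 0, 1, 1, 0],
                  [1, 1, 1, 0, 0],
                  [0, 0, 1, 1, 1]])
print(majority_vote(preds))       # -> [1 0 1 1 0]
```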
16 pages, 2038 KiB  
Article
Prognostic Value of Bone Marrow Uptake Using 18F-FDG PET/CT Scans in Solid Neoplasms
by Francisco Tustumi, David Gutiérrez Albenda, Fernando Simionato Perrotta, Rubens Antonio Aissar Sallum, Ulysses Ribeiro Junior, Carlos Alberto Buchpiguel and Paulo Schiavom Duarte
J. Imaging 2022, 8(11), 297; https://doi.org/10.3390/jimaging8110297 - 31 Oct 2022
Viewed by 2723
Abstract
Background: Fluorine-18-fluorodeoxyglucose positron emission tomography/computerized tomography (18F-FDG PET/CT) uptake is known to increase in infective and inflammatory conditions. Systemic inflammation plays a role in oncologic prognosis. Consequently, increased bone marrow uptake in oncology patients could potentially depict the systemic cancer burden. Methods: A single-institution cohort analysis and a systematic review were performed, evaluating the prognostic role of 18F-FDG uptake in the bone marrow in solid neoplasms before treatment. The cohort included 113 esophageal cancer patients (adenocarcinoma or squamous cell carcinoma). The systematic review was based on 18 studies evaluating solid neoplasms, including gynecological, lung, pleural, breast, pancreatic, head and neck, esophageal, gastric, colorectal, and anal cancers. Results: Bone marrow 18F-FDG uptake in esophageal cancer was not correlated with staging, pathological response, or survival. High bone marrow uptake was related to advanced staging in colorectal, head and neck, and breast cancer, but not in lung cancer. Bone marrow 18F-FDG uptake was significantly associated with survival rates for lung, head and neck, breast, gastric, colorectal, pancreatic, and gynecological neoplasms, but was not significantly associated with survival in pediatric neuroblastoma and esophageal cancer. Conclusion: 18F-FDG bone marrow uptake on PET/CT has prognostic value in several solid neoplasms, including lung, gastric, colorectal, head and neck, breast, pancreatic, and gynecological cancers. However, future studies are still needed to define the role of bone marrow uptake in cancer prognostication.
11 pages, 2168 KiB  
Article
Iodine-123 β-methyl-P-iodophenyl-pentadecanoic Acid (123I-BMIPP) Myocardial Scintigraphy for Breast Cancer Patients and Possible Early Signs of Cancer-Therapeutics-Related Cardiac Dysfunction (CTRCD)
by Yuko Harada, Kyosuke Shimada, Satoshi John Harada, Tomomi Sato, Yukino Kubota and Miyoko Yamashita
J. Imaging 2022, 8(11), 296; https://doi.org/10.3390/jimaging8110296 - 29 Oct 2022
Cited by 3 | Viewed by 2154
Abstract
(1) Background: The mortality of breast cancer has decreased due to the advancement of cancer therapies. However, more patients are suffering from cancer-therapeutics-related cardiac dysfunction (CTRCD). Diagnostic and treatment guidelines for CTRCD have not been fully established yet. Ultrasound cardiogram (UCG) is the gold standard for the diagnosis of CTRCD, but many breast cancer patients cannot undergo UCG due to surgical wounds or for anatomical reasons. The purpose of this study is to evaluate the usefulness of myocardial scintigraphy using iodine-123 β-methyl-P-iodophenyl-pentadecanoic acid (123I-BMIPP) in comparison with UCG. (2) Methods: 100 breast cancer patients who had received chemotherapy within 3 years underwent thallium (201Tl) myocardial perfusion and 123I-BMIPP myocardial metabolism scintigraphy. The images were visually evaluated by doctors and radiological technologists, and the grade of uptake reduction was scored by Heart Risk View-S software (Nihon Medi-Physics). The scores were deployed in a 17-segment model of the heart, and their distribution was analyzed. (3) Results: Nine patients (9%) could not undergo UCG. No correlation was found between left ventricular ejection fraction (LVEF) and the Heart Risk View-S scores of 201Tl myocardial perfusion scintigraphy or those of BMIPP myocardial metabolism scintigraphy. In the 17-segment model of the heart, the scores of the middle rings were higher than those of the basal ring. (4) Conclusions: Evaluation by UCG is not possible for some patients. Myocardial scintigraphy cannot serve as a perfect alternative to UCG; however, it may become a preferable second-choice screening test, as it can point to early signs of CTRCD.
(This article belongs to the Topic Medical Image Analysis)
18 pages, 7441 KiB  
Article
4-Band Multispectral Images Demosaicking Combining LMMSE and Adaptive Kernel Regression Methods
by Norbert Hounsou, Amadou T. Sanda Mahama and Pierre Gouton
J. Imaging 2022, 8(11), 295; https://doi.org/10.3390/jimaging8110295 - 25 Oct 2022
Viewed by 1999
Abstract
In recent years, multispectral imaging systems have been expanding considerably, along with a variety of multispectral demosaicking algorithms. The most crucial task is setting up an optimal multispectral demosaicking algorithm in order to reconstruct the image with minimal error from the raw image of a single sensor. In this paper, we present a four-band multispectral filter array (MSFA) with a dominant blue band, and a multispectral demosaicking algorithm that combines the linear minimum mean square error (LMMSE) and adaptive kernel regression methods. To estimate the missing blue band values, we use the LMMSE algorithm; for the other spectral bands, we use a directional gradient method, which relies on the estimated blue band. Adaptive kernel regression is then applied to update each spectral band without persistent artifacts. The experimental results demonstrate that our proposed method outperforms other existing approaches both visually and quantitatively in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and root mean square error (RMSE).
(This article belongs to the Topic Computer Vision and Image Processing)
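A minimal sketch of the sampling problem a demosaicking algorithm must solve: a 4-band MSFA records one band per pixel, so each band must later be estimated at the pixels where it was not sampled. The 4×4 tile below, with band 0 (blue) on a dominant checkerboard, is an illustrative assumption, not the authors' exact layout:

```python
import numpy as np

# Minimal sketch of MSFA sampling. Band 0 (blue) dominates on a
# checkerboard; bands 1-3 share the remaining pixels. This tile is an
# illustrative assumption about the layout.

PATTERN = np.array([[0, 1, 0, 2],
                    [3, 0, 1, 0],
                    [0, 2, 0, 3],
                    [1, 0, 2, 0]])

def band_mask(shape, band):
    """Boolean mask of the pixels where `band` was actually sampled."""
    tiled = np.tile(PATTERN, (shape[0] // 4, shape[1] // 4))
    return tiled == band

raw = np.random.rand(8, 8)                  # raw single-sensor mosaic
blue = np.where(band_mask(raw.shape, 0), raw, np.nan)
print(np.isnan(blue).mean())                # 0.5 of blue pixels to estimate
```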
18 pages, 3434 KiB  
Article
Hybrid of Deep Learning and Word Embedding in Generating Captions: Image-Captioning Solution for Geological Rock Images
by Agus Nursikuwagus, Rinaldi Munir and Masayu Leylia Khodra
J. Imaging 2022, 8(11), 294; https://doi.org/10.3390/jimaging8110294 - 22 Oct 2022
Cited by 2 | Viewed by 3091
Abstract
Captioning is the process of assembling a description for an image. Previous research on captioning has usually focused on foreground objects. In captioning concepts, there are two main objects for discussion: the background object and the foreground object. In contrast to previous image-captioning research, generating captions from geological images of rocks focuses more on the background of the images. This study proposes an image-captioning model constructed from a convolutional neural network (CNN), long short-term memory (LSTM), and word2vec to generate words from the image, with a dense output of 256 units. To produce properly grammatical output, the sequence of predicted words is reconstructed into a sentence by the beam search algorithm with K = 3. The pre-trained baseline model VGG16 and our proposed CNN-A, CNN-B, CNN-C, and CNN-D models were evaluated using N-gram BLEU scores. The BLEU-1 scores achieved with these models were 0.5515, 0.6463, 0.7012, 0.7620, and 0.5620, respectively. BLEU-2 scores were 0.6048, 0.6507, 0.7083, 0.8756, and 0.6578, respectively. BLEU-3 scores were 0.6414, 0.6892, 0.7312, 0.8861, and 0.7307, respectively. Finally, BLEU-4 scores were 0.6526, 0.6504, 0.7345, 0.8250, and 0.7537, respectively. Our CNN-C model outperformed the other models, especially the baseline model. Furthermore, there are several future challenges in studying captions, such as geological sentence structure, geological sentence phrases, and constructing words with a geological tagger.
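As a rough sketch of the N-gram BLEU evaluation, the snippet below scores a candidate caption against a reference with NLTK; the rock-description tokens are invented placeholders, and smoothing is added so short sentences do not zero out higher-order scores:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Minimal sketch of N-gram BLEU scoring for a generated caption.
# Reference and candidate tokens are invented placeholders.

reference = [["dark", "gray", "basalt", "with", "fine", "grained", "texture"]]
candidate = ["gray", "basalt", "with", "fine", "texture"]

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))   # uniform n-gram weights
    score = sentence_bleu(reference, candidate, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.4f}")
```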
22 pages, 10011 KiB  
Article
CNN-Based Classification for Highly Similar Vehicle Model Using Multi-Task Learning
by Donny Avianto, Agus Harjoko and Afiahayati
J. Imaging 2022, 8(11), 293; https://doi.org/10.3390/jimaging8110293 - 22 Oct 2022
Cited by 7 | Viewed by 3440
Abstract
Vehicle make and model classification is crucial to the operation of an intelligent transportation system (ITS). Fine-grained vehicle information such as make and model can help officers uncover cases of traffic violations when license plate information cannot be obtained. Various techniques have been developed to perform vehicle make and model classification. However, it is very hard to identify the make and model of vehicles with highly similar visual appearances, and a classifier has much potential for mistakes when vehicles look very similar but have different models and manufacturers. To solve this problem, a fine-grained classifier based on convolutional neural networks with a multi-task learning approach is proposed in this paper. The proposed method takes a vehicle image as input and extracts features using the VGG-16 architecture. The extracted features are then sent to two different branches, with one branch used to classify the vehicle model and the other to classify the vehicle make. The performance of the proposed method was evaluated using the InaV-Dash dataset, which contains Indonesian vehicle models with highly similar visual appearances. The experimental results show that the proposed method achieves 98.73% accuracy for vehicle make and 97.69% accuracy for vehicle model. Our study also demonstrates that the proposed method improves on the performance of the baseline method for highly similar vehicle classification problems.
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
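A minimal sketch of the two-branch multi-task design described above, with a shared VGG-16 backbone feeding separate make and model heads in Keras; the class counts, head sizes, and optimizer are illustrative placeholders, not the paper's training configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal sketch of a two-branch multi-task classifier: a shared
# VGG-16 feature extractor with separate make and model heads.
# NUM_MAKES, NUM_MODELS and head sizes are placeholders.

NUM_MAKES, NUM_MODELS = 20, 100

base = tf.keras.applications.VGG16(include_top=False, weights=None,
                                   input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base.output)

make_head = layers.Dense(256, activation="relu")(x)
make_out = layers.Dense(NUM_MAKES, activation="softmax", name="make")(make_head)

model_head = layers.Dense(256, activation="relu")(x)
model_out = layers.Dense(NUM_MODELS, activation="softmax", name="model")(model_head)

mtl = tf.keras.Model(base.input, [make_out, model_out])
mtl.compile(optimizer="adam",
            loss={"make": "sparse_categorical_crossentropy",
                  "model": "sparse_categorical_crossentropy"})
mtl.summary()
```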