
J. Imaging, Volume 9, Issue 8 (August 2023) – 17 articles

Cover Story: The k-space sampling trajectory heavily affects the performance of compressed sensing image reconstruction models, yet knowledge of the relative performance of the well-known compressed sensing models with three-dimensional radial trajectories is lacking. Here, we utilize radial three-dimensional T1 mapping to compare the performance of total variation, locally low-rank, and Huber penalty function approaches. A graphics processing unit implementation of a preconditioned primal-dual proximal splitting algorithm is used to solve the large-scale optimization problems in feasible time. Spatial total variation combined with locally low rank was the best-performing model; however, the differences between the models are not necessarily large. All of the source code is freely available.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 5524 KiB  
Article
Content-Based Image Retrieval for Traditional Indonesian Woven Fabric Images Using a Modified Convolutional Neural Network Method
by Silvester Tena, Rudy Hartanto and Igi Ardiyanto
J. Imaging 2023, 9(8), 165; https://doi.org/10.3390/jimaging9080165 - 18 Aug 2023
Cited by 3 | Viewed by 2909
Abstract
A content-based image retrieval system, as an Indonesian traditional woven fabric knowledge base, can be useful for artisans and trade promotions. However, creating an effective and efficient retrieval system is difficult due to the lack of an Indonesian traditional woven fabric dataset and because existing methods do not consider the fabrics' unique characteristics simultaneously. One type of traditional Indonesian fabric is ikat woven fabric. Thus, this study collected images of this traditional Indonesian woven fabric to create the TenunIkatNet dataset. The dataset consists of 120 classes and 4800 images. The images were captured perpendicularly, and the ikat woven fabrics were placed on different backgrounds, hung, and worn on the body, according to the utilization patterns. The feature extraction method using a modified convolutional neural network (MCNN) learns the unique features of Indonesian traditional woven fabrics. The experimental results show that the modified CNN model outperforms other pretrained CNN models (i.e., ResNet101, VGG16, DenseNet201, InceptionV3, MobileNetV2, Xception, and InceptionResNetV2) in top-5, top-10, top-20, and top-50 accuracies with scores of 99.96%, 99.88%, 99.50%, and 97.60%, respectively. Full article
(This article belongs to the Special Issue Advances in Image Analysis: Shapes, Textures and Multifractals)
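The top-k accuracies reported above are computed by checking whether any of a query's k nearest database images share its class. Below is a minimal sketch of that metric using cosine-similarity feature matching; the feature extractor itself (e.g., the paper's MCNN) is assumed to be external, and the data in the usage are toy placeholders:

```python
import numpy as np

def top_k_accuracy(query_feats, query_labels, db_feats, db_labels, k):
    """Fraction of queries whose k nearest database entries (by cosine
    similarity) contain at least one item of the correct class."""
    # L2-normalise so that the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    d = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = q @ d.T                               # (n_queries, n_db)
    top_k = np.argsort(-sims, axis=1)[:, :k]     # indices of the k best matches
    hits = [(db_labels[idx] == lbl).any()
            for idx, lbl in zip(top_k, query_labels)]
    return float(np.mean(hits))
```

With precomputed embeddings, `top_k_accuracy(q, ql, db, dbl, 5)` would reproduce a top-5 score of the kind reported in the abstract.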
11 pages, 2091 KiB  
Article
Accuracy of Intra-Oral Radiography and Cone Beam Computed Tomography in the Diagnosis of Buccal Bone Loss
by Véronique Christiaens, Ruben Pauwels, Bassant Mowafey and Reinhilde Jacobs
J. Imaging 2023, 9(8), 164; https://doi.org/10.3390/jimaging9080164 - 17 Aug 2023
Cited by 1 | Viewed by 1980
Abstract
Background: The use of cone beam computed tomography (CBCT) in dentistry started in the maxillofacial field, where it was used for complex and comprehensive treatment planning. Due to the use of reduced radiation dose compared to a computed tomography (CT) scan, CBCT has become a frequently used diagnostic tool in dental practice. However, published data on the accuracy of CBCT in the diagnosis of buccal bone level is lacking. The aim of this study was to compare the accuracy of intra-oral radiography (IOR) and CBCT in the diagnosis of the extent of buccal bone loss. Methods: A dry skull was used to create a buccal bone defect at the most coronal level of a first premolar; the defect was enlarged apically in steps of 1 mm. After each step, IOR and CBCT were taken. Based on the CBCT data, two observers jointly selected three axial slices at different levels of the buccal bone, as well as one transverse slice. Six dentists participated in the radiographic observations. First, all observers received the 10 intra-oral radiographs, and each observer was asked to rank the intra-oral radiographs on the extent of the buccal bone defect. Afterwards, the procedure was repeated with the CBCT scans based on a combination of axial and transverse information. For the second part of the study, each observer was asked to evaluate the axial and transverse CBCT slices on the presence or absence of a buccal bone defect. Results: The percentage of buccal bone defect progression rankings that were within 1 of the true rank was 32% for IOR and 42% for CBCT. On average, kappa values increased by 0.384 for CBCT compared to intra-oral radiography. The overall sensitivity and specificity of CBCT in the diagnosis of the presence or absence of a buccal bone defect was 0.89 and 0.85, respectively. The average area under the curve (AUC) of the receiver operating curve (ROC) was 0.892 for all observers. 
Conclusion: When CBCT images are available for justified indications, other than bone level assessment, such 3D images are more accurate and thus preferred to 2D images to assess periodontal buccal bone. For other clinical applications, intra-oral radiography remains the standard method for radiographic evaluation. Full article
(This article belongs to the Section Medical Imaging)
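The sensitivity, specificity, and kappa figures quoted above follow from standard confusion-matrix arithmetic. A minimal sketch (the counts in the usage are hypothetical placeholders, not the study's data):

```python
def binary_diagnostic_stats(tp, fp, tn, fn):
    """Sensitivity, specificity, and Cohen's kappa from confusion counts."""
    n = tp + fp + tn + fn
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    observed = (tp + tn) / n              # raw agreement
    # chance agreement: product of marginal probabilities, summed per class
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((tn + fn) / n) * ((tn + fp) / n)
    expected = p_yes + p_no
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa
```

For example, `binary_diagnostic_stats(tp=8, fp=3, tn=17, fn=2)` yields a sensitivity of 0.80 and a specificity of 0.85.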
15 pages, 3172 KiB  
Article
MRI-Based Effective Ensemble Frameworks for Predicting Human Brain Tumor
by Farhana Khan, Shahnawaz Ayoub, Yonis Gulzar, Muneer Majid, Faheem Ahmad Reegu, Mohammad Shuaib Mir, Arjumand Bano Soomro and Osman Elwasila
J. Imaging 2023, 9(8), 163; https://doi.org/10.3390/jimaging9080163 - 16 Aug 2023
Cited by 17 | Viewed by 1810
Abstract
The diagnosis of brain tumors at an early stage is an exigent task for radiologists. Untreated patients rarely survive more than six months. It is a potential cause of mortality that can occur very quickly. Because of this, the early and effective diagnosis of brain tumors requires the use of an automated method. This study aims at the early detection of brain tumors using brain magnetic resonance imaging (MRI) data and efficient learning paradigms. In visual feature extraction, convolutional neural networks (CNN) have achieved significant breakthroughs. The study involves feature extraction by deep convolutional layers for the efficient classification of brain tumor patients from the normal group. The deep convolutional neural network was implemented to extract features that represent the image more comprehensively for model training. Using deep convolutional features helps to increase the precision of tumor and non-tumor patient classifications. In this paper, we experimented with five machine learning (ML) models to heighten the understanding and enhance the scope and significance of brain tumor classification. Further, we proposed an ensemble of three high-performing individual ML models, namely Extreme Gradient Boosting, Ada-Boost, and Random Forest (XG-Ada-RF), to derive binary class classification output for detecting brain tumors in images. The proposed voting classifier, along with convolutional features, produced results that showed the highest accuracy of 95.9% for tumor and 94.9% for normal. The proposed ensemble approach demonstrated improved accuracy over the individual methods. Full article
(This article belongs to the Special Issue Multimodal Imaging for Radiotherapy: Latest Advances and Challenges)
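The XG-Ada-RF ensemble combines three base classifiers through voting. A minimal sketch of hard majority voting over per-model predictions; the base models are assumed to be trained separately, and the predictions shown in the usage are toy values only:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Hard-voting ensemble. predictions_per_model is a list of equal-length
    prediction lists, one per base model (e.g. XGBoost, AdaBoost, Random
    Forest); returns the per-sample majority label."""
    ensembled = []
    for sample_preds in zip(*predictions_per_model):
        # most_common(1) gives the label with the highest vote count
        ensembled.append(Counter(sample_preds).most_common(1)[0][0])
    return ensembled
```

With three base models, a sample is flagged as "tumor" only when at least two of them agree, which is the mechanism behind the accuracy gain the abstract reports.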
15 pages, 645 KiB  
Article
Thangka Image Captioning Based on Semantic Concept Prompt and Multimodal Feature Optimization
by Wenjin Hu, Lang Qiao, Wendong Kang and Xinyue Shi
J. Imaging 2023, 9(8), 162; https://doi.org/10.3390/jimaging9080162 - 16 Aug 2023
Viewed by 1675
Abstract
Thangka images exhibit a high level of diversity and richness, and the existing deep learning-based image captioning methods generate poor accuracy and richness of Chinese captions for Thangka images. To address this issue, this paper proposes a Semantic Concept Prompt and Multimodal Feature Optimization network (SCAMF-Net). The Semantic Concept Prompt (SCP) module is introduced in the text encoding stage to obtain more semantic information about the Thangka by introducing contextual prompts, thus enhancing the richness of the description content. The Multimodal Feature Optimization (MFO) module is proposed to optimize the correlation between Thangka images and text. This module enhances the correlation between the image features and text features of the Thangka through the Captioner and Filter to more accurately describe the visual concept features of the Thangka. The experimental results demonstrate that our proposed method outperforms baseline models on the Thangka dataset in terms of BLEU-4, METEOR, ROUGE, CIDEr, and SPICE by 8.7%, 7.9%, 8.2%, 76.6%, and 5.7%, respectively. Furthermore, this method also exhibits superior performance compared to the state-of-the-art methods on the public MSCOCO dataset. Full article
(This article belongs to the Topic Computer Vision and Image Processing)
18 pages, 5485 KiB  
Article
Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired
by Nasrin Bayat, Jong-Hwan Kim, Renoa Choudhury, Ibrahim F. Kadhim, Zubaidah Al-Mashhadani, Mark Aldritz Dela Virgen, Reuben Latorre, Ricardo De La Paz and Joon-Hyuk Park
J. Imaging 2023, 9(8), 161; https://doi.org/10.3390/jimaging9080161 - 15 Aug 2023
Cited by 4 | Viewed by 2087
Abstract
This paper presents a system that utilizes vision transformers and multimodal feedback modules to facilitate navigation and collision avoidance for the visually impaired. By implementing vision transformers, the system achieves accurate object detection, enabling the real-time identification of objects in front of the user. Semantic segmentation and the algorithms developed in this work provide a means to generate a trajectory vector of all identified objects from the vision transformer and to detect objects that are likely to intersect with the user’s walking path. Audio and vibrotactile feedback modules are integrated to convey collision warning through multimodal feedback. The dataset used to create the model was captured from both indoor and outdoor settings under different weather conditions at different times across multiple days, resulting in 27,867 photos consisting of 24 different classes. Classification results showed good performance (95% accuracy), supporting the efficacy and reliability of the proposed model. The design and control methods of the multimodal feedback modules for collision warning are also presented, while the experimental validation concerning their usability and efficiency stands as an upcoming endeavor. The demonstrated performance of the vision transformer and the presented algorithms in conjunction with the multimodal feedback modules show promising prospects of its feasibility and applicability for the navigation assistance of individuals with vision impairment. Full article
13 pages, 6011 KiB  
Article
Selective Optical Imaging for Detection of Bacterial Biofilms in Tissues
by Michael Okebiorun, Cody Oberbeck, Cameron Waite, Samuel Clark, Dalton Miller, Elisa H. Barney Smith, Kenneth A. Cornell and Jim Browning
J. Imaging 2023, 9(8), 160; https://doi.org/10.3390/jimaging9080160 - 15 Aug 2023
Viewed by 1648
Abstract
Significance: The development of an imaging technique to accurately identify biofilm regions on tissues and in wounds is crucial for the implementation of precise surface-based treatments, leading to better patient outcomes and reduced chances of infection. Aim: The goal of this study was to develop an imaging technique that relies on selective trypan blue (TB) staining of dead cells, necrotic tissues, and bacterial biofilms, to identify biofilm regions on tissues and wounds. Approach: The study explored combinations of ambient multi-colored LED lights to obtain maximum differentiation between stained biofilm regions and the underlying chicken tissue or glass substrate during image acquisition. The TB imaging results were then visually and statistically compared to fluorescence images using a shape similarity measure. Results: The comparisons between the proposed TB staining method and the fluorescence standard used to detect biofilms on tissues and glass substrates showed up to 97 percent similarity, suggesting that the TB staining method is a promising technique for identifying biofilm regions. Conclusions: The TB staining method demonstrates significant potential as an effective imaging technique for the identification of fluorescing and non-fluorescing biofilms on tissues and in wounds. This approach could lead to improved precision in surface-based treatments and better patient outcomes. Full article
(This article belongs to the Section Medical Imaging)
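The abstract compares TB-stained regions against fluorescence images using a shape similarity measure, but does not name the measure here. As an illustrative stand-in, the sketch below uses the Dice coefficient, a common choice for comparing binary region masks:

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two boolean region masks
    (1.0 = identical shapes, 0.0 = no overlap)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # two empty masks are trivially identical
    return 2.0 * intersection / total if total else 1.0
```

Comparing a segmented TB-stain mask with the corresponding fluorescence mask this way gives a single agreement score in [0, 1], analogous to the up-to-97-percent similarity figures quoted above.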
14 pages, 3582 KiB  
Article
Biased Deep Learning Methods in Detection of COVID-19 Using CT Images: A Challenge Mounted by Subject-Wise-Split ISFCT Dataset
by Shiva Parsarad, Narges Saeedizadeh, Ghazaleh Jamalipour Soufi, Shamim Shafieyoon, Farzaneh Hekmatnia, Andrew Parviz Zarei, Samira Soleimany, Amir Yousefi, Hengameh Nazari, Pegah Torabi, Abbas S. Milani, Seyed Ali Madani Tonekaboni, Hossein Rabbani, Ali Hekmatnia and Rahele Kafieh
J. Imaging 2023, 9(8), 159; https://doi.org/10.3390/jimaging9080159 - 8 Aug 2023
Cited by 1 | Viewed by 1726
Abstract
Accurate detection of respiratory system damage including COVID-19 is considered one of the crucial applications of deep learning (DL) models using CT images. However, the main shortcoming of the published works has been unreliable reported accuracy and the lack of repeatability with new datasets, mainly due to slice-wise splits of the data, creating dependency between training and test sets due to shared data across the sets. We introduce a new dataset of CT images (ISFCT Dataset) with labels indicating the subject-wise split to train and test our DL algorithms in an unbiased manner. We also use this dataset to validate the real performance of the published works in a subject-wise data split. Another key feature is the provision of more specific labels (eight characteristic lung features) rather than being limited to COVID-19 and healthy labels. We show that the reported high accuracy of the existing models on current slice-wise splits is not repeatable for subject-wise splits, and distribution differences between data splits are demonstrated using t-distribution stochastic neighbor embedding. We indicate that, by examining subject-wise data splitting, less complicated models show competitive results compared to the existing complicated models, demonstrating that complex models do not necessarily generate accurate and repeatable results. Full article
(This article belongs to the Section Medical Imaging)
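The bias the paper targets comes from slice-wise splits that place slices of the same subject in both the training and test sets. A minimal sketch of the subject-wise alternative, which keeps every subject's slices in exactly one set (function and field names are illustrative):

```python
import random

def subject_wise_split(slice_records, test_fraction=0.3, seed=0):
    """Split CT slices so that all slices of a subject land in the same set.
    slice_records: list of (subject_id, slice_data) pairs."""
    subjects = sorted({sid for sid, _ in slice_records})
    rng = random.Random(seed)
    rng.shuffle(subjects)                      # randomise at the subject level
    n_test = max(1, int(len(subjects) * test_fraction))
    test_subjects = set(subjects[:n_test])
    train = [r for r in slice_records if r[0] not in test_subjects]
    test = [r for r in slice_records if r[0] in test_subjects]
    return train, test
```

Because the shuffle happens over subject IDs rather than individual slices, no subject can leak information from the training set into the test set.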
15 pages, 4960 KiB  
Article
Enhancing Fingerprint Liveness Detection Accuracy Using Deep Learning: A Comprehensive Study and Novel Approach
by Deep Kothadiya, Chintan Bhatt, Dhruvil Soni, Kalpita Gadhe, Samir Patel, Alessandro Bruno and Pier Luigi Mazzeo
J. Imaging 2023, 9(8), 158; https://doi.org/10.3390/jimaging9080158 - 7 Aug 2023
Cited by 6 | Viewed by 2533
Abstract
Liveness detection for fingerprint impressions plays a role in the meaningful prevention of any unauthorized activity or phishing attempt. The accessibility of unique individual identification has increased the popularity of biometrics. Deep learning with computer vision has produced remarkable results in image classification, detection, and many other tasks. The proposed methodology relies on an attention model and ResNet convolutions. Spatial attention (SA) and channel attention (CA) models were used sequentially to enhance feature learning. A three-fold sequential attention model is used along with five convolution learning layers. The method's performance has been tested across different pooling strategies, such as Max, Average, and Stochastic, over the LivDet-2021 dataset. Comparisons against different state-of-the-art variants of Convolutional Neural Networks, such as DenseNet121, VGG19, InceptionV3, and conventional ResNet50, have been carried out. In particular, tests have been aimed at assessing ResNet34 and ResNet50 models on feature extraction by further enhancing the sequential attention model. A Multilayer Perceptron (MLP) classifier used alongside a fully connected layer returns the ultimate prediction of the entire stack. Finally, the proposed method is also evaluated on feature extraction with and without attention models for ResNet and considering different pooling strategies. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)
18 pages, 4028 KiB  
Article
Target Design in SEM-Based Nano-CT and Its Influence on X-ray Imaging
by Jonas Fell, Felix Wetzler, Michael Maisl and Hans-Georg Herrmann
J. Imaging 2023, 9(8), 157; https://doi.org/10.3390/jimaging9080157 - 4 Aug 2023
Viewed by 1484
Abstract
Nano-computed tomography (nano-CT) based on scanning electron microscopy (SEM) is utilized for multimodal material characterization in one instrument. Since SEM-based CT uses geometrical magnification, X-ray targets can be adapted without any further changes to the system. This allows for designing targets with varying geometry and chemical composition to influence the X-ray focal spot, intensity and energy distribution with the aim to enhance the image quality. In this paper, three different target geometries with a varying volume are presented: bulk, foil and needle target. Based on the analyzed electron beam properties and X-ray beam path, the influence of the different target designs on X-ray imaging is investigated. With the obtained information, three targets for different applications are recommended. A platinum (Pt) bulk target tilted by 25° as an optimal combination of high photon flux and spatial resolution is used for fast CT scans and the investigation of high-absorbing or large sample volumes. To image low-absorbing materials, e.g., polymers or organic materials, a target material with a characteristic line energy right above the detector energy threshold is recommended. In the case of the observed system, we used a 30° tilted chromium (Cr) target, leading to a higher image contrast. To reach a maximum spatial resolution of about 100 nm, we recommend a tungsten (W) needle target with a tip diameter of about 100 nm. Full article
14 pages, 5234 KiB  
Article
Classification of a 3D Film Pattern Image Using the Optimal Height of the Histogram for Quality Inspection
by Jaeeun Lee, Hongseok Choi, Kyeongmin Yum, Jungwon Park and Jongnam Kim
J. Imaging 2023, 9(8), 156; https://doi.org/10.3390/jimaging9080156 - 2 Aug 2023
Viewed by 1192
Abstract
A 3D film pattern image was recently developed for marketing purposes, and an inspection method is needed to evaluate the quality of the pattern for mass production. However, due to its recent development, there are limited methods to inspect the 3D film pattern. The good pattern in the 3D film has a clear outline and high contrast, while the bad pattern has a blurry outline and low contrast. Due to these characteristics, it is challenging to examine the quality of the 3D film pattern. In this paper, we propose a simple algorithm that classifies the 3D film pattern as either good or bad by using the height of the histograms. Despite its simplicity, the proposed method can accurately and quickly inspect the 3D film pattern. In the experimental results, the proposed method achieved 99.09% classification accuracy with a computation time of 6.64 s, demonstrating better performance than existing algorithms. Full article
(This article belongs to the Section Image and Video Processing)
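The classification idea can be illustrated with a normalised histogram-peak test: a blurry, low-contrast (bad) pattern concentrates its pixels in few intensity bins, producing a tall histogram peak, while a crisp pattern spreads them out. The bin count and threshold below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def classify_pattern(gray_image, n_bins=64, height_threshold=0.25):
    """Label a grayscale pattern image (values in [0, 1]) 'good' or 'bad'
    from the height of its intensity histogram's tallest bin."""
    hist, _ = np.histogram(gray_image, bins=n_bins, range=(0.0, 1.0))
    peak_height = hist.max() / hist.sum()   # normalised tallest-bin height
    return "bad" if peak_height > height_threshold else "good"
```

A uniform gray patch collapses into a single bin and is flagged "bad", while a high-contrast gradient spreads across many bins and passes as "good".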
11 pages, 1135 KiB  
Article
The Cross-Sectional Area Assessment of Pelvic Muscles Using the MRI Manual Segmentation among Patients with Low Back Pain and Healthy Subjects
by Wiktoria Frącz, Jakub Matuska, Jarosław Szyszka, Paweł Dobrakowski, Wiktoria Szopka and Elżbieta Skorupska
J. Imaging 2023, 9(8), 155; https://doi.org/10.3390/jimaging9080155 - 31 Jul 2023
Cited by 1 | Viewed by 1506
Abstract
The pain pathomechanism of chronic low back pain (LBP) is complex and the available diagnostic methods are insufficient. Patients present morphological changes in the volume and cross-sectional area (CSA) of the lumbosacral region. The main objective of this study was to assess whether CSA measurements of the pelvic muscles would indicate muscle atrophy between asymptomatic and symptomatic sides in chronic LBP patients, as well as between right and left sides in healthy volunteers. In addition, inter-rater reliability for CSA measurements was examined. The study involved 71 chronic LBP patients and 29 healthy volunteers. The CSAs of the gluteus maximus, medius, minimus and piriformis were measured using the MRI manual segmentation method. Muscle atrophy was confirmed in the gluteus maximus, gluteus minimus and piriformis muscle for over 50% of chronic LBP patients (p < 0.05). Gluteus medius showed atrophy in patients with left side pain occurrence (p < 0.001). Muscle atrophy occurred on the symptomatic side for all inspected muscles, except gluteus maximus in rater one assessment. The reliability of CSA measurements between raters calculated using CCC and ICC presented great inter-rater reproducibility for each muscle both in patients and healthy volunteers (p < 0.95). Therefore, there is the possibility of using CSA assessment in the diagnosis of patients with symptoms of chronic LBP. Full article
(This article belongs to the Section Medical Imaging)
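Inter-rater agreement of the kind reported above is commonly quantified with Lin's concordance correlation coefficient (CCC), which penalises both correlation loss and systematic bias between raters. A minimal sketch (the input arrays in the usage are toy values, not study data):

```python
import numpy as np

def concordance_ccc(rater1, rater2):
    """Lin's concordance correlation coefficient between two raters'
    measurements (1.0 = perfect agreement, -1.0 = perfect disagreement)."""
    x = np.asarray(rater1, dtype=float)
    y = np.asarray(rater2, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    # denominator adds a bias term for any shift between the raters' means
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike plain Pearson correlation, the CCC drops below 1 when one rater systematically measures larger areas than the other, which is exactly the failure mode a reproducibility study needs to detect.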
13 pages, 1536 KiB  
Article
Open-Set Recognition of Wood Species Based on Deep Learning Feature Extraction Using Leaves
by Tianyu Fang, Zhenyu Li, Jialin Zhang, Dawei Qi and Lei Zhang
J. Imaging 2023, 9(8), 154; https://doi.org/10.3390/jimaging9080154 - 30 Jul 2023
Viewed by 1813
Abstract
An open-set recognition scheme for tree leaves based on deep learning feature extraction is presented in this study. Deep learning algorithms are used to extract leaf features for different wood species, and the leaf set of a wood species is divided into two datasets: the leaf set of a known wood species and the leaf set of an unknown species. The deep learning network (CNN) is trained on the leaves of selected known wood species, and the features of the remaining known wood species and all unknown wood species are extracted using the trained CNN. Then, the single-class classification is performed using the weighted SVDD algorithm to recognize the leaves of known and unknown wood species. The features of leaves recognized as known wood species are fed back to the trained CNN to recognize the leaves of known wood species. The recognition results of a single-class classifier for known and unknown wood species are combined with the recognition results of a multi-class CNN to finally complete the open recognition of wood species. We tested the proposed method on the publicly available Swedish Leaf Dataset, which includes 15 wood species (5 species used as known and 10 species used as unknown). The test results showed that, with F1 scores of 0.7797 and 0.8644, mixed recognition rates of 95.15% and 93.14%, and Kappa coefficients of 0.7674 and 0.8644 under two different data distributions, the proposed method outperformed the state-of-the-art open-set recognition algorithms in all three aspects. Moreover, the more wood species are known, the better the recognition becomes. This approach can extract effective features from tree leaf images for open-set recognition and achieve wood species recognition without compromising tree material. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
14 pages, 6850 KiB  
Article
Nighttime Image Dehazing by Render
by Zheyan Jin, Huajun Feng, Zhihai Xu and Yueting Chen
J. Imaging 2023, 9(8), 153; https://doi.org/10.3390/jimaging9080153 - 28 Jul 2023
Cited by 2 | Viewed by 1719
Abstract
Nighttime image dehazing presents unique challenges due to the unevenly distributed haze caused by the color change of artificial light sources. This results in multiple interferences, including atmospheric light, glow, and direct light, which make the complex scattering haze interference difficult to accurately distinguish and remove. Additionally, obtaining pairs of high-definition data for fog removal at night is a difficult task. These challenges make nighttime image dehazing a particularly challenging problem to solve. To address these challenges, we introduced the haze scattering formula to more accurately express the haze in three-dimensional space. We also proposed a novel data synthesis method using the latest CG textures and lumen lighting technology to build scenes where various hazes can be seen clearly through ray tracing. We converted the complex 3D scattering relationship transformation into a 2D image dataset to better learn the mapping from 3D haze to 2D haze. Additionally, we improved the existing neural network and established a night haze intensity evaluation label based on the idea of optical PSF. This allowed us to adjust the haze intensity of the rendered dataset according to the intensity of the real haze image and improve the accuracy of dehazing. Our experiments showed that our data construction and network improvement achieved better visual effects, objective indicators, and calculation speed. Full article
(This article belongs to the Section Image and Video Processing)
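Haze synthesis of the kind described above is usually grounded in the classic single-scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·d). The sketch below implements that simplified 2D model as a stand-in for the paper's fuller 3D scattering formulation; parameter names are illustrative:

```python
import numpy as np

def add_synthetic_haze(clear_image, depth, airlight, beta=1.0):
    """Classic atmospheric scattering model: I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth). clear_image is the haze-free
    scene J, depth is the per-pixel scene depth, airlight is A."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    if t.ndim == 2 and np.ndim(clear_image) == 3:
        t = t[..., None]                 # broadcast over colour channels
    return np.asarray(clear_image, dtype=float) * t + airlight * (1.0 - t)
```

At zero depth the image is untouched; at large depth every pixel converges to the airlight value, which is the behaviour a rendered training pair must reproduce.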
20 pages, 4940 KiB  
Article
Quantification of Gas Flaring from Satellite Imagery: A Comparison of Two Methods for SLSTR and BIROS Imagery
by Alexandre Caseiro and Agnieszka Soszyńska
J. Imaging 2023, 9(8), 152; https://doi.org/10.3390/jimaging9080152 - 27 Jul 2023
Cited by 1 | Viewed by 2087
Abstract
Gas flaring is an environmental problem of local, regional and global concern. Gas flares emit pollutants and greenhouse gases, yet knowledge about the source strength is limited due to disparate reporting approaches in different geographies, whenever and wherever those are considered. Remote sensing has bridged the gap but uncertainties remain. There are numerous sensors which provide measurements over flaring-active regions in wavelengths that are suitable for the observation of gas flares and the retrieval of flaring activity. However, their use for operational monitoring has been limited. Besides several potential sensors, there are also different approaches to conduct the retrievals. In the current paper, we compare two retrieval approaches over an offshore flaring area during an extended period of time. Our results show that retrieved activities are consistent between methods, although discrepancies may arise for individual flares at high temporal resolution, which can be traced back to the variable nature of flaring. The presented results are helpful for the estimation of flaring activity from different sources and will be useful in a future integration of diverse sensors and methodologies into a single monitoring scheme. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

15 pages, 7753 KiB  
Article
Fast Compressed Sensing of 3D Radial T1 Mapping with Different Sparse and Low-Rank Models
by Antti Paajanen, Matti Hanhela, Nina Hänninen, Olli Nykänen, Ville Kolehmainen and Mikko J. Nissi
J. Imaging 2023, 9(8), 151; https://doi.org/10.3390/jimaging9080151 - 26 Jul 2023
Cited by 2 | Viewed by 1342
Abstract
Knowledge of the relative performance of the well-known sparse and low-rank compressed sensing models with 3D radial quantitative magnetic resonance imaging acquisitions is limited. We used 3D radial T1 relaxation time mapping data to compare the total variation, low-rank, and Huber penalty function approaches to regularization and to provide insights into the relative performance of these image reconstruction models. Simulation and ex vivo specimen data were used to determine the best compressed sensing model, as measured by the normalized root mean squared error and structural similarity index. The large-scale compressed sensing models were solved with a GPU implementation of a preconditioned primal-dual proximal splitting algorithm to provide high-quality T1 maps within a feasible computation time. The model combining spatial total variation and locally low-rank regularization yielded the best performance, followed closely by the model combining spatial and contrast-dimension total variation. Computation times ranged from 2 to 113 min, with the low-rank approaches taking the longest. The differences between the compressed sensing models are not necessarily large, but the overall performance depends heavily on the imaged object. Full article
(This article belongs to the Section Medical Imaging)
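The regularizers compared in this abstract can be sketched numerically. The snippet below evaluates a spatial total variation term and a locally low-rank term (blockwise nuclear norm of Casorati matrices) on a toy image series. The data-fidelity term (non-uniform FFT of the radial k-space data) and the preconditioned primal-dual solver are omitted, and the block size and weights are hypothetical, so this is only an illustration of the penalty structure:

```python
import numpy as np

def spatial_tv(image):
    """Isotropic total variation of a 2D image via forward differences."""
    dx = np.diff(image, axis=0, append=image[-1:, :])
    dy = np.diff(image, axis=1, append=image[:, -1:])
    return float(np.sum(np.sqrt(dx**2 + dy**2)))

def locally_low_rank(series, block=4):
    """Sum of nuclear norms over non-overlapping spatial blocks.

    `series` has shape (T, H, W): T contrast frames (e.g. inversion
    times of a T1 mapping series). Each block is reshaped into a
    (T, block*block) Casorati matrix whose singular values are summed.
    """
    T, H, W = series.shape
    penalty = 0.0
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            casorati = series[:, i:i + block, j:j + block].reshape(T, -1)
            penalty += np.linalg.norm(casorati, 'nuc')  # nuclear norm
    return penalty

def regularizer(series, lam_tv=1e-2, lam_llr=1e-2):
    """TV + LLR regularization term of a reconstruction objective."""
    tv = sum(spatial_tv(frame) for frame in series)
    return lam_tv * tv + lam_llr * locally_low_rank(series)
```

In a full reconstruction, this term would be added to the k-space data-fidelity term and minimized by a proximal splitting method, since neither penalty is differentiable everywhere.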

10 pages, 1048 KiB  
Article
Quantitative CT Metrics Associated with Variability in the Diffusion Capacity of the Lung of Post-COVID-19 Patients with Minimal Residual Lung Lesions
by Han Wen, Julio A. Huapaya, Shreya M. Kanth, Junfeng Sun, Brianna P. Matthew, Simone C. Lee, Michael Do, Marcus Y. Chen, Ashkan A. Malayeri and Anthony F. Suffredini
J. Imaging 2023, 9(8), 150; https://doi.org/10.3390/jimaging9080150 - 26 Jul 2023
Cited by 4 | Viewed by 1422
Abstract
(1) Background: A reduction in the diffusion capacity of the lung for carbon monoxide is a prevalent longer-term consequence of COVID-19 infection. In patients who have zero or minimal residual radiological abnormalities in the lungs, it has been debated whether the cause was mainly due to a reduced alveolar volume or involved diffuse interstitial or vascular abnormalities. (2) Methods: We performed a cross-sectional study of 45 patients with either zero or minimal residual lesions in the lungs (total volume < 7 cc) at two months to one year post COVID-19 infection. There was considerable variability in the diffusion capacity of the lung for carbon monoxide, with 27% of the patients at less than 80% of the predicted reference. We investigated a set of independent variables that may affect the diffusion capacity of the lung, including demographic, pulmonary physiology and CT (computed tomography)-derived variables of vascular volume, parenchymal density and residual lesion volume. (3) Results: The leading three variables that contributed to the variability in the diffusion capacity of the lung for carbon monoxide were the alveolar volume, determined via pulmonary function tests, the blood vessel volume fraction, determined via CT, and the parenchymal radiodensity, also determined via CT. These factors explained 49% of the variance of the diffusion capacity, with p values of 0.031, 0.005 and 0.018, respectively, after adjusting for confounders. A multiple-regression model combining these three variables fit the measured values of the diffusion capacity, with R = 0.70 and p < 0.001. (4) Conclusions: The results are consistent with the notion that in some post-COVID-19 patients, after their pulmonary lesions resolve, diffuse changes in the vascular and parenchymal structures, in addition to a low alveolar volume, could be contributors to a lingering low diffusion capacity. Full article
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
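The three-predictor regression described in the results can be illustrated on synthetic data. All numbers below (predictor means, spreads, and coefficients) are invented stand-ins, not values from the study; only the model form, diffusion capacity regressed on alveolar volume, vessel volume fraction, and parenchymal density, follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 45  # cohort size reported in the abstract

# Hypothetical stand-ins for the three leading predictors:
# alveolar volume (L), CT vessel volume fraction, CT parenchymal density (HU).
alveolar = rng.normal(5.0, 0.8, n)
vessel = rng.normal(0.03, 0.005, n)
density = rng.normal(-850.0, 30.0, n)

# Synthetic diffusion capacity with invented coefficients plus noise.
dlco = 0.9 * alveolar + 40.0 * vessel - 0.01 * density + rng.normal(0.0, 0.3, n)

# Ordinary least squares: DLCO ~ intercept + three predictors.
X = np.column_stack([np.ones(n), alveolar, vessel, density])
coef, *_ = np.linalg.lstsq(X, dlco, rcond=None)

# Multiple correlation coefficient R between fitted and measured values.
fitted = X @ coef
R = np.corrcoef(fitted, dlco)[0, 1]
```

The study's reported R = 0.70 corresponds to this R computed between the model's fitted values and the measured diffusion capacity.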

19 pages, 970 KiB  
Review
Prospective of Pancreatic Cancer Diagnosis Using Cardiac Sensing
by Mansunderbir Singh, Priyanka Anvekar, Bhavana Baraskar, Namratha Pallipamu, Srikanth Gadam, Akhila Sai Sree Cherukuri, Devanshi N. Damani, Kanchan Kulkarni and Shivaram P. Arunachalam
J. Imaging 2023, 9(8), 149; https://doi.org/10.3390/jimaging9080149 - 25 Jul 2023
Viewed by 5526
Abstract
Pancreatic carcinoma (Ca Pancreas) is the third leading cause of cancer-related deaths in the world. The malignancies of the pancreas can be diagnosed with the help of various imaging modalities. An endoscopic ultrasound with a tissue biopsy is so far considered the gold standard for the detection of Ca Pancreas, especially for lesions <2 mm. However, other methods, like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI), are also conventionally used. Moreover, newer techniques, like proteomics, radiomics, metabolomics, and artificial intelligence (AI), are slowly being introduced for diagnosing pancreatic cancer. Regardless, it remains a challenge to diagnose pancreatic carcinoma non-invasively at an early stage due to its delayed presentation. This also makes it difficult to demonstrate an association between Ca Pancreas and other vital organs of the body, such as the heart. A number of studies have demonstrated a correlation between the heart and pancreatic cancer. A tumor of the pancreas affects the heart at the physiological as well as the molecular level. An overexpression of the SMAD4 gene; a disruption in biomolecules such as IGF, MAPK, and ApoE; and increased CA19-9 markers are a few of the many factors noted to affect the cardiovascular system in pancreatic malignancies. A comprehensive review of this correlation will aid researchers in conducting studies to help establish a definite relation between the two organs and discover ways to use it for the early detection of Ca Pancreas. Full article
