J. Imaging, Volume 9, Issue 5 (May 2023) – 17 articles

Cover Story: Pan-sharpening methods increase the geometric resolution of multispectral images (MSI) using panchromatic data of the same scene. Choosing a suitable pan-sharpening algorithm is not trivial: several exist, but none is universally recognized as the best. This article analyzes pan-sharpening in relation to the different land covers present in GeoEye-1 images. Nine methods are compared on the results they produce for four types of areas, i.e., natural, rural, semi-urban, and urban, distinguishable by their vegetation content. The cover images show RGB compositions obtained with the initial MSI (left), the least-performing method (middle), and the best-performing method (right) for a rural area (top) and an urban area (bottom).
18 pages, 4717 KiB  
Article
Perceptual Translucency in 3D Printing Using Surface Texture
by Kazuki Nagasawa, Kamui Ono, Wataru Arai and Norimichi Tsumura
J. Imaging 2023, 9(5), 105; https://doi.org/10.3390/jimaging9050105 - 22 May 2023
Viewed by 1455
Abstract
We propose a method of reproducing perceptual translucency in three-dimensional printing. In contrast to most conventional methods, which reproduce the physical properties of translucency, we focus on its perceptual aspects. Humans are known to rely on simple cues to perceive translucency, and we develop a method of reproducing these cues using the gradation of surface textures. Textures are designed to reproduce the intensity distribution of the shading and thus provide a cue for the perception of translucency. In creating textures, we adopt computer graphics to develop an image-based optimization method. We validate the effectiveness of the method through subjective evaluation experiments using three-dimensionally printed objects. The results suggest that the proposed method may increase perceptual translucency under specific conditions. As a method for translucent 3D printing, our approach is limited in that it depends on the observation conditions; however, it offers the field of perception the insight that the human visual system can be deceived by surface textures alone.
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)

17 pages, 2657 KiB  
Article
Combining CNNs and Markov-like Models for Facial Landmark Detection with Spatial Consistency Estimates
by Ahmed Gdoura, Markus Degünther, Birgit Lorenz and Alexander Effland
J. Imaging 2023, 9(5), 104; https://doi.org/10.3390/jimaging9050104 - 22 May 2023
Cited by 2 | Viewed by 2405
Abstract
The accurate localization of facial landmarks is essential for several tasks, including face recognition, head pose estimation, facial region extraction, and emotion detection. Although the number of required landmarks is task-specific, models are typically trained on all available landmarks in the datasets, limiting efficiency. Furthermore, model performance is strongly influenced by scale-dependent local appearance information around landmarks and the global shape information generated by them. To account for this, we propose a lightweight hybrid model for facial landmark detection designed specifically for pupil region extraction. Our design combines a convolutional neural network (CNN) with a Markov random field (MRF)-like process trained on only 17 carefully selected landmarks. The advantage of our model is the ability to run different image scales on the same convolutional layers, resulting in a significant reduction in model size. In addition, we employ an approximation of the MRF that is run on a subset of landmarks to validate the spatial consistency of the generated shape. This validation is performed against a learned conditional distribution expressing the location of one landmark relative to its neighbor. Experimental results on popular facial landmark localization datasets such as 300-W, WFLW, and HELEN demonstrate the accuracy of our proposed model. Furthermore, our model achieves state-of-the-art performance on a well-defined robustness metric. In conclusion, the results demonstrate the ability of our lightweight model to filter out spatially inconsistent predictions, even with significantly fewer training landmarks.
(This article belongs to the Topic Computer Vision and Image Processing)
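The spatial-consistency idea can be illustrated with a small sketch: the offset of each landmark relative to a neighbor is scored against a learned conditional Gaussian, and shapes with implausible offsets are rejected. This is an illustrative approximation under assumed names and threshold, not the authors' implementation.

```python
# Hypothetical sketch: validate a predicted landmark shape against learned
# conditional Gaussians over neighbor offsets (all names/thresholds assumed).
import numpy as np
from scipy.stats import multivariate_normal

def fit_offset_model(train_shapes, i, j):
    """Learn the distribution of landmark j's offset relative to landmark i.

    train_shapes: (N, L, 2) array of N training shapes with L landmarks.
    """
    offsets = train_shapes[:, j] - train_shapes[:, i]          # (N, 2)
    return offsets.mean(axis=0), np.cov(offsets, rowvar=False)

def is_consistent(shape, edges, models, log_thresh=-10.0):
    """Accept a predicted shape only if every neighbor offset is plausible."""
    for (i, j), (mu, cov) in zip(edges, models):
        logp = multivariate_normal.logpdf(shape[j] - shape[i], mean=mu, cov=cov)
        if logp < log_thresh:           # assumed rejection threshold
            return False
    return True
```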

9 pages, 1317 KiB  
Article
Tomosynthesis-Detected Architectural Distortions: Correlations between Imaging Characteristics and Histopathologic Outcomes
by Giovanna Romanucci, Francesca Fornasa, Andrea Caneva, Claudia Rossati, Marta Mandarà, Oscar Tommasini and Rossella Rella
J. Imaging 2023, 9(5), 103; https://doi.org/10.3390/jimaging9050103 - 19 May 2023
Cited by 3 | Viewed by 2120
Abstract
Objective: to determine the positive predictive value (PPV) of tomosynthesis (DBT)-detected architectural distortions (ADs) and to evaluate correlations between the imaging characteristics of ADs and their histopathologic outcomes. Methods: biopsies performed between 2019 and 2021 on ADs were included. Images were interpreted by dedicated breast imaging radiologists. Pathologic results after DBT-guided vacuum-assisted biopsy (DBT-VAB) and core needle biopsy were compared with ADs detected by DBT, synthetic 2D mammography (synt2D) and ultrasound (US). Results: US was performed to assess a correlate for the ADs in all 123 cases, and a US correlate was identified in 12/123 (9.7%) cases, which underwent US-guided core needle biopsy (CNB). The remaining 111/123 (90.2%) ADs were biopsied under DBT guidance. Among the 123 ADs included, 33/123 (26.8%) yielded malignant results. The overall PPV for malignancy was 30.1% (37/123). The imaging-specific PPV for malignancy was 19.2% (5/26) for DBT-only ADs, 28.2% (24/85) for ADs visible on both DBT and synt2D mammography, and 66.7% (8/12) for ADs with a US correlate, with a statistically significant difference among the three groups (p = 0.01). Conclusions: DBT-only ADs demonstrated a lower PPV for malignancy than ADs detected on both DBT and synt2D mammography, though not low enough to avoid biopsy. As the presence of a US correlate was found to be related to malignancy, it should increase the radiologist's level of suspicion, even when CNB returns a B3 result.
(This article belongs to the Topic Medical Image Analysis)
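For readers unfamiliar with the metric, the PPV above is simply the fraction of biopsied ADs that proved malignant; the subgroup figures quoted in the abstract can be checked directly:

```python
# PPV = malignant biopsies / all biopsies in the subgroup; reproducing the
# subgroup figures quoted in the abstract.
groups = {"DBT-only": (5, 26), "DBT + synt2D": (24, 85), "US correlate": (8, 12)}
for name, (malignant, biopsied) in groups.items():
    print(f"{name}: PPV = {malignant}/{biopsied} = {100 * malignant / biopsied:.1f}%")
# DBT-only: 19.2%, DBT + synt2D: 28.2%, US correlate: 66.7%
```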

36 pages, 30803 KiB  
Review
Intraoperative Gamma Cameras: A Review of Development in the Last Decade and Future Outlook
by Andrew L. Farnworth and Sarah L. Bugby
J. Imaging 2023, 9(5), 102; https://doi.org/10.3390/jimaging9050102 - 16 May 2023
Cited by 8 | Viewed by 4047
Abstract
Portable gamma cameras suitable for intraoperative imaging are in active development and testing. These cameras utilise a range of collimation, detection, and readout architectures, each of which can have significant and interacting impacts on the performance of the system as a whole. In this review, we provide an analysis of intraoperative gamma camera development over the past decade. The designs and performance of 17 imaging systems are compared in depth. We discuss where recent technological developments have had the greatest impact, identify emerging technological and scientific requirements, and predict future research directions. This is a comprehensive review of the current and emerging state of the art as more devices enter clinical practice.

10 pages, 1644 KiB  
Article
Examination for the Factors Involving to Joint Effusion in Patients with Temporomandibular Disorders Using Magnetic Resonance Imaging
by Fumi Mizuhashi, Ichiro Ogura, Ryo Mizuhashi, Yuko Watarai, Makoto Oohashi, Tatsuhiro Suzuki and Hisato Saegusa
J. Imaging 2023, 9(5), 101; https://doi.org/10.3390/jimaging9050101 - 16 May 2023
Cited by 3 | Viewed by 1981
Abstract
Background: This study investigated the factors involved in joint effusion in patients with temporomandibular disorders. Methods: The magnetic resonance images of 131 temporomandibular joints (TMJs) of patients with temporomandibular disorders were evaluated. Gender, age, disease classification, duration of manifestation, muscle pain, TMJ pain, jaw opening disturbance, disc displacement with and without reduction, deformation of the articular disc, deformation of bone, and joint effusion were investigated. Differences in the appearance of symptoms and observations were evaluated using cross-tabulation. Differences in the amount of synovial fluid in joint effusion vs. duration of manifestation were analyzed using the Kruskal–Wallis test. Multiple logistic regression analysis was performed to analyze the factors contributing to joint effusion. Results: Manifestation duration was significantly longer when joint effusion was not recognized (p < 0.05). Arthralgia and deformation of the articular disc were associated with a high risk of joint effusion (p < 0.05). Conclusions: The results suggest that joint effusion on magnetic resonance imaging was more readily observed when the manifestation duration was short, and that arthralgia and deformation of the articular disc were related to a higher risk of joint effusion.
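A hedged sketch of how a multiple logistic regression of this kind could be set up with statsmodels; the data file and column names are illustrative assumptions, not the authors' actual variables:

```python
# Hypothetical setup for the multiple logistic regression described above.
# File name and column names are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tmj_mri_findings.csv")   # assumed data layout
model = smf.logit(
    "joint_effusion ~ arthralgia + disc_deformation + duration_months",
    data=df,
).fit()
print(model.summary())   # odds ratios: exp(model.params)
```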

14 pages, 2540 KiB  
Article
Assessing the Design of Interactive Radial Data Visualizations for Mobile Devices
by Ana Svalina, Jesenka Pibernik, Jurica Dolić and Lidija Mandić
J. Imaging 2023, 9(5), 100; https://doi.org/10.3390/jimaging9050100 - 14 May 2023
Viewed by 1970
Abstract
The growing use of mobile devices in daily life has led to an increased demand for the display of large amounts of data. In response, radial visualizations have emerged as a popular type of visualization in mobile applications due to their visual appeal. However, previous research has highlighted issues with these visualizations, namely misinterpretation due to their column length and angles. This study aims to provide guidelines for designing interactive visualizations on mobile devices and new evaluation methods based on the results of an empirical study. The perception of four types of circular visualizations on mobile devices was assessed through user interaction. All four types of circular visualizations were found to be suitable for use within mobile activity tracking applications, with no statistically significant difference in responses by type of visualization or interaction. However, distinguishing characteristics of each visualization type were revealed depending on the category that is in focus (memorability, readability, understanding, enjoyment, and engagement). The research outcomes provide guidelines for designing interactive radial visualizations on mobile devices, enhance the user experience, and introduce new evaluation methods. The study's results have significant implications for the design of visualizations on mobile devices, particularly in activity tracking applications.

18 pages, 5435 KiB  
Article
Future Prediction of Shuttlecock Trajectory in Badminton Using Player’s Information
by Yuka Nokihara, Ryo Hachiuma, Ryosuke Hori and Hideo Saito
J. Imaging 2023, 9(5), 99; https://doi.org/10.3390/jimaging9050099 - 11 May 2023
Cited by 4 | Viewed by 4513
Abstract
Video analysis has become an essential aspect of net sports, such as badminton. Accurately predicting the future trajectory of balls and shuttlecocks can significantly benefit players by enhancing their performance and enabling them to devise effective game strategies. This paper aims to analyze data to provide players with an advantage in the fast-paced rallies of badminton matches. The paper delves into the innovative task of predicting future shuttlecock trajectories in badminton match videos and presents a method that takes into account both the shuttlecock position and the positions and postures of the players. In the experiments, players were extracted from the match video, their postures were analyzed, and a time-series model was trained. The results indicate that the proposed method improved accuracy by 13% compared to methods that solely used shuttlecock position information as input, and by 8.4% compared to methods that employed both shuttlecock and player position information as input.
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
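As a rough illustration of the kind of model described (not the authors' architecture), here is a sketch of a time-series predictor that consumes the shuttlecock position concatenated with player pose keypoints; all dimensions and names are assumptions:

```python
# Minimal PyTorch sketch (assumed dimensions): an LSTM that encodes a rally
# history of shuttle position + two players' pose keypoints and regresses a
# short horizon of future shuttle positions.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, pose_dim=2 * 17 * 2, hidden=128, horizon=10):
        super().__init__()
        in_dim = 2 + pose_dim            # shuttle (x, y) + 2 players x 17 joints
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * 2)   # future (x, y) per step
        self.horizon = horizon

    def forward(self, seq):              # seq: (batch, time, in_dim)
        _, (h, _) = self.encoder(seq)
        return self.head(h[-1]).view(-1, self.horizon, 2)

model = TrajectoryLSTM()
future = model(torch.randn(4, 30, 2 + 68))   # 30 past frames -> 10 future points
```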

30 pages, 45741 KiB  
Article
Multispectral Satellite Image Analysis for Computing Vegetation Indices by R in the Khartoum Region of Sudan, Northeast Africa
by Polina Lemenkova and Olivier Debeir
J. Imaging 2023, 9(5), 98; https://doi.org/10.3390/jimaging9050098 - 11 May 2023
Cited by 10 | Viewed by 3082
Abstract
Desertification is one of the most destructive climate-related issues in the Sudan–Sahel region of Africa. As the assessment of desertification is possible by satellite image analysis using vegetation indices (VIs), this study reports on the technical advantages and capabilities of scripting the 'raster' and 'terra' R-language packages for computing the VIs. The test area includes the region of the confluence between the Blue and White Niles at Khartoum, Sudan, northeast Africa, and Landsat 8–9 OLI/TIRS images taken in 2013, 2018 and 2022 were chosen as test datasets. The VIs used here are robust indicators of plant greenness and, combined with vegetation coverage, are essential parameters for environmental analytics. Five VIs were calculated to compare both the status and dynamics of vegetation through the differences between the images collected within the nine-year span. Using scripts for computing and visualising the VIs over Sudan reveals previously unreported vegetation patterns and climate–vegetation relationships. The ability of the R packages 'raster' and 'terra' to process spatial data was enhanced through scripting to automate image analysis and mapping, and choosing Sudan for the case study presents new perspectives for image processing.
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
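The paper itself scripts the R packages 'raster' and 'terra'; for orientation, an analogous NDVI computation is sketched below in Python with rasterio, assuming a stacked Landsat 8/9 OLI scene (band 4 = red, band 5 = NIR) and an illustrative file name:

```python
# Sketch of an NDVI computation analogous to the paper's R workflow.
# The file path is an assumption; Landsat 8/9 OLI band 4 = red, band 5 = NIR.
import numpy as np
import rasterio

with rasterio.open("LC08_Khartoum_2022.tif") as src:   # assumed stacked bands
    red = src.read(4).astype("float64")
    nir = src.read(5).astype("float64")

with np.errstate(invalid="ignore", divide="ignore"):
    ndvi = (nir - red) / (nir + red)    # NDVI in [-1, 1]; NaN where both are 0

print("mean NDVI:", np.nanmean(ndvi))
```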

14 pages, 5432 KiB  
Article
Structural Features of the Fragments from Cast Iron Cauldrons of the Medieval Golden Horde: Neutron Tomography Data
by Bulat Bakirov, Veronica Smirnova, Sergey Kichanov, Eugenia Shaykhutdinova, Mikhail Murashev, Denis Kozlenko and Ayrat Sitdikov
J. Imaging 2023, 9(5), 97; https://doi.org/10.3390/jimaging9050097 - 11 May 2023
Viewed by 1675
Abstract
The spatial arrangement of the internal pores inside several fragments of ancient cast iron cauldrons related to the medieval Golden Horde period was studied using the neutron tomography method. The high neutron penetration into a cast iron material provides sufficient data for detailed analysis of the three-dimensional imaging data. The size, elongation, and orientation distributions of the observed internal pores were obtained. As discussed, the imaging and quantitative analytical data are considered structural markers for the location of cast iron foundries, as well as a feature of the medieval casting process.
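A hedged sketch of how pore size and elongation statistics of this kind can be extracted from a binarized tomography volume with scikit-image; the input file and the elongation descriptor are assumptions for illustration:

```python
# Hedged sketch: per-pore size and a simple elongation descriptor from a
# binarized 3D tomography volume (input file name is an assumption).
import numpy as np
from skimage import measure

pores = np.load("cauldron_volume_binary.npy")   # assumed: True = pore voxel
labels = measure.label(pores)                   # connected-component labelling
for region in measure.regionprops(labels):
    size = region.area                          # pore volume in voxels
    eigvals = region.inertia_tensor_eigvals     # descending inertia eigenvalues
    elongation = np.sqrt(eigvals[0] / max(eigvals[-1], 1e-12))  # assumed metric
    print(size, elongation)
```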

23 pages, 3140 KiB  
Article
Face Aging by Explainable Conditional Adversarial Autoencoders
by Christos Korgialas, Evangelia Pantraki, Angeliki Bolari, Martha Sotiroudi and Constantine Kotropoulos
J. Imaging 2023, 9(5), 96; https://doi.org/10.3390/jimaging9050096 - 10 May 2023
Cited by 3 | Viewed by 2652
Abstract
This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, the Conditional Adversarial Autoencoder (CAAE). The proposed framework, xAI-CAAE, couples CAAE with explainable Artificial Intelligence (xAI) methods, such as saliency maps or Shapley additive explanations (SHAP), to provide corrective feedback from the discriminator to the generator. xAI-guided training aims to supplement this feedback with explanations that provide a “reason” for the discriminator's decision. Moreover, Local Interpretable Model-agnostic Explanations (LIME) are leveraged to provide explanations for the face areas that most influence the decision of a pre-trained age classifier. To the best of our knowledge, xAI methods are utilized in the context of face aging for the first time. A thorough qualitative and quantitative evaluation demonstrates that the incorporation of the xAI systems contributes significantly to the generation of more realistic age-progressed and regressed images.
(This article belongs to the Section Computer Vision and Pattern Recognition)
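The LIME step mentioned above follows the standard lime-for-images workflow; a minimal sketch, in which the face image and the age classifier are stand-in assumptions:

```python
# Minimal LIME-for-images sketch: which regions of a face drive an age
# classifier's decision. face_image and classifier_fn are placeholders.
import numpy as np
from lime import lime_image

def classifier_fn(batch):
    # stand-in for the pre-trained age classifier: fake class probabilities
    return np.tile([0.3, 0.7], (len(batch), 1))

face_image = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)  # placeholder

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face_image, classifier_fn, top_labels=1, num_samples=200
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)  # mask highlights the most influential face regions
```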

15 pages, 37806 KiB  
Review
Mammography Datasets for Neural Networks—Survey
by Adam Mračko, Lucia Vanovčanová and Ivan Cimrák
J. Imaging 2023, 9(5), 95; https://doi.org/10.3390/jimaging9050095 - 10 May 2023
Cited by 2 | Viewed by 8152
Abstract
Deep neural networks have gained popularity in the field of mammography. Data play an integral role in training these models, as training requires a large amount of data to capture the general relationship between the model's input and output. Open-access databases are the most accessible source of mammography data for training neural networks. Our work conducts a comprehensive survey of mammography databases that contain images with defined abnormal areas of interest. The survey covers databases such as INbreast, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), the OPTIMAM Medical Image Database (OMI-DB), and the Mammographic Image Analysis Society Digital Mammogram Database (MIAS). Additionally, we survey recent studies that have utilized these databases in conjunction with neural networks and the results they have achieved. From these databases, it is possible to obtain at least 3801 unique images with 4125 described findings from approximately 1842 patients. The number of patients with important findings can be increased to approximately 14,474, depending on the type of agreement with the OPTIMAM team. Furthermore, we describe the annotation process for mammography images to enhance the understanding of the information gained from these datasets.
(This article belongs to the Topic Medical Image Analysis)

17 pages, 8379 KiB  
Review
Angiosarcoma of the Breast: Overview of Current Data and Multimodal Imaging Findings
by Marco Conti, Francesca Morciano, Claudia Rossati, Elisabetta Gori, Paolo Belli, Francesca Fornasa, Giovanna Romanucci and Rossella Rella
J. Imaging 2023, 9(5), 94; https://doi.org/10.3390/jimaging9050094 - 30 Apr 2023
Cited by 6 | Viewed by 5746
Abstract
Angiosarcoma of the breast is a rare breast cancer, which can arise de novo (primary breast angiosarcoma, PBA) or as a secondary malignancy (secondary breast angiosarcoma, SBA) as a result of a biological insult. In the latter case, it is usually diagnosed in patients with a previous history of radiation therapy following a conserving treatment for breast cancer. Over the years, the advances in early diagnosis and treatment of breast cancer, with increasing use of breast-conserving surgery and radiation therapy (instead of radical mastectomy), brought about an increased incidence of the secondary type. PBA and SBA have different clinical presentations and often represent a diagnostic challenge due to the nonspecific imaging findings. The purpose of this paper is to review and describe the radiological features of breast angiosarcoma, both in conventional and advanced imaging, to guide radiologists in the diagnosis and management of this rare tumor.

21 pages, 11132 KiB  
Article
The Effectiveness of Pan-Sharpening Algorithms on Different Land Cover Types in GeoEye-1 Satellite Images
by Emanuele Alcaras and Claudio Parente
J. Imaging 2023, 9(5), 93; https://doi.org/10.3390/jimaging9050093 - 30 Apr 2023
Cited by 4 | Viewed by 2383
Abstract
In recent years, the demand for very high geometric resolution satellite images has increased significantly. Pan-sharpening techniques, which belong to the family of data fusion techniques, increase the geometric resolution of multispectral images using panchromatic imagery of the same scene. However, choosing a suitable pan-sharpening algorithm is not trivial: several exist, but none is universally recognized as the best for every type of sensor, and they can provide different results depending on the investigated scene. This article focuses on the latter aspect: analyzing pan-sharpening algorithms in relation to different land covers. A dataset of GeoEye-1 images is selected, from which four study areas (frames) are extracted: one natural, one rural, one urban and one semi-urban. The type of each study area is determined by the quantity of vegetation it contains, based on the normalized difference vegetation index (NDVI). Nine pan-sharpening methods are applied to each frame, and the resulting pan-sharpened images are compared by means of spectral and spatial quality indicators. Multicriteria analysis permits the definition of the best-performing method for each specific area, as well as the most suitable one given the co-presence of different land covers in the analyzed scene. The Brovey transformation fast method supplies the best results among those analyzed in this study.
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
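For orientation, the classic Brovey transform fuses each upsampled multispectral band with the panchromatic band through an intensity ratio; a minimal numpy sketch of the standard three-band formulation (not necessarily the exact variant tested in the paper) follows:

```python
# Standard three-band Brovey transform: each multispectral band is rescaled
# by the ratio of the panchromatic band to the summed intensity. Assumes
# co-registered, upsampled float arrays of identical shape.
import numpy as np

def brovey(r, g, b, pan, eps=1e-12):
    total = r + g + b + eps            # eps avoids division by zero
    return (r * pan / total, g * pan / total, b * pan / total)
```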

12 pages, 1283 KiB  
Article
Quantifiable Measures of Abdominal Wall Motion for Quality Assessment of Cine-MRI Slices in Detection of Abdominal Adhesions
by Bastiaan A. W. van den Beukel, Bram de Wilde, Frank Joosten, Harry van Goor, Wulphert Venderink, Henkjan J. Huisman and Richard P. G. ten Broek
J. Imaging 2023, 9(5), 92; https://doi.org/10.3390/jimaging9050092 - 30 Apr 2023
Viewed by 2174
Abstract
Abdominal adhesions present a diagnostic challenge, and classic imaging modalities can miss their presence. Cine-MRI, which records visceral sliding during patient-controlled breathing, has proven useful in detecting and mapping adhesions. However, patient movements can affect the accuracy of these images, and there is no standardized algorithm for defining sufficiently high-quality images. This study aims to develop a biomarker for patient movements and to determine which patient-related factors influence movement during cine-MRI. The included patients underwent cine-MRI to detect adhesions for chronic abdominal complaints; data were collected from electronic patient files and radiologic reports. Ninety cine-MRI slices were assessed for quality using a five-point scale quantifying amplitude, frequency, and slope, from which an image-processing algorithm was developed. The resulting biomarkers closely correlated with the qualitative assessments, with an amplitude of 6.5 mm used to distinguish between sufficient- and insufficient-quality slices. In multivariable analysis, the amplitude of movement was influenced by age, sex, length, and the presence of a stoma. Unfortunately, none of these factors is modifiable, so strategies for mitigating their impact may be challenging. This study highlights the utility of the developed biomarker in evaluating image quality and providing useful feedback for clinicians. Future studies could improve diagnostic quality by implementing automated quality criteria during cine-MRI.
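The resulting quality gate is simple to state in code; a minimal sketch using the 6.5 mm amplitude threshold reported above (the direction of the cutoff, insufficient breathing motion below the threshold, is an assumption based on the abstract):

```python
# Minimal sketch of the amplitude-based quality gate implied above.
def slice_quality(amplitude_mm: float, threshold_mm: float = 6.5) -> str:
    """Label a cine-MRI slice via the abdominal-wall motion amplitude biomarker."""
    return "sufficient" if amplitude_mm >= threshold_mm else "insufficient"

print(slice_quality(8.2))   # sufficient
print(slice_quality(4.1))   # insufficient
```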

18 pages, 2165 KiB  
Article
Real-Time Machine Learning-Based Driver Drowsiness Detection Using Visual Features
by Yaman Albadawi, Aneesa AlRedhaei and Maen Takruri
J. Imaging 2023, 9(5), 91; https://doi.org/10.3390/jimaging9050091 - 29 Apr 2023
Cited by 21 | Viewed by 17248
Abstract
Drowsiness-related car accidents continue to have a significant effect on road safety. Many of these accidents can be prevented by alerting drivers once they start feeling drowsy. This work presents a non-invasive system for real-time driver drowsiness detection using visual features. These features are extracted from videos obtained by a camera installed on the dashboard. The proposed system uses facial landmark and face mesh detectors to locate the regions of interest, where mouth aspect ratio, eye aspect ratio, and head pose features are extracted and fed to three different classifiers: random forest, sequential neural network, and linear support vector machine. Evaluations of the proposed system on the National Tsing Hua University driver drowsiness detection dataset showed that it can successfully detect and alert drowsy drivers with an accuracy of up to 99%.
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)
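The eye aspect ratio cue mentioned above is commonly computed in the standard Soukupová and Čech formulation over six eye landmarks; a minimal sketch:

```python
# Standard eye aspect ratio (EAR) over six eye landmarks p1..p6:
# EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||).
import numpy as np

def eye_aspect_ratio(p):
    """p: (6, 2) landmark array; EAR drops toward 0 as the eye closes."""
    v1 = np.linalg.norm(p[1] - p[5])   # vertical distances
    v2 = np.linalg.norm(p[2] - p[4])
    h = np.linalg.norm(p[0] - p[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)
```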

11 pages, 12061 KiB  
Article
Big-Volume SliceGAN for Improving a Synthetic 3D Microstructure Image of Additive-Manufactured TYPE 316L Steel
by Keiya Sugiura, Toshio Ogawa, Yoshitaka Adachi, Fei Sun, Asuka Suzuki, Akinori Yamanaka, Nobuo Nakada, Takuya Ishimoto, Takayoshi Nakano and Yuichiro Koizumi
J. Imaging 2023, 9(5), 90; https://doi.org/10.3390/jimaging9050090 - 29 Apr 2023
Cited by 5 | Viewed by 2738
Abstract
A modified SliceGAN architecture was proposed to generate high-quality synthetic three-dimensional (3D) microstructure images of TYPE 316L material manufactured through additive methods. The quality of the resulting 3D image was evaluated using an auto-correlation function, and it was discovered that maintaining a high resolution while doubling the training image size was crucial for creating a more realistic synthetic 3D image. To meet this requirement, a modified 3D image generator and critic architecture was developed within the SliceGAN framework.
(This article belongs to the Topic Computer Vision and Image Processing)
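A hedged sketch of the kind of two-point autocorrelation commonly used to judge synthetic microstructures, computed with FFTs on a binary phase-indicator volume (periodic boundaries assumed; not necessarily the authors' exact estimator):

```python
# Two-point autocorrelation S2 of a binary phase indicator, via the
# Wiener-Khinchin relation (FFT of the field times its conjugate).
import numpy as np

def two_point_autocorrelation(phase):
    """phase: binary 3D array (1 = phase of interest). Returns normalized S2."""
    f = np.fft.fftn(phase.astype("float64"))
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size
    return np.fft.fftshift(s2)   # zero lag moved to the array centre
```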

12 pages, 874 KiB  
Article
On the Generalization of Deep Learning Models in Video Deepfake Detection
by Davide Alessandro Coccomini, Roberto Caldelli, Fabrizio Falchi and Claudio Gennaro
J. Imaging 2023, 9(5), 89; https://doi.org/10.3390/jimaging9050089 - 29 Apr 2023
Cited by 9 | Viewed by 3016
Abstract
The increasing use of deep learning techniques to manipulate images and videos, commonly referred to as “deepfakes”, is making it more challenging to differentiate between real and fake content. While various deepfake detection systems have been developed, they often struggle to detect deepfakes in real-world situations. In particular, these methods are often unable to effectively distinguish images or videos modified using novel techniques not present in the training set. In this study, we analyze different deep learning architectures in an attempt to understand which is better at generalizing the concept of a deepfake. According to our results, Convolutional Neural Networks (CNNs) seem to be more capable of storing specific anomalies and thus excel on datasets with a limited number of elements and manipulation methodologies. The Vision Transformer, conversely, is more effective when trained with more varied datasets, achieving greater generalization capability than the other methods analysed. Finally, the Swin Transformer appears to be a good alternative for an attention-based method in a more limited data regime and performs very well in cross-dataset scenarios. All the analysed architectures seem to look at deepfakes differently, but since generalization capability is essential in a real-world environment, the attention-based architectures provided superior performance in the experiments carried out.
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
