J. Imaging, Volume 6, Issue 11 (November 2020) – 16 articles

Cover Story: Colour image daltonisation refers to the process of recolouring digital images in order to make them more readily interpretable for colour vision deficient observers. A previously developed method based on recolouring in the gradient domain followed by reintegration gave good results in both behavioural and psychometric experiments. The method was, however, computationally expensive and tended to produce visible halo artefacts. In the current paper, we propose (i) a new, simple colour-domain daltonisation method to use as an initial value for reintegration, and (ii) the use of linear anisotropic diffusion instead of the Poisson equation. Together, this results in improved computational efficiency and a significant reduction in visible halo artefacts.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
14 pages, 550 KiB  
Article
Musculoskeletal Images Classification for Detection of Fractures Using Transfer Learning
by Ibrahem Kandel, Mauro Castelli and Aleš Popovič
J. Imaging 2020, 6(11), 127; https://doi.org/10.3390/jimaging6110127 - 23 Nov 2020
Cited by 23 | Viewed by 4012
Abstract
The classification of musculoskeletal images can be very challenging, especially in the emergency room, where decisions must be made rapidly. The computer vision domain has gained increasing attention in recent years due to its achievements in image classification. The convolutional neural network (CNN) is one of the latest computer vision algorithms to achieve state-of-the-art results. A CNN requires an enormous number of images to be adequately trained, and these are always scarce in the medical field. Transfer learning is a technique for training a CNN with fewer images. In this paper, we study the appropriate method for classifying musculoskeletal images, comparing transfer learning with training from scratch. We applied six state-of-the-art architectures and compared their performance with transfer learning and with a network trained from scratch. Our results show that transfer learning increased model performance significantly and, additionally, made the models less prone to overfitting.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
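As a rough illustration of the transfer-learning setup the abstract describes (not the authors' code), the sketch below freezes a pretrained backbone and retrains only a new classification head. The DenseNet-121 choice, class count, and hyperparameters are our assumptions; the snippet assumes torchvision ≥ 0.13.

```python
# Hedged sketch: fine-tune a pretrained CNN for binary fracture classification.
# Backbone, learning rate, and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                        # freeze pretrained features
model.classifier = nn.Linear(model.classifier.in_features, 2)  # fracture / normal

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a mini-batch; only the new head is updated."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training from scratch, the comparison baseline in the paper, would correspond to `weights=None` with all parameters left trainable.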
15 pages, 5237 KiB  
Article
A Siamese Neural Network for Non-Invasive Baggage Re-Identification
by Pier Luigi Mazzeo, Christian Libetta, Paolo Spagnolo and Cosimo Distante
J. Imaging 2020, 6(11), 126; https://doi.org/10.3390/jimaging6110126 - 20 Nov 2020
Cited by 5 | Viewed by 3068
Abstract
Baggage travelling on a conveyor belt in the sterile area (the rear collector located after the check-in counters) often gets stuck due to traffic jams, mainly caused by incorrect entries from the check-in counters onto the collector belt. Capturing suitcase appearance on the Baggage Handling System (BHS) and at airport checkpoints, and re-identifying it, allows baggage to be handled more safely and quickly. In this paper, we propose a Siamese Neural Network-based model that estimates baggage similarity: given a set of training images of the same suitcase (taken under different conditions), the network predicts whether two input images belong to the same baggage identity. The proposed network learns discriminative features in order to measure the similarity between two different images of the same baggage identity, and it can easily be applied on top of different pre-trained backbones. We evaluate our model on a publicly available suitcase dataset, where it outperforms the latest state-of-the-art architecture in terms of accuracy.
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
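A minimal sketch of the Siamese idea described above, under our own assumptions (ResNet-18 branches with shared weights, 128-dimensional embeddings, contrastive loss); the paper's architecture and training details differ.

```python
# Hedged sketch: twin branches share one backbone; the distance between the
# two embeddings scores whether two images show the same suitcase.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SiameseNet(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any pre-trained backbone plugs in
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        e1, e2 = self.backbone(x1), self.backbone(x2)
        return F.pairwise_distance(e1, e2)          # small distance -> same baggage

def contrastive_loss(distance, same_label, margin: float = 1.0):
    # same_label: 1.0 for matching pairs, 0.0 for non-matching pairs
    return torch.mean(same_label * distance.pow(2) +
                      (1 - same_label) * F.relu(margin - distance).pow(2))
```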
14 pages, 2048 KiB  
Article
Lung Segmentation on High-Resolution Computerized Tomography Images Using Deep Learning: A Preliminary Step for Radiomics Studies
by Albert Comelli, Claudia Coronnello, Navdeep Dahiya, Viviana Benfante, Stefano Palmucci, Antonio Basile, Carlo Vancheri, Giorgio Russo, Anthony Yezzi and Alessandro Stefano
J. Imaging 2020, 6(11), 125; https://doi.org/10.3390/jimaging6110125 - 19 Nov 2020
Cited by 36 | Viewed by 4154
Abstract
Background: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach, applied to the parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology performed by healthcare operators in radiomics studies, where operator-independent segmentation methods must be used to correctly identify the target and, consequently, the texture-based prediction model. Methods: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation tasks in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset is composed of 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcome with the gold standard and in terms of their resource requirements. Results: E-Net can be used to obtain accurate (dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. Conclusions: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, in order to produce user-independent results.
(This article belongs to the Special Issue Radiomics and Texture Analysis in Medical Imaging)
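The dice similarity coefficient quoted in the results is a standard overlap measure between a predicted mask and the gold standard; a minimal NumPy version for binary masks follows.

```python
# Dice similarity coefficient for binary segmentation masks (generic sketch).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """2*|A∩B| / (|A|+|B|); returns 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0
```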
22 pages, 7311 KiB  
Article
Boron-Based Neutron Scintillator Screens for Neutron Imaging
by William Chuirazzi, Aaron Craft, Burkhard Schillinger, Steven Cool and Alessandro Tengattini
J. Imaging 2020, 6(11), 124; https://doi.org/10.3390/jimaging6110124 - 19 Nov 2020
Cited by 15 | Viewed by 3917
Abstract
In digital neutron imaging, the neutron scintillator screen is a limiting factor of spatial resolution and neutron capture efficiency and must be improved to enhance the capabilities of digital neutron imaging systems. Commonly used neutron scintillators are based on ⁶LiF and gadolinium oxysulfide neutron converters. This work explores boron-based neutron scintillators because ¹⁰B has a neutron absorption cross-section four times greater than that of ⁶Li, less energetic daughter products than Gd and ⁶Li, and lower γ-ray sensitivity than Gd. These factors all suggest that, although borated neutron scintillators may not produce as much light as ⁶Li-based screens, they may offer improved neutron statistics and spatial resolution. This work conducts a parametric study to determine the effects of various boron neutron converters, scintillator and converter particle sizes, converter-to-scintillator mix ratio, substrate materials, and sensor construction on image quality. The best-performing boron-based scintillator screens demonstrated an improvement in neutron detection efficiency when compared with a common ⁶LiF/ZnS scintillator, with a 125% increase in thermal neutron detection efficiency and a 67% increase in epithermal neutron detection efficiency. The spatial resolution of high-resolution borated scintillators was measured, and neutron tomography of a test object was successfully performed using some of the boron-based screens that exhibited the highest spatial resolution. For some applications, boron-based scintillators can be utilized to increase the performance of a digital neutron imaging system by reducing acquisition times and improving neutron statistics.
(This article belongs to the Special Issue Neutron Imaging)
15 pages, 10501 KiB  
Article
Enhanced Contactless Vital Sign Estimation from Real-Time Multimodal 3D Image Data
by Chen Zhang, Ingo Gebhart, Peter Kühmstedt, Maik Rosenberger and Gunther Notni
J. Imaging 2020, 6(11), 123; https://doi.org/10.3390/jimaging6110123 - 12 Nov 2020
Cited by 20 | Viewed by 3025
Abstract
The contactless estimation of vital signs using conventional color cameras and ambient light can be affected by motion artifacts and changes in ambient light. To address both of these problems, a multimodal 3D imaging system with irritation-free controlled illumination was developed in this work. In this system, real-time 3D imaging was combined with multispectral and thermal imaging. Based on the 3D image data, an efficient method was developed for the compensation of head motions, and novel approaches based on 3D regions of interest were proposed for the estimation of various vital signs from multispectral and thermal video data. The developed imaging system and algorithms were demonstrated on test subjects, delivering a proof of concept.
(This article belongs to the Special Issue 3D and Multimodal Image Acquisition Methods)
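As a hedged, simplified example of contactless pulse estimation (not the paper's 3D-ROI pipeline), one can average a region of interest per video frame, band-pass the resulting signal around plausible heart rates, and read off the dominant spectral peak.

```python
# Simplified camera-based pulse estimation from an ROI time series.
# Filter order and the 42-180 bpm band are our assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_means: np.ndarray, fps: float) -> float:
    """roi_means: 1D array of per-frame mean ROI intensity; returns bpm."""
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.0 / nyquist], btype="band")  # 0.7-3.0 Hz
    filtered = filtfilt(b, a, roi_means - roi_means.mean())
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0   # dominant frequency in bpm
```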
11 pages, 3478 KiB  
Article
Non-Destructive, Opto-Electronic Determination of the Freshness and Shrivel of Bell Pepper Fruits
by Bernhard Althaus and Michael Blanke
J. Imaging 2020, 6(11), 122; https://doi.org/10.3390/jimaging6110122 - 10 Nov 2020
Cited by 11 | Viewed by 4322
Abstract
(1) Objective: The objective of the present study was to identify suitable parameters for determining the (degree of) freshness of bell pepper fruit of three colors (yellow, red, and green) over a two-week period, including the occurrence of shrivel, using non-destructive real-time measurements. (2) Materials and methods: Surface glossiness was measured non-destructively with a luster sensor type CZ-H72 (Keyence Co., Osaka, Japan), a colorimeter, a spectrometer, and a profilometer type VR-5200 (Keyence) to obtain RGB images. (3) Results: During storage and shelf life, bell pepper fruit of initially 230–245 g lost 2.9–4.8 g FW per day at 17 °C and 55% rh. Shriveling started at 6–8% weight loss after 4–5 days and became progressively more pronounced. Glossiness decreased from 450–500 a.u. for fresh fruit without shrivel, to 280–310 a.u. for moderately shriveled fruit, to 80–90 a.u. for severely shriveled fruit, irrespective of color, against a background variation of <40 a.u. within the same color, e.g., light red and dark red. Non-invasive color measurements showed no decline in Lab values (chlorophyll content), irrespective of fruit color and degree of shrivel. RGB images, converted into false-color images, showed a concomitant increase in surface roughness (Sa) from ca. 2 µm for fresh, glossy fruit, to ca. 7 µm for moderately shriveled fruit, to ca. 24 µm for the severely shriveled, rough surfaces of stored pepper fruit, equivalent to a 12-fold increase in surface roughness. The light reflectance peak at 630–633 nm was universal, irrespective of fruit color and freshness. Hence, a freshness index is suggested based on (a) luster values ≥ 450 a.u., (b) Sa ≤ 2 µm, and (c) the difference in relative reflectance (in %) between 630 nm and 500 nm. The latter declined from ca. 40% for fresh red bell pepper, to ca. 32% after 6 days when shriveling had started, to ca. 21% after 12 days, but varied with fruit color. (4) Conclusions: Overall, color measurements were unsuitable for determining the freshness of bell pepper fruit, whereas the profilometer, luster sensor, and light reflectance spectra were suitable candidates for a novel opto-electronic approach to defining and parametrizing fruit freshness.
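The suggested freshness index can be expressed as a small rule-based check. The sketch below uses the thresholds quoted in the abstract; the function and parameter names are ours, and the ~40% reflectance-difference threshold applies to fresh red fruit and varies with color.

```python
# Rule-based freshness check assembled from the abstract's thresholds.
def is_fresh(luster_au: float, roughness_sa_um: float,
             refl_630nm: float, refl_500nm: float) -> bool:
    gloss_ok = luster_au >= 450            # a.u., luster sensor reading
    smooth_ok = roughness_sa_um <= 2.0     # Sa in micrometres, profilometer
    refl_diff = refl_630nm - refl_500nm    # relative reflectance difference in %
    return gloss_ok and smooth_ok and refl_diff >= 40.0  # ~40% for fresh red fruit
```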
40 pages, 558 KiB  
Review
Deep Learning in Selected Cancers’ Image Analysis—A Survey
by Taye Girma Debelee, Samuel Rahimeto Kebede, Friedhelm Schwenker and Zemene Matewos Shewarega
J. Imaging 2020, 6(11), 121; https://doi.org/10.3390/jimaging6110121 - 10 Nov 2020
Cited by 53 | Viewed by 8319
Abstract
Deep learning algorithms have become the first-choice approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied in almost all of the imaging modalities used for cervical and breast cancers, and in MRI for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art results in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning through freezing some layers of the deep learning network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has been studied mainly by researchers affiliated with academic and medical institutes in economically developed countries, while such studies have received far less attention in Africa, despite the dramatic rise in cancer risk on the continent.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
26 pages, 4408 KiB  
Article
Multi-View Hand-Hygiene Recognition for Food Safety
by Chengzhang Zhong, Amy R. Reibman, Hansel A. Mina and Amanda J. Deering
J. Imaging 2020, 6(11), 120; https://doi.org/10.3390/jimaging6110120 - 7 Nov 2020
Cited by 8 | Viewed by 6933
Abstract
A majority of foodborne illnesses result from inappropriate food handling practices. One proven practice for reducing pathogens is to perform effective hand hygiene before all stages of food handling. In this paper, we design a multi-camera system that uses video analytics to recognize hand-hygiene actions, with the goal of improving hand-hygiene effectiveness. Our proposed two-stage system processes untrimmed video from both egocentric and third-person cameras. In the first stage, a low-cost coarse classifier efficiently localizes the hand-hygiene period; in the second stage, more complex refinement classifiers recognize seven specific actions within that period. We demonstrate that our two-stage system has significantly lower computational requirements without a loss of recognition accuracy. Specifically, the computationally complex refinement classifiers process less than 68% of the untrimmed videos, and we anticipate further computational gains in videos that contain a larger fraction of non-hygiene actions. Our results demonstrate that a carefully designed video action recognition system can play an important role in improving hand hygiene for food safety.
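A schematic sketch of the two-stage gating described above, with placeholder classifiers standing in for the authors' models; the window size and clip handling are our simplifications.

```python
# Two-stage recognition sketch: a cheap coarse classifier gates the video, and
# the expensive refinement classifier runs only on detected hygiene segments.
def two_stage_recognition(frames, coarse_clf, refine_clf, window: int = 16):
    """frames: sequence of video frames; returns recognized hygiene actions."""
    actions = []
    for start in range(0, len(frames) - window + 1, window):
        clip = frames[start:start + window]
        if coarse_clf(clip):                   # stage 1: is this hand hygiene?
            actions.append(refine_clf(clip))   # stage 2: which of the 7 actions?
    return actions
```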

21 pages, 21884 KiB  
Article
A Portable Compact System for Laser Speckle Correlation Imaging of Artworks Using Projected Speckle Pattern
by Claudia Daffara and Elisa Marini
J. Imaging 2020, 6(11), 119; https://doi.org/10.3390/jimaging6110119 - 6 Nov 2020
Cited by 3 | Viewed by 4767
Abstract
Artworks have a layered structure subject to alterations caused by various factors. The monitoring of defects at the sub-millimeter scale may be performed by laser interferometric techniques. The aim of this work was to develop a compact system for performing laser speckle imaging in situ for effective mapping of subsurface defects in paintings. The device was designed to be versatile, with the possibility of optimizing performance through easy parameter adjustment. The system exploits a laser speckle pattern, generated through an optical diffuser and projected onto the artwork, together with image correlation techniques for the analysis of the speckle intensity pattern. A protocol for optimal measurement is suggested, based on calibration curves for tuning the mean speckle size in the acquired intensity pattern. The system was validated by analysing detachments in an ancient painting model, using a short-pulse thermal stimulus to induce a surface deformation field and standard decorrelation algorithms for speckle pattern matching. The device is equipped with a compact thermal camera to prevent any overheating during the stimulus phase. The developed system represents a valuable non-destructive tool for artwork diagnostics, allowing the monitoring of subsurface defects in paintings in out-of-laboratory environments.
(This article belongs to the Special Issue Fine Art Pattern Extraction and Recognition)
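A hedged sketch of the speckle decorrelation principle: compare local windows of two speckle images, acquired before and after the thermal stimulus, with a normalized correlation; windows that decorrelate indicate surface change such as a detachment. The window size and the simple block scan are our simplifications of the paper's standard decorrelation algorithms.

```python
# Block-wise speckle decorrelation map between two grayscale speckle images.
import numpy as np

def decorrelation_map(img_a: np.ndarray, img_b: np.ndarray, win: int = 16):
    """Returns a (h//win, w//win) map; values near 1 mean strong decorrelation."""
    h, w = img_a.shape
    out = np.zeros((h // win, w // win))
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            a = img_a[i:i + win, j:j + win].ravel().astype(float)
            b = img_b[i:i + win, j:j + win].ravel().astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            corr = (a @ b) / denom if denom else 0.0
            out[i // win, j // win] = 1.0 - corr   # high value = decorrelated
    return out
```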

11 pages, 7950 KiB  
Article
Neutron Radiography and Tomography of the Drying Process of Screed Samples
by Lorenz Kapral, Michael Zawisky and Hartmut Abele
J. Imaging 2020, 6(11), 118; https://doi.org/10.3390/jimaging6110118 - 5 Nov 2020
Cited by 1 | Viewed by 2295
Abstract
The moisture content of screed samples is an essential parameter in the construction industry, since screed must dry to a certain level of moisture content before it is ready for covering. The current methods of measuring moisture content comprise the 'calcium carbide method', the 'Darr method', and electrical sensor systems. This paper introduces neutron radiography (NR) and neutron tomography (NT) as new, non-destructive techniques for analysing the drying characteristics of screed. Our NR analyses evaluate the results of these established methods while offering a much higher spatial resolution of 200 μm, thereby facilitating a two- and three-dimensional understanding of screed's drying behaviour. Because of NR's exceptionally high sensitivity to the total cross-section of hydrogen, the precise moisture content of screed samples is obtainable, resulting in new observations.
(This article belongs to the Special Issue Neutron Imaging)
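A back-of-the-envelope illustration (our numbers, not the paper's) of why NR is so sensitive to moisture: neutron transmission through a slab follows the Beer-Lambert law, and water's thermal-neutron macroscopic cross-section of roughly 3.5 cm⁻¹ dwarfs that of most dry building materials.

```python
# Beer-Lambert transmission through a homogeneous slab; values illustrative.
import numpy as np

def transmission(sigma_per_cm: float, thickness_cm: float) -> float:
    """I/I0 for a slab with macroscopic cross-section sigma (1/cm)."""
    return float(np.exp(-sigma_per_cm * thickness_cm))

# Even a thin moist layer noticeably attenuates the beam:
print(transmission(3.5, 0.5))   # ~0.17 for 5 mm of water
```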

16 pages, 2168 KiB  
Article
A Model for Evaluating the Performance of a Multiple Keywords Spotting System for the Transcription of Historical Handwritten Documents
by Angelo Marcelli, Giuseppe De Gregorio and Adolfo Santoro
J. Imaging 2020, 6(11), 117; https://doi.org/10.3390/jimaging6110117 - 3 Nov 2020
Cited by 2 | Viewed by 1952
Abstract
This paper proposes a performance model for estimating the user time needed to transcribe small collections of handwritten documents using a keyword spotting system (KWS) that provides a number of possible transcriptions for each word image. The model assumes that only information obtained from a small training set is available, and it establishes the constraints on the performance measures required to reduce the transcription time with respect to the time needed by human experts. The model is complemented by a procedure for computing its parameters and, eventually, estimating the improvement in the time needed to achieve a complete and error-free transcription of the documents.
(This article belongs to the Special Issue Recent Advances in Historical Document Processing)

10 pages, 9038 KiB  
Article
Individualised Halo-Free Gradient-Domain Colour Image Daltonisation
by Ivar Farup
J. Imaging 2020, 6(11), 116; https://doi.org/10.3390/jimaging6110116 - 29 Oct 2020
Cited by 8 | Viewed by 2703
Abstract
Daltonisation refers to the recolouring of images such that details normally lost to colour vision deficient observers become visible, at the cost of introducing artificial colours. In a previous work, we presented a gradient-domain colour image daltonisation method that outperformed previously known methods in both behavioural and psychometric experiments. In the present paper, we improve the method by (i) finding a good first estimate of the daltonised image, thus reducing the computational time significantly, and (ii) introducing local linear anisotropic diffusion, thus effectively removing the halo artefacts. The method uses a colour vision deficiency simulation algorithm as an ingredient, so it can be applied to any colour vision deficiency and can even be individualised if the observer's exact colour vision is known.
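For orientation only, a heavily simplified daltonisation sketch in the spirit of the pipeline's first step: simulate the deficiency, measure the lost detail, and push it back into channels the observer can distinguish. The redistribution matrix is the classic Fidaner-style choice, and `simulate_cvd` stands in for any CVD simulation algorithm; the paper's gradient-domain method with anisotropic diffusion is substantially more sophisticated.

```python
# Naive error-redistribution daltonisation (not the paper's method).
import numpy as np

def naive_daltonise(img: np.ndarray, simulate_cvd) -> np.ndarray:
    """img: float RGB array in [0, 1]; simulate_cvd: RGB -> RGB simulation."""
    error = img - simulate_cvd(img)             # detail invisible to the observer
    redistribute = np.array([[0.0, 0.0, 0.0],   # shift lost red-channel contrast
                             [0.7, 1.0, 0.0],   # into green and blue
                             [0.7, 0.0, 1.0]])
    return np.clip(img + error @ redistribute.T, 0.0, 1.0)
```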
18 pages, 8460 KiB  
Article
Detecting Morphing Attacks through Face Geometry Features
by Stephanie Autherith and Cecilia Pasquini
J. Imaging 2020, 6(11), 115; https://doi.org/10.3390/jimaging6110115 - 29 Oct 2020
Cited by 10 | Viewed by 4728
Abstract
Face-morphing operations allow for the generation of digital faces that simultaneously carry the characteristics of two different subjects. It has been demonstrated that morphed faces strongly challenge face-verification systems, as they typically match two different identities. This poses serious security issues in machine-assisted border control applications and calls for techniques to automatically detect whether morphing operations have previously been applied to passport photos. While many proposed approaches analyze the suspect passport photo only, our work operates in a differential scenario, i.e., the passport photo is analyzed in conjunction with the probe image of the subject acquired at border control to verify that they correspond to the same identity. To this purpose, in this study, we analyze the locations of biologically meaningful facial landmarks identified in the two images, with the goal of capturing inconsistencies in the facial geometry introduced by the morphing process. We report the results of extensive experiments performed on images from various sources and under different experimental settings, showing that landmark locations detected through automated algorithms contain discriminative information for identifying pairs with morphed passport photos. The sensitivity of supervised classifiers to different compositions of the training and testing sets is also explored, together with the performance of different derived feature transformations.
(This article belongs to the Special Issue Image and Video Forensics)
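A minimal sketch of the differential, landmark-based idea, under our own simplifications: dlib's 68-point predictor stands in for the paper's exact toolchain, the model file must be downloaded separately, and each image is assumed to contain one detectable face.

```python
# Compare normalized landmark layouts of a passport photo and a probe image.
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def normalized_landmarks(image: np.ndarray) -> np.ndarray:
    """68 landmark coordinates, centered and scale-normalized."""
    face = detector(image)[0]                   # assumes one detectable face
    pts = np.array([(p.x, p.y) for p in predictor(image, face).parts()], float)
    pts -= pts.mean(axis=0)                     # remove translation
    return pts / np.linalg.norm(pts)            # remove scale

def geometry_distance(passport_img, probe_img) -> float:
    """Large values hint at geometry inconsistencies, e.g., from morphing."""
    return float(np.linalg.norm(normalized_landmarks(passport_img)
                                - normalized_landmarks(probe_img)))
```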

15 pages, 12348 KiB  
Article
Ensemble of ERDTs for Spectral–Spatial Classification of Hyperspectral Images Using MRS Object-Guided Morphological Profiles
by Alim Samat, Erzhu Li, Sicong Liu, Zelang Miao and Wei Wang
J. Imaging 2020, 6(11), 114; https://doi.org/10.3390/jimaging6110114 - 26 Oct 2020
Cited by 2 | Viewed by 2216
Abstract
In spectral-spatial classification of hyperspectral images, the performance of conventional morphological profiles (MPs), which use a sequence of structural elements (SEs) with predefined sizes and shapes, can be limited by a mismatch between those SEs and the sizes and shapes of real-world objects in an image. To overcome this limitation, this paper proposes object-guided morphological profiles (OMPs), which adopt multiresolution segmentation (MRS)-based objects as SEs for morphological closing and opening by geodesic reconstruction. Additionally, the ExtraTrees, bagging, adaptive boosting (AdaBoost), and MultiBoost ensemble versions of extremely randomized decision trees (ERDTs) are introduced and comparatively investigated for the spectral-spatial classification of hyperspectral images. Two hyperspectral benchmark images are used to validate the proposed approaches in terms of classification accuracy. The experimental results confirm the effectiveness of the proposed spatial feature extractors and ensemble classifiers.
(This article belongs to the Special Issue Advances in Image Feature Extraction and Selection)
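A hedged scikit-learn sketch of the ensemble side of the paper: extremely randomized trees, plus bagging and AdaBoost wrappers around a single extremely randomized tree, trained on stacked spectral + spatial features with one row per pixel. MultiBoost has no scikit-learn implementation and is omitted; the OMP feature extraction is assumed to have been done upstream, and the hyperparameters are ours.

```python
# Three ensemble variants of extremely randomized decision trees.
from sklearn.ensemble import (ExtraTreesClassifier, BaggingClassifier,
                              AdaBoostClassifier)
from sklearn.tree import ExtraTreeClassifier

ensembles = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=100),
    "Bagging-ERDT": BaggingClassifier(ExtraTreeClassifier(), n_estimators=100),
    "AdaBoost-ERDT": AdaBoostClassifier(ExtraTreeClassifier(), n_estimators=100),
}

def evaluate(X_train, y_train, X_test, y_test):
    """X_*: per-pixel stacked spectral + OMP features; y_*: class labels."""
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in ensembles.items()}
```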

12 pages, 1351 KiB  
Article
Fully 3D Active Surface with Machine Learning for PET Image Segmentation
by Albert Comelli
J. Imaging 2020, 6(11), 113; https://doi.org/10.3390/jimaging6110113 - 23 Oct 2020
Cited by 13 | Viewed by 3185
Abstract
In order to tackle three-dimensional tumor volume reconstruction from Positron Emission Tomography (PET) images, most existing algorithms rely on the segmentation of independent PET slices. To exploit the cross-slice information typically overlooked in these 2D implementations, I present an algorithm capable of achieving the volume reconstruction directly in 3D by leveraging an active surface algorithm. The evolution of this surface performs the segmentation of the whole stack of slices simultaneously and can handle changes in topology. Furthermore, no artificial stop condition is required, as the active surface naturally converges to a stable topology. In addition, I include a machine learning component to enhance the accuracy of the segmentation process. The latter consists of a forcing term, based on classification results from a discriminant analysis algorithm, which is included directly in the mathematical formulation of the energy function driving the surface evolution. It is worth noting that the training of such a component requires minimal data compared with more involved deep learning methods. Only eight patients (two lung, four head and neck, and two brain cancers) were used for training and testing the machine learning component, while fifty patients (10 lung, 25 head and neck, and 15 brain cancers) were used to test the full 3D reconstruction algorithm. Performance evaluation is based on the same dataset of patients discussed in my previous work, where the segmentation was performed using a 2D active contour. The results confirm that the active surface algorithm is superior to the active contour algorithm, outperforming the earlier approach in all the investigated anatomical regions, with a dice similarity coefficient of 90.47 ± 2.36% for lung cancer, 88.30 ± 2.89% for head and neck cancer, and 90.29 ± 2.52% for brain cancer. Based on the reported results, the migration to a 3D system yielded a practical benefit that justifies the effort of rewriting an existing 2D system for PET image segmentation.
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis)
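This is not the author's algorithm, but a related and readily available 3D active surface is scikit-image's morphological Chan-Vese, which accepts n-dimensional input and therefore evolves one surface through the whole PET stack at once. The hot-spot initialisation below is our assumption, and the `num_iter` argument requires scikit-image ≥ 0.19.

```python
# 3D active-surface segmentation of a PET volume via morphological Chan-Vese.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_volume(pet_volume: np.ndarray, iterations: int = 100) -> np.ndarray:
    """pet_volume: 3D array of PET intensities; returns a binary 3D mask."""
    init = np.zeros_like(pet_volume, dtype=np.int8)
    init[pet_volume > np.percentile(pet_volume, 95)] = 1   # crude hot-spot seed
    return morphological_chan_vese(pet_volume, num_iter=iterations,
                                   init_level_set=init, smoothing=2)
```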

19 pages, 1229 KiB  
Article
Learning Descriptors Invariance through Equivalence Relations within Manifold: A New Approach to Expression Invariant 3D Face Recognition
by Faisal R. Al-Osaimi
J. Imaging 2020, 6(11), 112; https://doi.org/10.3390/jimaging6110112 - 22 Oct 2020
Cited by 3 | Viewed by 1968
Abstract
This paper presents a unique approach to the dichotomy between useful and adverse variations of key-point descriptors, namely the identity and expression variations in the descriptor (feature) space. The descriptor variations are learned from training examples. Based on the labels of the training data, equivalence relations among the descriptors are established. Both types of descriptor variation are represented by a graph embedded in the descriptor manifold. Invariant recognition is then conducted as a graph search problem, and a heuristic graph search algorithm suitable for recognition under this setup was devised. The proposed approach was tested on the FRGC v2.0, Bosphorus, and 3D TEC datasets, and it was shown to enhance recognition performance under expression variations by considerable margins.
(This article belongs to the Special Issue 3D Human Understanding)
