J. Imaging, Volume 8, Issue 7 (July 2022) – 29 articles

Cover Story (view full-size image): The state of the art for examining lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. New image-guided systems provide guidance for bronchoscope navigation but offer no assistance for guiding EBUS localization. We propose a method for registering a patient’s chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. Patient results demonstrate the method’s effectiveness. We also demonstrate the method’s use in an image-guided system designed for guiding bronchoscope navigation and EBUS localization. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
27 pages, 4902 KiB  
Review
Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery
by Mitchell Doughty, Nilesh R. Ghugre and Graham A. Wright
J. Imaging 2022, 8(7), 203; https://doi.org/10.3390/jimaging8070203 - 20 Jul 2022
Cited by 36 | Viewed by 9465
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial surgeries (n=8). For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly directly superimposed with the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges—perception, ease of use, context, interaction, and occlusion—remain to be addressed prior to widespread adoption of OST-HMD-led surgical navigation. Full article

25 pages, 101469 KiB  
Article
StainCUT: Stain Normalization with Contrastive Learning
by José Carlos Gutiérrez Pérez, Daniel Otero Baguer and Peter Maass
J. Imaging 2022, 8(7), 202; https://doi.org/10.3390/jimaging8070202 - 20 Jul 2022
Cited by 7 | Viewed by 3819
Abstract
In recent years, numerous deep-learning approaches have been developed for the analysis of histopathology Whole Slide Images (WSI). A recurrent issue is the lack of generalization ability of a model that has been trained with images of one laboratory and then used to analyze images of a different laboratory. This occurs mainly due to the use of different scanners, laboratory procedures, and staining variations. This can produce strong color differences, which change not only the characteristics of the image, such as the contrast, brightness, and saturation, but also create more complex style variations. In this paper, we present a deep-learning solution based on contrastive learning to transfer from one staining style to another: StainCUT. This method eliminates the need to choose a reference frame and does not need paired images with different staining to learn the mapping between the stain distributions. Additionally, it does not rely on the CycleGAN approach, which makes the method efficient in terms of memory consumption and running time. We evaluate the model using two datasets that consist of the same specimens digitized with two different scanners. We also apply it as a preprocessing step for the semantic segmentation of metastases in lymph nodes. The model was trained on data from one of the laboratories and evaluated on data from another. The results validate the hypothesis that stain normalization indeed improves the performance of the model. Finally, we also investigate and compare the application of the stain normalization step during the training of the model and at inference. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)

26 pages, 6867 KiB  
Article
Quantification of Sub-Pixel Dynamics in High-Speed Neutron Imaging
by Martin L. Wissink, Todd J. Toops, Derek A. Splitter, Eric J. Nafziger, Charles E. A. Finney, Hassina Z. Bilheux, Louis J. Santodonato and Yuxuan Zhang
J. Imaging 2022, 8(7), 201; https://doi.org/10.3390/jimaging8070201 - 18 Jul 2022
Cited by 1 | Viewed by 2095
Abstract
The high penetration depth of neutrons through many metals and other common materials makes neutron imaging an attractive method for non-destructively probing the internal structure and dynamics of objects or systems that may not be accessible by conventional means, such as X-ray or optical imaging. While neutron imaging has been demonstrated to achieve a spatial resolution below 10 μm and temporal resolution below 10 μs, the relatively low flux of neutron sources and the limitations of existing neutron detectors have, until now, dictated that these cannot be achieved simultaneously, which substantially restricts the applicability of neutron imaging to many fields of research that could otherwise benefit from its unique capabilities. In this work, we present an attenuation modeling approach to the quantification of sub-pixel dynamics in cyclic ensemble neutron image sequences of an automotive gasoline direct injector at a 5 μs time scale with a spatial noise floor on the order of 5 μm. Full article
(This article belongs to the Special Issue Computational Methods for Neutron Imaging)
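Attenuation modeling of this kind builds on the Beer–Lambert law, which relates transmitted neutron intensity to the material path length along the beam. A minimal sketch under that assumption (the coefficient value below is illustrative, not the paper's calibration):

```python
import numpy as np

# Beer-Lambert attenuation: a beam passing through a path length d (mm) of
# material with attenuation coefficient mu (1/mm) is reduced to I = I0*exp(-mu*d).
def transmission(mu: float, d: np.ndarray) -> np.ndarray:
    return np.exp(-mu * d)

# Inverting the model recovers an effective path length per pixel from the
# measured transmission ratio I/I0 -- the quantity such fits estimate.
def path_length(mu: float, t: np.ndarray) -> np.ndarray:
    return -np.log(t) / mu

# Example: 5 mm of material with mu = 0.3/mm transmits about 22% of the beam.
print(transmission(0.3, np.array([5.0])))  # ~[0.2231]
```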

11 pages, 603 KiB  
Article
A New LBP Variant: Corner Rhombus Shape LBP (CRSLBP)
by Ibtissam Al Saidi, Mohammed Rziza and Johan Debayle
J. Imaging 2022, 8(7), 200; https://doi.org/10.3390/jimaging8070200 - 17 Jul 2022
Cited by 5 | Viewed by 1892
Abstract
The local binary pattern is a straightforward, dependable, and effective method for extracting relevant local information from images. However, because it only uses sign information in the local region, the local binary pattern (LBP) is ineffective at capturing discriminating characteristics. Furthermore, most LBP variants select a region with one specific center pixel to fill all neighborhoods. In this paper, a new LBP variant is proposed for texture classification, known as corner rhombus-shape LBP (CRSLBP). In the CRSLBP approach, we first use three methods to threshold the pixel’s neighbors and center to obtain four center pixels, using sign and magnitude information with respect to a chosen region of an even block. This helps determine not just the relationship between the neighbors and the pixel center but also between the center and the neighbor pixels of the neighborhood center pixels. We evaluated the performance of our descriptors using four challenging texture databases: Outex (TC10, TC12), Brodatz, KTH-TIPSb2, and UMD. Various extensive experiments were performed that demonstrated the effectiveness and robustness of our descriptor in comparison with the available state of the art (SOTA). Full article
(This article belongs to the Special Issue The Present and the Future of Imaging)
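For readers unfamiliar with the baseline this variant extends, below is a minimal sketch of the classic 8-neighbour LBP code computation (the standard operator, not CRSLBP itself, whose rhombus-shaped, four-centre thresholding differs):

```python
import numpy as np

def lbp_8(img: np.ndarray) -> np.ndarray:
    """Classic 3x3 local binary pattern: threshold the 8 neighbours against
    the centre pixel and pack the resulting sign bits into one byte."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]  # centre pixels (interior of the image)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise neighbours
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit  # sign information only
    return code  # histogram these 8-bit codes to form a texture descriptor
```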

14 pages, 13709 KiB  
Article
Force Estimation during Cell Migration Using Mathematical Modelling
by Fengwei Yang, Chandrasekhar Venkataraman, Sai Gu, Vanessa Styles and Anotida Madzvamuse
J. Imaging 2022, 8(7), 199; https://doi.org/10.3390/jimaging8070199 - 15 Jul 2022
Cited by 1 | Viewed by 2321
Abstract
Cell migration is essential for physiological, pathological and biomedical processes such as embryogenesis, wound healing, immune response, cancer metastasis, tumour invasion and inflammation. In light of this, quantifying mechanical properties during the process of cell migration is of great interest in the experimental sciences, yet few theoretical approaches in this direction have been studied. In this work, we propose a theoretical and computational approach based on the optimal control of geometric partial differential equations to estimate cell membrane forces associated with cell polarisation during migration. Specifically, cell membrane forces are inferred or estimated by fitting a mathematical model to a sequence of images, allowing us to capture the dynamics of the cell migration. Our approach offers a robust and accurate framework to compute geometric mechanical membrane forces associated with cell polarisation during migration and also yields geometric information of independent interest; we illustrate one such example that involves quantifying cell proliferation levels, which are associated with cell division, cell fusion or cell death. Full article
(This article belongs to the Topic Medical Image Analysis)

18 pages, 4972 KiB  
Review
Multiple Papillomas of the Breast: A Review of Current Evidence and Challenges
by Rossella Rella, Giovanna Romanucci, Damiano Arciuolo, Assunta Scaldaferri, Enida Bufi, Sebastiano Croce, Andrea Caulo and Oscar Tommasini
J. Imaging 2022, 8(7), 198; https://doi.org/10.3390/jimaging8070198 - 13 Jul 2022
Cited by 2 | Viewed by 6575
Abstract
Objectives: To conduct a review of evidence about papillomatosis/multiple papillomas (MP), its clinical and imaging presentation, the association between MP and malignancy and the management strategies that follow. Methods: A computerized literature search using PubMed and Google Scholar was performed up to January 2021 with the following search strategy: “papilloma” OR “intraductal papilloma” OR “intraductal papillary neoplasms” OR “papillomatosis” OR “papillary lesion” AND “breast”. Two authors independently conducted the search, screening and extraction of data from the eligible studies. Results: Of the 1881 articles identified, 29 articles met the inclusion criteria. The most common breast imaging methods (mammography, ultrasound) showed few specific signs of MP, and evidence about magnetic resonance imaging was weak. Regarding the association between MP and malignancy, the risk of underestimation by biopsy methods and the frequent coexistence of MP and other high-risk lesions need to be taken into consideration. Results about the risk of developing breast carcinoma in patients affected by MP were inconsistent. Conclusions: MP is a challenge for all breast specialists, and familiarity with its features is required to make the correct diagnosis. Further studies are needed to evaluate the factors to take into account when planning management, time of follow-up and imaging methods. Full article

20 pages, 5094 KiB  
Article
Clinically Inspired Skin Lesion Classification through the Detection of Dermoscopic Criteria for Basal Cell Carcinoma
by Carmen Serrano, Manuel Lazo, Amalia Serrano, Tomás Toledo-Pastrana, Rubén Barros-Tornay and Begoña Acha
J. Imaging 2022, 8(7), 197; https://doi.org/10.3390/jimaging8070197 - 12 Jul 2022
Cited by 14 | Viewed by 2952
Abstract
Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. There are many benign lesions that can be confused with these types of cancers, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced in a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. Results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color co-occurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, a sensitivity of 0.99, specificity of 0.94 and accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion has been proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis. Full article
(This article belongs to the Topic Medical Image Analysis)
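The color co-occurrence matrix proposed in the paper builds on the classic grey-level co-occurrence matrix (GLCM). A minimal GLCM sketch with one standard Haralick feature (the paper's color-space variant is not reproduced here):

```python
import numpy as np

def glcm(img: np.ndarray, dy: int, dx: int, levels: int = 8) -> np.ndarray:
    """Grey-level co-occurrence matrix for one displacement (dy, dx),
    assuming an 8-bit single-channel image."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)  # quantize
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    return m / m.sum()                        # joint probability matrix

def contrast(p: np.ndarray) -> float:
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())    # classic GLCM contrast feature
```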

19 pages, 3350 KiB  
Review
Advances in Digital Holographic Interferometry
by Viktor Petrov, Anastsiya Pogoda, Vladimir Sementin, Alexander Sevryugin, Egor Shalymov, Dmitrii Venediktov and Vladimir Venediktov
J. Imaging 2022, 8(7), 196; https://doi.org/10.3390/jimaging8070196 - 12 Jul 2022
Cited by 11 | Viewed by 3834
Abstract
Holographic interferometry is a well-established field of science and optical engineering. It has a half-century history of successful implementation as the solution to numerous technical tasks and problems. However, fast progress in digital and computer holography has promoted it to a new level of possibilities and has opened up brand-new fields of application. In this review paper, we consider some of these new techniques and applications. Full article
(This article belongs to the Special Issue Digital Holography: Development and Application)

10 pages, 1018 KiB  
Article
Segmentation of Pancreatic Subregions in Computed Tomography Images
by Sehrish Javed, Touseef Ahmad Qureshi, Zengtian Deng, Ashley Wachsman, Yaniv Raphael, Srinivas Gaddam, Yibin Xie, Stephen Jacob Pandol and Debiao Li
J. Imaging 2022, 8(7), 195; https://doi.org/10.3390/jimaging8070195 - 12 Jul 2022
Cited by 4 | Viewed by 2382
Abstract
The accurate segmentation of pancreatic subregions (head, body, and tail) in CT images provides an opportunity to examine the local morphological and textural changes in the pancreas. Quantifying such changes aids in understanding the spatial heterogeneity of the pancreas and assists in the diagnosis and treatment planning of pancreatic cancer. Manual outlining of pancreatic subregions is tedious, time-consuming, and prone to subjective inconsistency. This paper presents a multistage anatomy-guided framework for accurate and automatic 3D segmentation of pancreatic subregions in CT images. Using the delineated pancreas, two soft-label maps were estimated for subregional segmentation—one by training a fully supervised naïve Bayes model that considers the length and volumetric proportions of each subregional structure based on their anatomical arrangement, and the other by using the conventional deep learning U-Net architecture for 3D segmentation. The U-Net model then estimates the joint probability of the two maps and performs optimal segmentation of subregions. Model performance was assessed using three datasets of contrast-enhanced abdominal CT scans: one public NIH dataset of the healthy pancreas, and two datasets, D1 and D2 (one each for the pre-cancerous and cancerous pancreas). The model demonstrated excellent performance during multifold cross-validation using the NIH dataset and external validation using D1 and D2. To the best of our knowledge, this is the first automated model for the segmentation of pancreatic subregions in CT images. A dataset consisting of reference anatomical labels for subregions in all images of the NIH dataset has also been established. Full article
(This article belongs to the Special Issue Intelligent Strategies for Medical Image Analysis)
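As one way to picture the fusion step, the sketch below combines two per-voxel soft-label maps under a naive independence assumption; this is an illustration only, not the paper's exact U-Net-based estimator:

```python
import numpy as np

def fuse_soft_labels(p_bayes: np.ndarray, p_unet: np.ndarray) -> np.ndarray:
    """Combine two soft-label maps of shape (..., n_classes) by a per-voxel
    product of class probabilities, renormalized over classes."""
    joint = p_bayes * p_unet
    joint /= joint.sum(axis=-1, keepdims=True) + 1e-12
    return joint.argmax(axis=-1)  # final subregion label per voxel
```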

16 pages, 3349 KiB  
Article
Cardiac Disease Classification Using Two-Dimensional Thickness and Few-Shot Learning Based on Magnetic Resonance Imaging Image Segmentation
by Adi Wibowo, Pandji Triadyaksa, Aris Sugiharto, Eko Adi Sarwoko, Fajar Agung Nugroho, Hideo Arai and Masateru Kawakubo
J. Imaging 2022, 8(7), 194; https://doi.org/10.3390/jimaging8070194 - 11 Jul 2022
Cited by 9 | Viewed by 2973
Abstract
Cardiac cine magnetic resonance imaging (MRI) is a widely used technique for the noninvasive assessment of cardiac functions. Deep neural networks have achieved considerable progress in overcoming various challenges in cine MRI analysis. However, deep learning models cannot be used for classification because limited cine MRI data are available. To overcome this problem, features are typically handcrafted from the cine image settings and combined with other clinical features in a classical machine learning approach, ensuring the model fits the MRI device settings and image parameters required in the analysis. In this study, a novel method was proposed for classifying heart disease (cardiomyopathy patient groups) using only segmented output maps. In the encoder–decoder network, the fully convolutional EfficientNetB5-UNet was modified to perform the semantic segmentation of the MRI image slices. A two-dimensional thickness algorithm was used to combine the segmentation outputs into 2D representations of the end-diastole (ED) and end-systole (ES) cardiac volumes. The thickness images were subsequently used for classification with a few-shot model using an adaptive subspace classifier. Model performance was verified by applying the model to the 2017 Medical Image Computing and Computer-Assisted Intervention (MICCAI) dataset. High segmentation performance was achieved: the average Dice coefficients were 96.24% (ED) and 89.92% (ES) for the left ventricle (LV); the values for the right ventricle (RV) were 92.90% (ED) and 86.92% (ES); the values for the myocardium were 88.90% (ED) and 90.48% (ES). An accuracy score of 92% was achieved in the classification of the various cardiomyopathy groups without clinical features. A novel rapid analysis approach was thus proposed for heart disease diagnosis, especially for cardiomyopathy conditions, using cine MRI based on segmented output maps. Full article
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
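The Dice coefficients reported above measure the overlap between predicted and reference masks; a minimal sketch of the standard definition:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```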

16 pages, 562 KiB  
Article
Dynamic Label Assignment for Object Detection by Combining Predicted IoUs and Anchor IoUs
by Tianxiao Zhang, Bo Luo, Ajay Sharda and Guanghui Wang
J. Imaging 2022, 8(7), 193; https://doi.org/10.3390/jimaging8070193 - 11 Jul 2022
Cited by 9 | Viewed by 2571
Abstract
Label assignment plays a significant role in modern object detection models. Detection models may yield totally different performances with different label assignment strategies. For anchor-based detection models, the IoU (Intersection over Union) threshold between the anchors and their corresponding ground truth bounding boxes is the key element, since the positive and negative samples are divided by the IoU threshold. Early object detectors simply utilize a fixed threshold for all training samples, while recent detection algorithms focus on adaptive thresholds based on the distribution of the IoUs to the ground truth boxes. In this paper, we introduce a simple yet effective approach to perform label assignment dynamically based on the training status with predictions. By introducing the predictions into label assignment, more high-quality samples with higher IoUs to the ground truth objects are selected as the positive samples, which could reduce the discrepancy between the classification scores and the IoU scores and generate more high-quality bounding boxes. Our approach shows improvements in the performance of the detection models with the adaptive label assignment algorithm and lower bounding box losses for those positive samples, indicating that more samples with higher-quality predicted boxes are selected as positives. Full article
(This article belongs to the Special Issue The Present and the Future of Imaging)
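The IoU quantity at the heart of the assignment strategy is the standard box overlap measure; a minimal sketch of IoU and a fixed-threshold assignment (the paper's dynamic scheme instead adapts the threshold to the training state, e.g. by also consulting predicted IoUs):

```python
import numpy as np

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = np.maximum(box_a[:2], box_b[:2])   # intersection top-left
    x2, y2 = np.minimum(box_a[2:], box_b[2:])   # intersection bottom-right
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def assign_fixed(anchor_ious: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Fixed-threshold label assignment: anchors with IoU >= t are positives."""
    return anchor_ious >= t
```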

34 pages, 5069 KiB  
Article
Sign and Human Action Detection Using Deep Learning
by Shivanarayna Dhulipala, Festus Fatai Adedoyin and Alessandro Bruno
J. Imaging 2022, 8(7), 192; https://doi.org/10.3390/jimaging8070192 - 11 Jul 2022
Cited by 15 | Viewed by 4006
Abstract
Human beings usually rely on communication to express their feelings and ideas and to solve disputes among themselves. A major component required for effective communication is language. Language can occur in different forms, including written symbols, gestures, and vocalizations. It is usually essential for all of the communicating parties to be fully conversant with a common language. However, to date this has not been the case between speech-impaired people who use sign language and people who use spoken languages. A number of different studies have pointed out significant gaps between these two groups which can limit the ease of communication. Therefore, this study aims to develop an efficient deep learning model that can be used to predict British Sign Language, in an attempt to narrow this communication gap between speech-impaired and non-speech-impaired people in the community. Two models were developed in this research, a CNN and an LSTM, and their performance was evaluated using a multi-class confusion matrix. The CNN model emerged with the highest performance, attaining training and testing accuracies of 98.8% and 97.4%, respectively. In addition, the model achieved average weighted precision and recall of 97% and 96%, respectively. On the other hand, the LSTM model’s performance was quite poor, with the maximum training and testing accuracies achieved being 49.4% and 48.7%, respectively. Our research concluded that the CNN model was the best for recognizing and determining British Sign Language. Full article

18 pages, 2393 KiB  
Article
Hand-Crafted and Learned Feature Aggregation for Visual Marble Tiles Screening
by George K. Sidiropoulos, Athanasios G. Ouzounis, George A. Papakostas, Anastasia Lampoglou, Ilias T. Sarafis, Andreas Stamkos and George Solakis
J. Imaging 2022, 8(7), 191; https://doi.org/10.3390/jimaging8070191 - 8 Jul 2022
Cited by 3 | Viewed by 1915
Abstract
An important factor in the successful marketing of natural ornamental rocks is providing sets of tiles with matching textures. The market price of the tiles is based on the aesthetics of the different quality classes and can change according to the varying needs of the market. The classification of the marble tiles is mainly performed manually by experienced workers, which can lead to misclassifications due to the subjectiveness of such a procedure, causing subsequent problems with the marketing of the product. In this paper, to automate the classification of the marble tiles, 24 hand-crafted texture descriptors and 20 Convolutional Neural Networks were evaluated towards creating aggregated descriptors resulting from the combination of one hand-crafted descriptor and one Convolutional Neural Network at a time. A marble tile dataset designed for this study was used for the evaluation process and was also released publicly to further enable research on similar studies (both on texture and dolomitic ornamental marble tile analysis). The best-performing feature descriptors were aggregated together in order to achieve an objective classification. The resulting model was embodied into an automatic screening machine designed and constructed as a part of this study. The experiments showed that the aggregation of VGG16 and SILTP provided the best results, with an AUC score of 0.9944. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

16 pages, 822 KiB  
Article
Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields
by Mahmoud Elmezain, Amena Mahmoud, Diana T. Mosa and Wael Said
J. Imaging 2022, 8(7), 190; https://doi.org/10.3390/jimaging8070190 - 8 Jul 2022
Cited by 16 | Viewed by 4114
Abstract
Because of the large variability in brain tumors, automating segmentation remains a difficult task. We propose an automated method to segment brain tumors by integrating the deep capsule network (CapsNet) and the latent-dynamic conditional random field (LDCRF). The method consists of three main processes to segment the brain tumor—pre-processing, segmentation, and post-processing. In pre-processing, the N4ITK process involves correcting each MR image’s bias field before normalizing the intensity. After that, image patches are used to train CapsNet during the segmentation process. Then, with the CapsNet parameters determined, we employ image slices from an axial view to learn the LDCRF-CapsNet. Finally, we use a simple thresholding method to correct the labels of some pixels and remove small 3D-connected regions from the segmentation outcomes. On the BRATS 2015 and BRATS 2021 datasets, we trained and evaluated our method and found that it outperforms or is competitive with state-of-the-art methods under comparable conditions. Full article

18 pages, 1548 KiB  
Article
Multimodal Registration for Image-Guided EBUS Bronchoscopy
by Xiaonan Zang, Wennan Zhao, Jennifer Toth, Rebecca Bascom and William Higgins
J. Imaging 2022, 8(7), 189; https://doi.org/10.3390/jimaging8070189 - 8 Jul 2022
Cited by 4 | Viewed by 2893
Abstract
The state-of-the-art procedure for examining the lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. The EBUS bronchoscope integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. During the procedure, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching a given node’s approximate vicinity, the physician probes the airway walls using EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node’s location. Unfortunately, it is well documented that EBUS is difficult to use. In addition, while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient’s chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results using lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s. In addition, the mean position and direction errors of registered sites were 2.2 mm and 11.8°, respectively. Sensitivity studies also show the method’s robustness to parameter variations. Lastly, we demonstrate the method’s use in an image-guided system designed for guiding both phases of EBUS bronchoscopy. Full article

12 pages, 14061 KiB  
Article
Efficient and Scalable Object Localization in 3D on Mobile Device
by Neetika Gupta and Naimul Mefraz Khan
J. Imaging 2022, 8(7), 188; https://doi.org/10.3390/jimaging8070188 - 8 Jul 2022
Cited by 5 | Viewed by 2304
Abstract
Two-Dimensional (2D) object detection has been an intensely discussed and researched field of computer vision. With numerous advancements made in the field over the years, we still need to identify a robust approach to efficiently conduct classification and localization of objects in our environment using just our mobile devices. Moreover, 2D object detection limits the overall understanding of the detected object and does not provide any additional information in terms of its size and position in the real world. This work proposes an object localization solution in Three Dimensions (3D) for mobile devices using a novel approach. The proposed method works by combining a 2D object detection Convolutional Neural Network (CNN) model with Augmented Reality (AR) technologies to recognize objects in the environment and determine their real-world coordinates. We leverage the in-built Simultaneous Localization and Mapping (SLAM) capability of Google’s ARCore to detect planes and obtain the camera information for generating cuboid proposals from an object’s 2D bounding box. The proposed method is fast and efficient for identifying everyday objects in real-world space and, unlike mobile offloading techniques, is well designed to work with the limited resources of a mobile device. Full article
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)

15 pages, 14037 KiB  
Article
PyPore3D: An Open Source Software Tool for Imaging Data Processing and Analysis of Porous and Multiphase Media
by Amal Aboulhassan, Francesco Brun, George Kourousias, Gabriele Lanzafame, Marco Voltolini, Adriano Contillo and Lucia Mancini
J. Imaging 2022, 8(7), 187; https://doi.org/10.3390/jimaging8070187 - 7 Jul 2022
Cited by 6 | Viewed by 4002
Abstract
In this work, we propose the software library PyPore3D, an open source solution for the data processing of large 3D/4D tomographic data sets. PyPore3D is based on the Pore3D core library, developed thanks to the collaboration between Elettra Sincrotrone (Trieste) and the University of Trieste (Italy). The Pore3D core library is built with a distinction between the User Interface and the backend filtering, segmentation, morphological processing, skeletonisation and analysis functions. The current Pore3D version relies on the closed source IDL framework to call the backend functions and enables simple scripting procedures for streamlined data processing. PyPore3D addresses this limitation by proposing a fully open source solution which provides Python wrappers to the Pore3D C library functions. The PyPore3D library allows users to fully use the Pore3D core library as an open source solution under Python and Jupyter Notebooks. PyPore3D both gets rid of all the intrinsic limitations of licensed platforms (e.g., closed source and export restrictions) and adds, when needed, the flexibility of being able to integrate scientific libraries available for Python (SciPy, TensorFlow, etc.). Full article
(This article belongs to the Special Issue The Present and the Future of Imaging)
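Wrapping a C library for Python is commonly done with ctypes; the sketch below shows the general pattern with hypothetical symbol and library names (the actual PyPore3D bindings and function signatures may differ):

```python
import ctypes
import numpy as np

# Hypothetical names for illustration only -- not the real PyPore3D symbols.
lib = ctypes.CDLL("libpore3d_demo.so")  # assumed shared-library name
u8p = ctypes.POINTER(ctypes.c_ubyte)
lib.p3d_median_filter.argtypes = [u8p, u8p,               # in/out buffers
                                  ctypes.c_int, ctypes.c_int, ctypes.c_int]

def median_filter(vol: np.ndarray) -> np.ndarray:
    """Call the (hypothetical) C filter on a uint8 volume of shape (x, y, z)."""
    out = np.empty_like(vol)
    lib.p3d_median_filter(vol.ctypes.data_as(u8p), out.ctypes.data_as(u8p),
                          *vol.shape)
    return out
```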

19 pages, 10872 KiB  
Article
Multi-View Learning for Material Classification
by Borhan Uddin Sumon, Damien Muselet, Sixiang Xu and Alain Trémeau
J. Imaging 2022, 8(7), 186; https://doi.org/10.3390/jimaging8070186 - 7 Jul 2022
Cited by 2 | Viewed by 2673
Abstract
Material classification is similar to texture classification and consists of predicting the material class of a surface in a color image, such as wood, metal, water, wool, or ceramic. It is very challenging because of the intra-class variability. Indeed, the visual appearance of a material is very sensitive to the acquisition conditions, such as the viewpoint or lighting conditions. Recent studies show that deep convolutional neural networks (CNNs) clearly outperform hand-crafted features in this context but suffer from a lack of data for training the models. In this paper, we propose two contributions to cope with this problem. First, we provide a new material dataset with a large range of acquisition conditions, so that CNNs trained on these data can provide features that adapt to the diverse appearances of the material samples encountered in the real world. Second, we leverage recent advances in multi-view learning methods to propose an original architecture designed to extract and combine features from several views of a single sample. We show that such multi-view CNNs significantly improve the performance of the classical alternatives for material classification. Full article
(This article belongs to the Special Issue Color Texture Classification)
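The core idea of such a multi-view architecture is to pool features extracted from each view into a single sample descriptor. A minimal sketch of view pooling over precomputed per-view feature vectors (the paper's exact fusion may differ):

```python
import numpy as np

def view_pool(view_features: np.ndarray, mode: str = "max") -> np.ndarray:
    """Fuse per-view CNN descriptors of shape (n_views, dim) into one vector
    by element-wise max or mean pooling across the views."""
    return (view_features.max(axis=0) if mode == "max"
            else view_features.mean(axis=0))

# e.g. 4 views of one material sample, 512-D backbone features per view
feats = np.random.rand(4, 512)
descriptor = view_pool(feats)  # single descriptor fed to the classifier
```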

18 pages, 1130 KiB  
Article
A Hybrid 3D-2D Image Registration Framework for Pedicle Screw Trajectory Registration between Intraoperative X-ray Image and Preoperative CT Image
by Roshan Ramakrishna Naik, Anitha Hoblidar, Shyamasunder N. Bhat, Nishanth Ampar and Raghuraj Kundangar
J. Imaging 2022, 8(7), 185; https://doi.org/10.3390/jimaging8070185 - 6 Jul 2022
Cited by 7 | Viewed by 3291
Abstract
Pedicle screw insertion is considered a complex surgery among orthopaedic surgeons. To prevent the postoperative complications associated with pedicle screw insertion, various types of image intensity registration-based navigation systems have been developed. These systems are computation-intensive, have a small capture range and have local maxima issues. On the other hand, deep learning-based techniques lack registration generalizability and have data dependency. To overcome these limitations, a patient-specific hybrid 3D-2D registration principled framework was designed to map a pedicle screw trajectory between an intraoperative X-ray image and a preoperative CT image. An anatomical landmark-based 3D-2D Iterative Control Point (ICP) registration was performed to register a pedicular marker pose between the X-ray images and the axial preoperative CT images. The registration framework was clinically validated by generating projection images possessing an optimal match with the intraoperative X-ray images at the corresponding control point registration. The effectiveness of the registered trajectory was evaluated in terms of displacement and directional errors after reprojecting its position on the 2D radiographic planes. The mean Euclidean distances of the head and tail ends of the reprojected trajectory from the actual trajectory in the AP and lateral planes were shown to be 0.6–0.8 mm and 0.5–1.6 mm, respectively. Similarly, the corresponding mean directional errors were found to be 4.9° and 2°. The mean trajectory length difference between the actual and registered trajectory was shown to be 2.67 mm. The approximate time required in the intraoperative environment to axially map the marker position for a single vertebra was found to be 3 min. Utilizing markerless registration techniques, the designed framework functions like a screw navigation tool and assures the quality of the surgery being performed by limiting the need for postoperative CT. Full article
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)

31 pages, 6546 KiB  
Article
Detecting Cocircular Subsets of a Spherical Set of Points
by Basel Ibrahim and Nahum Kiryati
J. Imaging 2022, 8(7), 184; https://doi.org/10.3390/jimaging8070184 - 5 Jul 2022
Cited by 2 | Viewed by 1943
Abstract
Given a spherical set of points, we consider the detection of cocircular subsets of the data. We distinguish great circles from small circles, and develop algorithms for detecting cocircularities of both types. The suggested approach is an extension of the Hough transform. We address the unique parameter-space quantization issues arising due to the spherical geometry, present quantization schemes, and evaluate the quantization-induced errors. We demonstrate the proposed algorithms by detecting cocircular cities and airports on Earth’s spherical surface. These results facilitate the detection of great and small circles in spherical images. Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images)
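For the great-circle case, the Hough-style idea is direct: a unit point p lies on the great circle with unit pole n exactly when n · p = 0, so each candidate pole accumulates votes from near-orthogonal points. A minimal sketch using one simple pole quantization (a Fibonacci lattice; the paper analyzes quantization schemes in depth):

```python
import numpy as np

def great_circle_hough(points: np.ndarray, n_poles: int = 5000,
                       tol: float = 0.01) -> np.ndarray:
    """points: (N, 3) array of unit vectors. Returns the pole of the
    best-supported great circle."""
    i = np.arange(n_poles)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle longitudes
    z = 1.0 - 2.0 * (i + 0.5) / n_poles           # uniform in z
    r = np.sqrt(1.0 - z * z)
    poles = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    # A point votes for a pole when |n . p| falls below the tolerance.
    votes = (np.abs(points @ poles.T) < tol).sum(axis=0)
    return poles[np.argmax(votes)]
```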

16 pages, 12282 KiB  
Article
Practical Application of Augmented/Mixed Reality Technologies in Surgery of Abdominal Cancer Patients
by Vladimir M. Ivanov, Anton M. Krivtsov, Sergey V. Strelkov, Anton Yu. Smirnov, Roman Yu. Shipov, Vladimir G. Grebenkov, Valery N. Rumyantsev, Igor S. Gheleznyak, Dmitry A. Surov, Michail S. Korzhuk and Valery S. Koskin
J. Imaging 2022, 8(7), 183; https://doi.org/10.3390/jimaging8070183 - 30 Jun 2022
Cited by 12 | Viewed by 3111
Abstract
The technology of augmented and mixed reality (AR/MR) is useful in various areas of modern surgery. We considered the use of augmented and mixed reality technologies as a method of preoperative planning and intraoperative navigation in abdominal cancer patients. Practical use of AR/MR raises a range of questions which demand suitable solutions. The difficulties and obstacles we encountered in the practical use of AR/MR are presented, along with the ways we chose to overcome them. The most demonstrative case is covered in detail. The three-dimensional anatomical model obtained from the CT scan needed to be rigidly attached to the patient’s body, and therefore an invasive approach was developed, using an orthopedic pin fixed to the pelvic bones. The pin is used both similarly to an X-ray contrast marker and as a marker for augmented reality. This solution made it possible not only to visualize the anatomical structures of the patient and the border zone of the tumor, but also to change the position of the patient during the operation. In addition, a noninvasive (skin-based) marking method was developed that allows the application of mixed and augmented reality during the operation. Both techniques were used (8 clinical cases) for preoperative planning and intraoperative navigation, which allowed surgeons to verify the radicality of the operation, to have visual control of all anatomical structures near the zone of interest, and to reduce the time of surgical intervention, thereby reducing the complication rate and improving the rehabilitation period. Full article
(This article belongs to the Topic Augmented and Mixed Reality)

17 pages, 3670 KiB  
Article
Deep Learning-Based Automatic Detection of Ships: An Experimental Study Using Satellite Images
by Krishna Patel, Chintan Bhatt and Pier Luigi Mazzeo
J. Imaging 2022, 8(7), 182; https://doi.org/10.3390/jimaging8070182 - 28 Jun 2022
Cited by 39 | Viewed by 6829
Abstract
The remote sensing surveillance of maritime areas represents an essential task for both security and environmental reasons. Recently, learning strategies belonging to the field of machine learning (ML) have become a niche of interest for the community of remote sensing. Specifically, a major challenge is the automatic classification of ships from satellite imagery, which is needed for traffic surveillance systems, protection against illegal fishing, control systems for oil discharge, and the monitoring of sea pollution. Deep learning (DL) is a branch of ML that has emerged in the last few years as a result of advancements in digital technology and data availability. DL has shown capacity and efficacy in tackling difficult learning tasks that were previously intractable. Specifically, DL methods, such as convolutional neural networks (CNNs), have been reported to be efficient in image detection and recognition applications. In this paper, we focused on the development of an automatic ship detection (ASD) approach by using DL methods for assessing the Airbus ship dataset (composed of about 40 K satellite images). The paper explores and analyzes the distinct variations of the YOLO algorithm for the detection of ships from satellite images. A comparison of different versions of the YOLO algorithm for ship detection, such as YOLOv3, YOLOv4, and YOLOv5, is presented, after training them on a personal computer with a large dataset of satellite images from the Airbus Ship Challenge and Shipsnet. The differences between the algorithms could be observed on the personal computer. We have confirmed that these algorithms can be used for effective ship detection from satellite images. The conclusion drawn from the conducted research is that the YOLOv5 object detection algorithm outperforms the other versions of the YOLO algorithm, reaching an accuracy of 99%, compared to 98% and 97% for YOLOv4 and YOLOv3, respectively. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
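For a sense of how such experiments are run, YOLOv5 exposes a one-line inference entry point through PyTorch Hub. A minimal sketch (network access and the ultralytics/yolov5 repository are assumed, and the image path is a placeholder; this does not reproduce the paper's training on the Airbus/Shipsnet data):

```python
import torch

# Minimal YOLOv5 inference via PyTorch Hub (downloads code and weights).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("satellite_tile.jpg")  # placeholder path to an image tile
detections = results.xyxy[0]           # (x1, y1, x2, y2, confidence, class)
print(detections)
```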

12 pages, 7206 KiB  
Article
Document Liveness Challenge Dataset (DLC-2021)
by Dmitry V. Polevoy, Irina V. Sigareva, Daria M. Ershova, Vladimir V. Arlazarov, Dmitry P. Nikolaev, Zuheng Ming, Muhammad Muzzamil Luqman and Jean-Christophe Burie
J. Imaging 2022, 8(7), 181; https://doi.org/10.3390/jimaging8070181 - 28 Jun 2022
Cited by 8 | Viewed by 5437
Abstract
Various government and commercial services, including, but not limited to, e-government, fintech, banking, and sharing economy services, widely use smartphones to simplify service access and user authorization. Many organizations involved in these areas use identity document analysis systems in order to improve user personal-data-input processes. The tasks of such systems are not only ID document data recognition and extraction but also fraud prevention by detecting document forgery or by checking whether the document is genuine. Modern systems of this kind are often expected to operate in unconstrained environments. A significant amount of research has been published on the topic of mobile ID document analysis, but the main difficulty for such research is the lack of public datasets due to the fact that the subject is protected by security requirements. In this paper, we present the DLC-2021 dataset, which consists of 1424 video clips captured in a wide range of real-world conditions, focused on tasks relating to ID document forensics. The novelty of the dataset is that it contains shots from video with color laminated mock ID documents, color unlaminated copies, grayscale unlaminated copies, and screen recaptures of the documents. The proposed dataset complies with the GDPR because it contains images of synthetic IDs with generated owner photos and artificial personal information. For the presented dataset, benchmark baselines are provided for tasks such as screen recapture detection and glare detection. The data presented are openly available in Zenodo. Full article
(This article belongs to the Special Issue Mobile Camera-Based Image and Video Processing)

18 pages, 4615 KiB  
Article
A Hybrid Clustering Method with a Filter Feature Selection for Hyperspectral Image Classification
by Junzhe Zhang
J. Imaging 2022, 8(7), 180; https://doi.org/10.3390/jimaging8070180 - 28 Jun 2022
Cited by 6 | Viewed by 1891
Abstract
Hyperspectral images (HSI) provide ample spectral information on land cover. Hybrid classification methods work well for HSI; however, how to select suitable similarity measures as kernels, with appropriate weights, for the hybrid classification of HSI is still under investigation. In this paper, a filter feature selection was designed to select the most representative features based on similarity measures. Then, the weights of the applicable similarity measures were computed based on the coefficients of variation (CVs) of the similarity measures. Implementing the similarity measures as kernels with weights in the K-means algorithm, a new hybrid changing-weight classification method with a filter feature selection (HCW-SSC) was developed. Standard spectral libraries, operative modular imaging spectrometer (OMIS) airborne HSI, airborne visible/infrared imaging spectrometer (AVIRIS) HSI, and Hyperion satellite HSI were selected to inspect the HCW-SSC method. The results showed that the HCW-SSC method has the highest overall accuracy and kappa coefficient (or F1 score) in all experiments (97.5% and 0.974 for standard spectral libraries, 93.21% and 0.9245 for OMIS, 79.24% and 0.8044 for AVIRIS, and 81.23% and 0.7234 for Hyperion) compared to the classification methods without feature selection (93.75% and 0.958 for standard spectral libraries, 88.27% and 0.8698 for OMIS, 73.12% and 0.7225 for AVIRIS, and 56.34% and 0.3623 for Hyperion) and the machine-learning method (68.27% and 0.6628 for AVIRIS, and 51.21% and 0.4255 for Hyperion). The experimental results demonstrate that the new hybrid method performs more effectively than the traditional hybrid method. This also sheds light on the importance of feature selection in HSI classification. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
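One plausible reading of the CV-based weighting is sketched below: each similarity measure's kernel is weighted by its coefficient of variation, normalized to sum to one (an assumption for illustration; the paper's exact scheme may differ):

```python
import numpy as np

def cv_weights(sim_matrices: list) -> np.ndarray:
    """Weight each similarity measure by its coefficient of variation
    (CV = std / mean), normalized so the weights sum to one."""
    cvs = np.array([m.std() / m.mean() for m in sim_matrices])
    return cvs / cvs.sum()

def hybrid_kernel(sim_matrices: list) -> np.ndarray:
    """Weighted combination of similarity kernels for use inside K-means."""
    w = cv_weights(sim_matrices)
    return sum(wi * m for wi, m in zip(w, sim_matrices))
```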

19 pages, 1713 KiB  
Article
Easy—Ensemble Augmented-Shot-Y-Shaped Learning: State-of-the-Art Few-Shot Classification with Simple Components
by Yassir Bendou, Yuqing Hu, Raphael Lafargue, Giulia Lioi, Bastien Pasdeloup, Stéphane Pateux and Vincent Gripon
J. Imaging 2022, 8(7), 179; https://doi.org/10.3390/jimaging8070179 - 24 Jun 2022
Cited by 28 | Viewed by 3688
Abstract
Few-shot classification aims at leveraging knowledge learned in a deep learning model, in order to obtain good classification performance on new problems, where only a few labeled samples per class are available. Recent years have seen a fair number of works in the field, each one introducing their own methodology. A frequent problem, though, is the use of suboptimally trained models as a first building block, leading to doubts about whether proposed approaches bring gains if applied to more sophisticated pretrained models. In this work, we propose a simple way to train such models, with the aim of reaching top performance on multiple standardized benchmarks in the field. This methodology offers a new baseline on which to propose (and fairly compare) new techniques or adapt existing ones. Full article

13 pages, 1337 KiB  
Article
Extra Proximal-Gradient Network with Learned Regularization for Image Compressive Sensing Reconstruction
by Qingchao Zhang, Xiaojing Ye and Yunmei Chen
J. Imaging 2022, 8(7), 178; https://doi.org/10.3390/jimaging8070178 - 23 Jun 2022
Cited by 3 | Viewed by 1771
Abstract
Learned optimization algorithms are promising approaches to inverse problems by leveraging advanced numerical optimization schemes and deep neural network techniques in machine learning. In this paper, we propose a novel deep neural network architecture imitating an extra proximal gradient algorithm to solve a general class of inverse problems with a focus on applications in image reconstruction. The proposed network features learned regularization that incorporates adaptive sparsification mappings, robust shrinkage selections, and nonlocal operators to improve solution quality. Numerical results demonstrate the improved efficiency and accuracy of the proposed network over several state-of-the-art methods on a variety of test problems. Full article
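The classical algorithm such networks unroll is the proximal gradient method; for the l1-regularized least-squares prototype it reduces to ISTA with a soft-thresholding prox. A minimal sketch (the paper's network replaces the fixed prox with learned regularization):

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # prox of tau*||.||_1

def ista(A: np.ndarray, b: np.ndarray, lam: float, n_iter: int = 200) -> np.ndarray:
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)        # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step
    return x
```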

15 pages, 4778 KiB  
Article
U-Net-Based Segmentation of Microscopic Images of Colorants and Simplification of Labeling in the Learning Process
by Ikumi Hirose, Mari Tsunomura, Masami Shishikura, Toru Ishii, Yuichiro Yoshimura, Keiko Ogawa-Ochiai and Norimichi Tsumura
J. Imaging 2022, 8(7), 177; https://doi.org/10.3390/jimaging8070177 - 23 Jun 2022
Cited by 4 | Viewed by 1916
Abstract
Colored product textures correspond to particle size distributions. The microscopic images of colorants must be divided into regions to determine the particle size distribution. The conventional method used for this process involves manually dividing images into areas, which may be inefficient. In this paper, we have overcome this issue by developing two different modified architectures of U-Net convolution neural networks to automatically determine the particle sizes. To develop these modified architectures, a significant amount of ground truth data must be prepared to train the U-Net, which is difficult for big data as the labeling is performed manually. Therefore, we also aim to reduce this process by using incomplete labeling data. The first objective of this study is to determine the accuracy of our modified U-Net architectures for this type of image. The second objective is to reduce the difficulty of preparing the ground truth data by testing the accuracy of training on incomplete labeling data. The results indicate that efficient segmentation can be realized using our modified U-Net architectures, and the generation of ground truth data can be simplified. This paper presents a preliminary study to improve the efficiency of determining particle size distributions with incomplete labeling data. Full article
(This article belongs to the Special Issue Intelligent Media Processing)

14 pages, 2850 KiB  
Article
High-Capacity Reversible Data Hiding in Encrypted Images with Flexible Restoration
by Eichi Arai and Shoko Imaizumi
J. Imaging 2022, 8(7), 176; https://doi.org/10.3390/jimaging8070176 - 21 Jun 2022
Cited by 5 | Viewed by 1861
Abstract
In this paper, we propose a novel reversible data hiding in encrypted images (RDH-EI) method that achieves the highest hiding capacity in the RDH-EI research field and full flexibility in the processing order without restrictions. In the previous work in this field, there exist two representative methods; one provides flexible processing with a high hiding capacity of 2.17 bpp, and the other achieves the highest hiding capacity of 2.46 bpp by using the BOWS-2 dataset. The latter method has critical restrictions on the processing order. We focus on the advantage of the former method and introduce two efficient algorithms for maximizing the hiding capacity. With these algorithms, the proposed method can predict each pixel value with higher accuracy and refine the embedding algorithm. Consequently, the hiding capacity is effectively enhanced to 2.50 bpp using the BOWS-2 dataset, and a series of processes can be freely conducted without considering any restrictions on the order between data hiding and encryption. In the same way, there are no restrictions on the processing order in the restoration process. Thus, the proposed method provides flexibility in the privileges requested by users. Experimental results show the effectiveness of the proposed method in terms of hiding capacity and marked-image quality. Full article
(This article belongs to the Special Issue Intelligent Media Processing)

10 pages, 1610 KiB  
Article
The Impacts of Vertical Off-Centring, Localiser Direction, Phantom Positioning and Tube Voltage on CT Number Accuracy: An Experimental Study
by Yazan Al-Hayek, Kelly Spuur, Rob Davidson, Christopher Hayre and Xiaoming Zheng
J. Imaging 2022, 8(7), 175; https://doi.org/10.3390/jimaging8070175 - 21 Jun 2022
Cited by 3 | Viewed by 1872
Abstract
Background: This study investigates the effects of vertical off-centring, localiser direction, tube voltage, and phantom positioning (supine and prone) on computed tomography (CT) numbers and radiation dose. Methods: An anthropomorphic phantom was scanned using a Discovery CT750 HD 128-slice (GE Healthcare) scanner at different tube voltages (80, 120, and 140 kVp). Images employing 0° and 180° localisers were acquired in the supine and prone positions for each vertical off-centring (±100, ±60, and ±30 mm from the iso-centre). CT numbers and the displayed volume CT dose index (CTDIvol) were recorded. The relationship between dose variation and CT number was investigated. Results: The maximum changes in CT number between the two phantom positions as a function of vertical off-centring were 34 HU for the upper thorax (0° localiser, 120 kVp), 43 HU for the mid thorax (180° localiser, 80 kVp), and 31 HU for the abdominal section (0° localiser, 80 kVp) in the prone position. A strong positive correlation was reported between the variation in dose and CT number (r = 0.969, p < 0.001; 95% CI (0.93, 0.99)). Conclusions: Patient positioning demands an approach with a high degree of accuracy, especially in cases where clinical decisions depend on CT number accuracy for tissue lesion characterisation. Full article
(This article belongs to the Section Medical Imaging)
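The reported r = 0.969 is a Pearson correlation between dose variation and CT number change; a minimal sketch of the computation with synthetic stand-in values (not the study's measurements):

```python
import numpy as np

# Synthetic illustration only -- these are not the study's data.
dose_variation = np.array([-12.0, -7.5, -3.0, 0.0, 4.0, 9.0])      # CTDIvol change
ct_number_shift = np.array([-30.0, -18.0, -9.0, 1.0, 12.0, 27.0])  # HU change

r = np.corrcoef(dose_variation, ct_number_shift)[0, 1]  # Pearson r
print(f"r = {r:.3f}")
```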
