
Biomedical Imaging & Instrumentation

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Biomedical Sensors".

Viewed by 37999

Editor


Prof. Dr. Luc Bidaut
Collection Editor
Independent Researcher, Lincoln LN6 7TS, UK
Interests: advanced biomedical imaging (modalities, processing and exploitation) and all ancillaries and applications thereof; preclinical imaging; 3D imaging, processing and visualisation; multimodality imaging, processing and visualisation; image-guided biomedical applications (diagnosis, therapy planning, monitoring, follow-up); virtual and augmented reality; high-performance computing and visualisation; telemedicine; biomedical image archival

Topical Collection Information

Dear Colleagues,

Modern healthcare relies ever more on technology, especially imaging in its many forms, to progress and bring tangible benefits to both patients and society. This Topical Collection of Sensors on “Biomedical Imaging and Instrumentation” will collect high-quality original contributions focusing on new or ongoing related technological developments at any stage of the translational pipeline, i.e., from the lab to the clinic. Of particular relevance are imaging modalities and sensors (e.g., all types of optical imaging and spectroscopy, ultrasound, photoacoustic, X-ray, CT, MR, SPECT, PET, charged particles) for preclinical and clinical applications (e.g., from micro scale to animals to humans, in oncology, cardiovascular, pulmonary, neuro, critical care, interventional and surgical settings), as well as related data and image processing, analysis and visualization (e.g., reconstruction, segmentation, radiomics, machine learning, 3D, AR/VR). Due to their already demonstrated potential, advanced approaches and instrumentation related to multimodality, hybrid and parametric imaging, image-guided therapy, theranostics, computational planning, and hadron therapy are especially welcome. Even though manuscripts describing actual (pre)clinical applications are considered to be of interest, all contributions will still need to exhibit significant technological content in order to comply with the overarching scope of this journal.

Prof. Dr. Luc Bidaut
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical imaging modalities
  • biomedical instrumentation
  • new sensors for biomedical applications
  • new imaging modalities or paradigms
  • clinical translation
  • disease characterization
  • therapy planning, delivery or monitoring
  • biomedical applications of ML/AI/DL (to diagnosis or therapy)

Published Papers (13 papers)

2024

22 pages, 7802 KiB  
Article
Study on Bionic Design and Tissue Manipulation of Breast Interventional Robot
by Weixi Zhang, Jiaxing Yu, Xiaoyang Yu, Yongde Zhang and Zhihui Men
Sensors 2024, 24(19), 6408; https://doi.org/10.3390/s24196408 - 3 Oct 2024
Viewed by 1009
Abstract
Minimally invasive interventional surgery is commonly used for diagnosing and treating breast cancer, but the high fluidity and deformability of breast tissue reduce intervention accuracy. This study proposes a bionic breast interventional robot that mimics the scorpion’s predation process, actively manipulating tissue deformation to control target displacement and enhance accuracy. The robot’s structure is designed using a modular method, and its kinematics and workspace are analyzed and solved. To address the nonlinear deformation of breast tissue, a hierarchical tissue method is proposed that simplifies the three-dimensional problem into a two-dimensional one. A two-dimensional tissue deformation solver based on the minimum energy method is established for quick resolution. The problem is treated as quasi-static, and the displacement relationship between external manipulation points and internal tissue targets is derived. The active tissue-manipulation method was simulated in MATLAB (R2019b) to verify its feasibility. Results show maximum errors of 1.7 mm for prostheses and 2.5 mm for in vitro tissues in the X and Y directions. This method improves intervention accuracy in breast surgery and offers a new solution for breast cancer diagnosis and treatment. Full article
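
To illustrate the kind of quasi-static, minimum-energy deformation solve described above, here is a minimal 2-D sketch: a spring-network stand-in with assumed geometry, stiffness, and node roles, not the authors' solver.

```python
# Minimal 2-D quasi-static deformation sketch (illustrative, not the paper's solver):
# tissue is modelled as a spring network, an external manipulation point is given a
# prescribed displacement, and equilibrium is found by minimising the elastic energy.
import numpy as np
from scipy.optimize import minimize

nx, ny, h = 5, 5, 10.0                                   # 5x5 node grid, 10 mm spacing (hypothetical)
rest = np.array([[i * h, j * h] for j in range(ny) for i in range(nx)], dtype=float)

edges = [(j * nx + i, j * nx + i + 1) for j in range(ny) for i in range(nx - 1)]
edges += [(j * nx + i, (j + 1) * nx + i) for j in range(ny - 1) for i in range(nx)]
k_spring = 1.0                                           # arbitrary stiffness
rest_len = {e: np.linalg.norm(rest[e[0]] - rest[e[1]]) for e in edges}

fixed = [j * nx for j in range(ny)]                      # clamped left edge (chest-wall side)
manip = 2 * nx + (nx - 1)                                # external manipulation point (right edge)
target = 2 * nx + 2                                      # internal "lesion" node to track
manip_disp = np.array([-3.0, 2.0])                       # prescribed push in mm (illustrative)

def energy(u_flat):
    u = u_flat.reshape(-1, 2).copy()
    u[fixed] = 0.0                                       # clamped nodes do not move
    u[manip] = manip_disp                                # prescribed manipulation displacement
    pos = rest + u
    e = 0.0
    for a, b in edges:
        stretch = np.linalg.norm(pos[a] - pos[b]) - rest_len[(a, b)]
        e += 0.5 * k_spring * stretch ** 2
    return e

res = minimize(energy, np.zeros(rest.size), method="L-BFGS-B")
u = res.x.reshape(-1, 2)
u[fixed], u[manip] = 0.0, manip_disp
print("target displacement (mm):", u[target])
```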

12 pages, 5542 KiB  
Article
Superconducting Self-Shielded and Zero-Boil-Off Magnetoencephalogram Systems: A Dry Phantom Evaluation
by Keita Tanaka, Akihiko Tsukahara, Hiroki Miyanaga, Shoji Tsunematsu, Takanori Kato, Yuji Matsubara and Hiromu Sakai
Sensors 2024, 24(18), 6044; https://doi.org/10.3390/s24186044 - 18 Sep 2024
Viewed by 554
Abstract
Magnetoencephalography (MEG) systems are advanced neuroimaging tools used to measure the magnetic fields produced by neuronal activity in the human brain. However, they require significant amounts of liquid helium to keep the superconducting quantum interference device (SQUID) sensors in a stable superconducting state. Additionally, MEG systems must be installed in a magnetically shielded room to minimize interference from external magnetic fields. We have developed an advanced MEG system that incorporates a superconducting magnetic shield and a zero-boil-off system. This system overcomes the typical limitations of traditional MEG systems, such as the frequent need for liquid helium refills and the spatial constraints imposed by magnetically shielded rooms. To validate the system, we conducted an evaluation using signal source estimation. This involved a phantom with 50 current sources of known location and magnitude under active zero-boil-off conditions. Our evaluations focused on the precision of the magnetic field distribution and the quantification of estimation errors. We achieved a consistent magnetic field distribution that matched the source current, maintaining an estimation error margin within 3.5 mm, regardless of the frequency of the signal source current. These findings affirm the practicality and efficacy of the system. Full article

2023

15 pages, 810 KiB  
Article
Planning of Medical Flexible Needle Motion in Effective Area of Clinical Puncture
by Shuai Feng, Shigang Wang, Wanxiong Jiang and Xueshan Gao
Sensors 2023, 23(2), 671; https://doi.org/10.3390/s23020671 - 6 Jan 2023
Cited by 2 | Viewed by 2113
Abstract
Lung cancer is the leading cause of cancer deaths worldwide. Although several diagnostic methods are available for lung nodule biopsy, they have limitations in terms of accuracy, safety, and invasiveness. Transbronchial needle aspiration (TBNA) is a common method for diagnosing and treating lung cancer in which a robot-assisted medical flexible needle moves along a curved three-dimensional trajectory, avoiding anatomical barriers to reach clinically meaningful targets. Inspired by the puncture angle between the needle tip and the vessel in venipuncture, we suggest that the orientation of the flexible needle's puncture path affects the cost of the puncture trajectory. We therefore propose an effective puncture region based on the optimal puncture direction, a strategy that imposes geometric constraints on the search space of puncture directions, and on this basis we focus on an improved implementation of RCS*. Planning in a TBNA-based lung environment was performed using the rapidly exploring random tree (RRT), resolution-complete search (RCS), and RCS* (a resolution-optimal version of RCS) within the effective puncture region. The experimental results show that the optimal puncture direction corresponding to the lowest-cost puncture trajectory is consistent across the three algorithms and that RCS* is more efficient for planning. The experiments verify the feasibility and practicality of the proposed minimum puncture angle and effective puncture region and facilitate further study of the puncture direction in flexible needle insertion. Full article
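
To illustrate the notion of an effective puncture region, here is a hedged sketch that restricts candidate insertion directions to a cone around an assumed optimal direction and keeps the lowest-cost candidate; the angle threshold and cost function are placeholders, not the paper's RCS* implementation.

```python
# Illustrative "effective puncture region": keep only candidate directions within a cone
# around the optimal puncture direction, then select the lowest-cost remaining candidate.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

optimal_dir = unit(np.array([1.0, 0.0, 0.0]))   # assumed optimal puncture direction
max_angle = np.deg2rad(30.0)                     # assumed half-angle of the effective region

rng = np.random.default_rng(0)
candidates = [unit(v) for v in rng.normal(size=(200, 3))]

def in_effective_region(d):
    return np.arccos(np.clip(np.dot(d, optimal_dir), -1.0, 1.0)) <= max_angle

def trajectory_cost(d):
    # placeholder cost: penalise deviation from the optimal direction
    return 1.0 - np.dot(d, optimal_dir)

feasible = [d for d in candidates if in_effective_region(d)]
best = min(feasible, key=trajectory_cost)
print(f"{len(feasible)} feasible directions; best cost = {trajectory_cost(best):.3f}")
```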

2022

20 pages, 4045 KiB  
Article
A Generic Pixel Pitch Calibration Method for Fundus Camera via Automated ROI Extraction
by Tengfei Long, Yi Xu, Haidong Zou, Lina Lu, Tianyi Yuan, Zhou Dong, Jiqun Dong, Xin Ke, Saiguang Ling and Yingyan Ma
Sensors 2022, 22(21), 8565; https://doi.org/10.3390/s22218565 - 7 Nov 2022
Cited by 11 | Viewed by 2630
Abstract
Pixel pitch calibration is an essential step in making the fundus structures in a fundus image quantitatively measurable, which is important for the diagnosis and treatment of many diseases, e.g., diabetes, arteriosclerosis, and hereditary optic atrophy. Conventional calibration approaches require the specific parameters of the fundus camera or several specially shot chessboard images, but these are generally not accessible, and the calibration results cannot be generalized to other cameras. Based on automated ROI (region of interest) and optic disc detection, the diameter ratio of ROI and optic disc (ROI–disc ratio) is quantitatively analyzed for a large number of fundus images. With prior knowledge of the average diameter of the optic disc in the fundus, the pixel pitch can be statistically estimated from a large number of fundus images captured by a specific camera, without the availability of chessboard images or detailed specifics of the fundus camera. Furthermore, for fundus cameras with a fixed field of view (FOV), the pixel pitch of a 45° FOV fundus image can be directly estimated from the automatically measured ROI diameter in pixels. The average ROI–disc ratio is approximately constant, i.e., 6.404 ± 0.619, according to 40,600 fundus images of 45° FOV captured by different cameras. In consequence, the pixel pitches of Canon CR2, Topcon NW400, Zeiss Visucam 200, and Newvision RetiCam 3100 cameras are estimated as 6.825 ± 0.666 μm, 6.625 ± 0.647 μm, 5.793 ± 0.565 μm, and 5.884 ± 0.574 μm, respectively. Compared with the manually measured pixel pitches based on the method of ISO 10940:2009, i.e., 6.897 μm, 6.807 μm, 5.693 μm, and 6.050 μm, respectively, the bias of the proposed method is less than 5%. Since our method does not require chessboard images or detailed camera specifics, fundus structures can be measured accurately from the pixel pitch obtained by this method without knowing the type and parameters of the camera. Full article
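
To make the reported relation concrete, here is a small worked sketch; the average optic-disc diameter used below is an assumed literature value, not a figure from the paper.

```python
# Worked sketch of the calibration relation for a 45° FOV image:
# pixel pitch ≈ (average optic-disc diameter) / (ROI diameter in pixels / ROI–disc ratio).
MEAN_DISC_DIAMETER_MM = 1.8      # assumed population-average optic-disc diameter (not from the paper)
ROI_DISC_RATIO = 6.404           # reported mean ROI–disc ratio for 45° FOV images

def pixel_pitch_um(roi_diameter_px: float) -> float:
    disc_diameter_px = roi_diameter_px / ROI_DISC_RATIO
    return MEAN_DISC_DIAMETER_MM * 1000.0 / disc_diameter_px   # micrometres per pixel

# e.g., an ROI spanning 1700 pixels (hypothetical measurement)
print(f"estimated pixel pitch: {pixel_pitch_um(1700):.3f} um")
```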

12 pages, 6526 KiB  
Article
Development of Intraoperative Near-Infrared Fluorescence Imaging System Using a Dual-CMOS Single Camera
by Janghoon Choi, Jun Geun Shin, Hyuk-Sang Kwon, Yoon-Oh Tak, Hyeong Ju Park, Jin-Chul Ahn, Joo Beom Eom, Youngseok Seo, Jin Woo Park, Yongdoo Choi and Jonghyun Eom
Sensors 2022, 22(15), 5597; https://doi.org/10.3390/s22155597 - 26 Jul 2022
Cited by 3 | Viewed by 2838
Abstract
We developed a single-camera-based near-infrared (NIR) fluorescence imaging device using indocyanine green (ICG) NIR fluorescence contrast agents for image-guided surgery. In general, fluorescence imaging systems that simultaneously provide color and NIR images use two cameras, which is disadvantageous because it enlarges the imaging head of the system. Recently, a single-camera-based NIR optical imaging device with quantum efficiency partially extended to the NIR region was developed to overcome this drawback. That system used an RGB_NIR filter on the camera sensor to provide color and NIR images simultaneously; however, the sensitivity and resolution of the NIR images are reduced by 1/4, and the exposure time and gain cannot be set individually when acquiring color and NIR images. To overcome these shortcomings, this study developed a compact fluorescence imaging system that uses a single camera with two complementary metal–oxide–semiconductor (CMOS) image sensors. Sensitivity and signal-to-background ratio were measured as functions of ICG concentration, exposure time, and camera gain to evaluate the performance of the imaging system. Finally, the clinical applicability of the system was confirmed through toxicity analysis of the light source and in vivo testing. Full article
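
As a minimal illustration of the signal-to-background ratio (SBR) evaluation mentioned above, here is a sketch using hypothetical ROI masks on a toy NIR frame; the mask definitions and values are assumptions, not the paper's protocol.

```python
# Minimal SBR sketch: mean intensity in an ICG target ROI divided by the mean intensity
# in a nearby background ROI, on a synthetic NIR frame.
import numpy as np

def signal_to_background(nir_frame: np.ndarray,
                         signal_mask: np.ndarray,
                         background_mask: np.ndarray) -> float:
    return float(nir_frame[signal_mask].mean() / nir_frame[background_mask].mean())

# toy example: a bright 20x20 target on a dim background
frame = np.full((100, 100), 50.0)
frame[40:60, 40:60] = 800.0
sig = np.zeros_like(frame, dtype=bool); sig[40:60, 40:60] = True
bg = np.zeros_like(frame, dtype=bool);  bg[0:20, 0:20] = True
print(f"SBR = {signal_to_background(frame, sig, bg):.1f}")   # -> 16.0
```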

16 pages, 2251 KiB  
Article
Form Factors as Potential Imaging Biomarkers to Differentiate Benign vs. Malignant Lung Lesions on CT Scans
by Francesco Bianconi, Isabella Palumbo, Mario Luca Fravolini, Maria Rondini, Matteo Minestrini, Giulia Pascoletti, Susanna Nuvoli, Angela Spanu, Michele Scialpi, Cynthia Aristei and Barbara Palumbo
Sensors 2022, 22(13), 5044; https://doi.org/10.3390/s22135044 - 4 Jul 2022
Cited by 10 | Viewed by 2640
Abstract
Indeterminate lung nodules detected on CT scans are common findings in clinical practice. Their correct assessment is critical, as early diagnosis of malignancy is crucial to maximise the treatment outcome. In this work, we evaluated the role of form factors as imaging biomarkers to differentiate benign vs. malignant lung lesions on CT scans. We tested a total of three conventional imaging features, six form factors, and two shape features for significant differences between benign and malignant lung lesions on CT scans. The study population consisted of 192 lung nodules from two independent datasets, containing 109 (38 benign, 71 malignant) and 83 (42 benign, 41 malignant) lung lesions, respectively. The standard of reference was either histological evaluation or stability on radiological follow-up. Statistical significance was determined via the Mann–Whitney U nonparametric test, and the ability of the form factors to discriminate a benign vs. a malignant lesion was assessed through multivariate prediction models based on Support Vector Machines. The univariate analysis returned four form factors (Angelidakis compactness and flatness, Kong flatness, and maximum projection sphericity) that were significantly different between the benign and malignant groups in both datasets. In particular, we found that the benign lesions were on average flatter than the malignant ones; conversely, the malignant ones were on average more compact (isotropic) than the benign ones. The multivariate prediction models showed that adding form factors to conventional imaging features improved the prediction accuracy by up to 14.5 percentage points. We conclude that form factors evaluated on lung nodules on CT scans can improve the differential diagnosis between benign and malignant lesions. Full article
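
A hedged sketch of the analysis pipeline described above, using synthetic placeholder features rather than the study data: per-feature Mann–Whitney U tests followed by an SVM on the combined feature set.

```python
# Sketch of the described analysis: Mann–Whitney U tests per feature between benign and
# malignant lesions, then an SVM on the combined features. All values are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_benign, n_malignant = 38, 71                      # sizes of the first dataset in the paper
# three illustrative features: two "conventional" ones and one "form factor"
X_benign = rng.normal([1.0, 0.0, 0.50], [0.3, 1.0, 0.08], size=(n_benign, 3))
X_malignant = rng.normal([1.4, 0.6, 0.62], [0.3, 1.0, 0.08], size=(n_malignant, 3))
X = np.vstack([X_benign, X_malignant])
y = np.r_[np.zeros(n_benign), np.ones(n_malignant)]

for i, name in enumerate(["conventional_1", "conventional_2", "form_factor"]):
    stat, p = mannwhitneyu(X_benign[:, i], X_malignant[:, i])
    print(f"{name}: U = {stat:.1f}, p = {p:.4f}")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```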

19 pages, 6507 KiB  
Article
Continual Learning Objective for Analyzing Complex Knowledge Representations
by Asad Mansoor Khan, Taimur Hassan, Muhammad Usman Akram, Norah Saleh Alghamdi and Naoufel Werghi
Sensors 2022, 22(4), 1667; https://doi.org/10.3390/s22041667 - 21 Feb 2022
Cited by 19 | Viewed by 3209
Abstract
Human beings tend to learn incrementally from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning can mimic such behavior to some extent, it suffers from catastrophic forgetting, whereby its performance on already learned tasks drops drastically while it learns new knowledge. Many researchers have proposed promising solutions to eliminate catastrophic forgetting during the knowledge distillation process. However, to the best of our knowledge, no work to date exploits the complex relationships between these solutions and utilizes them for effective learning that spans multiple datasets and even multiple domains. In this paper, we propose a continual learning objective that encompasses a mutual distillation loss to capture such complex relationships and allows deep learning models to effectively retain prior knowledge while adapting to new classes, new datasets, and even new applications. The proposed objective was rigorously tested on nine publicly available, multi-vendor, and multimodal datasets spanning three applications, achieving a top-1 accuracy of 0.9863 and an F1-score of 0.9930. Full article
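
The paper's objective is not reproduced here, but a generic distillation-based continual-learning loss of the kind alluded to can be sketched as follows; the hyperparameters alpha and temperature are assumptions, and this is a standard formulation rather than the proposed objective.

```python
# Generic continual-learning loss sketch in PyTorch: cross-entropy on the current task
# plus a KL term keeping the new model close to the old model's soft predictions.
import torch
import torch.nn.functional as F

def continual_loss(new_logits, old_logits, targets, alpha=0.5, temperature=2.0):
    ce = F.cross_entropy(new_logits, targets)                      # learn the new task
    kd = F.kl_div(
        F.log_softmax(new_logits / temperature, dim=1),
        F.softmax(old_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2                                           # retain old knowledge
    return (1 - alpha) * ce + alpha * kd

# toy usage with random logits for a 10-class problem
new_logits = torch.randn(8, 10, requires_grad=True)
old_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = continual_loss(new_logits, old_logits, targets)
loss.backward()
print(float(loss))
```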

19 pages, 5195 KiB  
Article
Oligoclonal Band Straightening Based on Optimized Hierarchical Warping for Multiple Sclerosis Diagnosis
by Farah Haddad, Samuel Boudet, Laurent Peyrodie, Nicolas Vandenbroucke, Julien Poupart, Patrick Hautecoeur, Vincent Chieux and Gérard Forzy
Sensors 2022, 22(3), 724; https://doi.org/10.3390/s22030724 - 18 Jan 2022
Cited by 1 | Viewed by 5257
Abstract
The detection of immunoglobulin G (IgG) oligoclonal bands (OCB) in cerebrospinal fluid (CSF) by isoelectric focusing (IEF) is a valuable tool for the diagnosis of multiple sclerosis. Over the last decade, the results of our clinical research have suggested that tears are a non-invasive alternative to CSF. However, since tear samples have a lower IgG concentration than CSF, sensitive OCB detection is required. We are developing the first automatic tool for IEF analysis, with a view to speeding up the current visual inspection method, removing user variability, reducing misinterpretation, and facilitating OCB quantification and follow-up studies. The removal of band distortion is a key image enhancement step in increasing the reliability of automatic OCB detection. Here, we describe a novel, fully automatic band-straightening algorithm. The algorithm is based on a correlation directional warping function, estimated using an energy minimization procedure. The approach was optimized via an innovative coupling of a hierarchy of image resolutions with a hierarchy of transformations, in which band misalignment is corrected at successively finer scales. The algorithm's performance was assessed in terms of the bands' standard deviation before and after straightening, using a synthetic dataset and a set of 200 lanes of CSF, tear, serum and control samples on which experts had manually delineated the bands. The number of distorted bands was reduced by a factor of almost 16 for the synthetic lanes and by a factor of 7 for the test dataset of real lanes. This method can be applied effectively to different sample types. It can realign low-contrast bands and is robust to non-uniform deformations. Full article
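
To illustrate the underlying idea of band straightening, here is a simplified per-column shift-by-correlation sketch; the paper's actual method is a hierarchical warping with energy minimisation, which this does not reproduce.

```python
# Simplified band-straightening sketch: each column of an IEF lane image is shifted
# vertically by the offset that best correlates it with a reference column.
import numpy as np

def straighten_lane(lane: np.ndarray, max_shift: int = 10) -> np.ndarray:
    ref = lane[:, lane.shape[1] // 2]                 # central column as reference
    out = np.empty_like(lane)
    for c in range(lane.shape[1]):
        col = lane[:, c]
        best_shift, best_score = 0, -np.inf
        for s in range(-max_shift, max_shift + 1):
            score = np.dot(np.roll(col, s), ref)      # correlation with the reference
            if score > best_score:
                best_shift, best_score = s, score
        out[:, c] = np.roll(col, best_shift)
    return out

# toy lane: a single band distorted by a linear skew
rows, cols = 100, 40
lane = np.zeros((rows, cols))
for c in range(cols):
    lane[50 + c // 8, c] = 1.0                        # skewed "band"
print(np.nonzero(straighten_lane(lane).sum(axis=1))[0])   # band rows after straightening
```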

2021

27 pages, 17403 KiB  
Article
SAFIR-I: Design and Performance of a High-Rate Preclinical PET Insert for MRI
by Pascal Bebié, Robert Becker, Volker Commichau, Jan Debus, Günther Dissertori, Lubomir Djambazov, Afroditi Eleftheriou, Jannis Fischer, Peter Fischer, Mikiko Ito, Parisa Khateri, Werner Lustermann, Christian Ritzer, Michael Ritzert, Ulf Röser, Charalampos Tsoumpas, Geoffrey Warnock, Bruno Weber, Matthias T. Wyss and Agnieszka Zagozdzinska-Bochenek
Sensors 2021, 21(21), 7037; https://doi.org/10.3390/s21217037 - 23 Oct 2021
Cited by 3 | Viewed by 4204
Abstract
(1) Background: Small Animal Fast Insert for MRI detector I (SAFIR-I) is a preclinical Positron Emission Tomography (PET) insert for the Bruker BioSpec 70/30 Ultra Shield Refrigerated (USR) preclinical 7 T Magnetic Resonance Imaging (MRI) system. It is designed explicitly for high-rate kinetic studies in mice and rats with injected activities reaching 500 MBq, enabling truly simultaneous quantitative PET and Magnetic Resonance (MR) imaging with time frames of a few seconds in length. (2) Methods: SAFIR-I has an axial field of view of 54.2 mm and an inner diameter of 114 mm. It employs Lutetium Yttrium OxyorthoSilicate (LYSO) crystals and Multi Pixel Photon Counter (MPPC) arrays. The Position-Energy-Timing Application Specific Integrated Circuit, version 6, Single Ended (PETA6SE) digitizes the MPPC signals and provides time stamps and energy information. (3) Results: SAFIR-I is MR-compatible. The system’s Coincidence Resolving Time (CRT) and energy resolution are between 209.0(3) ps and 12.41(02)% Full Width at Half Maximum (FWHM) at low activity and 326.89(12) ps and 20.630(011)% FWHM at 550 MBq, respectively. The peak sensitivity is ∼1.6%. The excellent performance facilitated the successful execution of the first in vivo rat studies beyond 300 MBq. Based on features visible in the acquired images, we estimate the spatial resolution to be ∼2 mm in the center of the Field Of View (FOV). (4) Conclusion: The SAFIR-I PET insert provides excellent performance, permitting simultaneous in vivo small animal PET/MR image acquisitions with time frames of a few seconds in length at activities of up to 500 MBq. Full article

18 pages, 5591 KiB  
Article
Calderón’s Method with a Spatial Prior for 2-D EIT Imaging of Ventilation and Perfusion
by Kwancheol Shin and Jennifer L. Mueller
Sensors 2021, 21(16), 5635; https://doi.org/10.3390/s21165635 - 21 Aug 2021
Cited by 9 | Viewed by 2466
Abstract
Bedside imaging of ventilation and perfusion is a leading application of 2-D medical electrical impedance tomography (EIT), in which dynamic cross-sectional images of the torso are created by numerically solving the inverse problem of computing the conductivity from voltage measurements on surface electrodes arising from currents applied to those electrodes. Methods of reconstruction may be direct or iterative. Calderón’s method is a direct reconstruction method based on complex geometrical optics solutions to Laplace’s equation, capable of providing real-time reconstructions in a region of interest. In this paper, the importance of accurately modeling the electrode locations on the body is demonstrated on simulated and experimental data, and a method for including a priori spatial information in dynamic human subject data is presented. Accurate electrode modeling and the spatial prior are shown to improve the detection of inhomogeneities not included in the prior and the resolution of ventilation and perfusion images in a human subject. Full article

15 pages, 6111 KiB  
Article
Feasibility Study of a Time-of-Flight Brain Positron Emission Tomography Employing Individual Channel Readout Electronics
by Kuntai Park, Jiwoong Jung, Yong Choi, Hyuntae Leem and Yeonkyeong Kim
Sensors 2021, 21(16), 5566; https://doi.org/10.3390/s21165566 - 18 Aug 2021
Cited by 4 | Viewed by 2657
Abstract
The purpose of this study was to investigate the feasibility of a time-of-flight (TOF) brain positron emission tomography (PET) scanner providing high-quality images. The scanner consisted of 30 detector blocks arranged in a ring with a diameter of 257 mm and an axial field of view of 52.2 mm. Each detector block was composed of two detector modules and two application-specific integrated circuit (ASIC) chips. Each detector module was composed of an 8 × 8 array of 3 × 3 mm² multi-pixel photon counters and an 8 × 8 array of 3.11 × 3.11 × 15 mm³ lutetium yttrium oxyorthosilicate scintillators. The 64-channel individual readout ASIC was used to acquire the position, energy, and time information of a detected gamma ray. A coincidence timing resolution of 187 ps full width at half maximum (FWHM) was achieved using a pair of channels of two detector modules. The energy resolution and spatial resolution were 6.6 ± 0.6% FWHM (without energy nonlinearity correction) and 2.5 mm FWHM, respectively. The results of this study demonstrate that the developed TOF brain PET could provide excellent performance, allowing for a reduction in radiation dose or scanning time for brain imaging due to improved sensitivity and signal-to-noise ratio. Full article
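
For context on the timing figure above, a quick worked example using the standard TOF relation (not taken from the paper): the localization uncertainty along a line of response is Δx = c·Δt/2, so a 187 ps coincidence timing resolution corresponds to roughly 28 mm.

```python
# Worked example: localisation uncertainty along the line of response for a given
# coincidence timing resolution, Δx = c·Δt/2.
C_MM_PER_PS = 0.299792458          # speed of light in mm per picosecond

def tof_position_uncertainty_mm(crt_ps: float) -> float:
    return C_MM_PER_PS * crt_ps / 2.0

print(f"{tof_position_uncertainty_mm(187):.1f} mm")   # ≈ 28 mm along the LOR
```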

14 pages, 2656 KiB  
Article
Computer-Aided Colon Polyp Detection on High Resolution Colonoscopy Using Transfer Learning Techniques
by Chia-Pei Tang, Kai-Hong Chen and Tu-Liang Lin
Sensors 2021, 21(16), 5315; https://doi.org/10.3390/s21165315 - 6 Aug 2021
Cited by 16 | Viewed by 3366
Abstract
Colonoscopies reduce the incidence of colorectal cancer through early recognition and resection of colon polyps. However, the polyp miss rate is as high as 26% in conventional colonoscopy, so finding methods to decrease it is a paramount task. A number of algorithms and systems have been developed to enhance polyp detection, but few are suitable for real-time detection or classification due to their limited computational ability. Recent studies indicate that automated colon polyp detection systems are developing at an astonishing speed, yet real-time detection with classification remains a largely unexplored field. Newer image pattern recognition algorithms based on convolutional neural network (CNN) transfer learning have shed light on this topic. We propose a study using real-time colonoscopies with a CNN transfer learning approach. Several multi-class classifiers were trained, with mean average precision (mAP) ranging from 38% to 49%. Based on an Inception v2 model, a detector adopting a Faster R-CNN was trained; its mAP was 77%, an improvement of 35% compared to the same type of multi-class classifier. Our results therefore indicate that the polyp detection model can attain high accuracy, but polyp type classification still leaves room for improvement. Full article
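
As an illustration of the transfer-learning pattern described, here is a hedged sketch using torchvision's pretrained Faster R-CNN; the ResNet-50 FPN backbone is a stand-in for illustration, not the Inception v2 model used in the paper, and the frame and box are dummy data.

```python
# Transfer-learning sketch for polyp detection: load a pretrained Faster R-CNN, replace
# the box predictor for a 2-class problem (background + polyp), run one training step.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2                                   # background + polyp
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9, weight_decay=0.0005)

# one illustrative training step on a dummy frame and box
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 220.0, 240.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```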

14 pages, 2458 KiB  
Article
High Correlation of Static First-Minute-Frame (FMF) PET Imaging after 18F-Labeled Amyloid Tracer Injection with [18F]FDG PET Imaging
by Alexander P. Seiffert, Adolfo Gómez-Grande, Alberto Villarejo-Galende, Marta González-Sánchez, Héctor Bueno, Enrique J. Gómez and Patricia Sánchez-González
Sensors 2021, 21(15), 5182; https://doi.org/10.3390/s21155182 - 30 Jul 2021
Cited by 6 | Viewed by 2894
Abstract
Dynamic early-phase PET images acquired with radiotracers binding to fibrillar amyloid-beta (Aβ) have been shown to correlate with [18F]fluorodeoxyglucose (FDG) PET images and to provide perfusion-like information. Here, perfusion information from static PET scans acquired during the first minute after radiotracer injection (first-minute-frame, FMF) is compared to [18F]FDG PET images. FMFs of 60 patients acquired with [18F]florbetapir (FBP), [18F]flutemetamol (FMM), and [18F]florbetaben (FBB) are compared to [18F]FDG PET images. Regional standardized uptake value ratios (SUVR) are directly compared, and intrapatient Pearson’s correlation coefficients are calculated to evaluate the correlation of FMFs with their corresponding [18F]FDG PET images. Additionally, regional interpatient correlations are calculated. The intensity profiles of mean SUVRs among the study cohort (r = 0.98, p < 0.001) and the intrapatient analyses show strong correlations between FMFs and [18F]FDG PET images (r = 0.93 ± 0.05). Regional VOI-based analyses also result in high correlation coefficients. The FMF shows information similar to the cerebral metabolic patterns obtained by [18F]FDG PET imaging. It could therefore be an alternative to dynamic early-phase amyloid PET imaging and be used as an additional neurodegeneration biomarker in amyloid PET studies in routine clinical practice, while being acquired at the same time as the amyloid PET images. Full article
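
As an illustration of the intrapatient comparison described above, a minimal sketch using hypothetical regional SUVR values; the region names and numbers are placeholders, not study data.

```python
# Minimal sketch: Pearson correlation between regional SUVRs from the first-minute-frame
# (FMF) and from the [18F]FDG scan of the same patient.
from scipy.stats import pearsonr

regions = ["frontal", "parietal", "temporal", "occipital", "precuneus", "cerebellum"]
suvr_fmf = [1.12, 1.05, 0.98, 1.20, 1.10, 1.00]    # hypothetical FMF SUVRs
suvr_fdg = [1.15, 1.08, 1.01, 1.25, 1.13, 1.00]    # hypothetical FDG SUVRs

r, p = pearsonr(suvr_fmf, suvr_fdg)
print(f"intrapatient correlation: r = {r:.3f}, p = {p:.4f}")
```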