
Applications of Convolutional Neural Networks in Imaging and Sensing

A project collection of Sensors (ISSN 1424-8220). This project collection belongs to the section "Sensing and Imaging".

Papers displayed on this page all arise from the same project. Editorial decisions were made independently of project staff and handled by the Editor-in-Chief or qualified Editorial Board members.

Viewed by 44628

Editors


Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
Interests: computer vision; machine learning; optimization

Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milano, Italy
Interests: signal/image/video processing and understanding; color imaging; machine learning

Project Overview

Dear Colleagues,

Convolutional Neural Networks (CNNs or ConvNets) are a class of deep neural networks that leverage spatial information and are therefore well suited to many tasks in image processing and computer vision.

Exploiting deep, end-to-end learnable architectures, CNNs learn the features or abstract representations best suited to the particular problem at hand. This flexibility in adapting to different problems is among the reasons they now represent the state of the art in many challenging image-processing and computer vision applications, mostly outperforming traditional techniques based on handcrafted features.

Nonetheless, deep learning presents its own set of specific challenges: the need for large-cardinality training sets can make data labeling cumbersome and expensive, and the high computational complexity of neural models can be an obstacle to mobile embedding and user personalization.

This Special Issue covers all the topics related to the application of CNNs to image processing and computer vision tasks, as well as topics related to the definition of new CNN architectures, highlighting their advantages in addressing the problems currently faced by the imaging community.

Possible contributions to the Special Issue include, but are not limited to, the following topics:

  • Image synthesis and rendering;
  • Image restoration and enhancement;
  • Color, multi-spectral, and hyper-spectral imaging;
  • Image and video quality assessment;
  • Texture, image and video analysis;
  • Image and video recognition, classification, and retrieval;
  • Biomedical and biological image processing and analysis;
  • Image and video quality control and anomaly detection;
  • Image processing for cultural heritage;
  • Image and video dehazing;
  • Image processing for material and object appearance, soft metrology;
  • Image processing applications;
  • Image sensors.

Dr. Simone Bianco
Dr. Marco Buzzelli
Dr. Jean Baptiste Thomas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (18 papers)

2024


19 pages, 7211 KiB  
Article
Fast, Zero-Reference Low-Light Image Enhancement with Camera Response Model
by Xiaofeng Wang, Liang Huang, Mingxuan Li, Chengshan Han, Xin Liu and Ting Nie
Sensors 2024, 24(15), 5019; https://doi.org/10.3390/s24155019 - 2 Aug 2024
Viewed by 790
Abstract
Low-light images are prevalent in intelligent monitoring and many other applications, with low brightness hindering further processing. Although low-light image enhancement can reduce the influence of such problems, current methods often involve a complex network structure or many iterations, which are not conducive to their efficiency. This paper proposes a Zero-Reference Camera Response Network using a camera response model to achieve efficient enhancement for arbitrary low-light images. A double-layer parameter-generating network with a streamlined structure is established to extract the exposure ratio K from the radiation map, which is obtained by inverting the input through a camera response function. Then, K is used as the parameter of a brightness transformation function for one transformation on the low-light image to realize enhancement. In addition, a contrast-preserving brightness loss and an edge-preserving smoothness loss are designed without the requirement for references from the dataset. Both can further retain some key information in the inputs to improve precision. The enhancement is simplified and can reach more than twice the speed of similar methods. Extensive experiments on several LLIE datasets and the DARK FACE face detection dataset fully demonstrate our method’s advantages, both subjectively and objectively. Full article
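The camera-response-model transform at the core of this method can be illustrated with a minimal sketch. A simple power-law CRF f(E) = E^(1/gamma) and a scalar exposure ratio K are assumed here; in the paper, K is extracted per image by the parameter-generating network and the CRF is a calibrated camera response function.

```python
import numpy as np

def enhance(img: np.ndarray, K: float, gamma: float = 2.2) -> np.ndarray:
    """One-shot brightness transform: invert an assumed gamma CRF to get a
    radiance map, scale the radiance by the exposure ratio K, re-apply the CRF."""
    radiance = np.power(img, gamma)                          # f^-1: image -> radiance
    return np.clip(np.power(radiance * K, 1.0 / gamma), 0.0, 1.0)

low = np.full((2, 2), 0.25)   # toy low-light image with values in [0, 1]
out = enhance(low, K=4.0)     # brighter, still clipped to [0, 1]
```

For this CRF the transform reduces to multiplying the image by K^(1/gamma), which is why a single pass suffices.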

22 pages, 7787 KiB  
Article
RCEAU-Net: Cascade Multi-Scale Convolution and Attention-Mechanism-Based Network for Laser Beam Target Image Segmentation with Complex Background in Coal Mine
by Wenjuan Yang, Yanqun Wang, Xuhui Zhang, Le Zhu, Zhiteng Ren, Yang Ji, Long Li and Yanbin Xie
Sensors 2024, 24(8), 2552; https://doi.org/10.3390/s24082552 - 16 Apr 2024
Cited by 1 | Viewed by 1082
Abstract
Accurate and reliable pose estimation of boom-type roadheaders is key to the forming quality of the tunneling face in coal mines, and is of great importance for improving tunneling efficiency and ensuring the safety of coal mine production. The multi-laser-beam target-based visual localization method is an effective way to realize accurate and reliable pose estimation of a roadheader body. However, complex background interference in coal mines poses great challenges to the stable and accurate segmentation and extraction of laser beam features, which has become the main problem faced by long-distance visual positioning methods for underground equipment. In this paper, a semantic segmentation network for underground laser beams in coal mines, RCEAU-Net, is proposed based on U-Net. The network introduces residual connections in the convolutions of the encoder and decoder parts, which effectively fuses low-level feature information and improves the gradient flow of the network. At the same time, cascade multi-scale convolution is introduced in the skip-connection section, which compensates for the lack of contextual semantic information in U-Net and improves the segmentation of tiny laser beams at long distances. Finally, an efficient multi-scale attention module with cross-spatial learning is introduced in the encoder to enhance the feature extraction capability of the network. Furthermore, a laser beam target dataset (LBTD) is constructed from laser beam images collected in several coal mines, and the proposed RCEAU-Net model is then tested and verified.
The experimental results show that, compared with the original U-Net, RCEAU-Net preserves real-time laser beam segmentation while increasing accuracy by 0.19%, precision by 2.53%, recall by 22.01%, and Intersection over Union by 8.48%. The model can therefore meet the requirements of multi-laser-beam feature segmentation and extraction under the complex backgrounds of coal mines, further ensuring the accuracy and stability of long-distance visual positioning for boom-type roadheaders and safe production at the working face. Full article

2023


15 pages, 4034 KiB  
Article
Multi-Scale Attention Feature Enhancement Network for Single Image Dehazing
by Weida Dong, Chunyan Wang, Hao Sun, Yunjie Teng and Xiping Xu
Sensors 2023, 23(19), 8102; https://doi.org/10.3390/s23198102 - 27 Sep 2023
Viewed by 1521
Abstract
Aiming to solve the problems of color distortion and loss of detail information in most dehazing algorithms, an end-to-end image dehazing network based on multi-scale feature enhancement is proposed. Firstly, a feature extraction enhancement module is used to capture the detailed information of hazy images and expand the receptive field. Secondly, the channel attention mechanism and pixel attention mechanism of the feature fusion enhancement module are used to dynamically adjust the weights of different channels and pixels. Thirdly, a context enhancement module is used to enhance the contextual semantic information, suppress redundant information, and obtain a haze density image with higher detail. Finally, our method removes haze, preserves image color, and retains image details. The proposed method achieved a PSNR score of 33.74, an SSIM score of 0.9843, and an LPIPS distance of 0.0040 on the SOTS-outdoor dataset. Compared with representative dehazing methods, it demonstrates better dehazing performance and proves the advantages of the proposed method on synthetic hazy images. Combined with dehazing experiments on real hazy images, the results show that our method can effectively improve dehazing performance while preserving more image details and achieving color fidelity. Full article
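Channel attention of the kind this abstract describes is commonly realized as squeeze-and-excitation-style gating. A minimal NumPy sketch follows; the bottleneck weights w1 and w2 are hypothetical stand-ins for learned parameters, not the paper's architecture.

```python
import numpy as np

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Gate each channel of a (C, H, W) feature map by a learned weight in (0, 1):
    global average pool -> two-layer bottleneck -> sigmoid -> rescale channels."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) global average pool
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # per-channel weight in (0, 1)
    return feat * gate[:, None, None]

C = 8
feat = np.random.rand(C, 16, 16)       # toy feature map
w1 = np.random.randn(C, C // 4)        # hypothetical learned weights
w2 = np.random.randn(C // 4, C)
out = channel_attention(feat, w1, w2)
```

Pixel attention follows the same pattern but produces an (H, W) gate instead of a per-channel one.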

20 pages, 3936 KiB  
Article
Wood Veneer Defect Detection Based on Multiscale DETR with Position Encoder Net
by Yilin Ge, Dapeng Jiang and Liping Sun
Sensors 2023, 23(10), 4837; https://doi.org/10.3390/s23104837 - 17 May 2023
Cited by 3 | Viewed by 2017
Abstract
Wood is one of the main building materials. However, defects on veneers result in substantial waste of wood resources. Traditional veneer defect detection relies on manual experience or photoelectric-based methods, which are either subjective and inefficient or need substantial investment. Computer vision-based object detection methods have been used in many realistic areas. This paper proposes a new deep learning defect detection pipeline. First, an image collection device is constructed and a total of more than 16,380 defect images are collected coupled with a mixed data augmentation method. Then, a detection pipeline is designed based on DEtection TRansformer (DETR). The original DETR needs position encoding functions to be designed and is ineffective for small object detection. To solve these problems, a position encoding net is designed with multiscale feature maps. The loss function is also redefined for much more stable training. The results from the defect dataset show that using a light feature mapping network, the proposed method is much faster with similar accuracy. Using a complex feature mapping network, the proposed method is much more accurate with similar speed. Full article
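For context, the hand-designed position encoding that DETR relies on, and that this paper's position encoding net is meant to replace, is typically the fixed sinusoidal function. A minimal 1-D sketch (the 2-D case applies the same formula per spatial axis):

```python
import numpy as np

def sinusoidal_encoding(n_pos: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encoding: even dimensions get sin, odd get cos,
    with wavelengths forming a geometric progression up to 10000."""
    pos = np.arange(n_pos)[:, None]               # (n_pos, 1)
    i = np.arange(d_model // 2)[None, :]          # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    enc = np.zeros((n_pos, d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

pe = sinusoidal_encoding(50, 8)   # 50 positions, 8-dimensional embedding
```

A learned position-encoding net replaces this fixed function with trainable parameters, which is what removes the need to design it by hand.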

12 pages, 5823 KiB  
Communication
Deep Layer Aggregation Architectures for Photorealistic Universal Style Transfer
by Marius Dediu, Costin-Emanuel Vasile and Călin Bîră
Sensors 2023, 23(9), 4528; https://doi.org/10.3390/s23094528 - 6 May 2023
Cited by 2 | Viewed by 2512
Abstract
This paper introduces a deep learning approach to photorealistic universal style transfer that extends the PhotoNet network architecture by adding extra feature-aggregation modules. Given a pair of images representing the content and the reference of style, we augment the state-of-the-art solution mentioned above with deeper aggregation, to better fuse content and style information across the decoding layers. As opposed to the more flexible implementation of PhotoNet (i.e., PhotoNAS), which targets the minimization of inference time, our method aims to achieve better image reconstruction and a more pleasant stylization. We propose several deep layer aggregation architectures to be used as wrappers over PhotoNet, to enhance the stylization and quality of the output image. Full article

28 pages, 9438 KiB  
Article
Convolutional Neural-Network-Based Reverse-Time Migration with Multiple Reflections
by Shang Huang and Daniel Trad
Sensors 2023, 23(8), 4012; https://doi.org/10.3390/s23084012 - 15 Apr 2023
Cited by 4 | Viewed by 2043
Abstract
Reverse-time migration (RTM) has the advantage that it can handle steeply dipping structures and offer high-resolution images of the complex subsurface. Nevertheless, it has limitations related to the initial model, aperture illumination, and computational efficiency. RTM depends strongly on the initial velocity model: the resulting image will be poor if the input background velocity model is inaccurate. One solution is to apply least-squares reverse-time migration (LSRTM), which updates the reflectivity and suppresses artifacts through iterations. However, the output resolution still depends heavily on the accuracy of the input velocity model, even more than for standard RTM. Regarding the aperture limitation, RTM with multiple reflections (RTMM) is instrumental in improving illumination, but generates crosstalk because of interference between different orders of multiples. We propose a method based on a convolutional neural network (CNN) that behaves like a filter applying the inverse of the Hessian. This approach can learn patterns representing the relation between the reflectivity obtained through RTMM and the true reflectivity obtained from velocity models, through a residual U-Net with an identity mapping. Once trained, this neural network can be used to enhance the quality of RTMM images. Numerical experiments show that RTMM-CNN can recover major structures and thin layers with higher resolution and improved accuracy compared with the RTM-CNN method. Additionally, the proposed method demonstrates a significant degree of generalizability across diverse geological models, encompassing complex thin layers, salt bodies, folds, and faults. Moreover, the computational efficiency of the method is demonstrated by its lower computational cost compared with LSRTM. Full article

20 pages, 1846 KiB  
Article
Nested DWT-Based CNN Architecture for Monocular Depth Estimation
by Sandip Paul, Deepak Mishra and Senthil Kumar Marimuthu
Sensors 2023, 23(6), 3066; https://doi.org/10.3390/s23063066 - 13 Mar 2023
Cited by 2 | Viewed by 2081
Abstract
Applications such as medical diagnosis, navigation, robotics, etc., require 3D images. Recently, deep learning networks have been extensively applied to estimate depth. Depth prediction from 2D images is a problem that is both ill-posed and non-linear. Such networks are computationally expensive and slow, as they have dense configurations. Further, network performance depends on the trained model configuration, the loss functions used, and the dataset applied for training. We propose a moderately dense encoder–decoder network based on discrete wavelet decomposition and trainable coefficients (LL, LH, HL, HH). Our Nested Wavelet-Net (NDWTN) preserves the high-frequency information that is otherwise lost during the downsampling process in the encoder. Furthermore, we study the effect of activation functions, batch normalization, convolution layers, skip connections, etc., in our models. The network is trained with NYU datasets. Our network trains faster with good results. Full article
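The discrete wavelet decomposition into LL, LH, HL, HH sub-bands that the encoder consumes can be sketched with a single-level Haar transform. Normalization conventions vary (1/2 averaging is assumed here, and in the paper the coefficients are trainable rather than fixed Haar values):

```python
import numpy as np

def haar_dwt2(x: np.ndarray):
    """Single-level 2-D Haar DWT: returns LL, LH, HL, HH sub-bands, each half
    the spatial size. LL is the low-frequency average; the other three carry
    the high-frequency detail that plain strided downsampling would discard."""
    rlo = (x[0::2, :] + x[1::2, :]) / 2.0   # row-wise average (lowpass)
    rhi = (x[0::2, :] - x[1::2, :]) / 2.0   # row-wise difference (highpass)
    LL = (rlo[:, 0::2] + rlo[:, 1::2]) / 2.0
    LH = (rlo[:, 0::2] - rlo[:, 1::2]) / 2.0
    HL = (rhi[:, 0::2] + rhi[:, 1::2]) / 2.0
    HH = (rhi[:, 0::2] - rhi[:, 1::2]) / 2.0
    return LL, LH, HL, HH

x = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
LL, LH, HL, HH = haar_dwt2(x)
```

Because the four sub-bands together are invertible, an encoder that keeps them loses no information when halving resolution, which is the property the abstract emphasizes.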

17 pages, 709 KiB  
Article
Lossless Reconstruction of Convolutional Neural Network for Channel-Based Network Pruning
by Donghyeon Lee, Eunho Lee and Youngbae Hwang
Sensors 2023, 23(4), 2102; https://doi.org/10.3390/s23042102 - 13 Feb 2023
Cited by 1 | Viewed by 2203
Abstract
Network pruning reduces the number of parameters and computational costs of convolutional neural networks while maintaining high performance. Although existing pruning methods have achieved excellent results, they do not consider reconstruction after pruning in order to apply the network to actual devices. This study proposes a reconstruction process for channel-based network pruning. For lossless reconstruction, we focus on three components of the network: the residual block, skip connection, and convolution layer. Union operation and index alignment are applied to the residual block and skip connection, respectively. Furthermore, we reconstruct a compressed convolution layer by considering batch normalization. We apply our method to existing channel-based pruning methods for downstream tasks such as image classification, object detection, and semantic segmentation. Experimental results show that compressing a large model has a 1.93% higher accuracy in image classification, 2.2 higher mean Intersection over Union (mIoU) in semantic segmentation, and 0.054 higher mean Average Precision (mAP) in object detection than well-designed small models. Moreover, we demonstrate that our method can reduce the actual latency by 8.15× and 5.29× on Raspberry Pi and Jetson Nano, respectively. Full article
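The index-alignment idea for channel-based pruning, dropping a layer's output channels together with the matching input channels of the next layer, can be sketched as follows. The (out, in, kH, kW) weight layout and the kept-channel list are illustrative assumptions, not the paper's exact procedure (which also handles residual blocks and batch normalization):

```python
import numpy as np

def prune_pair(w1: np.ndarray, b1: np.ndarray, w2: np.ndarray, keep):
    """Prune a producer conv's output channels and align the consumer conv's
    input channels, so the compressed pair computes the same function
    restricted to the kept channels. Weights are (out_ch, in_ch, kH, kW)."""
    w1p = w1[keep]        # drop pruned output channels of the first conv
    b1p = b1[keep]        # and their biases
    w2p = w2[:, keep]     # index alignment: drop matching inputs of the next conv
    return w1p, b1p, w2p

w1 = np.random.rand(8, 3, 3, 3)    # toy conv: 3 -> 8 channels
b1 = np.random.rand(8)
w2 = np.random.rand(16, 8, 3, 3)   # next conv: 8 -> 16 channels
keep = [0, 2, 5]                   # channels surviving some pruning criterion
w1p, b1p, w2p = prune_pair(w1, b1, w2, keep)
```

The point of reconstruction is that the pruned weights form a genuinely smaller dense model, which is what yields real latency gains on devices rather than just zeroed-out weights.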

18 pages, 8049 KiB  
Article
Eye Recognition by YOLO for Inner Canthus Temperature Detection in the Elderly Using a Transfer Learning Approach
by Malak Ghourabi, Farah Mourad-Chehade and Aly Chkeir
Sensors 2023, 23(4), 1851; https://doi.org/10.3390/s23041851 - 7 Feb 2023
Cited by 4 | Viewed by 4051
Abstract
Early detection of physical frailty and infectious diseases in seniors is important to avoid any fatal drawback and promptly provide them with the necessary healthcare. One of the major symptoms of viral infections is elevated body temperature. In this work, a multi-age thermal face dataset is prepared and used to train different "You Only Look Once" (YOLO) object detection models (YOLOv5, v6, and v7) for eye detection. Eye detection allows scanning for the most accurate temperature in the face, which is the inner canthus temperature. An elderly thermal dataset is then used to produce an eye detection model specifically for elderly people, by transferring learning from the multi-age YOLOv7 model to an elderly YOLOv7 model. A comparison of speed, accuracy, and size between the trained models shows that the YOLOv7 model performed best (mean average precision at an Intersection over Union threshold of 0.5 (mAP@0.5) = 0.996 and frames per second (FPS) = 150). The bounding box of the eyes is scanned for the highest temperature, resulting in a normalized error distance of 0.03. This work presents a fast and reliable temperature detection model generated using a non-contact infrared camera and a deep learning approach. Full article

16 pages, 6725 KiB  
Article
Monocular Depth Estimation Using a Laplacian Image Pyramid with Local Planar Guidance Layers
by Youn-Ho Choi and Seok-Cheol Kee
Sensors 2023, 23(2), 845; https://doi.org/10.3390/s23020845 - 11 Jan 2023
Viewed by 3471
Abstract
It is important to estimate exact depth from 2D images, and many studies over a long period have addressed depth estimation problems. Recently, as research on estimating depth from monocular camera images based on deep learning has progressed, various techniques for estimating accurate depths have been explored. However, depth estimation from 2D images has struggled to predict the boundaries between objects. In this paper, we aim to predict sophisticated depths by emphasizing the precise boundaries between objects. We propose a depth estimation network with an encoder–decoder structure using the Laplacian pyramid and the local planar guidance method. In the process of upsampling the features learned by the encoder, a clearer depth map is obtained by guiding more sophisticated object boundaries using the Laplacian pyramid and local planar guidance techniques. We train and test our models with the KITTI and NYU Depth V2 datasets. The proposed network constructs a DNN using only convolution and uses the ConvNeXt networks as a backbone. The trained model achieves an absolute relative error (Abs_rel) of 0.054 and a root mean square error (RMSE) of 2.252 on the KITTI dataset, and an absolute relative error of 0.102 and a root mean square error of 0.355 on the NYU Depth V2 dataset. Among state-of-the-art monocular depth estimation methods, our network ranks fifth on the KITTI Eigen split and eighth on NYU Depth V2. Full article
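The Laplacian pyramid used to sharpen object boundaries stores, at each level, the detail lost by downsampling, plus a low-frequency residual at the coarsest level. A minimal sketch with average-pool downsampling and nearest-neighbour upsampling (stand-ins for the learned operators a depth network would use):

```python
import numpy as np

def downsample(x: np.ndarray) -> np.ndarray:
    # 2x2 average pool, a stand-in for blur + subsample
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def upsample(x: np.ndarray) -> np.ndarray:
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)  # nearest-neighbour

def laplacian_pyramid(img: np.ndarray, levels: int):
    """Each level stores the band-pass detail (edges, boundaries) lost by
    downsampling; the last entry is the coarse low-frequency residual."""
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))   # detail at this scale
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr) -> np.ndarray:
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = upsample(cur) + detail        # add detail back, coarse to fine
    return cur

img = np.random.rand(8, 8)
pyr = laplacian_pyramid(img, 2)
```

Since object boundaries live almost entirely in the detail levels, a decoder that predicts depth level by level can focus its capacity on exactly those boundaries.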

12 pages, 19996 KiB  
Article
A Two-Stage Network for Zero-Shot Low-Illumination Image Restoration
by Hao Tang, Linfeng Fei, Hongyu Zhu, Huanjie Tao and Chao Xie
Sensors 2023, 23(2), 792; https://doi.org/10.3390/s23020792 - 10 Jan 2023
Cited by 4 | Viewed by 2158
Abstract
Due to the influence of poor lighting conditions and the limitations of existing imaging equipment, captured low-illumination images produce noise, artifacts, darkening, and other unpleasant visual problems. Such problems will have an adverse impact on the following high-level image understanding tasks. To overcome this, a two-stage network is proposed in this paper for better restoring low-illumination images. Specifically, instead of manipulating the raw input directly, our network first decomposes the low-illumination image into three different maps (i.e., reflectance, illumination, and feature) via a Decom-Net. During the decomposition process, only reflectance and illumination are further denoised to suppress the effect of noise, while the feature is preserved to reduce the loss of image details. Subsequently, the illumination is deeply adjusted via another well-designed subnetwork called Enhance-Net. Finally, the three restored maps are fused together to generate the final enhanced output. The entire proposed network is optimized in a zero-shot fashion using a newly introduced loss function. Experimental results demonstrate that the proposed network achieves better performance in terms of both objective evaluation and visual quality. Full article

2022


17 pages, 4671 KiB  
Article
Automated Machine Learning System for Defect Detection on Cylindrical Metal Surfaces
by Yi-Cheng Huang, Kuo-Chun Hung and Jun-Chang Lin
Sensors 2022, 22(24), 9783; https://doi.org/10.3390/s22249783 - 13 Dec 2022
Cited by 12 | Viewed by 3900
Abstract
Metal workpieces are indispensable in the manufacturing industry. Surface defects affect the appearance and efficiency of a workpiece and reduce the safety of manufactured products. Therefore, products must be inspected for surface defects, such as scratches, dirt, and chips. The traditional manual inspection method is time-consuming and labor-intensive, and human error is unavoidable when thousands of products require inspection. Therefore, an automated optical inspection method is often adopted. Traditional automated optical inspection algorithms are insufficient in the detection of defects on metal surfaces, but a convolutional neural network (CNN) may aid in the inspection. However, considerable time is required to select the optimal hyperparameters for a CNN through training and testing. First, we compared the ability of three CNNs, namely VGG-16, ResNet-50, and MobileNet v1, to detect defects on metal surfaces. These models were hypothetically implemented for transfer learning (TL). However, in deploying TL, the phenomenon of apparent convergence in prediction accuracy, followed by divergence in validation accuracy, may create a problem when the image pattern is not known in advance. Second, our developed automated machine-learning (AutoML) model was trained through a random search with the core layers of the network architecture of the three TL models. We developed a retraining criterion for scenarios in which the model exhibited poor training results such that a new neural network architecture and new hyperparameters could be selected for retraining when the defect accuracy criterion in the first TL was not met. Third, we used AutoKeras to execute AutoML and identify a model suitable for a metal-surface-defect dataset. The performance of TL, AutoKeras, and our designed AutoML model was compared. The results of this study were obtained using a small number of metal defect samples. 
Based on TL, the detection accuracy of VGG-16, ResNet-50, and MobileNet v1 was 91%, 59.00%, and 50%, respectively. Moreover, the AutoKeras model exhibited the highest accuracy of 99.83%. The accuracy of the self-designed AutoML model reached 95.50% when using a core layer module, obtained by combining the modules of VGG-16, ResNet-50, and MobileNet v1. The designed AutoML model effectively and accurately recognized defective and low-quality samples despite low training costs. The defect accuracy of the developed model was close to that of the existing AutoKeras model and thus can contribute to the development of new diagnostic technologies for smart manufacturing. Full article
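The random-search-with-retraining-criterion loop described above can be sketched generically. The search space, accuracy criterion, and toy evaluator below are hypothetical placeholders, not the paper's actual configuration:

```python
import random

SPACE = {                       # hypothetical hyperparameter search space
    "filters":     [16, 32, 64],
    "kernel_size": [3, 5],
    "dropout":     [0.0, 0.25, 0.5],
    "lr":          [1e-2, 1e-3, 1e-4],
}

def random_search(evaluate, n_trials=20, accuracy_criterion=0.95, seed=0):
    """Sample configurations at random and retrain (resample) until the
    accuracy criterion is met or the trial budget runs out."""
    rng = random.Random(seed)
    best_cfg, best_acc = None, -1.0
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}
        acc = evaluate(cfg)                  # stands in for train + validate
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
        if best_acc >= accuracy_criterion:   # retraining criterion satisfied
            break
    return best_cfg, best_acc

# toy evaluator standing in for actual CNN training on the defect dataset
cfg, acc = random_search(lambda c: 0.5 + 0.004 * c["filters"], n_trials=10)
```

The retraining criterion is the key difference from plain random search: a poor trial triggers a fresh architecture and hyperparameter draw instead of continuing to train a bad candidate.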

17 pages, 1808 KiB  
Article
An Enhanced Hyper-Parameter Optimization of a Convolutional Neural Network Model for Leukemia Cancer Diagnosis in a Smart Healthcare System
by Joseph Bamidele Awotunde, Agbotiname Lucky Imoize, Oluwafisayo Babatope Ayoade, Moses Kazeem Abiodun, Dinh-Thuan Do, Adão Silva and Samarendra Nath Sur
Sensors 2022, 22(24), 9689; https://doi.org/10.3390/s22249689 - 10 Dec 2022
Cited by 22 | Viewed by 2134
Abstract
Healthcare systems in recent times have witnessed timely diagnoses with a high level of accuracy. Internet of Medical Things (IoMT)-enabled deep learning (DL) models have been used to support medical diagnostics in real time, thus resolving the issue of late-stage diagnosis of various diseases and increasing performance accuracy. The current approach for the diagnosis of leukemia uses traditional procedures, and in most cases, fails in the initial period. Hence, several patients suffering from cancer have died prematurely due to the late discovery of cancerous cells in blood tissue. Therefore, this study proposes an IoMT-enabled convolutional neural network (CNN) model to detect malignant and benign cancer cells in the patient’s blood tissue. In particular, the hyper-parameter optimization through radial basis function and dynamic coordinate search (HORD) optimization algorithm was used to search for optimal values of CNN hyper-parameters. Utilizing the HORD algorithm significantly increased the effectiveness of finding the best solution for the CNN model by searching multidimensional hyper-parameters. This implies that the HORD method successfully found the values of hyper-parameters for precise leukemia features. Additionally, the HORD method increased the performance of the model by optimizing and searching for the best set of hyper-parameters for the CNN model. Leukemia datasets were used to evaluate the performance of the proposed model using standard performance indicators. The proposed model revealed significant classification accuracy compared to other state-of-the-art models. Full article
18 pages, 5226 KiB  
Article
Research on Non-Pooling YOLOv5 Based Algorithm for the Recognition of Randomly Distributed Multiple Types of Parts
by Zehua Yu, Ling Zhang, Xingyu Gao, Yang Huang and Xiaoke Liu
Sensors 2022, 22(23), 9335; https://doi.org/10.3390/s22239335 - 30 Nov 2022
Cited by 6 | Viewed by 2453
Abstract
Part cleaning is very important for the assembly of precision machinery. After cleaning, the parts are randomly distributed in the collection area, which makes it difficult for a robot to collect them: common robots can only collect parts located in relatively fixed positions and adapt poorly to randomly distributed ones. This paper therefore proposes a rapid part classification method based on a non-pooling YOLOv5 network for the recognition of randomly distributed parts of multiple types; the method classifies parts from two-dimensional images obtained with industrial cameras. We compared the traditional and non-pooling YOLOv5 networks under different activation functions. Experimental results showed that, within 100 epochs of training, the non-pooling YOLOv5 network improved part recognition precision by 8% and recall by 3%, improving part classification efficiency and outperforming the traditional YOLOv5 network in classifying industrial parts.
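The "non-pooling" idea is to downsample with a learnable strided convolution instead of a fixed max-pooling window, so no activations are discarded outright. A minimal single-channel sketch of the two downsampling operators (illustrative only, not the paper's network; the averaging kernel is an assumption):

```python
import numpy as np

def max_pool2x2(x):
    """Conventional 2x2 max-pooling: keeps only the largest activation per window."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def strided_conv2x2(x, k):
    """Non-pooling downsampling: a 2x2 convolution with stride 2 halves the
    resolution while keeping a learnable weighted view of every pixel."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    patches = x[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return np.einsum('iajb,ab->ij', patches, k)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.full((2, 2), 0.25)  # here: simple averaging weights; in a CNN these are trained
```

Both map a 4x4 input to 2x2, but the strided convolution's weights participate in backpropagation, which is what the non-pooling variant exploits.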
19 pages, 1344 KiB  
Article
SCA: Search-Based Computing Hardware Architecture with Precision Scalable and Computation Reconfigurable Scheme
by Liang Chang, Xin Zhao and Jun Zhou
Sensors 2022, 22(21), 8545; https://doi.org/10.3390/s22218545 - 6 Nov 2022
Cited by 2 | Viewed by 2078
Abstract
Deep neural networks have been deployed on various hardware accelerators, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuit (ASIC) chips. Inference normally requires a huge amount of computation, creating significant logic resource overheads, and frequent data accesses between off-chip memory and the accelerator create bottlenecks that degrade hardware efficiency. Many solutions have been proposed to reduce hardware overhead and data movement; for example, lookup-table (LUT)-based hardware architectures can mitigate the demand for computing operations. However, typical LUT-based accelerators suffer from limited computational precision and poor scalability. In this paper, we propose a search-based computing scheme built on an LUT solution, which improves computation efficiency by replacing traditional multiplication with a search operation. The scheme supports multiple precision bit widths to meet the needs of different DNN-based applications, and a reconfigurable computing strategy efficiently adapts to convolutions of different kernel sizes, improving hardware scalability. We implement a search-based architecture, namely SCA, which adopts an on-chip storage mechanism, greatly reducing interactions with off-chip memory and alleviating bandwidth pressure. In experimental evaluation, the proposed SCA architecture achieved 92%, 96%, and 98% computational utilization at computational precisions of 4, 8, and 16 bits, respectively; compared with a state-of-the-art LUT-based architecture, its efficiency improved four-fold.
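The general LUT trick that such accelerators build on is easy to state in software: since a weight is fixed at inference time, every product `weight * activation` for a b-bit activation can be precomputed into a table of 2^b entries, so each multiply becomes an index lookup. A minimal sketch of that principle (not the SCA hardware design; function names are illustrative):

```python
def build_lut(weight, bits=4):
    """Precompute weight * a for every possible 'bits'-wide activation,
    so inference replaces each multiplication with a table lookup."""
    return [weight * a for a in range(1 << bits)]

def lut_dot(luts, activations):
    """Dot product in which every multiply is a lookup into a per-weight table."""
    return sum(lut[a] for lut, a in zip(luts, activations))

weights = [3, -2, 5]                       # fixed (quantized) weights
luts = [build_lut(w, bits=4) for w in weights]
acts = [7, 1, 12]                          # 4-bit activations in [0, 15]
```

The precision/scalability tension the paper addresses is visible here: the table size grows as 2^bits per weight, which is why supporting 8- or 16-bit operands in hardware requires the kind of precision-scalable scheme SCA proposes rather than naive tables.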
21 pages, 1364 KiB  
Review
Machine Learning for Renal Pathologies: An Updated Survey
by Roberto Magherini, Elisa Mussi, Yary Volpe, Rocco Furferi, Francesco Buonamici and Michaela Servi
Sensors 2022, 22(13), 4989; https://doi.org/10.3390/s22134989 - 1 Jul 2022
Cited by 7 | Viewed by 3186
Abstract
Within the literature on modern machine learning techniques applied to the medical field, there is growing interest in applying these technologies to nephrology, especially to the study of renal pathologies, because such pathologies are very common and widespread, afflict a high percentage of the population, and lead to various complications, up to death in some cases. For these reasons, the authors collected, using one of the major bibliographic databases, the studies carried out until February 2022 on the use of machine learning techniques in nephrology, grouping them by the pathology addressed: renal masses, acute kidney injury, chronic kidney disease, kidney stones, glomerular disease, kidney transplant, and other less widespread conditions. Of a total of 224 studies, 59 met the inclusion and exclusion criteria of this review and were analyzed with respect to the method used and the type of data available. The study shows a growing trend in, and interest in, machine learning applications in nephrology, where they are becoming an additional tool that can enable physicians to make faster and more accurate diagnoses. A major limitation remains: the difficulty of creating public databases that the scientific community could use to corroborate results and contribute positively to this area.
19 pages, 8924 KiB  
Article
A Novel Auto-Synthesis Dataset Approach for Fitting Recognition Using Prior Series Data
by Jie Zhang, Xinyan Qin, Jin Lei, Bo Jia, Bo Li, Zhaojun Li, Huidong Li, Yujie Zeng and Jie Song
Sensors 2022, 22(12), 4364; https://doi.org/10.3390/s22124364 - 9 Jun 2022
Cited by 3 | Viewed by 1943
Abstract
Because power transmission lines (PTLs) traverse complex environments, collecting data on them is difficult and costly. To address this, we propose a novel auto-synthesis dataset approach for fitting recognition using prior series data. The approach comprises three steps: (1) formulate synthesis rules from the prior series data; (2) render 2D images according to these rules using advanced virtual 3D techniques; (3) generate the synthetic dataset, with annotations obtained by processing the rendered images with OpenCV. A model trained on the synthetic dataset was tested on a real dataset (images and annotations) and reached a mean average precision (mAP) of 0.98, verifying the feasibility and effectiveness of the proposed approach. The recognition accuracy is comparable to that of training on real samples, while the cost of generating the synthetic dataset is greatly reduced. The proposed approach thus improves the efficiency of establishing a dataset, providing a training data basis for deep learning (DL)-based fitting recognition.
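Step (3) works because a rendered image comes with a known foreground: a bounding-box annotation can be extracted automatically from the render's binary mask, much as `cv2.boundingRect` would do. A dependency-free sketch of that extraction (illustrative; the paper's exact OpenCV pipeline is not specified here):

```python
def bbox_from_mask(mask):
    """Derive a bounding-box annotation (x_min, y_min, x_max, y_max) from a
    binary foreground mask of a rendered part; returns None for an empty mask."""
    ys = [r for r, row in enumerate(mask) if any(row)]
    xs = [c for row in mask for c, v in enumerate(row) if v]
    return (min(xs), min(ys), max(xs), max(ys)) if ys else None

# toy 4x5 render mask: the fitting occupies columns 1-3, rows 1-2
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
```

Because the mask is exact by construction, the resulting labels are noise-free, which is one reason synthetic training can approach real-sample accuracy.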
16 pages, 4182 KiB  
Article
Weld Feature Extraction Based on Semantic Segmentation Network
by Bin Wang, Fengshun Li, Rongjian Lu, Xiaoyu Ni and Wenhan Zhu
Sensors 2022, 22(11), 4130; https://doi.org/10.3390/s22114130 - 29 May 2022
Cited by 9 | Viewed by 2581
Abstract
Laser welding is an indispensable step in most types of industrial production, and realizing welding automation with industrial robots can greatly improve production efficiency. In developing a welding seam tracking system, the position of the weld joint must be obtained accurately. For laser welding images with strong and complex interference, a weld tracking module was designed to capture real-time images of the weld; a total of 737 weld images of 1920 × 1200 pixels were captured with the device, of which 637 were used to create the dataset and the other 100 to test the segmentation success rate. Exploiting the pixel-level segmentation capability of semantic segmentation networks, this study designed a lightweight network with an encoder–decoder architecture and introduced a channel attention mechanism. Compared to ERF-Net, SegNet, and DFA-Net, the proposed network segments faster and with higher accuracy, achieving a success rate of 96% and remarkable segmentation results.
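A channel attention mechanism of the kind introduced here typically follows the squeeze-and-excitation pattern: global-average-pool each channel, pass the channel summary through a small two-layer bottleneck, and rescale the channels by the resulting gates in (0, 1). A minimal numpy sketch under those assumptions (not the paper's exact module; the weight shapes and reduction ratio are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention: pool each channel to a
    scalar, run a reduce-then-expand bottleneck, and rescale the channels."""
    s = feat.mean(axis=(1, 2))                  # squeeze: (C,) channel summaries
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excite:  (C,) gates in (0, 1)
    return feat * e[:, None, None]              # rescale each channel map

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))           # C=8 channels, 4x4 feature map
w1 = rng.standard_normal((2, 8)) * 0.1          # bottleneck reduce (8 -> 2)
w2 = rng.standard_normal((8, 2)) * 0.1          # bottleneck expand (2 -> 8)
out = channel_attention(feat, w1, w2)
```

Since every gate lies in (0, 1), the module can only attenuate channels, letting the network emphasize weld-edge features over interference at negligible extra cost.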