
Image Processing and Pattern Recognition Based on Deep Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (20 January 2023) | Viewed by 60181

Special Issue Editors


Prof. Dr. Dan Popescu
Guest Editor
Department of Automation and Industrial Informatics, Faculty of Automatic Control and Computer Science, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
Interests: image acquisition; image processing; feature extraction; image classification; image segmentation; artificial neural networks; deep learning; wireless sensor networks; unmanned aerial vehicles; data fusion; data processing in medicine; data processing in agriculture

Prof. Dr. Loretta Ichim
Guest Editor
Department of Automation and Industrial Informatics, Faculty of Automatic Control and Computer Science, University POLITEHNICA of Bucharest, 060042 Bucharest, Romania
Interests: convolutional neural networks; artificial intelligence; medical image processing; biomedical optical imaging; computer vision; computerised monitoring; data acquisition; image colour analysis; texture analysis; cloud computing

Special Issue Information

Dear Colleagues,

Pattern recognition applied to the analysis and interpretation of regions of interest in images now relies heavily on artificial intelligence, particularly on neural networks based on deep learning. Notable trends in the use of neural networks include modifying networks from established families to improve statistical or runtime performance, transfer learning, using multiple networks in more complex systems, fusing the decisions of individual networks, and combining efficient features with neural networks for improved detection or classification performance. Combinations with other classifiers based on artificial intelligence can also be used.

The aim of this Special Issue is to publish original research contributions concerning new neural-network-based approaches to image processing and pattern recognition with direct application in different domains, such as remote sensing, crop monitoring, border monitoring, decision support in medical diagnosis, emotion detection, etc.

The scope of this Special Issue includes (but is not limited to) the following research areas concerning image processing and pattern recognition aided by new artificial intelligence techniques:

  • Image processing;
  • Pattern recognition;
  • Image segmentation;
  • Object classification;
  • Neural networks;
  • Deep learning;
  • Decision fusion;
  • Systems based on multiple neural networks;
  • Detection of regions of interest in remote sensing images;
  • Industry applications;
  • Precision agriculture applications;
  • Medical applications;
  • Monitoring of protected areas;
  • Disaster monitoring and assessment.

Prof. Dr. Dan Popescu
Prof. Dr. Loretta Ichim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (15 papers)

Research

21 pages, 9140 KiB  
Article
Recognition of Occluded Goods under Prior Inference Based on Generative Adversarial Network
by Mingxuan Cao, Kai Xie, Feng Liu, Bohao Li, Chang Wen, Jianbiao He and Wei Zhang
Sensors 2023, 23(6), 3355; https://doi.org/10.3390/s23063355 - 22 Mar 2023
Viewed by 1702
Abstract
Aiming at the recognition of goods in intelligent retail dynamic visual containers, two problems that lead to low recognition accuracy must be addressed: the lack of goods features caused by occlusion by the hand, and the high similarity between goods. This study therefore proposes an approach to occluded-goods recognition based on a generative adversarial network combined with prior inference. With DarkNet53 as the backbone network, semantic segmentation is used to locate the occluded part in the feature extraction network, while the YOLOX decoupled head is used to obtain the detection frame. Subsequently, a generative adversarial network under prior inference is used to restore and expand the features of the occluded parts, and a weighted attention module combining multi-scale spatial attention and effective channel attention is proposed to select fine-grained goods features. Finally, a metric learning method based on the von Mises–Fisher distribution is proposed to increase the spacing between feature classes, and the resulting distinguishable features are used to recognize goods at a fine-grained level. The experimental data were obtained from a self-made smart retail container dataset containing 12 types of goods, including four pairs of similar goods. Experimental results reveal that the peak signal-to-noise ratio and structural similarity under the improved prior inference are 0.7743 and 0.0183 higher than those of the other models, respectively. Compared with the best of the other models, the proposed approach improves mAP by 1.2% and recognition accuracy by 2.82%. By addressing both hand occlusion and the high similarity of goods, the approach meets the accuracy requirements of commodity recognition in intelligent retail and exhibits good application prospects. Full article
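
As a rough illustration of the von Mises–Fisher metric-learning step described above, the following PyTorch sketch classifies L2-normalized embeddings against normalized class prototypes, with a concentration parameter kappa controlling class spacing on the unit sphere. The class count matches the 12-goods dataset mentioned in the abstract; the embedding size, kappa value, and the name `VMFClassifierLoss` are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMFClassifierLoss(nn.Module):
    """Cosine classifier with a vMF-style concentration kappa: larger kappa
    sharpens the distributions on the unit sphere and widens class spacing."""
    def __init__(self, num_classes: int, embed_dim: int, kappa: float = 16.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.kappa = kappa

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(embeddings, dim=1)        # unit-norm features
        mu = F.normalize(self.prototypes, dim=1)  # unit-norm class directions
        logits = self.kappa * z @ mu.t()          # scaled cosine similarities
        return F.cross_entropy(logits, labels)

# Toy usage: 12 goods classes (as in the dataset above), 128-d embeddings.
loss_fn = VMFClassifierLoss(num_classes=12, embed_dim=128)
print(loss_fn(torch.randn(8, 128), torch.randint(0, 12, (8,))).item())
```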

13 pages, 3539 KiB  
Article
ADE-CycleGAN: A Detail Enhanced Image Dehazing CycleGAN Network
by Bingnan Yan, Zhaozhao Yang, Huizhu Sun and Conghui Wang
Sensors 2023, 23(6), 3294; https://doi.org/10.3390/s23063294 - 21 Mar 2023
Cited by 9 | Viewed by 2864
Abstract
The preservation of image details in the defogging process is still one key challenge in the field of deep learning. The network uses the generation of confrontation loss and cyclic consistency loss to ensure that the generated defog image is similar to the original image, but it cannot retain the details of the image. To this end, we propose a detail enhanced image CycleGAN to retain the detail information during the process of defogging. Firstly, the algorithm uses the CycleGAN network as the basic framework and combines the U-Net network’s idea with this framework to extract visual information features in different spaces of the image in multiple parallel branches, and it introduces Dep residual blocks to learn deeper feature information. Secondly, a multi-head attention mechanism is introduced in the generator to strengthen the expressive ability of features and balance the deviation produced by the same attention mechanism. Finally, experiments are carried out on the public data set D-Hazy. Compared with the CycleGAN network, the network structure of this paper improves the SSIM and PSNR of the image dehazing effect by 12.2% and 8.1% compared with the network and can retain image dehazing details. Full article
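
The two losses named in the abstract can be sketched in a few lines of PyTorch. The generators and discriminator below are single-layer stand-ins (the paper's networks add U-Net-style parallel branches, Dep residual blocks, and multi-head attention), and the 10.0 cycle-loss weight is the conventional CycleGAN default, assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

# Single-layer stand-ins for the two generators and the discriminator.
G_dehaze = nn.Conv2d(3, 3, 3, padding=1)   # hazy -> clear
G_rehaze = nn.Conv2d(3, 3, 3, padding=1)   # clear -> hazy
D_clear = nn.Conv2d(3, 1, 4, stride=2, padding=1)

mse, l1 = nn.MSELoss(), nn.L1Loss()
hazy = torch.rand(2, 3, 64, 64)

fake_clear = G_dehaze(hazy)
# Least-squares adversarial loss: the generator tries to make D predict "real".
d_out = D_clear(fake_clear)
adv_loss = mse(d_out, torch.ones_like(d_out))
# Cycle consistency: hazy -> clear -> hazy should reproduce the input.
cycle_loss = l1(G_rehaze(fake_clear), hazy)
total = adv_loss + 10.0 * cycle_loss  # 10.0: conventional CycleGAN weighting
print(total.item())
```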

17 pages, 9707 KiB  
Article
Deep Monocular Depth Estimation Based on Content and Contextual Features
by Saddam Abdulwahab, Hatem A. Rashwan, Najwa Sharaf, Saif Khalid and Domenec Puig
Sensors 2023, 23(6), 2919; https://doi.org/10.3390/s23062919 - 8 Mar 2023
Cited by 4 | Viewed by 2990
Abstract
Recently, significant progress has been achieved in developing deep learning-based approaches for estimating depth maps from monocular images. However, many existing methods rely on content and structure information extracted from RGB photographs, which often results in inaccurate depth estimation, particularly for regions with low texture or occlusions. To overcome these limitations, we propose a novel method that exploits contextual semantic information to predict precise depth maps from monocular images. Our approach leverages a deep autoencoder network incorporating high-quality semantic features from the state-of-the-art HRNet-v2 semantic segmentation model. By feeding the autoencoder network with these features, our method can effectively preserve the discontinuities of the depth images and enhance monocular depth estimation. Specifically, we exploit the semantic features related to the localization and boundaries of the objects in the image to improve the accuracy and robustness of the depth estimation. To validate the effectiveness of our approach, we tested our model on two publicly available datasets, NYU Depth v2 and SUN RGB-D. Our method outperformed several state-of-the-art monocular depth estimation techniques, achieving an accuracy of 85%, while minimizing the error Rel by 0.12, RMS by 0.523, and log10 by 0.0527. Our approach also demonstrated exceptional performance in preserving object boundaries and faithfully detecting small object structures in the scene. Full article
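
The error measures reported above (Rel, RMS, log10, and threshold accuracy) are the standard monocular depth-estimation metrics; a NumPy sketch under their conventional definitions, which the paper is assumed to follow, looks like this:

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, thresh: float = 1.25):
    """Conventional monocular depth metrics over positive depth maps."""
    ratio = np.maximum(pred / gt, gt / pred)
    acc = np.mean(ratio < thresh)                       # delta < 1.25 accuracy
    rel = np.mean(np.abs(pred - gt) / gt)               # mean abs relative error
    rms = np.sqrt(np.mean((pred - gt) ** 2))            # root mean squared error
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))
    return acc, rel, rms, log10

gt = np.random.uniform(0.5, 10.0, size=(480, 640))      # toy ground truth
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)  # toy prediction
print(depth_metrics(pred, gt))
```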

16 pages, 3651 KiB  
Article
QCNN_BaOpt: Multi-Dimensional Data-Based Traffic-Volume Prediction in Cyber–Physical Systems
by Ramesh Sneka Nandhini and Ramanathan Lakshmanan
Sensors 2023, 23(3), 1485; https://doi.org/10.3390/s23031485 - 29 Jan 2023
Cited by 4 | Viewed by 1730
Abstract
The rapid growth of industry and the economy has contributed to a tremendous increase in traffic in all urban areas. People face the problem of traffic congestion frequently in their day-to-day life. To alleviate congestion and provide traffic guidance and control, several types of research have been carried out in the past to develop suitable computational models for short- and long-term traffic. This study developed an effective multi-dimensional dataset-based model in cyber–physical systems for more accurate traffic-volume prediction. The integration of quantum convolutional neural network and Bayesian optimization (QCNN_BaOpt) constituted the proposed model in this study. Furthermore, optimal tuning of hyperparameters was carried out using Bayesian optimization. The constructed model was evaluated using the US accident dataset records available in Kaggle, which comprise 1.5 million records. The dataset consists of 47 attributes describing spatial and temporal behavior, accidents, and weather characteristics. The efficiency of the proposed model was evaluated by calculating various metrics. The performance of the proposed model was assessed as having an accuracy of 99.3%. Furthermore, the proposed model was compared against the existing state-of-the-art models to demonstrate its superiority. Full article
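
Bayesian optimization of hyperparameters, as used in QCNN_BaOpt, can be sketched with a generic library such as Optuna (a substitute tool, not necessarily the authors'); the search space and the stand-in objective below are purely illustrative.

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # Hypothetical search space; the paper tunes its QCNN's hyperparameters.
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    filters = trial.suggest_int("filters", 16, 128)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    # Stand-in for "train the model and return validation accuracy".
    return 1.0 - abs(lr - 1e-3) - abs(dropout - 0.2) - abs(filters - 64) / 640

study = optuna.create_study(direction="maximize")  # TPE (Bayesian-style) sampler
study.optimize(objective, n_trials=25)
print(study.best_params)
```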

39 pages, 11325 KiB  
Article
The Best of Both Worlds: A Framework for Combining Degradation Prediction with High Performance Super-Resolution Networks
by Matthew Aquilina, Keith George Ciantar, Christian Galea, Kenneth P. Camilleri, Reuben A. Farrugia and John Abela
Sensors 2023, 23(1), 419; https://doi.org/10.3390/s23010419 - 30 Dec 2022
Viewed by 3446
Abstract
To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: (A) train standard SR networks on synthetic low-resolution–high-resolution (LR–HR) pairs or (B) predict the degradations of an LR image and then use these to inform a customised SR network. Despite significant progress, subscribers to the former miss out on useful degradation information and followers of the latter rely on weaker SR networks, which are significantly outperformed by the latest architectural advancements. In this work, we present a framework for combining any blind SR prediction mechanism with any deep SR network. We show that a single lightweight metadata insertion block together with a degradation prediction mechanism can allow non-blind SR architectures to rival or outperform state-of-the-art dedicated blind SR networks. We implement various contrastive and iterative degradation prediction schemes and show they are readily compatible with high-performance SR networks such as RCAN and HAN within our framework. Furthermore, we demonstrate our framework’s robustness by successfully performing blind SR on images degraded with blurring, noise and compression. This represents the first explicit combined blind prediction and SR of images degraded with such a complex pipeline, acting as a baseline for further advancements. Full article
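
One plausible form of a "lightweight metadata insertion block" is an affine (FiLM-style) modulation of the SR feature maps by the predicted degradation vector; the sketch below is an assumption about the block's shape, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MetadataInsertionBlock(nn.Module):
    """Modulate SR feature maps with a predicted degradation vector
    (e.g., blur/noise/compression codes) via a learned affine transform."""
    def __init__(self, meta_dim: int, channels: int):
        super().__init__()
        self.to_scale = nn.Linear(meta_dim, channels)
        self.to_shift = nn.Linear(meta_dim, channels)

    def forward(self, feats: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W); meta: (B, meta_dim)
        scale = self.to_scale(meta)[:, :, None, None]
        shift = self.to_shift(meta)[:, :, None, None]
        return feats * (1 + scale) + shift  # identity when meta maps to zero

block = MetadataInsertionBlock(meta_dim=10, channels=64)
print(block(torch.randn(2, 64, 32, 32), torch.randn(2, 10)).shape)
```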

22 pages, 6374 KiB  
Article
Feature Pyramid U-Net with Attention for Semantic Segmentation of Forward-Looking Sonar Images
by Dongdong Zhao, Weihao Ge, Peng Chen, Yingtian Hu, Yuanjie Dang, Ronghua Liang and Xinxin Guo
Sensors 2022, 22(21), 8468; https://doi.org/10.3390/s22218468 - 3 Nov 2022
Cited by 4 | Viewed by 4491
Abstract
Forward-looking sonar is a technique widely used for underwater detection. However, most sonar images have underwater noise and low resolution due to their acoustic properties. In recent years, the semantic segmentation model U-Net has shown excellent segmentation performance, and it has great potential in forward-looking sonar image segmentation. However, forward-looking sonar images are affected by noise, which prevents the existing U-Net model from segmenting small objects effectively. Therefore, this study presents a forward-looking sonar semantic segmentation model called Feature Pyramid U-Net with Attention (FPUA). This model uses residual blocks to increase the trainable depth of the network. To improve the segmentation accuracy for small objects, a feature pyramid module combined with an attention structure is introduced, which improves the model's ability to learn deep semantic and shallow detail information. First, the proposed model is compared against other deep learning models on two datasets, one collected in a tank environment and the other in a real marine environment. To further test the validity of the model, a real forward-looking sonar system was devised and employed in lake trials. The results show that the proposed model performs better than the other models for small-object and few-sample classes and that it is competitive in the semantic segmentation of forward-looking sonar images. Full article
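
A minimal sketch of the residual blocks used to deepen such a U-Net encoder is given below; FPUA's actual blocks and its feature pyramid attention module are more elaborate.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU block with an identity skip; stacking these deepens a
    U-Net encoder without the degradation that plain stacking causes."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + x)

print(ResidualBlock(32)(torch.randn(1, 32, 64, 64)).shape)
```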

16 pages, 21230 KiB  
Article
Colorful Image Colorization with Classification and Asymmetric Feature Fusion
by Zhiyuan Wang, Yi Yu, Daqun Li, Yuanyuan Wan and Mingyang Li
Sensors 2022, 22(20), 8010; https://doi.org/10.3390/s22208010 - 20 Oct 2022
Cited by 1 | Viewed by 2258
Abstract
An automatic colorization algorithm can convert a grayscale image to a colorful image using regression or classification loss functions. However, the regression loss function leads to brownish results, while the classification loss function leads to color overflow; moreover, computing the color categories and balance weights of the ground truth required by the weighted classification loss is too expensive. In this paper, we propose a new method to compute the color categories and balance weights of color images, together with a U-Net-based colorization network. First, we propose a category conversion module and a category balance module to obtain the color categories and balance weights, which dramatically reduces the training time. Second, we construct a classification subnetwork to constrain the colorization network with a category loss, which improves the colorization accuracy and saturation. Finally, we introduce an asymmetric feature fusion (AFF) module to fuse multiscale features, which effectively prevents color overflow and improves the colorization effect. The experiments show that our colorization network achieves peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values of 25.8803 and 0.9368, respectively, on the ImageNet dataset. Compared with existing algorithms, our algorithm produces colorful images with vivid colors, no significant color overflow, and higher saturation. Full article
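
The category conversion and balance idea builds on the classic scheme of quantizing the ab plane of Lab color into discrete classes and reweighting rare colors. The NumPy sketch below shows that baseline scheme (the bin width, lambda, and function names are illustrative, and the paper's own modules compute the categories and weights differently):

```python
import numpy as np

GRID = 10  # bin width in ab units (illustrative)

def ab_to_class(ab: np.ndarray) -> np.ndarray:
    """Map ab values in [-110, 110] to a single integer class per pixel."""
    n = 220 // GRID                                        # bins per axis
    bins = np.clip((ab + 110) // GRID, 0, n - 1).astype(int)
    return bins[..., 0] * n + bins[..., 1]

def class_weights(labels: np.ndarray, n_classes: int, lam: float = 0.5):
    """Rebalancing weights: rare color classes receive larger weights."""
    hist = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    p = hist / hist.sum()
    w = 1.0 / (lam * p + (1 - lam) / n_classes)
    return w / (w * p).sum()                               # normalize E_p[w] = 1

ab = np.random.uniform(-110, 110, size=(64, 64, 2))        # toy ab channels
labels = ab_to_class(ab)
print(labels.max(), class_weights(labels, (220 // GRID) ** 2)[:4])
```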

15 pages, 5670 KiB  
Article
Shape–Texture Debiased Training for Robust Template Matching
by Bo Gao and Michael W. Spratling
Sensors 2022, 22(17), 6658; https://doi.org/10.3390/s22176658 - 2 Sep 2022
Cited by 4 | Viewed by 2489
Abstract
Finding a template in a search image is an important task underlying many computer vision applications. This is typically solved by calculating a similarity map using features extracted from the separate images. Recent approaches perform template matching in a deep feature space, produced by a convolutional neural network (CNN), which is found to provide more tolerance to changes in appearance. Inspired by these findings, in this article we investigate whether enhancing the CNN’s encoding of shape information can produce more distinguishable features that improve the performance of template matching. By comparing features from the same CNN trained using different shape–texture training methods, we determined a feature space which improves the performance of most template matching algorithms. When combining the proposed method with the Divisive Input Modulation (DIM) template matching algorithm, its performance is greatly improved, and the resulting method produces state-of-the-art results on a standard benchmark. To confirm these results, we create a new benchmark and show that the proposed method outperforms existing techniques on this new dataset. Full article
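
Template matching in a deep feature space can be sketched as cross-correlating search-image features with a normalized template kernel in PyTorch. The backbone below is an untrained torchvision VGG-19 stand-in (torchvision >= 0.13 API); the paper's contribution is precisely the shape–texture debiased training of such a backbone.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

extractor = vgg19(weights=None).features[:9].eval()  # shallow conv features

def feature_match(search: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
    """Correlate search features with a mean/variance-normalized template."""
    with torch.no_grad():
        fs = extractor(search)                  # (1, C, Hs, Ws)
        ft = extractor(template)                # (1, C, Ht, Wt) -> conv kernel
    ft = (ft - ft.mean()) / (ft.std() + 1e-6)
    return F.conv2d(fs, ft)                     # similarity map over positions

sim = feature_match(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 32, 32))
print(sim.shape, sim.flatten().argmax().item())
```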

17 pages, 3965 KiB  
Article
RPDNet: Automatic Fabric Defect Detection Based on a Convolutional Neural Network and Repeated Pattern Analysis
by Yubo Huang and Zhong Xiang
Sensors 2022, 22(16), 6226; https://doi.org/10.3390/s22166226 - 19 Aug 2022
Cited by 13 | Viewed by 4026
Abstract
On a global scale, automatic defect detection is a critical stage of quality control in the textile industry. In this paper, a semantic segmentation network using a repeated pattern analysis algorithm is proposed for pixel-level detection of fabric defects, termed RPDNet (repeated pattern defect network). Specifically, we utilize a repeated pattern detector based on a convolutional neural network (CNN) to detect periodic patterns in fabric images. Through the acquired repeated-pattern information and proper guidance of the network in a high-level semantic space, the model learns to understand periodic features and to emphasize potential defect areas. Concurrently, we propose a semi-supervised learning scheme to inject the periodic knowledge into the model separately, which enables the model to function independently of further pre-calculation during detection, so no additional network capacity is required and no detection speed is lost. In addition, the model integrates two advanced architectures, DeeplabV3+ and GhostNet, to effectively implement lightweight fabric defect detection. Comparative experiments on repeated-pattern fabric images highlight the potential of the algorithm to deliver competitive detection results without incurring further computational cost. Full article
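
Period detection in a repeating fabric texture can be illustrated with a classical FFT-autocorrelation baseline; the paper uses a CNN-based repeated pattern detector instead, so this sketch only conveys the underlying idea.

```python
import numpy as np

def repeat_period(gray: np.ndarray, axis: int = 0, min_lag: int = 4) -> int:
    """Estimate the repeat period of a periodic texture along one axis from
    the FFT-based circular autocorrelation of its mean intensity profile."""
    profile = gray.mean(axis=1 - axis) - gray.mean()
    spec = np.fft.rfft(profile)
    autocorr = np.fft.irfft(spec * np.conj(spec))
    autocorr[:min_lag] = -np.inf               # ignore trivially small lags
    return int(np.argmax(autocorr[: len(profile) // 2]))

# Toy "fabric": vertical stripes with period 16 plus noise.
x = np.tile(np.sin(np.arange(256) * 2 * np.pi / 16), (256, 1))
x += 0.1 * np.random.randn(256, 256)
print(repeat_period(x, axis=1))  # expected ~16
```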

22 pages, 11698 KiB  
Article
Skin Lesion Classification Using Collective Intelligence of Multiple Neural Networks
by Dan Popescu, Mohamed El-khatib and Loretta Ichim
Sensors 2022, 22(12), 4399; https://doi.org/10.3390/s22124399 - 10 Jun 2022
Cited by 40 | Viewed by 9244
Abstract
Skin lesion detection and analysis are very important because skin cancer must be found in its early stages and treated immediately. Once established in the body, skin cancer can easily spread to other body parts. Early detection is crucial since, with correct treatment, the disease can be curable. There is therefore a need for highly accurate computer-aided systems to assist medical staff in the early detection of malignant skin lesions. In this paper, we propose a skin lesion classification system based on deep learning techniques and collective intelligence, involving multiple convolutional neural networks trained on the HAM10000 dataset and able to predict seven types of skin lesions, including melanoma. The convolutional neural networks experimentally chosen for the collective intelligence-based system, based on their performance, are AlexNet, GoogLeNet, GoogLeNet-Places365, MobileNet-V2, Xception, ResNet-50, ResNet-101, InceptionResNet-V2 and DenseNet201. We analyzed the performance of each network to obtain a weight matrix whose elements are weights associated with neural networks and lesion classes. Based on this matrix, a decision matrix was used to build the multi-network ensemble system (Collective Intelligence-based System), combining the individual neural network decisions in a decision fusion module (Collective Decision Block). This module takes the final, more accurate decision based on the associated weights of each network output. The validation accuracy of the proposed system is about 3 percent better than that of the best-performing individual network. Full article
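
A minimal sketch of the weight-matrix-based decision fusion, in the spirit of the Collective Decision Block, is shown below. The per-class weights here are random stand-ins, whereas the paper derives them from each network's measured performance.

```python
import numpy as np

def fuse_decisions(probs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted vote: probs and weights are (n_networks, n_classes);
    weights could hold each network's per-class validation accuracy."""
    scores = (weights * probs).sum(axis=0)
    return scores / scores.sum()               # fused class confidences

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(7), size=3)      # 3 networks, 7 lesion classes
weights = rng.uniform(0.7, 0.95, size=(3, 7))  # stand-in per-class accuracies
fused = fuse_decisions(probs, weights)
print(fused.argmax(), fused.round(3))
```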

16 pages, 4922 KiB  
Article
Noise2Kernel: Adaptive Self-Supervised Blind Denoising Using a Dilated Convolutional Kernel Architecture
by Kanggeun Lee and Won-Ki Jeong
Sensors 2022, 22(11), 4255; https://doi.org/10.3390/s22114255 - 2 Jun 2022
Cited by 5 | Viewed by 3073
Abstract
With the advent of unsupervised learning, efficient training of a deep network for image denoising without pairs of noisy and clean images has become feasible. Most current unsupervised denoising methods are built on self-supervised loss with the assumption of zero-mean noise under the signal-independent condition, which causes brightness-shifting artifacts on unconventional noise statistics (i.e., different from commonly used noise models). Moreover, most blind denoising methods require a random masking scheme for training to ensure the invariance of the denoising process. In this study, we propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking. We also propose an adaptive self-supervision loss to increase the tolerance for unconventional noise, which is specifically effective in removing salt-and-pepper or hybrid noise where prior knowledge of noise statistics is not readily available. We demonstrate the efficacy of the proposed method by comparing it with state-of-the-art denoising methods using various examples. Full article
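
The invariance property that removes the need for random masking can be illustrated with a simplified blind-spot construction: a first convolution whose kernel center is zeroed, followed by 1x1 layers, so no output pixel sees its own noisy input. This is a stand-in for the paper's dilated-kernel design, not a reproduction of it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterMaskedConv(nn.Conv2d):
    """Conv whose kernel center is zeroed before each use, so the output at a
    pixel never depends on that pixel's own noisy value."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            self.weight[:, :, self.kernel_size[0] // 2,
                              self.kernel_size[1] // 2] = 0
        return super().forward(x)

net = nn.Sequential(
    CenterMaskedConv(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 1), nn.ReLU(),  # 1x1 layers preserve the blind spot
    nn.Conv2d(32, 1, 1),
)

noisy = torch.rand(4, 1, 64, 64)
# Self-supervised target is the noisy image itself; the blind spot rules out
# the identity solution (valid for zero-mean noise -- the paper's adaptive
# loss extends this to salt-and-pepper and hybrid noise).
loss = F.mse_loss(net(noisy), noisy)
loss.backward()
print(loss.item())
```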

20 pages, 4687 KiB  
Article
A Novel Method to Inspect 3D Ball Joint Socket Products Using 2D Convolutional Neural Network with Spatial and Channel Attention
by Bekhzod Mustafaev, Anvarjon Tursunov, Sungwon Kim and Eungsoo Kim
Sensors 2022, 22(11), 4192; https://doi.org/10.3390/s22114192 - 31 May 2022
Cited by 3 | Viewed by 2358
Abstract
Product defect inspections are extremely important for industrial manufacturing processes. It is necessary to develop a special inspection system for each industrial product due to their complexity and diversity. Even though high-precision 3D cameras are usually used to acquire data to inspect 3D objects, it is hard to use them in real-time defect inspection systems due to their high price and long processing time. To address these problems, we propose a product inspection system that uses five 2D cameras to capture all inspection parts of the product and a deep-learning-based 2D convolutional neural network (CNN) with spatial and channel attention (SCA) mechanisms to efficiently inspect 3D ball joint socket products. Channel attention (CA) in our model detects the most relevant feature maps, while spatial attention (SA) finds the most important regions in the extracted feature map of the target. To build the final SCA feature vector, we concatenate the learned feature vectors of CA and SA because they complement each other. Our proposed CNN with SCA thus provides high inspection accuracy as well as the potential to detect small defects of the product. The proposed model achieved 98% classification accuracy in the experiments and proved its efficiency for real-time product inspection. Full article
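
A compact sketch of the described SCA arrangement, with channel attention and spatial attention applied in parallel and the two pooled branch descriptors concatenated, might look as follows (layer sizes are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCA(nn.Module):
    """Channel attention (CA) and spatial attention (SA) branches whose
    pooled outputs are concatenated into the final feature vector."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        self.sa = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca_feat = x * self.ca(x)[:, :, None, None]           # reweight channels
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        sa_feat = x * self.sa(stats)                          # reweight locations
        return torch.cat([F.adaptive_avg_pool2d(ca_feat, 1),
                          F.adaptive_avg_pool2d(sa_feat, 1)], dim=1).flatten(1)

print(SCA(64)(torch.randn(2, 64, 32, 32)).shape)  # (2, 128)
```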

17 pages, 4498 KiB  
Article
Semantic Image Inpainting with Multi-Stage Feature Reasoning Generative Adversarial Network
by Guangyao Li, Liangfu Li, Yingdan Pu, Nan Wang and Xi Zhang
Sensors 2022, 22(8), 2854; https://doi.org/10.3390/s22082854 - 8 Apr 2022
Cited by 5 | Viewed by 2554
Abstract
Most existing image inpainting methods have achieved remarkable progress on small image defects. However, repairing large missing regions with insufficient context information is still an intractable problem. In this paper, a Multi-stage Feature Reasoning Generative Adversarial Network is proposed to gradually restore irregular holes. Specifically, dynamic partial convolution is used to adaptively adjust the restoration proportion during inpainting, which strengthens the correlation between valid and invalid pixels. In the decoding phase, the statistical nature of features in masked areas differs from that of unmasked areas. To this end, a novel decoder is designed which not only dynamically assigns a scaling factor and bias on a per-feature-point basis using point-wise normalization, but also utilizes skip connections to solve the problem of information loss between the codec network layers. Moreover, in order to eliminate vanishing gradients and increase the number of reasoning stages, a hybrid weighted merging method consisting of a hard weight map and a soft weight map is proposed to ensemble the feature maps generated during the reconstruction process. Experiments on CelebA, Places2, and Paris StreetView show that the proposed model generates results with a PSNR improvement of 0.3 dB to 1.2 dB compared to other methods. Full article
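
Standard partial convolution, the mechanism the paper's dynamic variant builds on, can be sketched as a mask-renormalized convolution with a mask-update step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Convolve only valid pixels, renormalize by local mask coverage, and
    grow the mask so holes shrink layer by layer."""
    def forward(self, x: torch.Tensor, mask: torch.Tensor):
        # mask: (B, 1, H, W) with 1 = valid pixel, 0 = hole
        ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
        coverage = F.conv2d(mask, ones, stride=self.stride, padding=self.padding)
        out = super().forward(x * mask)
        out = out * (self.kernel_size[0] * self.kernel_size[1]
                     / coverage.clamp(min=1.0))    # renormalize by coverage
        new_mask = (coverage > 0).float()
        return out * new_mask, new_mask

pconv = PartialConv2d(3, 16, 3, padding=1, bias=False)
img = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.3).float()    # ~30% of pixels are holes
feat, new_mask = pconv(img, mask)
print(feat.shape, mask.mean().item(), new_mask.mean().item())
```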

17 pages, 12076 KiB  
Article
GAN-Based Image Colorization for Self-Supervised Visual Feature Learning
by Sandra Treneska, Eftim Zdravevski, Ivan Miguel Pires, Petre Lameski and Sonja Gievska
Sensors 2022, 22(4), 1599; https://doi.org/10.3390/s22041599 - 18 Feb 2022
Cited by 24 | Viewed by 6044
Abstract
Large-scale labeled datasets are generally necessary for successfully training a deep neural network in the computer vision domain. In order to avoid the costly and tedious work of manually annotating image datasets, self-supervised learning methods have been proposed to learn general visual features automatically. In this paper, we first focus on image colorization with generative adversarial networks (GANs) because of their ability to generate the most realistic colorization results. Then, via transfer learning, we use this as a proxy task for visual understanding. Particularly, we propose to use conditional GANs (cGANs) for image colorization and transfer the gained knowledge to two other downstream tasks, namely, multilabel image classification and semantic segmentation. This is the first time that GANs have been used for self-supervised feature learning through image colorization. Through extensive experiments with the COCO and Pascal datasets, we show an increase of 5% for the classification task and 2.5% for the segmentation task. This demonstrates that image colorization with conditional GANs can boost other downstream tasks’ performance without the need for manual annotation. Full article
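
The transfer step can be sketched as freezing the colorization-pretrained encoder and training only a multilabel classification head. The encoder below is an untrained stand-in for the cGAN generator's encoder, and the 80-label head assumes COCO-style annotations.

```python
import torch
import torch.nn as nn

# Stand-in for the colorization-pretrained cGAN generator encoder.
encoder = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False            # keep the colorization features fixed

head = nn.Linear(128, 80)              # 80 labels, COCO-style multilabel task
criterion = nn.BCEWithLogitsLoss()

gray = torch.rand(4, 1, 128, 128)      # grayscale inputs, as in colorization
targets = (torch.rand(4, 80) > 0.9).float()
loss = criterion(head(encoder(gray)), targets)
loss.backward()                        # updates flow into the head only
print(loss.item())
```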

Review

41 pages, 9340 KiB  
Review
New Trends in Melanoma Detection Using Neural Networks: A Systematic Review
by Dan Popescu, Mohamed El-Khatib, Hassan El-Khatib and Loretta Ichim
Sensors 2022, 22(2), 496; https://doi.org/10.3390/s22020496 - 10 Jan 2022
Cited by 63 | Viewed by 8617
Abstract
Due to its increasing incidence, skin cancer, and especially melanoma, is a serious health problem today. The high mortality rate associated with melanoma makes early detection essential so that the disease can be treated urgently and properly. For this reason, many researchers in this domain have sought accurate computer-aided diagnosis systems to assist in the early detection and diagnosis of such diseases. This paper presents a systematic review of recent advances in an area of increased interest for cancer prediction, with a focus on a comparative perspective on melanoma detection using artificial intelligence, especially neural-network-based systems, which can be considered intelligent support systems for dermatologists. Theoretical and applied contributions were investigated, with emphasis on the new development trend of multiple neural network architectures based on decision fusion. The most representative articles on melanoma detection based on neural networks, published in journals and high-impact conferences between 2015 and 2021, were investigated, with a focus on the interval 2018–2021 for new trends. The main databases and the trends in their use for training neural networks to detect melanoma are also presented. Finally, a research agenda was highlighted to advance the field towards the new trends. Full article
