Machine Learning Based Feature Recognition and Image Processing in Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (28 April 2023) | Viewed by 10970

Special Issue Editor


Prof. Dr. Dan Zeng
Guest Editor
School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
Interests: computer vision; multimedia content analysis and pattern recognition

Special Issue Information

Dear Colleagues,

Recently, there has been growing interest in the potential of feature recognition and image processing. Feature recognition (FR) is a technique for identifying and extracting application-specific information from input models for downstream engineering activities. Image feature recognition is an important area of artificial intelligence that aims to recognize targets and objects across various imaging modalities. Machine learning (ML) is the field devoted to understanding and building methods that 'learn', i.e., methods that leverage data to improve performance on a set of tasks, allowing software applications to predict outcomes more accurately without being explicitly programmed to do so. Benefiting from this capacity for feature learning, machine learning has achieved strong results in the field of feature recognition.

Feature recognition methods still face challenges, such as difficult feature extraction, poor classification and recognition performance, and high sensitivity to input variations, and they need improvement in both theory and method. Machine learning offers a way to address these difficulties, and feature recognition based on machine learning has therefore become crucial in many fields. This Special Issue aims to bring together original research and review articles on recent advances, technologies, solutions, applications, and new challenges in the field of machine learning-based feature recognition.

Potential topics include but are not limited to: feature recognition; image processing; image feature recognition; machine learning; pattern recognition.

Prof. Dr. Dan Zeng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature recognition
  • image processing
  • image feature recognition
  • machine learning
  • pattern recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

21 pages, 1316 KiB  
Article
Adversarial and Random Transformations for Robust Domain Adaptation and Generalization
by Liang Xiao, Jiaolong Xu, Dawei Zhao, Erke Shang, Qi Zhu and Bin Dai
Sensors 2023, 23(11), 5273; https://doi.org/10.3390/s23115273 - 1 Jun 2023
Cited by 5 | Viewed by 1851
Abstract
Data augmentation has been widely used to improve generalization in training deep neural networks. Recent works show that using worst-case transformations or adversarial augmentation strategies can significantly improve accuracy and robustness. However, due to the non-differentiable properties of image transformations, searching algorithms such as reinforcement learning or evolution strategy have to be applied, which are not computationally practical for large-scale problems. In this work, we show that by simply applying consistency training with random data augmentation, state-of-the-art results on domain adaptation (DA) and generalization (DG) can be obtained. To further improve the accuracy and robustness with adversarial examples, we propose a differentiable adversarial data augmentation method based on spatial transformer networks (STNs). The combined adversarial and random-transformation-based method outperforms the state-of-the-art on multiple DA and DG benchmark datasets. Furthermore, the proposed method shows desirable robustness to corruption, which is also validated on commonly used datasets.
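
As a rough illustration of the two ideas in this abstract, the sketch below combines consistency training over randomly augmented views with a differentiable adversarial affine transform in the style of a spatial transformer network. This is a minimal PyTorch sketch under assumed details, not the authors' implementation; the function names, step count, and loss weighting are all illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_affine(model, x, y, steps=3, step_size=0.01):
    """Search for an affine transform of x that maximizes the task loss."""
    # Start from the identity transform; theta has shape (N, 2, 3).
    theta = torch.eye(2, 3, device=x.device).repeat(x.size(0), 1, 1)
    theta.requires_grad_(True)
    for _ in range(steps):
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        x_t = F.grid_sample(x, grid, align_corners=False)
        loss = F.cross_entropy(model(x_t), y)
        (grad,) = torch.autograd.grad(loss, theta)
        # Gradient *ascent* on theta: move toward a worst-case transform.
        theta = (theta + step_size * grad.sign()).detach().requires_grad_(True)
    grid = F.affine_grid(theta.detach(), x.size(), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def training_step(model, x, y, random_aug, beta=1.0):
    """Supervised loss on clean inputs plus consistency terms on augmented views."""
    logits = model(x)
    sup = F.cross_entropy(logits, y)
    target = F.softmax(logits.detach(), dim=1)       # clean predictions as target
    views = [random_aug(x), adversarial_affine(model, x, y)]
    cons = sum(
        F.kl_div(F.log_softmax(model(v), dim=1), target, reduction="batchmean")
        for v in views
    )
    return sup + beta * cons
```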

30 pages, 11456 KiB  
Article
Assessment of Various Multimodal Fusion Approaches Using Synthetic Aperture Radar (SAR) and Electro-Optical (EO) Imagery for Vehicle Classification via Neural Networks
by Ram M. Narayanan, Noah S. Wood and Benjamin P. Lewis
Sensors 2023, 23(4), 2207; https://doi.org/10.3390/s23042207 - 16 Feb 2023
Cited by 6 | Viewed by 2369
Abstract
Multimodal fusion approaches that combine data from dissimilar sensors can better exploit human-like reasoning and strategies for situational awareness. The performances of a six-layer convolutional neural network (CNN) and an 18-layer ResNet architecture are compared for a variety of fusion methods using synthetic aperture radar (SAR) and electro-optical (EO) imagery to classify military targets. The dataset used is the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, using both original measured SAR data and synthetic EO data. We compare the classification performance of both networks using the data modalities individually, feature-level fusion, decision-level fusion, and a novel fusion method based on the three RGB input channels of a residual neural network (ResNet). In the proposed input channel fusion method, the SAR and the EO images are fed separately to two of the three input channels, while the third channel is fed a zero vector. It is found that the input channel fusion method using ResNet consistently achieved higher classification accuracy in every equivalent scenario.
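
The input channel fusion idea is simple to express in code. The following is a minimal sketch, assuming co-registered single-channel SAR and EO tensors and an off-the-shelf ResNet-18; the class count and image size are illustrative, not taken from the paper.

```python
import torch
import torchvision

def fuse_channels(sar, eo):
    """Stack SAR and EO images into two RGB channels; zero out the third."""
    # sar, eo: (N, 1, H, W) tensors, assumed co-registered and equally sized.
    return torch.cat([sar, eo, torch.zeros_like(sar)], dim=1)  # (N, 3, H, W)

# Any stock 3-channel classifier can consume the fused tensor unchanged.
model = torchvision.models.resnet18(num_classes=10)  # class count is illustrative
sar = torch.rand(4, 1, 128, 128)
eo = torch.rand(4, 1, 128, 128)
logits = model(fuse_channels(sar, eo))               # -> shape (4, 10)
```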

17 pages, 18703 KiB  
Article
Irregular Scene Text Detection Based on a Graph Convolutional Network
by Shiyu Zhang, Caiying Zhou, Yonggang Li, Xianchao Zhang, Lihua Ye and Yuanwang Wei
Sensors 2023, 23(3), 1070; https://doi.org/10.3390/s23031070 - 17 Jan 2023
Cited by 6 | Viewed by 2062
Abstract
Detecting irregular or arbitrarily shaped text in natural scene images is a challenging task that has recently attracted considerable attention from research communities. However, limited by the CNN receptive field, existing methods cannot directly capture relations between distant component regions with local convolutional operators. In this paper, we propose a novel method that can effectively and robustly detect irregular text in natural scene images. First, we employ a fully convolutional network architecture based on VGG16_BN to generate text components via the estimated character center points, which ensures a high text component detection recall rate and fewer non-character text components. Second, text line grouping is treated as a problem of inferring the adjacency relations of text components with a graph convolutional network (GCN). Finally, to evaluate our algorithm, we compare it with other existing algorithms in experiments on three public datasets: ICDAR2013, CTW-1500 and MSRA-TD500. The results show that the proposed method handles irregular scene text well and achieves promising results on these three public datasets.
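
To make the grouping step concrete, here is a minimal sketch of casting text-line grouping as link prediction with a GCN: text components are nodes, and the network scores whether pairs of components lie on the same text line. The layer structure, feature dimensions, and pair-scoring head are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Propagate with the symmetrically normalized adjacency:
        # D^-1/2 (A + I) D^-1/2 X W.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d = a.sum(dim=1).rsqrt()
        a = d[:, None] * a * d[None, :]
        return torch.relu(self.lin(a @ x))

class LinkScorer(nn.Module):
    """Scores whether two text components belong to the same text line."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden)
        self.gcn2 = GCNLayer(hidden, hidden)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x, adj, pairs):
        # x: (N, feat_dim) component features; adj: (N, N) candidate graph;
        # pairs: (E, 2) long tensor of candidate component pairs.
        h = self.gcn2(self.gcn1(x, adj), adj)
        e = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], dim=1)
        return torch.sigmoid(self.head(e)).squeeze(1)  # same-line probabilities
```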

15 pages, 15564 KiB  
Article
Super-Resolution Reconstruction Method of Pavement Crack Images Based on an Improved Generative Adversarial Network
by Bo Yuan, Zhaoyun Sun, Lili Pei, Wei Li, Minghang Ding and Xueli Hao
Sensors 2022, 22(23), 9092; https://doi.org/10.3390/s22239092 - 23 Nov 2022
Cited by 7 | Viewed by 2229
Abstract
A super-resolution reconstruction approach based on an improved generative adversarial network is presented to overcome the huge disparities in image quality due to variable equipment and illumination conditions in the image-collecting stage of intelligent pavement detection. The nonlinear network of the generator is first improved, and the Residual Dense Block (RDB) is introduced in place of Batch Normalization (BN). The Attention Module is then formed by combining the RDB, a Gated Recurrent Unit (GRU), and a Conv Layer. Finally, a loss function based on the L1 norm is used to replace the original loss function. The experimental findings demonstrate that the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) of the reconstructed images reach 29.21 dB and 0.854, respectively, on the self-built pavement crack dataset, with improved results also obtained on the Set5, Set14, and BSD100 datasets. Additionally, by employing Faster R-CNN and a Fully Convolutional Network (FCN), the effects of image reconstruction on detection and segmentation are confirmed. The findings indicate that, compared to state-of-the-art methods, the F1 score of the segmentation results is enhanced by 0.012 to 0.737 and the confidence of the detection results is increased by 0.031 to 0.9102. The method has significant engineering application value and can effectively increase pavement crack detection accuracy.
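
For readers unfamiliar with the building block named here, the following is a minimal sketch of a Residual Dense Block used without Batch Normalization, together with the L1 pixel loss the abstract mentions. The layer count, growth rate, and residual scaling are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions with a local residual; no BatchNorm."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)  # 1x1 fusion
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            # Each conv sees the concatenation of all earlier feature maps.
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return x + 0.2 * self.fuse(torch.cat(feats, dim=1))  # scaled local residual

block = ResidualDenseBlock()
out, target = block(torch.rand(1, 64, 48, 48)), torch.rand(1, 64, 48, 48)
pixel_loss = nn.L1Loss()(out, target)  # L1-norm loss in place of the original loss
```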

13 pages, 4768 KiB  
Article
An Improved Frequency Domain Guided Thermal Imager Strips Removal Algorithm Based on LRSID
by Junchen Li, Li Zhong, Zhuoyue Hu and Fansheng Chen
Sensors 2022, 22(19), 7348; https://doi.org/10.3390/s22197348 - 28 Sep 2022
Cited by 2 | Viewed by 1686
Abstract
The thermal imager aboard the Sustainable Development Science Satellite (SDGSAT-1) is mainly used for high-resolution, wide-swath observations of the ground. Because of blind elements and non-uniformity in the detector, and because the system uses a pendulum-sweep imaging mode, stripe noise appears in the images. In this paper, a destriping algorithm based on LRSID (low-rank-based single-image decomposition) is proposed that can effectively remove the horizontal and vertical stripe noise of the thermal imager while maintaining the detail and clarity of the image. First, the obvious light and dark stripes are pre-processed; then, the vertical pinstripes and horizontal stripes are processed with the LRSID algorithm; finally, the striped frequency bands of the original image are replaced in the frequency domain with those of the LRSID-processed image, and an inverse Fourier transform is applied to obtain the final image. Using the proposed method, stripes are removed from both simulated and actual SDGSAT-1 thermal imaging camera remote sensing images, and the visual and quantitative indicators are compared with the processing results of other algorithms. The results show that the proposed algorithm performs best at stripe removal: it can effectively remove horizontal and vertical stripes at the same time while retaining the detail and clarity of the image.
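
The frequency-domain replacement step can be illustrated with a short sketch. The version below, a minimal sketch under assumed details, swaps a narrow cross of frequency bands around the spectrum axes (where stripe energy concentrates) from the original image with the corresponding bands of a destriped image; the band width and mask shape are illustrative choices, not the paper's parameters.

```python
import numpy as np

def replace_stripe_bands(original, destriped, band=5):
    """Swap stripe-carrying frequency bands of `original` with `destriped`'s."""
    F_orig = np.fft.fftshift(np.fft.fft2(original))
    F_dest = np.fft.fftshift(np.fft.fft2(destriped))
    h, w = original.shape
    mask = np.zeros((h, w), dtype=bool)
    # Horizontal stripes concentrate along the vertical frequency axis and
    # vice versa, so mask a narrow cross through the centered spectrum.
    mask[h // 2 - band : h // 2 + band, :] = True
    mask[:, w // 2 - band : w // 2 + band] = True
    # Keep the lowest frequencies (overall brightness) from the original.
    mask[h // 2 - band : h // 2 + band, w // 2 - band : w // 2 + band] = False
    F_orig[mask] = F_dest[mask]
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_orig)))
```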
