Computer Vision and Intelligent Sensing Based on Pattern Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 16090

Special Issue Editors


Prof. Dr. Zhe-Ming Lu
Guest Editor
School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China
Interests: image/video processing and analysis; deep learning; data mining; information security

Prof. Dr. Yi-Jia Zhang
Guest Editor
School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
Interests: image/video processing and analysis; signal processing; information security

Dr. Hao Luo
Guest Editor
School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027, China
Interests: embedded systems; deep learning; signal processing

Special Issue Information

Dear Colleagues,

Deep learning and the new round of artificial intelligence development have greatly advanced computer vision and intelligent sensing; for example, human action pattern recognition based on acceleration sensors has become an emerging research direction in the field of pattern recognition. This Special Issue is oriented towards intelligent algorithms and technologies in the computer vision and sensing fields. Its aim is to share the latest theoretical and technological achievements in intelligent sensing and computer vision and to encourage scientists to publish their experimental and theoretical results in these fields based on pattern recognition, mainly deep learning. Related application areas include image and video analysis and processing, intelligent sensors, intelligent video surveillance, intelligent visual inspection, and security and privacy problems in sensing.

This Special Issue welcomes submissions related to the following research topics:

  • vision research under new imaging conditions;
  • biologically inspired computer vision research;
  • multi-sensor fusion 3D vision research;
  • visual scene understanding in highly dynamic, complex scenes;
  • small-sample target recognition and understanding;
  • complex behavior semantic understanding.

Electronic files and software providing full details of calculation and experimental procedures can be deposited as supplementary material.

Prof. Dr. Zhe-Ming Lu
Prof. Dr. Yi-Jia Zhang
Dr. Hao Luo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • pattern recognition
  • intelligent sensing
  • deep learning
  • image and video analysis and processing
  • intelligent sensors
  • intelligent video surveillance
  • intelligent visual inspection
  • security and privacy in sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

12 pages, 39977 KiB  
Article
Reading Direct-Part Marking Data Matrix Code in the Context of Polymer-Based Additive Manufacturing
by Daniel Matuszczyk and Frank Weichert
Sensors 2023, 23(3), 1619; https://doi.org/10.3390/s23031619 - 2 Feb 2023
Cited by 4 | Viewed by 2932
Abstract
A novel approach is presented for detecting and decoding direct-part-marked, low-contrast data matrix codes on polymer-based, selective-laser-sintering-manufactured parts that is able to run on lightweight devices. Direct-part marking is a concept for labeling parts directly, which can be carried out during the additive manufacturing design process. Because of the low contrast of polymer-based selective laser sintering parts, detecting and reading codes on unicolored parts is a challenging task. To achieve this, codes are first located using a deep-learning-based approach. Afterwards, the calculated regions of interest are passed into an image-encoding network to compute readable standard data matrix codes. To enhance the training process, rendered images improved with a generative adversarial network are used. This process fulfills the traceability task in assembly-line production and is suitable for running on mobile devices such as smartphones or cheap sensors placed in the assembly line. The results show that codes can be localized with 97.38% mean average precision, and a readability of 89.36% is achieved.
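
The abstract describes a three-stage read pipeline: locate the code, re-render it, then decode it with a standard reader. Below is a minimal Python sketch of that flow; `locate_codes` and `enhance` are hypothetical stand-ins for the paper's detector and image-encoding network (which are not published here), while only the final step uses a real library (pylibdmtx).

    # Hypothetical two-stage reader: `locate_codes` and `enhance` are assumed
    # interfaces for the paper's detector and image-encoding network.
    from pylibdmtx.pylibdmtx import decode  # standard data matrix decoder

    def read_dpm_codes(image, locate_codes, enhance):
        """Detect low-contrast codes, re-render them, then decode."""
        results = []
        for x, y, w, h in locate_codes(image):   # stage 1: deep-learning detector
            roi = image[y:y + h, x:x + w]        # crop the region of interest
            clean = enhance(roi)                 # stage 2: image-encoding network
            for msg in decode(clean):            # stage 3: off-the-shelf decoding
                results.append(msg.data)
        return results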

15 pages, 12563 KiB  
Article
An Efficient Dehazing Algorithm Based on the Fusion of Transformer and Convolutional Neural Network
by Jun Xu, Zi-Xuan Chen, Hao Luo and Zhe-Ming Lu
Sensors 2023, 23(1), 43; https://doi.org/10.3390/s23010043 - 21 Dec 2022
Cited by 5 | Viewed by 4128
Abstract
The purpose of image dehazing is to remove the interference of weather factors from degraded images and enhance their clarity and color saturation so as to maximize the restoration of useful features. Single image dehazing is one of the most important tasks in the field of image restoration. In recent years, thanks to the progress of deep learning, single image dehazing has advanced greatly. With the success of the Transformer in advanced computer vision tasks, some studies have also begun to apply it to image dehazing and have obtained surprising results. However, convolutional-neural-network-based and Transformer-based dehazing algorithms each have pronounced advantages and disadvantages. Therefore, this paper proposes a novel Transformer–Convolution fusion dehazing network (TCFDN), which uses the Transformer's global modeling ability and the convolutional neural network's local modeling ability to improve dehazing performance. The network adopts the classic autoencoder structure. This paper proposes a Transformer–Convolution hybrid layer, which uses an adaptive fusion strategy to make full use of the Swin Transformer and the convolutional neural network to extract and reconstruct image features. Building on previous research, this layer further improves the network's ability to remove haze. A series of comparison and ablation experiments not only show that the proposed network outperforms advanced dehazing algorithms but also provide solid evidence for the theory on which it is based.
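
As an illustration of the adaptive fusion idea, here is a minimal PyTorch sketch of a Transformer–Convolution hybrid layer. It is a sketch under stated assumptions, not the paper's implementation: plain multi-head self-attention stands in for the Swin Transformer blocks, and the channel-wise gating is one plausible fusion design.

    # Adaptive Transformer/CNN fusion sketch; `channels` must be divisible
    # by `heads`. Gating weights are learned per sample, as one possible
    # reading of the paper's "adaptive fusion strategy".
    import torch.nn as nn

    class HybridLayer(nn.Module):
        def __init__(self, channels, heads=4):
            super().__init__()
            self.conv = nn.Sequential(             # local-feature (CNN) branch
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.gate = nn.Sequential(             # adaptive per-sample weights
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, 2, 1),
                nn.Softmax(dim=1),
            )

        def forward(self, x):                      # x: (batch, channels, h, w)
            b, c, h, w = x.shape
            local = self.conv(x)                   # local modeling
            tokens = x.flatten(2).transpose(1, 2)  # (b, h*w, c) token sequence
            glob, _ = self.attn(tokens, tokens, tokens)  # global modeling
            glob = glob.transpose(1, 2).reshape(b, c, h, w)
            g = self.gate(x)                       # (b, 2, 1, 1) branch weights
            return g[:, :1] * local + g[:, 1:] * glob   # weighted fusion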

15 pages, 3538 KiB  
Article
Helmet Wearing State Detection Based on Improved Yolov5s
by Yi-Jia Zhang, Fu-Su Xiao and Zhe-Ming Lu
Sensors 2022, 22(24), 9843; https://doi.org/10.3390/s22249843 - 14 Dec 2022
Cited by 13 | Viewed by 2896
Abstract
At many construction sites, whether workers wear helmets is directly related to their safety. Therefore, detecting helmet use has become a crucial monitoring tool for construction safety. However, most current helmet-wearing detection algorithms are only dedicated to distinguishing pedestrians who wear helmets from those who do not. In order to further enrich detection in construction scenes, this paper builds a dataset with six cases: not wearing a helmet; wearing a helmet; just wearing a hat; having a helmet but not wearing it; wearing a helmet correctly; and wearing a helmet without fastening the chin strap. On this basis, this paper proposes a practical algorithm for detecting helmet-wearing states based on the improved YOLOv5s algorithm. Firstly, according to the characteristics of the labels in our dataset, the k-means method is used to redesign the prior-box sizes and match them to the corresponding feature layers, increasing the accuracy of the model's feature extraction; secondly, an additional layer is added to the algorithm to improve the model's ability to recognize small targets; finally, an attention mechanism is introduced, and the CIOU_Loss function in YOLOv5 is replaced by the EIOU_Loss function. The experimental results indicate that the improved algorithm is more accurate than the original YOLOv5s algorithm. In addition, the finer classification also significantly enhances the detection performance of the model.
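
The k-means anchor-redesign step can be sketched in a few lines. This is a generic width–height clustering under assumed inputs (a `wh` array of label-box sizes); the paper's exact distance metric is not given here, and YOLOv5's own autoanchor uses an IoU-based variant.

    # Generic k-means over (width, height) label boxes; `wh` is an assumed
    # (n, 2) float array of box sizes extracted from the dataset labels.
    import numpy as np

    def kmeans_anchors(wh, k=9, iters=100, seed=0):
        """Cluster box sizes into k anchors, sorted small to large."""
        rng = np.random.default_rng(seed)
        anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
        for _ in range(iters):
            # assign each box to its nearest anchor (Euclidean in w-h space)
            dist = ((wh[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
            labels = dist.argmin(1)
            for i in range(k):                    # move anchors to cluster means
                if (labels == i).any():
                    anchors[i] = wh[labels == i].mean(0)
        # sort by area so anchors map onto feature layers, small to large
        return anchors[anchors.prod(1).argsort()]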

14 pages, 1268 KiB  
Article
Self-Supervised Action Representation Learning Based on Asymmetric Skeleton Data Augmentation
by Hualing Zhou, Xi Li, Dahong Xu, Hong Liu, Jianping Guo and Yihan Zhang
Sensors 2022, 22(22), 8989; https://doi.org/10.3390/s22228989 - 20 Nov 2022
Viewed by 1601
Abstract
Contrastive learning has received increasing attention in the field of skeleton-based action representation in recent years. Most contrastive learning methods use simple augmentation strategies to construct pairs of positive samples, and when such pairs are used to learn action representations, deeper feature information cannot be learned, which affects the performance of downstream tasks. To solve this problem of insufficient learning ability, we propose an asymmetric data augmentation strategy and apply it to the training of 3D skeleton-based action representations. First, we carefully study the different characteristics presented by different skeleton views and choose a specific augmentation method for each view. Second, these specific augmentation methods are incorporated into the left and right branches of the asymmetric data augmentation pipeline to increase the convergence difficulty of the contrastive learning task, thereby significantly improving the quality of the learned action representations. Finally, since many augmentations act directly on the joint view, the augmented samples can differ greatly from the original samples; we therefore use random probability activation to transform the joint view, avoiding extreme augmentation of that view. Extensive experiments on the NTU RGB+D datasets show that our method is effective.
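
A hedged sketch of the asymmetric two-branch augmentation follows; the concrete transforms (shear strengths, joint masking, the 0.5 activation probability) are illustrative assumptions rather than the paper's exact settings.

    # Illustrative asymmetric augmentation for skeleton sequences of shape
    # (frames, joints, 3); transform choices are assumptions, not the paper's.
    import random
    import numpy as np

    def shear(x, s):
        """Random shear: off-diagonal perturbation of a 3x3 identity."""
        m = np.eye(3) + np.random.uniform(-s, s, (3, 3)) * (1 - np.eye(3))
        return x @ m.T

    def joint_mask(x, p=0.15):
        """Zero out random joints (a strong joint-view augmentation)."""
        keep = np.random.rand(x.shape[1]) > p
        return x * keep[None, :, None]

    def asymmetric_pair(x, p_strong=0.5):
        """Weak view for one branch, stronger view for the other."""
        left = shear(x, s=0.1)                 # weak branch
        right = shear(x, s=0.3)                # stronger branch
        if random.random() < p_strong:         # random probability activation:
            right = joint_mask(right)          # only sometimes apply the extreme
        return left, right                     # joint-view transform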

17 pages, 7257 KiB  
Article
A Real-Time Cup-Detection Method Based on YOLOv3 for Inventory Management
by Wen-Sheng Wu and Zhe-Ming Lu
Sensors 2022, 22(18), 6956; https://doi.org/10.3390/s22186956 - 14 Sep 2022
Cited by 4 | Viewed by 3253
Abstract
Inventory is the basis of business activities; inventory management helps industries keep their inventories stocked in reasonable quantities, which meets consumer demand while minimizing storage costs. Traditional manual inventory management has low efficiency and high labor costs. In this paper, we used an improved YOLOv3 to detect cups stored on warehouse shelves and counted them to realize automated inventory management. The warehouse images are collected by a camera and transmitted to an industrial computer, which runs the YOLOv3 network. YOLOv3 has three feature maps; the two smaller feature maps and the structure behind them are removed, and the k-means algorithm is used to optimize the default anchor sizes. Moreover, the detection range is limited to a specified area. Experiments show that, by eliminating those two feature maps, the network parameter size is reduced from 235 MB to 212 MB, the detection FPS improves from 48.15 to 54.88, and the mAP improves from 95.65% to 96.65% on our test dataset. The new anchors obtained by the k-means algorithm further improve the mAP to 96.82%. With these improvements, the average detection error rate is reduced to 1.61%. Restricting the detection area eliminates irrelevant items, ensuring the high accuracy of the detection result. The accurately counted number of cups and its changes provide significant data for inventory management.
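
The restricted-detection-area step is simple enough to sketch: keep only the boxes whose centers fall inside the shelf region, then count them. The detector itself (the pruned, single-scale YOLOv3) is assumed and not reproduced; the box and region formats below are illustrative.

    # Count detections whose centers lie inside a specified shelf region.
    # `detections` is an assumed list of (x1, y1, x2, y2, score) pixel boxes.
    def count_cups(detections, roi):
        """Return the cup count and the boxes kept inside `roi`."""
        rx1, ry1, rx2, ry2 = roi
        kept = []
        for x1, y1, x2, y2, score in detections:
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2         # box center
            if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:     # inside shelf area?
                kept.append((x1, y1, x2, y2, score))
        return len(kept), kept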
