
Image/Video Restoration Based on Deep Learning and Its Application in Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 21978

Special Issue Editors


Guest Editors

School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, China
Interests: image/video restoration; deep learning; medical image analysis

Electrical and Information Engineering School, Panzhihua University, Panzhihua 617000, China
Interests: computer vision; deep learning; remote sensing image processing and application

Special Issue Information

Dear Colleagues,

High-quality images/videos are critical for many practical applications, such as remote sensing, medical imaging, and intelligent monitoring. However, images/videos captured in real scenarios may suffer from noise, blur, compression artifacts, etc., which degrade visual quality and decrease the accuracy of content analysis (e.g., object detection and segmentation). Images/videos obtained in bad weather, such as on foggy and rainy days, face similar challenges. Image/video restoration, including super-resolution, denoising, deblurring, and dehazing, aims to recover high-quality outputs from low-quality observations, thereby producing better visual results and benefiting subsequent analysis. It has therefore long been a focus of image/video processing, and numerous approaches have been developed. In particular, deep learning-based models have significantly improved restoration performance in recent years.
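
To make the restoration setting concrete, here is a minimal, hedged PyTorch sketch of a residual denoising network in the spirit of DnCNN: the model predicts the noise residual and subtracts it from the degraded input. All layer sizes are illustrative placeholders; this is not a model from any paper in this Special Issue.

```python
# Minimal residual-CNN denoiser sketch (DnCNN-style). Illustrative only.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, channels=3, features=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # restored = degraded - predicted noise

model = TinyDenoiser()
noisy = torch.rand(1, 3, 64, 64)      # a low-quality observation
restored = model(noisy)               # high-quality estimate (after training)
loss = nn.functional.mse_loss(restored, torch.rand(1, 3, 64, 64))  # vs. a clean target
```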

The goal of this Special Issue is to present recent advances in deep learning-based image/video restoration techniques and their applications in sensing. Authors are welcome to submit research papers, as well as literature reviews, related to image/video restoration, including but not limited to super-resolution, denoising, deblurring, compression artifact reduction, deraining, dehazing, and restoration-assisted object detection and segmentation.

Dr. Honggang Chen
Prof. Dr. Liang-Jian Deng
Dr. Xiaole Zhao
Dr. Yuwei Jin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image/video restoration
  • deep learning
  • super-resolution and video frame interpolation
  • denoising and compression artifact reduction
  • deblurring
  • dehazing and deraining
  • restoration-assisted image/video analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research


17 pages, 2475 KiB  
Article
Lightweight Meter Pointer Recognition Method Based on Improved YOLOv5
by Chi Zhang, Kai Wang, Jie Zhang, Fan Zhou and Le Zou
Sensors 2024, 24(5), 1507; https://doi.org/10.3390/s24051507 - 26 Feb 2024
Viewed by 1019
Abstract
In substation lightning-rod meter reading, classical object detection models are unsuitable for deployment on substation monitoring hardware because of their large size, high parameter counts, and slow detection speed, while existing lightweight object detection models struggle to balance detection accuracy with real-time requirements. To address this problem, this paper constructs a lightweight object detection algorithm, YOLOv5-Meter Reading Lighting (YOLOv5-MRL), which improves the speed of YOLOv5 while maintaining its accuracy. The YOLOv5s model is then pruned with a convolutional-kernel channel soft pruning algorithm, which greatly reduces the number of parameters in the YOLOv5-MRL model at the cost of only a small loss in accuracy. Finally, to facilitate dial reading, a method for fitting the dial's circumscribed circle is proposed, and the dial reading is calculated with a central angle algorithm. Experimental results on a self-built dataset show that the YOLOv5-MRL object detection model achieves a mean average precision of 96.9%, a detection speed of 5 ms/frame, and a model weight size of 5.5 MB, outperforming other advanced dial reading models.
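
The channel soft pruning step can be illustrated with a short, hedged sketch: convolution channels whose kernels have the smallest L1 norms are zeroed rather than removed, so they may recover during later training. The criterion, ratio, and schedule below are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of "soft" channel pruning: the weakest channels (by L1 norm of
# their kernels) are zeroed but kept, so gradients can revive them later.
import torch
import torch.nn as nn

def soft_prune_conv(conv: nn.Conv2d, prune_ratio: float = 0.3) -> None:
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))   # L1 norm per output channel
        k = int(prune_ratio * conv.out_channels)
        if k == 0:
            return
        _, idx = torch.topk(norms, k, largest=False)   # indices of weakest channels
        conv.weight[idx] = 0.0                         # zero, do not remove
        if conv.bias is not None:
            conv.bias[idx] = 0.0

layer = nn.Conv2d(32, 64, kernel_size=3, padding=1)
soft_prune_conv(layer, prune_ratio=0.3)   # e.g., call once per training epoch
```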

20 pages, 6727 KiB  
Article
Real-Time Optimal States Estimation with Inertial and Delayed Visual Measurements for Unmanned Aerial Vehicles
by Xinxin Sun, Chi Zhang, Le Zou and Shanhong Li
Sensors 2023, 23(22), 9074; https://doi.org/10.3390/s23229074 - 9 Nov 2023
Viewed by 1218
Abstract
Motion estimation is a major issue in applications of Unmanned Aerial Vehicles (UAVs). This paper proposes a complete solution to this issue using information from an Inertial Measurement Unit (IMU) and a monocular camera. The solution comprises two steps: visual localization and multisensory data fusion. Attitude information provided by the IMU is used as parameters in the Kalman equations, unlike in purely visual localization methods. The location of the system is then obtained and used as the observation in data fusion. Considering the different update frequencies of the sensors and the delay of visual observations, a multi-rate delay-compensated optimal estimator based on the Kalman filter is presented, which fuses the information and estimates 3D position as well as translational speed. Additionally, the estimator was modified to minimize the computational burden so that it can run onboard in real time. The performance of the overall solution was assessed in field experiments on a quadrotor system and compared with the estimation results of other methods as well as ground truth data. The results illustrate the effectiveness of the proposed method.
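
The core of delay compensation can be sketched under assumed dynamics (a 1-D constant-velocity model, not the paper's estimator): IMU-rate predictions are buffered, a delayed visual fix is fused at the state it actually observed, and the buffered predictions are then replayed. A full implementation would also replay any intermediate measurements.

```python
# Hedged sketch of a delay-compensated Kalman filter (1-D constant velocity).
import numpy as np

F = np.array([[1.0, 0.01], [0.0, 1.0]])   # state transition, dt = 10 ms
Q = np.eye(2) * 1e-4                       # process noise
H = np.array([[1.0, 0.0]])                 # camera measures position only
R = np.array([[1e-2]])                     # measurement noise

x, P = np.zeros(2), np.eye(2)
history = []                               # (x, P) snapshot per IMU step

for step in range(100):
    x, P = F @ x, F @ P @ F.T + Q          # IMU-rate prediction
    history.append((x.copy(), P.copy()))

# A visual fix arrives now but was captured 5 steps ago:
delay, z = 5, np.array([0.3])
x, P = history[-1 - delay]                 # roll back to the observed state
y = z - H @ x                              # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x, P = x + K @ y, (np.eye(2) - K @ H) @ P  # correct at capture time
for _ in range(delay):                     # replay predictions up to "now"
    x, P = F @ x, F @ P @ F.T + Q
```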

15 pages, 4924 KiB  
Article
PMIndoor: Pose Rectified Network and Multiple Loss Functions for Self-Supervised Monocular Indoor Depth Estimation
by Siyu Chen, Ying Zhu and Hong Liu
Sensors 2023, 23(21), 8821; https://doi.org/10.3390/s23218821 - 30 Oct 2023
Cited by 2 | Viewed by 1195
Abstract
Self-supervised monocular depth estimation, which has attained remarkable progress for outdoor scenes in recent years, often faces greater challenges for indoor scenes. These challenges comprise: (i) non-textured regions: indoor scenes often contain large areas of non-textured regions, such as ceilings, walls, and floors, which render the widely adopted photometric loss ambiguous for self-supervised learning; (ii) camera pose: the sensor is mounted on a moving vehicle in outdoor scenes, whereas it is handheld and moves freely in indoor scenes, resulting in complex motions that challenge indoor depth estimation. In this paper, we propose a novel self-supervised indoor depth estimation framework, PMIndoor, that addresses these two challenges. We use multiple loss functions to constrain the depth estimation in non-textured regions. For the camera pose problem, we introduce a pose rectified network that estimates only the rotation transformation between two adjacent frames and improves the pose estimation results with a pose rectified network loss. We also incorporate a multi-head self-attention module in the depth estimation network to enhance the model's accuracy. Extensive experiments conducted on the benchmark indoor dataset NYU Depth V2 demonstrate that our method achieves excellent performance and outperforms previous state-of-the-art methods.
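
One common way to cope with the photometric ambiguity of non-textured regions, sketched below under assumptions that are not PMIndoor's actual loss design, is to mask the photometric loss wherever the image gradient is small, so that flat walls and ceilings contribute little supervision.

```python
# Hedged sketch: photometric loss masked by local image gradient (texture).
import torch

def masked_photometric_loss(pred: torch.Tensor, target: torch.Tensor,
                            grad_thresh: float = 0.02) -> torch.Tensor:
    # image gradients of the target frame (B, C, H, W)
    gx = (target[..., :, 1:] - target[..., :, :-1]).abs().mean(1, keepdim=True)
    gy = (target[..., 1:, :] - target[..., :-1, :]).abs().mean(1, keepdim=True)
    texture = (gx[..., :-1, :] + gy[..., :, :-1]) / 2        # align shapes
    mask = (texture > grad_thresh).float()                   # 1 = textured pixel
    err = (pred - target).abs().mean(1, keepdim=True)[..., :-1, :-1]
    return (err * mask).sum() / mask.sum().clamp(min=1.0)

loss = masked_photometric_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```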

18 pages, 8668 KiB  
Article
Atmospheric Turbulence Degraded Video Restoration with Recurrent GAN (ATVR-GAN)
by Bar Ettedgui and Yitzhak Yitzhaky
Sensors 2023, 23(21), 8815; https://doi.org/10.3390/s23218815 - 30 Oct 2023
Cited by 3 | Viewed by 1191
Abstract
Atmospheric turbulence (AT) can change the path and direction of light during video capture of a distant target due to the random motion of the turbulent medium, a phenomenon most noticeable when shooting video at long range, resulting in severe dynamic distortion and blur. To mitigate geometric distortion and reduce spatially and temporally varying blur, we propose a novel Atmospheric Turbulence Video Restoration Generative Adversarial Network (ATVR-GAN) with a specialized Recurrent Neural Network (RNN) generator, which is trained to predict the scene's turbulent optical flow (OF) field and uses a recurrent structure to capture both spatial and temporal dependencies. The architecture is trained with a newly combined loss function that accounts for spatiotemporal distortions and is tailored to the AT problem. Our network was tested on synthetic and real imaging data and compared against leading algorithms in the fields of AT mitigation and image restoration. The proposed method outperformed these algorithms on both the synthetic and real data examined.
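
A minimal skeleton of the recurrent de-warping idea follows: a hidden state carries temporal context across frames, and a predicted per-pixel flow field resamples (de-warps) the distorted frame. The module name `RecurrentDewarper` and all sizes are hypothetical; ATVR-GAN's actual generator, discriminator, and losses are far richer.

```python
# Hedged skeleton of a recurrent de-warping generator. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class RecurrentDewarper(nn.Module):
    def __init__(self, feats=16):
        super().__init__()
        self.enc = nn.Conv2d(3 + feats, feats, 3, padding=1)
        self.flow_head = nn.Conv2d(feats, 2, 3, padding=1)  # (dx, dy) per pixel

    def forward(self, frame, hidden):
        h = torch.tanh(self.enc(torch.cat([frame, hidden], dim=1)))
        flow = self.flow_head(h)
        # build a sampling grid displaced by the predicted flow
        B, _, H, W = frame.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
        grid = grid + flow.permute(0, 2, 3, 1)
        return Fn.grid_sample(frame, grid, align_corners=True), h

net = RecurrentDewarper()
hidden = torch.zeros(1, 16, 64, 64)
for frame in torch.rand(5, 1, 3, 64, 64):     # a short distorted sequence
    restored, hidden = net(frame, hidden)     # hidden state carries context
```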

18 pages, 3224 KiB  
Article
Text Recognition Model Based on Multi-Scale Fusion CRNN
by Le Zou, Zhihuang He, Kai Wang, Zhize Wu, Yifan Wang, Guanhong Zhang and Xiaofeng Wang
Sensors 2023, 23(16), 7034; https://doi.org/10.3390/s23167034 - 8 Aug 2023
Cited by 4 | Viewed by 2173
Abstract
Scene text recognition is a crucial area of research in computer vision. However, current mainstream scene text recognition models suffer from incomplete feature extraction because a small downsampling scale is used to extract features and retain more detail. This limitation hampers their ability to extract complete features of each character in the image, resulting in lower accuracy in text recognition. To address this issue, a novel text recognition model based on multi-scale fusion and the convolutional recurrent neural network (CRNN) is proposed in this paper. The proposed model consists of a convolutional layer, a feature fusion layer, a recurrent layer, and a transcription layer. The convolutional layer extracts features at two scales, deriving two distinct outputs for the input text image. The feature fusion layer fuses the two scales of features into a new feature. The recurrent layer learns contextual features from the input feature sequence. The transcription layer outputs the final result. The proposed model not only expands the recognition field but also learns more image features at different scales; it thus extracts a more complete set of features and achieves better text recognition. Experimental results demonstrate that the proposed model outperforms the CRNN model in text recognition accuracy on scene text datasets such as Street View Text, IIIT-5K, ICDAR2003, and ICDAR2013.
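
The two-scale idea can be sketched as follows: one convolutional branch downsamples more aggressively than the other, the feature maps are fused, and a recurrent layer reads the fused sequence. All layer sizes and the class count are invented for illustration and do not reproduce the paper's CRNN variant.

```python
# Hedged two-scale CRNN sketch: fine + coarse branches, fusion, BiLSTM.
import torch
import torch.nn as nn

class TwoScaleCRNN(nn.Module):
    def __init__(self, n_classes=37, feats=64):
        super().__init__()
        self.fine = nn.Sequential(nn.Conv2d(1, feats, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d((2, 1)))     # keep width
        self.coarse = nn.Sequential(nn.Conv2d(1, feats, 3, stride=(2, 1), padding=1),
                                    nn.ReLU())
        self.fuse = nn.Conv2d(2 * feats, feats, 1)
        self.rnn = nn.LSTM(feats, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_classes)                            # CTC logits

    def forward(self, x):                       # x: (B, 1, 32, W) grayscale strip
        f = self.fuse(torch.cat([self.fine(x), self.coarse(x)], dim=1))
        f = f.mean(dim=2)                       # collapse height -> (B, C, W)
        seq, _ = self.rnn(f.transpose(1, 2))    # (B, W, 2*128)
        return self.fc(seq)                     # per-timestep class logits

logits = TwoScaleCRNN()(torch.rand(2, 1, 32, 100))   # (2, 100, 37)
```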

19 pages, 2200 KiB  
Article
Automatic Recognition Reading Method of Pointer Meter Based on YOLOv5-MR Model
by Le Zou, Kai Wang, Xiaofeng Wang, Jie Zhang, Rui Li and Zhize Wu
Sensors 2023, 23(14), 6644; https://doi.org/10.3390/s23146644 - 24 Jul 2023
Cited by 10 | Viewed by 2699
Abstract
Meter reading is an important part of intelligent inspection, and current meter reading methods based on target detection suffer from low accuracy and large errors. To improve the accuracy of automatic meter reading, this paper proposes an automatic reading method for pointer-type meters based on the YOLOv5-Meter Reading (YOLOv5-MR) model. First, to improve the detection of small targets in the YOLOv5 framework, a multi-scale target detection layer is added, and a set of anchors is designed based on the lightning-rod dial dataset; second, the loss function and up-sampling method are improved to speed up training convergence and obtain optimal up-sampling parameters; finally, a new method for fitting the dial's circumscribed circle is proposed, and the dial reading is calculated with a central angle algorithm. Experimental results on the self-built dataset show that the mean average precision (mAP) of the YOLOv5-MR target detection model reaches 79%, 3% better than the YOLOv5 model, outperforming other advanced pointer-type meter reading models.
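
The reading step can be illustrated with a hedged sketch: fit the dial's circumscribed circle by algebraic least squares, then convert the pointer angle, measured from the scale's zero mark, into a value via the central-angle proportion. The fitting method (a Kasa fit), the full-scale value, and the sweep below are assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of circle fitting + central-angle dial reading.
import numpy as np

def fit_circle(pts: np.ndarray):
    """Algebraic (Kasa) circle fit: pts is (N, 2) -> (cx, cy, r)."""
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def angle_to_reading(pointer_tip, zero_mark, center,
                     full_scale=1.6, sweep_deg=270.0):
    """Reading = (angle from zero mark / total scale sweep) * full-scale value."""
    a0 = np.arctan2(zero_mark[1] - center[1], zero_mark[0] - center[0])
    a1 = np.arctan2(pointer_tip[1] - center[1], pointer_tip[0] - center[0])
    sweep = np.degrees((a1 - a0) % (2 * np.pi))    # rotation direction and image
    return full_scale * sweep / sweep_deg          # y-axis convention need care

theta = np.linspace(0, 2 * np.pi, 50)
pts = np.column_stack([100 + 40 * np.cos(theta), 120 + 40 * np.sin(theta)])
cx, cy, r = fit_circle(pts)                        # recovers ~ (100, 120, 40)
print(angle_to_reading((140, 120), (100, 160), (cx, cy)))
```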

Review


22 pages, 32953 KiB  
Review
A Survey of Deep Learning-Based Low-Light Image Enhancement
by Zhen Tian, Peixin Qu, Jielin Li, Yukun Sun, Guohou Li, Zheng Liang and Weidong Zhang
Sensors 2023, 23(18), 7763; https://doi.org/10.3390/s23187763 - 8 Sep 2023
Cited by 14 | Viewed by 10939
Abstract
Images captured under poor lighting conditions often suffer from low brightness, low contrast, color distortion, and noise. The purpose of low-light image enhancement is to improve the visual quality of such images for subsequent processing. With the development of artificial intelligence technology, deep learning has been applied ever more widely in image processing; we therefore provide a comprehensive review of low-light image enhancement in terms of network structure, training data, and evaluation metrics. This paper systematically introduces deep learning-based low-light image enhancement in four aspects. First, we introduce the related deep learning-based enhancement methods. We then describe low-light image quality evaluation methods, organize the low-light image datasets, and finally compare and analyze the advantages and disadvantages of the related methods and offer an outlook on future development directions.
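
For intuition only (this is not a method from the survey), the baseline below shows the kind of pixel-wise brightening curve that deep enhancement networks can be seen as generalizing into learned, content-adaptive mappings with added denoising and color correction.

```python
# Hedged classical baseline: gamma correction brightens shadows nonlinearly.
import numpy as np

def gamma_enhance(img: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """img: float array in [0, 1]; gamma < 1 brightens dark regions."""
    return np.clip(img, 0.0, 1.0) ** gamma

dark = np.random.rand(64, 64, 3) * 0.2   # a synthetic low-light image
bright = gamma_enhance(dark, gamma=0.45)
```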
