
Advances and Applications of Digital Image Processing and Deep Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (25 September 2023) | Viewed by 7011

Special Issue Editor


Guest Editor
School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, Nibong Tebal 14300, Malaysia
Interests: digital image processing; digital signal processing; deep learning

Special Issue Information

Dear Colleagues,

With the rapid development of imaging hardware and computing technology, the intelligent use of digital images in everyday life has become the norm. However, existing image processing algorithms still have shortcomings and leave room for improvement. Recent work has integrated deep learning with image processing to address these problems and enable more creative applications. This Special Issue is therefore intended as a platform for researchers to present new ideas and experimental results on advances in digital image processing, especially approaches that employ deep learning.

The Special Issue will publish high-quality, original research papers, including but not restricted to:

  • Digital image contrast enhancement.
  • Digital noise reduction.
  • Digital image segmentation.
  • Digital image inpainting.
  • Digital image super-resolution.
  • Data visualization.
  • Applications of digital image processing, such as in medical imaging, consumer electronics, and biometrics.

Dr. Haidi Ibrahim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital image processing
  • deep learning
  • contrast enhancement
  • noise reduction
  • image inpainting
  • image super-resolution

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

15 pages, 5820 KiB  
Article
Visual Ranging Based on Object Detection Bounding Box Optimization
by Zhou Shi, Zhongguo Li, Sai Che, Miaowei Gao and Hongchuan Tang
Appl. Sci. 2023, 13(19), 10578; https://doi.org/10.3390/app131910578 - 22 Sep 2023
Cited by 1 | Viewed by 1105
Abstract
Faster and more accurate ranging can be achieved by combining deep-learning-based object detection with conventional visual ranging. However, scene changes, uneven lighting, fuzzy object boundaries, and other factors may cause the detection bounding box not to fit the object tightly. The resulting pixel spacing between the bounding box and the object introduces ranging errors. To reduce this pixel spacing, improve the fit between the bounding box and the object, and improve ranging accuracy, an object detection bounding box optimization method is proposed. Two evaluation indicators, WOV and HOV, are also proposed to evaluate the results of bounding box optimization. The experimental results show that the pixel width of the bounding box is optimized by 1.19~19.24% and the pixel height by 0~12.14%, and ranging experiments confirm that the optimized bounding box improves ranging accuracy. In addition, few practical monocular ranging techniques can determine the distance to an object of unknown size. Therefore, a similar-triangle ranging technique based on height difference is proposed to measure the distance to objects of unknown size. A ranging experiment based on the optimized detection bounding box shows that the relative ranging error within 6 m is between 0.7% and 2.47%, allowing for precise distance measurement.
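
The similar-triangle idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a camera mounted at a known height above a flat ground plane and treats the bottom edge of the (optimized) bounding box as the object's ground-contact row, so that distance follows from the camera-to-ground height difference and the pixel offset of that row below the principal point. All names and numbers are illustrative.

```python
# Illustrative sketch only (not the paper's method): monocular ground-plane
# ranging by similar triangles, assuming known camera height and intrinsics.

def estimate_distance(v_bottom: float,
                      camera_height_m: float,
                      focal_length_px: float,
                      principal_point_v: float) -> float:
    """Estimate the horizontal distance to an object from the image row of
    its ground-contact point (e.g. the bottom edge of an optimized bounding
    box). By similar triangles, the camera-to-ground height difference maps
    to the pixel offset below the principal point:
        distance = f * H / (v_bottom - c_v)
    """
    pixel_offset = v_bottom - principal_point_v
    if pixel_offset <= 0:
        raise ValueError("Contact point must lie below the principal point.")
    return focal_length_px * camera_height_m / pixel_offset


# Example with made-up values: camera 1.2 m above ground, f = 1000 px,
# principal point row 540, bounding-box bottom edge at row 780.
print(round(estimate_distance(780, 1.2, 1000.0, 540.0), 2))  # ~5.0 m
```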

18 pages, 4099 KiB  
Article
Novel Paintings from the Latent Diffusion Model through Transfer Learning
by Dayin Wang, Chong Ma and Siwen Sun
Appl. Sci. 2023, 13(18), 10379; https://doi.org/10.3390/app131810379 - 16 Sep 2023
Cited by 1 | Viewed by 2087
Abstract
With the development of deep learning, image synthesis has made unprecedented progress in the past few years. Image synthesis models, represented by diffusion models, have demonstrated stable and high-fidelity image generation. However, the traditional diffusion model operates in pixel space, which is memory- and compute-intensive. Therefore, to reduce the computational cost and improve the accessibility of diffusion models, we train the diffusion model in latent space. In this paper, we are devoted to creating novel paintings from existing paintings based on powerful diffusion models. Because the latent diffusion model adopts cross-attention layers, we can create novel paintings conditioned on text prompts. However, directly training the diffusion model on a limited dataset is non-trivial. Therefore, inspired by transfer learning, we train the diffusion model from pre-trained weights, which eases the training process and enhances the image synthesis results. Additionally, we introduce the GPT-2 model to expand text prompts for detailed image generation. To validate the performance of our model, we train it on paintings of a specific artist from the WikiArt dataset. To compensate for the missing image context descriptions in WikiArt, we adopt a pre-trained language model to generate corresponding descriptions automatically, clean wrong descriptions manually, and will make the resulting dataset publicly available. Experimental results demonstrate the capacity and effectiveness of the model.
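
As a rough illustration of the pipeline described in this abstract (not the authors' code), the sketch below expands a short text prompt with GPT-2 and feeds it to a publicly available latent diffusion checkpoint via the Hugging Face diffusers library. In the paper the diffusion model is additionally fine-tuned on a single artist's WikiArt paintings starting from pre-trained weights; that training step is omitted here, and the checkpoint name and generation settings are placeholders.

```python
# Hedged sketch of the inference-time pipeline: GPT-2 prompt expansion
# followed by latent diffusion image generation. Model names and settings
# are illustrative placeholders, not the configuration used in the paper.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# 1) Expand a short prompt into a more detailed description with GPT-2.
expander = pipeline("text-generation", model="gpt2")
short_prompt = "an impressionist painting of a harbor at dusk"
expanded = expander(short_prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]

# 2) Generate an image with a pre-trained latent diffusion model.
#    (The paper fine-tunes such a model on one artist's paintings; a generic
#    public checkpoint stands in for that fine-tuned model here.)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(expanded, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("novel_painting.png")
```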

16 pages, 5085 KiB  
Article
Vision-Based Hand Detection and Tracking Using Fusion of Kernelized Correlation Filter and Single-Shot Detection
by Mohd Norzali Haji Mohd, Mohd Shahrimie Mohd Asaari, Ong Lay Ping and Bakhtiar Affendi Rosdi
Appl. Sci. 2023, 13(13), 7433; https://doi.org/10.3390/app13137433 - 23 Jun 2023
Cited by 6 | Viewed by 3309
Abstract
Hand detection and tracking are key components in many computer vision applications, including hand pose estimation and gesture recognition for human–computer interaction systems, virtual reality, and augmented reality. Despite their importance, reliable hand detection in cluttered scenes remains a challenge. This study explores the use of deep learning techniques for fast and robust hand detection and tracking. A novel algorithm is proposed by combining the Kernelized Correlation Filter (KCF) tracker with the Single-Shot Detection (SSD) method. This integration enables the detection and tracking of hands in challenging environments, such as cluttered backgrounds and occlusions. The SSD algorithm helps reinitialize the KCF tracker when it fails or encounters drift issues due to sudden changes in hand gestures or fast movements. Testing in challenging scenes showed that the proposed tracker achieved a tracking rate of over 90% and a speed of 17 frames per second (FPS). Comparison with the KCF tracker on 17 video sequences revealed an average improvement of 13.31% in tracking detection rate (TRDR) and 27.04% in object detection error (OTE). Additional comparison with MediaPipe hand tracker on 10 hand gesture videos taken from the Intelligent Biometric Group Hand Tracking (IBGHT) dataset showed that the proposed method outperformed the MediaPipe hand tracker in terms of overall TRDR and tracking speed. The results demonstrate the promising potential of the proposed method for long-sequence tracking stability, reducing drift issues, and improving tracking performance during occlusions.
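
The detector-tracker fusion logic can be sketched roughly as follows. This is a simplified illustration, not the authors' implementation: a KCF tracker handles frame-to-frame tracking, and whenever it fails, or at a fixed re-detection interval intended to catch drift, an SSD hand detector re-initializes it. The `ssd_detect_hand` function and the re-detection interval are hypothetical placeholders.

```python
# Simplified sketch of SSD + KCF fusion for hand tracking (illustrative only).
# Requires opencv-contrib-python for the KCF tracker. ssd_detect_hand() is a
# hypothetical placeholder that should return a hand bounding box (x, y, w, h)
# or None, e.g. from an SSD model loaded with cv2.dnn.
import cv2

REDETECT_EVERY = 30  # periodically re-detect to correct tracker drift

def ssd_detect_hand(frame):
    raise NotImplementedError("Run the SSD hand detector on the frame here.")

def track_hands(video_path: str):
    cap = cv2.VideoCapture(video_path)
    tracker, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        bbox = None
        if tracker is not None:
            ok, bbox = tracker.update(frame)     # KCF frame-to-frame tracking
            if not ok:
                tracker, bbox = None, None       # tracking failure -> re-detect
        if tracker is None or frame_idx % REDETECT_EVERY == 0:
            detection = ssd_detect_hand(frame)   # SSD re-initializes the tracker
            if detection is not None:
                tracker = cv2.TrackerKCF_create()
                tracker.init(frame, detection)
                bbox = detection
        if bbox is not None:
            x, y, w, h = map(int, bbox)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        frame_idx += 1
    cap.release()
```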
