Computers, Volume 13, Issue 12 (December 2024) – 2 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 1890 KiB  
Article
Integrating Few-Shot Learning and Multimodal Image Enhancement in GNut: A Novel Approach to Groundnut Leaf Disease Detection
by Imran Qureshi
Computers 2024, 13(12), 306; https://doi.org/10.3390/computers13120306 - 22 Nov 2024
Abstract
Groundnut is a vital crop worldwide, but its production is significantly threatened by various leaf diseases. Early identification of such diseases is vital for maintaining agricultural productivity. Deep learning techniques have been employed to address this challenge and enhance the detection, recognition, and classification of groundnut leaf diseases, ensuring better management and protection of this important crop. This paper presents a new approach to the detection and classification of groundnut leaf diseases using an advanced deep learning model, GNut, which integrates ResNet50 and DenseNet121 architectures for feature extraction and Few-Shot Learning (FSL) for classification. The proposed model provides an efficient and highly accurate method of managing groundnut crop diseases. Evaluated on a novel Pak-Nuts dataset collected from groundnut fields in Pakistan, the GNut model achieves promising accuracy rates of 99% with FSL and 95% without it. Advanced image preprocessing techniques, such as Multi-Scale Retinex with Color Restoration, Adaptive Histogram Equalization, and Multimodal Image Enhancement for Vegetative Feature Isolation, were employed to enhance the quality of input data, further improving classification accuracy. These results illustrate the robustness of the proposed model in real agricultural applications, establishing a new benchmark for groundnut leaf disease detection and highlighting the potential of AI-powered solutions to encourage sustainable agricultural practices.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
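The pipeline the abstract describes, dual-backbone feature extraction followed by few-shot classification, can be sketched in miniature. The example below is a toy illustration, not the paper's actual code: random vectors stand in for ResNet50/DenseNet121 features, concatenation stands in for the fusion step, and a simple prototypical-network-style nearest-prototype rule stands in for the FSL classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(f_resnet, f_densenet):
    # One common fusion choice: concatenate the two backbones' feature
    # vectors (the paper's exact fusion strategy may differ).
    return np.concatenate([f_resnet, f_densenet], axis=1)

def prototypes(support, labels):
    # Few-shot classification, prototypical-network style: each class
    # prototype is the mean of its support-set embeddings.
    classes = np.unique(labels)
    return classes, np.stack([support[labels == c].mean(axis=0) for c in classes])

def classify(query, classes, protos):
    # Assign each query embedding to the nearest prototype (Euclidean distance).
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy 3-way, 5-shot episode: stand-in "fused" features cluster by class.
labels = np.repeat([0, 1, 2], 5)
support = fuse(rng.normal(size=(15, 4)), rng.normal(size=(15, 4))) + labels[:, None] * 5.0
classes, protos = prototypes(support, labels)
query = fuse(rng.normal(size=(3, 4)), rng.normal(size=(3, 4))) + np.array([[0.0], [5.0], [10.0]])
print(classify(query, classes, protos))  # → [0 1 2]
```

The appeal of this style of few-shot classifier for a setting like Pak-Nuts is that new disease classes need only a handful of labeled support images, with no retraining of the backbones.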
23 pages, 2110 KiB  
Article
Novel Advance Image Caption Generation Utilizing Vision Transformer and Generative Adversarial Networks
by Shourya Tyagi, Olukayode Ayodele Oki, Vineet Verma, Swati Gupta, Meenu Vijarania, Joseph Bamidele Awotunde and Abdulrauph Olanrewaju Babatunde
Computers 2024, 13(12), 305; https://doi.org/10.3390/computers13120305 - 22 Nov 2024
Abstract
In this paper, we propose a novel method for producing image captions through the utilization of Generative Adversarial Networks (GANs) and Vision Transformers (ViTs) using our proposed Image Captioning Utilizing Transformer and GAN (ICTGAN) model. Here we use the efficient representation learning of the ViTs to improve the realistic image production of the GAN. Our proposed model combines salient information extracted from images using ViTs with textual features from an LSTM-based language model. This merging of features is made possible by a self-attention mechanism, which enables the model to efficiently take in and process data from both textual and visual sources. We perform various tests on the MS COCO and Flickr30k datasets, which are popular benchmarks for image-captioning tasks, to verify the effectiveness of our proposed model. The results show that, on these datasets, our algorithm outperforms other approaches in terms of relevance, diversity, and caption quality. Moreover, our model is robust to changes in the content and style of the images, demonstrating its excellent generalization skills. We also explain the benefits of our method, which include better visual–textual alignment, better caption coherence, and better handling of complicated scenarios. All things considered, our work represents a significant step forward in the field of image caption generation, offering a complete solution that leverages the complementary advantages of GANs and ViT-based self-attention models. This work pushes the limits of what is currently possible in image caption generation, setting a new standard in the field.
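The cross-modal merging step the abstract describes, attention over ViT image features and LSTM textual features, can be illustrated with a minimal sketch. All names and dimensions below are illustrative assumptions, not taken from the ICTGAN code: random arrays stand in for ViT patch embeddings and LSTM hidden states.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(queries, keys, values):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    d = queries.shape[-1]
    weights = softmax(queries @ keys.T / np.sqrt(d))
    return weights @ values, weights

rng = np.random.default_rng(1)
patches = rng.normal(size=(197, 64))  # stand-in ViT output: 196 patches + [CLS], toy 64-d
tokens = rng.normal(size=(12, 64))    # stand-in LSTM hidden states for 12 caption tokens

# Each caption token attends over all image patches, producing a
# visually grounded token representation for the caption generator.
fused, weights = attend(tokens, patches, patches)
print(fused.shape)  # (12, 64)
```

The attention weights make the visual–textual alignment the abstract mentions inspectable: row *i* of `weights` shows which image patches token *i* draws on.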