This is an early access version; the complete PDF, HTML, and XML versions will be available soon.
Article

DCFA-YOLO: A Dual-Channel Cross-Feature-Fusion Attention YOLO Network for Cherry Tomato Bunch Detection

1 College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, China
2 School of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Agriculture 2025, 15(3), 271; https://doi.org/10.3390/agriculture15030271
Submission received: 5 January 2025 / Revised: 22 January 2025 / Accepted: 24 January 2025 / Published: 26 January 2025
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

Abstract

To better utilize multimodal information in agricultural applications, this paper proposes a cherry tomato bunch detection network based on dual-channel cross-feature fusion. It aims to improve detection performance by exploiting the complementary information of color and depth images. Using YOLOv8_n as the baseline framework, it incorporates a dual-channel cross-fusion attention mechanism for multimodal feature extraction and fusion. In the backbone network, a ShuffleNetV2 unit is adopted to improve the efficiency of initial feature extraction. During the feature fusion stage, two modules are introduced that use re-parameterization, dynamic weighting, and efficient concatenation to strengthen the representation of multimodal information. Meanwhile, the CBAM attention mechanism is integrated at different feature extraction stages, combined with the improved SPPF_CBAM module, to enhance the focus on and representation of critical features. Experimental results on a dataset collected in a commercial greenhouse demonstrate that DCFA-YOLO excels in cherry tomato bunch detection, achieving an mAP50 of 96.5%, a significant improvement over the baseline model, while substantially reducing computational complexity. Furthermore, comparisons with other state-of-the-art YOLO variants and object detection models validate its detection performance. Running at 52 fps on a regular computer, DCFA-YOLO provides an efficient multimodal fusion solution for real-time fruit detection in robotic harvesting.
Keywords: cherry tomato bunch detection; robotic harvesting; multimodal image; feature extraction; feature fusion; YOLO network
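To make the fusion idea concrete, the sketch below shows one way a dual-channel cross-fusion block with CBAM-style attention and dynamic per-modality weighting could be organized in PyTorch. The module layout, layer sizes, and the learned scalar fusion weights are illustrative assumptions for exposition only; they do not reproduce the authors' published DCFA-YOLO implementation.

```python
# A minimal sketch of dual-channel (RGB + depth) cross-feature fusion with
# CBAM-style attention. All layer sizes and the fusion scheme are assumptions
# for illustration, not the paper's exact DCFA-YOLO modules.
import torch
import torch.nn as nn


class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))


class CrossFusion(nn.Module):
    """Attend to each modality, weight them dynamically, then concatenate and project."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn_rgb = CBAM(channels)
        self.attn_depth = CBAM(channels)
        self.w = nn.Parameter(torch.ones(2))  # learned per-modality weights (assumption)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.w, dim=0)
        fused = torch.cat([w[0] * self.attn_rgb(rgb), w[1] * self.attn_depth(depth)], dim=1)
        return self.proj(fused)


if __name__ == "__main__":
    rgb_feat = torch.randn(1, 64, 80, 80)    # features from the color branch
    depth_feat = torch.randn(1, 64, 80, 80)  # features from the depth branch
    out = CrossFusion(64)(rgb_feat, depth_feat)
    print(out.shape)  # torch.Size([1, 64, 80, 80])
```

In a YOLO-style detector, a block like this would sit between the two backbone branches and the neck, so that each detection scale receives features already fused from both modalities.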

Share and Cite

MDPI and ACS Style

Chai, S.; Wen, M.; Li, P.; Zeng, Z.; Tian, Y. DCFA-YOLO: A Dual-Channel Cross-Feature-Fusion Attention YOLO Network for Cherry Tomato Bunch Detection. Agriculture 2025, 15, 271. https://doi.org/10.3390/agriculture15030271


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
