Development of Smart and Lean Pick-and-Place System Using EfficientDet-Lite for Custom Dataset
Abstract
1. Introduction
- The addition of 8% optimized bright Alpha3 images resulted in a 7.5% increase in Average Precision and a 6.3% increase in F1-score.
- High detection scores above 80% and a low variance of 1.65 were obtained using a 135-degree lamp angle and Level 0 illumination, as defined by the Japanese Industrial Standard (JIS).
- An in-depth analysis of EfficientDet-Lite models with training batch sizes of 4, 8, and 16 showed that batch size 4 performed best, with an overall mean Average Precision of 66.8% and a low standard deviation of 6.23%.
2. Materials and Methods
2.1. Materials and Measurements Setup
2.2. Image Optimization Process to Improve Mean Average Precision
2.3. Illumination Level Setup to Improve Detection Scores
2.4. Training Batch Size Configuration to Improve Mean Average Precision
3. Results
3.1. Results of Optimized Bright Images on Average Precision
3.2. Results of Illumination Level on Detection Scores
3.3. Results of Variation of Batch Size on Average Precision
3.4. Statistical Analysis on Variation of Batch Size
3.5. Performance Validation
3.6. Comparison of mAP with COCO2017 Validation Dataset
4. Discussions
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Leung, H.K.; Chen, X.-Z.; Yu, C.-W.; Liang, H.-Y.; Wu, J.-Y.; Chen, Y.-L. A deep-learning-based vehicle detection approach for insufficient and nighttime illumination conditions. Appl. Sci. 2019, 9, 4769.
- Bencak, P.; Vincetič, U.; Lerher, T. Product Assembly Assistance System Based on Pick-to-Light and Computer Vision Technology. Sensors 2022, 22, 9769.
- Yin, X.; Fan, X.; Zhu, W.; Liu, R. Synchronous AR Assembly Assistance and Monitoring System Based on Ego-Centric Vision. Assem. Autom. 2019, 39, 1–16.
- Zhao, W.; Jiang, C.; An, Y.; Yan, X.; Dai, C. Study on a Low-Illumination Enhancement Method for Online Monitoring Images Considering Multiple-Exposure Image Sequence Fusion. Electronics 2023, 12, 2654.
- Kee, E.; Jie, C.J.; Jie, C.Z.; Lau, M. Low-cost and sustainable Pick and Place solution by machine vision assistance. In Proceedings of the 25th International Conference on Mechatronics Technology (ICMT), Kaohsiung, Taiwan, 18–21 November 2022.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
- Kim, H.; Choi, Y. Lab Scale Model Experiment of Smart Hopper System to Remove Blockages Using Machine Vision and Collaborative Robot. Appl. Sci. 2022, 12, 579.
- Jørgensen, T.B.; Jensen, S.H.N.; Aanæs, H.; Hansen, N.W.; Krüger, N. An adaptive robotic system for doing pick and place operations with deformable objects. J. Intell. Robot. Syst. 2019, 94, 81–100.
- Luo, H.; Li, C.; Wu, M.; Cai, L. An Enhanced Lightweight Network for Road Damage Detection Based on Deep Learning. Electronics 2023, 12, 2583.
- Jain, S. DeepSeaNet: Improving Underwater Object Detection using EfficientDet. arXiv 2023, arXiv:2306.06075.
- Čirjak, D.; Aleksi, I.; Lemic, D.; Pajač Živković, I. EfficientDet-4 Deep Neural Network-Based Remote Monitoring of Codling Moth Population for Early Damage Detection in Apple Orchard. Agriculture 2023, 13, 961.
- Wu, C.; Chen, L.; Wu, S. A Novel Metric-Learning-Based Method for Multi-Instance Textureless Objects’ 6D Pose Estimation. Appl. Sci. 2021, 11, 10531.
- Chakole, S.; Ukani, N. Low-Cost Vision System for Pick and Place application using camera and ABB Industrial Robot. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020.
- Konaite, M.; Owolawi, P.A.; Mapayi, T.; Malele, V.; Odeyemi, K.; Aiyetoro, G.; Ojo, J.S. Smart Hat for the blind with Real-Time Object Detection using Raspberry Pi and TensorFlow Lite. In Proceedings of the International Conference on Artificial Intelligence and Its Applications, Virtual, 9–10 December 2021.
- Barayan, M.A.; Qawas, A.A.; Alghamdi, A.S.; Alkhallagi, T.S.; Al-Dabbagh, R.A.; Aldabbagh, G.A.; Linjawi, A.I. Effectiveness of Machine Learning in Assessing the Diagnostic Quality of Bitewing Radiographs. Appl. Sci. 2022, 12, 9588.
- Benhamida, A.; Várkonyi-Kóczy, A.R.; Kozlovszky, M. Traffic Signs Recognition in a mobile-based application using TensorFlow and Transfer Learning technics. In Proceedings of the IEEE 15th International Conference of System of Systems Engineering (SoSE), Budapest, Hungary, 2–4 June 2020.
- Dua, S.; Kumar, S.S.; Albagory, Y.; Ramalingam, R.; Dumka, A.; Singh, R.; Rashid, M.; Gehlot, A.; Alshamrani, S.S.; AlGhamdi, A.S. Developing a Speech Recognition System for Recognizing Tonal Speech Signals Using a Convolutional Neural Network. Appl. Sci. 2022, 12, 6223.
- Kim, I.S.; Jeong, Y.; Kim, S.H.; Jang, J.S.; Jung, S.K. Deep Learning based Effective Surveillance System for Low-Illumination Environments. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019.
- Nagata, F.; Miki, K.; Watanabe, K.; Habib, M.K. Visual Feedback Control and Transfer Learning-Based CNN for a Pick and Place Robot on a Sliding Rail. In Proceedings of the 2021 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 8–11 August 2021; pp. 697–702.
- Malik, A.A.; Andersen, M.V.; Bilberg, A. Advances in machine vision for flexible feeding of assembly parts. Procedia Manuf. 2019, 38, 1228–1235.
- TensorFlow Lite Model Maker. Available online: https://www.tensorflow.org/lite/models/modify/model_maker (accessed on 5 September 2023).
- Roboflow. Available online: https://roboflow.com (accessed on 6 September 2023).
- Google Colab Notebook. Available online: https://colab.research.google.com (accessed on 5 September 2023).
- JIS Z 9110:1979; Recommended Levels of Illumination. Japanese Standards Association: Tokyo, Japan, 2008.
- Keskar, N.S.; Mudigere, D.; Nocedal, J.; Smelyanskiy, M.; Tang, P.T. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv 2016, arXiv:1609.04836.
- Kee, E.; Chong, J.J.; Choong, Z.J.; Lau, M. A Comparative Analysis of Cross-Validation Techniques for a Smart and Lean Pick-and-Place Solution with Deep Learning. Electronics 2023, 12, 2371.
- Kasuya, E. Mann-Whitney U test when variances are unequal. Anim. Behav. 2001, 61, 1247–1249.
- Nachar, N. The Mann-Whitney U: A test for assessing whether two independent samples come from the same distribution. Tutor. Quant. Methods Psychol. 2008, 4, 13–20.
- Geweke, J.F.; Singleton, K.J. Interpreting the likelihood ratio statistic in factor models when sample size is small. J. Am. Stat. Assoc. 1980, 75, 133–137.
Preprocessing Step | Setting | Description | Comments |
---|---|---|---|
Auto-Orient | Activated | Discard EXIF rotations and standardize pixel ordering | |
Resize | 416 × 416 | Resize all the images to square size | 416 is divisible by 16 |
Model | Setting | Description | Comments |
---|---|---|---|
Rotation | −15° | Rotate image 15° counter-clockwise | Add variability to perspective to be more resilient to camera’s angle |
Rotation | 15° | Rotate image 15° clockwise | |
Shear | Horizontal 15° | Shear image horizontally by 15° | Add variability to perspective to be more resilient to camera’s pitch and yaw |
Shear | Vertical 15° | Shear image vertically 15° |
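The rotation and shear settings above correspond to simple 2 × 2 affine transforms of image coordinates. The sketch below (plain Python, angles in degrees as in the table) is an illustrative reconstruction, not the augmentation tool's actual implementation:

```python
import math

def rotation_matrix(deg):
    """2x2 matrix rotating points counter-clockwise by `deg` degrees."""
    t = math.radians(deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def shear_matrix(deg, axis="horizontal"):
    """2x2 shear matrix; horizontal shear shifts x in proportion to y."""
    s = math.tan(math.radians(deg))
    if axis == "horizontal":
        return [[1.0, s], [0.0, 1.0]]
    return [[1.0, 0.0], [s, 1.0]]

def apply(m, p):
    """Apply a 2x2 matrix to a point (x, y)."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

# A +15 degree rotation followed by its inverse returns the original point.
p = apply(rotation_matrix(-15), apply(rotation_matrix(15), (1.0, 0.0)))
```

Both transforms preserve area (determinant 1), which is why they add viewpoint variability without scaling the objects.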
Test Experiment | Dataset | Test Application | TensorFlow Model | Batch Size | Number of Original Images | Number of Augmented Images | Augmented Ratio |
---|---|---|---|---|---|---|---|
1 | Dataset 1 | Optimized bright | EfficientDet-Lite2 | 8 | 124 | 1006 | 8.12 |
2 | Dataset 1 | Illumination level | EfficientDet-Lite2 | 8 | 124 | 1006 | 8.12 |
3 | Dataset 2 | Batch size | EfficientDet-Lite0, 1, 2, 3 | 4, 8, 16 | 82 | 333 | 4.05 |
Total Objects | Number of Blue Cube | Number of Blue Cylinder | Number of Yellow Cylinder | Number of Yellow Cube | Number of Red Cube | Number of Red Cylinder |
---|---|---|---|---|---|---|
963 | 201 | 343 | 128 | 162 | 166 | 129 |
(100%) | (20.87%) | (35.61%) | (13.29%) | (16.82%) | (17.23%) | (13.33%) |
Optimization Process | Dataset Composition |
---|---|
Control Group | 124 base images + 10 normal images |
Alpha1 Dataset | 124 base images + 10 bright level 1 images |
Alpha2 Dataset | 124 base images + 10 bright level 2 images |
Alpha3 Dataset | 124 base images + 10 bright level 3 images |
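The bright "Alpha" image variants suggest a linear gain adjustment of the common form out = clip(alpha · pixel + beta, 0, 255). The sketch below assumes that formulation; the specific alpha values behind the Alpha1–Alpha3 levels are not given here and the ones shown are placeholders:

```python
def brighten(pixels, alpha=1.3, beta=0):
    """Linear brightness adjustment: out = clip(alpha * p + beta, 0, 255).

    `alpha` > 1 brightens (multiplicative gain); `beta` adds a constant
    offset. The value 1.3 is a placeholder, not the paper's setting.
    """
    return [min(255, max(0, int(alpha * p + beta))) for p in pixels]

# Example: a mid-grey pixel gains 30%, a bright pixel saturates at 255.
out = brighten([100, 200], alpha=1.3)  # -> [130, 255]
```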
Illumination Level | Lux Range | Work Areas |
---|---|---|
Level 0 | Less than 5 | Darkroom and indoor emergency stairways |
Level 1 | 150 to 300 | Wrapping and packing |
Level 2 | 300 to 750 | Assembly, test and ordinary visual work |
Level 3 | 750 to 1500 | Inspection, selection and precise visual work |
Level 4 | 1500 to 3000 | Inspection, selection and extremely precise visual work |
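The JIS bands above can be expressed as a small threshold lookup. This is a sketch: the table leaves gaps between bands (e.g. 5–150 lux), so the helper assigns such readings to the lower level, which is an assumption rather than part of the standard:

```python
def jis_level(lux):
    """Map a lux reading to the JIS Z 9110 illumination level in the table.

    Readings that fall between the published bands (e.g. 5-150 lux) are
    assigned to the lower level -- an assumption, since the table leaves
    those ranges undefined.
    """
    if lux < 150:
        return 0  # darkroom / indoor emergency stairways (< 5 lux nominal)
    if lux < 300:
        return 1  # wrapping and packing
    if lux < 750:
        return 2  # assembly, test and ordinary visual work
    if lux < 1500:
        return 3  # inspection, selection and precise visual work
    return 4      # extremely precise visual work (up to 3000 lux)
```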
Illumination Level | On-Site Lux Measurement | Work Areas | Application |
---|---|---|---|
0 | 6 | Darkroom, indoor emergency stairways | Indoor robot in a darkroom |
1 | 242 | Wrapping and packing | Indoor robot doing packing |
2 | 663 | Assembling, testing and ordinary visual work | Indoor robot doing assembly |
3 | 950 | Inspection, selection and precise visual work | Outdoor robot doing inspection |
4 | 1212 | Inspection, selection and extremely precise visual work | Outdoor robot in direct sunlight doing detailed inspection |
Model | Input Resolution | Learning Rate | Batch Size | Epochs |
---|---|---|---|---|
EfficientDet-Lite0 | 320 × 320 | 0.08 | 4, 8, 16 | 50 |
EfficientDet-Lite1 | 384 × 384 | 0.08 | 4, 8, 16 | 50 |
EfficientDet-Lite2 | 448 × 448 | 0.08 | 4, 8, 16 | 50 |
EfficientDet-Lite3 | 512 × 512 | 0.08 | 4, 8, 16 | 50 |
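The hyperparameters above map onto TensorFlow Lite Model Maker (referenced in the tooling list) roughly as follows. This is a configuration sketch, not the authors' script: the dataset paths and label names are placeholders, and it assumes Pascal VOC annotations as exported by Roboflow.

```python
# Sketch only: assumes tflite-model-maker is installed and a Pascal VOC
# dataset exists at the placeholder paths below.
from tflite_model_maker import model_spec, object_detector

spec = model_spec.get('efficientdet_lite2')  # 448 x 448 input, per the table

train_data = object_detector.DataLoader.from_pascal_voc(
    'images/train', 'annotations/train',
    label_map={1: 'blue_cube', 2: 'blue_cylinder'})  # placeholder labels

model = object_detector.create(
    train_data,
    model_spec=spec,
    batch_size=4,            # batch size 4 performed best in Section 3.3
    epochs=50,               # per the table above
    train_whole_model=True)

model.export(export_dir='.')  # writes the quantized model.tflite
```

The default learning rate of the EfficientDet-Lite specs is 0.08, matching the table, so it does not need to be set explicitly.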
Total Number of Objects | Number of Blue Cube | Number of Blue Cylinder | Number of Yellow Cylinder | Number of Yellow Cube | Number of Red Cube | Number of Red Cylinder |
---|---|---|---|---|---|---|
1022 | 136 | 135 | 216 | 208 | 150 | 177 |
(100%) | (13.3%) | (13.2%) | (21.1%) | (20.3%) | (14.6%) | (17.3%) |
Average Precision | Control Dataset (%) | Alpha1 (%) | Alpha2 (%) | Alpha3 (%) |
---|---|---|---|---|
AP (mAP) | 73.5 | 75.7 | 70.9 | 81.0 (+7.5%) |
AP TFLite | 73.3 | 74.1 | 69.5 | 79.0 |
AR Max10 | 76.7 | 78.6 | 78.5 | 81.9 |
F1-score | 75.1 | 77.1 | 74.5 | 81.4 (+6.3%) |
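The F1-scores in the table are consistent with the harmonic mean of AP (mAP) and AR Max10. The check below reproduces the control-group and Alpha3 values from the other rows:

```python
def f1(ap, ar):
    """Harmonic mean of average precision and average recall (in percent)."""
    return 2 * ap * ar / (ap + ar)

control = round(f1(73.5, 76.7), 1)  # table: 75.1
alpha3 = round(f1(81.0, 81.9), 1)   # table: 81.4
```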
Average Precision | Control Dataset (%) | Alpha1 (%) | Alpha2 (%) | Alpha3 (%) |
---|---|---|---|---|
Yellow cube | 78.9 | 78.1 | 78.9 | 76.2 |
Yellow cylinder | 76.3 | 72.2 | 82.7 | 83.3 |
Red cube | 73.6 | 85.1 | 45.0 (−28.6%) | 29.0 |
Red cylinder | 73.4 | 65.9 | 55.0 | 66.7 |
Blue cube | 65.8 | 66.1 | 77.6 | 90.4 (+24.6%) |
Blue cylinder | 71.9 | 77.3 | 73.8 | 78.4 |
Overall APs | 73.3 | 74.1 (+0.8%) | 69.5 | 65.8 |
Variance of APs (%) | 16.4 | 0.47 | 1.92 | 3.99 |
Angle and Lux Level of Lamp | Class | Reading 1 (%) | Reading 2 (%) | Reading 3 (%) | Average Reading (%) | Measured Lux Value (lm/m2) |
---|---|---|---|---|---|---|
180° Level 0 | Red cylinder | 85 | 77 | 75 | 79.00 | 25 |
Red cube | 83 | 80 | 69 | 77.33 | 23.5 | |
Yellow cylinder | 85 | 77 | 78 | 80.00 | 25.7 | |
Yellow cube | 91 | 77 | 85 | 84.33 | 25.7 | |
Blue cylinder | 77 | 78 | 75 | 76.67 | 24.1 | |
Blue cube | 77 | 73 | 75 | 75.00 | 24.1 | |
180° Level 1 | Red cylinder | 80 | 85 | 75 | 80.00 | 187.9 |
Red cube | 80 | 62 | 65 | 69.00 | 187.8 | |
Yellow cylinder | 85 | 80 | 85 | 83.33 | 188.6 | |
Yellow cube | 92 | 57 | 86 | 78.33 | 188.9 | |
Blue cylinder | 75 | 73 | 83 | 77.00 | 188.1 | |
Blue cube | 83 | 70 | 83 | 80.00 | 187.5 | |
135° Level 0 | Red cylinder | 78 | 83 | 86 | 82.33 | 50 |
Red cube | 80 | 83 | 83 | 82.00 | 48 | |
Yellow cylinder | 78 | 80 | 83 | 80.33 | 50 | |
Yellow cube | 77 | 78 | 85 | 80.00 | 49 | |
Blue cylinder | 77 | 82 | 77 | 78.67 | 50 | |
Blue cube | 85 | 71 | 83 | 79.67 | 49 | |
135° Level 1 | Red cylinder | 75 | 87 | 83 | 81.67 | 175 |
Red cube | 86 | 83 | 86 | 85.00 | 172 | |
Yellow cylinder | 76 | 75 | 83 | 78.00 | 177 | |
Yellow cube | 80 | 89 | 83 | 84.00 | 174 | |
Blue cylinder | 73 | 80 | 78 | 77.00 | 175 | |
Blue cube | 75 | 83 | 86 | 81.33 | 176 | |
90° Level 0 | Red cylinder | 89 | 51 | 89 | 76.33 | 28 |
Red cube | 91 | 80 | 88 | 86.33 | 25 | |
Yellow cylinder | 91 | 39 | 83 | 71.00 | 28 | |
Yellow cube | 94 | 48 | 85 | 75.67 | 26 | |
Blue cylinder | 80 | 57 | 78 | 71.67 | 28 | |
Blue cube | 89 | 70 | 69 | 76.00 | 26 | |
90° Level 1 | Red cylinder | 90 | 65 | 85 | 80.00 | 206 |
Red cube | 93 | 76 | 83 | 84.00 | 189 | |
Yellow cylinder | 91 | 49 | 82 | 73.67 | 209 | |
Yellow cube | 92 | 62 | 89 | 81.00 | 192 | |
Blue cylinder | 80 | 53 | 86 | 73.00 | 207 | |
Blue cube | 88 | 80 | 65 | 77.67 | 201 | |
Class | 180° Level 0 (%) | 180° Level 1 (%) | 135° Level 0 (%) | 135° Level 1 (%) | 90° Level 0 (%) | 90° Level 1 (%) |
---|---|---|---|---|---|---|
Red cylinder | 79.00 | 80.00 | 82.33 | 81.67 | 76.33 | 80.00 |
Red cube | 77.33 | 69.00 | 82.00 | 85.00 | 86.33 | 84.00 |
Yellow cylinder | 80.00 | 83.33 | 80.33 | 78.00 | 71.00 | 73.67 |
Yellow cube | 84.33 | 78.33 | 80.00 | 84.00 | 75.67 | 81.00 |
Blue cylinder | 76.67 | 77.00 | 78.67 | 77.00 | 71.67 | 73.00 |
Blue cube | 75.00 | 80.00 | 79.67 | 81.33 | 76.00 | 77.67 |
Average | 78.72 | 77.94 | 80.50 | 81.17 | 76.17 | 78.22 |
Variance | 8.86 | 19.75 | 1.65 | 8.40 | 25.08 | 15.43 |
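The Average and Variance rows can be reproduced as the mean and population variance (dividing by n rather than n − 1) of each column. The 135° Level 0 column as a check:

```python
# Average readings for the 135 degree lamp at illumination Level 0.
scores = [82.33, 82.00, 80.33, 80.00, 78.67, 79.67]

mean = sum(scores) / len(scores)                               # 80.50
variance = sum((s - mean) ** 2 for s in scores) / len(scores)  # 1.65
```

Using the population form (divide by n) is what makes the 1.65 figure in the table come out; the sample variance (divide by n − 1) would be larger.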
TFLite Model | Average Precision | Batch4 (%) | Batch8 (%) | Batch16 (%) |
---|---|---|---|---|
EfficientDet-Lite0 | Yellow cube | 72.1 | 64.4 | 66.3 |
Yellow cylinder | 47.9 | 53.2 | 40.2 | |
Red cube | 66.2 | 71.4 | 73.2 | |
Red cylinder | 62.7 | 64.8 | 56.6 | |
Blue cube | 63.4 | 62.2 | 55.6 | |
Blue cylinder | 59.1 | 59.9 | 54.4 | |
EfficientDet-Lite1 | Yellow cube | 56.9 | 73.7 | 55.5 |
Yellow cylinder | 72.5 | 50.3 | 40.7 | |
Red cube | 68.6 | 70.4 | 58.2 | |
Red cylinder | 68.1 | 64.0 | 53.9 | |
Blue cube | 63.9 | 63.8 | 49.6 | |
Blue cylinder | 71.4 | 59.3 | 50.7 | |
EfficientDet-Lite2 | Yellow cube | 73.5 | 70.8 | 67.7 |
Yellow cylinder | 62.4 | 59.0 | 49.2 | |
Red cube | 72.3 | 75.8 | 70.7 | |
Red cylinder | 68.4 | 63.0 | 55.5 | |
Blue cube | 63.3 | 63.8 | 58.3 | |
Blue cylinder | 70.8 | 70.3 | 56.8 | |
EfficientDet-Lite3 | Yellow cube | 72.5 | 69.9 | 59.3 |
Yellow cylinder | 57.6 | 57.6 | 54.7 | |
Red cube | 73.8 | 71.8 | 66.0 | |
Red cylinder | 66.0 | 68.9 | 57.4 | |
Blue cube | 63.1 | 62.5 | 55.5 | |
Blue cylinder | 68.4 | 67.2 | 53.3 | |
Average of APs | 66.8 | 65.4 | 57.4 | |
Standard deviation of APs | 6.23 | 6.26 | 7.87 |
Batch Size | TFLite Model | Yellow Cube (%) | Yellow Cylinder (%) | Red Cube (%) | Red Cylinder (%) | Blue Cube (%) | Blue Cylinder (%) |
---|---|---|---|---|---|---|---|
4 | 72.1 | 47.9 | 66.2 | 62.7 | 63.4 | 59.1 | |
8 | EfficientDet-Lite0 | 64.4 | 53.2 | 71.4 | 64.8 | 62.2 | 59.9 |
16 | 66.3 | 40.2 | 73.2 | 56.6 | 55.6 | 54.4 | |
4 | 56.9 | 72.5 | 68.6 | 68.1 | 63.9 | 71.4 | |
8 | EfficientDet-Lite1 | 73.7 | 50.3 | 70.4 | 64.0 | 63.8 | 59.3 |
16 | 55.5 | 40.7 | 58.2 | 53.9 | 49.6 | 50.7 | |
4 | 73.5 | 62.4 | 72.3 | 68.4 | 63.3 | 70.8 | |
8 | EfficientDet-Lite2 | 70.8 | 59.0 | 75.8 | 63.0 | 63.8 | 70.3 |
16 | 67.7 | 49.2 | 70.7 | 55.5 | 58.3 | 56.8 | |
4 | 72.5 | 57.6 | 73.8 | 66.0 | 63.1 | 68.4 | |
8 | EfficientDet-Lite3 | 69.9 | 57.6 | 71.8 | 68.9 | 62.5 | 67.2 |
16 | 59.3 | 54.7 | 66.0 | 57.4 | 55.5 | 53.3 | |
Average of APs | 66.9 | 53.8 | 69.9 | 62.4 | 60.4 | 61.8 | |
Standard Deviation of APs | 6.24 | 8.69 | 4.48 | 5.09 | 4.43 | 7.13 |
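The per-class Average and Standard Deviation rows follow the same convention: a mean and population standard deviation taken over the 12 model/batch-size combinations in the column. The yellow-cube column as a check:

```python
# Yellow-cube APs across EfficientDet-Lite0..3 at batch sizes 4, 8, 16.
yellow_cube = [72.1, 64.4, 66.3, 56.9, 73.7, 55.5,
               73.5, 70.8, 67.7, 72.5, 69.9, 59.3]

mean = sum(yellow_cube) / len(yellow_cube)                                # 66.9
std = (sum((v - mean) ** 2 for v in yellow_cube) / len(yellow_cube)) ** 0.5  # 6.24
```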
Class | EfficientDet-Lite2 with Alpha1 Dataset (%) | SSD MobileNet V2 FPNLite (%) | Improvement of Accuracy (%) |
---|---|---|---|
Yellow cube | 78.1 | 37.7 | 40.4 |
Yellow cylinder | 72.2 | 35.8 | 36.4 |
Red cube | 85.1 | 45.0 | 40.1 |
Red cylinder | 65.9 | 43.4 | 22.5 |
Blue cube | 66.1 | 48.8 | 17.3 |
Blue cylinder | 77.3 | 37.7 | 39.6 |
Overall mean | 74.1 | 41.4 | 32.7 |
Class | 135-Degree Lamp with Level 0 Illumination (%) | SSD MobileNet V2 FPNLite (%) | Comparison of Detection Scores (%) |
---|---|---|---|
Yellow cube | 80.00 | 91.50 | −11.5 |
Yellow cylinder | 80.33 | 94.67 | −14.34 |
Red cube | 82.00 | 62.67 | +19.33 |
Red cylinder | 82.33 | 61.67 | +20.66 |
Blue cube | 79.67 | 83.00 | −3.33 |
Blue cylinder | 78.67 | 58.33 | +20.34 |
Overall mean | 80.50 | 75.31 | +5.19 |
Model Architecture | COCO2017 Dataset (%) | Alpha1 Dataset (%) | Improvement of Accuracy (%) |
---|---|---|---|
EfficientDet-Lite0 | 25.69 | 78.1 | 52.41 |
EfficientDet-Lite1 | 30.55 | 72.2 | 41.65 |
EfficientDet-Lite2 | 33.97 | 85.1 | 51.13 |
EfficientDet-Lite3 | 37.7 | 65.9 | 28.20 |
Overall AP | 31.98 | 75.33 | 43.35 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kee, E.; Chong, J.J.; Choong, Z.J.; Lau, M. Development of Smart and Lean Pick-and-Place System Using EfficientDet-Lite for Custom Dataset. Appl. Sci. 2023, 13, 11131. https://doi.org/10.3390/app132011131