An Image-Retrieval Method Based on Cross-Hardware Platform Features
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. Image Retrieval Based on Cross-Hardware Platform Features
3.2. Decoding Analysis
3.3. Hardware Architecture (GPU/NPU)
4. Results
4.1. Image-Retrieval Result Based on Cross-Hardware Platform Features
4.2. Decode Comparison
4.3. Feature Comparison
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Platform | Tesla T4 | Atlas 300V | Atlas 300V Pro |
|---|---|---|---|
| Architecture | NVIDIA Turing | DaVinci | DaVinci |
| Memory size | 16 GB GDDR6 | 24 GB LPDDR4X | 48 GB LPDDR4X |
| Memory bandwidth | 320 GB/s | 204.8 GB/s | 204.8 GB/s |
| Power | 70 W | 72 W | 72 W |
| PCIe | ×16 PCIe Gen3 | ×16 PCIe Gen4 | ×16 PCIe Gen4 |
| FP16 | 65 TFLOPS | 50 TFLOPS | 70 TFLOPS |
| INT8 | 130 TOPS | 100 TOPS | 140 TOPS |
| H.264/H.265 decoder | - | 100 channels 1080p 25 FPS, 80 channels 1080p 30 FPS, or 10 channels 4K 60 FPS | 128 channels 1080p 30 FPS or 16 channels 4K 60 FPS |
| H.264/H.265 encoder | - | 24 channels 1080p 30 FPS or 3 channels 4K 60 FPS | 24 channels 1080p 30 FPS or 3 channels 4K 60 FPS |
| JPEG decoder | - | 4K 512 FPS | 4K 512 FPS |
| JPEG encoder | - | 4K 256 FPS | 4K 256 FPS |
| Platform | Dataset | Number of Images | Number of Features | Feature Extraction Success Rate (%) |
|---|---|---|---|---|
| Atlas 300V | Query | 41,788 | 41,614 | 99.58 |
| Atlas 300V | Gallery | 277,440 | 276,495 | 99.66 |
| Atlas 300V | Interference | 101,664 | 97,925 | 96.32 |
| Tesla T4 | Query | 41,788 | 41,613 | 99.58 |
| Tesla T4 | Gallery | 277,440 | 276,500 | 99.66 |
| Tesla T4 | Interference | 101,664 | 97,969 | 96.36 |
| Similarity Threshold | Top-N | Query | Gallery | Recall (%) | mAP (%) |
|---|---|---|---|---|---|
| 0.3 | 200 | Tesla T4 | Tesla T4 | 96.32 | 83.12 |
| 0.3 | 200 | Atlas 300V | Atlas 300V | 96.25 | 82.90 |
| 0.3 | 200 | Tesla T4 | Atlas 300V | 96.28 | 82.99 |
| 0.3 | 200 | Atlas 300V | Tesla T4 | 96.29 | 83.03 |
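The Recall and mAP figures above follow the standard retrieval definitions: recall@N is the fraction of ground-truth matches found in the top-N ranked results, and mAP averages the per-query average precision. A minimal sketch of both metrics (illustrative helper names, not the authors' evaluation code):

```python
def average_precision(ranked_relevant, num_relevant):
    """AP for one query: ranked_relevant is a boolean list over the
    ranked gallery (True = correct match at that rank)."""
    hits, precision_sum = 0, 0.0
    for rank, is_relevant in enumerate(ranked_relevant, start=1):
        if is_relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / num_relevant if num_relevant else 0.0

def recall_at_n(ranked_relevant, num_relevant, n=200):
    """Fraction of ground-truth matches retrieved in the top-N results."""
    return sum(ranked_relevant[:n]) / num_relevant if num_relevant else 0.0

# Toy query: 3 ground-truth matches, two retrieved at ranks 1 and 3.
ranking = [True, False, True, False]
print(average_precision(ranking, 3))  # (1/1 + 2/3) / 3 ≈ 0.5556
print(recall_at_n(ranking, 3))        # 2/3
```

mAP over a query set is then the mean of `average_precision` across all queries.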
| Platform | Quality Factor | Image 1 (2-megapixel) | Image 2 (2-megapixel) | Image 3 (2-megapixel) |
|---|---|---|---|---|
| Atlas 300V | 90 | 619 KB | 602 KB | 582 KB |
| Atlas 300V | 95 | 848 KB | 824 KB | 799 KB |
| Tesla T4 | 75 | 350 KB | 328 KB | 335 KB |
| Tesla T4 | 90 | 561 KB | 527 KB | 534 KB |
| Tesla T4 | 95 | 802 KB | 751 KB | 763 KB |
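The file sizes above reflect how JPEG output grows with the encoder's quality factor. The sketch below reproduces the effect with Pillow on a synthetic ~2-megapixel frame; Pillow and the random test frame are assumptions, as the paper's images and encoder settings are not available:

```python
import io

import numpy as np
from PIL import Image

# Synthetic 1920x1080 (~2 MP) RGB frame stands in for the test images.
rng = np.random.default_rng(0)
frame = Image.fromarray(rng.integers(0, 256, (1080, 1920, 3), dtype=np.uint8))

sizes = {}
for quality in (75, 90, 95):
    buf = io.BytesIO()
    frame.save(buf, format="JPEG", quality=quality)
    sizes[quality] = len(buf.getvalue())

print(sizes)  # encoded size grows with the quality factor
```

A higher quality factor scales down the quantization tables, so more DCT coefficients survive and the file grows, which matches the trend in the table.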
| Decode Comparison | Number of Test Images | Pixel Difference ≤ 1 (per Image) | Pixel Difference ≤ 1 (per Pixel) |
|---|---|---|---|
| Tesla T4 hardware vs. Atlas 300V hardware | 280,000 | 99.82% | 96.27% |
| Tesla T4 hardware vs. CPU software | 280,000 | 99.65% | 95.36% |
| Atlas 300V hardware vs. CPU software | 280,000 | 99.99% | 99.98% |
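A plausible reading of the comparison above: the per-pixel column is the fraction of pixels whose decoded values differ by at most 1, and the per-image column is the fraction of images in which every pixel stays within that tolerance. A sketch under that assumption (hypothetical helper name):

```python
import numpy as np

def pixel_diff_metrics(img_a, img_b, tol=1):
    """Per-pixel rate of |a - b| <= tol, and whether the whole image
    stays within the tolerance (the per-image pass criterion)."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    within = diff <= tol
    return float(within.mean()), bool(within.all())

a = np.array([[10, 20], [30, 40]], dtype=np.uint8)
b = np.array([[11, 20], [28, 40]], dtype=np.uint8)  # one pixel off by 2
print(pixel_diff_metrics(a, b))  # (0.75, False)
```

Casting to a signed type before subtracting avoids uint8 wraparound, which would otherwise turn a difference of -2 into 254.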
| Decode Type | Number of Test Images | Decode Success Rate |
|---|---|---|
| Tesla T4 hardware | 280,000 | 98.20% |
| Atlas 300V hardware | 280,000 | 97.26% |
| CPU software | 280,000 | 99.75% |
| Test Type | Randomly Selected (Hard Scaling) | Small Target (Hard Scaling) | Small Target (Soft Scaling) |
|---|---|---|---|
| Similarity ≥ 0.900 (%) | 99.96594 | 98.22877 | 99.02169 |
| Similarity ≥ 0.950 (%) | 99.81309 | 98.04452 | 98.89877 |
| Similarity ≥ 0.970 (%) | 99.44975 | 97.93458 | 98.80734 |
| Similarity ≥ 0.990 (%) | 96.61813 | 97.82261 | 98.72505 |
| Similarity ≥ 0.995 (%) | 89.34083 | 96.43209 | 97.61570 |
| Max | 0.99995 | 0.99999 | 0.99999 |
| Min | 0.62485 | 0.25368 | 0.25638 |
| Mean | 0.99731 | 0.99121 | 0.99537 |
| Var | 0.00003 | 0.00440 | 0.00214 |
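The similarity scores in this and the following tables are presumably cosine similarities between the feature vectors extracted for the same image on the two platforms (the tables alone do not state the metric). A minimal sketch:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Features of the same image from two platforms should be near-identical;
# small numeric drift pushes the similarity just below 1.0.
f_gpu = np.array([0.12, 0.85, 0.51])
f_npu = f_gpu + 1e-4  # simulated cross-platform drift
print(cosine_similarity(f_gpu, f_npu))  # very close to 1.0
```

Collecting this score over many image pairs yields exactly the Max/Min/Mean/Var and threshold-proportion statistics reported above.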
| Type | Platform | Marked Targets | Capture Rate (%) | Selection Accuracy (%) |
|---|---|---|---|---|
| Rider | Tesla T4 | 7207 | 89.45 | 69.99 |
| Rider | Atlas 300V | 7207 | 89.14 | 70.05 |
| Non-motor vehicle | Tesla T4 | 6011 | 90.75 | 76.63 |
| Non-motor vehicle | Atlas 300V | 6011 | 90.84 | 75.99 |
| Type | Platform | Correlation Label Quantity | Correlation Accuracy (%) | Target Association Rate (%) |
|---|---|---|---|---|
| Rider-non-motor vehicle | Tesla T4 | 3181 | 91.90 | 81.77 |
| Rider-non-motor vehicle | Atlas 300V | 3177 | 91.58 | 81.74 |
| Test Type | 0 | (0,10] | (10,30] | (30,60) | 60+ |
|---|---|---|---|---|---|
| Quantity proportion | 0.24669 | 0.72476 | 0.02119 | 0.00458 | 0.00275 |
| Max | 0.99995 | 0.99995 | 0.99949 | 0.99793 | 0.99719 |
| Min | 0.95837 | 0.89377 | 0.75044 | 0.62485 | 0.68105 |
| Mean | 0.99891 | 0.99730 | 0.98648 | 0.97940 | 0.97181 |
| Var | 0.00000 | 0.00001 | 0.00031 | 0.00111 | 0.00160 |
| Similarity ≥ 0.900 | 1.00000 | 0.99997 | 0.99340 | 0.98095 | 0.96507 |
| Similarity ≥ 0.950 | 1.00000 | 0.99949 | 0.95879 | 0.93714 | 0.87619 |
| Similarity ≥ 0.970 | 0.99989 | 0.99791 | 0.88092 | 0.83047 | 0.75873 |
| Similarity ≥ 0.990 | 0.99833 | 0.97143 | 0.61392 | 0.45333 | 0.26666 |
| Similarity ≥ 0.995 | 0.98353 | 0.88584 | 0.38030 | 0.11809 | 0.05079 |
| Test Type | 0 | (0,10] | (10,30] | (30,60) | 60+ |
|---|---|---|---|---|---|
| Quantity proportion | 0.26178 | 0.67103 | 0.06462 | 0.00163 | 0.00091 |
| Max | 0.99999 | 0.99999 | 0.99999 | 0.99998 | 0.99981 |
| Min | 0.26454 | 0.25428 | 0.25368 | 0.26778 | 0.27480 |
| Mean | 0.99517 | 0.99143 | 0.98288 | 0.90586 | 0.43359 |
| Var | 0.00206 | 0.00427 | 0.00918 | 0.03914 | 0.03834 |
| Similarity ≥ 0.900 | 0.98926 | 0.98260 | 0.96802 | 0.80124 | 0.08888 |
| Similarity ≥ 0.950 | 0.98751 | 0.98091 | 0.96471 | 0.77639 | 0.08888 |
| Similarity ≥ 0.970 | 0.98615 | 0.97999 | 0.96298 | 0.77018 | 0.08888 |
| Similarity ≥ 0.990 | 0.98499 | 0.97894 | 0.96125 | 0.77018 | 0.08888 |
| Similarity ≥ 0.995 | 0.97095 | 0.96548 | 0.94408 | 0.72049 | 0.07777 |
| Test Type | 0 | (0,10] | (10,30] | (30,60) | 60+ |
|---|---|---|---|---|---|
| Quantity proportion | 0.56585 | 0.39010 | 0.04351 | 0.00025 | 0.00028 |
| Max | 0.99999 | 0.99999 | 0.99998 | 0.99997 | 0.99548 |
| Min | 0.26454 | 0.26694 | 0.25638 | 0.31350 | 0.27480 |
| Mean | 0.99525 | 0.99639 | 0.99225 | 0.92336 | 0.39257 |
| Var | 0.00205 | 0.00176 | 0.00394 | 0.04362 | 0.01679 |
| Similarity ≥ 0.900 | 0.98965 | 0.99242 | 0.98459 | 0.88000 | 0.03571 |
| Similarity ≥ 0.950 | 0.98798 | 0.99182 | 0.98342 | 0.88000 | 0.03571 |
| Similarity ≥ 0.970 | 0.98678 | 0.99132 | 0.98248 | 0.88000 | 0.03571 |
| Similarity ≥ 0.990 | 0.98556 | 0.99104 | 0.98202 | 0.88000 | 0.03571 |
| Similarity ≥ 0.995 | 0.96385 | 0.45969 | 0.14557 | 0.02597 | 0.00000 |
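The three tables above group image pairs into bins by a per-image difference count (the 0, (0,10], (10,30], (30,60), 60+ columns) and report similarity statistics per bin. Such a grouping can be sketched with `np.digitize`; the per-image data here are invented for illustration:

```python
import numpy as np

# Invented per-image data: a difference count for each image pair, plus
# the feature similarity measured for that pair.
diff_counts  = np.array([0, 3, 12, 45, 80, 7, 0])
similarities = np.array([0.999, 0.997, 0.986, 0.979, 0.972, 0.998, 0.999])

labels = ["0", "(0,10]", "(10,30]", "(30,60)", "60+"]
bins = np.digitize(diff_counts, [1, 11, 31, 60])  # edges match the table bins

for idx, label in enumerate(labels):
    mask = bins == idx
    if mask.any():
        print(label, int(mask.sum()), round(float(similarities[mask].mean()), 4))
```

With the default `right=False`, `np.digitize` assigns a count of exactly 60 to the last bin, matching the table's open upper edge on (30,60).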
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yin, J.; Wu, F.; Su, H. An Image-Retrieval Method Based on Cross-Hardware Platform Features. Appl. Syst. Innov. 2024, 7, 64. https://doi.org/10.3390/asi7040064