HISP: Heterogeneous Image Signal Processor Pipeline Combining Traditional and Deep Learning Algorithms Implemented on FPGA
Abstract
1. Introduction
- Detailed analysis of the strengths and weaknesses of traditional ISP and DLISP, and proposal of the concept of HISP to combine the two, leveraging their advantages while minimizing their drawbacks.
- Integration of different traditional ISP modules with DLISP to create multiple candidate pipelines, which are evaluated across multiple dimensions of image quality assessment (IQA), and proposal of an HISP allocation plan that divides processing tasks between traditional and deep-learning modules to achieve the best balance among processing speed, resource consumption, and development difficulty.
- Implementation of a dedicated DPU for UNet on FPGA, achieving a 14.67× acceleration ratio, and design of a heterogeneous ISP that combines traditional ISP and DLISP according to the optimal division of labor, entirely on FPGA, yielding the best image quality in edge scenarios while consuming only 8.56 W of power. A minimal sketch of the UNet operator structure follows this list.
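As a point of reference for the third contribution, the sketch below shows a minimal UNet-style network assembled only from the operator classes the dedicated DPU accelerates: convolution, max-pooling, and transposed convolution (the Conv/Maxpool/Deconv breakdown used in the latency measurements of Section 5). The channel widths, depth, and the 4-channel packed-raw input are illustrative assumptions and do not reproduce the paper's exact network.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    """Illustrative UNet built only from conv, maxpool, and deconv operators.

    Channel widths, depth, and the packed-raw input layout are assumptions
    chosen for brevity; they are not the network deployed on the DPU.
    """
    def __init__(self, in_ch=4, out_ch=3, base=32):
        super().__init__()
        self.enc1 = self._block(in_ch, base)
        self.enc2 = self._block(base, base * 2)
        self.pool = nn.MaxPool2d(2)                                  # Maxpool operator
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)    # Deconv operator
        self.dec1 = self._block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    @staticmethod
    def _block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU (Conv operators)
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)                                    # full-resolution features
        e2 = self.enc2(self.pool(e1))                        # downsample, deeper features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # upsample + skip connection
        return self.head(d1)

# Example: a 4-channel packed raw patch mapped to an RGB output
if __name__ == "__main__":
    net = MiniUNet()
    y = net(torch.randn(1, 4, 128, 128))
    print(y.shape)  # torch.Size([1, 3, 128, 128])
```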
2. Related Work
2.1. Traditional ISP Principle and Pipeline
2.2. Deep Learning ISP
2.3. No-Reference Image Quality Assessment Scheme
3. Analysis
3.1. How to Allocate the Task?
3.2. HISP May Work Better
4. Implementation
4.1. FPGA Implementation of NVDLA
4.2. FPGA Implementation of a Dedicated DPU for UNet
4.3. FPGA Implementation of HISP Pipeline
Kg = K / Gaver;
Kb = K / Baver;
Gnew = G × Kg;
Bnew = B × Kb;
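The four assignments above are the green- and blue-channel gain corrections of a gray-world automatic white balance stage. Below is a minimal NumPy sketch of that scheme; the reference level K and the red-channel handling do not appear in the excerpt, so computing K as the mean of the three channel averages (and scaling red analogously) is an assumption made for illustration, not the paper's exact definition.

```python
import numpy as np

def gray_world_awb(img):
    """Gray-world automatic white balance.

    img: H x W x 3 float array in RGB order, values in [0, 1].
    Assumption: K is the mean of the per-channel averages; the excerpt
    only shows the green/blue gains, so the red-channel handling here is
    a reconstruction, not taken verbatim from the paper.
    """
    r_aver, g_aver, b_aver = img.reshape(-1, 3).mean(axis=0)
    k = (r_aver + g_aver + b_aver) / 3.0          # assumed reference level K
    kr, kg, kb = k / r_aver, k / g_aver, k / b_aver
    out = img * np.array([kr, kg, kb])            # per-channel gains, as in the pseudocode
    return np.clip(out, 0.0, 1.0)

# Example usage on a random frame normalized to [0, 1]
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((480, 640, 3))
    balanced = gray_world_awb(frame)
    print(balanced.shape, balanced.dtype)
```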
5. Results
5.1. Optimal Acceleration Scheme
5.2. Optimal Task Allocation Scheme
6. Conclusions and Future Work
6.1. Conclusions
6.2. Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| | BRISQUE | PIQE | NIQE | NIMA | RankIQA | ABS | Final Score |
|---|---|---|---|---|---|---|---|
| Gt1 | 12.13 | 24.60 | 3.10 | 5.17 | 4.21 | 7.70 | 10.30 |
| Output1 | 30.66 | 43.55 | 3.77 | 4.66 | 4.05 | 6.30 | 3.82 |
| Gt2 | 24.40 | 16.02 | 3.77 | 4.76 | 4.83 | 8.90 | 10.69 |
| Output2 | 36.80 | 51.58 | 4.31 | 4.62 | 4.39 | 7.30 | 3.16 |
| Gt3 | 22.85 | 12.02 | 4.55 | 4.96 | 4.27 | 8.00 | 9.19 |
| Output3 | 33.27 | 54.02 | 4.65 | 5.19 | 3.73 | 7.10 | 2.64 |
| | BRISQUE | PIQE | NIQE | NIMA | RankIQA | ABS | PSNR | SSIM |
|---|---|---|---|---|---|---|---|---|
| Gt | 42.62 | 69.36 | 5.03 | 4.79 | 2.97 | 8.70 | - | - |
| Traditional ISP Output | 25.56 | 24.76 | 3.35 | 3.43 | 2.66 | 5.30 | 10.0495 | 0.1087 |
| DLISP Output | 43.59 | 66.74 | 4.86 | 3.75 | 2.17 | 6.10 | 10.4020 | 0.3085 |
| DL Output + Sharpen | 44.44 | 76.30 | 8.03 | 4.14 | 2.45 | 7.90 | 10.4043 | 0.3171 |
| DL Output + Sharpen + Contrast | 43.08 | 77.30 | 8.20 | 4.03 | 2.72 | 6.40 | 9.5036 | 0.2178 |
| DL Output + Sharpen + Contrast + Denoise | 46.11 | 80.77 | 6.01 | 3.96 | 2.93 | 7.10 | 9.5323 | 0.2336 |
| Device | Frequency (MHz) | OS | Software | Conv Latency (ms) | Maxpool Latency (ms) | Deconv Latency (ms) | Total Latency (ms) | Total On-Chip Power (W) |
|---|---|---|---|---|---|---|---|---|
| x86 CPU | 3600 | Windows 10 | Python 3.6.4 | 127,835.99 | 2776.71 | 32,452.11 | 166,625.00 | 125 |
| x86 CPU | 3600 | Windows 10 | C (gcc 8.1) | 16,679.21 | 11.47 | 3829.87 | 21,551.00 | 125 |
| x86 CPU | 3600 | Ubuntu 18.04 | Tengine Lite 1.0 | 289.00 | 5.40 | 287.30 | 609.36 | 125 |
| ARM CPU | 1333 | Ubuntu 18.04 | Tengine Lite 1.0 | 3785.90 | 113.9 | 3543.50 | 7675.00 | 0.52 |
| ARM CPU + DLA on FPGA | 1333 & 200 | Ubuntu 18.04 | Tengine Lite 1.0 | 2958.32 | 79.73 | 2763.39 | 6007.23 | 3.85 |
| ARM CPU + DPU on FPGA | 1333 & 200 | - | - | 423.75 | 2.46 | 97.10 | 523.28 | 4.04 |
| | BRISQUE | PIQE | NIQE | NIMA | RankIQA | ABS |
|---|---|---|---|---|---|---|
| Gt | 42.62 | 69.36 | 5.03 | 4.79 | 2.97 | 8.70 |
| DL | 46.44 | 80.92 | 5.52 | 3.13 | 2.19 | 5.40 |
| Pipeline 1 | 51.01 | 77.08 | 5.53 | 3.44 | 2.50 | 7.90 |
| Pipeline 2 | 54.93 | 83.12 | 5.00 | 3.50 | 2.04 | 6.50 |
| Pipeline 3 | 45.01 | 60.21 | 5.42 | 3.59 | 1.92 | 6.20 |
| Pipeline 4 | 49.82 | 78.45 | 5.58 | 3.20 | 1.94 | 3.10 |
| Pipeline 5 | 39.90 | 52.73 | 5.70 | 3.47 | 2.31 | 8.30 |
| Pipeline 6 | 45.57 | 57.26 | 5.15 | 3.41 | 1.90 | 4.70 |
| Pipeline 7 | 50.30 | 69.20 | 4.93 | 3.27 | 2.18 | 7.00 |
| Pipeline 8 | 57.01 | 80.89 | 4.99 | 3.23 | 1.81 | 4.00 |
| Pipeline 9 | 46.11 | 68.27 | 4.79 | 3.25 | 2.38 | 7.70 |
| Pipeline 10 | 54.70 | 66.09 | 4.90 | 3.19 | 1.92 | 4.60 |
| | Latency (µs) | LUTs | Registers | BRAMs | Power (W) |
|---|---|---|---|---|---|
| Pipeline 1 | 968 | 120 | 182 | 0 | 4.611 |
| Pipeline 2 | 965 | 128 | 216 | 0 | 6.319 |
| Pipeline 3 | 970 | 327 | 319 | 0 | 6.964 |
| Pipeline 4 | 968 | 213 | 52 | 1.5 | 3.823 |
| Pipeline 5 | 1650 | 376 | 423 | 0 | 8.562 |
| Pipeline 6 | 1651 | 492 | 293 | 1.5 | 7.712 |
| Pipeline 7 | 1657 | 384 | 457 | 0 | 8.575 |
| Pipeline 8 | 1655 | 270 | 190 | 1.5 | 5.435 |
| Pipeline 9 | 2327 | 433 | 527 | 0 | 10.182 |
| Pipeline 10 | 2328 | 526 | 561 | 1.5 | 9.313 |