Low-Cost Image Compressive Sensing with Multiple Measurement Rates for Object Detection
Abstract
1. Introduction
1.1. Challenges and Motivations
1.2. Contributions
- To reduce model size and computation cost while improving object-detection accuracy in MRCS, we propose a smaller real-time object detector with a depthwise feature pyramid network, named MYOLO3. MYOLO3 is built mainly on bottleneck residual blocks and depthwise separable convolutions instead of standard residual blocks and convolutions.
- To reduce the required transmission bandwidth and storage space for CS measurements, we propose a CS approach, known as MRCS, that samples natural images with multiple MRs. MRCS applies higher MRs to the image regions users are interested in and lower MRs to the remaining regions when sampling an entire image.
- We propose a half-precision representation method for CS measurements to further reduce their size, which stores the values of CS measurements as 16-bit half-precision floats instead of 32-bit single-precision floats.
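The half-precision idea in the last bullet can be sketched in a few lines of NumPy; the block size, measurement rate, and random Gaussian measurement matrix below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 33 x 33 image block flattened to n pixels,
# sampled at measurement rate MR = 0.25 (m = round(MR * n) measurements).
n = 33 * 33
m = int(round(0.25 * n))

phi = rng.standard_normal((m, n)).astype(np.float32)  # random Gaussian measurement matrix
x = rng.random(n, dtype=np.float32)                   # flattened image block

y32 = phi @ x                 # single-precision CS measurements
y16 = y32.astype(np.float16)  # half-precision representation for transmission/storage

print(y32.nbytes, y16.nbytes)  # half precision exactly halves the measurement size
print(float(np.max(np.abs(y32 - y16.astype(np.float32)))))  # small quantization error
```

The downstream reconstruction network reads the float16 values back as float32, trading a small quantization error for half the bandwidth, consistent with the half-precision columns in the evaluation table.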
2. Related Work
2.1. Object Detection
2.2. CS Construction
3. Overview of Proposed MRCS
4. Architecture of MYOLO3
4.1. Depthwise Separable Convolutions
4.2. Bottleneck Residual Blocks
4.3. Depthwise Feature Pyramid Network
4.3.1. Nearest Neighbor Upsampling Layers
4.3.2. Route Layers
4.3.3. YOLO Layers
5. Compressive Sensing with Multiple MRs
5.1. CS Sampling with Multiple MRs
Algorithm 1: CS sampling with multiple MRs in the proposed MRCS
Input: A natural image with multiple channels, and k measurement matrices, including one generated with a higher measurement rate and one generated with a lower measurement rate.
Output: The half-precision CS measurements.
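The sampling step of Algorithm 1 can be sketched as follows; the 33 × 33 block size, ROI mask, and two-matrix setup (k = 2) are assumptions for illustration, not the exact MRCS implementation:

```python
import numpy as np

def sample_blocks(blocks, roi_mask, phi_high, phi_low):
    """Sample each flattened image block with the high-MR matrix if it lies in
    the region of interest, otherwise with the low-MR matrix, and store the
    resulting measurements as half-precision floats."""
    measurements = []
    for block, in_roi in zip(blocks, roi_mask):
        phi = phi_high if in_roi else phi_low
        y = (phi @ block).astype(np.float16)  # half-precision CS measurements
        measurements.append(y)
    return measurements

# Illustrative usage: 4 blocks of n = 33 * 33 pixels, MR_h = 0.25, MR_l = 0.04.
rng = np.random.default_rng(1)
n = 33 * 33
phi_h = rng.standard_normal((int(0.25 * n), n)).astype(np.float32)
phi_l = rng.standard_normal((int(0.04 * n), n)).astype(np.float32)
blocks = [rng.random(n, dtype=np.float32) for _ in range(4)]
ys = sample_blocks(blocks, [True, False, False, True], phi_h, phi_l)
# ROI blocks yield more measurements per block than non-ROI blocks.
```

The per-block choice of measurement matrix is what lets MRCS spend bandwidth where detection accuracy matters while keeping the overall measurement count low.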
5.2. DNN-Based CS Reconstruction with Multiple MRs
6. Experiment Approaches
6.1. Implementation Approaches
6.1.1. Training and Test for MYOLO3
6.1.2. Training and Test for CS with Multiple MRs
6.2. Evaluation Metrics
6.2.1. Metrics for MYOLO3
6.2.2. Metrics for CS with Multiple MRs
7. Evaluation Results
7.1. Comparison with Other Object Detectors
7.2. Performance of CS Sampling and Reconstruction with Multiple MRs
7.3. Performance of Half-Precision CS Measurements
8. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Pietrow, D.; Matuszewski, J. Objects detection and recognition system using artificial neural networks and drones. In Proceedings of the Signal Processing Symposium (SPSympo), Jachranka, Poland, 12–14 September 2017; pp. 1–5.
- Chen, C.; Li, K.; Teo, S.G.; Chen, G.; Zou, X.; Yang, X.; Vijay, R.C.; Feng, J.; Zeng, Z. Exploiting Spatio-Temporal Correlations with Multiple 3D Convolutional Neural Networks for Citywide Vehicle Flow Prediction. In Proceedings of the IEEE International Conference on Data Mining (ICDM), Singapore, 17–20 November 2018; pp. 893–898.
- Satyanarayanan, M. The Emergence of Edge Computing. Computer 2017, 50, 30–39.
- Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
- Ma, R.; Hu, F.; Hao, Q. Active Compressive Sensing via Pyroelectric Infrared Sensor for Human Situation Recognition. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 3340–3350.
- Cho, S.; Kim, D.H.; Park, Y.W. Learning drone-control actions in surveillance videos. In Proceedings of the 17th International Conference on Control, Automation and Systems (ICCAS), Jeju, Korea, 18–21 October 2017; pp. 700–703.
- Chang, K.; Ding, P.L.K.; Li, B. Compressive Sensing Reconstruction of Correlated Images Using Joint Regularization. IEEE Signal Process. Lett. 2016, 23, 449–453.
- Song, X.; Peng, X.; Xu, J.; Shi, G.; Wu, F. Distributed Compressive Sensing for Cloud-Based Wireless Image Transmission. IEEE Trans. Multimed. 2017, 19, 1351–1364.
- Ma, Y.; Liu, Y.; Liu, S.; Zhang, Z. Multiple Object Detection and Tracking in Complex Background. Int. J. Pattern Recognit. Artif. Intell. 2017, 31, 1755003.
- Zhang, X.; Xu, C.; Sun, X.; Baciu, G. Salient Object Detection via Nonlocal Diffusion Tensor. Int. J. Pattern Recognit. Artif. Intell. 2015, 29, 1555013.
- Kuang, P.; Ma, T.; Li, F.; Chen, Z. Real-Time Pedestrian Detection Using Convolutional Neural Networks. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1856014.
- Ou, X.; Yan, P.; He, W.; Kim, Y.K.; Zhang, G.; Peng, X.; Hu, W.; Wu, J.; Guo, L. Adaptive GMM and BP Neural Network Hybrid Method for Moving Objects Detection in Complex Scenes. Int. J. Pattern Recognit. Artif. Intell. 2019, 33, 1950004.
- Zhang, S.; Wen, L.; Bian, X.; Lei, Z.; Li, S.Z. Single-Shot Refinement Neural Network for Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4203–4212.
- Hu, Q.; Zhai, L. RGB-D Image Multi-Target Detection Method Based on 3D DSF R-CNN. Int. J. Pattern Recognit. Artif. Intell. 2019, 1954026.
- Duan, M.; Li, K.; Li, K. An Ensemble CNN2ELM for Age Estimation. IEEE Trans. Inf. Forensics Secur. 2018, 13, 758–772.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 24–27 June 2014; pp. 580–587.
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, December 2015; pp. 1440–1448.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Advances in Neural Information Processing Systems (NIPS); Curran Associates, Inc.: Red Hook, NY, USA, 2015.
- Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object Detection via Region-based Fully Convolutional Networks. In Advances in Neural Information Processing Systems 29; Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 379–387.
- Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving Into High Quality Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 6154–6162.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 21–37.
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Yang, S.; Hao, K.; Ding, Y.; Liu, J. Vehicle Driving Direction Control Based on Compressed Network. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1850025.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520.
- Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive Sensing via Nonlocal Low-Rank Regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632.
- Fei, X.; Wei, Z.; Xiao, L. Iterative Directional Total Variation Refinement for Compressive Sensing Image Reconstruction. IEEE Signal Process. Lett. 2013, 20, 1070–1073.
- Metzler, C.A.; Mousavi, A.; Baraniuk, R.G. Learned D-AMP: Principled Neural Network Based Compressive Image Recovery. In Advances in Neural Information Processing Systems (NIPS 2017); Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 1772–1784.
- Metzler, C.A.; Maleki, A.; Baraniuk, R.G. From Denoising to Compressed Sensing. IEEE Trans. Inf. Theory 2016, 62, 5117–5144.
- Majumdar, A.; Ward, R.K. Compressed sensing of color images. Signal Process. 2010, 90, 3122–3127.
- Mousavi, A.; Patel, A.B.; Baraniuk, R.G. A deep learning approach to structured signal recovery. In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 29 September–2 October 2015; pp. 1336–1343.
- Mousavi, A.; Baraniuk, R.G. Learning to invert: Signal recovery via Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2272–2276.
- Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 449–458.
- Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1828–1837.
- Yao, H.; Dai, F.; Zhang, D.; Ma, Y.; Zhang, S.; Zhang, Y.; Tian, Q. DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing. arXiv 2017, arXiv:1702.05743.
- Han, D.; Kim, J.; Kim, J. Deep Pyramidal Residual Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6307–6315.
- Neubeck, A.; Van Gool, L. Efficient Non-Maximum Suppression. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 850–855.
- Nandakumar, S.R.; Gallo, M.L.; Boybat, I.; Rajendran, B.; Sebastian, A.; Eleftheriou, E. Mixed-precision architecture based on computational memory for training deep neural networks. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5.
- Micikevicius, P.; Narang, S.; Alben, J.; Diamos, G.F.; Elsen, E.; Garcia, D.; Ginsburg, B.; Houston, M.; Kuchaiev, O.; Venkatesh, G.; Wu, H. Mixed Precision Training. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018.
- Boufounos, P.T.; Jacques, L.; Krahmer, F.; Saab, R. Quantization and Compressive Sensing. In Compressed Sensing and its Applications: MATHEON Workshop 2013; Birkhäuser: Cham, Switzerland, 2015; pp. 193–237.
- Gürel, N.M.; Kara, K.; Stojanov, A.; Smith, T.; Alistarh, D.; Püschel, M.; Zhang, C. Compressive Sensing with Low Precision Data Representation: Theory and Applications. arXiv 2018, arXiv:1802.04907.
- Cerone, V.; Fosson, S.M.; Regruto, D. A linear programming approach to sparse linear regression with quantized data. arXiv 2019, arXiv:1903.07156.
- Liao, L.; Li, K.; Li, K.; Yang, C.; Tian, Q. UHCL-Darknet: An OpenCL-based Deep Neural Network Framework for Heterogeneous Multi-/Many-core Clusters. In Proceedings of the 47th International Conference on Parallel Processing (ICPP), Eugene, OR, USA, 13–16 August 2018; pp. 44:1–44:10.
- Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
- Everingham, M.; Eslami, S.M.A.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vis. 2015, 111, 98–136.
- Bai, S.; Bai, X.; Tian, Q.; Latecki, L.J. Regularized Diffusion Process on Bidirectional Context for Object Retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2018.
- Wang, J.; Bohn, T.; Ling, C. Pelee: A Real-Time Object Detection System on Mobile Devices. In Advances in Neural Information Processing Systems 31; Curran Associates, Inc.: Red Hook, NY, USA, 2018; pp. 1963–1972.
Layers | Operation Type | Input | Filter Size | Stride s | Expansion Factor t | Output Channels cout | Repeats n |
---|---|---|---|---|---|---|---|
1 | Convolution | 416 × 416 × 3 | 3 × 3 | 2 | - | 32 | 1 |
2–4 | Bottleneck residual block | 208 × 208 × 32 | - | 1 | 1 | 16 | 1 |
5–6 | Bottleneck residual block | 208 × 208 × 16 | - | 2 | 6 | 24 | 1 |
8–10 | Bottleneck residual block | 104 × 104 × 24 | - | 1 | 6 | 24 | 1 |
11–13 | Bottleneck residual block | 104 × 104 × 24 | - | 2 | 6 | 32 | 1 |
14–19 | Bottleneck residual block | 52 × 52 × 32 | - | 1 | 6 | 32 | 2 |
20–22 | Bottleneck residual block | 52 × 52 × 32 | - | 2 | 6 | 64 | 1 |
23–31 | Bottleneck residual block | 26 × 26 × 64 | - | 1 | 6 | 64 | 3 |
32–34 | Bottleneck residual block | 26 × 26 × 64 | - | 1 | 6 | 96 | 1 |
35–40 | Bottleneck residual block | 26 × 26 × 96 | - | 1 | 6 | 96 | 2 |
41–43 | Bottleneck residual block | 26 × 26 × 96 | - | 2 | 6 | 160 | 1 |
44–49 | Bottleneck residual block | 13 × 13 × 160 | - | 1 | 6 | 160 | 2 |
50–52 | Bottleneck residual block | 13 × 13 × 160 | - | 1 | 6 | 320 | 1 |
53 | Convolution | 13 × 13 × 320 | 1 × 1 | 1 | - | 1280 | 1 |
54 | Convolution | 13 × 13 × 1280 | 1 × 1 | 1 | - | 512 | 1 |
55–62 | Depthwise separable convolution | 13 × 13 × 512 | - | 1 | - | 512 | 4 |
63 | Convolution | 13 × 13 × 512 | 1 × 1 | 1 | - | 75 (20 classes) | 1 |
64 | YOLO | - | - | - | - | - | 1 |
65 | Route | 60 | - | - | - | - | 1 |
66 | Convolution | 13 × 13 × 512 | 1 × 1 | 1 | - | 256 | 1 |
67 | Nearest neighbor upsampling | - | - | 2 | - | - | 1 |
68 | Route | 67, 40 | - | - | - | - | 1 |
69 | Convolution | 26 × 26 × 352 | 1 × 1 | 1 | - | 256 | 1 |
70–77 | Depthwise separable convolution | 26 × 26 × 256 | - | 1 | - | 256 | 4 |
78 | Convolution | 26 × 26 × 256 | 1 × 1 | 1 | - | 75 (20 classes) | 1 |
79 | YOLO | - | - | - | - | - | 1 |
80 | Route | 75 | - | - | - | - | 1 |
81 | Convolution | 26 × 26 × 256 | 1 × 1 | 1 | - | 128 | 1 |
82 | Nearest neighbor upsampling | - | - | 2 | - | - | 1 |
83 | Route | 82, 19 | - | - | - | - | 1 |
84 | Convolution | 52 × 52 × 160 | 1 × 1 | 1 | - | 128 | 1 |
85–92 | Depthwise separable convolution | 52 × 52 × 128 | - | 1 | - | 128 | 4 |
93 | Convolution | 52 × 52 × 128 | 1 × 1 | 1 | - | 75 (20 classes) | 1 |
94 | YOLO | - | - | - | - | - | 1 |
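The savings behind the table's heavy use of depthwise separable convolutions can be checked with a quick multiply-add count. This is the standard accounting sketch for separable convolutions, not necessarily the paper's exact MAdds methodology:

```python
def standard_conv_madds(h, w, c_in, c_out, k):
    """Multiply-adds of a standard k x k convolution (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_madds(h, w, c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example: one 3 x 3 layer on a 13 x 13 x 512 feature map with 512 output
# channels, as in layers 55-62 of the table above.
std = standard_conv_madds(13, 13, 512, 512, 3)
sep = depthwise_separable_madds(13, 13, 512, 512, 3)
print(std / sep)  # roughly 8.8x fewer multiply-adds for the separable version
```

The reduction factor is c_out * k^2 / (k^2 + c_out), which approaches k^2 = 9 for wide layers; this is why MYOLO3 fits in under 2000 M MAdds in the comparison below.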
Model | Input Size (Height × Width) | Computation Cost (MAdds) | Model Size (Parameters) | mAP0.5 (%) |
---|---|---|---|---|
Tiny-YOLOv2 [23] | 416 × 416 | 3490 M | 15.86 M | 57.1 |
Tiny-YOLOv3 [24] | 416 × 416 | 2742 M | 8.72 M | 58.4 |
MobileNet+SSD [26] | 300 × 300 | 1150 M | 5.77 M | 68.0 |
PeleeNet [51] | 304 × 304 | 1210 M | 5.43 M | 70.9 |
MYOLO3 (ours) | 416 × 416 | 1978 M | 4.80 M | 74.0 |
In the table below, the first six data columns report single-precision CS reconstruction and the last six report half-precision CS reconstruction.

MRh/MRl | mCR | Bwidth (Mb/s) | mPSNR (dB) | AP0.5 Person (%) | AP0.5 Bicycle (%) | AP0.5 Car (%) | mCR | Bwidth (Mb/s) | mPSNR (dB) | AP0.5 Person (%) | AP0.5 Bicycle (%) | AP0.5 Car (%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0.25/0.25 | 0.89 | 14.21 | 26.11 | 74.5 | 74.6 | 77.3 | 1.78 | 7.11 | 26.11 | 74.5 | 74.6 | 77.3 |
0.10/0.10 | 2.23 | 5.69 | 23.20 | 63.1 | 58.4 | 60.8 | 4.45 | 2.85 | 23.20 | 63.1 | 58.1 | 60.9 |
0.04/0.04 | 5.64 | 2.26 | 20.84 | 40.9 | 24.6 | 33.8 | 11.28 | 1.12 | 20.84 | 41.0 | 24.8 | 33.7 |
0.01/0.01 | 24.26 | 0.53 | 18.14 | 6.9 | 4.6 | 4.9 | 48.53 | 0.26 | 18.14 | 6.9 | 4.6 | 5.0 |
0.25/0.10 | 1.43 | 9.38 | 24.61 | 72.3 | 73.4 | 74.0 | 2.87 | 4.69 | 24.61 | 72.3 | 73.3 | 73.9 |
0.25/0.04 | 2.12 | 7.41 | 23.19 | 69.5 | 70.5 | 71.2 | 4.24 | 3.71 | 23.19 | 69.5 | 71.1 | 71.1 |
0.25/0.01 | 3.23 | 6.43 | 21.22 | 65.8 | 67.6 | 70.4 | 6.45 | 3.22 | 21.22 | 65.7 | 70.4 | 70.4 |
0.10/0.04 | 3.61 | 3.74 | 22.04 | 60.4 | 55.0 | 57.4 | 7.21 | 1.87 | 22.04 | 60.6 | 57.4 | 57.4 |
0.10/0.01 | 6.35 | 2.76 | 20.37 | 58.5 | 55.3 | 60.1 | 12.67 | 1.38 | 20.38 | 58.5 | 60.2 | 60.2 |
0.04/0.01 | 11.51 | 1.27 | 19.45 | 40.0 | 24.4 | 37.1 | 22.92 | 0.64 | 19.45 | 40.0 | 37.1 | 37.1 |
1.00/0.25 | 0.89 | 14.22 | 31.12 | 79.1 | 81.6 | 82.5 | 1.29 | 10.18 | 31.12 | 79.2 | 81.6 | 82.4 |
1.00/0.10 | 1.44 | 9.38 | 28.07 | 76.5 | 80.0 | 79.8 | 1.94 | 7.76 | 28.07 | 76.5 | 79.9 | 79.8 |
1.00/0.04 | 2.12 | 7.42 | 25.68 | 73.6 | 76.9 | 77.6 | 2.68 | 6.78 | 25.68 | 73.6 | 76.9 | 77.6 |
1.00/0.01 | 3.23 | 6.44 | 22.72 | 69.4 | 73.0 | 75.4 | 3.66 | 6.29 | 22.72 | 69.4 | 73.0 | 75.4 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liao, L.; Li, K.; Yang, C.; Liu, J. Low-Cost Image Compressive Sensing with Multiple Measurement Rates for Object Detection. Sensors 2019, 19, 2079. https://doi.org/10.3390/s19092079