Beyond PRNU: Learning Robust Device-Specific Fingerprint for Source Camera Identification
Abstract
1. Introduction
- We demonstrate the presence of a new, non-PRNU device-specific fingerprint in digital images and develop a data-driven approach using a CNN to extract it (a minimal sketch of the PRNU-suppression step that exposes this fingerprint follows this list).
- We show that the new device-specific fingerprint is embedded mainly in the low- and mid-frequency components of the image; hence, it is more robust than fingerprints such as PRNU that reside in the high-frequency band.
- We show that the global, stochastic, and location-independent characteristics of the new device fingerprint make it a robust fingerprint for forensic applications.
- We also demonstrate the reliability of the new device-specific fingerprint in real-world forensic scenarios through open-set evaluation.
- We validate the robustness of the new fingerprint against common image-processing operations.
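To make the first contribution concrete, the following minimal Python sketch illustrates the two PRNU-suppression strategies formalised later in Sections 3.1 and 3.2: downsampling and random pixel sampling. It is an illustration under stated assumptions, not the authors' exact pipeline; the interpolation filter, scaling factor, patch size, and the reading of "random sampling" as drawing individual pixel locations are assumptions made only for this sketch.

```python
import numpy as np
from PIL import Image

def downsampled_patch(image_path, factor=4, patch_size=256):
    """Suppress PRNU by downsampling: PRNU is a pixel-aligned, high-frequency
    signal, so interpolation during resizing averages it away while the
    low- and mid-frequency characteristics of the device survive."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.BILINEAR)
    arr = np.asarray(small, dtype=np.uint8)
    # Central crop (assumes the downsampled image is at least patch_size in both dimensions).
    y0 = (arr.shape[0] - patch_size) // 2
    x0 = (arr.shape[1] - patch_size) // 2
    return arr[y0:y0 + patch_size, x0:x0 + patch_size]

def randomly_sampled_patch(image_path, patch_size=256, seed=0):
    """Suppress PRNU by assembling a patch from pixels drawn at random
    locations: the result no longer preserves the spatial pixel
    correspondence that PRNU correlation tests rely on."""
    rng = np.random.default_rng(seed)
    arr = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.uint8)
    h, w = arr.shape[:2]
    ys = rng.integers(0, h, size=patch_size * patch_size)
    xs = rng.integers(0, w, size=patch_size * patch_size)
    return arr[ys, xs].reshape(patch_size, patch_size, 3)
```

Patches produced this way can then be fed to a CNN-based fingerprint extractor; if devices remain distinguishable, the discriminative signal cannot be PRNU.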
2. Literature Review
3. Methodology
3.1. Generation of PRNU-Free Images with Downsampling
3.2. Generation of PRNU-Free Images with Random Sampling
3.3. Learning Device-Specific Fingerprints
3.4. Verification of Fingerprint’s Effectiveness in Source-Camera Identification
4. Experiments and Discussions
4.1. Experimental Setup
4.2. Evaluation of Downsampled Patches
4.3. Evaluation on Randomly Sampled Patches
4.4. Open-Set Validation of New Device-Specific Fingerprint
4.5. Analysis of Effect of PRNU Removal on State-of-the-Art Techniques
4.6. Evaluation of Robustness against Image Manipulations
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Bernacki, J. A survey on digital camera identification methods. Forensic Sci. Int. Digit. Investig. 2020, 34, 300983.
2. San Choi, K.; Lam, E.Y.; Wong, K.K. Automatic source camera identification using the intrinsic lens radial distortion. Opt. Express 2006, 14, 11551–11565.
3. Swaminathan, A.; Wu, M.; Liu, K.R. Nonintrusive component forensics of visual sensors using output images. IEEE Trans. Inf. Forensics Secur. 2007, 2, 91–106.
4. Bayram, S.; Sencar, H.; Memon, N.; Avcibas, I. Source camera identification based on CFA interpolation. In Proceedings of the IEEE International Conference on Image Processing 2005, Genoa, Italy, 11–14 September 2005; Volume 3, p. III-69.
5. Cao, H.; Kot, A.C. Accurate detection of demosaicing regularity for digital image forensics. IEEE Trans. Inf. Forensics Secur. 2009, 4, 899–910.
6. Chen, C.; Stamm, M.C. Camera model identification framework using an ensemble of demosaicing features. In Proceedings of the 2015 IEEE International Workshop on Information Forensics and Security (WIFS), Rome, Italy, 16–19 November 2015; pp. 1–6.
7. Deng, Z.; Gijsenij, A.; Zhang, J. Source camera identification using auto-white balance approximation. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 57–64.
8. Sorrell, M.J. Digital camera source identification through JPEG quantisation. In Multimedia Forensics and Security; IGI Global: Hershey, PA, USA, 2009; pp. 291–313.
9. Mullan, P.; Riess, C.; Freiling, F. Towards Open-Set Forensic Source Grouping on JPEG Header Information. Forensic Sci. Int. Digit. Investig. 2020, 32, 300916.
10. Lukas, J.; Fridrich, J.; Goljan, M. Digital camera identification from sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2006, 1, 205–214.
11. Goljan, M.; Fridrich, J.; Filler, T. Large scale test of sensor fingerprint camera identification. Media Forensics Secur. SPIE 2009, 7254, 170–181.
12. Akshatha, K.; Karunakar, A.; Anitha, H.; Raghavendra, U.; Shetty, D. Digital camera identification using PRNU: A feature based approach. Digit. Investig. 2016, 19, 69–77.
13. Li, C.T.; Li, Y. Color-decoupled photo response non-uniformity for digital image forensics. IEEE Trans. Circ. Syst. Video Technol. 2012, 22, 260–271.
14. Lin, X.; Li, C.T. Enhancing sensor pattern noise via filtering distortion removal. IEEE Signal Process. Lett. 2016, 23, 381–385.
15. Lin, X.; Li, C.T. Preprocessing reference sensor pattern noise via spectrum equalization. IEEE Trans. Inf. Forensics Secur. 2015, 11, 126–140.
16. Marra, F.; Poggi, G.; Sansone, C.; Verdoliva, L. Blind PRNU-based image clustering for source identification. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2197–2211.
17. Capasso, P.; Cimmino, L.; Abate, A.F.; Bruno, A.; Cattaneo, G. A PNU-Based Methodology to Improve the Reliability of Biometric Systems. Sensors 2022, 22, 6074.
18. Ferrara, P.; Iuliani, M.; Piva, A. PRNU-Based Video Source Attribution: Which Frames Are You Using? J. Imaging 2022, 8, 57.
19. Rouhi, R.; Bertini, F.; Montesi, D. User profiles' image clustering for digital investigations. Forensic Sci. Int. Digit. Investig. 2021, 38, 301171.
20. Hou, J.U.; Lee, H.K. Detection of hue modification using photo response nonuniformity. IEEE Trans. Circ. Syst. Video Technol. 2016, 27, 1826–1832.
21. Iuliani, M.; Fontani, M.; Shullani, D.; Piva, A. Hybrid reference-based video source identification. Sensors 2019, 19, 649.
22. Pande, A.; Chen, S.; Mohapatra, P.; Zambreno, J. Hardware architecture for video authentication using sensor pattern noise. IEEE Trans. Circ. Syst. Video Technol. 2013, 24, 157–167.
23. Li, C.T. Source Camera Identification Using Enhanced Sensor Pattern Noise. IEEE Trans. Inf. Forensics Secur. 2010, 5, 280–287.
24. Quan, Y.; Li, C.T. On addressing the impact of ISO speed upon PRNU and forgery detection. IEEE Trans. Inf. Forensics Secur. 2020, 16, 190–202.
25. Karaküçük, A.; Dirik, A.E. Adaptive photo-response non-uniformity noise removal against image source attribution. Digit. Investig. 2015, 12, 66–76.
26. Li, C.T.; Chang, C.Y.; Li, Y. On the repudiability of device identification and image integrity verification using sensor pattern noise. In Proceedings of the International Conference on Information Security and Digital Forensics, London, UK, 7–9 September 2009; pp. 19–25.
27. Lin, X.; Li, C.T. Large-scale image clustering based on camera fingerprints. IEEE Trans. Inf. Forensics Secur. 2016, 12, 793–808.
28. Amerini, I.; Caldelli, R.; Crescenzi, P.; Del Mastio, A.; Marino, A. Blind image clustering based on the normalized cuts criterion for camera identification. Signal Process. Image Commun. 2014, 29, 831–843.
29. Li, C.T.; Lin, X. A fast source-oriented image clustering method for digital forensics. EURASIP J. Image Video Process. 2017, 2017, 1–16.
30. Li, R.; Li, C.T.; Guan, Y. Inference of a compact representation of sensor fingerprint for source camera identification. Pattern Recognit. 2018, 74, 556–567.
31. Al Shaya, O.; Yang, P.; Ni, R.; Zhao, Y.; Piva, A. A new dataset for source identification of high dynamic range images. Sensors 2018, 18, 3801.
32. Dirik, A.E.; Karaküçük, A. Forensic use of photo response non-uniformity of imaging sensors and a counter method. Opt. Express 2014, 22, 470–482.
33. Dirik, A.E.; Sencar, H.T.; Memon, N. Analysis of seam-carving-based anonymization of images against PRNU noise pattern-based source attribution. IEEE Trans. Inf. Forensics Secur. 2014, 9, 2277–2290.
34. Bondi, L.; Baroffio, L.; Güera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. First steps toward camera model identification with convolutional neural networks. IEEE Signal Process. Lett. 2016, 24, 259–263.
35. Tuama, A.; Comby, F.; Chaumont, M. Camera model identification with the use of deep convolutional neural networks. In Proceedings of the 2016 IEEE International Workshop on Information Forensics and Security (WIFS), Abu Dhabi, United Arab Emirates, 4–7 December 2016; pp. 1–6.
36. Yao, H.; Qiao, T.; Xu, M.; Zheng, N. Robust multi-classifier for camera model identification based on convolution neural network. IEEE Access 2018, 6, 24973–24982.
37. Freire-Obregon, D.; Narducci, F.; Barra, S.; Castrillon-Santana, M. Deep learning for source camera identification on mobile devices. Pattern Recognit. Lett. 2019, 126, 86–91.
38. Huang, N.; He, J.; Zhu, N.; Xuan, X.; Liu, G.; Chang, C. Identification of the source camera of images based on convolutional neural network. Digit. Investig. 2018, 26, 72–80.
39. Wang, B.; Yin, J.; Tan, S.; Li, Y.; Li, M. Source camera model identification based on convolutional neural networks with local binary patterns coding. Signal Process. Image Commun. 2018, 68, 162–168.
40. Manisha; Karunakar, A.K.; Li, C.T. Identification of source social network of digital images using deep neural network. Pattern Recognit. Lett. 2021, 150, 17–25.
41. Xie, H.; Ni, J.; Shi, Y.Q. Dual-Domain Generative Adversarial Network for Digital Image Operation Anti-forensics. IEEE Trans. Circ. Syst. Video Technol. 2021, 32, 1701–1706.
42. Wang, B.; Zhao, M.; Wang, W.; Dai, X.; Li, Y.; Guo, Y. Adversarial Analysis for Source Camera Identification. IEEE Trans. Circ. Syst. Video Technol. 2020, 31, 4174–4186.
43. Chen, Y.; Huang, Y.; Ding, X. Camera model identification with residual neural network. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4337–4341.
44. Yang, P.; Ni, R.; Zhao, Y.; Zhao, W. Source camera identification based on content-adaptive fusion residual networks. Pattern Recognit. Lett. 2019, 119, 195–204.
45. Ding, X.; Chen, Y.; Tang, Z.; Huang, Y. Camera identification based on domain knowledge-driven deep multi-task learning. IEEE Access 2019, 7, 25878–25890.
46. Sameer, V.U.; Naskar, R. Deep siamese network for limited labels classification in source camera identification. Multimed. Tools Appl. 2020, 79, 28079–28104.
47. Mandelli, S.; Cozzolino, D.; Bestagini, P.; Verdoliva, L.; Tubaro, S. CNN-based fast source device identification. IEEE Signal Process. Lett. 2020, 27, 1285–1289.
48. Liu, Y.; Zou, Z.; Yang, Y.; Law, N.F.B.; Bharath, A.A. Efficient source camera identification with diversity-enhanced patch selection and deep residual prediction. Sensors 2021, 21, 4701.
49. Chen, M.; Fridrich, J.; Goljan, M.; Lukás, J. Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 2008, 3, 74–90.
50. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
51. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 630–645.
52. Shullani, D.; Fontani, M.; Iuliani, M.; Shaya, O.A.; Piva, A. VISION: A video and image dataset for source identification. EURASIP J. Inf. Secur. 2017, 2017, 15.
53. Quan, Y.; Li, C.T.; Zhou, Y.; Li, L. Warwick Image Forensics Dataset for Device Fingerprinting in Multimedia Forensics. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6.
54. Tian, H.; Xiao, Y.; Cao, G.; Zhang, Y.; Xu, Z.; Zhao, Y. Daxing Smartphone Identification Dataset. IEEE Access 2019, 7, 101046–101053.
55. Bruno, A.; Capasso, P.; Cattaneo, G.; Petrillo, U.F.; Improta, R. A novel image dataset for source camera identification and image based recognition systems. Multimed. Tools Appl. 2022, 1–17.
56. Cozzolino, D.; Verdoliva, L. Noiseprint: A CNN-based camera model fingerprint. arXiv 2018, arXiv:1808.08396.
57. Zuo, Z. Camera Model Identification with Convolutional Neural Networks and Image Noise Pattern; University Library, University of Illinois: Urbana, IL, USA, 2018.
58. Quan, Y.; Lin, X.; Li, C.T. Provenance Inference for Instagram Photos Through Device Fingerprinting. IEEE Access 2020, 8, 168309–168320.
| Dataset | Model | No. of Devices | No. of Images from Each Device | Resolution |
|---|---|---|---|---|
| VISION | iPhone 5c | 2 | 204 | 3264 × 2448 |
| VISION | iPhone 4s | 2 | 200 | 3264 × 2448 |
| Warwick | Fujifilm X-A10 | 2 | 197 | 3264 × 2448 |
| Daxing | Huawei P20 | 2 | 190 | 2976 × 3968 |
| Daxing | iPhone 6 | 3 | 190 | 3264 × 2448 |
| Daxing | iPhone 7 | 3 | 190 | 4032 × 3024 |
| Daxing | iPhone 8 Plus | 1 | 190 | 4032 × 3024 |
| Daxing | Oppo R9 | 2 | 190 | 3120 × 4160 |
| Daxing | Xiaomi 4A | 5 | 190 | 3120 × 4160 |
| UNISA2020 | Nikon D90 | 10 | 100 | 4288 × 2848 |
| Custom-built | Redmi Note 3 | 2 | 200 | 4608 × 3456 |
| Custom-built | Asus Zenfone Max Pro M1 | 2 | 250 | 4608 × 3456 |
| Device | Average PCE of PRNUs of 30 Original Images with Respect to | Average PCE of PRNUs of 30 PRNU-Free Images with Respect to | Average PCE of PRNUs of 30 PRNU-Free Images with Respect to |
|---|---|---|---|
| iPhone 5c-1 | 16,067 | 1.52 | 2.19 |
| iPhone 5c-2 | 9848 | 1.96 | 2.93 |
| iPhone 4s-1 | 17,674 | 1.22 | 1.89 |
| Fujifilm X-A10-1 | 1095 | 1.51 | - |
| Redmi Note 3-1 | 14,845 | 1.51 | - |
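For context on the values in the table above, the following is a minimal sketch of a Peak-to-Correlation Energy (PCE) computation between an image noise residual and a reference PRNU fingerprint. The simplified normalisation and the size of the peak-exclusion window are assumptions of this sketch, not the exact implementation used in the paper; PCE values in the thousands indicate that the PRNU is present, whereas values close to 1, as for the PRNU-free images, indicate that it has been removed.

```python
import numpy as np

def pce(residual, fingerprint, exclude=11):
    """Peak-to-Correlation Energy between a noise residual and a reference
    PRNU fingerprint of the same size (2-D, single channel)."""
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    # Circular cross-correlation computed via the FFT.
    corr = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(f))))
    peak = corr[0, 0]  # zero shift, assuming the pair is spatially aligned
    # Correlation energy over all shifts except a small neighbourhood of the
    # peak (which wraps around the corners of the circular correlation surface).
    half = exclude // 2
    mask = np.ones_like(corr, dtype=bool)
    mask[:half + 1, :half + 1] = False
    mask[:half + 1, -half:] = False
    mask[-half:, :half + 1] = False
    mask[-half:, -half:] = False
    return peak ** 2 / np.mean(corr[mask] ** 2)
```

In practice the residual is obtained by denoising the test image and subtracting the denoised version, and the reference fingerprint is estimated by averaging residuals from many images taken by the same device.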
Testing accuracy (%) for each patch-generation strategy:

| Exp. | Device Used | Downsampling from Original Images | Random Sampling from Original Images | Random Sampling from Downsampled Images |
|---|---|---|---|---|
| 1 | 2 iPhone 5c | 97.06 | 83.86 | 84.44 |
| 2 | 2 iPhone 4s | 98.00 | 85.70 | 86.38 |
| 3 | 2 Fujifilm X-A10 | 95.92 | 84.39 | 86.27 |
| 4 | 2 Redmi Note 3 | 97.00 | 85.38 | 92.26 |
| 5 | 2 Asus Zenfone Max Pro M1 | 95.16 | 84.25 | 86.24 |
| 6 | 5 Xiaomi 4A | 99.57 | 94.01 | 95.79 |
| 7 | 16 devices of 6 different models from the Daxing dataset | 99.20 | 98.15 | 99.58 |
Testing accuracy (%) achieved on open-set cameras:

| Cameras Used to Train the Fingerprint Extractor | 6 Oppo R9 (6 Cameras) | 2 Asus Zenfone Max Pro M1, 2 iPhone 5, 4 Vivo X9 (8 Cameras) | 2 Fujifilm X-A10, 2 Redmi Note 5 Pro, 6 Oppo R9 (10 Cameras) | 2 Fujifilm X-A10, 2 Redmi Note 5 Pro, 2 Redmi Note 3, 6 Oppo R9 (12 Cameras) | 33 Devices from 11 Brands of the VISION Dataset |
|---|---|---|---|---|---|
| 2 iPhone 5c | 99.40 | 92.16 | 90.40 | 83.33 | 63.78 |
| 5 Xiaomi 4A | 99.70 | 96.61 | 91.06 | 86.52 | 65.37 |
Testing accuracy (%):

| Method | 2 iPhone 4s | 2 Fujifilm X-A10 | 2 Redmi Note 3 | 16 Devices (Daxing Dataset) |
|---|---|---|---|---|
| CNN + SVM [34] | 50.00 | 50.00 | 50.00 | 6.25 |
| CNN-based multi-classifier [36] | 76.00 | 58.16 | 75.00 | 76.60 |
| CNN with 6 layers [37] | 73.00 | 45.00 | 76.00 | 21.15 |
| CNN with 11 layers [38] | 50.00 | 50.00 | 50.00 | 6.25 |
| LBP + modified AlexNet [39] | 64.71 | 50.00 | 53.00 | 6.25 |
| Residual neural network [43] | 80.00 | 81.63 | 79.00 | 91.88 |
| Content-adaptive fusion network [44] | 70.00 | 59.00 | 62.73 | 51.06 |
| Proposed downsampling from original images | 98.00 | 95.92 | 97.00 | 99.20 |
| Proposed random sampling from original images | 85.70 | 84.39 | 85.38 | 98.15 |
| Proposed random sampling from downsampled images | 86.38 | 86.27 | 92.26 | 99.58 |
| Method | Testing Accuracy (%) |
|---|---|
| CNN + SVM [34] | 25.00 |
| CNN-based multi-classifier [36] | 74.05 |
| CNN with 6 layers [37] | 75.53 |
| CNN with 11 layers [38] | 62.27 |
| Residual neural network [43] | 77.12 |
| Content-adaptive fusion network [44] | 38.14 |
| Proposed random sampling from original images | 96.02 |
Testing accuracy (%) after common image manipulations (each manipulation applied at several parameter settings):

| Device Used for the Experiment | Original | Gamma Correction | Gamma Correction | Rotation | Rotation | Rotation | JPEG Compression | JPEG Compression | JPEG Compression |
|---|---|---|---|---|---|---|---|---|---|
| 2 iPhone 5c | 97.06 | 97.06 | 96.08 | 96.08 | 87.25 | 91.18 | 97.06 | 96.08 | 96.08 |
| 2 Fujifilm X-A10 | 95.92 | 95.92 | 94.95 | 91.84 | 88.78 | 85.71 | 95.92 | 95.92 | 95.92 |
| 2 Redmi Note 3 | 97.00 | 97.00 | 94.00 | 97.00 | 94.00 | 83.00 | 97.00 | 97.00 | 96.00 |
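As a reproducibility aid for the robustness experiment summarised above, the following sketch applies the three manipulations to a test image using Pillow and NumPy. The specific gamma value, rotation angle, and JPEG quality factor are illustrative assumptions, since the exact parameter settings behind each sub-column are not listed here.

```python
import io
import numpy as np
from PIL import Image

def manipulate(image_path, gamma=0.8, angle=90, jpeg_quality=80):
    """Return gamma-corrected, rotated, and JPEG-recompressed variants of an
    image, as used to probe the robustness of the learned fingerprint."""
    img = Image.open(image_path).convert("RGB")

    # Gamma correction on intensities normalised to [0, 1].
    arr = np.asarray(img, dtype=np.float32) / 255.0
    gamma_img = Image.fromarray(np.uint8(255.0 * np.power(arr, gamma)))

    # Rotation (expand=True keeps the whole rotated frame).
    rotated_img = img.rotate(angle, expand=True)

    # JPEG re-compression at the chosen quality factor.
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=jpeg_quality)
    jpeg_img = Image.open(io.BytesIO(buffer.getvalue()))

    return gamma_img, rotated_img, jpeg_img
```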
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).