Lie to Me: Shield Your Emotions from Prying Software
Abstract
1. Introduction
2. Background
2.1. Emotion Adversarial Attacks
2.2. Image Filters
2.3. Image Quality Assessment
3. Related Works
4. Emotion Recognition Settings
4.1. Data Set and Data Preparation
4.2. Emotion Recognizer Structure and Training
5. Algorithm for Adversarial Attacks
5.1. Outer Algorithm
- Initial population: Generated by randomly selecting l filters from the available filter set S; their parameters are initialized to 1.
- Crossover: We use one-point crossover to generate new offspring (i.e., children) from randomly selected parents. Each child is guaranteed to inherit genetic information from both parents.
- Mutation: With a given mutation probability, a filter is replaced with another one. The new filter is initialized with random parameters, ensuring the gene is completely mutated.
- Selection: At each iteration, the N best individuals are selected from the set of 2N candidates (i.e., parents and offspring) according to their fitness, and the process is repeated until the fixed number of generations is exhausted. Selection is formulated as a multi-objective evolutionary problem based on two criteria: Attack Success Rate (ASR) and image quality (evaluated with SSIM). Including the image quality assessment in the population evaluation phase allows the algorithm to create high-quality, natural-looking adversarial examples. Given a target facial emotion recognizer F, an original facial image, and the adversarial image derived from it by applying a sequence of filters, the fitness of the sequence is evaluated on both objectives: whether the filtered image changes the prediction of F (attack success) and the SSIM between the original and filtered images (image quality). A minimal sketch of this outer loop is given after the list.
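The following Python sketch illustrates the outer loop described above. It is illustrative only: the parametric filters, the `predict` classifier stub, the scalarised selection, and all hyperparameters (population size, sequence length, mutation rate) are assumptions made for the example, not the implementation used in the paper, which performs a genuinely multi-objective selection over ASR and SSIM.

```python
# Minimal sketch of the outer evolutionary loop, under the assumptions stated above.
# Images are assumed to be H x W x C float arrays in [0, 1]; `predict` is any callable
# returning an emotion label. SSIM via scikit-image (channel_axis needs version >= 0.19).
import random
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Hypothetical parametric filters; each takes an image and one parameter.
FILTERS = {
    "brightness": lambda img, p: np.clip(img * p, 0.0, 1.0),
    "gamma":      lambda img, p: np.clip(img ** p, 0.0, 1.0),
    "contrast":   lambda img, p: np.clip((img - 0.5) * p + 0.5, 0.0, 1.0),
}

def apply_sequence(img, individual):
    """Apply an ordered sequence of (filter_name, parameter) genes."""
    for name, param in individual:
        img = FILTERS[name](img, param)
    return img

def fitness(individual, images, predict):
    """Return (ASR, mean SSIM) for one candidate filter sequence."""
    flips, sims = 0, []
    for img in images:
        adv = apply_sequence(img, individual)
        flips += int(predict(adv) != predict(img))               # attack success
        sims.append(ssim(img, adv, channel_axis=-1, data_range=1.0))
    return flips / len(images), float(np.mean(sims))

def crossover(p1, p2):
    """One-point crossover: the child inherits genes from both parents."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(ind, p_mut=0.2):
    """Replace a filter with another one, re-initialised with a random parameter."""
    return [(random.choice(list(FILTERS)), random.uniform(0.5, 1.5))
            if random.random() < p_mut else gene for gene in ind]

def evolve(images, predict, n_pop=20, seq_len=3, generations=30):
    # Initial population: random filters with parameters initialised to 1 (identity).
    pop = [[(random.choice(list(FILTERS)), 1.0) for _ in range(seq_len)]
           for _ in range(n_pop)]
    for _ in range(generations):
        children = [mutate(crossover(*random.sample(pop, 2))) for _ in range(n_pop)]
        scored = [(ind, fitness(ind, images, predict)) for ind in pop + children]
        # Scalarised stand-in for the multi-objective (ASR, SSIM) selection of the paper.
        scored.sort(key=lambda t: t[1][0] + t[1][1], reverse=True)
        pop = [ind for ind, _ in scored[:n_pop]]
    return pop[0]
```

In the paper's setting, `predict` would wrap the target facial emotion recognizer and `FILTERS` would contain the image-enhancement filters used by the authors; the simple sum used here for ranking would be replaced by a Pareto-based multi-objective selection.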
5.2. Inner Algorithm
6. Experiments and Discussion
6.1. Experimental Setup
6.2. Evaluation
6.3. Results and Generated Images
7. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Picard, R.W. Affective Computing: Challenges. Int. J. Hum.-Comput. Stud. 2003, 59, 55–64.
- Gervasi, O.; Franzoni, V.; Riganelli, M.; Tasso, S. Automating facial emotion recognition. Web Intell. 2019, 17, 17–27.
- Sagonas, C.; Tzimiropoulos, G.; Zafeiriou, S.; Pantic, M. A Semi-automatic Methodology for Facial Landmark Annotation. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 896–903.
- Kazemi, V.; Sullivan, J. One millisecond face alignment with an ensemble of regression trees. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1867–1874.
- Curumsing, M.K.; Fernando, N.; Abdelrazek, M.; Vasa, R.; Mouzakis, K.; Grundy, J. Emotion-oriented requirements engineering: A case study in developing a smart home system for the elderly. J. Syst. Softw. 2019, 147, 215–229.
- Franzoni, V.; Biondi, G.; Perri, D.; Gervasi, O. Enhancing Mouth-Based Emotion Recognition Using Transfer Learning. Sensors 2020, 20, 5222.
- Generosi, A.; Ceccacci, S.; Mengoni, M. A deep learning-based system to track and analyze customer behavior in retail store. In Proceedings of the 2018 IEEE 8th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), Berlin, Germany, 2–5 September 2018; pp. 1–6.
- Gorrini, A.; Crociani, L.; Vizzari, G.; Bandini, S. Stress estimation in pedestrian crowds: Experimental data and simulations results. Web Intell. 2019, 17, 85–99.
- Xing, Y.; Hu, Z.; Huang, Z.; Lv, C.; Cao, D.; Velenis, E. Multi-Scale Driver Behaviors Reasoning System for Intelligent Vehicles Based on a Joint Deep Learning Framework. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 4410–4415.
- Ferrara, E.; Yang, Z. Quantifying the effect of sentiment on information diffusion in social media. PeerJ Comput. Sci. 2015, 1, e26.
- D’Errico, F.; Poggi, I. “Humble” Politicians and Their Multimodal Communication. In Proceedings of the Computational Science and Its Applications—ICCSA 2017, Trieste, Italy, 3–6 July 2017; Part III, Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10406, pp. 705–717.
- Carpenter, J. The Quiet Professional: An Investigation of US Military Explosive Ordnance Disposal Personnel Interactions with Everyday Field Robots. Ph.D. Thesis, University of Washington, Washington, DC, USA, 2013.
- Baia, A.E.; Di Bari, G.; Poggioni, V. Effective Universal Unrestricted Adversarial Attacks Using a MOE Approach. In Proceedings of the EvoAPPS 2021, Virtual Event, 7–9 April 2021.
- Baia, A.E.B.; Milani, A.; Poggioni, V. Combining Attack Success Rate and Detection Rate for effective Universal Adversarial Attacks. In Proceedings of the ESANN 2021, Online Event, 6–8 October 2021.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
- Shamsabadi, A.S.; Oh, C.; Cavallaro, A. Edgefool: An Adversarial Image Enhancement Filter. In Proceedings of the ICASSP 2020, Barcelona, Spain, 4–8 May 2020.
- Shahin Shamsabadi, A.; Sanchez-Matilla, R.; Cavallaro, A. ColorFool: Semantic Adversarial Colorization. In Proceedings of the CVPR 2020, Virtual, 14–19 June 2020.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572.
- Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. arXiv 2017, arXiv:1607.02533.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.J.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
- Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal Adversarial Perturbations. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 86–94.
- Hayes, J.; Danezis, G. Learning universal adversarial perturbations with generative models. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 43–49.
- Mopuri, K.R.; Ganeshan, A.; Babu, R.V. Generalizable data-free objective for crafting universal adversarial perturbations. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 2452–2465.
- Reddy Mopuri, K.; Krishna Uppala, P.; Venkatesh Babu, R. Ask, acquire, and attack: Data-free UAP generation using class impressions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 19–34.
- Bae, H.; Jang, J.; Jung, D.; Jang, H.; Ha, H.; Lee, H.; Yoon, S. Security and privacy issues in deep learning. arXiv 2018, arXiv:1807.11655.
- Shokri, R.; Shmatikov, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1310–1321.
- Mireshghallah, F.; Taram, M.; Vepakomma, P.; Singh, A.; Raskar, R.; Esmaeilzadeh, H. Privacy in deep learning: A survey. arXiv 2020, arXiv:2004.12254.
- Liu, Y.; Zhang, W.; Yu, N. Protecting Privacy in Shared Photos via Adversarial Examples Based Stealth. Secur. Commun. Netw. 2017, 2017, 1897438.
- Liu, B.; Ding, M.; Zhu, T.; Xiang, Y.; Zhou, W. Using Adversarial Noises to Protect Privacy in Deep Learning Era. In Proceedings of the 2018 IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, United Arab Emirates, 9–13 December 2018; pp. 1–6.
- Xue, M.; Sun, S.; Wu, Z.; He, C.; Wang, J.; Liu, W. SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images. arXiv 2020, arXiv:2011.13560.
- Sánchez-Matilla, R.; Li, C.; Shamsabadi, A.S.; Mazzon, R.; Cavallaro, A. Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection. IEEE Trans. Multimed. 2020, 22, 1862–1873.
- Arcelli, D.; Baia, A.E.B.; Milani, A.; Poggioni, V. Enhance while protecting: Privacy preserving image filtering. In Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence (WI-IAT ’21), Melbourne, Australia, 14–17 December 2021.
- Li, F.; Sun, Z.; Niu, B.; Guo, Y.; Liu, Z. SRIM Scheme: An Impression-Management Scheme for Privacy-Aware Photo-Sharing Users. Engineering 2018, 4, 85–93.
- Such, J.M.; Criado, N. Resolving Multi-Party Privacy Conflicts in Social Media. IEEE Trans. Knowl. Data Eng. 2016, 28, 1851–1863.
- Xu, Y.; Price, T.; Frahm, J.M.; Monrose, F. Virtual U: Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos. In Proceedings of the 25th USENIX Security Symposium (USENIX Security 16), Austin, TX, USA, 10–12 August 2016.
- Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; Swami, A. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2016.
- Akhtar, N.; Liu, J.; Mian, A. Defense Against Universal Adversarial Perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3389–3398.
- Xu, W.; Evans, D.; Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. arXiv 2018, arXiv:1704.01155.
- Mollahosseini, A.; Hasani, B.; Mahoor, M.H. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Trans. Affect. Comput. 2019, 10, 18–31.
- Zhao, Z.; Liu, Z.; Larson, M. Adversarial Color Enhancement: Generating Unrestricted Adversarial Images by Optimizing a Color Filter. In Proceedings of the British Machine Vision Virtual Conference (BMVC), Virtual, 7–10 September 2020.
- Wang, Y.; Wu, S.; Jiang, W.; Hao, S.; Tan, Y.a.; Zhang, Q. Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity. arXiv 2021, arXiv:2107.01396.
- Wang, L. A survey on IQA. arXiv 2021, arXiv:2109.00347.
- Xu, S.; Jiang, S.; Min, W. No-reference/Blind Image Quality Assessment: A Survey. IETE Tech. Rev. 2017, 34, 223–245.
- Zhai, G.; Min, X. Perceptual image quality assessment: A survey. Sci. China Inf. Sci. 2020, 63, 211301.
- Sun, Y.; Wu, C.; Zheng, K.; Niu, X. Adv-emotion: The facial expression adversarial attack. Int. J. Pat. Recogn. Artif. Intell. 2021, 35, 2152016.
- Sun, Y.; Yin, J.; Wu, C.; Zheng, K.; Niu, X. Generating facial expression adversarial examples based on saliency map. Image Vis. Comput. 2021, 116, 104318.
- Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1528–1540.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- Ekman, P.; Friesen, W.V. A new pan-cultural facial expression of emotion. Motiv. Emot. 1986, 10, 159–168.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
| Original | 3 filters | 4 filters | 5 filters |
|---|---|---|---|
| surprise | fear | fear | fear |
| happiness | contempt | contempt | disgust |
| anger | sadness | sadness | sadness |
| happiness | contempt | contempt | contempt |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Baia, A.E.; Biondi, G.; Franzoni, V.; Milani, A.; Poggioni, V. Lie to Me: Shield Your Emotions from Prying Software. Sensors 2022, 22, 967. https://doi.org/10.3390/s22030967