Predicting Group Contribution Behaviour in a Public Goods Game from Face-to-Face Communication
Abstract
1. Introduction
1.1. Related Work
1.2. Contribution
2. Automatic Facial Expression Analysis Approach
2.1. Facial Features (FF)
2.2. Facial Activity Descriptors (FADs)
2.3. Group Activity Descriptors (GADs)
2.4. Classification
3. Dataset
4. Experiments and Results
4.1. Experimental Setup and Results
4.1.1. Model and Feature Selection
4.1.2. RFc Trees
4.1.3. Optimal Threshold
4.1.4. Temporal Splits
4.2. Ablation Studies
4.3. Feature Importance in RFc
4.4. Content Analysis
5. Discussion
Author Contributions
Funding
Conflicts of Interest
References
Head Pose | AU | AU Full Name | Prediction (I) | Prediction (P) | AU | AU Full Name | Prediction (I) | Prediction (P)
---|---|---|---|---|---|---|---|---
Yaw | AU1 | Inner brow raiser | I | P* | AU14 | Dimpler | I* | P
Pitch | AU2 | Outer brow raiser | I* | P* | AU15 | Lip corner depressor | I* | P
Roll | AU4 | Brow lowerer | I | P* | AU17 | Chin raiser | I* | P
 | AU5 | Upper lid raiser | I | P* | AU20 | Lip stretcher | I* | P*
 | AU6 | Cheek raiser | I | P | AU23 | Lip tightener | I* | P*
 | AU7 | Lid tightener | I* | P | AU26 | Jaw drop | I | P*
 | AU9 | Nose wrinkler | I | P* | AU28 | Lip suck | - | P*
 | AU10 | Upper lip raiser | I | P | AU45 | Blink | I* | P*
 | AU12 | Lip corner puller | I | P | | | |

I = AU intensity estimate, P = AU presence (occurrence) estimate; AU28 has no intensity output.
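The head-pose and AU estimates above are produced per video frame by OpenFace. Below is a minimal sketch of loading them into the intensity (I) and presence (P) feature groups, assuming OpenFace's standard CSV output with `AU*_r` intensity and `AU*_c` presence columns (the file name is hypothetical):

```python
import pandas as pd

# Load per-frame OpenFace output (file name is hypothetical).
df = pd.read_csv("participant_01.csv")
df.columns = df.columns.str.strip()  # OpenFace pads column names with spaces

# Head-pose angles and AU estimates as in the table above:
# *_r columns hold AU intensities (I), *_c columns hold AU presence (P).
pose = df[["pose_Rx", "pose_Ry", "pose_Rz"]]  # pitch, yaw, roll rotations
au_intensity = df[[c for c in df.columns if c.startswith("AU") and c.endswith("_r")]]
au_presence = df[[c for c in df.columns if c.startswith("AU") and c.endswith("_c")]]
```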
Combined Models (11 Splits) | | Beginning Models (11 Splits) | | End Models (11 Splits) |
---|---|---|---|---|---
1st & 3rd third split | 1st & 4th & 5th fifth split | 1st third split | 3rd fifth split | 3rd third split | 5th fifth split
1st & 4th quarter split | 1st & 2nd & 4th & 5th fifth split | 1st quarter split | 1st & 2nd fifth split | 3rd quarter split | 3rd & 4th fifth split
1st & 3rd & 4th quarter split | 1st & 2nd & 3rd & 5th fifth split | 2nd quarter split | 1st & 3rd fifth split | 4th quarter split | 3rd & 5th fifth split
1st & 2nd & 4th quarter split | 1st & 3rd & 4th & 5th fifth split | 1st & 2nd quarter split | 2nd & 3rd fifth split | 3rd & 4th quarter split | 4th & 5th fifth split
1st & 5th fifth split | 1st & 2nd & 3rd & 4th & 5th fifth split | 1st fifth split | 1st & 2nd & 3rd fifth split | 3rd fifth split | 3rd & 4th & 5th fifth split
1st & 2nd & 5th fifth split | | 2nd fifth split | | 4th fifth split |
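Each model family above trains on a subset of the conversation's frames, obtained by splitting the video into thirds, quarters, or fifths and keeping selected segments. A minimal sketch of generating the frame indices for such a split (the helper name is illustrative, not from the paper):

```python
import numpy as np

def split_frames(n_frames: int, n_parts: int, parts: list[int]) -> np.ndarray:
    """Return frame indices for the selected 1-based parts of an
    n_parts-way temporal split (e.g., parts=[1, 3] of a third split)."""
    bounds = np.linspace(0, n_frames, n_parts + 1).astype(int)
    return np.concatenate([np.arange(bounds[p - 1], bounds[p]) for p in parts])

# Example: the "1st & 3rd third split" of a 9000-frame video.
idx = split_frames(9000, 3, [1, 3])
```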
Models | p-Value
---|---
uninformed guess vs. trivial models | 0.0000*
uninformed guess vs. combined models | 0.0000*
uninformed guess vs. beginning models | 0.0000*
uninformed guess vs. end models | 0.0000*
trivial models vs. combined models | 0.0530
trivial models vs. beginning models | 0.0519
trivial models vs. end models | 0.0082*
combined models vs. beginning models | 0.9872
combined models vs. end models | 0.4435
beginning models vs. end models | 0.5147

\* significant at the 5% level (p < 0.05).
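The table reports pairwise comparisons between the accuracy distributions of the model families. The test statistic itself is not reproduced here; as an illustrative stand-in, a two-sided Mann-Whitney U test over per-group accuracies could be run as follows (the accuracy values below are hypothetical):

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-group accuracies for two model families.
end_models = [0.72, 0.69, 0.71, 0.70, 0.73, 0.68]
trivial_models = [0.61, 0.63, 0.60, 0.62, 0.59, 0.64]

stat, p_value = mannwhitneyu(end_models, trivial_models, alternative="two-sided")
print(f"p = {p_value:.4f}")  # compare against the 5% significance level
```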
Method | Exp. 1 | Exp. 2 | Exp. 3 | Exp. 4 | Exp. 5
---|---|---|---|---|---
Using all frames with RFc | √ | √ | √ | |
Using RFc with 5k trees | | √ | √ | √ | √
Applying optimal threshold | | | √ | √ | √
Using combined models | | | | √ |
Using end models (11 splits) | | | | | √
Average Accuracy | 65.29% | 67.39% | 68.11% | 68.57% | 70.18%
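Experiments 2–5 grow the RFc to 5000 trees, and Experiments 3–5 replace the default 0.5 probability cut-off with an optimized threshold. A minimal sketch of both steps, assuming scikit-learn's RandomForestClassifier on synthetic stand-in features and a threshold chosen by maximizing Youden's J on validation data (the paper's exact selection criterion may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 21 facial-activity features.
X, y = make_classification(n_samples=300, n_features=21, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# RFc with 5000 trees, as in Experiments 2-5.
rfc = RandomForestClassifier(n_estimators=5000, random_state=0)
rfc.fit(X_train, y_train)

# Choose the threshold maximizing Youden's J = TPR - FPR on validation data,
# instead of the default 0.5 cut-off on the predicted probability.
scores = rfc.predict_proba(X_val)[:, 1]
fpr, tpr, thresholds = roc_curve(y_val, scores)
best_threshold = thresholds[np.argmax(tpr - fpr)]
y_pred = (scores >= best_threshold).astype(int)
```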
Rank | Feature | Importance | Rank | Feature | Importance | Rank | Feature | Importance | Rank | Feature | Importance
---|---|---|---|---|---|---|---|---|---|---|---
1 | pose_y | 0.1387 | 7 | AU17_I | 0.1029 | 13 | AU07_I | 0.0759 | 19 | AU26_P | 0.0696
2 | pose_p | 0.1367 | 8 | AU02_I | 0.1022 | 14 | AU20_P | 0.0744 | 20 | AU09_P | 0.0667
3 | pose_r | 0.1349 | 9 | AU15_I | 0.1012 | 15 | AU45_P | 0.0736 | 21 | AU01_P | 0.0640
4 | AU45_I | 0.1060 | 10 | AU23_P | 0.0769 | 16 | AU02_P | 0.0721 | | |
5 | AU20_I | 0.1036 | 11 | AU05_P | 0.0766 | 17 | AU28_P | 0.0717 | | |
6 | AU23_I | 0.1033 | 12 | AU14_I | 0.0763 | 18 | AU04_P | 0.0707 | | |

pose_y, pose_p, and pose_r denote head-pose yaw, pitch, and roll; the suffixes _I and _P denote AU intensity and AU presence features.
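A ranking like the one above can be read directly from a trained random forest. A minimal sketch using scikit-learn's Gini-based `feature_importances_` (whether the paper uses this exact importance measure is an assumption; the data and feature subset here are illustrative):

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data; names mirror a subset of the features in the table.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
names = ["pose_y", "pose_p", "pose_r", "AU45_I", "AU20_I"]

rfc = RandomForestClassifier(n_estimators=5000, random_state=0).fit(X, y)

# Mean decrease in Gini impurity per feature, sorted as in the table above.
ranking = pd.Series(rfc.feature_importances_, index=names).sort_values(ascending=False)
print(ranking)
```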
Dependent variable: group contributions (p-values in parentheses).

Variable | Coefficient | Variable | Coefficient
---|---|---|---
Number of words | 0.088 (0.102) | Number of males | −4.973 (0.361)
End-Games | 39.855 (0.006) | Aggregated age | 0.108 (0.887)
Invest All | 64.899 (0.073) | Constant | 3.395 (0.964)
Subjects Against | −23.850 (0.205) | R-squared | 0.0447
Previous Experience | 6.795 (0.554) | Number of observations | 127
Threats & Consequences | −7.159 (0.604) | LR-Chi² | 23.01
Number of economists | 1.349 (0.819) | |
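The reported LR-Chi² points to a likelihood-based estimator (e.g., a Tobit model for censored contributions), which is not reproduced here. As a rough illustrative stand-in, an OLS regression of group contributions on a few of the listed content features via statsmodels, on synthetic data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 127  # number of joint observations, as in the table
df = pd.DataFrame({
    "contributions": rng.uniform(0, 120, n),
    "n_words": rng.integers(100, 800, n),
    "end_games": rng.integers(0, 2, n),   # end-game mentioned in communication
    "invest_all": rng.integers(0, 2, n),  # proposal to invest everything
})

# Illustrative OLS only; the paper's LR-Chi2 suggests a maximum-likelihood
# model rather than ordinary least squares.
model = smf.ols("contributions ~ n_words + end_games + invest_all", data=df).fit()
print(model.summary())
```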
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).