A Machine Walks into an Exhibit: A Technical Analysis of Art Curation
Abstract
1. Introduction
- RQ1: What differences does the computer vision code identify between the curations and the overall open-access collection?
- RQ2: How are these findings different from the reported exhibit visitor responses?
2. Materials and Methods
Data Analysis
- All exhibit pieces and the Met collection sample;
- Instagram-selected exhibit pieces and the Met collection sample;
- Human (artist)-selected exhibit pieces and the Met collection sample;
- Instagram-selected exhibit pieces and the Human (artist)-selected exhibit pieces.
3. Results
Statistical Analysis
4. Discussion
4.1. Compared to Previous Research
4.2. Human-in-the-Loop
4.3. Limitations
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AI | Artificial Intelligence |
| CV | Computer Vision |
Notes
1. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Visit to a Shrine at the Hour of the Ox (Ushi no toki mairi),” 1765.
2. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Fragment of a Red-Ground Harshang Carpet,” early 19th century.
3. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Sofa (part of a set),” circa 1835.
References
- Alaoui, Sarah Fdili. 2019. Making an interactive dance piece: Tensions in integrating technology in art. Presented at the DIS 2019—The 2019 ACM Designing Interactive Systems Conference, San Francisco, CA, USA, June 23–28; pp. 1195–208. [Google Scholar] [CrossRef]
- Alvarado, Oscar, and Annika Waern. 2018. Towards Algorithmic Experience: Initial Efforts for Social Media Contexts. Presented at the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21–26. [Google Scholar] [CrossRef]
- Bailey, Jason. 2017. Machine Learning for Art Valuation. An Interview with Ahmed Hosny. Artnome, December 9. [Google Scholar]
- Bakhshi, Saeideh, David A. Shamma, and Eric Gilbert. 2014. Faces engage us: Photos with faces attract more likes and comments on instagram. Presented at the SIGCHI Conference on Human Factors in Computing Systems—CHI’14, Toronto, ON, Canada, April 26; New York: Association for Computing Machinery, pp. 965–74. [Google Scholar] [CrossRef]
- Binns, Reuben. 2022. Human judgment in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance 16: 197–211. [Google Scholar] [CrossRef]
- Bo, Yihang, Jinhui Yu, and Kang Zhang. 2018. Computational aesthetics and applications. Visual Computing for Industry, Biomedicine, and Art 1: 6. [Google Scholar] [CrossRef]
- Borowski, Judy, Christina M. Funke, Karolina Stosio, Wieland Brendel, T. Wallis, and Matthias Bethge. 2019. The notorious difficulty of comparing human and machine perception. Presented at the 2019 Conference on Cognitive Computational Neuroscience, Berlin, Germany, September 13–16; pp. 642–646. [Google Scholar]
- Bullot, Nicolas J., and Rolf Reber. 2013. The Artful mind meets art history: Toward a psycho-historical framework for the science of art appreciation. Behavioral and Brain Sciences 36: 123–37. [Google Scholar] [CrossRef]
- Canny, John. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8: 679–98. [Google Scholar] [CrossRef]
- Caramiaux, Baptiste, and Sarah Fdili Alaoui. 2022. “Explorers of unknown planets”: Practices and politics of artificial intelligence in visual arts. Proceedings of the ACM on Human–Computer Interaction 6: 1–24. [Google Scholar] [CrossRef]
- Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann. 2019. Task-dependent algorithm aversion. Journal of Marketing Research 56: 809–25. [Google Scholar] [CrossRef]
- Clerwall, Christer. 2014. Enter the robot journalist. Journalism Practice 8: 519–31. [Google Scholar] [CrossRef]
- Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144: 114–26. [Google Scholar] [CrossRef]
- Fosnot, Catherine Twomey, and Randall Stewart Perry. 2005. Constructivism: A Psychological Theory of Learning. New York: Teacher College. [Google Scholar]
- Funke, Christina M., Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S.A. Wallis, and Matthias Bethge. 2021. Five points to check when comparing visual perception in humans and machines. Journal of Vision 21: 1–23. [Google Scholar] [CrossRef]
- Gao, Catherine A., Frederick M. Howard, Nikolay S. Markov, Emma C. Dyer, Siddhi Ramesh, Yuan Luo, and Alexander T. Pearson. 2023. Comparing scientific abstracts generated by chatgpt to real abstracts with detectors and blinded human reviewers. npj Digital Medicine 6: 1–5. [Google Scholar] [CrossRef]
- Graham, Beryl, and Sarah Cook. 2015. Rethinking Curating. Cambridge: The MIT Press. [Google Scholar]
- Greenberger, Alex. 2023. Artist wins photography contest after submitting AI-generated image. ARTnews, April 17. [Google Scholar]
- Haimson, Oliver L., Daniel Delmonaco, Andrea Wegner, and Peipei Nie. 2021. Disproportionate removals and differing content moderation experiences for conservative, transgender, and black social media users: Marginalization and moderation gray areas. Proceedings of the ACM on Human–Computer Interaction 5: 1–35. [Google Scholar] [CrossRef]
- Haken, Hermann. 1991. Comparisons Between Human Perception and Machine “Perception”. Berlin/Heidelberg: Springer, vol. 50, pp. 133–48. [Google Scholar] [CrossRef]
- Harris, Chris, and Mike Stephens. 1988. A Combined Corner and Edge Detector. Presented at the 4th Alvey Vision Conference, Manchester, UK, August 31–September 2; pp. 147–51. [Google Scholar]
- Herman, Laura M., and Caterina Moruzzi. 2024. The algorithmic pedestal: A practice-based study of algorithmic & artistic curation. Leonardo, 485–92. [Google Scholar] [CrossRef]
- Hoenig, Florian. 2005. Defining Computational Aesthetics. Computational Aesthetics in Graphics, Visualization and Imaging 2005: 13–18. [Google Scholar]
- Hong, Joo-Wha. 2018. Bias in Perception of Art Produced by Artificial Intelligence. Cham: Springer International Publishing, pp. 290–303. [Google Scholar]
- Hosny, Ahmed, Jili Huang, and Yingyi Wang. 2014. The Green Canvas. GitHub. Available online: https://github.com/ahmedhosny/theGreenCanvas (accessed on 15 October 2021).
- Jensen, Nina. 1999. Children, Teenagers and Adults in Museums: A Developmental Perspective, 2nd ed. London: Routledge, pp. 110–17. [Google Scholar]
- Joshi, Dhiraj, Ritendra Datta, Elena Fedorovskaya, Quang Tuan Luong, James Z. Wang, Jia Li, and Jiebo Luo. 2011. Aesthetics and emotions in images. IEEE Signal Processing Magazine 28: 94–115. [Google Scholar] [CrossRef]
- Karizat, Nadia, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic folk theories and identity: How tiktok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human–Computer Interaction 5: 1–44. [Google Scholar] [CrossRef]
- Köbis, Nils, and Luca D. Mossink. 2021. Artificial intelligence versus maya angelou: Experimental evidence that people cannot differentiate ai-generated from human-written poetry. Computers in Human Behavior 114: 106553. [Google Scholar] [CrossRef]
- Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. Edited by Fernando Pereira, Christopher Burges, Leon Bottou and Kilian Weinberger. Red Hook: Curran Associates, Inc., vol. 25. [Google Scholar]
- Lepori, Michael A., and Chaz Firestone. 2022. Can you hear me now? Sensitive comparisons of human and machine perception. Cognitive Science 46: e13191. [Google Scholar] [CrossRef]
- Liu, Yaqi, Qingxiao Guan, Xianfeng Zhao, and Yun Cao. 2018. Image forgery localization based on multi-scale convolutional neural networks. Presented at the 6th ACM Workshop on Information Hiding and Multimedia Security—IH&MMSec’18, Innsbruck, Austria, June 20–22; New York: Association for Computing Machinery, pp. 85–90. [Google Scholar] [CrossRef]
- Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge. 2019. Resistance to medical artificial intelligence. Journal of Consumer Research 46: 629–50. [Google Scholar] [CrossRef]
- MacDowall, Lachlan, and Kylie Budge. 2021. Art after Instagram: Art Spaces, Audiences, Aesthetics. London: Routledge. [Google Scholar]
- Makino, Taro, Stanisław Jastrzębski, Witold Oleszkiewicz, Celin Chacko, Robin Ehrenpreis, Naziya Samreen, Chloe Chhor, Eric Kim, Jiyon Lee, Kristine Pysarenko, and et al. 2022. Differences between human and machine perception in medical diagnosis. Scientific Reports 12: 6877. [Google Scholar] [CrossRef]
- Manovich, Lev. 2017. Instagram and Contemporary Image. manovich.net. Available online: https://manovich.net/index.php/projects/instagram-and-contemporary-image (accessed on 17 May 2023).
- Manovich, Lev. 2021. Computer vision, human senses, and language of art. AI and Society 36: 1145–52. [Google Scholar] [CrossRef]
- Manovich, Lev, and Emanuele Arielli. 2023. Artificial Aesthetics: A Critical Guide to AI, Media and Design. manovich.net. Available online: https://manovich.net/index.php/projects/artificial-aesthetics (accessed on 17 May 2023).
- Metoyer-Duran, Cheryl. 1993. Information gatekeepers. Annual Review of Information Science and Technology (ARIST) 28: 111–50. [Google Scholar]
- Mochocki, Michał. 2021. Heritage sites and video games: Questions of authenticity and immersion. Games and Culture 16: 951–77. [Google Scholar] [CrossRef]
- Moruzzi, Caterina. 2021. Measuring creativity: An account of natural and artificial creativity. European Journal for Philosophy of Science 11: 1. [Google Scholar] [CrossRef]
- Mosqueira-Rey, Eduardo, Elena Hernández-Pereira, David Alonso-Ríos, José Bobes-Bascarán, and Ángel Fernández-Leal. 2023. Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review 56: 3005–54. [Google Scholar] [CrossRef]
- Rae, Irene. 2024. The effects of perceived AI use on content perceptions. Presented at the CHI’ 24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, May 11–16; pp. 1–14. [Google Scholar] [CrossRef]
- Ragot, Martin, Nicolas Martin, and Salomé Cojean. 2020. AI-generated vs. human artworks. a perception bias towards artificial intelligence? Presented at the CHI’ 20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25–30. [Google Scholar] [CrossRef]
- Roose, Kevin. 2022. AI-generated art won a prize. Artists aren’t happy. The New York Times, September 2. [Google Scholar]
- Shamir, Lior, Jenny Nissel, and Ellen Winner. 2016. Distinguishing between abstract art by artists vs. children and animals: Comparison between human and machine perception. ACM Transactions on Applied Perception (TAP) 13: 1–17. [Google Scholar] [CrossRef]
- Stack, John. 2019. What the Machine Saw. GitHub. Available online: https://github.com/johnstack/what-the-machine-saw (accessed on 15 March 2022).
- Steward, Jeff. 2015. Harvard Art Museums API. Available online: https://github.com/harvardartmuseums/api-docs (accessed on 10 December 2023).
- Szeliski, Richard. 2022. Computer Vision: Algorithms and Applications, 2nd ed. Berlin and Heidelberg: Springer Nature. [Google Scholar]
- Uliasz, Rebecca. 2021. Seeing like an algorithm: Operative images and emergent subjects. AI and Society 36: 1233–41. [Google Scholar] [CrossRef]
- Villaespesa, Elena, and Oonagh Murphy. 2021. This is not an apple! Benefits and challenges of applying computer vision to museum collections. Museum Management and Curatorship 36: 362–83. [Google Scholar] [CrossRef]
- Viola, Paul, and Michael Jones. 2001. Rapid object detection using a boosted cascade of simple features. Presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, December 8–14, vol. 1. [Google Scholar] [CrossRef]
- Vitulano, Sergio, Vito Di Gesú, Virginio Cantoni, Roberto Marmo, and Alessandra Setti. 2005. Human and Machine Perception: Communication, Interaction, and Integration. Singapore: World Scientific Publishing Co. [Google Scholar]
- Von Davier, Thomas Şerban. 2023. Designing for Appreciation: How Digital Spaces Can Support Art and Culture. Presented at the CHI’ 23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, April 23–28. [Google Scholar] [CrossRef]
- McKinney, Wes. 2010. Data Structures for Statistical Computing in Python. SciPy 445: 51–56. [Google Scholar] [CrossRef]
- Zanzotto, Fabio Massimo. 2019. Viewpoint: Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research 64: 243–52. [Google Scholar] [CrossRef]
- Zhang, Jingyi, Jiaxing Huang, Sheng Jin, and Shijian Lu. 2024. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 46: 5625–44. [Google Scholar] [CrossRef]
- Zhang, Jiajing, Yongwei Miao, Junsong Zhang, and Jinhui Yu. 2020. Inkthetics: A Comprehensive Computational Model for Aesthetic Evaluation of Chinese Ink Paintings. IEEE Access 8: 225857–71. [Google Scholar] [CrossRef]
- Zhang, Lixuan, Iryna Pentina, and Yuhong Fan. 2021. Who do you choose? comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing 35: 634–46. [Google Scholar] [CrossRef]
- Zylinska, Joanna. 2023. The Perception Machine: Our Photographic Future between the Eye and AI. Cambridge: The MIT Press. [Google Scholar] [CrossRef]
| CV Variable | Definition |
|---|---|
| Dominant Color | Returns the hex code and RGB value of the most common color in the image file, based on a pre-defined number of clusters. |
| Brightness | Average brightness of the pixels in the image file. |
| Ratio of Unique Colors | The number of unique colors in the image file divided by its total number of pixels. A value of 1 indicates a highly colorful image, while a value near 0 indicates a greyscale image file. |
| Threshold Black Percentage | Each pixel of the greyscale (or inverted) image is compared to a pre-defined threshold value of 127 and classified as white (above) or black (below); the variable is the resulting ratio of black pixels. |
| High Brightness Percentage | Ratio of pixels whose brightness is more than twice the average brightness of the image file to the total number of pixels. |
| Low Brightness Percentage | Ratio of pixels whose brightness is less than half the average brightness of the image file to the total number of pixels. |
| Corner Percentage | Harris Corner Detection (Harris and Stephens 1988) identifies corner pixels; the variable is the percentage of pixels in the image file that register as corners. |
| Edge Percentage | Canny Edge Detection (Canny 1986) identifies edge pixels; the variable is the percentage of pixels in the image file that register as edges. |
| Face Count | Haar cascade face detection (Viola and Jones 2001) from OpenCV, a common baseline face detector, is applied to the image file to count any detectable faces. |
| CV Variable | Test Statistic | p-Value |
|---|---|---|
| Brightness | 0.101384 | 0.92040 |
| Ratio of Unique Colors | −2.30718 | 0.033441 |
| Threshold Black Percentage | −0.07543 | 0.940718 |
| High Brightness Percentage | 1.38500 | 0.181199 |
| Low Brightness Percentage | −0.35159 | 0.72933 |
| Corner Percentage | 0.47165 | 0.642551 |
| Edge Percentage | −1.36619 | 0.189242 |
| Face Count | 0.062826 | 0.95060 |
| CV Variable | Test Statistic | p-Value |
|---|---|---|
| Brightness | 0.724514 | 0.48039 |
| Ratio of Unique Colors | −1.32710 | 0.20561 |
| Threshold Black Percentage | −0.32200 | 0.75211 |
| High Brightness Percentage | −1.09492 | 0.291933 |
| Low Brightness Percentage | −1.24428 | 0.233566 |
| Corner Percentage | −0.71589 | 0.485799 |
| Edge Percentage | −1.8290 | 0.08851 |
| Face Count | 5.238054 | 2.835 |
| CV Variable | Test Statistic | p-Value |
|---|---|---|
| Brightness | 0.552951 | 0.583944 |
| Ratio of Unique Colors | −2.16496 | 0.037811 |
| Threshold Black Percentage | −0.28687 | 0.775964 |
| High Brightness Percentage | −0.763722 | 0.45049 |
| Low Brightness Percentage | −1.19127 | 0.241977 |
| Corner Percentage | −0.62744 | 0.534768 |
| Edge Percentage | −2.28754 | 0.028678 |
| Face Count | 1.36220 | 0.18150 |
| CV Variable | Test Statistic | p-Value |
|---|---|---|
| Brightness | 0.453195 | 0.65361 |
| Ratio of Unique Colors | −0.45925 | 0.651656 |
| Threshold Black Percentage | −0.20527 | 0.83883 |
| High Brightness Percentage | −1.32712 | 0.204521 |
| Low Brightness Percentage | −0.84416 | 0.406565 |
| Corner Percentage | −0.77744 | 0.449341 |
| Edge Percentage | −0.72626 | 0.47422 |
| Face Count | 1.56614 | 0.133185 |
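The tables above report a test statistic and p-value per CV variable for each pairwise comparison. Assuming these come from independent two-sample t-tests (the tables' format is consistent with that, though the exact test is not restated here), a SciPy sketch of one such comparison looks like the following; the group sizes and values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-image values of one CV variable (e.g., Edge Percentage)
# for an exhibit curation and for the Met open-access collection sample.
exhibit_values = rng.uniform(0.0, 1.0, size=15)
collection_values = rng.uniform(0.0, 1.0, size=100)

# Welch's t-test (equal_var=False) avoids assuming equal variances,
# which matters with unequal group sizes; the paper's test may differ.
t_stat, p_value = stats.ttest_ind(exhibit_values, collection_values,
                                  equal_var=False)
print(f"test statistic = {t_stat:.6f}, p-value = {p_value:.6f}")
```

Running one test per CV variable and per comparison pair reproduces the shape of the tables above: eight variables by four pairwise comparisons.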
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
von Davier, T.Ş.; Herman, L.M.; Moruzzi, C. A Machine Walks into an Exhibit: A Technical Analysis of Art Curation. Arts 2024, 13, 138. https://doi.org/10.3390/arts13050138