Article

A Machine Walks into an Exhibit: A Technical Analysis of Art Curation

by Thomas Şerban von Davier 1,*, Laura M. Herman 2 and Caterina Moruzzi 3

1 Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK
2 Oxford Internet Institute, University of Oxford, Oxford OX1 3JS, UK
3 Edinburgh College of Art, University of Edinburgh, Edinburgh EH3 9DF, UK
* Author to whom correspondence should be addressed.
Arts 2024, 13(5), 138; https://doi.org/10.3390/arts13050138
Submission received: 29 May 2024 / Revised: 16 August 2024 / Accepted: 25 August 2024 / Published: 31 August 2024
(This article belongs to the Special Issue Artificial Intelligence and the Arts)

Abstract

Contemporary art consumption is predominantly online, driven by algorithmic recommendation systems that dictate artwork visibility. Despite not being designed for curation, these algorithms’ machinic ways of seeing play a pivotal role in shaping visual culture, influencing artistic creation, visibility, and the associated social and financial benefits. The Algorithmic Pedestal was a practice-based research project, staged as a gallery exhibit, that reported gallerygoers’ perceptions of curation by a human and curation achieved by Instagram’s algorithm. This paper presents a technical analysis of the same exhibit using computer vision code, offering insights into machines’ perception of visual art. The computer vision code assigned values on various metrics to each image, allowing statistical comparisons to identify differences between the collections of images selected by the human and by the algorithmic system. The analysis reveals statistically significant differences between the exhibited images and the broader Metropolitan Museum of Art digital collection. However, the analysis found minimal distinctions between human-curated and Instagram-curated images. This study contributes insights into the perceived value of the curation process, shedding light on how audiences perceive artworks differently from machines using computer vision.

1. Introduction

The impact of Artificial Intelligence (AI) and algorithms on the art world is rife with controversy. One side argues that these technological developments are the end of the arts as we know them, accompanied by scandalous revelations of AI images winning art contests (Greenberger (2023); Roose (2022)). On the other hand, AI equips artists and designers with tools to elevate their craft to achieve new forms of creative expression (Alaoui (2019); Caramiaux and Alaoui (2022)). At the heart of the controversy is the question of perception: what exactly do audiences think when they encounter algorithmic decision making in the arts? The fact that algorithms powering social media are deciding what art is shown to people based on proprietary, obscure metrics means that they are filling the role of curators in our daily lives.
Herman and Moruzzi set out to explore what audiences think about curation carried out by an algorithm compared to that carried out by a human (Herman and Moruzzi (2024)). Using practice-based research methods, they produced an exhibit—the Algorithmic Pedestal—which included a selection of images curated by Instagram’s algorithm and the London-based artist Fabienne Hess, taken from the Metropolitan Museum of Art’s Open Access collection. Visitors then participated in surveys, interviews, and observations (Herman and Moruzzi (2024)). Their work builds on research that has examined users’ perceptions of AI-created work where people considered work created by AI to be less creative and less interesting (Hong (2018); Köbis and Mossink (2021); Rae (2024); Ragot et al. (2020)).
Based on the apparent differences reported by the participants, we were motivated to see if computer vision also noted differences in algorithmic and human curation. Using fundamental computer vision code informed by the field of computational aesthetics (Hoenig (2005)), we conducted a technical analysis of the Algorithmic Pedestal. A computer vision analysis of an image allows an algorithm to convert the data stored in individual pixels into “handcrafted features” (Zhang et al. (2020)), meaningful, comparative metrics. As there are currently no universally accepted quantitative metrics for artistic quality or creativity (Moruzzi (2021)), our research has the opportunity to explore if the computer’s metrics align with audiences’ opinions.
Using the technical analysis of the exhibit, we explored two research questions:
  • RQ1: What differences does the computer vision code identify between the curations and the overall open-access collection?
  • RQ2: How are these findings different from the reported exhibit visitor responses?
Our results reveal a handful of statistically significant differences between the exhibit and the overall Metropolitan Museum of Art Open Access collection. Specifically, the algorithmic curation had a higher ratio of unique colors, and the human curation had a higher face count. However, the computer did not see any statistically significant differences when comparing the two curations. Our discussion of these results considers how the curation process involves more than quantifying artistic qualities. Furthermore, we argue that algorithmic curation needs a human-in-the-loop to account for the differences between audience and machine perception of artworks.
Ultimately, this work highlights how the metrics drawn from computer vision notably differ from the reported observations stated by the exhibit-goers. These differences reveal opportunities for designers, developers, and art scholars to consider how computation will partner with human perspectives in a world of growing human-AI collaboration within the arts.

2. Materials and Methods

The research outlined within this paper reflects on the findings of previous work on the impact of algorithmic curation. The Algorithmic Pedestal was a 2023 exhibit-based research project in London focused on comparing human and machine curation (Herman and Moruzzi (2024)). We use the definition of curate as outlined in the original paper: to identify, select, and display (taken from Graham and Cook (2015)) the contents of an exhibit. The exhibit centered around a single sheet of fabric printed with the images selected by the artist on one side and those selected by Instagram on the other. All the images came from the Metropolitan Museum of Art’s Open Access collection, with each side holding 15–16 images. The artist, Fabienne Hess, was free to curate and arrange her images as she saw fit, whereas the algorithmic side was displayed in the order in which Instagram presented the images to a specific account.
The researchers conducted a qualitative user study of the visitors who attended the exhibit space over seven days. The qualitative study involved interviews, surveys, and ethnographic observations, in line with other research that has explored audiences’ perceptions of recommendations made by AI (Castelo et al. (2019); Clerwall (2014); Dietvorst et al. (2015); Longoni et al. (2019); Zhang et al. (2021)). In general, human audiences combine folk theories about the power of algorithms with a general dislike of anything attempting to replace people. In the qualitative study, Herman and Moruzzi (2024) highlight how some participants expressed relief when the side of the fabric they preferred turned out to be the one curated by the human artist. Others found one side too straightforward, which aligned with their preconceptions of algorithmic behavior. The previous work thus presents what people believe goes into algorithmic curation and how it differs from curation carried out by human art experts for human audiences. This leaves a gap: how would a machine perceive an art exhibit curated by a human artist and by another machine?
The differences between machine and human perception have been under consideration for years (Haken (1991); Vitulano et al. (2005); Zylinska (2023)). Many use cases in medical and cognitive sciences have sought to compare and implement machine perception alongside human perception. In medical science, researchers saw that humans take in higher-level information from their medical training to inform their observations while machines look for patterns and deviations in the input data (Makino et al. (2022)). The researchers would then compare the performance of human and machine subjects on a perception task that often measures accuracy or correctness.
In cognitive science, researchers argue that humans perceive auditory and visual stimuli differently from machines (Lepori and Firestone (2022)). Their experiments involved a machine and a human encountering a stimulus and transcribing or labeling the signal. The idea of comparing perception boils down to comparing the accuracy of the two observers. Researchers have pushed back on this experimental setup, arguing that there is a risk of confounding understanding with recognition (Funke et al. (2021)). Other factors, especially on the human side, like bias and higher-level processing, make it an uneven comparison. Therefore, these researchers call for work that captures the more nuanced differences between how machines and humans perceive information. They argue that it is more important to understand how exactly the two groups differ in their perceptions, a task this paper explores.
In this study, the research addresses machine perception on its own. Building on previous research, we sought to gain perspectives on the machine’s observed similarities and differences between human and algorithmic curation. In contrast to human perception, this paper presents the metrics derived from applying computer vision to the exhibit pieces. This approach is comparable to art world projects that explored the interpretation of art through computers as exhibit pieces and exhibit critiques (Stack (2019); Villaespesa and Murphy (2021)). We build on this by revealing the fundamental metric differences revealed by computer vision code processing artistic images.
This paper presents a technical analysis of the Algorithmic Pedestal exhibit using baseline, replicable, open-source computer vision code to process and compare the images within the exhibit. While the past few years have seen massive improvements in computer vision, the field is not young; it dates back to the 1960s and 1970s. Please see Szeliski’s textbook for a thorough background on computer vision and its possible applications (Szeliski (2022)).
Computer vision advancement went hand-in-hand with the advancement of computer graphics. In the 1960s and 1970s, hardware and computational resource limitations hindered progress. Nonetheless, the early findings outlined the edge detection process, a concept drawn from human visual perception; detecting an edge is often the starting point for larger-shape recognition and depth perception (Canny (1986)). The 1980s and 1990s continued these advancements, and computer scientists could represent 3D figures using multiple images layered on one another. In 2001, the Viola–Jones facial recognition paper marked a massive step forward for the field (Viola and Jones (2001)). Since then, computational power and resources have made it possible for advanced systems using convolutional neural networks (CNNs) to excel at image recognition and other tasks (Krizhevsky et al. (2012); Liu et al. (2018)). As Szeliski (2022) notes, these advancements are tied to a specific problem or task that researchers set out to solve, often requiring some degree of ground truth (i.e., the image either contains a dog or it does not).
As we shift focus to the arts, testing for a specific ground truth or task becomes arguably more challenging. Researchers have set forth the field of computational aesthetics as one potential use case for computer vision in the arts (Hoenig (2005)). This field allows researchers to create specific computational systems that consider aesthetic theories devised in art history and apply them using an algorithm, for example, overlaying the golden ratio onto an image (Bo et al. (2018)). In addition to these higher-order interpretations of images and aesthetics, Bo et al. state that the field also applies some of the early computer vision methods of edge detection, contrast, and color identification (Bo et al. (2018)). As a result, these researchers can develop specific systems that review art pieces on these various metrics.
Within this paper, our technical analysis of the art gallery draws on background work in general computer vision and computational aesthetics. Using a set of computer vision software assembled by Hosny et al. (2014), we analyzed the pieces from the Metropolitan Museum of Art’s Open Access collection and the pieces selected for the exhibit. The computer vision code was released as open source and started as an MIT class project later picked up by Artnome, a digital collection of articles focused on understanding the role of data in the art world (Bailey (2017)). Based on this public acceptance among art data publications and the legitimate computer vision methods contained within the codebase, such as edge detection, contrast, and face detection, it was deemed appropriate for this project. The original code needed some adjustments to account for out-of-date code libraries. Since the images were open-access images from a public digital collection and the curation was carried out using established research methods, they offer a controlled sample on which the code could be run.
Nine metrics were computed to establish the metadata for our statistical analysis and comparative study (Table 1). Like other computer vision software popularized by the field of computational aesthetics (see Joshi et al. (2011); Zhang et al. (2020) for a detailed overview), this approach involved breaking down each image file into the data contained in its individual pixels. This breakdown of pixel data for feature extraction is what Zhang et al. (2020) refer to as extracting “handcrafted features” for image analysis. Through the analysis of the individual pixels, metrics for the overall image could be built from averages and ratios of the pixel data. Once each image had its data points assigned, these were placed into labeled data frames for comparison. In addition to capturing the metrics, we also wrote code to visualize the metric breakdown, offering insight into the process of pixel analysis. Section 3 provides some of these visuals.
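To make this pixel-level feature extraction concrete, the sketch below computes a few of the handcrafted metrics listed in Table 1 (brightness, ratio of unique colors, threshold black percentage, and the high/low brightness percentages) for a single image file. It is a minimal reconstruction assuming OpenCV and NumPy, not the exact code released by Hosny et al. (2014), and the file name is a hypothetical placeholder.

```python
import cv2
import numpy as np

def handcrafted_features(path, threshold=127):
    """Compute several of the pixel-level metrics from Table 1 for one image file."""
    img = cv2.imread(path)                        # BGR pixel array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Brightness: average greyscale pixel value.
    brightness = float(gray.mean())

    # Ratio of unique colors: unique BGR triplets over total pixel count.
    pixels = img.reshape(-1, 3)
    unique_ratio = np.unique(pixels, axis=0).shape[0] / pixels.shape[0]

    # Threshold black percentage: share of pixels falling below the fixed threshold.
    black_pct = float((gray < threshold).mean())

    # High/low brightness percentages: pixels far above or below the image average.
    high_pct = float((gray > 2 * brightness).mean())
    low_pct = float((gray < 0.5 * brightness).mean())

    return {
        "brightness": brightness,
        "unique_color_ratio": unique_ratio,
        "threshold_black_pct": black_pct,
        "high_brightness_pct": high_pct,
        "low_brightness_pct": low_pct,
    }

print(handcrafted_features("met_piece.jpg"))  # hypothetical file name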

Data Analysis

All of Table 1’s metrics (aside from Dominant Color) are quantitative variables associated with each image file via a comprehensive pandas data frame (McKinney (2010)). To understand the potential quantitative differences between images of various groupings and selections, we visualized the metrics to check for an approximately normal distribution among the samples taken for the exhibit from the overall sample of images from the Metropolitan Museum of Art’s Open Access collection (from hereon, “the Met collection”). With approximate normality established, we ran a series of independent t-tests to ascertain whether the mean values of the various samples differed significantly from those of the original sample taken from the Met collection and uploaded to Instagram. With these t-tests, we compared all metrics under a variety of conditions:
  • All exhibit pieces and the Met collection sample;
  • Instagram-selected exhibit pieces and the Met collection sample;
  • Human (artist)-selected exhibit pieces and the Met collection sample;
  • Instagram-selected exhibit pieces and the Human (artist)-selected exhibit pieces.
These comparisons reflect the conditions that the gallerygoers also observed. Information about the human- and Instagram-selected pieces was readily available within the physical gallery space through floor labels, and a tablet on display allowed visitors to explore the overall Met collection from which the pieces were sampled (Herman and Moruzzi (2024)). Therefore, when attempting to understand how a machine would view the selected pieces, the statistical comparison tests replicated these same comparisons.
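The following sketch shows how one such independent t-test could be run over the metric data frames using pandas and SciPy. The file and column names are illustrative assumptions rather than the exact names used in the supplementary notebooks.

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names; the supplementary CSVs use their own naming.
exhibit = pd.read_csv("exhibit_metrics.csv")         # one row per exhibited image
met_sample = pd.read_csv("met_sample_metrics.csv")   # sampled Met collection images

metrics = ["brightness", "unique_color_ratio", "threshold_black_pct",
           "high_brightness_pct", "low_brightness_pct",
           "corner_pct", "edge_pct", "face_count"]

results = []
for metric in metrics:
    # Independent two-sample t-test between the exhibit pieces and the Met sample.
    t_stat, p_value = stats.ttest_ind(exhibit[metric], met_sample[metric])
    results.append({"metric": metric, "t": t_stat, "p": p_value})

print(pd.DataFrame(results).round(4))
```

The same loop can be repeated for each pair of conditions listed above by swapping the two data frames.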

3. Results

Applying the collection of fundamental computer vision scripts to the images of the exhibit and the larger sample of pieces from the Met collection allowed us to provide quantitative insights into the curatorial selections made by both the human and the machine. The adapted scripts from Hosny et al. (2014) produced visual examples of the computer vision process and established datasets for statistical comparison. Using the visual outputs, we can see the computer vision’s interpretation of each set of artworks, giving us insights to answer RQ1.
As the literature discussed in Section 2 has shown, computer vision works by breaking down the individual data pieces stored within an image file into usable metrics that can be processed algorithmically in other scripts. The visuals show that these basic scripts are particularly accurate at identifying pixel-level details such as edges and corners (Figure 1)1. However, the scripts struggle with more complicated aggregate tasks, specifically facial recognition (Figure 2).
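The corner and edge visuals in Figure 1 can be reproduced, at least in spirit, with the standard OpenCV implementations of Harris corner detection and Canny edge detection. The sketch below is an assumed reconstruction rather than the exhibit’s exact script; the thresholds and file names are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("japanese_print.jpg")            # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Harris corner detection (Harris and Stephens 1988): mark strong corner responses.
corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
corner_mask = corners > 0.01 * corners.max()
corner_pct = corner_mask.mean()                   # "Corner Percentage" metric

corner_vis = img.copy()
corner_vis[corner_mask] = (0, 0, 255)             # paint detected corners red

# Canny edge detection (Canny 1986): binary edge map, 255 where an edge is found.
edges = cv2.Canny(gray, 100, 200)
edge_pct = (edges > 0).mean()                     # "Edge Percentage" metric

cv2.imwrite("corners_vis.png", corner_vis)
cv2.imwrite("edges_vis.png", edges)
print(f"corner %: {corner_pct:.4f}, edge %: {edge_pct:.4f}")
```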
Faces. Haar cascade facial recognition is a widely accepted and available tool offered through OpenCV. Nonetheless, even the original paper introducing the concept highlights a false positive percentage of 40% (Viola and Jones (2001)), leading to entertaining measurements such as a tapestry with a face count of 4 (Figure 2)2. This image gives us one prime example of how human and machine perceptions can differ, leading to separate interpretations of a piece. Previous research has shown that images with faces tend to receive more engagement on Instagram (Bakhshi et al. (2014)), leading to a folk theory (Karizat et al. (2021)) that Instagram favors images with faces. It remains a folk theory, as Instagram’s algorithm is proprietary. However, our findings on the usage of fundamental computer vision software provide some insight into how and why certain images were selected. The computational analysis enabled us to “see like an algorithm” (Uliasz (2021)), thereby elucidating its decision-making process.
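For reference, a minimal face-count sketch using OpenCV’s bundled Haar cascade is shown below. The detection parameters and file name are assumptions, and, as the tapestry example shows, the count it returns should be read as the machine’s perception rather than ground truth.

```python
import cv2

# The same frontal-face cascade file listed in the Supplementary Materials ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("carpet_fragment.jpg")           # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one bounding box per detected "face"; parameters are illustrative.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
face_count = len(faces)

# Draw red rectangles of the kind shown in Figure 2 around every detection,
# whether it is a true face or a false positive triggered by the pattern.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("faces_vis.png", img)
print("face count:", face_count)
```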
Context. Additionally, context can be quite essential for the interpretation of a piece. See Figure 3, a sofa positioned against a blank background3. Many of us would describe the sofa as blue, white, and perhaps quite ornate, but when the computer script calculated the dominant color, it returned grey. Unlike us, the script considers the whole image (including the blank grey-white background) when processing it. Unless we invoke a more complex masking script to ignore the background and isolate the item at the focal point of the image, the computer will process all of the pixels and data equally, leading to yet another interpretation that is quite different from that of human visitors. This finding may offer insights for artists and curators when digitizing a physical work. The resulting image file will include the background and framing of the piece; therefore, unless the computer vision processing removes the noisy data, it will process all of the background information.
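A dominant-color computation of the kind described in Table 1 can be sketched with k-means clustering over all pixels. This is an assumed reconstruction with an illustrative cluster count, and it shows why the grey-white background can dominate the sofa image: every background pixel is counted on equal footing with the upholstery.

```python
import cv2
import numpy as np

img = cv2.imread("sofa.jpg")                          # hypothetical file name
pixels = np.float32(img.reshape(-1, 3))               # every pixel, background included

# Cluster all pixels into k candidate colors (k is a pre-defined choice, here 5).
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 5, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# The dominant color is the center of the largest cluster; on an image like Figure 3
# this is likely the grey-white background rather than the blue upholstery.
counts = np.bincount(labels.flatten())
b, g, r = centers[np.argmax(counts)].astype(int)
print(f"dominant color: #{r:02x}{g:02x}{b:02x} (RGB {r}, {g}, {b})")
```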

Statistical Analysis

Once the data for each variable was processed, we were able to run comparative statistics between the different groups of images. We compared the metrics of the human-selected, Instagram-selected, total-selected, and overall Met samples against each other to see if there were any statistically significant differences between the groups.
Beginning with the statistically significant results, we found that the Instagram-selected pieces had a higher ratio of unique colors than the overall sample of Met collection pieces used for the study (Table 2; t(1) = 2.307, p = 0.0334). The presence of a higher number of unique colors reflects visitors’ and researchers’ (MacDowall and Budge (2021)) expectations of Instagram’s bias for bright colors. Similarly, these results suggest that artists and curators aiming to achieve a modern or social media-inspired experience could employ bright and unique color combinations. On the other hand, an artist may receive negative reviews labeling them as clout chasers if their work has color ratios similar to art seen on social media. Audiences’ interpretation of this metric could be either beneficial or detrimental based on their impression of social media and perspective on the artwork.
Simultaneously, the human-selected pieces had a higher number of faces than the overall sample of the Met collection (Table 3; t(1) = 5.238, p = 2.835 × 10⁻⁵). The presence of a significant number of faces on the human-curated side also speaks to researchers’ expectations of human and algorithmic processing. Previous research argues that images with faces receive higher engagement on Instagram (Bakhshi et al. (2014)). For some, these findings hint at Instagram’s bias for faces. In contrast, for others, it suggests a psychological human bias for images that contain faces, a subject familiar and comfortable to us. We are likely seeing the result of a feedback loop where human engagement with faces drives the algorithm to show more images with faces. Encountering this loop leads to the folk theory that success on social media requires the usage of faces (Karizat et al. (2021)). As artists and curators consider how they present themselves and their collections, they can capitalize on the power of faces. In this case, the self-fulfilling loop of humans and algorithms responding positively to faces can make highlighting portraits a winning strategy for attracting audiences interested in exhibits. In the case of the Algorithmic Pedestal, it appears that the artist chose more pieces with faces. As a result, our work may provide early evidence of what came first: people liking faces or algorithms promoting faces.
Finally, we found that when grouped together, all of the pieces in the exhibit (both human- and Instagram-selected) contained more unique colors and more edges than the overall sample from the Met collection (Table 4 t(1) = −2.164, p = 0.0378 and t(1) = −2.287, p = 0.0286, respectively). The other computational metrics comparing the image sets were not statistically significant.
When comparing the human-selected and Instagram-selected pieces (Table 5), the lack of statistically significant differences offers an intriguing insight into the key differences between human and machine perception of the art exhibit. According to the computational data, the pieces selected in both conditions were comparable. Nonetheless, human participants noted meaningful differences between the two bodies of images; we summarize the human observations here but see additional work by Herman and Moruzzi (under review) for more information. Specifically, the human participants reported that human-selected pieces were more aesthetically interesting to look at; they found these images to be more complex and engaging. On the other hand, the algorithmically selected images were deemed straightforward and easy to process. Regarding similarities, the gallerygoers were interested in the same type of information from both the human and algorithmic sides; they wanted to understand “why” the selected images were selected. The gallerygoers were as interested in the artist’s thought process as in which aspects of the recommendation algorithm led to the chosen images.
In this way, our results indicate that the critical difference between computational and human perceptions lies in the disparity between top-down and bottom-up processing. Human perception focuses on top-down meaning-based considerations (like the story or high-level aesthetics). At the same time, the machine necessarily operates through low-level perceptual input such as pixel data and engagement metrics. The software’s inability to detect high-level differences in meaning or aesthetics, which are the focus of human users, demonstrates the disparity between user expectations and machine capabilities for art recommendations.

4. Discussion

Our study provides an overview of the statistically notable findings obtained by analyzing the Algorithmic Pedestal exhibit. By systematically measuring various metrics across each subdivision, we were able to present comparative statistics that reveal distinctions separating the pieces in the exhibit from the overall pieces of the digital collection. Notably, the human-curated side exhibited a higher incidence of faces detected using the Viola–Jones Haar cascade facial detection software. As indicated in the results, it is important to note that the system’s recording of a face within the image was not always accurate (refer to Figure 2). These variations could be attributed to potential false positives associated with the open-source Haar cascade facial detection code, especially when applied to abstract or fragmented images.
Similarly, the Instagram-curated images showcased a significantly greater ratio of unique colors within the presented pieces than the overall Met Collection. This finding aligns with prior evidence suggesting that Instagram is inclined to prioritize bright and multi-colored images (MacDowall and Budge (2021)). However, when comparing the two sides of the exhibit, the machine failed to discern any statistically significant differences, contrasting with qualitative human responses. Our results indicate that while the computer vision software successfully identified differences in specific metrics rooted in the history of computer vision and computational aesthetics, these metrics may be insufficient to detect other differences noticed or prioritized by human participants. This disparity underscores a disconnect between human and machine perceptions of the same curated images.

4.1. Compared to Previous Research

In this research, we demonstrate how humans and machines perceive artistic curation differently, thereby building on established research that outlines the differences in human and machine perception and the risks of assuming that they are either completely the same or wholly different (Borowski et al. (2019); Shamir et al. (2016)).
A notable disparity emerges when comparing the computational analysis results to human observations from the gallery experience. Previous qualitative findings highlighted users’ views on the differences between human and machine outputs (Gao et al. (2023); Köbis and Mossink (2021); Ragot et al. (2020)). The human work was anecdotally more abstract, holistic, contextual, and emotionally resonant, whereas the algorithmic work was more object-oriented, recognizable, and individualistic. However, the computational analysis did not discern these same differences, prompting questions about the limitations of computational perception. As outlined in the brief history of computer vision and computational aesthetics in Section 2, machine perception focuses on objective data measures when processing an image. Therefore, even as society applies machines in artistic contexts such as Instagram or museum curation, computational methods cannot measure contextual meaning and emotionality, which human audiences prioritize.
Alternatively, it is plausible that biases influence human perceptions, such as those introduced by floor labels, leading to exaggerated differences between the two sides. Rae recently highlighted how labeling work, by human or algorithm, might lead to negative perceptions of the work by audiences (Rae (2024)). Similar claims come from other research stating that humans consider context and background knowledge when forming their ideas of the world (Makino et al. (2022)). This research draws inspiration from Piaget’s psychological theory of constructivism in learning (Fosnot and Perry (2005)). Such theories have informed museum practitioners on how better to serve audiences in different age groups and life experiences (Jensen (1999)). Social media has also attempted to personalize experiences based on segmenting audiences, but this has made some audiences feel targeted or excluded (Haimson et al. (2021)). This form of processing injects ideas and presumptions that alter the perception of the experience, potentially leading to the participants reporting the differences they identified. This juxtaposition raises fundamental questions about the nature of truth in perception—is it shaped by human interpretation or revealed by machines? Likely, the answer lies in a combination of both perspectives.
This discussion resonates with previous research comparing human perception to machine perception, highlighting how humans employ top-down and bottom-up processes to interpret information. In contrast, the metrics with which machines are programmed form inherent limitations. Beyond the distinctions between the two sides of the exhibit, gallerygoers shared common questions about the overall exhibition, irrespective of whether they viewed the artist-curated or Instagram-curated works, bridging the inquiry to the next section of this discussion.

4.2. Human-in-the-Loop

The exploration of art perception dynamics reveals that individuals are often more captivated by the narrative and presentation of a piece of art than by its inherent distinguishing features (Bullot and Reber (2013)). Unlike tasks in medical imaging, where specific objectives can be clearly defined, engaging with art in a gallery setting is multifaceted and subjective. Consequently, gallerygoers frequently ponder meta-level questions regarding the motivation behind the art and the selection processes employed by platforms like Instagram. These inquiries are equally significant to the audience as the individual observations of the artworks themselves. In contrast, a machine analyzing handcrafted pixel features cannot contemplate such contextual elements. Therefore, an opportunity exists to enhance computational systems that process gallery data by incorporating appraisal data, curatorial insights, and advancing computational aesthetic processing.
Appraisal data inherently provide a machine with information about the story and history of an artwork through details about its provenance and condition (Von Davier (2023)). Similarly, curatorial data have been increasingly digitized since the COVID-19 pandemic, enabling museums and galleries to make information accessible outside of the physical space (Steward (2015)). These data provide insights into the art selection process for certain special exhibits and how human curators organized them. Finally, computational aesthetics is a growing field (Manovich (2017, 2021); Manovich and Arielli (2023)). As the resources become available within the digital humanities, new research and cultural system applications arise, combining machine processes with human artistic knowledge (Mochocki (2021)). As we look toward the technology rapidly becoming available today, future work can explore the potential for multimodal systems to combine these data types to better implement computational art processes. An example of a multimodal approach would be to combine visual systems with language models into the aptly named vision-language models (VLMs). These systems are pre-trained by associating images with existing text labels; downstream, they use this corpus of labeled images for additional, more accurate image processing (Zhang et al. (2024)). By integrating these data formats, there is the potential to build on the research of this paper and develop it further into a tool that can elevate and expand the gallery experience.
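To give a sense of what such a multimodal system looks like in practice, the sketch below uses an off-the-shelf vision-language model (CLIP, via the Hugging Face transformers library) to score an image against a handful of curatorial descriptions. The model choice and the candidate labels are our assumptions, intended only to illustrate how text and pixels can be combined, not a component of the analysis reported above.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf vision-language model; any image-text pre-trained model would do.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("met_piece.jpg")                  # hypothetical file name
labels = [                                           # illustrative curatorial descriptions
    "an ornate historical portrait with human faces",
    "an abstract, fragmented textile pattern",
    "a brightly colored decorative object on a plain background",
]

# CLIP embeds the image and each description in a shared space and scores their similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{p:.2f}  {label}")
```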
While research and cultural systems can be expanded to interpret galleries better, it is crucial to question how images are recommended and selected when the systems disregard the human perspective and rely solely on measurable metrics. In doing so, we risk overlooking the rich motivations and narratives that inform the presentation of art, reducing it to a mere list of parameters devoid of historical or curatorial context. Therefore, we suggest empowering art viewers by enabling them to provide direct input into the algorithmic experience. We recommend a human-in-the-loop (Binns (2022); Mosqueira-Rey et al. (2023); Zanzotto (2019)) approach for all algorithmic systems that enable art viewing. As social media systems integrate VLMs that can process visual and textual data, they must consider the human experience and perspective on art. Such an approach would empower users and improve the platform’s experience and interaction potential (Alvarado and Waern (2018)). In this way, users would be able to sort artworks according to their interests, tastes, and considerations, which is essential in the subjective and personal experience of art viewing. It is worth underscoring that we suggest a human-in-the-loop rather than a human-on-the-loop approach: the human user should not merely monitor the algorithm’s process but should be able to feed their own experience back into it. Our call to action challenges the established system of “information gatekeepers” (Metoyer-Duran (1993)), where users passively receive content based on obscure metrics. This paper aims to share how some of those metrics work in the context of art and how they only partially satisfy human expectations for art experiences.
As collaboration between humans and algorithms becomes a reality, we argue for active, beneficial partnerships rather than one-sided content pipelines. This paper provides a practical example of how a machine can perceive a gallery as wholly similar, even though the reported audience reactions capture notable differences that greatly inform their appreciation of the work. Understanding these different perceptions and how they relate can aid in evolving the art of curation in the modern era.

4.3. Limitations

The work presented in this paper compares quantitative analyses to previously reported qualitative user responses. Even with an open-source quantitative approach, certain limitations must be addressed. For the data collected, we must note the potential limitations or biases involved in developing the presented findings. Of the 490,000 pieces in the Metropolitan Museum of Art’s complete Open Access collection, the Algorithmic Pedestal selected only 1204, with the exhibit curating 33 pieces. While there were enough pieces for some basic statistical comparisons, this work does not apply computer vision analyses to a complete array of artworks. As future collections and exhibits apply similar approaches, the metrics and statistical analysis may reveal more insight into the similarities and differences between the various curated pieces.
As for the quantitative analysis, it was intentionally straightforward and focused on handcrafted feature analysis. Such analyses are less computationally intensive than some of the more advanced graph neural network (GNN) analyses conducted in the computational aesthetics literature (Joshi et al. (2011); Zhang et al. (2020)). More purpose-built image classifiers or art-focused tools (like VLMs) would result in different measurements and observations. Nonetheless, these foundational scripts form a baseline from which comparisons can be made with the machine’s measurements when processing an image.

5. Conclusions

This paper presents the findings of a technical analysis of an exhibit-based research project that compared human curation with algorithmic curation. This paper illustrates how baseline computer vision software processes art images, revealing similarities and differences between the pieces selected by the curator and by the algorithm. Ultimately, by comparing the metrics with the previously reported human observations, we identified apparent differences between the information that human audiences consider valuable and the metrics that the computer vision software considers statistically significant. In presenting these differences, we urge computer science and art researchers to deeply consider the ever-evolving relationship between computation and the arts, driven bidirectionally by objective metrics and subjective perceptions. In particular, we encourage technologists and researchers to consider updating computational systems for the arts to account for human users’ perceptions rather than relying on current computer vision protocols, which result in notably different decisions from those of human users.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/arts13050138/s1: exhibit_w_cv copy.csv, contains the calculated metrics for the exhibit pieces; insta_files_filled copy.csv, contains the calculated metrics for the Met collection pieces; Data_Vis.ipynb, contains the data visualization code; Streamline_File_Analysis.ipynb, contains the data analysis code, streamlined; Technical_Analysis_of_Art.ipynb, contains the data analysis code, original draft; haarcascade_frontalface_default.xml, basis for the facial recognition code and can be downloaded from OpenCV.

Author Contributions

Conceptualization, T.Ş.v.D., L.M.H., and C.M.; data curation, L.M.H., and C.M.; formal analysis, T.Ş.v.D., L.M.H., and C.M.; investigation, T.Ş.v.D.; methodology, T.Ş.v.D.; project administration, L.M.H.; resources, T.Ş.v.D.; software, T.Ş.v.D.; supervision, C.M.; validation, T.Ş.v.D.; visualization, T.Ş.v.D.; writing—original draft, T.Ş.v.D.; writing—review and editing, T.Ş.v.D., L.M.H., and C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research did not involve humans or animals and therefore required no additional ethics review.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to acknowledge the many people and institutions involved in researching the exhibit: the artist Fabienne Hess, the website designer Weezy Dai, the exhibit designers Parasite 2.0, the J/M Gallery (Joanna and Marcus in particular), the media and communications manager of the Oxford Internet Institute Sara Spinks, Kathryn Eccles for her invaluable mentoring and support, and Lev Manovich for the insightful conversations. We also want to thank the HCAI Lab within the Department of Computer Science for their support, along with the mentorship of Max Van Kleek and Nigel Shadbolt.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI   Artificial Intelligence
CV   Computer Vision

Notes

1. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Visit to a Shrine at the Hour of the Ox (Ushi no toki mairi),” 1765.
2. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Fragment of a Red-Ground Harshang Carpet,” early 19th century.
3. Image retrieved from the Metropolitan Museum of Art’s Open Access collection via Creative Commons licensing. The piece is “Sofa (part of a set),” circa 1835.

References

  1. Alaoui, Sarah Fdili. 2019. Making an interactive dance piece: Tensions in integrating technology in art. Presented at the DIS 2019—The 2019 ACM Designing Interactive Systems Conference, San Francisco, CA, USA, June 23–28; pp. 1195–208. [Google Scholar] [CrossRef]
  2. Alvarado, Oscar, and Annika Waern. 2018. Towards Algorithmic Experience: Initial Efforts for Social Media Contexts. Presented at the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21–26. [Google Scholar] [CrossRef]
  3. Bailey, Jason. 2017. Machine Learning for Art Valuation. An Interview with Ahmed Hosny. Artnome, December 9. [Google Scholar]
  4. Bakhshi, Saeideh, David A. Shamma, and Eric Gilbert. 2014. Faces engage us: Photos with faces attract more likes and comments on instagram. Presented at the SIGCHI Conference on Human Factors in Computing Systems—CHI’14, Toronto, ON, Canada, April 26; New York: Association for Computing Machinery, pp. 965–74. [Google Scholar] [CrossRef]
  5. Binns, Reuben. 2022. Human judgment in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance 16: 197–211. [Google Scholar] [CrossRef]
  6. Bo, Yihang, Jinhui Yu, and Kang Zhang. 2018. Computational aesthetics and applications. Visual Computing for Industry, Biomedicine, and Art 1: 6. [Google Scholar] [CrossRef]
  7. Borowski, Judy, Christina M. Funke, Karolina Stosio, Wieland Brendel, T. Wallis, and Matthias Bethge. 2019. The notorious difficulty of comparing human and machine perception. Presented at the 2019 Conference on Cognitive Computational Neuroscience, Berlin, Germany, September 13–16; pp. 642–646. [Google Scholar]
  8. Bullot, Nicolas J., and Rolf Reber. 2013. The Artful mind meets art history: Toward a psycho-historical framework for the science of art appreciation. Behavioral and Brain Sciences 36: 123–37. [Google Scholar] [CrossRef]
  9. Canny, John. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8: 679–98. [Google Scholar] [CrossRef]
  10. Caramiaux, Baptiste, and Sarah Fdili Alaoui. 2022. “Explorers of unknown planets”: Practices and politics of artificial intelligence in visual arts. Proceedings of the ACM on Human–Computer Interaction 6: 1–24. [Google Scholar] [CrossRef]
  11. Castelo, Noah, Maarten W. Bos, and Donald R. Lehmann. 2019. Task-dependent algorithm aversion. Journal of Marketing Research 56: 809–25. [Google Scholar] [CrossRef]
  12. Clerwall, Christer. 2014. Enter the robot journalist. Journalism Practice 8: 519–31. [Google Scholar] [CrossRef]
  13. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144: 114–26. [Google Scholar] [CrossRef]
  14. Fosnot, Catherine Twomey, and Randall Stewart Perry. 2005. Constructivism: A Psychological Theory of Learning. New York: Teacher College. [Google Scholar]
  15. Funke, Christina M., Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S.A. Wallis, and Matthias Bethge. 2021. Five points to check when comparing visual perception in humans and machines. Journal of Vision 21: 1–23. [Google Scholar] [CrossRef]
  16. Gao, Catherine A., Frederick M. Howard, Nikolay S. Markov, Emma C. Dyer, Siddhi Ramesh, Yuan Luo, and Alexander T. Pearson. 2023. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. npj Digital Medicine 6: 1–5. [Google Scholar] [CrossRef]
  17. Graham, Beryl, and Sarah Cook. 2015. Rethinking Curating. Cambridge: The MIT Press. [Google Scholar]
  18. Greenberger, Alex. 2023. Artist wins photography contest after submitting AI-generated image. ARTnews, April 17. [Google Scholar]
  19. Haimson, Oliver L., Daniel Delmonaco, Andrea Wegner, and Peipei Nie. 2021. Disproportionate removals and differing content moderation experiences for conservative, transgender, and black social media users: Marginalization and moderation gray areas. Proceedings of the ACM on Human–Computer Interaction 5: 1–35. [Google Scholar] [CrossRef]
  20. Haken, Hermann. 1991. Comparisons Between Human Perception and Machine “Perception”. Berlin/Heidelberg: Springer, vol. 50, pp. 133–48. [Google Scholar] [CrossRef]
  21. Harris, Chris, and Mike Stephens. 1988. A Combined Corner and Edge Detector. Presented at the 4th Alvey Vision Conference, Manchester, UK, August 31–September 2; pp. 147–51. [Google Scholar]
  22. Herman, Laura M., and Caterina Moruzzi. 2024. The algorithmic pedestal: A practice-based study of algorithmic & artistic curation. Leonardo, 485–92. [Google Scholar] [CrossRef]
  23. Hoenig, Florian. 2005. Defining Computational Aesthetics. Computational Aesthetics in Graphics, Visualization and Imaging 2005: 13–18. [Google Scholar]
  24. Hong, Joo-Wha. 2018. Bias in Perception of Art Produced by Artificial Intelligence. Cham: Springer International Publishing, pp. 290–303. [Google Scholar]
  25. Hosny, Ahmed, Jili Huang, and Yingyi Wang. 2014. The Green Canvas. Github. Available online: https://github.com/ahmedhosny/theGreenCanvas (accessed on 15 October 2021).
  26. Jensen, Nina. 1999. Children, Teenagers and Adults in Museums: A Developmental Perspective, 2nd ed. London: Routledge, pp. 110–7. [Google Scholar]
  27. Joshi, Dhiraj, Ritendra Datta, Elena Fedorovskaya, Quang Tuan Luong, James Z. Wang, Jia Li, and Jiebo Luo. 2011. Aesthetics and emotions in images. IEEE Signal Processing Magazine 28: 94–115. [Google Scholar] [CrossRef]
  28. Karizat, Nadia, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic folk theories and identity: How tiktok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human–Computer Interaction 5: 1–44. [Google Scholar] [CrossRef]
  29. Köbis, Nils, and Luca D. Mossink. 2021. Artificial intelligence versus maya angelou: Experimental evidence that people cannot differentiate ai-generated from human-written poetry. Computers in Human Behavior 114: 106553. [Google Scholar] [CrossRef]
  30. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems. Edited by Fernando Pereira, Christopher Burges, Leon Bottou and Kilian Weinberger. Red Hook: Curran Associates, Inc., vol. 25. [Google Scholar]
  31. Lepori, Michael A., and Chaz Firestone. 2022. Can you hear me now? Sensitive comparisons of human and machine perception. Cognitive Science 46: e13191. [Google Scholar] [CrossRef]
  32. Liu, Yaqi, Qingxiao Guan, Xianfeng Zhao, and Yun Cao. 2018. Image forgery localization based on multi-scale convolutional neural networks. Presented at the 6th ACM Workshop on Information Hiding and Multimedia Security—IH&MMSec’18, Innsbruck, Austria, June 20–22; New York: Association for Computing Machinery, pp. 85–90. [Google Scholar] [CrossRef]
  33. Longoni, Chiara, Andrea Bonezzi, and Carey K. Morewedge. 2019. Resistance to medical artificial intelligence. Journal of Consumer Research 46: 629–50. [Google Scholar] [CrossRef]
  34. MacDowall, Lachlan, and Kylie Budge. 2021. Art after Instagram: Art Spaces, Audiences, Aesthetics. London: Routledge. [Google Scholar]
  35. Makino, Taro, Stanisław Jastrzębski, Witold Oleszkiewicz, Celin Chacko, Robin Ehrenpreis, Naziya Samreen, Chloe Chhor, Eric Kim, Jiyon Lee, Kristine Pysarenko, and et al. 2022. Differences between human and machine perception in medical diagnosis. Scientific Reports 12: 6877. [Google Scholar] [CrossRef]
  36. Manovich, Lev. 2017. Instagram and Contemporary Image. manovich.net. Available online: https://manovich.net/index.php/projects/instagram-and-contemporary-image (accessed on 17 May 2023).
  37. Manovich, Lev. 2021. Computer vision, human senses, and language of art. AI and Society 36: 1145–52. [Google Scholar] [CrossRef]
  38. Manovich, Lev, and Emanuele Arielli. 2023. Artificial Aesthetics: A Critical Guide to AI, Media and Design. manovich.net. Available online: https://manovich.net/index.php/projects/artificial-aesthetics (accessed on 17 May 2023).
  39. Metoyer-Duran, Cheryl. 1993. Information gatekeepers. Annual Review of Information Science and Technology (ARIST) 28: 111–50. [Google Scholar]
  40. Mochocki, Michał. 2021. Heritage sites and video games: Questions of authenticity and immersion. Games and Culture 16: 951–77. [Google Scholar] [CrossRef]
  41. Moruzzi, Caterina. 2021. Measuring creativity: An account of natural and artificial creativity. European Journal for Philosophy of Science 11: 1. [Google Scholar] [CrossRef]
  42. Mosqueira-Rey, Eduardo, Elena Hernández-Pereira, David Alonso-Ríos, José Bobes-Bascarán, and Ángel Fernández-Leal. 2023. Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review 56: 3005–54. [Google Scholar] [CrossRef]
  43. Rae, Irene. 2024. The effects of perceived AI use on content perceptions. Presented at the CHI’ 24: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, May 11–16; pp. 1–14. [Google Scholar] [CrossRef]
  44. Ragot, Martin, Nicolas Martin, and Salomé Cojean. 2020. AI-generated vs. human artworks. a perception bias towards artificial intelligence? Presented at the CHI’ 20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25–30. [Google Scholar] [CrossRef]
  45. Roose, Kevin. 2022. AI-generated art won a prize. Artists aren’t happy. The New York Times, September 2. [Google Scholar]
  46. Shamir, Lior, Jenny Nissel, and Ellen Winner. 2016. Distinguishing between abstract art by artists vs. children and animals: Comparison between human and machine perception. ACM Transactions on Applied Perception (TAP) 13: 1–17. [Google Scholar] [CrossRef]
  47. Stack, John. 2019. What the Machine Saw. Github. Available online: https://github.com/johnstack/what-the-machine-saw (accessed on 15 March 2022).
  48. Steward, Jeff. 2015. Harvard Art Museums Api. Available online: https://github.com/harvardartmuseums/api-docs (accessed on 10 December 2023).
  49. Szeliski, Richard. 2022. Computer Vision: Algorithms and Applications, 2nd ed. Berlin and Heidelberg: Springer Nature. [Google Scholar]
  50. Uliasz, Rebecca. 2021. Seeing like an algorithm: Operative images and emergent subjects. AI and Society 36: 1233–41. [Google Scholar] [CrossRef]
  51. Villaespesa, Elena, and Oonagh Murphy. 2021. This is not an apple! Benefits and challenges of applying computer vision to museum collections. Museum Management and Curatorship 36: 362–83. [Google Scholar] [CrossRef]
  52. Viola, Paul, and Michael Jones. 2001. Rapid object detection using a boosted cascade of simple features. Presented at the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, December 8–14, vol. 1. [Google Scholar] [CrossRef]
  53. Vitulano, Sergio, Vito Di Gesú, Virginio Cantoni, Roberto Marmo, and Alessandra Setti. 2005. Human and Machine Perception: Communication, Interaction, and Integration. Singapore: World Scientific Publishing Co. [Google Scholar]
  54. Von Davier, Thomas Şerban. 2023. Designing for Appreciation: How Digital Spaces Can Support Art and Culture. Presented at the CHI’ 23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, April 23–28. [Google Scholar] [CrossRef]
  55. McKinney, Wes. 2010. Data Structures for Statistical Computing in Python. SciPy 445: 51–56. [Google Scholar] [CrossRef]
  56. Zanzotto, Fabio Massimo. 2019. Viewpoint: Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research 64: 243–52. [Google Scholar] [CrossRef]
  57. Zhang, Jingyi, Jiaxing Huang, Sheng Jin, and Shijian Lu. 2024. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 46: 5625–44. [Google Scholar] [CrossRef]
  58. Zhang, Jiajing, Yongwei Miao, Junsong Zhang, and Jinhui Yu. 2020. Inkthetics: A Comprehensive Computational Model for Aesthetic Evaluation of Chinese Ink Paintings. IEEE Access 8: 225857–71. [Google Scholar] [CrossRef]
  59. Zhang, Lixuan, Iryna Pentina, and Yuhong Fan. 2021. Who do you choose? comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing 35: 634–46. [Google Scholar] [CrossRef]
  60. Zylinska, Joanna. 2023. The Perception Machine: Our Photographic Future between the Eye and AI. Cambridge: The MIT Press. [Google Scholar] [CrossRef]
Figure 1. The original Japanese print (left) next to the corner (center) and edge detection (right) visuals created by the software.
Figure 2. A comparison of the original rug image from the online collection, next to the code’s output calculating four faces. Four red squares are placed around the rug where the pattern triggered the code to identify faces.
Figure 3. An image showing the sofa in front of a grey-white background. While the sofa is the focus of the image, the whole image file is analyzed by the computer vision software.
Table 1. The common computer vision metrics outlined by Hosny et al. (2014) that were applied to the images used for the exhibit.

Dominant Color: Returns the hex code and RGB hue of the most common color in the image file based on a pre-defined number of clusters.
Brightness: Average brightness of the image file pixels.
Ratio of Unique Colors: The ratio of the total number of unique colors to the total number of pixels within the image file. A value of 1 indicates a highly colorful image, while a value near 0 indicates a greyscale image file.
Threshold Black Percentage: Using a pre-defined threshold value of 127, each pixel is classified as white (above the threshold) or black (below it), giving the ratio of black pixels in a greyscale or inverted image.
High Brightness Percentage: Ratio of pixels with more than two times the average brightness of the overall image file to the total number of pixels.
Low Brightness Percentage: Ratio of pixels with less than half of the average brightness of the overall image to the total number of pixels.
Corner Percentage: Harris Corner Detection (Harris and Stephens (1988)) is used to identify corner pixels and then calculate what percentage of pixels within the image file register as corners.
Edge Percentage: Using Canny Edge Detection (Canny (1986)), we identify the edge pixels in the image file and calculate the percentage of these pixels compared to the overall file.
Face Count: Haar cascade face detection (Viola and Jones (2001)) from OpenCV is a common basic form of face detection software; we apply it to the image file to see if there are any noticeable faces.
Table 2. Results of the images from Instagram compared to the total selection from the Met collection. Statistically significant results are in bold typeface.

CV Variable                   Test Statistic    p-Value
Brightness                     0.101384         0.92040
Ratio of Unique Colors        −2.30718          0.033441
Threshold Black Percentage    −0.07543          0.940718
High Brightness Percentage     1.38500          0.181199
Low Brightness Percentage     −0.35159          0.72933
Corner Percentage              0.47165          0.642551
Edge Percentage               −1.36619          0.189242
Face Count                     0.062826         0.95060
Table 3. Results of the images selected by Fabienne Hess compared to the total selection from the Met collection. Statistically significant results are in bold typeface.

CV Variable                   Test Statistic    p-Value
Brightness                     0.724514         0.48039
Ratio of Unique Colors        −1.32710          0.20561
Threshold Black Percentage    −0.32200          0.75211
High Brightness Percentage    −1.09492          0.291933
Low Brightness Percentage     −1.24428          0.233566
Corner Percentage             −0.71589          0.485799
Edge Percentage               −1.8290           0.08851
Face Count                     5.238054         2.835 × 10⁻⁵
Table 4. Results of all the images used in the exhibition compared to the ones sampled from the Met collection. Statistically significant results are in bold typeface.

CV Variable                   Test Statistic    p-Value
Brightness                     0.552951         0.583944
Ratio of Unique Colors        −2.16496          0.037811
Threshold Black Percentage    −0.28687          0.775964
High Brightness Percentage    −0.763722         0.45049
Low Brightness Percentage     −1.19127          0.241977
Corner Percentage             −0.62744          0.534768
Edge Percentage               −2.28754          0.028678
Face Count                     1.36220          0.18150
Table 5. Results of the images from the human artist compared to the pieces Instagram selected.

CV Variable                   Test Statistic    p-Value
Brightness                     0.453195         0.65361
Ratio of Unique Colors        −0.45925          0.651656
Threshold Black Percentage    −0.20527          0.83883
High Brightness Percentage    −1.32712          0.204521
Low Brightness Percentage     −0.84416          0.406565
Corner Percentage             −0.77744          0.449341
Edge Percentage               −0.72626          0.47422
Face Count                     1.56614          0.133185
