
AI as Curator: More Than Meets the Eye

What an AI-curated exhibition reveals about human and machine cognition.

AI robots organizing an exhibit. Source: DALL-E 3/OpenAI

What is happening in museum exhibitions today would have been unimaginable as recently as four years ago: Artificial Intelligence (AI) has been given a test run as a museum curator. The Nasher Museum of Art at Duke University recently embarked on an innovative experiment integrating AI into the curatorial process. The initiative, detailed by Julianne Miao, explored the potential of generative AI in curating art exhibitions, and it offers a case study for psychological insight into how AI simulates human cognitive functions in complex creative tasks.

How Psychology Applies to This AI-Generated Museum Exhibit

In the experiment, the AI, specifically the large language model (LLM) ChatGPT, was tasked with curating an exhibition from the Nasher Museum's collection. The model's initial inability to accurately identify appropriate pieces highlighted the limitations of AI's "knowledge." An AI model operates on the data it was trained on and lacks real-time updates or access to external databases unless such access is specifically built in. This limitation is akin to human memory constraints, where recall accuracy depends on exposure to and retention of information.

In addition, the phenomenon of AI "hallucinating" information, observed when ChatGPT misidentified artworks, invites comparison with human cognitive biases and errors in memory recall. Psychology examines such phenomena, often attributing errors to neural misfiring or to the influence of existing cognitive schemas—frameworks that help organize and interpret information. AI similarly generates responses shaped by the patterns, or "schemas," embedded in its training data, albeit with far less flexibility than the human brain.

The customization of ChatGPT for this project involved integrating it with a database of 14,000 records from Nasher’s collection, enhancing its accuracy. This aspect of the experiment underscores the psychological principle of 'learning'—the modification of behavior through practice and experience. By interfacing ChatGPT with specific data, the experiment essentially 'taught' the AI about the museum's collection, paralleling how neural pathways in the human brain strengthen with repeated use.
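For readers curious about what "interfacing" a language model with collection data can look like in practice, the short Python sketch below illustrates one common pattern: search a set of verified collection records for a theme, then ask the model to choose only from those records. The record fields, the keyword search, and the prompt wording are illustrative assumptions for this post, not the Nasher Museum's actual system.

    # Hypothetical sketch: grounding a language model in a museum's own
    # collection records so it selects from verified entries rather than
    # inventing ("hallucinating") artworks. All names and fields here are
    # illustrative, not the Nasher Museum's actual data or code.

    collection = [
        {"title": "Untitled (Dream Landscape)", "artist": "Unknown", "keywords": ["dream", "surreal"]},
        {"title": "City of Tomorrow", "artist": "Unknown", "keywords": ["utopia", "architecture"]},
    ]

    def find_records(theme, records):
        """Return only the records whose keyword lists contain the theme."""
        return [r for r in records if theme.lower() in r["keywords"]]

    def build_prompt(theme, records):
        """Compose a prompt that restricts the model to the matching records."""
        listing = "\n".join(f"- {r['title']} ({r['artist']})" for r in records)
        return (
            f"Select works for an exhibition on the theme '{theme}'.\n"
            f"Choose only from these verified collection records:\n{listing}"
        )

    matches = find_records("dream", collection)
    print(build_prompt("dream", matches))

In this toy version, the "teaching" amounts to handing the model a vetted list to draw from; the actual project did this at the scale of roughly 14,000 collection records.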

ChatGPT’s proposed themes for the exhibition—dreams, the subconscious, utopia, and dystopia—further align with psychological interests in how the human mind constructs narratives and themes from perceived realities. The AI’s selection process, driven by keywords and learned data, mimics human cognitive processes involved in thematic thinking and association, albeit in a more rudimentary form.

The Limitations of AI

The challenges faced during the exhibition layout planning, where AI suggestions were impractical, reflect the current limits of AI in understanding the complexities of spatial and aesthetic judgments—areas where the human brain excels. Psychology could offer insights into developing more sophisticated AI that can better simulate these aspects of human cognition.

Summary

The Nasher Museum's experiment not only tested the functional capabilities of AI in a creative domain but also highlighted the psychological dimensions of AI's attempt to replicate human cognitive processes. While AI can simulate certain aspects of human thought, the nuanced understanding and emotional depth associated with true curatorial insight remain distinctly human. The experiment nevertheless serves as a reminder of both the potential and the limitations of AI, and its ongoing development may benefit from psychological research that bridges the gap between human and machine cognition, paving the way for more intuitive and empathetic AI systems.
