

Intellectual Humility in the Age of AI

Understanding intellectual humility and attitudes toward ChatGPT.

Key points

  • Intellectual humility reflects people's recognition of the fallibility of beliefs, opinions, and knowledge.
  • Despite their flaws, artificial intelligence (AI) tools like ChatGPT may still be worth engaging with.
  • People with higher intellectual humility show more acceptance and greater willingness to use ChatGPT.
  • Openness to experience mediates the relationship between intellectual humility and acceptance of ChatGPT.

Many moons ago, I wrote an “infamous” paper, “Modest Systems Psychology: A Neutral Complement to Positive Psychological Thinking.” I call it infamous because, at the time, it was a tough “take-down” of the positive psychology movement. I focused on the bias and simplicity of positive psychology thinking—we needed to do better in our efforts to understand and influence happiness and well-being. A modest approach, I argued, is an approach that recognizes and works with the complexity of human systems. As the field of positive psychology research matured, I gravitated toward a more substantive intellectual humility—I accepted that we have to start somewhere, bias is something we work against collectively over time, and we generally move from simple to more complex and fit-for-purpose thinking and practice as our science in a given field progresses.

I recognized a similar sentiment within myself recently in my response to ChatGPT. In my early testing of ChatGPT—What does it generate when I ask questions about social science?—I recognized its bias, simplicity, and many outlandish errors (e.g., making up study citations that don’t exist). I also studied the underlying language models and how they work and decided that ChatGPT was fundamentally flawed as an aid to intellectual activity. However, flawed as it is, I have decided I need to engage with it and continue to explore its functionality as it develops. This aligns with my purpose as a teacher, researcher, and collective intelligence facilitator. Simply put, if we can find ways to put AI to good use, part of my job as a teacher and collective intelligence facilitator is to explore these possibilities.

When it comes to intellectual humility, I came across an interesting study recently focused directly on ChatGPT. The study by Li (2023) explores the relationship between intellectual humility and acceptance of ChatGPT. Intellectual humility refers to the degree to which individuals recognize the fallibility of their beliefs, opinions, and knowledge; it is related to the personality disposition of openness to experience. Li (2023) suggests that people who recognize their own intellectual limitations may be more likely to accept AI as potentially useful. This is consistent with previous findings indicating that people with higher intellectual humility are less likely to feel threatened by developments in computer science and more likely to adopt new technologies.

Li (2023) conducted four studies. In Study 1, 309 students from southwest China completed a survey measuring intellectual humility along with a series of questions focused on acceptance and fear of ChatGPT. Analysis of this survey data indicated that higher intellectual humility was associated with higher acceptance of ChatGPT and lower fear.

Study 2 moved beyond a simple survey to evaluate a behavioral indicator of acceptance. A total of 144 students were contacted about an upcoming university event and asked to help with a decision regarding invitation letters that would be sent to guests of honor. They were informed that two versions of the invitation letter were available: one written by ChatGPT and the other written by a professional writer. The students were also informed that both ChatGPT and the professional writer could produce engaging, high-quality content and that a pilot survey indicated the two versions did not differ on key criteria such as credibility, effectiveness, readability, and valence.

Students were simply asked to indicate their preference and were not asked to read the invitations themselves. When presented with this scenario, the majority of students (82.64 percent) proposed using the invitation letter they thought was written by a human, while only 17.36 percent proposed using the version they thought was generated by ChatGPT. Interestingly, however, the study also found that the odds of selecting the ChatGPT version were significantly higher among students higher in intellectual humility. In simple terms, they were more open to using the ChatGPT invites.

Study 3 worked with a sample of 172 Chinese non-student adults. Participants were randomly assigned to an experimental condition that prompted either intellectual humility or intellectual certainty. Specifically, one group of participants read an article on the benefits of admitting one’s own intellectual limitations (thus prompting intellectual humility), while the second group read an article on the benefits of demonstrating what you know and not being bashful in doing so (prompting intellectual certainty). Participants were next shown a photo of a beautiful mountain lake scene and were told that the government was going to run an advertisement to promote the scenic spot. Two text advertisements were available: one produced by ChatGPT and another produced by a professional writer. (In reality, both texts were written by the same person.)

Again, participants were asked to select one text or the other. And again, we see the same subtle and interesting effect: 85.7 percent of participants in the intellectual certainty condition selected the text they believed was written by a human, but a significantly smaller percentage of participants in the intellectual humility condition, 68.9 percent, did the same. These findings suggest that intellectual humility, even when it is temporarily prompted, may result in more favorable attitudes toward ChatGPT.

Study 4 adopted a similar design to Study 3, but it compared the intellectual humility prompt with a more neutral control reading prompt. Participants were asked to select an invitation letter as in Study 2; only this time, participants were able to read the invitation letters. They were told one letter was generated by ChatGPT, and the other was drafted by a professional writer. (Both were written by the same person and were of similar quality.)

Again, prompting intellectual humility resulted in more favorable attitudes toward ChatGPT: 77.11 percent of participants in the control condition selected the letter they thought was written by a human, whereas 63.75 percent of the participants in the intellectual humility condition did so. Further statistical analysis also revealed that the personality trait of openness to experience mediated the relationship between intellectual humility and letter selection (i.e., the tendency for those prompted with intellectual humility to select ChatGPT letters was explained in part by higher openness).

Overall, while the statistical effects reported across each of the four studies are subtle (i.e., statistically significant but somewhat weak), collectively, the studies do point to an effect of intellectual humility on acceptance of ChatGPT. Li (2023) notes that many other factors are likely to operate when it comes to understanding the responses that people have to ChatGPT, and more research is needed. I’m doing my best to maintain intellectual humility, and I think the best approach for now is a case-by-case analysis of different potential applications of ChatGPT. As a teacher, there is no point in hiding my head in the sand and hoping that traditional models of teaching and learning will be sustained in the new world of AI. But if I identify problems with ChatGPT and the way it is being used, I’ll be sure to let my students know. We’ll figure it out together!

References

Li, H. (2023). Rethinking human excellence in the AI age: The relationship between intellectual humility and attitudes toward ChatGPT. Personality and Individual Differences, 215, 112401. https://doi.org/10.1016/j.paid.2023.112401
