Why call it Chatty?

Pedram Pourasgari

Artificial Intelligence (AI) has been around for a while. Its origins trace back to 1956, when scientists launched the Dartmouth Summer Research Project on Artificial Intelligence to create machines capable of imitating human intelligence or, as the project’s proposal¹ put it, “to simulate many of the higher functions of the human brain.” Yet it took the recent advances in large language models for AI to see widespread adoption across a variety of fields.

We increasingly find ourselves in contexts where human and intelligent non-human agents coexist. It is becoming harder and harder to tell the two apart, to know, for instance, whether a social media post was written by a human or a social bot. The phenomenon is rapidly expanding into other domains such as the arts, where music producers and visual artists have already begun training AI agents to perform creative tasks. It is no longer fiction to imagine a philharmonic orchestra featuring both human and non-human performers, or to see an artwork co-created with AI sold at Christie’s.

The primary goal of AI development since its inception has been to create systems and machines capable of replicating human intelligence, so the convergence of AI with human behavior should not be surprising. But even if we can distinguish AI entities from humans, are we willing to do so? Will we consider them human creations, or will we grant them identities of their own and consider them creators?

While AI is a relatively new phenomenon, humans have a long history of imbuing non-human physical or conceptual entities with humanlike qualities, regardless of whether those entities inherently possess them. This tendency surfaces in particular situations, and AI is now one of them. As is common in the software industry, AI applications are typically released as black boxes. Black-boxed software tends to be perceived as a more advanced form of technology², owing to its friendlier interface and the creator’s apparent desire to safeguard its contents. When we encounter an enigmatic, incomprehensible black box, our first instinct is to look for similarities with concepts we already know.

What, then, are we most familiar with? Perhaps ourselves. When someone cannot understand the inner workings of a black-boxed AI system, they turn to familiar cognitive shortcuts and anthropomorphize the technology to make sense of its actions.

In his attempt to explain the emergence of early religions, David Hume asserted that when humans experience an absolute ignorance of causes and are anxious about the future, they acknowledge a dependence on invisible powers, which he referred to as unknown causes³. According to Hume, the need to make sense of these unknown causes triggers humankind’s universal tendency to attribute human qualities, such as appearance, malice, or goodwill, to everything, leading people to conceptualize the unknown causes as humanlike.

Just as Hume noticed this tendency in religious contexts, today’s AI users face a similar unknown cause: the opaque algorithms that drive AI behavior. The prevalence of generative AI models has produced a great deal of such behavior toward AI products. Even in a TechCrunch article written to suggest that the consciousness of Claude (a large language model) is an illusion, the author freely uses human-specific language to describe the behavior of AI models, with phrases like “being polite but never apologetic,” “being honest about the fact that it can’t know everything,” “having certain personality traits,” and “being like an actor in a stage play.”

This phenomenon is called anthropomorphism: the attribution of human characteristics, motivations, emotions, and intentions to non-human entities, including objects and animals as well as social constructs such as religious beliefs and product brands⁴. It is the reverse of dehumanization, which represents other humans as animal-like by denying their human attributes, or as objects by denying their human nature⁵.

Anthropomorphism typically arises in two cases. First, when people lack the cognitive capacity to make sense of an entity, they may anthropomorphize it to explain its actions, much as children do when encountering a complex new object such as a TV or a computer. The second case stems from the human tendency to make our creations resemble ourselves. Examples abound, from a car advertised as a person with a friendly smile, to fairy-tale animal characters who can speak, to a humanlike sculpture of a god carved by a religious leader. In the software industry, creators sometimes anthropomorphize their products to give them a familiar, humanized image, and users do the same to perceive the creation through its actions rather than its invisible contents.

Arguably, when black-boxing is combined with anthropomorphism by both developers and users, the socially constructed image of AI overshadows the technology itself. This is not just a technical phenomenon. It is a deeply human one, and it opens up fascinating possibilities for understanding how we relate to what we create.

[1] McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4): 12–12.

[2] Nelson, A., Anthony, C., & Tripsas, M. 2023. “If I Could Turn Back Time”: Occupational Dynamics, Technology Trajectories, and the Reemergence of the Analog Music Synthesizer. Administrative Science Quarterly, 68(2): 551–599.

[3] Hume, D. 1793. The Natural History of Religion. Printed and sold by J. J. Tourneisen.

[4] Epley, N., Waytz, A., & Cacioppo, J. T. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4): 864–886.

[5] Haslam, N. 2006. Dehumanization: An Integrative Review. Personality and Social Psychology Review, 10(3): 252–264.
