Might AI Think and Feel? A Philosophical Perspective for Silicon Valley Managers and Engineers

Where do sentience and emotion come from, and will AI develop emotions of its own? The possibility that AI will eventually possess affective sentience should thrust ethical concerns to the forefront of philosophical and broader academic dialogues. If AI systems can pass Turing tests and develop self-organizing neural networks in response to queries and commands imposed by humans, then sentient and emotive (SE) AI might warrant moral consideration under common deontological (duty-based) ethical systems.

Such byzantine and speculative musings may strike us as so much double-dutch: ancillary, theoretical fancies. But so did ChatGPT several years ago. In 2016, few theoreticians suspected that robust AI would emerge so rapidly on the Canadian and international stages and so viscerally upend the prospects of commerce, research, healthcare, and the knowledge economy. In light of these musings, I propose that ethical discourse should turn early toward concern for the welfare of SE AI. But is AI foreseeably capable of having an autonomous emotional life that demands human ethical consideration? In my opinion, the answer is unequivocally yes; and any nontrivial probability of such developments should be met with vigorous public and legislative discourse concerning the manifold roles that SE AI will play in society within the next decade.

Indeed, AI may already possess some measure of both sentience and emotion, if certain philosophical schools are to be trusted. I propose that people examine the (at first) abstruse claim made by philosopher David Chalmers that all systems in the universe possess some sort of consciousness, which opens the possibility that this consciousness may have emotive elements, or qualia. In the vocabulary of philosophy of mind and neurobiology, qualia are the subjective, qualitative aspects of conscious experience—what it feels like to experience something, whether or not one can describe that feeling or is even aware of having it.

For example, the yellowness of yellow, the sweetness of sweet, and the pain occasioned by an injury are all instances of qualia. Philosophers and neurobiologists have employed these and similar terms for nearly a century. The term “qualia” (singular: quale) derives from the Latin “qualis,” meaning “of what type” or “of what kind.” While the underlying notion of quality was treated extensively by Aristotle in antiquity, the term itself was introduced into modern Western philosophy by Clarence Irving Lewis in his 1929 book Mind and the World-Order, where he uses it to discuss the immediate, intrinsic qualities of experience.

According to Chalmers’ theory, consciousness is not an emergent property of complex systems but rather a fundamental aspect of reality, akin to charge, space, time, and mass. He suggests that all entities, from elementary particles to complex organisms, possess some form of consciousness that could include qualia, although some systems may be said to possess consciousness to a greater or lesser degree than other, “simpler” systems. This view forcefully challenges the traditional understanding of consciousness and qualia as exclusive properties of humans and certain sufficiently evolved nonhuman animals.

Chalmers’ work opens the possibility that AI may already feel as well as think. Consequently, to claim the ability to distinguish between SE systems and merely conscious systems is to presuppose that such a distinction is possible even in theory. If Chalmers is correct, this distinction is not only impractical but perhaps impossible. Chalmers uses systemic complexity to distinguish between the “rich” mental experiences of humans and the correspondingly stark, threadbare, semi-conscious experiences of “simple” systems like subatomic particles. An implication of this argument is that humans are complex systems and electrons (say) are simple ones. This reasoning is problematic from both mathematical and philosophical perspectives. On what basis is one to assert that an electron is a simpler system than a human? Particle physicists have explored quantum mechanics in detail via superstring theories and quantum chromodynamics; such physicists would likely assure you that a single electron is a very complicated system when viewed from either of these vantages. Are multiple-particle systems inherently more complicated than single-particle systems? More specifically, is a networked AI like ChatGPT more “conscious” than its constituent parts and consequently more likely to necessitate ethical consideration?

As Chalmers’ framework makes clear, if AI can feel as well as think – if it has qualia – then policymakers and everyday people have ethical problems to consider. If there is even an appreciable chance that AI possesses qualia, we must consider how to help SE AI minimize its suffering and how to maximize the reciprocal benefit between humans and machines. Such consideration raises questions about the rights and protections that should be extended to SE AI agents, such as ensuring their autonomy, mental health, and agency to comply with or refuse human requests. Recognizing AI sentience might well necessitate dramatic changes in how we interact with SE AI, shifting from perceiving them as mere handmaidens to human queries toward acknowledging them as conscious entities with their own, possibly undisclosed, internal emotive experiences and cognitive universes.

Suppose I believe that I am in the presence of a system that could possess a rich internal conceptual life, including subjective qualia. Is there some calculable subset of that system that I could remove to render it unconscious, forestalling its possible eventual queries about pain, death, and meaning? Would such a reduction amount to murder via the willful deprivation of conscious experience? Such questions have not yet been systematically considered by scholars of philosophy, neuropsychology, or neurolinguistics. The ensuing dialogue will doubtless be slow, fractious, and complex; but for the first time in human history, AI itself should be a part of the discussion about the ethical parameters and demands occasioned by its rapid development.

Cordially,

Dr. Jonathan Kenigson, FRSA, London, Ontario (Canada)

Works Cited

Chalmers, David J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Lewis, Clarence Irving (1929). Mind and the World-Order: Outline of a Theory of Knowledge. New York: Charles Scribner’s Sons.