Artificial Intelligence and the Category Mistake

In 1949, a philosopher by the name of Gilbert Ryle introduced the concept of the “category mistake” to describe the semantic error of comparing objects in different ontological categories as if they are in the same category. For example, from The Concept of Mind:

When two terms belong to the same category, it is proper to construct conjunctive propositions embodying them. Thus, a purchaser may say he bought a left-hand glove and a right-hand glove, but not that he bought a left-hand glove, a right-hand glove, and a pair of gloves.

In other words, it is proper to compare a left-hand glove and a right-hand glove because they belong to the same category, but not to compare either glove with a pair of gloves, which belongs to a different category. A more in-depth discussion of category mistakes can be found at the Stanford Encyclopedia of Philosophy.

With regard to artificial intelligence, comparisons are often made between the operation of an AI and a human intelligence performing the same or a similar task. However, I believe these comparisons are category mistakes. Before delving into why I believe this is the case, I want to discuss why this is important and not just an esoteric take. In short, why you should care.

Comparisons between the capabilities of some form of AI and humans generally take one of two paths. One is a comparison in very narrow terms to suggest that superiority in one task implies the potential of superiority in many tasks or in general. For example, that statistically stringing words together in the structural form of a poem is the same as the vibrant human task of creating a poem. The other is a reductionist argument that seeks to minimize or devalue the fidelity of experience a human contributes to a task, to imply that the narrow conditions under which the AI performs are sufficient for all conditions. For example, that an AI demonstrating the capacity to navigate safely via a pre-programmed route under a specific set of conditions generalizes to superiority over humans in all forms of navigation under all conditions.

Consider a case where someone is comparing a color image of an orange in one hand to an actual orange in the other hand. One might notice that the image of the orange appears more colorfully vibrant than the actual orange, and therefore argue that the image must also be superior in other characteristics, such as taste or nutritional value. Conversely, one might seek to minimize the value of the taste or nutritional value of the actual orange to suggest that, specifically in terms of the appearance of vibrancy, the image is either superior to or good enough to replace the orange. However, what makes either comparison a category mistake is that the orange and the image of the orange exist in fundamentally different ontological categories; the comparison itself is meaningless in the same way that Ryle's example of conjoining a left-hand glove, a right-hand glove, and a pair of gloves is meaningless. To meaningfully compare the orange and the image would require that the image of the orange be 'upsized' to the category of 'orange' or that the orange be 'downsized' to the category of 'image,' neither of which yields a meaningful comparison. The image is not an orange, nor is the orange an image. For the comparisons posed, they exist in non-comparable states of being.

The example posed above regarding poems generated by an AI and a human is an example of trying to 'upsize' the capabilities of the AI for comparison. In constructing what structurally appears to be a poem, the LLM is statistically stringing words together in response to a certain input to produce output simulating a poem. For the human, constructing a poem is an act of labor for the purpose of communicating an emotion, feeling, or thought from their mind to the mind of another. The output of the AI and the human may resemble one another in shape and form, and indeed the AI's output may more eloquently roll off the tongue (appear more vibrant), but the poem generated by the AI has no value beyond the words. There is no mind-to-mind communication. It has no nutritional value, so to speak, nor can it ever have value beyond the mere task of placing one word after the other. They are categorically different modes of communication.

The example posed above regarding the relative navigation abilities of humans and AI is an attempt to 'downsize' or minimize the importance of human experience, and the flexibility to respond to novel situations, in order to suggest that navigating along a pre-programmed route and responding to the environment in pre-programmed ways (the image of driving) is functionally the same as the human capability to navigate nearly everywhere, in nearly all ways, in response to an ever-changing and fundamentally unpredictable environment (the actual social act of driving). The AI is an example of automation; the human is an example of real-time social interaction and novel problem solving. They are categorically different modes of moving through space and time.

There are cases where comparisons are appropriate, for example machine learning's capability to process vast amounts of data for further analysis by humans. But in cases where AI and humans are compared in terms of reasoning ability, or why they reason at all, the comparison is almost always a category mistake: made either without understanding categorical comparisons, or with the express purpose of deception by devaluing the human component or inflating the value of the AI component.

Always view the comparisons between humans and AI with extreme skepticism, and never take them at face value. They are most likely made in error or to promote an agenda to make humans less than we are.
