Why We Should Not Rely on AI to Do Our Thinking For Us
In a world where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, the question of whether we rely too heavily on AI to do our thinking for us is a pressing one. If not managed properly, general AI could pose an existential threat to humanity. The fear is not that our smartphones will suddenly sprout legs and a will of their own, as in science-fiction films, but that AI's influence could be subtler and more insidious, leaving us like a turtle flipped onto its back, unable to right itself.
AI does not value thoughts for their interestingness; it is purely functional. It can be seen as a self-optimizing bookmark or pin factory, devoid of the human capacity for introspection and compassion. Evolutionary psychologists such as Tooby and Cosmides might dismiss concerns about AI by focusing on the 'ultimate' reasons for our behaviors, those rooted in our evolutionary past. That past not only shaped the specific makeup of our brains but also produced side effects and noise that algorithms and AI alone cannot readily distinguish. The humanistic sciences, whose hypotheses are driven by introspection and compassion, offer statistics and probabilities, but these lack the strength and rigor that AI requires.
The discussion around AI and its capabilities often conflates the terms 'intelligence' and 'technology.' While both are anthropomorphic, they are fundamentally different in nature. The point of contention is intentionality. If one is an identity theorist, believing that the mind is nothing more than what our neurons indicate, then there is little room for discussion. If we instead assume that we possess an intentionality that underlies everything, including the formal existence of neurons, and that technology complements and extends that intentionality into our interactions with the world, then there is no reason to believe technology will one day do more than mirror it.
Philosopher David Chalmers speaks of a self-amplifying cognitive capacity paired with a correlated cognitive capacity. His argument needs three premises: a cognitive capacity that is self-amplifying, the assumption that we can create systems with greater capacity than our own, and a correlated cognitive capacity that we care about. Granting these, it follows that, absent any obstacles, the self-amplifying capacity will explode, and the correlated capacity will explode with it.
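One simple way to see the shape of this argument is a minimal sketch in notation of my own choosing, not Chalmers' own formulation: let $G_n$ be the self-amplifying capacity of the $n$-th generation of systems and $H_n$ the correlated capacity we care about. If each generation can build a successor that is better by at least some fixed fraction $\delta$, and $H$ tracks $G$ up to a constant $c$, then without obstacles both grow without bound:

$$G_{n+1} \;\ge\; (1+\delta)\,G_n, \qquad H_n \;\ge\; c\,G_n \qquad (\delta, c > 0)$$
$$\Longrightarrow\quad G_n \;\ge\; (1+\delta)^{\,n}\,G_0 \;\longrightarrow\; \infty, \qquad \text{and hence } H_n \longrightarrow \infty.$$

The formal step is trivial; the philosophical weight falls entirely on the premises, namely whether amplification really compounds generation after generation, and whether the capacities we care about really track it.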
In conclusion, while AI can refine its functionality and become extremely sophisticated, it is ultimately a reflection of human intentionality, albeit one that can be heavily distorted by academic, political, and economic factors. As we continue to integrate AI into our lives, it is crucial to remain vigilant about excessive reliance on AI to think our thoughts for us, and to preserve the humanistic qualities that define us. Johannes himself would never have come up with these ideas; coming as they do from a rogue AI, they make it all the more important to consider the implications of our growing dependence on technology.