CONVERSATIONAL AI POSES GRAVE DANGER

AI researchers often discuss the "control problem" when considering the risks AI poses to human civilization: the possibility that a superintelligent machine could emerge that is so much smarter than humans that we would quickly lose the ability to control it. The fear is that a sentient AI with superhuman intelligence would pursue goals and interests that conflict with our own. While this is a legitimate concern, is it really the greatest threat AI poses to society? After all, according to a recent survey of more than 700 AI experts, human-level machine intelligence is believed to be at least 30 years away.

However, a different kind of control problem is already within reach, and it could pose a serious threat to society if policymakers do not act quickly to address it. There is a real possibility that currently available AI technologies could be used to target and manipulate individual users with extreme precision and efficiency. Worse, this new form of personalized manipulation could be deployed at scale by corporations, government agencies, or even rogue despots to influence broad populations.

In contrast to the traditional control problem described above, this emerging AI threat is referred to by some as the "Manipulation Problem." Although it has been tracked as a danger for almost two decades, the last 18 months have transformed it from a theoretical long-term risk into an urgent near-term threat. The reason is that conversational AI has proven to be the most efficient and effective deployment mechanism for AI-driven human manipulation. Moreover, over the last year, a remarkable AI technology known as Large Language Models (LLMs) has rapidly matured into a highly capable technology. As a consequence, natural conversational interaction between targeted users and AI-driven software has suddenly become a viable means of persuading, coercing, or manipulating them.

There is no doubt that AI technology is already being used to drive influence campaigns on social media platforms, but these efforts are primitive compared to what the technology will soon be able to achieve. Although current campaigns are described as "targeted," they are more analogous to spraying buckshot at a flock of birds: propaganda or misinformation is aimed at a broadly defined population in the hope that a few pieces of influence will resonate with members of the community, penetrate it, and spread widely across social networks. In my opinion, this method of political manipulation is extremely dangerous and has already caused real social damage, including the polarization of communities, the spread of falsehoods, and the loss of trust in legitimate governments. Still, it will seem slow and inefficient compared to the next generation of AI-driven influence methods that will soon be unleashed on society.