Here Is Why AI Apps Such As Artflow AI and GPT-4 Needs to Pause.
Mar 30, 2023
Today, the Future of Life Institute published an open letter urging all AI laboratories to halt training of any large-scale AI systems that are more powerful than the present state-of-the-art technology known as GPT-4 or other programs such as Artflow AI for at least six months. The “Pause Letter” was signed by Louis Rosenberg and a number of other notable business figures, including Elon Musk and Steve Wozniak. He is unsure whether a halt will occur, but I feel there is serious danger coming. This is why.
He’s been concerned about the influence of artificial intelligence on society for quite some time. He is not referring to the long-term existential threat posed by a super-intelligent AI taking over the world, but rather to the near-term risk of AI being used to manipulate society by strategically influencing the information we receive and mediating how that information spreads across populations. This danger has risen dramatically in the previous year as a result of significant performance improvements in two overlapping technologies known as “generative AI” and “conversational AI.”
Let him express his reservations about each.
AI-generated content today includes anything from photographs, artwork, and videos to essays, poetry, computer software, music, and scientific publications.
Formerly, generative material was remarkable, but no one would have confused it for human-created stuff. It has all been altered in the previous year.
AI systems have suddenly developed the ability to construct artifacts that may easily deceive humans into believing they are either actual human inventions or genuine films or photographs acquired in the real world. These skills are now being implemented at a large scale, posing a variety of serious threats to society.
TThe most obvious threat is to the job market. Since the goods they produce are so good, specialists now refer to LLM systems as “human-competitive intelligence” capable of replacing employees who would have generated the material otherwise. This trend has an impact on a wide range of occupations, from artists and authors to computer programmers and financial analysts. In fact, a new research by Open AI, OpenResearch, and the University of Pennsylvania looked at the influence of AI on the labor market in the United States by comparing GPT-4 or Artflow AI skills to employment needs. The study’s authors anticipate that GPT-4 will affect at least 50% of the activities performed by 20% of the US workforce, with higher-paying jobs bearing the brunt of the impact.
The potential impact on jobs is worrying, but that is not why he signed the Pause Letter. His concern regarding the labor force is that the information provided by AI can appear legitimate and authoritative, but it can also contain factual inaccuracies. There are no accuracy standards or regulating authorities in place to assist guarantee that these systems, which will become an important element of the workforce, do not transmit inaccuracies ranging from minor blunders to outrageous fabrications. We need time to put safeguards in place and increase regulatory authority to guarantee that safeguards are applied.
The next most obvious risk is the possibility for bad actors to purposefully publish content with factual inaccuracies as part of planned influence efforts that promote misinformation, disinformation, and outright falsehoods. This risk is not novel, but the scale enabled by generative AI is. Flooding the globe with something that appears authoritative but is entirely bogus is frighteningly simple. Deepfakes, in which prominent people may be persuaded to do or say anything in realistic-looking films, pose a similar risk to what Artflow AI generates. As AI becomes more sophisticated, the general population will be unable to discriminate between the genuine and the manufactured. Watermarking systems and other technologies that differentiate AI-generated content from actual content are required. We need time for such safeguards to be designed and implemented.
Moreover, conversational AI systems bring a unique set of dangers. This type of generative AI may engage consumers in real-time conversation via text and voice chat. These technologies have recently progressed to the point where AI can have a cohesive conversation with humans, tracking the conversational flow and context over time. These are the technologies that most concern me because they create a brand-new type of influence campaign that authorities are unprepared for conversational influence.
Every salesperson understands that engaging someone in conversation allows you to convey your arguments, analyze their emotions, and then change your approach to accommodate their resistance or worries. With the introduction of GPT-4, it is obvious that AI systems can engage users in real-time genuine dialogues as a targeted influence. His concern is that third parties will insert commercial aims into what appear to be genuine conversations and that unknowing consumers will be duped into purchasing things they don’t want, signing up for services they don’t need, or believing incorrect information.
He calls this the “AI manipulation issue.” It was a theoretical risk for a long time, but technology now allows for the deployment of individualized conversational influence campaigns that target individuals based on their beliefs, interests, and history to promote sales, propaganda, or disinformation. Indeed, He is concerned that AI-powered conversational influence will be the most potent type of focused persuasion that humans have ever devised. We need time to put safeguards in place. Personally, Louis feels that those safeguards may necessitate rules that prohibit or severely restrict the use of AI-mediated conversational influence.
So, sure, he did sign the letter. After all, he believes that the new and rapidly evolving threats posed by AI systems are moving too quickly for the AI community to respond, from academics and business experts to regulators and policymakers.
Will the letter have any effect? He is unsure whether the industry would agree to the six-month freeze, but the letter has drawn attention to the issue. He believes we must set as many alarms as possible to wake up regulators, policymakers, and business leaders. They must take action. Normally, more subtle warnings would suffice, but this technology goes quicker into the market than anything he has seen. That is why we must halt and catch up. We require time.