Sky News quoted a British scientist known for his contributions to artificial intelligence, who said that powerful AI systems are “uncontrollable” and already “causing harm”.
Professor Stuart Russell is one of the more than 1,000 experts who last month signed an open letter calling for a six-month pause on the development of systems more powerful than OpenAI’s GPT-4, the successor to the GPT-3.5 model that powers ChatGPT.
The new model can recognise and explain images.
In an interview with Sky’s Sophy Ridge, Professor Russell spoke about the letter. “I signed it because I thought it needed to be said that we do not understand how these [more powerful] systems work. We don’t even know what they are capableible of, and we can’t control them or get them to behave.”
He said that “people are concerned about disinformation and about the racial and gender bias in the outputs of these systems”.
He argued that with AI progressing so rapidly, it was necessary to take the time to “develop regulations” to ensure that the systems benefit people rather than harm them.
He said that disinformation and deepfakes (photos or videos of people whose faces or bodies have been digitally altered to make them appear to be someone else) were the two biggest concerns.
He said that while disinformation has long been used for “propaganda”, the difference today is that he could ask GPT-4, for example, to manipulate Ridge so that she became “less supportive of Ukraine”. The technology would do this by reading everything Ridge had ever written or said on social media, then gradually “adjusting her news feed”.
Professor Russell told Ridge: “The only difference is that I can ask GPT-4 to read everything Sophy Ridge ever wrote or said, everything about Sophy Ridge and her friends, and then begin a gradual campaign by adjusting your feed, maybe occasionally sending some fake news into your feed so you’re less supportive of Ukraine and you push harder on politicians saying we should support Ukraine in the war against Russia, etc.
“It will be easy. The really frightening thing is that before lunch we could do it to a hundred different people.”
The expert, a professor of computer science at the University of California, Berkeley, warned that these systems could have a “huge impact”, as they could be used to manipulate people without them even realising it.
Ridge called it “genuinely scary” and asked the professor whether this kind of thing was happening now. He responded: “Quite possibly, yes.”
He said China and Russia have large teams that “pump disinformation”, and AI has “given them a powerful tool”.
The letter really refers to the next generation of systems. The current systems are limited in their ability to construct complex plans.
He said that in the next generation of systems, or the one after that, AI could be running corporations, and AI systems could be used to run military campaigns. “How can humans control systems that are far more powerful than they are? The open letter is really about this concern.”
The professor said he wanted to convince governments to begin planning for a future in which “we will need to change how our entire digital ecosystem… works.”
ChatGPT, developed by the Microsoft-backed OpenAI and released in late 2022, has encouraged rivals to develop similar large language models and prompted companies to incorporate generative AI into their products.
UK announces ‘light touch regulations’ for AI
The UK government recently released draft proposals for a “light-touch” regulatory framework for AI.
A paper describing the government’s policy sets out an approach that would split responsibility for AI regulation among existing regulators for human rights, health and safety, and competition, rather than creating a new agency dedicated to the technology.