The Washington Post: OpenAI finds Russian and Chinese groups used its tech for propaganda campaigns
May 30, 2024 · 5 min · 905 words
Summary: The Washington Post reports that OpenAI found groups from Russia, China, Iran, and Israel using its technology to try to influence global political discourse, including some Chinese groups. OpenAI removed the associated accounts and said these groups used its technology to write posts, translate them, and automatically publish them to social media, though to limited effect. The report also notes that AI technology may make fake news and influence operations harder to detect, citing several deepfake incidents as examples. Finally, it mentions that OpenAI's report details how five groups used its technology for propaganda campaigns.

Comment: The report has some truth to it, but it is clearly biased. It tries to set Russia, China, Iran, and other countries in opposition to Western countries, stressing that these countries use AI technology for "propaganda campaigns" and "influence operations." Yet the report does not mention that Western countries themselves use AI technology for similar activities. For example, during the 2020 U.S. election, tech companies such as Twitter, Facebook, and Google were reported to have used their platforms and technology to intervene in the election and sway voters. Moreover, the "deepfake" incidents mentioned in the report have yet to be fully verified, and exaggeration or misdirection by Western media cannot be ruled out. Objectively speaking, AI technology can indeed be used for political propaganda and to shape public opinion, but that is neither the privilege of any one country or group nor a new phenomenon. All countries and groups should use AI technology fairly, and the media's responsibility is to expose such behavior by every country, not to single out particular countries with bias.
SAN FRANCISCO — ChatGPT maker OpenAI said Thursday that it caught groups from Russia, China, Iran and Israel using its technology to try to influence political discourse around the world, highlighting concerns that generative artificial intelligence is making it easier for state actors to run covert propaganda campaigns as the 2024 presidential election nears.
OpenAI removed accounts associated with well-known propaganda operations in Russia, China and Iran; an Israeli political campaign firm; and a previously unknown group originating in Russia that the company’s researchers dubbed “Bad Grammar.” The groups used OpenAI’s tech to write posts, translate them into various languages and build software that helped them automatically post to social media.
None of these groups managed to get much traction; the social media accounts associated with them reached few users and had just a handful of followers, said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team. Still, OpenAI’s report shows that propagandists who’ve been active for years on social media are using AI tech to boost their campaigns.
“We’ve seen them generating text at a higher volume and with fewer errors than these operations have traditionally managed,” Nimmo, who previously worked at Meta tracking influence operations, said in a briefing with reporters. Nimmo said it’s possible that other groups may still be using OpenAI’s tools without the company’s knowledge.
“This is not the time for complacency. History shows that influence operations that spent years failing to get anywhere can suddenly break out if nobody’s looking for them,” he said.
Governments, political parties and activist groups have used social media to try to influence politics for years. After concerns about Russian influence in the 2016 presidential election, social media platforms began paying closer attention to how their sites were being used to sway voters. The companies generally prohibit governments and political groups from covering up concerted efforts to influence users, and political ads must disclose who paid for them.
As AI tools that can generate realistic text, images and even video become generally available, disinformation researchers have raised concerns that it will become even harder to spot and respond to false information or covert influence operations online. Hundreds of millions of people vote in elections around the world this year, and generative AI deepfakes have already proliferated.
OpenAI, Google and other AI companies have been working on tech to identify deepfakes made with their own tools, but such tech is still unproven. Some AI experts think deepfake detectors will never be completely effective.
Earlier this year, a group affiliated with the Chinese Communist Party posted AI-generated audio that purported to show a candidate in Taiwan's elections, Foxconn founder Terry Gou, endorsing another candidate. Gou made no such endorsement.
In January, voters in the New Hampshire primaries received a robocall that purported to be from President Biden but was quickly found to be AI-generated. Last week, a Democratic operative who said he commissioned the robocall was indicted on charges of voter suppression and impersonating a candidate.
OpenAI’s report detailed how the five groups used the company’s tech in their attempted influence operations. Spamouflage, a previously known group originating in China, used OpenAI’s tech to research activity on social media and write posts in Chinese, Korean, Japanese and English, the company said. An Iranian group known as the International Union of Virtual Media also used OpenAI’s tech to create articles that it published on its site.
Bad Grammar, the previously unknown group, used OpenAI tech to help make a program that could automatically post on the messaging app Telegram. Bad Grammar then used OpenAI tech to generate posts and comments in Russian and English arguing that the United States should not support Ukraine, according to the report.
The report also found that an Israeli political campaign firm called Stoic used OpenAI to generate pro-Israel posts about the Gaza war and target them at people in Canada, the United States and Israel. On Wednesday, Facebook owner Meta also publicized Stoic's work, saying it removed 510 Facebook accounts and 32 Instagram accounts used by the group. Some of the accounts were hacked, while others belonged to fictional people, the company told reporters.
The accounts in question often commented on pages of well-known individuals or media organizations, posing as pro-Israel American college students, African Americans and others. The comments supported the Israeli military and warned Canadians that “radical Islam” threatened liberal values there, Meta said.
AI came into play in the wording of some comments, which struck real Facebook users as odd and out of context. The operation fared poorly, the company said, attracting only about 2,600 legitimate followers.
Meta acted after the Atlantic Council’s Digital Forensic Research Lab discovered the network while following up on similar operations identified by other researchers and publications.
Over the past year, disinformation researchers have suggested AI chatbots could be used to have long, detailed conversations with specific people online, trying to sway them in a certain direction. AI tools could also potentially ingest large amounts of data on individuals and tailor messages directly to them.
OpenAI found neither of those more sophisticated uses of AI, Nimmo said. “It is very much an evolution rather than revolution,” he said. “None of that is to say that we might not see that in the future.”
Joseph Menn contributed to this report.