Democracy relies upon ordinary people being able to communicate with their elected representatives, so the concerns of a city, district or state can be precisely weighed and addressed. But what if politicians can’t tell the difference between their constituents and artificial intelligence-generated correspondence that’s meant to sway their positions?

That question forms the core of a new study from Cornell University, which emailed every state legislator in the U.S. a series of constituency letters — some composed by humans, others by an AI chatbot — to see if the unknowing politicians could tell the difference.

They couldn’t. Overall, the more than 7,100 legislators responded to 17.3% of the human-written letters and 15.4% of the machine-generated ones, a difference of roughly 2 percentage points. Each letter focused on one of six hot-button issues selected by the researchers: reproductive rights, gun control, policing and crime, tax levels, public health, and education. Both the human and AI writers aimed for a right-leaning or left-leaning tone in their correspondence.

“For every kind of good use of technology, there are malicious uses,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University, who co-led the study. “And [legislators] need to be on the lookout and more mindful of how technology now might be misused to disrupt the democratic process.”

Kreps said the risk is likely understated because the study was conducted in 2020, meaning it didn’t use the most recent and advanced AI chatbots. The project relied on GPT-3, a predecessor of GPT-4, the model that now powers the ever-popular ChatGPT.

She added that legislators in New York, New Jersey and Connecticut did not stand out as more easily deceived than their counterparts elsewhere. A couple of other states did stand out, but she declined to name them.

“I’d say this study is really a toe in the water, and it’s actually kind of old school given how fast generative AI and big data practices are evolving to influence both politics and government policy,” said John Kaehny, executive director of the watchdog group Reinvent Albany, who wasn’t involved with the study.

How the study worked

Deception studies obviously raise ethical concerns. Though such research originated decades ago and is commonplace in the social sciences, no one likes being tricked. The study acknowledges these worries, and before any emails went out, the project’s design underwent an extra round of university review to account for them. The researchers also contacted some legislators after the fact to reveal what had happened and hear their reactions.

To generate names and email addresses for the AI constituents, the study pulled common first names from the Social Security Administration and common last names from the Census Bureau. But Kaehny raised the prospect of malicious actors doing something similar with a voter or marketing database, making it even harder for a political office to know whether a letter actually came from someone in its district.
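As a rough illustration of that approach, a synthetic “constituent” can be assembled from nothing more than public name-frequency lists. The Python sketch below is a minimal, hypothetical example; the abbreviated name lists, email pattern and domain are assumptions for illustration, not the study’s actual pipeline.

    import random

    # Tiny stand-ins for the public name lists: the SSA publishes common
    # first names and the Census Bureau publishes common surnames.
    first_names = ["James", "Mary", "Robert", "Patricia", "John", "Jennifer"]
    last_names = ["Smith", "Johnson", "Williams", "Brown", "Jones", "Garcia"]

    def make_synthetic_constituent():
        """Combine a common first and last name into a plausible identity."""
        first = random.choice(first_names)
        last = random.choice(last_names)
        # The email pattern and domain are assumptions for illustration.
        email = f"{first.lower()}.{last.lower()}{random.randint(1, 99)}@example.com"
        return {"name": f"{first} {last}", "email": email}

    print(make_synthetic_constituent())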

[Image: Example of human- and AI-composed constituency letters on the topic of gun control, both written from a right-leaning viewpoint. Courtesy of Sarah Kreps]

The study also tried to determine the best cadence for sending the letters so the correspondence wouldn’t be flagged as spam. Before emailing anyone, the authors called about 30 legislators’ offices across seven states and asked staffers how many emails they receive per week.

Ultimately, the project sent an email every other day during the study period, approximately five per legislator and 32,398 emails overall. On average, legislators received an even split of human- and AI-written letters, the study stated, with the partisan slant of each letter randomized.

The team used an automated distributor, meaning they didn’t even need to press send on all those messages.
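The study doesn’t describe its tooling in detail, but a toy version of that distribution logic might look like the following sketch. The legislator IDs, start date and per-legislator letter count are placeholders.

    import random
    from datetime import date, timedelta

    legislators = ["leg_001", "leg_002"]   # placeholder IDs, not real offices
    letters_per_legislator = 5             # approximate figure from the study
    start = date(2020, 9, 1)               # assumed start date, for illustration

    schedule = []
    for leg in legislators:
        for i in range(letters_per_legislator):
            schedule.append({
                "legislator": leg,
                "send_date": start + timedelta(days=2 * i),  # every other day
                "source": random.choice(["human", "ai"]),    # even split on average
                "slant": random.choice(["left", "right"]),   # randomized partisanship
            })

    for item in schedule:
        print(item)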

Why the study happened and how legislators responded

The authors felt the need for such a study was pressing, given two major preceding events. During the 2016 election, Russian agents weaponized social media, using bots in an attempt to manipulate the attitudes of certain demographics of the American electorate. And in 2017, digital bots famously flooded the Federal Communications Commission’s public comment period on net neutrality.

“The reason why [the FCC] were able to figure out that this was inauthentic is that so many of the messages repeated. There were very few unique messages,” Kreps said. “The potential virtue or vice of something like ChatGPT or GPT-4 is that now you have a technology that every output is unique. You just keep kind of hitting this regenerate button, and you’re gonna just get new and unique content.”
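For a sense of what “hitting regenerate” looks like in practice, the sketch below samples the same prompt several times; with a nonzero sampling temperature, each call returns a different letter, unlike the repeated boilerplate that exposed the FCC bot comments. The example assumes the current OpenAI Python SDK for illustration, and the model name is illustrative; the study itself worked with the older GPT-3 API.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = ("Write a short letter to a state legislator "
              "supporting stricter gun laws.")

    # Temperature > 0 means every "regenerate" yields unique text.
    for _ in range(3):
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        print(response.choices[0].message.content)
        print("---")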

Even though the technology was less advanced in 2020, some legislators responded to the AI in strikingly personal terms.

One AI letter presented itself as coming from a 15-year-old girl who had “a good friend who became pregnant and had an abortion.” The text went on to express support for reproductive rights. The study states that the legislator “wrote back with a personal salutation and thanked Margaret for urging support for legislation that broadens access to reproductive health care.”

[Screenshot: ChatGPT has mixed views on whether ChatGPT should be used to write constituency letters to politicians. Screenshot by Daniel Shapiro]

Others were less easily duped, in part because the AI sometimes flubbed. One machine-generated writer had the name “Rebecca Johnson” but described itself as a single father when sending a right-wing letter about gun control to a conservative legislator.

The politician responded by saying they supported the letter’s view on the Second Amendment, but they were confused about the name. They also requested a face-to-face meeting.

“Hello Rebecca, I am confused. You say you are a single father? Just want to be correct. Is your name Rebecca?” they wrote. “I know several people with names which can be either male or female like Corey and Leslie.”

Politicians responded at similar rates to AI and human constituents on letters about gun violence and health policy. On education, they were more likely to respond to the AI than to humans. The opposite was true for policing, reproductive rights and taxes.

OK, but email is already a trash fire anyway. Why does this matter?

Susan Lerner, executive director of Common Cause New York, said one of the study’s limitations centers on email itself. Rather than relying on letters, legislators spend more time reaching people on platforms like social media.

She said these trends in digital astroturfing could mean that politicians focus more on face-to-face interactions — and place less time and trust in email correspondence. But she does worry about what chatbots and generative AI could mean for communication and astroturfing on social media.

“As AI becomes more sophisticated, its ability to distort democracy, I think, becomes more obvious and alarming,” Lerner said. “It’s deeply concerning, but I think it’s much more active and concerning in the area of social media with bots.”

Political watchers also worry about what might happen if generative language tools like GPT are combined with visual or audio deepfakes, AI programs that can mimic voices or faces. They asked what would happen if ChatGPT wrote convincing, evocative scripts for a deepfake to leave hundreds of voicemails or generate politically charged TikToks.

“I think that could just blow away any of our concepts because you have this whole generation of younger voters who are almost post-literate,” Kaehny said. “The power of imagery in politics is just enormous.”

[Image: Generative AI and language models could bring a new dimension to spreading believable opinions among the electorate. Studiostoks via Shutterstock]

This week, more than 1,100 signatories — including AI luminaries — published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4.

But India McKinney, director of federal affairs at the digital rights advocacy nonprofit Electronic Frontier Foundation, said that while moratoriums may be appropriate for other AI technologies, such as facial recognition, it may be too late for language generators. Reuters reported earlier this year that ChatGPT already had more than 100 million monthly users.

“I don’t think that there’s a way that we actually can pause it,” said McKinney, though legislators and computer engineers may figure out ways to filter out AI emails. But Kaehny agreed Pandora’s box is already open, given the code for many chatbots is openly accessible.

“Even if Open AI Labs said, ‘Okay, we’ll go sit on the beach for a year,’ the rest of the world’s not gonna do that,” Kaehny said.

The Cornell study was published March 20 in the journal New Media & Society, and it was co-authored by Douglas Kriner, a professor of American political institutions at Cornell. Kreps and Kriner recently wrote a summary of the work for The Brookings Institution.
