Excerpts in case of paywall; the archive.org link isn’t working:

Generative conversational artificial-intelligence systems, such as OpenAI’s ChatGPT, are being used to optimize tasks, plan holidays and seek advice on matters ranging from the trivial to the existential… Against this backdrop, the urgent question is: can the same conversational skills that make AI into helpful assistants also turn them into powerful political actors? In a pair of studies in Nature and Science, researchers show that dialogues with large language models (LLMs) can shift people’s attitudes towards political candidates and policy issues. The researchers also identify which features of conversational AI systems make them persuasive, and what risks they might pose for democracy.

The effects were striking. Conversations favouring one candidate increased support for that candidate by around 2–3 points on a scale of 0–100, which is larger than the average effect of political advertising. Persuasion was stronger when the chat focused on policy issues rather than the candidate’s personality, and when the AI provided specific evidence or examples. Importantly, roughly one-third of the effect persisted when participants were contacted a month later, countering the intuitive critique that the initial shifts would prove volatile and ultimately inconsequential.

The persuasive influence was also asymmetric: AI chatbots were more successful at persuading ‘out-party’ participants (that is, those who initially opposed the targeted candidate) than at mobilizing existing supporters. In the state-level ballot-measure experiment in Massachusetts, persuasion effects were even larger, reaching double digits on the 0–100 scale.

Analysing 27 rhetorical strategies used by the AI models to persuade voters who engaged with them, the team found that supplying factual information was one of the strongest predictors of success… Yet ‘facts’ were not always factual. When the team fact-checked thousands of statements produced by the AI models, they found that most were accurate, but not all. Across countries and language models, claims made by AI chatbots that promoted right-leaning candidates were substantially more inaccurate than claims advocating for left-leaning ones. These findings carry the uncomfortable implication that political persuasion by AI tools can exploit imbalances in what the models ‘know’, spreading uneven inaccuracies even under explicit instructions to remain truthful.

It is important to note that these findings come from controlled online experiments. It is unclear how such persuasive effects would play out in real political environments in which exposure to persuasive AI agents is (often) voluntary and conscious. Such environments also contain a myriad of contrasting messages competing for attention, and users can ultimately decide to avoid or ignore specific information sources.

The Nature paper: https://www.nature.com/articles/s41586-025-09771-9

Somehow I can’t find or access the Science paper mentioned in the news article. If someone can find it, please comment.