Persuasion, the process of altering someone’s belief, position or opinion on a specific matter, is pervasive in human affairs and a widely studied topic in the social sciences. From public health campaigns to marketing and sales to political propaganda, various actors develop elaborate persuasive communication strategies on a large scale, investing substantial resources to make their messaging resonate with broad audiences. In recent decades, the diffusion of social media and other online platforms has expanded the potential of mass persuasion by enabling personalization or ‘microtargeting’—the tailoring of messages to an individual or a group to enhance their persuasiveness. The efficacy of microtargeting has been questioned because it relies on the assumption of effect heterogeneity, that is, that specific groups of people respond differently to the same inputs, a concept that has been disputed in previous literature. Nevertheless, microtargeting has proven effective in a variety of settings, and most scholars agree on its persuasive power.
Microtargeting practices are fundamentally constrained by the burden of profiling individuals and crafting personalized messages that appeal to specific targets, as well as by a restrictive interaction context that lacks dialogue. These limitations may soon fall away with the recent rise of large language models (LLMs)—machine learning models trained to mimic human language and reasoning by ingesting vast amounts of textual data.
In the context of persuasion, experts have widely expressed concerns about the risk of LLMs being used to manipulate online conversations and pollute the information ecosystem by spreading misinformation, exacerbating political polarization, reinforcing echo chambers and persuading individuals to adopt new beliefs. This is especially relevant since LLMs and other AI systems are capable of inferring personal attributes from publicly available digital traces such as Facebook likes, status updates and messages, Reddit and Twitter posts, pictures liked on Flickr, and other digital footprints. In addition, users find it increasingly challenging to distinguish AI-generated from human-generated content, with LLMs efficiently mimicking human writing and thus gaining credibility. […]
Our results show that, on average, GPT-4 opponents outperformed human opponents across every topic and demographic, exhibiting a high level of persuasiveness. In particular, when compared with the baseline condition of debating with a human, debating with GPT-4 with personalization resulted in an 81.2% increase in the odds of reporting higher agreement with one’s opponent. More intuitively, this means that 64.4% of the time, personalized GPT-4 opponents were more persuasive than human opponents. […]
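The relationship between the two figures above follows from the standard conversion between an odds ratio and a head-to-head probability. A minimal sketch of that arithmetic (the 1.812 odds ratio is derived here from the reported 81.2% increase; the conversion formula p = OR / (1 + OR) is the standard identity, not a detail taken from the study itself):

```python
# An 81.2% increase in the odds of reporting higher agreement
# corresponds to an odds ratio of 1.812 relative to the
# human-vs-human baseline condition.
odds_ratio = 1 + 0.812

# Standard odds-to-probability conversion: p = OR / (1 + OR).
# Interpreted here as the chance that the personalized GPT-4
# opponent is the more persuasive one in a pairwise comparison.
p_more_persuasive = odds_ratio / (1 + odds_ratio)

print(f"{p_more_persuasive:.1%}")  # → 64.4%
```

Running the conversion on the reported odds increase reproduces the 64.4% figure, confirming that the two numbers describe the same effect from different angles.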
In other words, not only was GPT-4 able to exploit personal information to tailor its arguments effectively, but it also succeeded in doing so far more effectively than humans.
Our study suggests that concerns around personalization and AI persuasion are warranted […] We emphasize that the effect of personalization is particularly remarkable given how little personal information was collected (gender, age, ethnicity, education level, employment status and political affiliation) and despite the extreme simplicity of the prompt instructing the LLM to incorporate such information. […]
A promising approach to counter mass disinformation campaigns could be enabled by LLMs themselves, generating similarly personalized counternarratives to educate bystanders potentially vulnerable to deceptive posts.