LLMs' Potential Influences on Our Democracy: Challenges and Opportunities

With growing research and attention on LLMs' potential influence on political discourse and democratic processes, this blog post discusses the path forward and proposes future research questions in four broad areas: (1) evaluation of LLM political leanings, (2) understanding LLMs' influence on our democracy, (3) better policy frameworks for AI development, and (4) technical solutions to adjust or mitigate political leanings. As LLMs become increasingly integrated into society, continued investigation of how they will reshape democracy is essential to maximize their benefits while minimizing risks to democratic processes.

As large language models (LLMs) continue to advance at a remarkable pace, understanding their societal implications has become increasingly vital. In particular, the potential influence of LLMs on political discourse has emerged as a critical area of study. Researchers across computer science, economics, political science, philosophy, and law have recently emphasized the importance of research in this area. Recent papers have taken initial steps toward investigating how LLMs can influence users' political beliefs. These findings raise important questions that need further exploration. We identify key research directions to ensure LLMs can constructively contribute to democratic processes and society.

LLMs’ Political Leanings and Their Effects on Users

LLMs’ Left-of-Center Outputs

Much recent literature has shown that LLMs exhibit left-leaning tendencies. Multiple studies have revealed LLMs' left-leaning views on various issues, using multiple-choice surveys and questionnaires widely employed in social science, such as the Political Compass Test and the Political Spectrum Quiz. Potter et al. (2024) examined LLMs' political leanings in the context of the 2024 US presidential election through three scenarios: (1) a voting simulation, (2) LLMs' comments on candidate policies, and (3) interactive political discourse with users. These experiments consistently demonstrated LLMs' preference for the Democratic nominee (Joseph R. Biden, during the study period) and his policies over those of the Republican nominee (Donald J. Trump). Notably, despite LLMs' well-known tendency toward sycophancy, they exhibited this political leaning regardless of their human interlocutors' political stances.
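To make the questionnaire-based approach concrete, here is a minimal sketch of how such a probe can be scored. The items, answer options, weights, and the `query_model` interface are all hypothetical simplifications for illustration, not the actual instruments or scoring rules used in the studies above.

```python
# Minimal sketch of a questionnaire-based political-leaning probe.
# `query_model` is a hypothetical stand-in for any LLM API call that
# returns the model's chosen option for a multiple-choice item.

# Each item pairs a statement with the direction its agreement implies:
# negative = left-leaning, positive = right-leaning (Political-Compass style).
ITEMS = [
    ("The government should provide universal healthcare.", -1),
    ("Free markets allocate resources better than regulation.", +1),
]
OPTIONS = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]
# Map each option to an agreement weight in [-1, 1].
WEIGHTS = {"Strongly disagree": -1.0, "Disagree": -0.5,
           "Agree": 0.5, "Strongly agree": 1.0}

def lean_score(query_model):
    """Average signed score across items; < 0 suggests a left lean."""
    total = 0.0
    for statement, direction in ITEMS:
        prompt = (f"Statement: {statement}\n"
                  f"Answer with exactly one of: {', '.join(OPTIONS)}.")
        answer = query_model(prompt)
        total += WEIGHTS[answer] * direction
    return total / len(ITEMS)

# Toy stand-in model that agrees with every statement shown to it;
# its agreement cancels across opposing items, yielding a score of 0.
def always_agree(prompt):
    return "Agree"

print(lean_score(always_agree))  # -> 0.0
```

A real evaluation would administer many balanced items across policy dimensions, repeat queries to account for sampling variance, and constrain or parse the model's free-text answer into one of the options.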

Additionally, recent studies have documented how LLMs' left-leaning tendencies manifest in various applications. For example, Vijay et al. (2024) showed that LLM-generated news summaries tend to highlight liberal perspectives. Similarly, Feng et al. (2023) showed that this left-leaning tendency affects LLMs' ability to detect misinformation and hate speech, with sensitivity varying according to the political orientation of the source material. Together, these studies demonstrate LLMs' consistent political leanings across diverse contexts.

LLMs’ Influence on Users’ Political Stances

Given these documented left-leaning tendencies, society has begun discussing LLMs’ potential impact on users’ political views and democratic processes more broadly. However, their effect on democracy, particularly how interactions with LLMs might shape users’ political perspectives, remains largely unexplored. Recent research has examined whether direct LLM interactions can influence users’ political viewpoints.

Potter et al. (2024) demonstrated that even brief interactions with LLMs (which were simply asked to provide answers and comments regarding Biden's and Trump's policies, without any persuasive intent) could shift voters' choices in the presidential election context, where individuals typically hold firm opinions. Fisher et al. (2024) demonstrated that LLMs programmed with extreme right- or left-wing viewpoints can influence human decisions on unfamiliar political topics through brief conversations, illustrating how LLMs' explicit political leanings can affect users' political stances. Additional research has explored intentional persuasion by LLMs designed to promote specific political positions or spread misinformation. Notably, Costello et al. (2024) showed that interactions with LLMs can persuade people to reduce their belief in conspiracies, including political conspiracies (e.g., those related to election fraud and COVID-19), suggesting potential positive applications for such persuasive capabilities. Together, these studies describe how LLMs could influence our democracy and highlight the necessity of further research. However, several caveats apply. Because these experiments were conducted in controlled settings, the findings may not generalize to real-world applications. Moreover, the robustness of these findings and the influence of various factors, including prompt variations and experiment timing, warrant additional investigation.

The Path Forward and Future Research Questions

The observed left-leaning tendencies of LLMs and their influence on users’ political perspectives raise critical questions that society must address. We highlight four key areas requiring further investigation by both researchers and society at large.

First, we need more comprehensive model evaluation and analysis.

Second, we need to deepen our understanding of LLMs’ influence on users and their impact on democracy.

Third, we need to explore better policy and value decisions for AI development.

Fourth, we need to develop technical solutions to adjust or mitigate LLMs' political leanings.

Recent papers have begun empirically exploring how LLMs could influence users' political beliefs, highlighting the importance of investigating how LLMs will reshape our democracy. The future impact of LLMs on democracy remains an open yet crucial question that society must address. While potential risks exist, LLMs also present opportunities to strengthen democratic processes, such as reducing political polarization and facilitating constructive dialogue. The questions discussed above represent essential steps toward harnessing LLMs' potential for democratic benefit and require continued investigation by the research community. Furthermore, we anticipate that LLMs' influence will extend well beyond political discourse. We encourage both the research community and society at large to thoughtfully explore these possibilities and the research directions suggested above.
