The manipulation of public policy evidence with Generative Artificial Intelligence: the risks of deepfakes


Christian Cruz-Meléndez

Abstract

The article analyzes the risks of using Generative Artificial Intelligence (GAI) and deepfakes in the formulation of public policies. The justification lies in the growing presence of digital media that allow citizens to generate evidence of public problems, and in the risk that this evidence may be manipulated through technologies such as GAI, distorting governmental decision-making. The objective is to show how deepfakes can alter the public perception of social issues, distorting the political agenda and the government's response. The research is qualitative, with an exploratory and descriptive approach, based on a documentary review of public policies, GAI, and deepfakes. It is emphasized that in countries like Mexico the lack of specific legislation on GAI amplifies these risks, and it is suggested that both governments and civil society work on regulation and on promoting greater transparency in digital information, so that public policies are based on real evidence. The conclusions highlight that deepfakes introduce significant distortion into the evidence of public problems, which can lead to erroneous decisions. The creation of regulatory frameworks and the use of content-verification technologies are recommended to mitigate these risks. The relationship between public policies and GAI is identified as a novel field of study for generating new knowledge.

Article Details

Section: Dossier Temático