[Feature] Satire or Misinformation? Deepfakes Shape Korean Political Discourse
Ahead of the presidential election in South Korea, held on June 3, 2025, a deepfake video showing a major candidate wearing a prison uniform went viral on social media. As generative artificial intelligence (AI) tools became more accessible, ordinary citizens began using them to create political content. As a result, manipulated videos and images became a notable part of the online political conversation.
An illustration highlighting deepfake use in politics
Photo: Maeil Business Newspaper (mk.co.kr)
Deepfakes Actively Used in Korean Politics
According to the National Election Commission, from April 4 to June 2, a total of 10,448 deepfakes related to the presidential election were reported, and removal requests were filed through bodies such as the Korea Internet Self-Governance Organization (KISO). This was roughly 26 times the number recorded during the general election in April 2024. In response, the commission filed charges against three individuals who created and spread deepfake content targeting a specific candidate. These individuals shared 35 manipulated images and 10 videos, some designed to look like real news reports.
Deepfake technology has already been used in other countries to influence public opinion. In the 2023 Turkish presidential election, President Erdoğan released a fake video showing his rival alongside a terrorist group leader. The video spread widely and helped him reverse unfavorable poll numbers, and he ultimately won the election with 52.2 percent of the vote. The Korean case, however, differs in one key way. The lower technical barrier and the structure of social media allowed not only politicians and influencers but also everyday users to take part in producing and spreading fake content. This marks a new stage in how misinformation is created and shared online.
Why Deepfake Content Spreads So Easily: Technological and Social Issues
There are two major reasons behind the rapid spread of deepfake videos: easy access to the technology and the way social media platforms operate. New tools like Google’s Veo 3 can generate both video and audio simultaneously. With only a few prompts, users can create realistic fake content, including interview-style clips or news broadcasts, without requiring any editing skills.
However, the problem is not just the technology itself. Social and cultural factors also play a role. According to Professor Choi Jung-ouk from the Graduate School of Media & Communication at Kyung Hee University (KHU), viewers tend to react more strongly to visuals than to text. “Well-made videos can trigger the belief that seeing is believing,” he said, adding that this effect can weaken critical thinking.
Google Veo 3, a new tool used to create deepfake videos
Photo: outrightCRM (outrightcrm.com)
This tendency is intensified by social media algorithms, which often prioritize emotional or shocking content. As a result, people are more likely to encounter deepfakes without being able to verify whether the content is real. Prof. Choi also noted that Korea’s advanced digital infrastructure and platform-centered news consumption make the country more vulnerable to the spread of deepfakes. “In today’s digital culture, people care more about reactions and popularity than about verifying facts,” he said. “This creates an environment where manipulated content spreads very quickly.”
In such an environment, influencers who specialize in deepfake political satire have emerged. They create and post satirical videos about politicians on Instagram, YouTube, and other platforms, and their content often goes viral.
Divided Opinions on Deepfake Satire
Experts have differing views on deepfake political satire. Some argue that it promotes freedom of expression and increases public engagement with politics. Others worry about the spread of false information and its potential harm to democracy.
Prof. Kim Chang-nam from the Graduate School of Media & Communication at KHU believes such content can offer fresh insights into society and politics. “It can provide psychological and political relief,” he explained. “Through this content, the public can escape from a passive role and become more aware of their power as citizens.” He views the trend as a modern version of traditional political satire, now transformed by digital technology.
However, other experts are more cautious. If a fake video looks too real, it can confuse viewers and blur the line between fiction and reality. Prof. Kim Soo-jin from the Dept. of Political Science and International Relations at KHU argued that deepfake political news is bound to be more deceptive than traditional forms of satire such as cartoons.
She added, “Of all the political attitudes we should have in an age of deepfake AI technology, I think we should be vigilant about the possibility of deepfake news and also be open to diverse media channels without focusing only on popular social media outlets.”
Prof. Choi expressed a similar view. He emphasized the importance of considering the creator’s purpose and how clearly the content is labeled. If the goal is to trick viewers by making the video appear real, then it becomes misinformation. “We must respect freedom of expression, but we also need balanced regulations to prevent harmful manipulation of public opinion,” he said. He suggested that labeling, intent, and audience understanding should all be considered when developing rules for this type of media.
Deepfake political satire is spreading quickly in Korea due to powerful AI tools and the nature of social media. Now, even ordinary people are creating and sharing such content, increasing its influence. While it may help raise interest in politics, it also heightens the risk of public confusion and opinion distortion.
For this reason, it is important to set clear standards to distinguish satire from fake news, while encouraging media users to think critically. In the future, addressing this issue will require more than just technical tools. Clear social norms, responsible platform management, and stronger media literacy education will be essential.