The “Framework Act on the Development of Artificial Intelligence and the Creation of a Foundation for Trust” (hereinafter, the “AI Basic Act”) and its Enforcement Decree came into effect on January 22, 2026. Following the enforcement of the AI Basic Act, the Ministry of Science and ICT announced the “AI Transparency Guidelines,” “AI Safety Guidelines,” “High-Impact AI Assessment Guidelines,” “High-Impact AI Operator Responsibility Guidelines,” and “AI Impact Assessment Guidelines.”

In this newsletter, we focus on the “AI Transparency Guidelines” and examine the obligations to ensure transparency, with particular emphasis on their impact on content providers.

 

1. Scope and applicability

The AI Basic Act designates “AI business operators” as the subjects of its obligations and classifies them into “AI developers” and “AI-using business operators” (Article 2, Clause 7 of the AI Basic Act).

An AI developer is an individual or entity that develops and provides AI systems, such as Naver’s HyperCLOVA and OpenAI’s ChatGPT. An AI-using business operator is an individual or entity that uses AI systems developed by AI developers to offer AI products or services. For example, Wrtn Technologies, Inc., which provides writing services using AI models, would fall into this category.

Simply using AI-generated outputs to create or provide one’s own content does not make a person an AI business operator; such a person is merely a user. For example, a YouTuber who produces videos using OpenAI’s Sora would be classified as a user and therefore not subject to the obligations under the AI Basic Act.

The AI Basic Act also contains explicit extraterritorial provisions, meaning it applies to activities conducted abroad if they affect the domestic market or users in Korea (Article 4 of the AI Basic Act).

 

2. Obligation to ensure transparency

Article 31 of the AI Basic Act imposes various transparency obligations on AI business operators depending on the type of products or services they provide. These obligations apply to AI business operators that ultimately deliver AI products or services to end users, rather than to the users of those products or services themselves.

A. Overview of key obligations 

• Article 31(1): If you provide products or services using high-impact AI or generative AI, you must give prior notification of the fact that the “product or service” operates based on high-impact or generative AI.
• Article 31(2): If you provide a generative AI service, or products/services that utilize one, you must label the fact that the “output” was generated by generative AI.
• Article 31(3): If an AI system is used to provide outputs, such as audio, images, or videos, that are difficult to distinguish from reality (so-called “deepfake outputs”), you must provide notification or labeling, in a way that users can clearly recognize, of the fact that the deepfake output was generated by an AI system.

B. Prior notice pursuant to Article 31(1) of the AI Basic Act 

If an AI business operator offers products or services that use high-impact AI or generative AI, it must notify users in advance that these “products or services” operate based on such AI (Article 31(1) of the Act and Article 23(1) of its Enforcement Decree). This prior notice can be given using the following methods.1 

• General terms of use / contract: specify in the general terms of service, during the service registration process, or in the contract that generative AI or high-impact AI is being utilized.
• On-screen display: indicate on the screen that generative AI or high-impact AI is being utilized when providing the service through software, mobile applications, or similar platforms.
• Posting at the place where the product/service is provided: for offline services, post the information at a location where it can be clearly recognized before use.


C. Duty of labeling pursuant to Article 31(2) of the AI Basic Act

When an AI business operator provides generative AI, or products/services utilizing it, it must label the fact that the “output” was generated by generative AI. The labeling must be done using one of the following methods (Article 31(2) of the Act and Article 23(2) of the Enforcement Decree).

1. A method perceivable by humans
2. A method readable by machines, in which case a notification, such as a text or voice message, must be provided at least once.


D. Prior notice or labeling pursuant to Article 31(3) of the AI Basic Act

When an AI business operator provides deepfake “output” using an AI system, the obligations for prior notice or labeling apply. The notice or label must indicate that the output was generated by an AI system and must be presented in a way that allows users to clearly recognize this fact, taking the following points into consideration (Article 31(3) of the Act and Article 23(3) of its Enforcement Decree).

1. Provide the notice or label in a manner that users can readily perceive, whether visually, audibly, or via software.
2. Consider the primary users’ age, physical abilities, and social circumstances when presenting the notice or label.

However, Article 31(3) provides an exception for deepfake outputs that are artistic or creative works, allowing the notice or label to be presented in a way that does not interfere with the exhibition or enjoyment of the content. Normally, for video deepfakes, the AI-generated nature must be indicated throughout playback (for example, via a logo). Under the Article 31(3) exception, however, if the work is artistic or creative, the notice or label may be presented at a different point than the main content, or conveyed through non-visible means, so as not to disrupt the viewer’s immersion.


E. Specific methods for fulfilling the labeling obligation

The labeling methods under Sections C and D vary slightly depending on whether the output is used within the service or exported externally. When the output is provided within the service, it is sufficient to indicate it on the AI-generated output itself or through the service UI so that users can recognize it. However, if the output is exported externally, such as through downloading or sharing, the labeling must be applied directly to the output itself.

1) Labeling method when the output is provided within the service2

  • Applies when the output is presented within the service environment (UI).
• Conversation-based services
  ① Provide an initial notice, or continuously display a logo in the chat window or other ongoing conversation interface, so that users can clearly recognize it.
  ② Indicate on the service screen that “the AI model generates real-time data.”
  ③ If there is insufficient space to display text, use a tooltip or similar method to convey the information.
• Voice assistant services
  ① Provide a notice, before use, in a manner users can recognize through audio or on-screen text.
  ② For physical products such as voice-enabled appliances, indicate on the exterior that the product operates based on AI.
  ※ For services that are initiated upon user request, the notice need not be repeated for each individual voice interaction.
• Game or metaverse services
  ① Where AI is used for in-game elements, inform users that a character or NPC they interact with is AI, by indicating it in the character/NPC name or through an initial dialogue notice.
  ② Where AI-based voice services are provided, notify users that the voice is AI-generated when they activate gameplay (e.g., upon login).
• Productivity enhancement services (e.g., document drafting support services)
  Display a logo within the user interface and provide a notice prior to use (individual outputs need not be labeled).

2) Labeling method when the output is exported externally3

  • Applies when AI-generated outputs may be exported, for example, by downloading or sharing.
  • Even if disclosed in the service UI, the output itself must be labeled upon export.
  • If using non-visible methods (e.g., metadata), users must be notified at least once through text, audio, or similar means at the time of download.
  • Among the implementation methods described below, the “machine-readable method” does not apply to deepfake outputs.
• Text outputs
  ① When provided in file form, indicate it in the document header or in the file metadata.
  ② For code generation tools, indicate it in the project description or within code comments.
• Image outputs
  ① Human-perceivable methods: visible measures, such as inserting a logo within the image.
  ② Machine-readable methods: non-visible measures, such as digital watermarking or metadata.
• Video outputs
  ① Human-perceivable methods: visible measures, such as placing a logo on part of the screen or displaying a notice at the start of the video indicating that it was AI-generated. For deepfakes, only human-perceivable methods are allowed, and the AI-generated nature must be indicated throughout the entire playback (for example, by showing a logo for the full duration). However, for artistic or creative works, the notice may be presented in a way that does not disrupt viewing or enjoyment.
  ② Machine-readable methods: non-visible measures, such as digital watermarking or metadata.
• Audio outputs
  ① Human-perceivable methods: provide a notice at the beginning of the audio indicating that it was generated by AI (a brief statement at the start is sufficient; the indication need not continue throughout playback).
  ② Machine-readable methods: non-visible measures, such as audio watermarking (detectable after the fact) or metadata.
• Outputs in other file formats
  ① Human-perceivable methods: indicate the AI-generated nature in a manner appropriate to the file format (e.g., in the header or at the beginning of a document, or on part of the first slide for slide-based formats).
  ② Machine-readable methods: indicate that the content is AI-generated in metadata fields, such as the file author information.
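For text outputs, the two labeling routes above can be sketched in a few lines of code. The snippet below is only an illustration: the Guidelines do not prescribe a specific format, and the notice wording and metadata field names (“ai_generated”, “generator”) are hypothetical examples chosen for this sketch, not official designations.

```python
import json

def label_exported_text(content: str, generator: str) -> str:
    """Human-perceivable labeling: prepend a visible notice to a text
    output that is exported (e.g., downloaded) from the service."""
    return f"[Notice] This text was generated by AI ({generator}).\n\n{content}"

def machine_readable_metadata(generator: str) -> str:
    """Machine-readable labeling: metadata that downstream tools can parse.
    Note that when only a non-visible method like this is used, a one-time
    notice (text or audio) is still required at the time of download."""
    return json.dumps({"ai_generated": True, "generator": generator})

labeled = label_exported_text("Quarterly market summary ...", "ExampleModel")
meta = machine_readable_metadata("ExampleModel")
print(labeled.splitlines()[0])
print(meta)
```

In practice, the machine-readable route would more commonly be implemented with standardized provenance metadata (e.g., embedded file metadata or watermarks) rather than a sidecar string, but the principle, a parseable flag plus at least one user-facing notice, is the same.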

 

3. Impact of transparency obligations on content providers

Among the obligations under the AI Basic Act, the transparency requirement is expected to have the greatest impact on business operators. In particular, content providers involved in the creation, production, or distribution of content are likely to be subject to labeling requirements for AI-generated outputs. Using “AI de-aging” technology as an example, content providers and related business operators that must comply with the transparency obligation can be classified as follows.4

Required to comply with the transparency obligation:
• AI de-aging technology developer (AI developer): develops and provides the core AI technologies for facial recognition and modification.
• Provider of AI de-aging services for film production (AI-using business operator): provides AI de-aging services tailored to the film industry using specialized AI de-aging technology.

Not required to comply with the transparency obligation:
• Film production company (user): plans, shoots, and produces the final content using AI de-aging technology.
• OTT platform (user): provides the final content to consumers.
• Consumer (N/A): subscribes to the OTT platform.

As illustrated above, in the video content industry, a company that develops and provides an AI system implementing de-aging technology would be considered an AI developer under the AI Basic Act, while a company that uses that AI system to offer de-aging services to film or video producers would be classified as an AI-using business operator. Both companies would therefore be subject to the transparency obligations under the Act.

In contrast, even if AI technology is used in content production, a party that does not develop the AI system or provide AI products or services is generally treated as a user, and the transparency obligation would not apply. For instance, a film producer incorporating AI-generated CG into a movie is simply using the output from an AI service and would be considered a user rather than an AI business operator under the AI Basic Act.5 However, whether specific obligations apply or not may vary depending on the role of each business and its operational structure, so each case should be assessed individually.6

In the gaming and metaverse industries, companies that provide games featuring AI interaction or dynamic (real-time) AI content generation functions are considered providers of AI products or services and, therefore, qualify as AI business operators subject to the transparency obligations under the AI Basic Act. Accordingly, when a game company intends to offer AI-powered services, it must ① indicate that characters or NPCs with which users interact are AI-driven, either through the character/NPC name or an initial dialogue notice, and ② notify users at the start of gameplay (e.g., upon login) that in-game voices are generated by AI.7

For AI-powered conversational services, companies must ① provide an initial notice or continuously display a logo in ongoing dialogue interfaces, such as chat windows, so that users can clearly recognize the AI interaction, ② indicate on the service screen that “the AI model generates real-time data”, or ③ if there is insufficient space for text, use tooltips or other similar methods (e.g., explanatory text that appears when the user hovers over a designated area or activates a separate button).

Therefore, in the entertainment industry, creating a virtual artist that interacts with users using AI would be considered a conversational service, and the corresponding labeling obligations would apply.

 

4. Conclusion

The AI Basic Act regulates not only AI developers but also content providers that use AI systems to offer games, metaverse experiences, virtual characters, conversational content, or audio/video-generated content, classifying them as AI-using business operators. In today’s environment, where AI adoption is rapidly expanding across the content industry, the new regulatory framework under the AI Basic Act poses significant practical challenges for companies, including legal compliance and the development of implementation systems.

Among the obligations under the Act, the transparency requirement is likely to have the most direct impact on the services and operations of content providers. As a result, businesses planning to offer AI tools for music or video production, or AI-driven services that enable interactions with virtual characters, will need to carefully consider, from the service planning stage, how AI will be used and how the transparency obligations will be met.

The transparency obligation goes beyond merely requiring the labeling of AI-generated outputs; it is likely to affect content production, distribution structures, and operational processes across the industry. As lower-level regulations and guidelines are further clarified and supervisory or enforcement cases accumulate, the practical scope and standards of the AI Basic Act will become clearer. Accordingly, content providers should continuously monitor these regulatory developments, regularly assess whether their service structures and AI usage align with the law’s intent and requirements, and proactively implement the necessary internal management systems.

The Content & Entertainment Team at Shin & Kim LLC closely monitors the latest domestic and international rulings and regulatory developments related to AI, conducting in-depth research on the relevant legal principles. In addition to advisory work such as reviewing AI copyright issues and drafting guidelines for AI use, the team regularly conducts seminars and lectures for institutions and companies. This experience enables the team to deliver leading and highly effective solutions on AI and copyright matters.
 

1 See AI Transparency Guidelines p. 6.
2 See AI Transparency Guidelines pp. 11-14.
3 See AI Transparency Guidelines pp. 15-19.
4 See High-Impact AI Assessment Guidelines p. 16.
5 See AI Transparency Guidelines p. 2.
6 Under the AI Basic Act, only AI business operators are subject to regulation. Therefore, there is currently no legal basis to penalize a party other than the AI business operators for the removal of watermarks or other markings during distribution. In response, a bill has been introduced in the National Assembly to amend the Act on Promotion of Information and Communications Network Utilization and Information Protection, which would impose labeling obligations on ad publishers.
7 See AI Transparency Guidelines p. 13.

 

[Korean version] 인공지능 기본법과 콘텐츠 산업 – ‘투명성 확보 의무’를 중심으로 (The AI Basic Act and the Content Industry – Focusing on the Obligation to Ensure Transparency)