top of page

South Korea "Personal Information Processing Guide for the Development and Use of Generative Artificial Intelligence (AI).

  • Writer: Ksenia Laputko
    Ksenia Laputko
  • Aug 10
  • 3 min read

On August 6, 2025, South Korea’s Personal Information Protection Commission (PIPC), chaired by Koh Hak-soo, released its Personal Information Processing Guide for the Development and Use of Generative Artificial Intelligence (AI). Announced during the Generative Artificial Intelligence and Privacy seminar at the Jeongdong 1928 Art Center in Seoul, the guide establishes clear personal information security standards for companies and institutions developing or using generative AI services.

The PIPC describes the guide as a critical step in eliminating uncertainty about how the Personal Information Protection Act (PIPA) applies to AI development and deployment. It is intended to enhance voluntary compliance by providing concrete, actionable standards for businesses using commercial large-scale language models (LLMs) such as ChatGPT, as well as those fine-tuning open-source models like LLaMA or developing their own in-house AI systems.

ree

Key things from the document

1. 4 stage AI lifecycle with minimum safety measures

The PIPC divides the lifecycle of developing and utilizing generative AI into four stages, presenting minimum safety measures to be verified at each stage. It also categorizes the methods and contexts in which AI is developed and used, setting legal and safety standards for each type:

  • Service-type LLM (e.g., ChatGPT API integration)

  • Ready-made LLM (e.g., fine-tuning LLaMA open-source model)

  • In-house development (e.g., development of a lightweight SLM model)


Stage details:

  1. Goal-setting 

    Clearly define the purpose of AI development, and establish the legal basis for training AI depending on the type and source of personal data.


    Let's look in the text of the document:"Define the purpose of using generative AI, specifying what types of personal information will be processed and for what purpose. The Guide distinguishes between public data and user-provided data, providing legal criteria for each:

    • Publicly available data may be lawfully used under the “legitimate interests” provision of PIPA (Article 15(1)(6)).

    • Reuse of user data already held by an organization requires assessing whether it falls under “use within the original purpose,” “additional use,” or “use for a separate purpose,” with criteria and examples provided."


  2. Strategy establishment 

    Categorize development methods and present risk mitigation measures for each type.

    "Plan the development approach and assess privacy risks based on whether the AI will use an API-connected commercial LLM, fine-tuned open-source LLM, or in-house-developed model.

    For service-type LLMs, the Guide recommends contractual safeguards in licenses and terms of service to control how data is processed. For ready-made LLMs, developers must address post-release risk discoveries quickly, while users should apply updates and patches regularly."


  3. Learning and development 

    Present multi-layered safety measures against risks such as data poisoning and jailbreak attacks, including management of AI agents.

    "Implement multi-layered safeguards at the data, model, and system levels to mitigate risks such as memorization of training data, data contamination, and jailbreak exploits. Maintain a feedback loop to evaluate and improve privacy safety.

    Special attention should be paid to search-type AI agents (which may access illegal or harmful data) and memory-type AI agents (which may pose profiling risks)."


  4. Application and management 

    Ensure protection of data subject rights, and establish governance centered on a Chief Privacy Officer (CPO) to embed privacy principles throughout the process.

    "Before deployment:

    • Test accuracy, resistance to safety bypass attempts, and risk of training data exposure.

    After deployment:

    • Monitor AI outputs and ensure mechanisms for protecting data subject rights, even if traditional rights like access, rectification, and erasure are technically limited."


    It’s interesting to see that AI governance responsibilities are being placed under the remit of the DPO. I’ve pointed out many times that one of the most cost-effective strategies for companies is to equip their DPOs with solid AI governance expertise. By doing so, organizations not only strengthen compliance with privacy laws but also ensure that AI systems are developed and deployed responsibly — without having to create an entirely separate governance structure from scratch.

    It seems that this document provides only a brief outline and still lacks detailed guidance for each stage. I believe it’s reasonable to expect more specific clarifications on AI and its regulatory requirements later this year, especially as more people within supervisory authorities gain a deeper understanding of both the technical and legal aspects of AI.

    Full text (korean) https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=11410#LINK



 
 
 

Comments


bottom of page