
Taiwan AI Basic Act analysis

  • Writer: Ksenia Laputko
  • Jan 4
  • 7 min read

On December 23, 2025, the Legislative Yuan passed the third reading of the “AI Basic Act,” establishing the legal foundation for AI development in Taiwan.


As we could not find a full analysis of the act anywhere, probably because there is no official (or indeed any) translation of the law, we have produced both ourselves. That's an exclusive for you, our dear readers.


The act defines AI as systems capable of autonomous operation, which use input or sensors combined with machine learning and algorithms to produce outputs—such as predictions, content, recommendations, or decisions—that influence physical or virtual environments.


The aims of the act are:

  1. build a smart nation, strengthen international competitiveness, and promote sustainable national development;

  2. promote human-centered artificial intelligence (AI) research and industry development, and construct a safe AI application environment;

  3. implement digital equity, protect fundamental rights, enhance social welfare, improve quality of life, preserve cultural values, and ensure that technological applications conform to social ethics.


    Where not provided by this law, the provisions of other laws shall apply.


The scope of the act is both:

a) territorial (artificial intelligence systems developed, provided, or used within the territory of the Republic of China (Taiwan));


b) extraterritorial (acts that occur outside the territory but have a direct and substantial impact within the territory).


The legislator also noted—likely nodding to the EU AI Act—that where other legal provisions offer greater protection for individuals' rights and interests, or contain more specific rules concerning artificial intelligence systems, those provisions shall prevail.


The reference to the EU AI Act—and the broader influence of the so-called Brussels Effect—is evident in Taiwan’s replication of the risk-based approach to classifying artificial intelligence systems. Under this framework, AI systems are to be categorized according to the level of risk they pose to individual rights and interests, democratic institutions, and public safety.
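
To make the tiered idea concrete, here is a deliberately simplified Python sketch. The tier names and the decision rule are our own assumptions, loosely modeled on the EU AI Act; the act itself leaves the concrete classification criteria to the central competent authority.

    from enum import Enum

    # Illustrative only: the act mandates a risk-based classification but
    # leaves the concrete tiers and criteria to the competent authority.
    class RiskTier(Enum):
        PROHIBITED = "prohibited"  # banned uses and practices (Articles 7 and 20)
        HIGH = "high"              # registration, transparency, monitoring duties
        LIMITED = "limited"        # disclosure duties (e.g., synthetic content)
        MINIMAL = "minimal"        # no additional obligations

    def classify(prohibited_use: bool,
                 affects_fundamental_rights: bool,
                 interacts_with_public: bool) -> RiskTier:
        # Toy decision rule; the real criteria (intended purpose, scope of
        # application, decision-making autonomy, societal impact) are to be
        # formulated and published by the authority.
        if prohibited_use:
            return RiskTier.PROHIBITED
        if affects_fundamental_rights:
            return RiskTier.HIGH
        if interacts_with_public:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL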


The legislators have separated the prohibited uses of AI (Article 7) from the prohibited AI practices (Article 20); systems falling under either are considered prohibited.


The prohibited uses are:


  1. Infringement on fundamental rights such as freedom, equality, or privacy;

  2. Engagement in social scoring or surveillance for unjustified purposes;

  3. Use in ways that manipulate individuals' behavior or impair autonomy through subliminal techniques;

  4. Deployment in contexts prohibited by international human rights norms or the Constitution.


Prohibited practices are:

"1.Inducing individuals to engage in harmful behavior through manipulation of their decision-making.


2.Exploiting vulnerabilities related to a person's age, physical or mental disability, or economic or social situation.


3.Creating or spreading deepfakes that cause significant harm.


  1. Other practices announced by the central competent authority as harmful to the public interest."



As we can observe, the next category—high-risk AI systems—also mirrors the approach of the EU AI Act. Although the precise criteria for classifying systems as high-risk are yet to be established, the competent authority is responsible for formulating and publishing these standards. In doing so, it must consider factors such as the AI system’s intended purpose, scope of application, level of decision-making autonomy, and potential impact on society.


Articles 10-16 set the obligations for such systems. Providers of high-risk AI systems

  1. must ensure transparency, making sure that the system's decisions and the way it works can be reasonably understood. They are required to:

    • Share clear and easy-to-understand information about how the AI system works, its main settings, and what it is designed to do.

    • Explain what roles humans have played in building, using, and overseeing the system.

    • Inform people when the AI system makes decisions that have a significant impact on them.

    The central authority (National Science and Technology Council) may set detailed rules for how much explainability is needed, based on the system’s technology and how it affects society.

  2. must maintain comprehensive records documenting the system’s development, testing, deployment, and usage to ensure traceability and accountability.

    The records shall include at least the following (a schema sketch follows after this list):

    1. Data sources and preprocessing methods.

    2. Model architecture and training processes.

    3. Post-deployment monitoring and maintenance activities.

    4. Identified risks and mitigation measures.

    These records shall be retained for a period prescribed by the central competent authority and made available to authorities upon request.


  3. must implement mechanisms to monitor system performance and enable users to report anomalies or submit feedback.

  4. shall maintain a reasonable level of accuracy, robustness, and cybersecurity during operation.

  5. shall be designed to include appropriate human oversight measures to minimize risks to fundamental rights and safety.

  6. shall take appropriate measures to identify, assess, and reduce the risks that such systems may pose to fundamental rights, health, safety, or the environment during the development and deployment stages.

  7. shall, before the system is put into use, register relevant information with the central competent authority.

    The registered information shall include the name of the provider, the application field of the system, the basic description of the system, and other items designated by the central competent authority.

  8. shall, after the system is put into use, continuously monitor its operation and promptly report to the central competent authority any incidents that pose significant harm.

  9. must report incidents. If a high-risk AI system causes—or is likely to cause—serious harm to individuals, society, or public interests, the provider or deployer must promptly report the incident to the competent authority and take immediate corrective actions.

    The report must include:

    • A description of the incident and its scope

    • Details of the AI system involved

    • Potential impacts and the steps being taken to reduce harm

    When needed to protect the public, the competent authority may also disclose the incident to the public.
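
Because the act prescribes what these records and incident reports must contain (obligations 2 and 9 above) but no particular format, providers are free to structure them as they see fit. A minimal, purely hypothetical Python sketch, in which every field name is our own invention:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class HighRiskSystemRecords:
        data_sources: str                  # 1. data sources and preprocessing methods
        preprocessing_methods: str
        model_architecture: str            # 2. model architecture and training processes
        training_process: str
        monitoring_activities: list = field(default_factory=list)  # 3. post-deployment monitoring and maintenance
        identified_risks: list = field(default_factory=list)       # 4. identified risks and mitigation measures
        mitigation_measures: list = field(default_factory=list)

    @dataclass
    class IncidentReport:
        incident_description: str          # the incident and its scope
        system_details: str                # the AI system involved
        potential_impacts: str             # potential impacts
        harm_reduction_steps: list         # steps being taken to reduce harm
        reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

Whatever shape such records take, the prescribed formats, content requirements, and retention periods will come from the central competent authority, as noted below.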




The specific guidelines for fulfilling each obligation—such as system monitoring, user feedback mechanisms, and record-keeping—along with the prescribed formats, content requirements, and retention periods, shall be determined and issued by the central competent authority.


If the central competent authority—namely, the National Science and Technology Council—determines that a high-risk AI system poses substantial risks or harms the public interest, it is empowered to issue enforcement measures. These may include ordering the provider to make corrections, suspend system operations, or implement other necessary interventions. The procedures and criteria for executing such orders shall be formulated and announced by the authority itself.


Further, under Article 18 of the act, the central competent authority is responsible for defining and publishing the criteria used to designate AI systems as high-risk. This determination is based on their potential impact on:

  • Fundamental rights

  • Human health and safety

  • The environment


For users of high-risk AI systems, the law sets the following obligations:


  1. comply with the usage instructions provided by the provider and ensure the system is used in a manner that does not infringe upon fundamental rights, harm the public interest, or cause harm to others;


  2. immediately stop using the system and notify the provider and the central competent authority upon discovering abnormal behavior.


AI systems that interact with individuals, generate synthetic content, or influence human decision-making shall clearly disclose their non-human nature and synthetic origin in a manner easily understandable to the public.
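
As a minimal sketch of this disclosure duty: the act requires that the non-human origin be clearly disclosed but specifies no wording or format, so the notice text in this hypothetical Python example is entirely our own.

    # Hypothetical example: prepend a clearly visible notice to AI-generated content.
    def with_ai_disclosure(content: str, lang: str = "en") -> str:
        notices = {
            "en": "Notice: this content was generated by an AI system.",
            "zh": "注意:本內容由人工智慧系統生成。",
        }
        return notices.get(lang, notices["en"]) + "\n\n" + content

    print(with_ai_disclosure("Here is a draft itinerary for your trip..."))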


The next section of the law shifts focus to personal data protection obligations for AI system providers. This inclusion is somewhat atypical for AI-specific legislation, as most jurisdictions maintain separate, dedicated data protection laws, such as the GDPR in the EU or Taiwan's own Personal Data Protection Act (PDPA), which apply to AI systems as lex generalis or lex specialis. By embedding these obligations directly into the AI law, the Taiwanese legislator signals an intent to ensure seamless regulatory alignment and reinforce accountability at the system design and deployment level, even when existing data protection frameworks are in place. The intersection of this law with Taiwan's personal data protection law is addressed explicitly: "The provisions of the Personal Data Protection Act shall apply mutatis mutandis to the exercise of the above rights."


"The development and use of AI systems shall comply with personal data protection laws and regulations, ensuring that personal data is collected, processed, and used lawfully and fairly. When personal data is used for training AI systems, measures shall be taken to ensure that data is anonymized or de-identified where feasible."


The law further stresses the rights of the data subjects:

"When AI systems involve the processing of personal data, data subjects shall enjoy the following rights:

  1. Right to Know: To be informed about the purpose, type, and method of processing, and the identity of the data controller.

  2. Right to Access: To request access to their personal data processed by the AI system.

  3. Right to Correction: To request correction of inaccurate or incomplete data.

  4. Right to Deletion: To request deletion of their personal data under certain circumstances.

  5. Right to Object or Restrict Processing: To object to or restrict the use of their data in specific cases.

  6. Right Not to Be Subject to Fully Automated Decision-Making: To object to decisions based solely on automated processing, including profiling, which produce legal effects or similarly significant effects on the individual."


Enforcement and fines


Under this law, the designated competent authority is the National Science and Technology Council (NSTC) at the central level, and municipal or county (city) governments at the local level.

The NSTC is responsible for:

  • Establishing detailed enforcement rules

  • Issuing technical standards

  • Developing industry-specific guidelines necessary for implementing the Act

In fulfilling these responsibilities, the NSTC may consult academic, legal, and technical experts, as well as engage relevant stakeholders to ensure a balanced and informed regulatory framework.


Any provider or deployer of AI systems who violates the provisions of this Act may be subject to the following penalties:

  "1. Administrative fines between NT$100,000 and NT$10,000,000, depending on the severity of the violation.

  2. Corrective orders requiring cessation, suspension, or modification of the AI system.

  3. Public disclosure of the name of the violating entity and the nature of the violation.

In cases of repeated or serious violations, the competent authority may prohibit further provision or deployment of the AI system in question."


If you’d like to receive a copy of the Act translated by our team, simply share this article on LinkedIn and tag our founder Ksenia Laputko in your post.


Please note: this article may not be republished without proper reference to the original source.




