The passage of the Artificial Intelligence Basic Act in 2025 marks Taiwan's formal entry into the institutional construction phase of AI governance. This is a milestone worth affirming, but passage is merely the starting point, not the destination. The Basic Act is fundamentally a principles-based framework; while its seven guiding principles establish core values such as transparency, accountability, and humanism, without supporting subsidiary regulations, inter-ministry coordination mechanisms, and enforcement capacity, even the most refined legislative language will amount to nothing more than a declaratory document. Looking internationally, the EU AI Act staggers the application of its provisions over a period of roughly 6 to 36 months after entry into force, and is backed by a dedicated European AI Office. U.S. AI governance, meanwhile, has advanced through a parallel approach of executive orders and federal agency accountability guidelines. How Taiwan can translate the spirit of the Basic Act into an executable institutional framework with limited administrative resources is the core challenge that policy elites must now confront head-on.

I. Legislative Design Logic: The Structural Implications of the Seven Principles

The seven guiding principles established by the Artificial Intelligence Basic Act -- human-centricity, transparency and explainability, privacy and data protection, safety and reliability, fairness and non-discrimination, sustainable development, and innovation promotion -- are not randomly arranged, but reflect an internally consistent governance philosophy. A close reading of the Legislative Yuan deliberation process reveals that legislators deliberately sought a balance between "promoting innovation" and "risk management": of the seven principles, the first six are all constraining principles, with only the final one being an enabling principle. This ordering implicitly conveys a legislative intent of "responsibility before development."[1]

However, the arrangement of principles cannot automatically resolve the challenges of implementation. The algorithmic explainability requirements embedded in the "transparency and explainability" principle constitute a dual technical and legal challenge that remains unresolved in today's era of ubiquitous deep learning models. The "fairness and non-discrimination" principle requires that AI system outputs must not produce systematically adverse effects on specific groups, but what is the legal definition of "unfairness"? Who makes that determination? What testing methods are used? These questions are absent from the Basic Act and must be addressed by subsequent subsidiary regulations. The value of the Basic Act lies more in establishing Taiwan's AI governance "constitutional conventions" -- providing an interpretive framework for future legislation and administrative interpretation -- than in directly generating operationally binding norms with legal force.[2]

II. The Institutional Dilemma of Inter-Ministry Coordination

AI governance is inherently cross-domain, yet Taiwan's administrative system is vertically compartmentalized -- this structural contradiction is the greatest obstacle to implementing the AI Basic Act. The Ministry of Digital Affairs is responsible for overall digital policy coordination, the National Science and Technology Council controls R&D resources, the Ministry of Health and Welfare is concerned with medical AI safety standards, the Financial Supervisory Commission oversees financial AI risks, the Ministry of Labor faces the social impact of AI-driven job displacement, and the Ministry of Education contemplates talent cultivation strategies for the AI era. Six ministries, six sets of priorities, six bureaucratic cultures.[3]

The deeper issue is the ambiguity surrounding the "lead ministry." The Basic Act does not explicitly designate which ministry serves as the lead agency for AI governance, referring only generically to the "central competent authority" and authorizing the Executive Yuan to specify further. While this design preserves flexibility, it also plants the seeds for inter-ministry buck-passing. Consider Japan's experience: in 2023, Japan established the AI Strategy Council within the Cabinet Office, directly under the Prime Minister's supervision, as the highest-level coordination platform across ministries and agencies, tasking the Cabinet Office with coordinating AI-related policies government-wide. Without a similar top-level coordination architecture, Taiwan's AI governance risks descending into a situation of "multiple drivers, each going their own way."

III. International Comparison: Governance Model Insights from the EU, the U.S., and Singapore

Three international models each offer lessons worth Taiwan's consideration. The EU AI Act employs a "risk-tiered regulation" framework, classifying AI applications into four risk levels: unacceptable risk (complete prohibition), high risk (strict regulation), limited risk (transparency obligations), and minimal risk (voluntary codes of conduct). The advantage of this framework is regulatory resource focus -- high-risk domains such as biometric identification, critical infrastructure, and credit scoring are subject to the strictest pre-market compliance reviews; low-risk applications face virtually no regulation, preserving space for innovation. The disadvantage is that determining classification boundaries itself requires extremely high levels of technical and legal expertise, and compliance costs for small and medium enterprises may create market entry barriers.[4]

The Biden administration's 2023 Executive Order on Safe, Secure, and Trustworthy AI (Executive Order 14110) took a markedly different approach: rather than new legislation, it used an executive order to require each federal agency to develop its own AI usage guidelines, building on the voluntary AI Risk Management Framework (AI RMF 1.0) that the Department of Commerce's NIST had published in January 2023. The advantage of this "decentralized" model is speed and flexibility; the disadvantage is the lack of unified legal force and oversight mechanisms. Singapore, meanwhile, has taken a pragmatic "soft law" approach -- using its Model AI Governance Framework as voluntary guidelines for enterprises to follow, emphasizing accountability and explainability, and complementing these with regulatory sandbox mechanisms to encourage innovation.[5]

Taiwan's optimal solution may be a hybrid model combining "EU framework + U.S. flexibility + Singaporean pragmatism": using risk tiering as the foundational architecture, administrative guidelines as rapid response tools, and regulatory sandboxes as testing grounds for frontier technologies. This requires not only legislative wisdom but also a massive enhancement of administrative capacity.
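To make the risk-tiering idea concrete, the sketch below shows one way subsidiary regulations could be mirrored in regulator tooling: application categories map to the four EU-style tiers, each tier carrying its own obligations. The category list, obligation wording, and the conservative default for unlisted categories are all illustrative assumptions, not provisions of the Basic Act or of any draft regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # complete prohibition
    HIGH = "high"                  # strict pre-market regulation
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Illustrative category-to-tier map; an actual "risk map" would be
# defined by subsidiary regulation, not hard-coded.
RISK_MAP = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the market"],
    RiskTier.HIGH: ["pre-market conformity review",
                    "third-party audit",
                    "incident reporting"],
    RiskTier.LIMITED: ["disclose AI involvement to users"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}

def classify(category: str) -> tuple[RiskTier, list[str]]:
    """Return the risk tier and obligations for an application category.

    Unknown categories default to LIMITED pending regulator review --
    an assumed, conservative default, not a statutory rule.
    """
    tier = RISK_MAP.get(category, RiskTier.LIMITED)
    return tier, OBLIGATIONS[tier]
```

The design point is that the tier, not the individual application, is the unit of regulation: adding a new use case to the risk map automatically attaches the full obligation set for its tier, which is what lets a small regulator keep pace with a fast-moving application landscape.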

IV. Inter-Ministry Implementation Blueprint: A Pathway from the Basic Act to Executable Institutions

Translating the spirit of the Basic Act into an executable institutional framework requires completing the following key actions within 18 months of legislative passage:

  1. Establish a cross-ministry AI Governance Committee -- directly under the Premier of the Executive Yuan, with the Minister of Digital Affairs concurrently serving as Executive Secretary, to build a top-level architecture with statutory coordination authority, preventing ministry parochialism from undermining horizontal integration.
  2. Formulate AI risk-tiering subsidiary regulations -- jointly developed by the National Science and Technology Council and the Ministry of Digital Affairs, clearly defining the standards for high-risk AI applications, compliance requirements, and competent authorities, referencing the EU AI Act's Annex III list of high-risk use cases to establish Taiwan's own risk map.
  3. Build an algorithmic transparency auditing mechanism -- requiring AI decision-making systems used by government agencies (such as social welfare eligibility screening and criminal risk assessment) to undergo regular third-party audits with publicly disclosed audit summary reports, implementing the governance principle that "public sector AI transparency obligations precede those of the private sector."
  4. Expand the AI governance talent pipeline -- revising the civil service position classification system through the Examination Yuan to add an "AI Governance and Policy Analysis" job category, and establishing AI regulatory capacity training programs at the National Academy of Civil Service, equipping each ministry with foundational technical review capabilities.
  5. Promote bilateral mutual recognition agreements for AI governance -- leveraging Taiwan's semiconductor and AI industry strengths as bargaining chips, pursuing bilateral mutual recognition of AI safety and trust standards with democratic partners such as Japan, the EU, and the UK, transforming AI governance capabilities into diplomatic assets.
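The auditing mechanism in item 3 turns on what a "publicly disclosed audit summary" actually contains. The sketch below is one hypothetical schema for such a summary as a machine-readable record; every field name, the example metric, and the publish-summary-only convention are assumptions for illustration, not requirements from the Basic Act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AuditSummary:
    """Hypothetical public summary of a third-party algorithmic audit
    of a government AI decision-making system."""
    system_name: str     # e.g. a social welfare eligibility screener
    agency: str          # the ministry operating the system
    audit_period: str    # reporting window covered by the audit
    auditor: str         # the independent third party
    findings: list[str] = field(default_factory=list)
    fairness_metrics: dict[str, float] = field(default_factory=dict)

    def to_public_json(self) -> str:
        # Only this summary is published; the full audit report may
        # remain internal to the agency and the auditor.
        return json.dumps(asdict(self), ensure_ascii=False, indent=2)

# Example record (all values invented for illustration)
summary = AuditSummary(
    system_name="welfare-eligibility-screener",
    agency="Ministry of Health and Welfare",
    audit_period="2026-H1",
    auditor="Independent Audit Lab",
    findings=["no systematic disparate impact detected"],
    fairness_metrics={"demographic_parity_gap": 0.03},
)
```

Publishing a fixed, structured summary rather than free-form reports would let journalists, academics, and the Control Yuan compare systems across agencies, which is the practical substance of the "public sector transparency first" principle.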

References

  1. Legislative Yuan. (2025). Artificial Intelligence Basic Act. Legislative Yuan Gazette, Vol. 114, No. 1. ly.gov.tw
  2. Kaminski, M. E. (2019). Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability. Southern California Law Review, 92(6), 1529-1616.
  3. Executive Yuan. (2023). Taiwan AI Action Plan 2.0. National Development Council. ndc.gov.tw
  4. European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
  5. The White House. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Executive Order 14110.
  6. Personal Data Protection Commission, Singapore. (2023). Model Artificial Intelligence Governance Framework (3rd ed.). pdpc.gov.sg
  7. NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. nist.gov
  8. Dafoe, A. (2018). AI Governance: A Research Agenda. Future of Humanity Institute, University of Oxford.