When an enterprise's core decisions, from credit reviews to hiring, from product pricing to supply chain management, are increasingly made by algorithms rather than by humans, the fundamental framework of corporate governance faces an existential challenge. Does the board's fiduciary duty extend to oversight of AI systems? Who bears legal liability for harm caused by algorithmic bias? When AI decision-making logic is a black box even to the directors themselves, how should the standard of "due diligence" be redefined? These questions are no longer hypothetical; they are realities that every independent director, every audit committee, and every listed company's board must confront today. As a legal scholar turned business school educator, I have spent my tenure directing the MBA program and executive education at Zhejiang University's International Joint Business School helping corporate decision-makers understand AI's impact on governance. This article combines legal analysis, management practice, and insights from game theory to propose a systematic framework for board governance in the AI era.
I. Three New Categories of Governance Risk Posed by AI
The core function of corporate governance is managing risk — ensuring that while the enterprise pursues shareholder value maximization, it does not suffer catastrophic losses from uncontrolled risk exposure. The large-scale adoption of AI has not changed this core function, but it has fundamentally altered the nature and form of risk. In my research and consulting practice, I classify the new governance risks introduced by AI into three categories.
The first category is "algorithmic bias risk." AI systems learn patterns from training data; if that data reflects historical biases (racial, gender, or geographic discrimination), the AI system will inherit and amplify them. This is not a hypothetical risk: Amazon scrapped an experimental AI recruitment system after it was found to systematically downgrade female applicants; the COMPAS recidivism risk tool used in U.S. courts was shown in a widely cited analysis to produce markedly higher false-positive rates for Black defendants; and several banks' AI credit assessment models have been found to disadvantage specific minority groups. For boards, the governance challenge of algorithmic bias lies in its covertness: the bias is not deliberately coded in by anyone but emerges from the data, so traditional compliance review processes struggle to detect it.[1]
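The board-level question "have our AI systems been tested for bias?" can be made concrete. Below is a minimal sketch of one common first-pass screen, the "four-fifths" (80%) selection-rate heuristic drawn from U.S. employment-discrimination practice; the model decisions and group labels are hypothetical, and a real bias audit would examine many more metrics and subgroups.

```python
# A minimal first-pass bias screen, assuming a hypothetical hiring model
# whose binary decisions (1 = advance, 0 = reject) and applicant group
# labels are available as parallel lists. The 0.8 threshold follows the
# "four-fifths rule" heuristic; passing it is not proof of fairness.

def selection_rate(decisions, groups, group):
    """Share of applicants in `group` that the model advances."""
    subset = [d for d, g in zip(decisions, groups) if g == group]
    return sum(subset) / len(subset)

def four_fifths_check(decisions, groups, group_a, group_b):
    """Flag the model if the lower selection rate falls below 80% of the higher."""
    rate_a = selection_rate(decisions, groups, group_a)
    rate_b = selection_rate(decisions, groups, group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= 0.8

decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
ratio, passed = four_fifths_check(decisions, groups, "m", "f")
print(f"selection-rate ratio = {ratio:.2f}, passes four-fifths screen: {passed}")
```

A screen like this is deliberately crude; its governance value is that it gives the board a concrete, repeatable artifact to ask for, on a schedule, for every high-risk system.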
The second category is "AI accountability risk." When an AI system makes a decision that causes significant losses (a flash crash triggered by an automated trading system, an injury caused by an autonomous driving system, a misdiagnosis by a medical AI), how should the chain of accountability be traced? Is it the enterprise that deployed the AI? The technical team that developed it? Or the board that approved its deployment? The EU's proposed AI Liability Directive attempts to build a legal framework for answering these questions, but from a corporate governance perspective what matters more is ex-ante accountability design rather than ex-post legal recourse. The mechanism design principle Professor Wilson repeatedly emphasized in our conversations applies directly here: good mechanisms give participants incentives to make responsible decisions ex ante, rather than relying solely on ex-post penalties to correct behavior.[2]
The third category is "data governance risk." The capability of AI systems is built on data — data quality determines AI quality, data security determines enterprise security, and data compliance determines the enterprise's legal risk. The GDPR, China's Personal Information Protection Law, and data protection legislation in an increasing number of jurisdictions mean that enterprise data governance is not merely a technical issue but a legal obligation and governance responsibility. However, I have observed a widespread phenomenon in executive education: most boards lack basic visibility into their enterprise's data assets — how much data exists, where it is, how it is being used, and whether it is compliant. When the board cannot "see" data, it cannot "govern" data, and in the AI era this constitutes a serious governance blind spot.[3]
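What it would take for a board to "see" data can be sketched as a register entry per data asset. The fields below are illustrative assumptions, not a standard; the point is that each one answers a board-level visibility question: what data exists, how much, where it sits, which AI systems consume it, and on what legal basis.

```python
# A minimal sketch of a data-asset register entry. Field names are
# hypothetical; each answers one board-level visibility question.

from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                     # what data exists
    location: str                 # where it is held (system, jurisdiction)
    record_count: int             # how much of it there is
    contains_personal_data: bool  # does GDPR / PIPL-style law apply
    used_by_ai_systems: list = field(default_factory=list)  # how it is used
    legal_basis: str = "unknown"  # e.g., consent, contract
    last_compliance_review: str = "never"

asset = DataAsset(
    name="customer credit histories",
    location="core banking system, EU region",
    record_count=2_400_000,
    contains_personal_data=True,
    used_by_ai_systems=["credit-scoring model v3"],
    legal_basis="contract",
    last_compliance_review="2024-Q4",
)
print(asset)
```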
What these three risk categories have in common is their technical character: identifying, assessing, and managing them requires board members to possess some degree of technical understanding. The reality, however, is that most boards are still assembled primarily around financial, legal, and industry experience, leaving technical expertise in severely short supply. This capability gap is the most fundamental challenge facing board governance in the AI era.
II. Redefining Fiduciary Duty: From Financial Oversight to Algorithmic Oversight
The central concept of corporate governance law is "fiduciary duty" — directors, as fiduciaries of shareholders, owe a duty of loyalty and a duty of care. In the AI era, the scope of both duties requires fundamental expansion.
Expansion of the duty of care. Traditionally, directors' duty of care requires them to "reasonably" gather information, assess risks, and seek professional advice when making decisions. In the AI context, this means directors have an obligation to understand the basic operating logic, potential risks, and compliance requirements of the enterprise's AI systems. This does not require directors to become AI engineers, but it does require them to possess sufficient "AI literacy" to ask the right questions: Have our AI systems been tested for bias? Does the explainability of AI decisions meet regulatory requirements? Is the data governance framework in place? The financial regulatory law framework I studied during my doctoral research at Nagoya University provides a useful analogy — directors do not need to understand the pricing formula of every derivative instrument, but they need to understand the enterprise's risk exposure structure. The same logic applies to AI: directors do not need to understand the mathematics of deep learning, but they need to understand the legal, reputational, and financial impacts that AI decisions may have on the enterprise.[4]
New dimensions of the duty of loyalty. The duty of loyalty requires directors to place the interests of shareholders (and in some jurisdictions, stakeholders) above personal interests. In the AI context, this duty acquires new dimensions: directors should not block reasonable AI investment by the enterprise out of ignorance or fear of AI (which constitutes passive dereliction of duty), nor should they ignore AI risks due to excessive optimism about AI (which constitutes active dereliction of duty). A core observation that Professor Aumann raised when we discussed incentive design is particularly relevant here: in environments of extremely high uncertainty, the greatest danger is not making wrong decisions, but failing to establish decision-making procedures for handling uncertainty at all. For boards, the primary obligation of AI governance is not to "get every AI decision right," but to "establish the right AI decision-making procedures."
The technicalization of the oversight duty. The Caremark standard from Delaware established directors' oversight duty regarding corporate compliance systems — directors have an obligation to establish and maintain reasonable information reporting systems to detect illegality or material risks. In the AI era, this oversight duty logically extends to AI system governance: the board needs to ensure that the enterprise has established mechanisms for identifying, monitoring, and reporting AI risks. Specifically, this means the board should require management to regularly report on the operational status of AI systems, bias testing results, compliance audit findings, and AI-related customer complaints and legal risks.[5]
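What such a reporting mechanism might deliver can be sketched as a simple schema. The fields and the escalation rule below are illustrative assumptions rather than any legal standard; they show how a Caremark-style information system can be made operational.

```python
# A minimal sketch of a quarterly AI oversight report for the board.
# Field names and the escalation rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class AIOversightReport:
    quarter: str
    systems_in_production: int
    high_risk_systems: int        # e.g., systems touching credit, hiring, pricing
    bias_tests_failed: int
    open_compliance_findings: int
    ai_related_complaints: int
    material_incidents: list = field(default_factory=list)

    def requires_board_escalation(self) -> bool:
        """Any failed bias test, open finding, or material incident should
        reach the full board rather than stop with management."""
        return bool(self.bias_tests_failed
                    or self.open_compliance_findings
                    or self.material_incidents)

report = AIOversightReport(quarter="2025-Q1", systems_in_production=14,
                           high_risk_systems=3, bias_tests_failed=1,
                           open_compliance_findings=0, ai_related_complaints=27)
print(report.requires_board_escalation())  # True: one failed bias test
```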
III. Rebuilding Board Capabilities: Five New Core Competencies
The expansion of fiduciary duty means that boards need a new combination of capabilities. Based on my interactions with hundreds of corporate decision-makers in executive education, and inspired by institutional design insights from game theory, I propose five core competencies that boards need to build in the AI era.
The first competency: "AI questioning competency." The value of the board lies not in developing AI itself, but in asking management the right questions. In the context of AI governance, these questions include: In which scenarios do our AI systems make decisions that affect customer rights? Does the explainability of these decisions meet current and foreseeable regulatory requirements? Do we have independent mechanisms for auditing the fairness of AI systems? If our AI system causes significant harm tomorrow, are our legal and insurance preparations adequate? A director who does not understand AI but knows how to ask questions is more valuable to the enterprise than a director who understands AI but does not ask questions.[6]
The second competency: "risk imagination." An important characteristic of AI risk is its "nonlinearity" — an AI system that performs perfectly under normal conditions may produce catastrophic failure under extreme conditions. The 2010 Flash Crash was a systemic event triggered by the interaction of high-frequency trading algorithms under specific market conditions — virtually no one foresaw it. Boards need to cultivate the ability to "envision worst-case scenarios" — not pessimism, but systematic thinking about the ways AI might go wrong. A viewpoint Professor Gostin shared when we discussed global health law is highly applicable here: the lesson from pandemics is that the most severe risks often come from events "we knew could happen but chose not to prepare for." AI risk management is similar — the board's responsibility is not to predict every risk, but to ensure the enterprise has the resilience to cope with unforeseen risks.[7]
The third competency: "incentive design competency." Professor Aumann told me that the most important lens for understanding the world is "incentives." This insight has extremely concrete applications in AI governance. What kind of incentive structure does the enterprise's AI team face? If performance evaluations measure only AI model prediction accuracy without fairness metrics, engineers will naturally prioritize accuracy over bias concerns. If AI project deployment timelines are determined by commercial pressure rather than safety testing completion, hasty deployment is the rational response. The board's responsibility is to design the right incentive structure — incorporating AI safety, fairness, and compliance into the frameworks of performance evaluation, compensation design, and resource allocation. As Professor Aumann put it: do not expect employees to spontaneously do the right thing — design a system where doing the right thing is the most advantageous choice.[8]
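The arithmetic behind this incentive argument is simple enough to sketch. Assuming hypothetical accuracy and fairness scores for two candidate models, the evaluation weights the board approves determine which model a rational engineer ships:

```python
# A toy comparison of evaluation weights, assuming hypothetical accuracy
# and fairness scores on a 0-1 scale. A zero fairness weight makes
# shipping the biased model the rational choice.

def performance_score(accuracy: float, fairness: float,
                      w_accuracy: float, w_fairness: float) -> float:
    """Composite score the engineer is evaluated on."""
    return w_accuracy * accuracy + w_fairness * fairness

model_a = {"accuracy": 0.92, "fairness": 0.55}  # accurate but biased
model_b = {"accuracy": 0.89, "fairness": 0.90}  # slightly less accurate, far fairer

for w_acc, w_fair in [(1.0, 0.0), (0.7, 0.3)]:
    score_a = performance_score(model_a["accuracy"], model_a["fairness"], w_acc, w_fair)
    score_b = performance_score(model_b["accuracy"], model_b["fairness"], w_acc, w_fair)
    winner = "A (biased)" if score_a > score_b else "B (fairer)"
    print(f"weights acc={w_acc}, fairness={w_fair}: rational choice is model {winner}")
```

Under accuracy-only weights, model A wins; once fairness carries even 30% of the evaluation, model B does. The board does not need to review the models, only the weights.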
The fourth competency: "cross-domain integration." The complexity of AI governance lies in the fact that it spans technology, law, ethics, and business — no director with a single professional background can handle it alone. What boards need is the ability to integrate across domains — enabling effective dialogue among technology experts, legal experts, ethics experts, and business experts. In my experience directing executive education programs at Zhejiang University's International Joint Business School, the most valuable learning often occurs not in deep discussions within a single discipline, but when decision-makers from different backgrounds begin to understand each other's language and logic. Boards need to establish similar cross-domain dialogue mechanisms.
The fifth competency: "long-term perspective." The impact of AI has a significant temporal dimension — in the short term it may bring efficiency gains and cost savings, but the long-term legal, social, and reputational risks may far exceed short-term benefits. When we discussed auction design, Professor Wilson emphasized: the core of mechanism design is ensuring that short-term rational behavior does not lead to long-term systemic failure. Boards need the same long-term perspective when evaluating AI investments — calculating not only the ROI of AI deployment, but also assessing the "tail risk" distribution of AI risks. A single algorithmic bias scandal can destroy decades of brand trust.
IV. Redesigning Governance Mechanisms: From Committees to AI Governance Frameworks
Capability rebuilding requires the support of mechanism restructuring. I propose four mechanism innovations for board governance in the AI era.
First, establish an AI governance committee or incorporate AI issues into existing committee mandates. For large enterprises, establishing a dedicated "AI and Technology Governance Committee" is the most direct institutional response — chaired by an independent director with a technology background, regularly reviewing the enterprise's AI strategy, risk management, and compliance status. For small and medium-sized enterprises, a more pragmatic approach is to expand the mandate of the audit committee or risk management committee to include AI governance in its regular agenda. The key point is: AI governance cannot be treated as "the technology department's business" — it must enter the board-level agenda.[9]
Second, establish an "Algorithmic Impact Assessment" system. By analogy with Environmental Impact Assessment (EIA), enterprises should conduct systematic impact assessments before deploying high-risk AI systems — covering bias testing, privacy impact analysis, security verification, and stakeholder impact assessment. Assessment results should be reported to the board and serve as a mandatory prerequisite for deployment decisions. The design inspiration for this "ex-ante audit" mechanism comes from Professor Wilson's mechanism design theory — through procedural requirements, organizations are compelled to confront the potential impacts of AI ex ante rather than ex post.
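The mechanism can be sketched as a hard gate in the deployment pipeline. The assessment fields below are illustrative assumptions (real AIA templates, such as Canada's federal Algorithmic Impact Assessment, are considerably more detailed); what matters is that approval is procedurally impossible until every step is complete and the board has seen the results.

```python
# A minimal sketch of an algorithmic impact assessment used as a
# mandatory deployment gate. Fields are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    system_name: str
    bias_testing_passed: bool
    privacy_impact_passed: bool
    security_verification_passed: bool
    stakeholder_assessment_done: bool
    reported_to_board: bool

def deployment_approved(a: ImpactAssessment) -> bool:
    """The procedural requirement does the governance work: no complete
    assessment, no deployment, regardless of commercial pressure."""
    return all([a.bias_testing_passed, a.privacy_impact_passed,
                a.security_verification_passed, a.stakeholder_assessment_done,
                a.reported_to_board])

assessment = ImpactAssessment("credit-scoring model v3",
                              bias_testing_passed=True,
                              privacy_impact_passed=True,
                              security_verification_passed=True,
                              stakeholder_assessment_done=False,  # still pending
                              reported_to_board=False)
print(deployment_approved(assessment))  # False: two steps outstanding
```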
Third, introduce a "Chief AI Ethics Officer" or an equivalent functional role. Just as enterprises have established CFOs to ensure financial governance and CLOs to ensure legal compliance, enterprises in the AI era need a senior management role to coordinate AI governance. The core responsibilities of this role include: establishing AI ethics guidelines, overseeing bias testing and compliance auditing of AI systems, handling stakeholder grievances related to AI, and reporting AI governance status to the board. This is not about adding bureaucratic layers, but about ensuring that AI governance has clear accountability.
Fourth, restructure the board's information flow. Under traditional governance frameworks, the board's information primarily comes from financial statements, management reports, and external audits. In the AI era, boards need new information channels: regular performance and risk reports on AI systems, independent third-party algorithmic audit results, summaries of AI-related regulatory developments, and feedback from AI stakeholders (such as customers and employees affected by AI decisions). As Professor Aumann repeatedly emphasized, a basic lesson of game theory is that decision quality depends on information quality: if the board cannot obtain high-quality information about AI, it cannot govern AI effectively.[10]
V. Toward Responsible AI Governance: An Action Checklist for Directors
To translate this article's analysis into specific actions for individual directors, I offer the following five recommendations.
First, invest in personal AI literacy. Every director should invest time in understanding the basic concepts of AI — not learning to code, but understanding what machine learning is, what training data bias is, what large language models are, and what hallucination is. Many top business schools and professional organizations have already launched AI literacy programs for directors — this is not optional continuing education, but a requirement of the duty of care. In the executive education programs I have directed, the most common breakthrough moment occurs when corporate decision-makers first understand that "AI is not magic, but statistics" — this basic cognitive shift is enough for them to start asking the right questions.
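"AI is not magic, but statistics" can itself be demonstrated in a few lines. In the minimal sketch below, with entirely made-up weights, a credit model's "decision" is nothing more than a weighted sum of applicant features pushed through a squashing function, which is precisely why biased historical data yields biased weights.

```python
# A minimal sketch of the statistical core of a credit model: a logistic
# regression prediction. All feature values and weights are made up.

import math

def predict_default_probability(features, weights, bias):
    """sigmoid(w . x + b): a weighted sum squashed into a probability."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [debt-to-income ratio, late payments, years employed]
applicant = [0.6, 2.0, 1.0]
weights   = [2.5, 0.8, -0.4]  # in a real system, learned from historical data
print(f"predicted default probability: "
      f"{predict_default_probability(applicant, weights, bias=-2.0):.2f}")
```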
Second, promote diversity in board composition. AI governance requires a technology perspective — but the proportion of members with deep technology backgrounds on the boards of publicly listed companies globally is extremely low. Board nomination committees should include "technology governance capability" as one of the core criteria for director selection. This does not mean every board needs an AI scientist — but it does require at least one or two members who can understand the language of technology risk and engage effectively with technology teams.[11]
Third, require management to establish a formal AI governance framework. The board should explicitly require management to propose an enterprise AI governance policy — covering the scope of AI use, risk classification standards, bias testing procedures, data governance standards, and incident response mechanisms. This framework does not need to be finalized overnight, but it needs a clear timeline and progress tracking. AI governance without a formal framework is like financial governance without accounting standards — it is governance in name only.
Fourth, integrate AI risk into the Enterprise Risk Management (ERM) framework. AI risk should not be treated as a standalone category; it should be integrated into the enterprise's existing risk management framework alongside market risk, credit risk, and operational risk. This means AI risk needs quantitative assessment methods, a clearly defined risk appetite, and regular stress testing. The "price discovery" principle Professor Wilson emphasizes in mechanism design applies here by analogy: only when AI risk is incorporated into a formal risk management framework can the organization accurately perceive its true cost; otherwise AI risk will keep being underestimated until a crisis erupts.
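A minimal sketch of that integration, using a conventional likelihood-times-impact scoring scheme on 1-to-5 scales: the risk names, scores, appetite threshold, and the crude "likelihood doubles under adverse conditions" stress multiplier are all illustrative assumptions.

```python
# A toy ERM-style scoring of AI risks: likelihood x impact on 1-5 scales,
# with an explicit risk appetite and a crude stress test. All numbers
# are illustrative assumptions.

AI_RISKS = {
    "credit-model bias":     {"likelihood": 2, "impact": 5},
    "chatbot hallucination": {"likelihood": 4, "impact": 2},
    "training-data privacy": {"likelihood": 3, "impact": 4},
}

RISK_APPETITE = 12  # maximum tolerated likelihood x impact score

def risk_score(risk, stress_multiplier=1.0):
    """Cap stressed likelihood at the top of the 1-5 scale."""
    likelihood = min(risk["likelihood"] * stress_multiplier, 5)
    return likelihood * risk["impact"]

for name, risk in AI_RISKS.items():
    base = risk_score(risk)
    stressed = risk_score(risk, stress_multiplier=2.0)  # likelihood doubles
    flag = "BREACHES appetite" if stressed > RISK_APPETITE else "within appetite"
    print(f"{name}: base={base:.0f}, stressed={stressed:.0f} -> {flag}")
```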
Fifth, include AI governance metrics in ESG reporting. As ESG (Environmental, Social, and Governance) reporting becomes institutionalized, AI governance should rightfully become a core component of the "G" (Governance) dimension. Enterprises should disclose in their ESG reports the scope and risks of their AI use, bias testing results, the implementation status of AI ethics policies, and AI-related stakeholder grievances and their resolution. This is not only a responsibility to investors, but also an external pressure mechanism to drive continuous improvement in AI governance within the enterprise.[12]
Taken as a whole, the corporate governance challenge of the AI era is essentially a race between governance capability and technological change. Technology will not wait for governance to catch up: AI is permeating every decision node in the enterprise at an accelerating rate. Boards that do not proactively build their AI governance capabilities will devolve from overseers of the enterprise into bystanders of technological change, and that is not merely dereliction of duty but a betrayal of shareholders, employees, customers, and society. The most profound insight Professor Aumann taught me is this: good institutions do not depend on virtue; they use carefully designed incentive structures to guide rational actors toward the right choices. The same holds for board governance in the AI era. We cannot expect every director to spontaneously become an AI expert, but we can and must design governance frameworks that give boards the motivation, the capability, and the mechanisms to oversee AI effectively.
References
- Obermeyer, Z. et al. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), 447–453. doi:10.1126/science.aax2342
- European Commission. (2022). Proposal for a Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive). COM(2022) 496 final.
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs.
- Eisenberg, M. A. (1990). The Duty of Care of Corporate Directors and Officers. University of Pittsburgh Law Review, 51, 945–972.
- In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).
- Fenwick, M. & Vermeulen, E. P. M. (2019). Technology and Corporate Governance: Blockchain, Crypto, and Artificial Intelligence. Texas Journal of Business Law, 48(1), 1–22.
- Gostin, L. O. (2014). Global Health Law. Harvard University Press.
- Aumann, R. J. (2005). War and Peace. Nobel Prize Lecture.
- OECD. (2023). G20/OECD Principles of Corporate Governance. OECD Publishing.
- Wilson, R. B. (2002). Architecture of Power Markets. Econometrica, 70(4), 1299–1340.
- World Economic Forum. (2024). Artificial Intelligence Governance Alliance: Presidio AI Framework.
- European Commission. (2023). European Sustainability Reporting Standards (ESRS). Delegated Regulation (EU) 2023/2772.