In January 2025, a study published in Nature Medicine sent shockwaves through the global medical community: AMIE (Articulate Medical Intelligence Explorer), an AI system jointly developed by Google Health and multinational medical centers, achieved diagnostic accuracy and information-gathering capabilities in simulated clinical conversations that, for the first time, comprehensively surpassed those of specialty-trained human physicians.[1] This was not the first milestone for AI in healthcare, but it symbolized a critical turning point — AI is no longer merely a tool that assists physicians in reading medical images; it is now entering the core of medical decision-making: patient interviews, differential diagnosis, and treatment recommendations.

Yet the pace of technological breakthroughs far outstrips the construction of governance frameworks. When an AI system's recommended diagnosis conflicts with a physician's professional judgment, whose assessment should prevail? When an algorithm systematically underestimates disease risk for certain populations due to biases in its training data, who should bear responsibility? When the "black box" nature of medical AI prevents patients from understanding why they received a particular treatment, can the legal foundation of informed consent still hold? These are not distant hypotheticals — they are playing out in hospitals, courtrooms, and regulatory agencies around the world.

From my prior experience conducting technology governance research at the University of Cambridge to my current role leading Meta Intelligence in deploying AI systems for enterprises, I have come to deeply appreciate that the governance of AI in healthcare cannot wait until the technology matures — governance frameworks must evolve in tandem with the technology, and even guide the direction of its development.
I. The Current State and Breakthroughs of AI in Healthcare: From Image Interpretation to Drug Discovery
To understand the complexity of AI healthcare governance, one must first recognize that AI applications in healthcare already extend far beyond what most people imagine.
A revolution in diagnostic imaging. Medical imaging is the most mature application of AI in healthcare. By the end of 2025, the U.S. FDA had approved over 950 AI/ML-enabled medical devices, approximately 70 percent of which were related to medical imaging.[2] In radiology, AI systems can already detect early breast cancer in mammograms with sensitivity and specificity exceeding those of individual radiologists.[3] In ophthalmology, an AI system developed jointly by Google DeepMind and Moorfields Eye Hospital can detect over 50 ophthalmic conditions from retinal OCT scans with accuracy comparable to that of top ophthalmic specialists.[4] In pathology, AI-assisted digital pathology systems are transforming cancer staging — by analyzing whole slide images, AI can identify microscopic tissue features imperceptible to the human eye, providing a more precise basis for treatment planning.[5]
Accelerating drug discovery. AI is fundamentally transforming the paradigm of drug development. The traditional drug development pipeline takes an average of 10 to 15 years, costs over $2 billion, and has a success rate of less than 10 percent. AI is delivering breakthroughs across multiple stages: in the target identification phase, deep learning models can identify new drug targets from massive genomic and proteomic datasets; in the molecular design phase, generative AI can design entirely novel molecular structures with specified pharmacological properties; and in the clinical trial design phase, AI can optimize patient recruitment strategies, predict trial outcomes, and identify patient subgroups most likely to benefit.[6] In 2023, the anti-fibrotic drug INS018_055, discovered and designed from scratch by Insilico Medicine using AI, entered Phase II clinical trials — taking only approximately 30 months from target discovery to clinical trial, shattering the traditional drug development timeline.[7]
Deepening precision medicine. The convergence of AI and genomics is driving healthcare from a "one-size-fits-all" approach to "tailor-made" treatment. Multi-omics data integration (genomics, transcriptomics, proteomics, metabolomics) enables AI systems to construct granular disease models for individual patients, predicting their response to specific therapies.[8] In oncology, AI-driven liquid biopsy analysis can detect trace amounts of tumor DNA fragments in the blood, enabling early cancer detection and real-time monitoring of treatment response. In rare diseases, AI is shortening the agonizing wait for diagnosis — on average, rare disease patients wait approximately 5 to 7 years from symptom onset to definitive diagnosis, and AI-assisted phenotype analysis systems can dramatically reduce this timeframe.[9]
II. Comparing Global Regulatory Frameworks: FDA, EMA, and TFDA
The rapid advancement of AI in healthcare poses a fundamental challenge to traditional medical device regulatory frameworks. Traditional medical devices — whether cardiac stents or X-ray machines — have fixed features and functionality at the time of market authorization. But AI medical devices, particularly machine learning-based Software as a Medical Device (SaMD), possess a fundamentally different characteristic: they can continuously learn and evolve. An AI diagnostic system that performs adequately at market launch may, after retraining on large volumes of new data, exhibit significantly altered behavioral characteristics — it may improve, or it may develop new biases.[10]
The FDA's adaptive regulation. The FDA leads globally in AI medical device regulation. The AI/ML-Based Software as a Medical Device Action Plan, published in 2021, proposed five core strategies, the most innovative of which is the Predetermined Change Control Plan (PCCP).[11] PCCP allows AI medical device manufacturers to submit, alongside their market authorization applications, a "predetermined change plan" — pre-describing the types of modifications the AI system may undergo in the future (such as the scope of retraining data, parameter boundaries for algorithm adjustments), along with validation methods to ensure these modifications do not compromise safety and effectiveness. Once a PCCP is approved, manufacturers can update their AI systems within the approved scope without resubmitting for market authorization for each change. This embodies a regulatory philosophy of "controlled flexibility" — granting AI systems room for continuous improvement within pre-established safety boundaries. By 2025, the FDA had approved dozens of AI medical devices containing PCCPs.[12] Critics point out, however, that PCCP effectiveness is highly dependent on manufacturers' self-monitoring integrity — if a manufacturer's monitoring of AI system behavioral changes is insufficiently rigorous, "controlled flexibility" could become a "regulatory vacuum."
The EU's full lifecycle regulation under MDR/IVDR. The EU's medical device regulatory framework — the Medical Devices Regulation (MDR, 2017/745) and the In Vitro Diagnostic Medical Devices Regulation (IVDR, 2017/746) — takes a more stringent stance toward AI medical devices.[13] The MDR explicitly includes AI software within the scope of medical devices and requires it to undergo conformity assessment procedures comparable to those for hardware medical devices. For high-risk AI diagnostic systems (such as Class IIb or Class III), independent review by a Notified Body is required. More importantly, the MDR mandates that medical device manufacturers establish comprehensive Post-Market Surveillance (PMS) systems and Post-Market Clinical Follow-up (PMCF), continuously monitoring the performance of AI systems in real-world clinical settings.[14] Unlike the FDA's PCCP, the EU currently takes a more conservative approach to "continuous learning" in AI medical devices — any significant change that could affect device safety or performance, in principle, requires a new conformity assessment. While this stance reinforces safety assurances, it also raises concerns: overly strict change management may slow the pace of AI system improvement, preventing European patients from benefiting promptly from the latest algorithmic enhancements. Furthermore, the EU AI Act classifies healthcare AI as a "high-risk AI system," layering additional compliance requirements — including human oversight, transparency, data governance, and risk management — which makes the regulatory burden for launching AI medical devices in the EU market significantly higher than in the United States.[15]
Taiwan TFDA's evolving framework. The regulation of AI medical devices by Taiwan's Food and Drug Administration (TFDA) under the Ministry of Health and Welfare is rapidly evolving. TFDA references the International Medical Device Regulators Forum (IMDRF) SaMD risk classification framework, categorizing AI medical software based on the degree of clinical impact (from information provision to driving treatment decisions) and the severity of the health condition (from non-serious to life-threatening).[16] In 2024, TFDA published the Review Guidelines for Medical Device Software Using Artificial Intelligence/Machine Learning Technology, specifying registration requirements for AI medical devices, including algorithm descriptions, training data documentation, and clinical validation data.[17] However, Taiwan's regulatory framework still needs strengthening in several key areas. First, for adaptive algorithms — AI systems that continue to learn and update after deployment — TFDA has not yet established a clear regulatory pathway akin to the FDA's PCCP. Second, requirements for localized clinical validation need reinforcement — most approved AI medical devices are primarily trained and validated on Western populations, and whether their performance is equivalent in Taiwan's population requires more rigorous local validation mechanisms.
III. The Legal Status and Liability Attribution of Clinical Decision Support Systems
The clinical applications of AI take various forms, but from a legal liability perspective, the most critical distinction is whether the AI "provides information" or "recommends decisions." This distinction directly determines the regulatory classification and liability attribution of the AI system.
The spectrum of Clinical Decision Support Systems (CDSS). The U.S. FDA established an important distinction framework in the 21st Century Cures Act: if software merely provides clinical information for independent review by healthcare professionals and is not intended to replace physician professional judgment, it may be exempt from medical device regulation.[18] However, if the software directly provides diagnostic or treatment recommendations and physicians in practice heavily rely on these recommendations (even if they nominally retain final decision-making authority), it constitutes a medical device requiring regulation. The problem is that this boundary is extremely blurred in practice. An AI diagnostic system labeled "for reference only," if it achieves extremely high accuracy in actual clinical settings (for example, 99.5 percent), will naturally engender "automation trust" in physicians who use it — they will gradually reduce independent judgment and instead rely on the AI's output.[19] When the system errs in 0.5 percent of cases, and the physician has not verified the result due to over-reliance, who should bear the responsibility?
The multi-layered structure of liability attribution. The liability chain for AI medical misdiagnosis involves multiple parties. AI developers may bear product liability for algorithmic defects, insufficient training data, or failure to adequately disclose system limitations. Healthcare institutions may bear organizational liability for failing to properly evaluate the AI system's suitability, provide adequate training, or establish appropriate human oversight mechanisms. Individual physicians may bear professional negligence liability for failing to reasonably use AI recommendations — whether through over-reliance or inappropriate disregard.[20] This multi-layered liability structure gives rise to several thorny legal questions. First, how should the "reasonable physician standard" be defined in the AI era? When most peers already use AI-assisted diagnosis, does a physician who refuses to use AI constitute "falling below the standard of care"? Conversely, when the AI system's recommendations contradict a physician's clinical experience, should the physician follow the AI or their own judgment?[21] Second, how should a product liability "defect" be determined in a learning AI system? Is an AI system that performs excellently in one patient population but poorly in another "defective" or "limited"? Third, when the "black box" nature of AI systems prevents physicians from explaining the specific reasons for a diagnosis to patients, are the legal requirements for informed consent still satisfied?
IV. Algorithmic Bias and Healthcare Equity: Inequality Masked by Data
The deepest governance challenge in AI healthcare is not a technical problem, but one of equity. Algorithms appear objective and neutral — they make judgments based on data, free from human biases and prejudices. But this is a dangerous illusion. The "objectivity" of algorithms depends entirely on the quality of training data, and data itself is a mirror of social inequality.
Obermeyer's warning. In 2019, Obermeyer et al. published a landmark study in Science revealing systematic racial bias in a widely used healthcare risk prediction algorithm in the United States.[22] This algorithm was used to identify high-risk patients requiring additional care resources, affecting the healthcare resource allocation of approximately 200 million Americans. The study found that at equivalent levels of health status, Black patients were significantly less likely to be flagged as "high-risk" than White patients. The cause lay in the choice of proxy variable: the algorithm used "past healthcare expenditure" as a proxy indicator for "health needs." Due to structural socioeconomic inequality, Black patients, even at the same level of health, systematically had lower healthcare expenditures than White patients — not because they were healthier, but because they faced more access barriers to healthcare. The algorithm faithfully "learned" this pattern of inequality and converted it into systematic resource allocation bias.
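The mechanism Obermeyer et al. describe can be reproduced in a few lines. The following is an illustrative toy simulation, not the actual algorithm or data: the illness distribution, the 0.6 access factor, and the 20 percent cutoff are all invented assumptions. Two groups have identical underlying health needs, but access barriers suppress one group's spending, so a spending-based "high-risk" flag systematically passes over its sickest members.

```python
import random

random.seed(0)

# Toy simulation of the proxy-variable failure described above: two groups
# with identical underlying illness, but group B faces access barriers that
# suppress healthcare spending. All numbers are illustrative assumptions.
def simulate_patient(group):
    illness = random.gauss(50, 10)             # true health need (same distribution)
    access = 1.0 if group == "A" else 0.6      # assumed access barrier for group B
    spend = illness * access + random.gauss(0, 5)  # observed expenditure (the proxy)
    return {"group": group, "illness": illness, "spend": spend}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# "Algorithm": flag the top 20% by spending as high-risk, using spend as a
# stand-in for need -- exactly the proxy choice criticized in the study.
cutoff = sorted(p["spend"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["spend"] >= cutoff]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.1%}")  # far below 50%
```

Although group B is half the population and equally ill, its share of the flagged "high-risk" pool collapses, because the proxy faithfully encodes the access barrier rather than the need.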
The multiple sources of bias. Bias in healthcare AI stems not only from proxy variable selection but from multiple sources. Representational bias in training data is the most pervasive issue. The majority of global AI healthcare research uses training data from Western medical centers — meaning AI systems trained on data from White, middle-class patients may perform poorly for patients of other ethnicities and socioeconomic backgrounds. A study of dermatology AI diagnostic systems found that diagnostic accuracy on darker skin tones was significantly lower than on lighter skin tones — because images of darker skin were severely underrepresented in the training set.[23] Label bias is equally dangerous — if certain diseases have been historically systematically underdiagnosed in specific groups (such as heart disease in women, or mental illness in ethnic minorities), then AI systems trained on these historical diagnostic data will perpetuate and even amplify this underestimation.[24] Deployment environment bias must also not be overlooked — an AI system developed and validated at a major medical center may perform very differently in resource-limited community hospitals or rural clinics, because the quality of imaging equipment, clinical workflows, and patient demographics are all different.
Governance responses. Governance responses to algorithmic bias are unfolding worldwide across multiple dimensions. In draft guidance published in 2024, the FDA required AI medical device manufacturers to provide performance data stratified by age, sex, race/ethnicity, and other demographic factors when submitting market authorization applications, to reveal potential inter-group differences.[25] The World Health Organization (WHO), in its 2021 Ethics and Governance of Artificial Intelligence for Health guidelines, listed "ensuring inclusiveness" as one of six core principles, emphasizing that AI healthcare systems should not exacerbate existing health inequalities.[26] Academia is actively developing "fairness metrics" and "bias audit" tools — yet a fundamental difficulty is that fairness has multiple mathematical definitions (such as demographic parity, equalized odds, and predictive calibration), and it is often mathematically impossible to satisfy them all simultaneously.[27] Choosing which definition of fairness to adopt is fundamentally a value judgment, not a technical problem.
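The incompatibility between fairness definitions is easy to see concretely. In the minimal sketch below, the labels, predictions, and group sizes are all invented for illustration: the toy classifier satisfies equalized odds exactly (identical true-positive and false-positive rates in both groups), yet because the two groups have different base rates, it necessarily violates demographic parity.

```python
# Toy illustration of the tension noted above: invented (label, prediction)
# pairs for two groups with different base rates of the condition.
records = {
    "X": [(1, 1), (1, 1), (1, 0), (1, 0),          # 4 positives (base rate 0.5)
          (0, 1), (0, 0), (0, 0), (0, 0)],          # 4 negatives
    "Y": [(1, 1), (1, 0),                           # 2 positives (base rate 0.2)
          (0, 1), (0, 1), (0, 0), (0, 0),
          (0, 0), (0, 0), (0, 0), (0, 0)],          # 8 negatives
}

def metrics(pairs):
    """Return (TPR, FPR, overall positive-prediction rate) for one group."""
    tpr = sum(p for y, p in pairs if y == 1) / sum(1 for y, _ in pairs if y == 1)
    fpr = sum(p for y, p in pairs if y == 0) / sum(1 for y, _ in pairs if y == 0)
    pos_rate = sum(p for _, p in pairs) / len(pairs)
    return tpr, fpr, pos_rate

mx, my = metrics(records["X"]), metrics(records["Y"])
print("TPR gap:", abs(mx[0] - my[0]))            # 0.0 -> equalized odds holds
print("FPR gap:", abs(mx[1] - my[1]))            # 0.0
print("Positive-rate gap:", abs(mx[2] - my[2]))  # nonzero -> parity violated
```

Forcing the positive-rate gap to zero here would require unequal error rates across the groups, which is precisely the trade-off a regulator or auditor must adjudicate.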
V. Reshaping the Doctor-Patient Relationship: Professional Judgment vs. Statistical Prediction
The impact of AI on healthcare is not merely technological — it is shaking the most fundamental interpersonal relationship in the medical system: the trust between physicians and patients.
The transformation of the physician's role. In his widely influential book Deep Medicine, Eric Topol advanced a seemingly paradoxical argument: the greatest value of AI lies not in replacing physicians' professional judgment, but in freeing them from tedious data processing so they can return to the humanistic essence of medicine — listening, empathizing, and accompanying patients.[28] However, there is a significant gap between this optimistic vision and clinical reality. In practice, the introduction of AI tools often does not "liberate" physicians but rather changes their work patterns — from direct clinical observation to reviewing and managing AI outputs. When AI handles the majority of image interpretation, laboratory data analysis, and risk assessment, the physician's role gradually shifts from "professional decision-maker" to "quality supervisor." This transformation triggers a deep professional identity crisis: if AI is more accurate than physicians in most cases, what is the value of a physician's "professional judgment"?
The risk of "automation trust." Cognitive science research shows that when humans interact with highly reliable automated systems, they gradually develop "automation bias" — a tendency to accept the automated system's recommendations even when those recommendations contradict other available information.[29] In aviation, automation bias has been identified as a contributing factor in multiple air accidents. In healthcare, this risk is equally severe. A study examining radiologists' use of AI-assisted interpretation found that when an AI system provided incorrect recommendations, some physicians' interpretive accuracy actually dropped below their performance without AI — because the AI's "incorrect suggestions" distorted the physicians' cognitive frameworks.[30] This means that AI errors are not "neutral" — they can actively degrade physician performance, rather than merely failing to improve it.
Reconceptualizing informed consent. Informed consent is a cornerstone of modern medical ethics — patients have the right to make autonomous medical decisions based on a full understanding of the diagnosis, treatment options, and associated risks.[31] The introduction of AI poses new challenges to informed consent. First, are physicians obligated to inform patients that their diagnosis or treatment recommendation is based on an AI system's output? Most medical ethicists believe the answer is yes — but there is no consensus on how this should be operationalized in practice. Second, when the decision logic of an AI system is incomprehensible to humans (the "explainability problem"), how can a physician "explain" the rationale behind a diagnosis to a patient? If the physician themselves cannot understand the AI's reasoning process, how can they ensure that the patient's "informed" consent is truly informed? An editorial in The Lancet Digital Health argued that explainability is not merely a technical issue but a fundamental requirement of medical ethics — an unexplainable diagnosis, even if statistically more accurate, can undermine patient autonomy and trust in the healthcare system.[32]
VI. Data Governance and Privacy Protection: The Data Ethics of Healthcare AI
Another core governance challenge for AI in healthcare lies in data. Training AI systems requires vast quantities of medical data — medical records, imaging, genetic information, medication histories — which represent the most sensitive categories of personal information. How to strike a balance between advancing AI healthcare innovation and protecting patient privacy is an institutional design dilemma.
Tensions in global privacy frameworks. The EU's GDPR imposes strict limitations on the processing of medical data — health data is classified as "special category personal data," and its processing is in principle prohibited unless specific legal bases are met (such as explicit consent from the data subject, public health purposes, etc.).[33] Article 22 of the GDPR further grants data subjects the right "not to be subject to a decision based solely on automated processing" — which in the context of AI healthcare means that patients have the right to request human review of AI decisions affecting their health. The United States' HIPAA provides two pathways for de-identifying medical data (the Expert Determination method and the Safe Harbor method), but in the AI era, traditional de-identification techniques face new challenges — research shows that the risk of re-identifying individuals from "de-identified" medical data through cross-referencing with other datasets is significantly higher than expected.[34]
Taiwan's data governance challenges. Taiwan possesses a globally unique medical data asset — the National Health Insurance Database covers over 99 percent of the population and provides continuous medical records spanning over 25 years. This is an invaluable resource for training healthcare AI. However, the 2022 Constitutional Court Judgment No. 13 imposed important restrictions on secondary use of health insurance data — ruling that the existing Personal Data Protection Act's regulations on secondary use of personal data by government agencies were insufficiently robust, and requiring the legislature to enact more comprehensive protective mechanisms within three years.[35] This judgment has far-reaching implications for Taiwan's healthcare AI development — how to establish a lawful, transparent, and auditable mechanism for medical data utilization that respects patients' right to privacy will determine Taiwan's position in the global AI healthcare competition.
Emerging technological solutions. Several emerging technological frameworks are providing new possibilities for data governance in healthcare AI. Federated Learning allows AI models to be trained locally at each healthcare institution, sharing only model parameters rather than raw data — achieving a privacy-preserving design of "data stays, models travel."[36] Differential Privacy injects carefully calibrated random noise into data or models, mathematically guaranteeing that individual patients' information cannot be inferred. Synthetic Data uses generative models to create artificial datasets that share the same statistical properties as real data but contain no actual personal information.[37] These technologies each have their strengths and limitations, but they collectively point to an important governance principle: privacy protection should not be an obstacle to AI healthcare innovation but should instead be embedded into system design through "Privacy-Enhancing Technologies" (PETs) — this is the concrete manifestation of "privacy by design" in healthcare AI.
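Of these techniques, federated learning is the most readily sketched. The toy below is a minimal pure-Python illustration of the "data stays, models travel" pattern, under invented assumptions: a one-feature linear model, two fictional hospital datasets, and no real framework. Each site runs a gradient step on its own records; only the two model parameters ever leave a site, and the server averages them.

```python
# Minimal federated-averaging sketch of the idea described above. The model,
# the learning rate, and the "hospital" data are invented for illustration.
def local_step(weights, data, lr=0.1):
    """One gradient step of a 1-feature linear model on a site's own data."""
    w, b = weights
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += 2 * err * x / len(data)
        gb += 2 * err / len(data)
    return (w - lr * gw, b - lr * gb)   # only these two numbers leave the site

hospitals = {
    "site_a": [(0.0, 1.0), (1.0, 3.0)],   # toy (feature, outcome) pairs
    "site_b": [(2.0, 5.0), (3.0, 7.0)],   # both sites follow y = 2x + 1
}

weights = (0.0, 0.0)
for _ in range(200):                       # federated rounds
    updates = [local_step(weights, d) for d in hospitals.values()]
    # Server averages parameters; raw patient records never travel.
    weights = tuple(sum(u[i] for u in updates) / len(updates) for i in range(2))

print(f"learned w={weights[0]:.2f}, b={weights[1]:.2f}")  # approaches y = 2x + 1
```

Real deployments layer secure aggregation and differential privacy on top of this skeleton, since even shared parameters can leak information about the underlying records.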
VII. Taiwan's Smart Healthcare: Opportunities and Challenges
Taiwan possesses several unique structural advantages in developing AI healthcare. The National Health Insurance system provides globally rare, complete, continuous, and high-quality population-wide health data; top-tier medical centers (such as National Taiwan University Hospital, Chang Gung Memorial Hospital, and the Veterans General Hospital system) possess internationally competitive clinical research capabilities; and a vibrant ICT industry (especially semiconductors and information and communications hardware) provides a solid industrial foundation for the hardware side of AI medical devices.[38]
However, to translate these advantages into global leadership in AI healthcare, Taiwan needs to overcome several critical challenges.
Modernizing the regulatory framework. TFDA needs to accelerate the establishment of adaptive regulatory mechanisms for AI medical devices — particularly clear change management pathways for continuously learning AI systems. Drawing on the FDA's PCCP model, Taiwan could establish a "pre-approved scope of evolution" framework that allows AI medical devices to continuously improve while maintaining safety. Additionally, TFDA should establish closer collaboration with international regulatory bodies — for example, joining IMDRF working groups to ensure that Taiwan's regulatory standards are aligned with international practices.
Cultivating interdisciplinary talent. AI healthcare requires professionals who simultaneously possess clinical medical knowledge and AI technical capabilities — a scarce resource in Taiwan and worldwide. Taiwan's medical education system needs to systematically integrate AI and data science curricula; simultaneously, engineering and science students specializing in AI need exposure to medical domain expertise. The advent of a super-aged society makes this talent demand even more urgent — AI is a critical technology for addressing the healthcare burden of an aging population.
Institutionalizing localized validation. Given that the performance of AI healthcare models is highly dependent on the representativeness of training and validation data, Taiwan should establish systematic localized validation mechanisms. This includes: building medical AI training datasets that encompass Taiwan's diverse ethnic groups; requiring imported AI medical devices to provide validation data based on Taiwanese populations before obtaining market authorization; and establishing a continuous post-market performance tracking system to monitor the real-world performance of AI systems in Taiwan's clinical environments.[39]
Building a medical data ecosystem. Taiwan should draw on international experience (such as the UK's Health Data Research UK and Finland's Findata) to establish a national-level medical data governance platform — providing safe, compliant, and high-quality data access mechanisms for AI healthcare R&D and validation within a legal framework that respects privacy rights. The 2022 Constitutional Court judgment presents an opportunity to drive Taiwan toward a more transparent and rights-respecting system for medical data utilization. Experience in the development of global health law demonstrates that data openness and privacy protection are not a zero-sum game — the key lies in institutional design.
VIII. Conclusion: A Patient-Centered AI Healthcare Governance Framework
The governance dilemma of AI in healthcare is, at its core, an institutional design challenge of balancing multiple stakeholder interests. We must simultaneously address six dimensions of value: Safety (AI systems should not cause harm to patients), Efficacy (AI systems should substantively improve healthcare outcomes), Equity (AI systems should not exacerbate health inequalities), Transparency (AI decision logic should, to the extent feasible, be understandable and auditable), Privacy (patients' medical data should be adequately protected), and Accessibility (the benefits of AI healthcare should reach everyone, not just those with resources).[40]
Tensions exist among these values — for instance, transparency requirements may conflict with the protection of proprietary algorithmic trade secrets; stringent safety standards may delay the market entry of AI medical devices; data privacy protections may limit the data accessibility needed for AI training. Any governance framework is an attempt to find a dynamic equilibrium among these tensions.
Based on the analysis above, I propose five core principles for a patient-centered AI healthcare governance framework:
First, the Principle of Risk Proportionality. The stringency of regulation should be proportional to the clinical risk of the AI system. High-risk AI systems that directly affect treatment decisions (such as cancer diagnosis or medication recommendations) should be subject to rigorous pre-market review and continuous post-market surveillance; low-risk AI systems that provide general health information should be subject to a lighter regulatory touch.
Second, the Principle of Continuous Validation. The regulation of AI medical devices should not follow a "one-time review, lifetime validity" model. Continuous real-world performance monitoring mechanisms should be established, requiring manufacturers to regularly provide data on the actual performance of AI systems in clinical environments — including accuracy rates, bias indicators, and adverse event reports.
Third, the Principle of Meaningful Human Oversight. In AI-assisted clinical decision-making processes, physician oversight must be "meaningful" — not merely formally keeping humans in the loop, but ensuring that physicians possess the ability and conditions to understand, question, and override AI recommendations. This requires incorporating AI literacy training into medical education and establishing independent review mechanisms for AI outputs within clinical workflows.[41]
Fourth, the Principle of Algorithmic Equity. AI healthcare systems should undergo systematic bias audits to ensure that their performance across different demographic groups does not exhibit unjustifiable disparities. Regulators should require manufacturers to provide stratified performance data and establish mechanisms for bias reporting and correction.
Fifth, the Principle of Data Stewardship. The utilization of medical data should be conducted under the supervision of independent governance bodies, ensuring that data use aligns with the original purposes of collection, the reasonable expectations of data subjects, and the requirements of the public interest. Patients should be granted the right to be informed about how their data is used for AI training and, within reasonable bounds, the right to choose.
Demographic structural changes — particularly the global trend toward aging populations — mean that AI in healthcare is no longer an optional luxury but a necessary tool for sustaining healthcare systems. Yet the necessity of technology cannot serve as an excuse for neglecting governance. Quite the opposite: precisely because AI will profoundly affect every person's health and life, we need a rigorous, equitable, patient-centered governance framework all the more — not to obstruct innovation, but to ensure that innovation truly serves those it is meant to serve: every person who needs healthcare.
References
- Tu, T. et al. (2024). Towards Conversational Diagnostic AI. Nature Medicine. doi.org
- U.S. FDA. (2025). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. fda.gov
- McKinney, S. M. et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577, 89–94. doi.org
- De Fauw, J. et al. (2018). Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine, 24, 1342–1350. doi.org
- Bera, K. et al. (2019). Artificial intelligence in digital pathology — new tools for diagnosis and precision oncology. Nature Reviews Clinical Oncology, 16, 703–715. doi.org
- Vamathevan, J. et al. (2019). Applications of machine learning in drug discovery and development. Nature Reviews Drug Discovery, 18, 463–477. doi.org
- Ren, F. et al. (2024). AlphaFold accelerates artificial intelligence powered drug discovery. Nature Reviews Drug Discovery, 23, 1–2. doi.org
- Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25, 44–56. doi.org
- Dias, R. & Torkamani, A. (2019). Artificial intelligence in personalized medicine. Expert Review of Precision Medicine and Drug Development, 4(4), 239–248. doi.org
- Gerke, S., Babic, B., Evgeniou, T. & Cohen, I. G. (2020). The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digital Medicine, 3, 53. doi.org
- U.S. FDA. (2021). Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. fda.gov
- U.S. FDA. (2023). Marketing Submission Recommendations for a Predetermined Change Control Plan for AI/ML-Enabled Device Software Functions — Guidance for Industry. fda.gov
- European Parliament and Council. (2017). Regulation (EU) 2017/745 on Medical Devices (MDR). eur-lex.europa.eu
- European Parliament and Council. (2017). Regulation (EU) 2017/746 on In Vitro Diagnostic Medical Devices (IVDR). eur-lex.europa.eu
- European Parliament and Council. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). eur-lex.europa.eu
- IMDRF. (2014). Software as a Medical Device (SaMD): Key Definitions. IMDRF/SaMD WG/N10FINAL:2013. imdrf.org
- Taiwan Food and Drug Administration. (2024). Review Guidelines for Medical Device Software Using Artificial Intelligence/Machine Learning Technology. fda.gov.tw
- U.S. Congress. (2016). 21st Century Cures Act. Public Law 114-255, Section 3060. congress.gov
- Parasuraman, R. & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation. Human Factors, 52(3), 381–410. doi.org
- Price, W. N. & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25, 37–43. doi.org
- Maliha, G. et al. (2021). Artificial Intelligence and Liability in Medicine: Balancing Safety and Innovation. The Milbank Quarterly, 99(3), 629–647. doi.org
- Obermeyer, Z. et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. doi.org
- Adamson, A. S. & Smith, A. (2018). Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatology, 154(11), 1247–1248. doi.org
- Rajkomar, A., Hardt, M., Howell, M. D. et al. (2018). Ensuring Fairness in Machine Learning to Advance Health Equity. Annals of Internal Medicine, 169(12), 866–872. doi.org
- U.S. FDA. (2024). Recommendations for the Use of Artificial Intelligence and Machine Learning in the Development and Regulation of Drug and Biological Products — Draft Guidance. fda.gov
- World Health Organization. (2021). Ethics and Governance of Artificial Intelligence for Health. who.int
- Chouldechova, A. (2017). Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data, 5(2), 153–163. doi.org
- Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
- Goddard, K., Roudsari, A. & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121–127. doi.org
- Gaube, S. et al. (2021). Do as AI say: susceptibility in deployment of clinical decision-aids. NPJ Digital Medicine, 4, 31. doi.org
- Beauchamp, T. L. & Childress, J. F. (2019). Principles of Biomedical Ethics (8th ed.). Oxford University Press.
- The Lancet Digital Health. (2019). Walking the tightrope of artificial intelligence guidelines in clinical practice. The Lancet Digital Health, 1(3), e100. doi.org
- European Parliament and Council. (2016). Regulation (EU) 2016/679 — General Data Protection Regulation (GDPR). eur-lex.europa.eu
- Sweeney, L., Abu, A. & Winn, J. (2013). Identifying Participants in the Personal Genome Project by Name. SSRN. doi.org
- Judicial Yuan, R.O.C. (2022). Constitutional Court Judgment No. 13 of 111 (2022). cons.judicial.gov.tw
- Rieke, N. et al. (2020). The future of digital health with federated learning. NPJ Digital Medicine, 3, 119. doi.org
- Chen, R. J. et al. (2021). Synthetic data in machine learning for medicine: promise and challenges. Nature Medicine, 27, 1664–1669. doi.org
- Stanford HAI. (2024). Artificial Intelligence Index Report 2024. Stanford University. aiindex.stanford.edu
- Executive Yuan, R.O.C. (2024). Taiwan AI Action Plan 2.0.
- BMJ. (2020). Artificial Intelligence in Health Care: Accountability and Safety. BMJ, 368, l6927. doi.org
- MIT Technology Review. (2024). The year AI truly entered the clinic. technologyreview.com