Introduction
Artificial intelligence (AI) is reshaping healthcare across clinical practice, public health, research, and administration. From diagnostic imaging and early‑warning systems to automated routine documentation, AI promises gains in efficiency, accuracy, and access. At the same time, concerns about bias, safety, privacy, and accountability have prompted governments and international organizations to articulate comprehensive strategies and ethical frameworks.
In the United States, the Department of Health and Human Services (HHS) has positioned AI as a “practical layer of value” across health and human services through a system‑wide Artificial Intelligence Strategy issued in 2025 (HHS AI Strategy). Parallel global guidance from the World Health Organization (WHO) focuses on ethics and governance to ensure AI for health is safe, equitable, and rights‑respecting (WHO Ethics & Governance of AI for Health; WHO LMM Guidance). This paper provides a concise overview of AI implementation in healthcare, drawing on the HHS strategy and WHO guidance, and highlights key opportunities and challenges.
Strategic Context: The HHS “OneHHS” AI Vision
The HHS Artificial Intelligence Strategy defines a whole‑of‑department approach, branded “OneHHS,” to integrate AI across internal operations, biomedical research, and care delivery while maintaining public trust (HHS AI Strategy). It is organized around five pillars:
Governance and Risk Management for Public Trust
HHS has established an AI Governance Board and is building a comprehensive inventory of AI use cases. High‑impact systems (those that can significantly affect health outcomes or rights) must meet standardized risk‑management practices, including testing, impact assessments, ongoing monitoring, and the ability to suspend non‑compliant systems; a minimal sketch of what such an inventory record might look like follows this list of pillars. This approach draws heavily on the NIST AI Risk Management Framework and aligns with federal directives such as OMB M‑25‑21 and M‑25‑22.
Infrastructure and Platforms for User Needs
The strategy envisions a shared “AI‑integrated Commons” that provides secure data platforms, computing resources, model hosting, and evaluation environments, emphasizing FAIR data principles (findable, accessible, interoperable, reusable) and reuse of models across agencies such as FDA, CDC, CMS, and NIH (HHS AI Strategy; Holland & Knight analysis).
Workforce Development and Burden Reduction
HHS aims to create an “AI‑ready” workforce by deploying secure AI copilots, role‑based training (from basic literacy to advanced modeling), and new AI‑specialist roles. The explicit goal is to automate rote administrative work so staff can focus on high‑value clinical, policy, and research tasks (HHS AI Strategy).
Health Research and Reproducibility (Gold‑Standard Science)
AI is to be deeply integrated into biomedical and public health research (e.g., for drug discovery, precision medicine, and surveillance) under rigorous standards of reproducibility, transparency, and open science where legally and ethically permissible. HHS emphasizes documentation, standardized pipelines, and pre‑registration to support validation and regulatory use of AI‑derived evidence.
Modernization of Care and Public Health Delivery
AI is expected to augment, not replace, clinicians and public health professionals through applications such as clinical decision support, risk stratification, early‑warning tools (e.g., for sepsis or overdose), and proactive outreach to high‑risk populations (HHS AI Strategy; Holland & Knight analysis).
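To make the governance pillar more concrete, the sketch below shows one possible shape for a high‑impact AI use‑case inventory record that supports suspension of non‑compliant systems. The AIUseCaseRecord class and its field names are hypothetical illustrations, not drawn from any published HHS schema.

```python
# Illustrative only: one possible shape for a high-impact AI use-case
# inventory record; field names are hypothetical, not an HHS schema.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    name: str                              # e.g., "sepsis early-warning model"
    owner: str                             # accountable office or organization
    high_impact: bool                      # could it significantly affect outcomes or rights?
    intended_use: str
    risk_controls: list[str] = field(default_factory=list)  # testing, impact assessment, monitoring
    suspended: bool = False                # non-compliant systems can be taken out of service
    suspension_reason: str | None = None

    def suspend(self, reason: str) -> None:
        """Mark the system as withdrawn pending remediation."""
        self.suspended = True
        self.suspension_reason = reason

# Example entry in the inventory
record = AIUseCaseRecord(
    name="inpatient deterioration early-warning model",
    owner="Example health system",
    high_impact=True,
    intended_use="flag patients at elevated risk of deterioration for nurse review",
    risk_controls=["pre-deployment validation", "bias testing", "quarterly performance audit"],
)
record.suspend("monitoring showed degraded performance in a patient subgroup")
```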
This strategy marks a shift from scattered pilots to coordinated, scalable AI capabilities across the federal health ecosystem and signals strong expectations for industry partners regarding data standards, transparency, and safety.
Key Use Cases for AI in Healthcare
AI implementation in healthcare can be grouped into several practical domains:
Clinical Decision Support and Diagnostics
Image analysis for radiology, pathology, and dermatology (e.g., detecting malignancies or fractures).
Prediction models for deterioration, readmission, or complications to support triage and resource allocation (a toy risk‑stratification sketch appears at the end of this section).
Natural‑language tools that summarize clinical notes and highlight critical findings.
Population Health and Public Health Surveillance
Early detection of outbreaks using multimodal data streams.
Risk stratification for chronic diseases (e.g., diabetes, cardiovascular disease) to target preventive interventions.
Analytics for maternal health, overdose prevention, and other priority conditions highlighted by HHS (HHS AI Strategy).
Administrative and Operational Efficiency
Automation of prior authorization, claims adjudication, and billing.
Scheduling optimization and capacity management in hospitals and clinics.
Streamlined regulatory workflows, such as pre‑market review support and post‑market surveillance at the FDA (Holland & Knight analysis).
Research and Innovation
AI‑driven drug discovery and target identification (e.g., using deep learning models for structure prediction and virtual screening).
Analysis of large‑scale genomic, imaging, and registry datasets for precision medicine.
Generative and large multimodal models (LMMs) to synthesize literature, design experiments, or simulate trial scenarios (WHO LMM Guidance).
Collectively, these use cases illustrate AI’s role across the full continuum—from bench research to bedside care and systems‑level management.
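As a deliberately simplified illustration of the prediction and risk‑stratification use cases above, the sketch below trains a toy logistic‑regression model on synthetic data and flags the highest‑risk patients for outreach. The features, coefficients, and thresholds are invented for demonstration; real clinical models require validated features, bias testing, and prospective evaluation before any use.

```python
# Illustrative only: a toy risk-stratification model trained on synthetic data.
# Real deterioration/readmission models require validated clinical features,
# bias testing, and prospective evaluation before use.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical features: age (years), HbA1c (%), prior admissions (count)
X = np.column_stack([
    rng.normal(60, 12, n),
    rng.normal(7.0, 1.2, n),
    rng.poisson(1.0, n),
])

# Synthetic outcome loosely tied to the features (for demonstration only)
logits = 0.03 * (X[:, 0] - 60) + 0.6 * (X[:, 1] - 7.0) + 0.5 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]        # per-patient predicted risk
outreach = np.argsort(risk)[::-1][:50]     # e.g., top 50 patients flagged for preventive outreach
print(f"mean predicted risk: {risk.mean():.2f}; patients flagged: {outreach.size}")
```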
Ethical, Legal, and Social Considerations
WHO’s guidance emphasizes that AI for health must be grounded in ethics and human rights, not only technical performance (WHO Ethics & Governance of AI for Health). It articulates six core principles highly relevant to implementation:
Protecting Human Autonomy
AI systems should support, not supplant, human clinical judgment. Patients must give informed consent, understand when AI is involved, and retain meaningful control over their health decisions.
Promoting Well‑Being, Safety, and the Public Interest
AI must demonstrably improve health outcomes and safety. This implies stringent validation, robust post‑deployment monitoring, and clear mechanisms to pause or withdraw unsafe systems, an approach reflected in HHS requirements for high‑impact AI (HHS AI Strategy).
Ensuring Transparency, Explainability, and Intelligibility
While not all models are fully interpretable, stakeholders must be able to understand AI’s intended use, limitations, and performance characteristics. Plain‑language public summaries and published evaluations, as planned by HHS, contribute to this goal (Holland & Knight analysis; WHO Ethics & Governance of AI for Health).
Fostering Accountability and Responsibility
Governance mechanisms must clarify who is responsible for AI‑enabled decisions (the developer, deployer, clinician, or institution) and how harms are remediated. Both HHS and WHO stress the need for auditability, impact assessments, and clear liability frameworks (HHS AI Strategy; WHO Ethics & Governance of AI for Health).
Ensuring Inclusiveness and Equity
AI can entrench existing health inequities if training data under‑represent minority populations or if deployment is limited to well‑resourced settings. Ethical implementation requires attention to data representativeness, bias testing, and equitable access to AI‑enabled services.
Promoting Sustainable and Environmentally Responsible AI
WHO notes that the environmental and resource costs of AI, particularly large models, should be weighed against benefits, and systems should be designed for long‑term sustainability (WHO Ethics & Governance of AI for Health).
For generative AI and LMMs, WHO adds specific cautions: risks of hallucinated medical content, privacy breaches from training data, and overreliance on unvalidated outputs. All of these require strict guardrails when such models are used for clinical or public health purposes (WHO LMM Guidance).
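One common guardrail is a human‑in‑the‑loop gate: LMM‑drafted text is held until a named clinician reviews and approves it. The sketch below is a minimal illustration of that pattern; the DraftNote class and its fields are hypothetical, not part of any WHO or HHS specification.

```python
# Illustrative only: a human-in-the-loop gate for LMM-drafted clinical text.
# The DraftNote class and its fields are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftNote:
    text: str                               # text produced by the model
    model_id: str                           # which model/version generated it
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: str | None = None          # clinician identifier, set only after review

    def approve(self, clinician_id: str) -> None:
        """Record that a named clinician has reviewed and accepted the draft."""
        self.approved_by = clinician_id

    def release(self) -> str:
        """Refuse to release unreviewed model output into the record or to patients."""
        if self.approved_by is None:
            raise PermissionError("LMM output requires clinician review before release")
        return self.text

draft = DraftNote(text="Discharge summary draft ...", model_id="example-lmm-v1")
draft.approve("clinician-1234")
print(draft.release())
```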
Implementation Challenges
Despite robust strategic frameworks, several practical challenges remain:
Data Quality and Interoperability: Fragmented, inconsistent data and limited interoperability between systems hinder reliable model development and deployment. HHS’s push for standardized, FAIR data infrastructures is a direct response to this barrier (HHS AI Strategy).
Regulatory Capacity and Evaluation Methodologies: Regulators must keep pace with adaptive and continuously learning AI. Developing clinically meaningful metrics, real‑world performance monitoring, and update protocols for learning systems is an ongoing task for agencies such as FDA and CMS; a minimal monitoring sketch follows this list of challenges.
Workforce Adoption and Trust: Even high‑performing tools can fail if clinicians and staff do not trust them or if they disrupt workflows. Training, user‑centered design, and transparent communication about AI’s role are critical to adoption.
Privacy, Security, and Secondary Use of Data: Large‑scale AI often requires aggregating sensitive health data. Ensuring compliance with privacy laws (e.g., HIPAA), securing infrastructure, and preventing unauthorized secondary use of data are foundational to both HHS and WHO frameworks (HHS AI Strategy; WHO Ethics & Governance of AI for Health).
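To illustrate the real‑world performance monitoring noted above, the sketch below compares a deployed model's recent AUROC against a pre‑deployment baseline and raises an alert when performance degrades. The baseline value, alert margin, and check_performance helper are assumptions made for the example, not a prescribed methodology.

```python
# Illustrative only: a minimal post-deployment performance check that compares
# recent AUROC against a pre-deployment baseline. Thresholds are arbitrary.
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.82   # hypothetical figure from pre-deployment validation
ALERT_MARGIN = 0.05     # degradation that should trigger governance review

def check_performance(y_true, y_score) -> bool:
    """Return True if the deployed model still performs near its validation baseline."""
    current = roc_auc_score(y_true, y_score)
    if current < BASELINE_AUROC - ALERT_MARGIN:
        print(f"ALERT: AUROC fell to {current:.2f}; escalate for review or retraining")
        return False
    print(f"OK: AUROC {current:.2f} within the expected range")
    return True

# Toy example once outcome labels become available for recently scored patients
check_performance([0, 1, 1, 0, 1, 0, 1, 0], [0.2, 0.7, 0.9, 0.4, 0.6, 0.1, 0.8, 0.3])
```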
Conclusion
AI implementation in healthcare is moving from experimentation to system‑level integration. The HHS AI Strategy outlines an ambitious “OneHHS” approach that couples governance, infrastructure, workforce development, research rigor, and modernization of care delivery. In parallel, WHO’s ethical and governance guidance provides global principles to ensure AI advances health while protecting rights, equity, and safety.
Realizing AI’s potential will depend not only on technical innovation but also on sustained investment in governance, data quality, evaluation, and workforce readiness. If these elements are aligned, AI can meaningfully support more efficient, equitable, and patient‑centered health systems.
References
Holland & Knight. (2025, December 10). HHS Releases Strategy Positioning Artificial Intelligence as the Core of Health Innovation [Holland & Knight Healthcare Blog]. Retrieved from https://www.hklaw.com/en/insights/publications/2025/12/hhs-releases-strategy-positioning-artificial-intelligence
U.S. Department of Health and Human Services. (2025). Artificial Intelligence (AI) Strategy (Version 1.0). Retrieved from https://www.hhs.gov/sites/default/files/hhs-artificial-intelligence-strategy.pdf
World Health Organization. (2021). Ethics and governance of artificial intelligence for health (WHO guidance). Retrieved from https://www.who.int/publications/i/item/9789240029200
World Health Organization. (2025). Ethics and governance of artificial intelligence for health: Guidance on large multi‑modal models. Retrieved from https://www.who.int/publications/i/item/9789240084759