
Healthcare AI Governance in Saudi Arabia: What NHIC and MOH Compliance Actually Requires

Nora Al-Rashidi | March 4, 2026 | 6 min read

Saudi Arabia's Ministry of Health announced in 2024 that AI-assisted diagnostics would be deployed across 150 government hospitals by the end of 2025. As of early 2026, that rollout is underway. What most healthcare organizations are not prepared for is the governance framework those deployments are supposed to operate within — and what regulators expect to see documented before any clinical AI goes live.

The National Health Information Center (NHIC) and the Ministry of Health (MOH) have published guidance that is often treated as advisory. It is not. Under Saudi Arabia's National Digital Health Strategy and the Personal Data Protection Law (PDPL), healthcare AI governance is becoming a compliance obligation — not a best-practice checklist.

What NHIC Actually Governs

The National Health Information Center is the custodian of Saudi Arabia's health data infrastructure. Its mandate covers interoperability standards, health data exchange, and — increasingly — the governance of AI systems that touch patient data.

NHIC's AI-relevant frameworks include:

Health Data Governance Policy: Any AI system that ingests, processes, or generates health records must comply with data classification, access controls, and audit trail requirements. This applies to diagnostic AI, clinical decision support systems, and administrative automation that touches patient records.

Interoperability Standards: AI systems integrated with the Saudi Health Information Exchange (SeHA) must meet NHIC's HL7 FHIR compliance requirements. This is a technical constraint that directly affects AI vendors — a model that cannot operate within a FHIR-compliant data pipeline is not deployable in regulated settings.
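
In practice, this means every payload an AI pipeline consumes should be checked as a well-formed FHIR resource before it reaches the model. A minimal Python sketch; the required-field rules below are illustrative, not NHIC's actual profile (in FHIR R4, `Observation.status` and `Observation.code` are genuinely mandatory, while the `Patient.id` rule here is an assumption for the example):

```python
# Illustrative pre-inference check that a payload is a well-formed FHIR resource.
# REQUIRED_BY_TYPE is a toy profile, not an NHIC specification.

REQUIRED_BY_TYPE = {
    "Patient": {"id"},                 # assumed for this sketch
    "Observation": {"status", "code"}, # mandatory (1..1) in FHIR R4
}

def validate_fhir_resource(resource: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the check passed."""
    errors: list[str] = []
    rtype = resource.get("resourceType")
    if rtype not in REQUIRED_BY_TYPE:
        errors.append(f"unsupported or missing resourceType: {rtype!r}")
        return errors
    missing = REQUIRED_BY_TYPE[rtype] - resource.keys()
    errors.extend(f"{rtype} missing required field: {f}" for f in sorted(missing))
    return errors

obs = {"resourceType": "Observation", "status": "final", "code": {"text": "Hemoglobin"}}
print(validate_fhir_resource(obs))  # → []
```

A real deployment would validate against the full FHIR profile (for example with a dedicated FHIR validation service), but the gate belongs in the same place: before inference, with rejections logged.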

Data Localization: All personal health data must remain within Saudi Arabia's borders. Cloud-based AI vendors that process patient data on foreign infrastructure are in direct violation of NHIC policy, regardless of encryption or anonymization claims.

The MOH Layer: Clinical AI Oversight

Where NHIC governs data infrastructure, the Ministry of Health governs clinical applications. MOH's framework for AI in healthcare establishes requirements across three categories:

Pre-Deployment Validation

Before any AI system is used in clinical decision-making — even in an advisory capacity — organizations must document:

  • Clinical validation studies: Evidence that the model performs accurately on Saudi patient populations. Studies conducted on Western datasets do not satisfy this requirement. Saudi demographics, comorbidity patterns, and disease prevalence differ meaningfully from the populations in most published AI validation research.
  • Bias assessment: Explicit evaluation of whether the model performs differently across patient subgroups (age, gender, region). MOH guidance specifically calls out AI systems that show degraded performance on underrepresented populations.
  • Failure mode documentation: What happens when the AI produces an incorrect output? Organizations must demonstrate that clinical workflows catch AI errors before they reach patients.

Ongoing Monitoring Requirements

Clinical AI is not a deploy-and-forget technology. MOH expects organizations to maintain:

  • Performance drift monitoring: AI model accuracy degrades over time as patient populations and clinical practices evolve. Quarterly performance reviews against defined thresholds are the expected minimum.
  • Adverse event logging: Any clinical incident where an AI output was a contributing factor must be documented and reported. This requirement mirrors the adverse event reporting framework already in place for medical devices.
  • Human oversight documentation: Who reviews AI outputs before clinical action is taken? The oversight chain must be documented and auditable. A system where clinicians routinely accept AI recommendations without documented review does not satisfy MOH expectations.
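
A quarterly drift review of the kind described above reduces to comparing current accuracy against a documented baseline and an agreed floor. A sketch with illustrative threshold values (the floor and acceptable drop would come from the organization's own risk documentation):

```python
# Sketch of a quarterly drift check. Threshold values are illustrative.

def drift_status(baseline_acc: float, current_acc: float,
                 floor: float = 0.90, max_drop: float = 0.03) -> str:
    """Classify a model's quarterly review outcome."""
    if current_acc < floor:
        return "escalate"       # below the documented acceptable minimum
    if baseline_acc - current_acc > max_drop:
        return "investigate"    # above the floor, but drifting from baseline
    return "ok"

print(drift_status(0.95, 0.94))  # → ok
print(drift_status(0.95, 0.91))  # → investigate
print(drift_status(0.95, 0.88))  # → escalate
```

The escalation path matters as much as the check itself: the "escalate" outcome should map to a documented process, not an email thread.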

Procurement and Vendor Due Diligence

For organizations purchasing AI from third-party vendors, MOH guidance creates a due diligence obligation. Healthcare organizations cannot outsource governance accountability to vendors. Before procurement, organizations should require:

  • Model cards documenting training data, validation studies, and known limitations
  • Data processing agreements that explicitly address PDPL compliance and data localization
  • Incident response commitments, including timelines for communicating model failures or safety issues

PDPL Intersections: Where Health Data Meets Privacy Law

Saudi Arabia's Personal Data Protection Law (PDPL), enforced by SDAIA's National Data Management Office, creates additional obligations for healthcare AI that most organizations are only beginning to address.

Key intersections include:

Consent for AI processing: Using patient data to train or fine-tune AI models requires explicit consent under the PDPL. Retrospective training on historical patient records is not automatically permitted. Organizations that have fine-tuned commercial AI models on their patient data without documented consent processes are exposed.

Automated decision-making rights: The PDPL grants individuals rights around automated decision-making that significantly affects them. An AI system that makes autonomous decisions about treatment pathways, insurance claims, or care eligibility must have a human review mechanism. Organizations that cannot demonstrate this are in violation.

Data minimization: AI systems must only process the minimum patient data necessary for their stated function. A diagnostic AI that ingests full patient records when it only requires imaging data violates the PDPL's data minimization principle.
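
Data minimization can be enforced mechanically with a per-system field whitelist applied before any record reaches the model. A sketch with hypothetical system and field names:

```python
# Sketch: whitelist the fields each AI system is permitted to see.
# System and field names are hypothetical.

ALLOWED_FIELDS = {
    "chest-xray-triage": {"patient_id", "imaging_study", "image_refs"},
}

def minimize(record: dict, system: str) -> dict:
    """Return only the fields the named system is allowed to process."""
    allowed = ALLOWED_FIELDS[system]
    return {k: v for k, v in record.items() if k in allowed}

full_record = {
    "patient_id": "P-1001",
    "imaging_study": "CXR-2026-02-11",
    "image_refs": ["img/001.dcm"],
    "insurance_status": "active",  # not needed for imaging triage
    "family_history": "...",       # not needed for imaging triage
}
print(sorted(minimize(full_record, "chest-xray-triage")))
```

Keeping the whitelist in configuration rather than code also gives auditors a single artifact showing what each system can access.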

What a Compliant Healthcare AI Program Looks Like

A healthcare organization that has done this correctly will have documented:

  1. AI inventory: Every AI system in clinical or administrative use, categorized by data access, clinical impact level, and the regulatory frameworks that apply to it.

  2. Governance ownership: A named individual (typically the CMIO or a designated AI governance officer) accountable for each system's ongoing compliance.

  3. Validation evidence: Pre-deployment validation studies, bias assessments, and the decisions made about acceptable risk levels — documented and signed off by clinical leadership.

  4. Monitoring cadence: Scheduled performance reviews, adverse event thresholds, and the process for escalating when performance drops below acceptable levels.

  5. Vendor contracts: Third-party AI vendor agreements that include data processing terms, PDPL compliance representations, and incident notification requirements.
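
The inventory in step 1 can start as something as simple as one structured record per system, carrying the same categories listed above. A sketch with hypothetical values:

```python
# Sketch of an AI inventory entry. Field names mirror the categories in the
# list above; all values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    data_access: str                 # e.g. "full EHR", "imaging only"
    clinical_impact: str             # e.g. "advisory", "autonomous"
    frameworks: list[str] = field(default_factory=list)
    governance_owner: str = "unassigned"

inventory = [
    AISystemRecord(
        name="chest-xray-triage",
        data_access="imaging only",
        clinical_impact="advisory",
        frameworks=["NHIC Health Data Governance", "MOH clinical AI", "PDPL"],
        governance_owner="CMIO office",
    ),
]

# A simple audit query: which systems still lack a named governance owner?
unowned = [s.name for s in inventory if s.governance_owner == "unassigned"]
print(unowned)  # → []
```

Even a spreadsheet with these columns satisfies the intent; what matters is that every deployed system appears in it and every entry has an owner.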

Key Takeaways

  • NHIC and MOH healthcare AI guidance is increasingly enforceable under Saudi Arabia's PDPL and National Digital Health Strategy — treat it as compliance, not advisory
  • Clinical AI validation must use Saudi patient population data — Western validation studies do not satisfy MOH requirements
  • Data localization is a hard requirement: cloud AI vendors processing Saudi patient data on foreign infrastructure are non-compliant
  • The PDPL creates explicit rights around automated decision-making in healthcare — human review mechanisms are mandatory, not optional
  • AI governance in healthcare requires continuous monitoring, not just pre-deployment validation

Where to Start

Most Saudi healthcare organizations are between 12 and 24 months behind where regulators expect them to be on AI governance. The good news is that the gap is closable with a structured approach: an AI inventory, a risk-tiered governance framework, and vendor due diligence protocols can all be built and documented in weeks, not years.

PeopleSafetyLab's AI Safety Pack is designed specifically for this situation: a practical governance system that meets NHIC, MOH, and SDAIA standards without requiring a large internal compliance team. If your organization has deployed AI in clinical settings and lacks documented governance, contact us to discuss what a rapid assessment and remediation program looks like.


Nora Al-Rashidi

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
