
AI Vendor Due Diligence in Saudi Arabia: A CISO's Guide

PeopleSafetyLab | March 10, 2026 | 13 min read


The procurement meeting moved quickly. The vendor's slide deck was polished—all the right buzzwords about machine learning and predictive analytics, all the right logos from global enterprises. The business unit lead was sold. The CFO saw cost savings. The CEO wanted to announce the partnership at the next board meeting. Everyone turned to the CISO for the security sign-off, expecting a routine approval.

He hadn't seen the technical architecture. He didn't know where the data would be processed. He had no information about the vendor's incident response capabilities, access controls, or compliance with Saudi data protection law. He had forty-eight hours to evaluate a system that would process sensitive customer data, integrate with core business processes, and create dependencies that would take years to unwind. This is the position Saudi CISOs find themselves in repeatedly—asked to bless AI acquisitions after the business decision has already been made.

This guide is for those moments. It provides a practical framework for AI vendor due diligence in the Saudi context—the questions that matter, the red flags that kill deals, the contract clauses that actually protect your organization. It is not comprehensive, because comprehensive frameworks take months to implement. It is the minimum viable due diligence that a CISO can execute under time pressure while still fulfilling their fiduciary duty to the organization.

The Stakes Have Changed

AI vendor risk is different from traditional software vendor risk in three ways that matter for due diligence.

First, AI systems require data—often large volumes of sensitive data—to function. The vendor is not just processing transactions; they are training models, storing embeddings, potentially retaining information that could be used to reconstruct individual records. Data residency and data protection questions become existential rather than peripheral.

Second, AI systems are opaque. Traditional software can be audited through code review and testing. AI systems encode their behavior in trained models that resist inspection. You cannot verify that a system will behave correctly by examining it; you must rely on the vendor's training processes, testing regimes, and monitoring capabilities. This shifts due diligence from verification to trust assessment.

Third, AI vendor failure modes are different. A traditional software vendor might have uptime problems or security vulnerabilities. An AI vendor might produce systematically biased outputs, make decisions that violate regulatory requirements, or fail in ways that are difficult to detect until significant harm has occurred. The time horizon for discovering problems extends from days to months.

These differences demand a different approach to due diligence—one that focuses on data governance, model transparency, and ongoing monitoring rather than just security certifications and SLAs.

The Security Questionnaire: What Actually Matters

Most organizations have vendor security questionnaires. Most of these questionnaires are outdated for AI vendors. They ask about firewalls and encryption but not about model training pipelines or data retention for machine learning. They assume a software-as-a-service architecture where the vendor hosts the application and the customer provides the data. AI systems often reverse this relationship—the vendor provides the model, and the customer's data flows through it, potentially being incorporated into future model versions.

The Questions Your Questionnaire Is Missing

Data Flow and Retention

Where exactly does our data go? This question seems basic, but AI vendors often give vague answers. Push for specificity: which data centers process our data? Are training and inference workloads separated? Is our data used to improve the vendor's models? If so, can we opt out? How long is data retained after processing? Are there backup copies? Where are they stored?

The answer matters for PDPL compliance. Saudi Arabia's Personal Data Protection Law imposes strict requirements on data processing, and you cannot certify compliance if you do not know where your data flows.

Model Training and Transparency

Was the model trained on data from Saudi Arabia? If not, what populations were represented in the training data? This question goes to the fairness and accuracy of the model in the Saudi context. A facial recognition system trained primarily on Western faces may perform poorly on Saudi faces. A credit scoring model trained on American financial behavior may encode assumptions that do not generalize to the Kingdom.

Can the vendor provide documentation of their training process? Model cards—standardized documentation describing a model's intended use, training data, performance characteristics, and limitations—are becoming an industry norm. Vendors who cannot provide them are either not following best practices or are hiding something.

Monitoring and Incident Response

How does the vendor detect model failures? AI systems can fail silently—producing plausible outputs that are systematically wrong. The vendor should have monitoring in place that detects performance degradation, concept drift, and anomalous outputs. Ask for specifics: what metrics are monitored? What thresholds trigger alerts? Who responds when problems are detected?
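When evaluating a vendor's monitoring claims, it helps to know what a drift metric actually looks like. The sketch below computes the Population Stability Index (PSI), a common measure of distribution shift between a baseline (e.g., validation-time) score distribution and current production scores; the conventional thresholds in the comment are rules of thumb, not standards, and a real vendor may monitor different metrics entirely:

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline score distribution and current production scores.

    Rule of thumb (illustrative, not a standard): < 0.1 stable,
    0.1-0.25 moderate shift worth investigating, > 0.25 significant drift.
    """
    # Bin edges from baseline quantiles, widened to cover any out-of-range scores
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)

    # Floor the proportions to avoid division by zero and log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)

    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))
```

A vendor who monitors something like this should be able to tell you which scores or features feed the calculation, how often it runs, and what threshold pages a human.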

When an incident occurs—a data breach, a model failure, a compliance violation—what is the vendor's notification timeline? SLAs for incident response are standard in software contracts, but AI incidents have different characteristics. A data breach is immediately recognizable; a model producing biased outputs may take months to discover. How does the vendor handle slow-burning problems?

Third-Party Dependencies

AI vendors often build on platforms from larger providers—OpenAI, Google Cloud AI, AWS SageMaker. Understanding these dependencies matters for data residency: if the vendor processes data through OpenAI's APIs, where are OpenAI's data centers located? Ask for software supply chain documentation—what base models and third-party libraries do they use?

PDPL Data Residency: The Compliance Minefield

Saudi Arabia's Personal Data Protection Law, implemented in 2023 and enforced with increasing rigor, imposes data residency requirements that complicate AI vendor relationships. The law restricts the transfer of personal data outside Saudi Arabia unless specific conditions are met—and even permitted transfers require documentation and safeguards that most AI vendor contracts do not address.

The Transfer Question

Ask the vendor directly: will any personal data processed by your system leave Saudi Arabia? The answer determines your compliance obligations. If data stays in the Kingdom, PDPL compliance is straightforward—you need appropriate data processing agreements and security measures, but no transfer authorization.

If data leaves the Kingdom, you need to verify that one of the PDPL's transfer conditions applies. The most relevant for vendor relationships are: transfer to countries with adequate data protection (the Saudi government has not published a list of approved countries, creating uncertainty), transfer pursuant to contractual safeguards approved by SDAIA, or transfer with explicit data subject consent. Each of these has documentation requirements that your legal team will need to address.
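The branching logic above can be sketched as a triage function for first-pass screening. The condition names and return strings are illustrative shorthand for this article's summary of the PDPL conditions—the actual legal determination belongs with counsel:

```python
def pdpl_transfer_basis(
    leaves_ksa: bool,
    adequacy: bool,          # destination treated as having adequate protection
    sdaia_safeguards: bool,  # contractual safeguards approved by SDAIA
    explicit_consent: bool,  # explicit data subject consent obtained
) -> str:
    """Hypothetical first-pass triage of PDPL transfer conditions; not legal advice."""
    if not leaves_ksa:
        return "no transfer authorization needed; DPA and security measures suffice"
    if adequacy:
        return "adequacy route (no approved country list published; confirm with legal)"
    if sdaia_safeguards:
        return "contractual safeguards approved by SDAIA; document them"
    if explicit_consent:
        return "explicit data subject consent; document collection and scope"
    return "no lawful transfer basis identified; do not proceed"
```

The value of writing it down this way is that the vendor's answers to "where does our data go" map directly onto the first argument—and a vendor who cannot answer it cannot be triaged at all.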

Many AI vendors process data in multiple regions for redundancy and performance. A vendor might claim their primary data center is in Saudi Arabia while maintaining backup copies in Europe or North America. This is a PDPL compliance issue—those backup copies constitute data transfers. Push the vendor to confirm that all data processing occurs within Saudi Arabia or to identify exactly which data centers will handle Saudi data and provide the contractual basis for the transfer.

The Processor Agreement

PDPL requires a written data processing agreement between the data controller—the Saudi organization—and the data processor—the AI vendor. This agreement must address specific elements: the scope and purpose of processing, security measures, confidentiality obligations, sub-processor restrictions, and data subject rights implementation.

Most vendor contracts are not written with PDPL in mind. They may reference GDPR or CCPA, but these frameworks have different requirements. The vendor's standard data processing agreement likely needs amendment for Saudi compliance.

SDAIA Compliance: Beyond the Checkbox

SDAIA's AI Ethics Principles apply to AI systems deployed in Saudi Arabia regardless of where the vendor is headquartered. A US-based AI vendor serving Saudi customers is subject to SDAIA's framework. The problem is that most foreign vendors have never heard of SDAIA, and even domestic vendors may lack mature compliance programs.

What to Ask About SDAIA Compliance

Has the vendor reviewed SDAIA's AI Ethics Principles? This question separates vendors who are serious about the Saudi market from those treating it as just another sales territory. Vendors who have reviewed the principles should be able to articulate how their practices align with each one.

Can the vendor provide a compliance assessment for their system? A mature vendor will have documented how their AI system addresses each SDAIA principle: fairness testing results, accountability structures, transparency mechanisms, privacy and security controls, safety testing, and human oversight provisions.

What is the vendor's incident reporting process for AI-related harms? Your vendor needs a process for detecting incidents that affect Saudi users and notifying you promptly. This notification timeline should be contractual.

The Certification Question

SDAIA is developing certification frameworks for AI systems, but these are not yet fully operational. Vendors may claim certifications that do not exist or reference generic AI ethics certifications from other jurisdictions. Be skeptical—the relevant certification is one issued by SDAIA or a SDAIA-recognized body. Until such certification is available, due diligence must rely on direct assessment rather than third-party attestation.

Contract Clauses That Actually Protect You

Vendor contracts are written by vendor lawyers to protect vendor interests. The standard terms will not adequately protect your organization against AI-specific risks. These are the clauses worth fighting for.

Data Deletion and Model Retraining

What happens when the contract ends? Standard vendor contracts commit to deleting customer data upon termination. AI vendors have a complication: customer data may have been incorporated into trained models. Deleting the raw data does not remove its influence from the model weights.

The strongest protection is a commitment to retrain models without your data if the contract terminates. This is technically demanding—retraining large models is expensive—and vendors will resist it. A compromise position is a commitment to exclude your data from future model versions, even if past influence cannot be removed.

Audit Rights

You need the right to audit the vendor's AI systems—not just their security controls, but their model performance, bias testing, and compliance practices. This audit right should include access to training data documentation, model performance metrics, and incident logs. It should permit both scheduled audits and audit rights triggered by specific events such as data breaches or regulatory inquiries.

Vendors will resist broad audit rights, citing proprietary information and customer confidentiality. These are legitimate concerns that can be addressed through scoped audits, third-party auditors, and appropriate confidentiality protections. What is not acceptable is a vendor who refuses any meaningful audit access.

Liability for AI Failures

Standard vendor liability caps were written for software failures—system downtime, data loss, security breaches. AI failures have different characteristics. A biased model may produce discriminatory outputs for months before detection. A faulty recommendation system may cause business losses that are difficult to quantify. An AI-powered decision support tool may provide advice that, when followed, causes harm.

Push for liability provisions that address AI-specific failure modes. At minimum, the vendor should accept liability for failures in their model training and testing processes—not just for the software functioning as specified, but for the model producing appropriate outputs. This is difficult to define precisely, but the conversation itself reveals the vendor's attitude toward responsibility for AI outcomes.

Regulatory Cooperation

When SDAIA or another Saudi regulator has questions about the AI system, what is the vendor's obligation to cooperate? The contract should require the vendor to provide information, respond to inquiries, and participate in audits or investigations. The vendor should not be able to leave you facing regulatory scrutiny alone.

This clause matters because AI accountability is a shared responsibility. You, as the data controller, bear primary regulatory exposure. But the vendor's cooperation—or lack thereof—will significantly affect your ability to respond to regulatory concerns. A vendor who stonewalls SDAIA creates problems that become your problems.

The Red Flags That Kill Deals

Some vendor responses indicate problems that no contract clause can fix. These are the red flags that should end negotiations.

"We cannot disclose our training data." Transparency about training data is fundamental to AI due diligence. A vendor who refuses to discuss data sources is either hiding something—biased data, pirated content, privacy violations—or is operating with a level of secrecy incompatible with enterprise risk management.

"Our model is proprietary and cannot be evaluated." Model evaluation is not the same as model disclosure. You are not asking for the model weights; you are asking for evidence that the model performs appropriately for your use case. A vendor who refuses to provide performance testing results, bias assessments, or accuracy metrics for relevant populations is not ready for enterprise deployment.

"We process data in [country] but we are not sure about backups." Data residency uncertainty is a compliance nightmare. If the vendor cannot tell you where your data goes, you cannot tell SDAIA you are in compliance. Walk away.

"Our standard contract is non-negotiable." AI vendor relationships require customized contracts because standard contracts do not address AI-specific risks. A vendor who will not negotiate is a vendor who is not serious about the Saudi market or about enterprise risk management. There are other vendors.

The Decision Framework

Green light: The vendor answers questions directly, provides documentation, acknowledges SDAIA requirements, and is willing to negotiate contract terms. Residual risks can be mitigated through contract provisions and monitoring.

Yellow light: The vendor answers some questions but not others, provides partial documentation, or resists certain contract terms. Consider proceeding with enhanced monitoring, reduced data sharing, or phased deployment.

Red light: The vendor refuses to answer key questions, cannot provide basic documentation, insists on contract terms that leave your organization exposed, or reveals practices that violate PDPL or SDAIA requirements. Do not proceed.
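One way to make the three-light decision repeatable across vendors is to encode the blocking and caution conditions explicitly, so two reviewers applying the framework reach the same answer. The criteria keys below are paraphrased from this section and are illustrative, not exhaustive:

```python
from enum import Enum

class Light(Enum):
    GREEN = "proceed; mitigate residual risk via contract and monitoring"
    YELLOW = "proceed with enhanced monitoring, reduced data sharing, or phasing"
    RED = "do not proceed"

# Hypothetical criteria keys, paraphrased from the framework above
BLOCKERS = ["refuses_key_questions", "no_basic_documentation", "pdpl_or_sdaia_violation"]
CAUTIONS = ["partial_documentation", "resists_contract_terms"]

def assess_vendor(findings: dict[str, bool]) -> Light:
    """Map due diligence findings to a green/yellow/red recommendation."""
    if any(findings.get(k) for k in BLOCKERS):
        return Light.RED
    if any(findings.get(k) for k in CAUTIONS):
        return Light.YELLOW
    return Light.GREEN
```

The point is not automation—judgment still decides what counts as "refuses" versus "resists"—but forcing each finding into a named bucket makes the final recommendation auditable.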

The Ongoing Obligation

Due diligence is not a point-in-time activity. AI systems evolve—models are retrained, capabilities are added, data flows change. The vendor you evaluated at contract signing may not be the vendor you are working with two years later.

Build ongoing due diligence into the vendor relationship. Quarterly reviews of security posture and compliance status. Annual assessments of model performance and bias testing. Immediate notification requirements for material changes to data processing, model architecture, or security controls.

The CISO who signs off on an AI vendor has accepted responsibility for the risks that vendor creates. That responsibility extends beyond the signature. The most important due diligence is the due diligence that continues after deployment—when the business has moved on to the next initiative and the AI system has become infrastructure that no one thinks to question.


PeopleSafetyLab helps Saudi enterprises build AI governance programs that work—from vendor due diligence frameworks to compliance automation tools. Because the right questions, asked at the right time, prevent the wrong outcomes.


PeopleSafetyLab

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
