SDAIA's Three Pillars: What Human-Centric, Ethical, and Secure Actually Mean in Practice
SDAIA published its three-pillar AI framework — Human-Centric, Ethical, Secure — in 2023. Most organizations treated it as a compliance checklist. The organizations that understood it as a design constraint built different systems.
That distinction matters more than it might seem. A compliance checklist tells you what boxes to check before launch. A design constraint shapes the architecture of a system from its first line of code. Organizations that read the three pillars as a checklist ended up with governance documentation layered on top of systems that were not designed for oversight. Organizations that read them as constraints built systems where transparency, fairness, and security were baked into the logic — not appended to it.
SDAIA — the Saudi Data and AI Authority, established in 2019 — is the central body responsible for data and AI strategy in Saudi Arabia. Its three-pillar framework emerged from the National Strategy for Data and AI (NSDAI) and represents the Kingdom's formal articulation of what responsible AI looks like in a Saudi context. Understanding what that context actually requires is where most analysis falls short.
The Argument the Three Pillars Are Making
The three pillars are not three separate requirements. They are one interconnected argument: that trustworthy AI, in Saudi Arabia specifically, must simultaneously serve human needs, operate according to shared moral values, and remain protected against manipulation and breach. Pull any one of those requirements out of the system and the remaining two become harder to sustain.
This is not obvious from reading the framework document in isolation. The pillars are presented as distinct categories, and organizations naturally process them that way — assigning Human-Centric to the product team, Ethical to legal and compliance, Secure to the security engineers. The result is three parallel workstreams that rarely talk to each other, producing AI systems that check all three boxes in documentation but fail the underlying test in practice.
The underlying test is simpler than any checklist: does a Saudi citizen, interacting with this AI system, have reason to trust it? Trust in this context is not a feeling. It is a verifiable property — the product of systems that are transparent enough to understand, fair enough to rely on, and secure enough to use without fear.
What "Human-Centric" Actually Means Here
The phrase "human-centric AI" appears in governance frameworks around the world. In most contexts, it means something relatively narrow: keep a human in the loop for high-stakes decisions, build explainability into your models, design interfaces that are usable.
In Saudi Arabia, the question is more specific. What does Human-Centric mean when your AI is making decisions about Saudi citizens, in Arabic, in a society where the relationship between individuals and state institutions carries particular cultural weight?
The SDAIA framework is explicit that AI systems should augment human decision-making rather than replace it, and that individuals retain the right to challenge or override AI recommendations. This is standard language. What is less standard is the implication for language and cultural design. An AI system that communicates in formal Modern Standard Arabic but makes eligibility decisions affecting citizens who speak regional dialects is not, in any meaningful sense, human-centric — even if it passes every technical explainability test. A system that offers a "human review" option but places that option four screens deep in a government portal is not preserving human agency in practice.
Human-Centric, understood correctly, is a systems design problem. It requires asking not just whether humans can intervene, but whether intervention is realistically accessible to the specific humans who are subject to the system's decisions. That question looks different in Riyadh than it does in Brussels or Washington.
The transparency requirement runs in both directions. Citizens need to understand when AI is influencing decisions that affect them. But operators and overseers also need to understand how their AI systems are behaving at scale — where they are confident, where they are uncertain, and where their outputs are diverging from intended outcomes. Human-Centric AI requires interpretability infrastructure that serves both audiences.
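To make that dual-audience requirement concrete, the sketch below shows one way a single decision record could carry both a citizen-facing explanation and the operator-facing signals (confidence, uncertainty at scale) the framework implies. It is illustrative Python only, not anything SDAIA prescribes; every field name and threshold is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from statistics import mean

@dataclass
class DecisionRecord:
    """One AI-influenced decision, captured for two audiences at once."""
    case_id: str
    model_version: str
    outcome: str               # what the citizen sees
    explanation: str           # plain-language rationale, citizen-facing
    confidence: float          # model's predicted probability, operator-facing
    top_factors: list = field(default_factory=list)  # e.g. feature attributions
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def uncertainty_report(records, low_confidence=0.6):
    """Operator-facing aggregate: where is the model uncertain at scale?"""
    flagged = [r for r in records if r.confidence < low_confidence]
    return {
        "total": len(records),
        "mean_confidence": mean(r.confidence for r in records) if records else None,
        "low_confidence_cases": [r.case_id for r in flagged],
    }
```

The structural point is that both audiences read from the same record, so the citizen-facing explanation cannot silently drift out of sync with what operators and overseers observe.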
What "Ethical" Means When You Have Multiple Oversight Frameworks
Ethical AI is the pillar most organizations think they understand and most frequently get wrong. The common mistake is to treat ethics as a set of universal principles — fairness, accountability, privacy — and then demonstrate compliance with those principles through audits and documentation. The SDAIA framework expects more.
Saudi Arabia operates with a layered ethical and legal environment that has no direct equivalent in Western regulatory frameworks. PDPL — the Personal Data Protection Law — sets baseline requirements for how personal data can be collected, processed, and retained, including for AI model training and inference. Those requirements operate alongside a further layer of institutional oversight: some organizations maintain Shariah Supervisory Boards that review whether specific products and services conform to Islamic principles. The relationship between these frameworks is not always clearly delineated.
What this means in practice is that "Ethical AI" in a Saudi context requires organizations to map which oversight frameworks apply to their specific use case and then demonstrate that their AI systems are compatible with all of them — not just the ones that feel most familiar.
Fairness is one of the clearest pressure points. Algorithmic bias audits are a standard tool, but the protected characteristics that matter in a Saudi context are not identical to those emphasized in American or European fairness literature. Gender, nationality, age, and geographic origin all carry regulatory and cultural significance. An AI system trained primarily on data from urban populations in Riyadh and Jeddah may perform differently for citizens in the Eastern Province or Asir — and if that performance differential affects access to services, it is an ethical problem whether or not it was intentional.
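At minimum, auditing for that kind of differential reduces to measuring outcomes per subgroup and flagging gaps. The sketch below is a minimal version, assuming labeled decisions are available; the region names, threshold, and numbers are invented for illustration.

```python
from collections import defaultdict

def subgroup_rates(decisions):
    """decisions: iterable of (subgroup, correct: bool) pairs.
    Returns per-subgroup accuracy so differentials become visible."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / totals[g] for g in totals}

def flag_differentials(rates, max_gap=0.05):
    """Flag subgroup pairs whose performance gap exceeds max_gap."""
    groups = sorted(rates)
    return [(a, b, abs(rates[a] - rates[b]))
            for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]

# Invented data: 95% accuracy for one region, 82% for another.
sample = ([("Riyadh", True)] * 95 + [("Riyadh", False)] * 5
          + [("Asir", True)] * 82 + [("Asir", False)] * 18)
print(flag_differentials(subgroup_rates(sample)))  # [('Asir', 'Riyadh', 0.13...)]
```

A real audit would use metrics appropriate to the decision (approval rates, false-negative rates) and subgroups drawn from the regulatory context above, but the underlying point stands: the differential is invisible unless the pipeline was built to compute it.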
Accountability is the other place where the framework's requirements are more demanding than they first appear. SDAIA expects clear assignment of responsibility for AI outcomes — not just documentation of who owns the model, but traceable accountability for what the model does. In practice, this means audit trails robust enough to explain, after the fact, why a specific AI recommendation was made and what data drove it. For organizations deploying AI at scale, this is a significant engineering requirement that needs to be built into the system architecture from the start.
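What such an audit trail might look like at the record level: the sketch below appends tamper-evident entries, each hashing its predecessor, so the question of why a recommendation was made and on what data has a verifiable answer after the fact. This is one possible construction, not a SDAIA-mandated format; all field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(log, *, case_id, model_version, input_snapshot,
                       recommendation, data_sources):
    """Append a tamper-evident entry. Each entry hashes its predecessor,
    so any after-the-fact edit breaks the chain and is detectable.
    input_snapshot and data_sources must be JSON-serializable."""
    entry = {
        "case_id": case_id,
        "model_version": model_version,
        "input_snapshot": input_snapshot,   # the data that drove the decision
        "recommendation": recommendation,
        "data_sources": data_sources,       # provenance of the inputs
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": log[-1]["hash"] if log else None,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()
    log.append(entry)
    return entry
```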
What "Secure" Means for AI Specifically
The Secure pillar is where organizations most often default to treating the SDAIA framework as an extension of their existing cybersecurity program. The National Cybersecurity Authority's Essential Cybersecurity Controls (NCA ECC) establish the baseline security requirements for systems operating in Saudi Arabia, and most organizations with mature security practices are already working toward ECC compliance. The Secure pillar in SDAIA's framework builds on that foundation.
But AI systems introduce security considerations that standard ECC implementations do not fully address. Data poisoning — the deliberate manipulation of training data to corrupt a model's behavior — is an AI-specific attack vector. Model extraction — reconstructing a proprietary model by querying its outputs — is another. Adversarial inputs designed to cause systematic misclassification are a third. These are not theoretical concerns; they are documented attack methods used against deployed AI systems.
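Model extraction, for example, typically shows up first as anomalous query behavior. The sketch below is a deliberately crude volume-based monitor, offered only to make the attack surface concrete; real detection would also examine input-space coverage and query patterns, and the thresholds here are invented.

```python
import time
from collections import Counter, deque

class ExtractionMonitor:
    """Crude model-extraction signal: flag clients whose query volume in a
    sliding window far exceeds the norm. Volume is a weak proxy; real
    detection would also examine input-space coverage and query patterns."""

    def __init__(self, window_seconds=3600, threshold=1000):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()   # (timestamp, client_id), oldest first

    def record(self, client_id, now=None):
        now = time.time() if now is None else now
        self.events.append((now, client_id))
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def flagged_clients(self):
        counts = Counter(client for _, client in self.events)
        return [client for client, n in counts.items() if n > self.threshold]
```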
The Secure pillar requires organizations to think about security across the full AI lifecycle: not just protecting the infrastructure that hosts the model, but protecting the integrity of the training data, the model itself, and the inference pipeline. An AI system that makes eligibility decisions and is susceptible to adversarial manipulation is not a secure system, even if its hosting environment is fully NCA ECC-compliant.
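Protecting integrity across the lifecycle starts with something as plain as content-hashing every artifact the model depends on. A minimal sketch, assuming artifacts are files on disk and a manifest of hashes was recorded at training time:

```python
import hashlib

def sha256_file(path):
    """Content hash of an artifact (dataset shard, model weights, config)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest):
    """manifest: {path: sha256 recorded at training time}.
    Returns artifacts whose current hash no longer matches, i.e.
    silent modification between training and deployment."""
    mismatched = {}
    for path, expected in manifest.items():
        actual = sha256_file(path)
        if actual != expected:
            mismatched[path] = actual
    return mismatched
```

Hashing does not detect poisoning introduced before the manifest was written; it only guarantees that what was audited is what is deployed. Provenance controls upstream of the manifest are a separate, harder problem.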
The interaction between security and the other two pillars is direct. A model whose outputs can be manipulated through adversarial inputs is not human-centric — it cannot be trusted to serve the citizens it was designed for. A model whose training data can be poisoned is not ethical — its fairness guarantees are only as strong as the integrity of the data they were measured against. Security failures are not contained within the Secure pillar. They propagate.
Why the Pillars Fail When Treated as Silos
The practical consequence of treating the three pillars as independent workstreams is that each pillar ends up being implemented in a way that is locally coherent but globally fragile.
Consider a typical implementation. The product team builds explainability features and a human review interface — Human-Centric box checked. Legal conducts a fairness audit and documents accountability processes — Ethical box checked. Security deploys the system within the NCA ECC control framework — Secure box checked. On paper, all three pillars are satisfied.
But no one has asked whether the explainability features are meaningful to the actual users of the system. No one has verified that the fairness audit covered the relevant demographic subgroups for this specific deployment context. No one has tested whether the model's outputs are robust to adversarial inputs that could systematically bias the human review queue toward particular outcomes. The documentation is compliant. The system is not trustworthy.
The integrated version of three-pillar implementation looks different at every stage of the development process. Security considerations inform the design of the human oversight interface — because an oversight interface that can be manipulated is not oversight. Fairness requirements inform the security threat model — because demographic bias can be introduced through data poisoning as well as through flawed training data. Transparency requirements inform the audit trail architecture — because the same logging infrastructure that enables human oversight also enables post-incident forensics.
A Hypothetical That Illustrates the Integration
Hypothetical scenario: A Saudi government agency deploying AI to assess eligibility for a social support program would face all three pillar requirements simultaneously — and the requirements would interact in ways that a siloed implementation would miss.
The Human-Centric requirement means the system must be intelligible to caseworkers who will review AI recommendations, and accessible to citizens who may request human review of decisions affecting them. This immediately has security implications: the human review interface is a high-value target, because an adversary who can influence which cases are flagged for human review — or which reviewer handles them — can influence outcomes at scale. The design of the oversight mechanism must account for this.
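One concrete control for that risk, sketched below, is keyed-hash reviewer assignment: deterministic enough to replay in an audit, but unpredictable to anyone without the key, so neither an applicant nor an insider can steer a case toward a chosen reviewer. This is an illustrative mitigation under assumed names, not something the framework prescribes.

```python
import hashlib
import hmac

def assign_reviewer(case_id: str, reviewers: list[str], secret_key: bytes) -> str:
    """Keyed-hash assignment: deterministic (auditable after the fact) but
    unpredictable without secret_key, so the case-to-reviewer mapping
    cannot be gamed by whoever submits or routes the case."""
    digest = hmac.new(secret_key, case_id.encode("utf-8"), hashlib.sha256).digest()
    return reviewers[int.from_bytes(digest[:8], "big") % len(reviewers)]

# Illustrative usage; key management is the real operational burden.
print(assign_reviewer("case-1042", ["reviewer_a", "reviewer_b", "reviewer_c"],
                      secret_key=b"demo-key-do-not-use"))
```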
The Ethical requirement means the system must be audited for bias across the full range of applicant demographics — region, nationality, age, gender, family composition. This immediately has data implications: the training data must be representative, and its integrity must be verifiable. If the training data was assembled without adequate controls, the fairness audit is measuring the model's performance on potentially corrupted ground truth. The data provenance controls required by the Secure pillar are prerequisites for the fairness guarantees required by the Ethical pillar.
The Secure requirement means the system's data handling must comply with PDPL — purpose limitation, data minimization, defined retention periods, subject rights. This immediately has a Human-Centric implication: the right of applicants to request human review of their cases, and to receive an explanation of how the AI assessed their application, requires that relevant data be retained long enough for that explanation to be meaningful. The retention policy is not just a compliance decision; it is a design decision that affects how the transparency obligation can be fulfilled.
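That coupling can be made explicit in configuration. A minimal sketch, with invented retention periods, that treats "retention must cover the review window" as a checkable invariant rather than a policy footnote:

```python
from datetime import timedelta

# Illustrative values only: actual periods must come from PDPL analysis
# and the program's own appeal rules, not from this sketch.
RETENTION_POLICY = {
    "decision_record": timedelta(days=365),   # must outlast the appeal window
    "input_snapshot": timedelta(days=365),    # needed to explain the decision
    "raw_application": timedelta(days=90),    # minimized once processed
}
APPEAL_WINDOW = timedelta(days=180)

def validate_policy(policy, appeal_window):
    """Retention shorter than the appeal window makes the explanation
    obligation impossible to fulfil; flag any such field."""
    return [name for name in ("decision_record", "input_snapshot")
            if policy[name] < appeal_window]

assert validate_policy(RETENTION_POLICY, APPEAL_WINDOW) == []
```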
None of these interactions would surface if each pillar were reviewed independently. All of them would be visible to a team that understood the three pillars as one interconnected argument.
What an Organization That Gets This Right Looks Like
It does not look like an organization with a large compliance team and extensive governance documentation, though it may have both. It looks like an organization where the people building AI systems understand the governance requirements well enough to make implementation decisions — not just the people reviewing those systems after the fact.
The clearest signal is where governance enters the development process. In organizations that treat the three pillars as a checklist, governance enters at the end: audit the model, document the controls, get sign-off before launch. In organizations that treat them as design constraints, governance enters at the beginning: what are the transparency requirements for this use case, what fairness guarantees do we need to make and over what population, what security threat model applies to this specific deployment.
That difference in sequencing changes what is possible. Explainability is far easier to build in from the start than to retrofit onto an existing model. Fairness across demographic subgroups is far easier to validate when the training data pipeline was designed with that validation in mind. Security controls designed for AI-specific attack vectors are far more robust when they are part of the initial architecture than when they are layered on afterward.
The three pillars, understood as design constraints, produce organizations that build AI differently. The Human-Centric requirement produces systems that are genuinely transparent to the humans who use and are affected by them — not just technically explainable, but meaningfully so. The Ethical requirement produces systems whose fairness guarantees are grounded in verified data and tested against realistic demographic subgroups. The Secure requirement produces systems whose integrity is protected across the full lifecycle, not just at the perimeter.
These are not documentation outcomes. They are engineering outcomes. That distinction is what SDAIA's framework is asking organizations to take seriously — and what separates the organizations that are building trustworthy AI in Saudi Arabia from those that are only documenting it.
Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.