
When the Algorithm Goes Wrong: Crisis Communications for AI Failures in KSA

Nora Al-Rashidi | March 7, 2026 | 12 min read

The call came on a Tuesday afternoon. A regional bank's loan-decisioning AI — deployed eighteen months earlier with considerable fanfare about modernizing financial inclusion — had been flagging applications from residents of specific postal codes at rates that, once a journalist's spreadsheet started circulating on X, looked a great deal like systematic geographic discrimination. Within six hours, the story had been picked up by three Saudi outlets. Within twelve, SAMA (the Saudi Central Bank) had made an inquiry. By the following morning, the bank's communications team was issuing a statement that managed, simultaneously, to say nothing useful, admit nothing, and alienate the very customers it needed to reassure.

The technical failure, it later emerged, was relatively contained. The model had been trained on historical lending data that encoded decades of neighborhood-level credit patterns. The engineers understood the problem, and they could fix it. What the bank could not fix quickly was the trust deficit created by forty-eight hours of organizational silence followed by language that felt, to affected customers, more like legal positioning than human accountability.

This is the real shape of an AI crisis. Not a dramatic system collapse, but a slow-building failure of communication that compounds a technical problem into a reputational one. For Saudi organizations operating at the intersection of ambitious digital transformation goals and a regulatory environment that is growing both more sophisticated and more watchful, the stakes of getting this wrong are not abstract.

The organizations that navigate AI failures well do not do so by accident. They have thought through the problem before it arrives.

The Anatomy of an AI Incident in KSA

AI failures present differently from traditional IT outages, and crisis communication teams trained on the latter are often unprepared for the former. A server goes down; the impact is immediate and visible. An algorithmic failure can persist for months before anyone notices — and when it surfaces, often through a journalist, a regulator, or a viral social media post, the organization faces a question it should have been asking internally: how long has this been happening?

That question is devastating in the wrong hands. It implies negligence, concealment, or incompetence — sometimes all three. Saudi organizations face this dynamic with additional layers of complexity. The PDPL (Personal Data Protection Law), supervised by SDAIA as the competent authority, imposes a 72-hour breach notification requirement when a data incident poses risk to the rights and interests of individuals. SDAIA also oversees AI governance and ethics at the national level and has developed an incident-reporting framework for algorithmic errors, bias issues, and ethics violations. The NCA (National Cybersecurity Authority) has its own reporting channels when an AI failure implicates system security. SAMA governs financial AI with particular attention to consumer protection.

Each of these regulators operates on its own timeline, its own documentation expectations, and its own definition of what constitutes a reportable incident. An organization that has not mapped these requirements before a crisis is not just unprepared — it is exposed.

The 72-hour PDPL window, in particular, creates pressure that most organizations do not appreciate until they are inside it. Seventy-two hours from discovery is not a long time when the discovery itself is ambiguous, when leadership is still being briefed, and when legal counsel has yet to assess liability exposure. Organizations that wait for certainty before notifying regulators will routinely miss the window. The better discipline is to notify early with what you know, and update as the picture clarifies.
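To make that discipline concrete, here is a minimal sketch of how a response team might track the notification clock from the moment of discovery. Only the 72-hour PDPL window comes from the requirements above; the interim milestones and their offsets are illustrative assumptions, not regulatory deadlines.

```python
from datetime import datetime, timedelta, timezone

# Illustrative milestones only. The 72-hour PDPL window is the one fixed
# point taken from the article; the interim offsets are assumptions.
MILESTONES = {
    "holding_statement_out": timedelta(hours=2),
    "regulator_pre_notification": timedelta(hours=24),
    "pdpl_notification_deadline": timedelta(hours=72),
}

def notification_schedule(discovered_at: datetime) -> dict:
    """Return absolute deadlines counted from the moment of discovery."""
    return {name: discovered_at + offset for name, offset in MILESTONES.items()}

def overdue(discovered_at: datetime, now: datetime = None) -> list:
    """List milestones whose deadline has already passed."""
    now = now or datetime.now(timezone.utc)
    return [name for name, due in notification_schedule(discovered_at).items() if now > due]

if __name__ == "__main__":
    discovered = datetime(2026, 3, 3, 14, 0, tzinfo=timezone.utc)
    for name, due in notification_schedule(discovered).items():
        print(f"{name}: {due:%Y-%m-%d %H:%M} UTC")
    print("Overdue now:", overdue(discovered))
```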

Before the Crisis: The Infrastructure of Readiness

Crisis communication does not begin when the incident begins. It begins months earlier, in the form of decisions that will determine whether an organization responds or merely reacts.

The most important of these is team architecture. A crisis communication team for AI incidents needs to span disciplines in ways that feel unnecessary until they are not: a senior executive with genuine decision-making authority, a technical lead who can explain model behavior to a non-technical audience without either oversimplifying or retreating into jargon, legal counsel who understands both Saudi regulatory requirements and the specific liabilities that attach to AI systems, and a communications lead who has thought about the difference between a press release and an actual human explanation. The team also needs someone who owns the regulator relationship — not just as a compliance function, but as an ongoing professional relationship built before there is anything urgent to discuss.

What this team produces, in advance, is a set of pre-approved communication frameworks: draft language for acknowledgment statements, for regulatory notifications, for customer communications at different severity levels. These frameworks are not scripts. They are architectures — placeholder language that can be completed quickly when facts are scarce and time is short, without requiring that every sentence be litigated from scratch in a conference room while an incident is unfolding.
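One way to keep those architectures ready is to hold them as fill-in-the-blank templates under version control, so the team completes placeholders rather than drafting sentences under pressure. The sketch below is a minimal example; the severity tiers, wording, and placeholder names are all illustrative assumptions rather than anyone's approved language.

```python
from string import Template

# Illustrative acknowledgment skeletons by severity tier. Tiers, wording,
# and placeholder names are assumptions, not pre-approved text.
HOLDING_STATEMENTS = {
    "high": Template(
        "We are aware of an issue affecting $affected_service that may have "
        "impacted some customers. We take this seriously, are investigating "
        "now, and will share a further update by $next_update_time. "
        "Affected customers can reach us at $contact_channel."
    ),
    "medium": Template(
        "We have identified a problem with $affected_service and are "
        "investigating its scope. A further update will follow by "
        "$next_update_time."
    ),
}

def draft_holding_statement(severity: str, **facts: str) -> str:
    """Fill a pre-approved skeleton with the facts known right now.

    safe_substitute leaves any still-unknown placeholder visible rather
    than raising, so gaps are obvious to the reviewer.
    """
    return HOLDING_STATEMENTS[severity].safe_substitute(**facts)

if __name__ == "__main__":
    print(draft_holding_statement(
        "high",
        affected_service="loan application decisions",
        next_update_time="18:00 AST today",
        contact_channel="our dedicated support line",
    ))
```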

The frameworks also force a clarifying conversation before any crisis arrives: what does this organization believe it owes its customers when an AI system fails them? That question — not legal strategy, not PR positioning, but organizational values — is the one that will determine how communication lands when it matters.

Communicating When It Counts

The first two hours of an AI crisis are, in many ways, the most consequential. This is when the holding statement needs to go out — not a full explanation, which will not yet be possible, but an acknowledgment that the organization is aware of the issue, takes it seriously, and is actively investigating. Silence in this window is not neutral. It is interpreted, usually uncharitably, as concealment or indifference.

The holding statement is not the place for precision. It is the place for humanity. A brief, direct acknowledgment that something has gone wrong, that people may be affected, and that the organization is treating this urgently — this buys credibility that technical accuracy, delivered two days later, cannot recover. Organizations that lead with legal qualifications ("we cannot confirm at this time") or deflection ("our system performed within expected parameters") are telling affected parties, in effect, that institutional protection matters more than they do. In a cultural context where organizational trust is relational and interpersonal, not just transactional, that message has lasting consequences.

Within 24 hours, for high-severity incidents, the communication expands to regulators. Regulatory notification under the PDPL is not a confession of wrongdoing. It is a procedural obligation that, when met promptly, demonstrates organizational competence and good faith. Regulators who receive timely, substantive notifications are working with an organization. Regulators who discover incidents through media reports are investigating one. The distinction matters enormously for how the subsequent process unfolds.

Customer communications at this stage need to answer three questions clearly: what happened, what it means for them specifically, and what the organization is doing about it. The temptation to be vague on the first question, to protect against legal exposure, almost always backfires. Customers who receive vague explanations fill in the gaps with the worst available interpretation. Concrete language — even if it means acknowledging an error directly — builds more trust than careful ambiguity.
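Those three questions can also serve as a pre-send checklist. The sketch below is one hypothetical way to encode it; the field names and the list of hedging phrases it flags are assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class CustomerNotice:
    """The three questions a customer notice must answer.

    Field names are illustrative; the point is that a notice is not
    ready until every answer is concrete.
    """
    what_happened: str          # concrete description of the failure
    what_it_means_for_you: str  # specific impact on this customer group
    what_we_are_doing: str      # remediation steps and how to get help

    # Example hedging phrases worth flagging; an assumption, not a canon.
    VAGUE_MARKERS = ("may have", "cannot confirm", "expected parameters")

    def review_flags(self) -> list:
        """Flag answers that are empty or lean on defensive language."""
        flags = []
        for field_name, text in vars(self).items():
            if not text.strip():
                flags.append(f"{field_name}: missing")
            elif any(marker in text.lower() for marker in self.VAGUE_MARKERS):
                flags.append(f"{field_name}: reads as vague or defensive")
        return flags

if __name__ == "__main__":
    notice = CustomerNotice(
        what_happened="Our loan model scored some applications incorrectly "
                      "between January and March.",
        what_it_means_for_you="",
        what_we_are_doing="We are re-reviewing affected applications and "
                          "will contact you directly within ten days.",
    )
    print(notice.review_flags())
```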

As the incident progresses into the days-long phase of investigation and remediation, the communication challenge shifts from acknowledgment to maintenance. Organizations need to provide regular updates even when there is nothing substantively new to report. "We are still investigating and will have more information by Thursday" is a meaningful update. It signals that the organization has not forgotten, has not moved on, and is treating the matter with ongoing seriousness. What erodes trust at this stage is not the absence of resolution — people understand that complex technical problems take time — but the absence of attention.

The Post-Incident Report as Accountability Document

When the incident is resolved, the communication work is not done. For significant incidents — anything that affected a meaningful number of people, triggered regulatory attention, or involved a system failure with safety implications — a post-incident report is both a regulatory expectation and an organizational opportunity.

The post-incident report, done well, is not a defensive document. It is an analytical one. It explains what the system was supposed to do, what it actually did, how the discrepancy was discovered, and what has changed to prevent recurrence. It should be honest about root causes, including uncomfortable ones: inadequate training data, insufficient bias testing, gaps in human oversight, deployment timelines that outpaced governance frameworks. The organizations that write these reports candidly tend to find that stakeholders — including regulators — respond to them as evidence of maturity rather than sources of additional liability.

SDAIA's incident reporting framework, aligned with its broader AI ethics principles, specifically expects this kind of analytical transparency. It is not enough to report that an incident occurred and was fixed. The expectation is that the organization can explain what its AI governance failures were and what structural changes have been made in response. Organizations that meet this expectation credibly have, in effect, turned a crisis into a demonstration of governance capacity.
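A rough skeleton of such a report might look like the sketch below, with section names drawn from the structure described above. The ordering and field names are assumptions about how an organization might standardize the document internally, not a format mandated by SDAIA.

```python
# Illustrative post-incident report outline; section names follow the
# structure described in this article, the rest is an assumption.
POST_INCIDENT_REPORT_SECTIONS = [
    ("intended_behaviour", "What the system was supposed to do"),
    ("observed_behaviour", "What the system actually did"),
    ("discovery", "How the discrepancy was discovered, and when"),
    ("root_causes", "Root causes, including governance gaps "
                    "(training data, bias testing, human oversight)"),
    ("remediation", "What has changed to prevent recurrence"),
    ("governance_changes", "Structural changes to AI governance made in response"),
]

def report_template() -> str:
    """Render an empty report outline for the review team to complete."""
    lines = ["Post-Incident Report", "====================", ""]
    for key, heading in POST_INCIDENT_REPORT_SECTIONS:
        lines += [heading, "-" * len(heading), f"[{key}: to be completed]", ""]
    return "\n".join(lines)

if __name__ == "__main__":
    print(report_template())
```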

KSA-Specific Dimensions

Saudi Arabia's cultural and regulatory context shapes crisis communication in ways that organizations importing frameworks from other markets often underestimate.

The bilingual expectation matters more than it might appear. Primary communications in Arabic — not translations produced after the fact, but Arabic-first documents that reflect the actual language and register of Saudi institutional communication — signal that the organization considers its Saudi stakeholders primary, not secondary. Organizations that issue polished English-language statements with workmanlike Arabic translations are communicating something they did not intend to communicate about where their priorities actually lie.

The relationship between AI adoption and Vision 2030 creates a particular framing opportunity and risk. Saudi authorities have invested significantly in positioning the Kingdom as a serious, responsible actor in AI governance. An organization that, in its crisis communication, frames its response in terms of this shared national project — demonstrating how it is upholding the standards that Saudi Arabia's AI ambitions require — is speaking a language that resonates with regulators, with the media, and with sophisticated stakeholders. An organization that treats its crisis as a purely technical or legal problem, without engaging this broader context, misses the register entirely.

There is also the question of timing and channel. Crisis communications sent during prayer times, or delivered through channels that do not reach the relevant audiences, or scheduled without attention to the rhythms of Saudi organizational life, create friction that is entirely avoidable. These are not exotic considerations. They are the basic work of communicating in context.

What Organizations Get Wrong

The most common failure is not malice. It is the mistaken belief that the goal of crisis communication is to minimize the organization's exposure rather than to actually help the people affected.

This belief produces communications that are technically accurate but humanly useless. Passive voice constructions that diffuse accountability. Timelines that are precise about when the investigation was initiated but vague about what people should actually do. Statements that express concern for affected parties while providing no mechanism for those parties to get answers or relief. Legal language that reads, correctly, as self-protection.

The second most common failure is treating communication as separate from response. The organizations that communicate best during crises are the ones where the communications team is in the room where technical and operational decisions are being made, not briefed after the fact. Communication is not a coating applied to decisions made elsewhere. It is a function that shapes decisions, because it forces the question of how those decisions will be understood and experienced by the people they affect.

The third failure is underestimating the regulatory sophistication of the KSA landscape. PDPL, SDAIA, NCA, SAMA — these are not bureaucratic checkbox exercises. They reflect genuine policy intentions around data rights, algorithmic accountability, and organizational responsibility for AI systems. Organizations that approach them transactionally, doing the minimum required to satisfy the letter of notification requirements, find that regulators are quite capable of distinguishing between compliance and accountability.

The Discipline of Preparation

The organizations that handle AI crises well share a characteristic that becomes visible only in retrospect: they treated crisis preparation as a governance investment rather than an insurance purchase. They built their crisis communication teams not because they expected to need them soon, but because they understood that deploying AI systems without that infrastructure was an organizational risk they were not willing to carry.

That preparation manifests as speed when a crisis arrives. The holding statement goes out in forty minutes instead of four hours, because the language was drafted and approved three months ago. The regulatory notification is filed within the 72-hour window, because the reporting template exists and the regulator relationship is already established. The customer communication is empathetic and specific, because the organization has already had the internal conversation about what it owes the people its systems serve.

AI systems will fail. The question is not whether your organization will face a crisis, but whether, when it does, your communication will make the situation better or worse. For Saudi organizations navigating the genuine complexity of digital transformation at scale, the answer to that question is increasingly a matter of governance — and governance, in this domain as in others, is the work you do before anyone is watching.


Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.


Nora Al-Rashidi

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
