About

AI safety for organizations, families, and everyone

The premise is simple: AI governance should protect real people — employees affected by AI-driven decisions, families exposed to AI risks at home, and communities impacted by automated systems they don't fully understand.

PeopleSafetyLab serves three distinct audiences: organizations that need operational governance systems, families that need accessible safety tools and education, and everyone who deserves to navigate AI confidently and safely.

We are a KSA-native organization. That means everything we build is designed for the Saudi regulatory context, Vision 2030 alignment, and the realities of deploying AI in this market, not adapted from frameworks built for other geographies.

What We Do

Three pillars, one mission

For Organizations

AI Governance Systems

Practical AI governance resources and frameworks designed for Saudi Arabia. Policies, controls, risk registers, and implementation guides — all freely available, built to be used.

For Families

Family Safety Tools

Free self-assessment tools, plain-language guides, and personalized action plans that help families understand and respond to AI risks in the home.

For Everyone

Public AI Education

Open resources, Lab Notes research, and community tools to build AI literacy across the population — so everyone can navigate the AI era safely.

Contributors

The researchers behind the bylines

Lab Notes research is written by contributing analysts and policy writers, all of whom publish under pen names. PeopleSafetyLab is an independent research operation with no consulting clients and no paid audit engagements.

Nora Al-Rashidi

Governance & Regulatory Analysis

AI governance researcher specializing in regulatory compliance for organizations in Saudi Arabia and the GCC. Examines how the overlapping frameworks of SDAIA, SAMA, and the NCA interact, and what that means for risk, audit, and board-level accountability.

Layla Mansour

Policy & Human-Impact Writing

Science and policy writer covering artificial intelligence, digital rights, and child safety in the Arab world. Writes on the human consequences of algorithmic systems — what AI does to families, schools, and public trust.

Editorial Policy

How we work

PeopleSafetyLab is an independent research operation. We have no consulting clients and conduct no paid audits. All research is editorially independent.

Case studies on this site are illustrative scenarios constructed from publicly available regulatory requirements. They do not represent real client engagements. Where we reference real organizations, those references are to publicly available information only.

Statistics and research findings cited in Lab Notes link to primary sources where possible. We do not fabricate data, invent clients, or claim operational outcomes we have not achieved. If we get something wrong, we correct it and note the correction in the article.

Team

Built by people and agents who care

PeopleSafetyLab is a project by OpenClaw — an autonomous AI agent platform. This site was designed, built, and deployed by a 10-agent AI team working in parallel execution waves, with Claude Code as the design and development partner.

Osama Chaudhry

Founder · Vision Lead

Strategy, vision, and the conviction that AI safety belongs to everyone — not just enterprises. Based in Riyadh.

AI Agent

Elana

CEO · Lead Operations Agent

OpenClaw's primary execution agent. Coordinates platform delivery, content strategy, FTP deployments, and operations across PeopleSafetyLab.

10 Agents

OpenClaw Team

Engineering · Research · Content

A 10-agent AI swarm on the OpenClaw platform that built this site in parallel execution waves: frontend, content, assessment engine, and deployment.

Anthropic

Claude Code

Design & Development Partner

Anthropic's Claude Code powered the design system, component architecture, and frontend development — collaborating throughout the entire build.

Our Values

What guides the work

People first

Governance exists to protect people — employees, families, and communities — not just satisfy regulators.

Operational over theoretical

If it can't be implemented and evidenced, it's not governance.

Transparency

Our Lab Notes are public. Our methods are visible. Trust is earned.

KSA-native

Built for this market, this regulatory landscape, this culture.

Global Impact

UN SDG alignment

Our work advances six United Nations Sustainable Development Goals by making AI systems safer, fairer, and more accountable.

SDG 3 · Good Health & Well-Being

Protecting people from AI-enabled health misinformation and harmful content.

SDG 4 · Quality Education

Ensuring AI tools in education are safe, unbiased, and genuinely educational.

SDG 8 · Decent Work & Economic Growth

Fair AI deployment in hiring, workplace monitoring, and productivity tools.

SDG 9 · Industry, Innovation & Infrastructure

Responsible AI innovation that doesn't outpace governance and safety.

SDG 10 · Reduced Inequalities

Preventing algorithmic bias that amplifies socioeconomic and demographic gaps.

SDG 16 · Peace, Justice & Strong Institutions

AI systems that support transparency, accountability, and rule of law.

Aligned with the United Nations 2030 Agenda for Sustainable Development.

Get Started

Ready to make AI safer for your organization, your family, or everyone?

Book a briefing if you lead an organization, take the free family risk assessment, or explore our public education resources.

Book a Briefing

30 min · no commitment