
Saudi Arabia Artificial Intelligence Ethics Principles 2.0

Updated: Nov 1, 2023

In August 2022, SDAIA released version 1.0 of its AI Ethics Principles for consultation. Version 2.0 has now been published.

Here's a summary:

1. Application scope: all AI stakeholders designing, developing, deploying, implementing, using, or affected by AI systems within #KSA.

This covers public and private entities, non-profit organizations, researchers, workers, consumers, etc.

2. Creates 4 risk classification levels: Little or no risk, Limited risk, High risk, Unacceptable risk.

3. Restriction: AI systems that pose an “unacceptable risk” to people’s safety, livelihood, and rights, such as social profiling, exploitation of children, or distortion of behaviour, are not allowed.


4. An AI System Lifecycle defines each step that an organization is expected to follow to take advantage of #AI to derive practical business value.

5. The Framework sets out seven AI Ethics Principles, namely Fairness, Privacy & Security, Humanity, Social & Environmental Benefits, Reliability & Safety, Transparency & Explainability, and Accountability & Responsibility.

6. In version 1.0, the NDMO was named as the KSA AI Ethics owner that would create regulations and monitor compliance, and was mentioned 26 times. In version 2.0, all references to the NDMO have been removed, naming only #SDAIA.

7. The Framework sets out clear roles and responsibilities. The Head of the Entity or Chief Data Officer (required for public sector) is responsible for the AI Ethics practice.

8. Additional roles and responsibilities created include a Chief Compliance Officer (CCO), a Responsible AI Officer (RAIO), and an AI System Assessor.


9. The Framework offers some useful AI Ethics Tools.

10. An AI Ethics Checklist is provided.
