Explainability (and its sibling, interpretability) is arguably the most important new legal concept developed in response to highly complex artificial intelligence models trained on massive datasets with enormous computing power.

Explainability serves as a cornerstone of AI regulation. It matters because users show higher trust and a greater perception of fairness when algorithmic outputs and decisions are explained in terms they can understand. Lawmakers and regulators argue that explainability is both democratic and required for the rule of law: democratic because it ensures transparency, and necessary for the rule of law because it subjects algorithmic outputs to the principle of reasoned justification. The combined effect, they argue, is what enables effective oversight of artificial intelligence systems.
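To make the idea of reasoned justification concrete, here is a minimal Python sketch of what an explainable automated decision can look like. The model, feature names, weights, and threshold are all hypothetical, chosen only to illustrate the principle; real systems are far more complex and often require dedicated explanation techniques.

```python
# A minimal sketch of an "explainable" algorithmic decision. The linear
# credit-scoring model below is hypothetical and purely illustrative:
# every value here is an assumption, not a real scoring rule.

FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 2.0,
    "years_of_credit_history": 0.5,
    "recent_missed_payments": -3.0,
}
APPROVAL_THRESHOLD = 4.0  # hypothetical cutoff for approval


def decide_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return a decision plus per-feature reasons a user can inspect."""
    # Each feature's contribution is computed separately so the final
    # score can be decomposed into human-readable reasons.
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    approved = score >= APPROVAL_THRESHOLD
    # Tie the outcome to specific inputs, largest influence first, so the
    # decision is a reasoned justification rather than a black-box "no".
    reasons = [
        f"{name.replace('_', ' ')} contributed {value:+.1f} to the score"
        for name, value in sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        )
    ]
    return approved, reasons


approved, reasons = decide_and_explain(
    {
        "income_to_debt_ratio": 1.8,
        "years_of_credit_history": 6,
        "recent_missed_payments": 1,
    }
)
print("approved" if approved else "denied")
for reason in reasons:
    print("-", reason)
```

The design point is that the explanation is produced alongside the decision, not reconstructed after the fact: a user who is denied can see exactly which inputs drove the outcome, which is the kind of transparency regulators have in mind.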

Read on to learn more about this concept and what the future of explainability might look like.