You Decide AP-2.2

Show Me How You Decided

If AI makes a decision about you, you deserve to know why.

An AI decision without reasoning is often just a digital gatekeeper. AP-2.2 requires transparent decision chains instead of black-box outputs. [1][2]

What This Means

This policy means AI must explain decisions in a way affected people can understand and challenge: not raw model internals, but the key factors, the system's uncertainty, and a traceable chain of reasoning. Without that chain, there is no meaningful accountability.
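To make this concrete, here is a minimal sketch of a machine-readable explanation record a decision system could return alongside its output. The structure and field names are illustrative assumptions, not part of the AP-2.2 specification.

    from dataclasses import dataclass

    # A sketch of an explanation record returned alongside a decision.
    # Field names are illustrative assumptions, not AP-2.2 spec text.
    @dataclass
    class DecisionExplanation:
        outcome: str                      # the decision itself, e.g. "loan_denied"
        factors: list[tuple[str, float]]  # key factors and their relative weights
        confidence: float                 # the system's confidence, 0.0 to 1.0
        model_version: str                # which model version made the decision
        review_url: str                   # where the affected person can appeal

    explanation = DecisionExplanation(
        outcome="loan_denied",
        factors=[("debt_to_income_ratio", 0.41), ("credit_history_length", 0.22)],
        confidence=0.73,
        model_version="risk-model-2024-06",
        review_url="https://example.org/appeals/12345",  # placeholder URL
    )

Each field maps to one of the policy's demands: factors for understandability, confidence for uncertainty, and model_version plus review_url for traceability and the ability to challenge.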

A Real-World Scenario

A student is flagged by an AI proctoring tool as a high cheating risk. Today, she often receives only a label, not a reason. With AP-2.2, the system must disclose the key indicators behind the flag, its uncertainty, and a pathway for human review, as sketched below. Without AP-2.2, she is left to fight a black box.
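The sketch below shows how such a flag could be rendered as a plain-language notice rather than a bare label. The record format, indicator names, weights, and URL are invented for illustration.

    # A sketch of rendering a structured flag into the plain-language notice
    # the student would see. Keys, indicators, and the URL are hypothetical.
    def render_notice(record: dict) -> str:
        lines = [f"Decision: {record['outcome']} (confidence {record['confidence']:.0%})"]
        lines += [f"  - {name} (weight {weight:.2f})" for name, weight in record["factors"]]
        lines.append(f"Request human review: {record['review_url']}")
        return "\n".join(lines)

    print(render_notice({
        "outcome": "flagged_high_cheating_risk",
        "confidence": 0.64,
        "factors": [("gaze_off_screen_ratio", 0.38), ("window_switch_count", 0.21)],
        "review_url": "https://example.edu/proctoring-review",
    }))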

Why It Matters to You

Opaque systems usually hurt people with the least time, money, or legal support first. AP-2.2 turns a final verdict back into a reviewable process. That is the difference between accountability and technical deflection. [1][3]

If We Do Nothing...

If we do nothing, "computer says no" becomes a default social experience. In a world approaching AGI, where automated systems are deeply interconnected, the impact multiplies as many systems reuse the same non-transparent logic. AP-2.2 preserves traceability as a core safeguard. [1][3]

For the technically inclined

AP-2.2: Transparent Decision Chains

AI decision processes must be explainable and traceable. Stakeholders should be able to understand how an AI system arrived at a given output or recommendation.
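As one possible shape for that traceability, the sketch below logs every stage of a decision pipeline with a digest of its inputs, so a reviewer can reconstruct the chain end to end. The class, stage names, and fields are assumptions for illustration, not a format prescribed by AP-2.2.

    import hashlib
    import json
    import time

    # A minimal sketch of a traceable decision chain: each stage appends an
    # auditable entry. Stage names and fields are illustrative assumptions.
    class DecisionChain:
        def __init__(self) -> None:
            self.entries: list[dict] = []

        def record(self, stage: str, inputs: dict, output) -> None:
            digest = hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append({
                "stage": stage,
                "input_digest": digest,   # proves which inputs this stage saw
                "output": output,
                "timestamp": time.time(),
            })

        def trace(self) -> str:
            return json.dumps(self.entries, indent=2)

    chain = DecisionChain()
    chain.record("feature_extraction", {"applicant_id": 42}, {"dti": 0.41})
    chain.record("risk_scoring", {"dti": 0.41}, {"score": 0.73})
    chain.record("final_decision", {"score": 0.73, "threshold": 0.60}, "denied")
    print(chain.trace())

Hashing the inputs at each stage means a reviewer can verify what each component actually saw, without the log having to store sensitive raw data.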

What You Can Do

When an AI-backed system makes a decision about you, ask for the main decision factors and an indication of the system's uncertainty. If those are missing, the result is not meaningfully reviewable.

Sources & References

  [1] AIPolicy Policy Handbook, AP-2.2 Transparent Decision Chains. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/policy-handbook.md?ref_type=heads
  [2] AIPolicy Categories: Decision Authority. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/categories.md?ref_type=heads
  [3] Kleinberg, Mullainathan, and Raghavan (2016), Inherent Trade-Offs in the Fair Determination of Risk Scores. https://arxiv.org/abs/1609.05807
  [4] ProPublica, Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  [5] NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework
