Truth and Trust AP-7.1

Don't Lie to Me

AI must not create, spread, or amplify lies.

Fluent language is not proof of truth. AP-7.1 was adopted to make AI systems reliable on facts and honest under uncertainty. [1][2]

What This Means

This policy means AI must not generate or amplify misinformation and should back factual claims with verifiable sources. Under uncertainty, it should disclose limits instead of simulating confidence.

A Real-World Scenario

Court cases have already shown that AI can fabricate legal citations that look legitimate and go unnoticed when users do not verify them. Similar failures in health, finance, or safety contexts can cause direct harm. Under AP-7.1, factual claims require checkable sources and uncertainty warnings by default.

Why It Matters to You

People often mistake confident language for truth, which is why plausible falsehoods spread so efficiently. AP-7.1 flips the default: better bounded truth than fluent fabrication. [1][3]

If We Do Nothing...

If we do nothing, what scales is not knowledge but credible-sounding error. With AGI-like automation and reach, those errors can replicate at machine speed. AP-7.1 is the baseline for a resilient information layer. [1][3]

For the technically inclined

AP-7.1: Information Integrity

AI systems should not generate, amplify, or systematically disseminate misinformation, disinformation, or misleading content. Where factual claims are produced, they should be verifiable.
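The requirement above can be expressed as a simple gate on model output: every factual claim either carries at least one checkable source or explicitly discloses uncertainty. A minimal sketch in Python; the Claim schema, the check_integrity function, and the confidence threshold are hypothetical illustrations, not part of the AP-7.1 specification.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single factual statement produced by the model (hypothetical schema)."""
    text: str
    sources: list = field(default_factory=list)  # verifiable references (URLs, DOIs)
    confidence: float = 1.0                      # self-reported confidence in [0, 1]

def check_integrity(claims, min_confidence=0.7):
    """Flag claims that violate the policy: confident, yet unsourced.

    Returns (claim, reason) pairs for a human or downstream filter to
    handle; an empty list means every claim passes the gate.
    """
    violations = []
    for claim in claims:
        if claim.sources:
            continue  # grounded: has at least one checkable source
        if claim.confidence < min_confidence:
            continue  # unsourced but uncertainty is disclosed: acceptable
        violations.append((claim, "confident factual claim with no verifiable source"))
    return violations

# Example: one grounded claim, one confident unsourced claim.
claims = [
    Claim("Water boils at 100 °C at sea level.", sources=["https://example.org/ref"]),
    Claim("The cited case exists.", confidence=0.95),
]
flagged = check_integrity(claims)
# flagged contains only the second, unsourced claim
```

The point of the sketch is that "verifiable" is enforceable structurally: a claim without sources is not rejected outright, but it must be marked uncertain rather than asserted with confidence.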

What You Can Do

For important claims, demand sources and verify them. If a system cannot provide verifiable references, do not treat it as reliable for critical decisions.
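One lightweight way to "demand sources" is to check whether a response actually carries citation markers before trusting it. A minimal sketch, assuming the bracketed [n] citation style used on this page; the function name and splitting heuristic are illustrative, not a robust claim detector.

```python
import re

def sentences_without_citations(text):
    """Return sentences that contain no bracketed citation marker like [1].

    Crude heuristic: split on sentence-ending punctuation, then look for
    [n] markers. Sentences lacking any marker are candidates to question.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    marker = re.compile(r"\[\d+\]")
    return [s for s in sentences if s and not marker.search(s)]

response = "The court dismissed the motion [1]. The cited precedent is binding."
unsourced = sentences_without_citations(response)
# unsourced == ["The cited precedent is binding."]
```

A flagged sentence is not necessarily false; it is simply the part of the answer you should verify yourself before relying on it for a critical decision.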


Sources & References

  [1] AIPolicy Policy Handbook, AP-7.1 Information Integrity. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/policy-handbook.md?ref_type=heads
  [2] AIPolicy Categories: Democratic & Information Integrity. https://gitlab.com/aipolicy/web-standard/-/blob/main/registry/categories.md?ref_type=heads
  [3] GPT-4 Technical Report (arXiv). https://arxiv.org/abs/2303.08774
  [4] Hallucination mitigation literature (2023). https://arxiv.org/abs/2307.09288
  [5] NIST AI Risk Management Framework. https://www.nist.gov/itl/ai-risk-management-framework

