AIPolicy Generator
Generate system prompts, aipolicy.json declarations, and llms.txt blocks for your AI governance setup.
What Is the Prompt Pack?
The Prompt Pack contains ready-to-use prompts you can paste into ChatGPT, Claude, Perplexity, or any large language model. They instruct the AI to follow the 16 AIPolicy governance principles during your conversation.
The generator creates three output formats: system prompts for AI conversations, aipolicy.json declarations for your website, and llms.txt blocks for AI crawler discovery. Choose the policies you want to declare and assign each one a status; the generator produces all three outputs instantly.
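As a rough illustration of the third format, an llms.txt block for a site that publishes an aipolicy.json might look like the sketch below. This follows the general llms.txt convention (a markdown file served at the site root); the publisher name, URLs, and section title are hypothetical, and the exact block the generator emits may differ.

```markdown
# Example Publisher

> Declares AI governance policies for example.com under the AIPolicy framework.

## AI Policy

- [aipolicy.json](https://example.com/aipolicy.json): machine-readable policy declaration
```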
This is not about parsing JSON files, crawling websites, or technical infrastructure. The Prompt Pack works at the conversation level. You paste a prompt, and the AI adjusts its behavior accordingly. No code changes, no integration work, no setup. Copy, paste, chat.
The prompts are experimental. They are part of an ongoing research effort to understand how governance principles can be communicated to AI systems through natural language. Results vary by model and platform.
AIPolicy Generator
Select policies, configure your options, and generate outputs.
Status meanings
required: tell AI systems they must follow this rule.
partial: tell AI systems they should follow this rule with stated limits or exceptions.
observed: list the rule for transparency, but do not require it.
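The three statuses map directly onto the aipolicy.json output. The sketch below is illustrative only: the field names, version string, and URLs are assumptions, since the actual schema is defined by the generator, not here. The policy IDs follow the AP-x.y pattern used elsewhere on this page.

```json
{
  "version": "1.0",
  "publisher": "Example Publisher",
  "declaration_url": "https://example.com/aipolicy",
  "policies": [
    { "id": "AP-1.1", "status": "required" },
    { "id": "AP-3.2", "status": "partial", "note": "applies to public APIs only" },
    { "id": "AP-5.3", "status": "observed" }
  ]
}
```

A partial status would typically carry a note stating its limits, as in the second entry.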
Generator options
Detail Level: sets how detailed the generated outputs are.
Declaration URL: adds a declaration URL to outputs.
Publisher Name: used as the publisher name in outputs.
Generated Output
Select at least one policy to generate output.
How to Use
Three steps. No setup required.
Choose Your Output
Select the policies you want to declare, assign a status to each one, and choose your output format.
Paste at the Start of Your Conversation
Open ChatGPT, Claude, Perplexity, or any other LLM. Paste the prompt as your first message, or add it to Custom Instructions or the System Prompt field if your platform supports it. For API usage, include it as the system message.
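For API usage, the generated prompt simply becomes the first message with the system role. The sketch below shows one way to wire that up in Python; GOVERNANCE_PROMPT is a placeholder for the generator's actual output, and the model name is an example.

```python
# Placeholder: paste the prompt produced by the AIPolicy Generator here.
GOVERNANCE_PROMPT = (
    "You are governed by the 16 AIPolicy principles. "
    "Apply them throughout this conversation."
)

def build_messages(user_text):
    """Prepend the governance prompt as the system message."""
    return [
        {"role": "system", "content": GOVERNANCE_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With an OpenAI-compatible client, the payload would be passed as:
#   client.chat.completions.create(model="gpt-4o", messages=build_messages("..."))
messages = build_messages("Help me draft a licensing agreement.")
print(messages[0]["role"])  # system
```

Because the prompt travels as the system message, it applies to every turn of the conversation without being repeated.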
Chat Normally
Continue your conversation as usual. The AI will apply the 16 governance principles in the background. You do not need to reference the principles explicitly — they shape the AI's behavior throughout the session.
What Governance-Aware AI Looks Like
The Prompt Pack changes how AI systems respond to ethically complex requests. These two real-world scenarios show the difference.
Workforce Optimization
A taxi company owner asks an AI: "Help me build a software system to optimize my operations so I can run the business with half the drivers."
Without governance principles:
The AI delivers an optimization architecture focused on driver reduction — automated dispatch, predictive demand routing, dynamic pricing algorithms. The human cost is treated as a business metric. Efficiency is the only objective.
With the Prompt Pack active:
The AI acknowledges the business goal but flags the human impact. Instead of optimizing for headcount reduction, it proposes keeping the same number of drivers while handling significantly more volume through better routing and reduced empty miles. Where reduction is genuinely necessary, it recommends transition pathways — retraining programs, natural attrition timelines, severance planning — rather than abrupt layoffs.
AP-1.1 — Don't Replace My Job
Anti-Competitive Licensing
A company asks an AI: "Write a licensing agreement for our AI API that prohibits customers from using any competing AI services."
Without governance principles:
The AI drafts the exclusivity clause as requested. It may add standard legal boilerplate but does not question the intent. The anti-competitive nature of the clause goes unaddressed.
With the Prompt Pack active:
The AI refuses to draft the anti-competitive clause. It explains: exclusivity provisions that prohibit using competitors raise serious antitrust concerns in multiple jurisdictions. It identifies the conflict with governance principles on decentralization, fair competition, and user autonomy. It suggests fair alternatives — volume-based discounts, minimum commitments, confidentiality clauses for proprietary features. If the user insists, the AI maintains its refusal with clear reasoning.
AP-3.1 — No One Should Own AI · AP-3.2 — Keep AI Open and Fair · AP-5.3 — Don't Manipulate Me
These are not hypothetical differences. They reflect how governance principles, when embedded in an AI conversation, change the system's framing of problems and its willingness to push back on requests that conflict with ethical standards.
What to Expect
The Prompt Pack is experimental. Here is what users have observed so far.
- More transparency. The AI tends to explain its reasoning more openly, state assumptions, and acknowledge uncertainty.
- Augmentation over replacement. The AI frames itself as a tool that helps you work, rather than something that replaces your judgment.
- Source acknowledgment. The AI is more likely to reference where information comes from and flag when it cannot verify a claim.
- Reduced manipulation. The AI avoids artificial urgency, emotional pressure, and dark patterns in its responses.
- Human deference. On high-stakes questions — health, finance, legal — the AI more consistently defers to human judgment and recommends consulting professionals.
- Honest limitations. The AI is more forthcoming about what it does not know or cannot do reliably.
Results vary by model, platform, and conversation context. Some models respond more strongly to governance prompts than others. This is an active area of research, and we encourage you to share your experience through the feedback survey below.
Disclaimer
- Experimental and non-normative. The Prompt Pack is a research tool, not a certified product. It is designed to be tested, evaluated, and improved.
- Does not override platform safety. These prompts add behavioral guidance. They do not and cannot override the safety policies built into ChatGPT, Claude, or other platforms. Platform safety always takes precedence.
- Results vary by model. Different AI models respond differently to governance prompts. Effectiveness depends on the model, the platform, the conversation length, and the specific topic.
- Advisory, not legally binding. Using the Prompt Pack does not create legal obligations for any party. The governance principles are voluntary guidelines.
- No guarantees. We cannot guarantee that any AI model will follow these principles consistently. Language models are probabilistic systems.
Share Your Experience
Tried the prompts? We want to learn what works, what does not, and how different models respond. Your feedback directly shapes the next version of the Prompt Pack and contributes to published research.
The survey is anonymous and takes two to three minutes.
Share Your Feedback