Last reviewed: April 2026
A repository of culturally responsive AI review prompts to help professionals ensure their work is respectful and aligned with Indigenous frameworks, laws, and cultural protocols.
Professionals across the world often want to "do the right thing" when working with Indigenous communities, but many feel a sense of cultural anxiety: a fear of making a mistake that leads to inaction or superficial engagement.
This project bridges the gap between intent and action. By surfacing credible, existing frameworks and posing thought-provoking questions, these prompts help you identify cultural risks and opportunities before you engage with a community.
Important: These prompts are a pre-engagement tool. They do not replace real relationships or the authority of Indigenous communities. The AI does not speak for these communities; it surfaces named, verifiable sources to help you think more deeply.
Choose the community and version that fits your needs:
- General Review Prompt: For reviewing documents or products against publicly known legislation and cultural guidelines.
- Workplace Review Prompt: For cross-referencing work against both external frameworks and your organisation's internal policies (e.g., RAPs, Māori strategies).
Paste the prompt into your AI assistant (optimised for Claude, compatible with ChatGPT and Gemini).
You can upload the .md files directly to a Claude Project to provide persistent cultural context for your entire workspace.
Answer the AI's intake questions (Role, Industry, Region, etc.) and provide the content you want to review.
The AI will provide observations, provocations, and a Confidence Score. Use this feedback to refine your work and guide your real-world engagement. Treat outputs as guidance and questions, not final answers.
CRITICAL: When using these prompts, you must respect Indigenous Data Sovereignty.
- Do NOT upload sensitive, sacred, or private community documents to a public AI assistant without explicit community consent.
- Anonymise names and specific locations if you are unsure about the data's sensitivity.
- Check your organisation's AI policy regarding the use of third-party LLMs for community-related work.
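As a minimal illustration of the anonymisation step above, the sketch below replaces a caller-supplied list of names and places with numbered placeholders before text is pasted into a third-party AI assistant. The function name and placeholder format are assumptions for illustration only, not part of this repository:

```typescript
// Minimal anonymisation sketch (illustrative; not part of this repo).
// Replaces caller-supplied sensitive terms with numbered placeholders
// before the text is shared with a third-party LLM.
function anonymise(text: string, sensitiveTerms: string[]): string {
  let result = text;
  sensitiveTerms.forEach((term, i) => {
    // Escape regex metacharacters so each term is matched literally.
    const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    result = result.replace(new RegExp(escaped, "gi"), `[REDACTED_${i + 1}]`);
  });
  return result;
}
```

You remain responsible for checking the output: simple substitution cannot catch indirect identifiers such as role titles, kinship terms, or small-community references.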
The AI assistant is a pre-engagement tool. It is designed to help you prepare for a conversation, not to store or manage community knowledge.
This project is built on a foundation of professional expertise and technical rigor.
- Expert-led: Created by Amy Heritage, a Fijian professional with over 10 years of experience in Indigenous affairs across NZ, Australia, and Canada.
- Expert-validated: Rigorously reviewed through an internal 11-expert validation panel using maintainer-created synthetic personas informed by public scholarship and frameworks; this does not indicate endorsement, affiliation, or direct participation by named individuals.
- Technically robust: Uses advanced LLM techniques like Chain of Thought (CoT) and Confidence Scoring to minimise hallucination.
Read the full Methodology and Development Process here
A minimal web interface is available in the app/ directory. It uses the Claude API to run reviews directly in your browser — no copy-pasting required.
```shell
cd app
npm install
cp .env.local.example .env.local
# Add your Anthropic API key to .env.local
npm run dev
```

Open http://localhost:3000. Select a region, choose General or Workplace review, paste or upload your document, and run the review. For Workplace reviews, a second input appears for your organisational strategies and policies.
Requirements: Node.js 18+, an Anthropic API key.
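For readers curious how such an interface might assemble its call to the Claude API, here is a hedged sketch of building a request body for the Anthropic Messages API. The function name, prompt wording, and model string are assumptions for illustration; consult the app/ directory for the actual implementation:

```typescript
// Illustrative sketch only: how a review request body for the Anthropic
// Messages API could be assembled. Names and prompt text are assumptions;
// see the app/ directory for the real implementation.
interface ReviewRequest {
  model: string;
  max_tokens: number;
  system: string;
  messages: { role: "user"; content: string }[];
}

function buildReviewRequest(promptTemplate: string, document: string): ReviewRequest {
  return {
    model: "claude-sonnet-4-5", // hypothetical model id; use a current one
    max_tokens: 2048,
    system: promptTemplate, // the region-specific review prompt
    messages: [
      { role: "user", content: `Please review the following document:\n\n${document}` },
    ],
  };
}
```

Keeping the review prompt in the `system` field and the user's document in a single `user` message mirrors the prompt-plus-content workflow described above.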
- maori-aotearoa/: Māori (Aotearoa New Zealand) prompts.
- aboriginal-torres-strait-islander-australia/: Aboriginal and Torres Strait Islander (Australia) prompts.
- first-nations-metis-inuit-canada/: First Nations, Métis, and Inuit (Canada) prompts.
- fiji/: Fiji prompts.
- samoa/: Samoa prompts.
- tonga/: Tonga prompts.
- resources/: Terminology, Trusted Sources, and Global Rights.
- app/: Web interface with Claude API integration.
- tests/: TypeScript evaluation suite and rubric.
Is this a replacement for community engagement? No. This tool is designed to help you prepare for engagement. It helps you identify the right questions to ask and the right frameworks to respect, so your real-world conversations are more informed and respectful.
Can I trust the AI's cultural advice? The AI is constrained to surface only named, verifiable frameworks (e.g., UNDRIP, TRC, Te Tiriti). We have implemented a "Confidence Score" so the AI can tell you when it lacks enough context to provide a reliable review.
Who owns the output? You do. The AI is a supportive peer looking over your shoulder. It provides observations and questions, but the final thinking and output belongs to you.
Contributions are welcome! If you have suggestions for new frameworks or improvements to existing prompts, please see CONTRIBUTING.md.
Connect on LinkedIn