1. Transparency First
We always disclose when and how AI is used — in text, visuals, audio, or video.
If AI played a role, the audience must know.
2. AI Does Not Create Journalistic Facts
NarativAI does not use AI to invent information, generate unverified claims, or produce fabricated quotes. AI is not a source — journalists are responsible for facts.
3. AI Is a Support Tool, Not an Author
We use AI for:
- language editing
- data organization
- data analysis and pattern detection
- translation
- visualization
- routine, time-saving tasks
- improving accessibility (captions, transcripts, formatting)
- summarizing long documents for internal use
- brainstorming structural or stylistic options (with human oversight)
Every editorial decision and interpretation is made by humans.
4. Human Verification Is Mandatory
Anything processed by AI — text, data, visuals — is reviewed and verified by editorial staff.
Time saved with AI is reinvested in fact-checking and accuracy.
5. No Harmful or Deceptive AI Use
NarativAI strictly prohibits the use of AI for practices that mislead, manipulate, or deceive audiences. This includes, but is not limited to:
- Deepfakes
- Image manipulation that alters context
- Fabricated personas or testimonies
- Simulating journalist presence at events
- Any AI use that obscures the reporting process
- Publishing AI-created material without human review
- Creating synthetic content that imitates real people
- Generating fake quotes or statements
- Producing staged or fabricated scenes with AI visuals
- Impersonating real individuals using AI tools
6. AI Must Not Reinforce Bias or Harm Fairness
NarativAI commits to preventing AI from reinforcing stereotypes, discriminatory patterns, or bias.
All AI-assisted outputs are reviewed to ensure they uphold fairness, accuracy, and impartiality.
7. Clear Labelling of AI-Generated Material
If something was created or significantly shaped by AI:
- We label it clearly.
- We explain which tool was used and at which stage.
- We distinguish between human work and AI-assisted work.
8. Protection of Sources
NarativAI never inputs sensitive information, confidential identities, or source-related material into AI tools that could expose or endanger individuals. The protection of sources remains absolute and cannot be delegated to any automated system.
9. Education & Global Best Practices
NarativAI invests in:
- training journalists in responsible AI use
- comparing and applying global best practices
- networking opportunities for media professionals
- educating young people about ethical technology use
10. Responsibility Is Always Human
No text, analysis, visualization, or editorial decision is fully delegated to AI.
Humans carry full responsibility for accuracy, fairness, and ethics.
11. No Journalistic Content Should Train AI Without Consent
NarativAI stands firmly that no journalistic content — text, audio, video, photographs, or data — may be used to train AI systems without the creator’s explicit, informed consent.
12. Data Protection & Privacy
We do not input sensitive information into insecure tools.
We respect privacy and work only with public, safe, or anonymized data.
13. This Is a Living Document
NarativAI’s AI Guidelines are a living document. They will be regularly reviewed and updated to reflect technological developments, professional standards, and the evolving needs of journalism. As the industry changes, so will our policies — always with transparency, ethics, and public trust at the center.