NarativAI Guidelines for the Use of Artificial Intelligence

1. Transparency First

We always disclose when and how AI is used — in text, visuals, audio, or video.
If AI played a role, the audience must know.

NarativAI does not use AI to invent information, generate unverified claims, or produce fabricated quotes. AI is not a source; journalists are responsible for the facts.

We use AI for:

  • language editing
  • data organization
  • data analysis and pattern detection
  • translation
  • visualization
  • routine, time-saving tasks
  • improving accessibility (captions, transcripts, formatting)
  • summarizing long documents for internal use
  • brainstorming structural or stylistic options (with human oversight)

Every editorial decision and interpretation is made by humans.

Anything processed by AI — text, data, visuals — is reviewed and verified by editorial staff.
Time saved with AI is reinvested in fact-checking and accuracy.

NarativAI strictly prohibits the use of AI for practices that mislead, manipulate, or deceive audiences. This includes, but is not limited to:

  • Deepfakes or other synthetic content that imitates or impersonates real people
  • Image manipulation that alters context
  • Fabricated personas, testimonies, quotes, or statements
  • Staged or fabricated scenes produced with AI visuals
  • Simulating a journalist's presence at events
  • Publishing AI-created material without human review
  • Any AI use that obscures the reporting process

NarativAI commits to preventing AI from reinforcing stereotypes, discriminatory patterns, or bias.
All AI-assisted outputs are reviewed to ensure they uphold fairness, accuracy, and impartiality.

If something was created or significantly shaped by AI:

  • We label it clearly.
  • We explain which tool was used and at what stage.
  • We distinguish between human work and AI-assisted work.

NarativAI never inputs sensitive information, confidential identities, or source-related material into AI tools that could expose or endanger individuals. The protection of sources remains absolute and cannot be delegated to any automated system.

NarativAI invests in:

  • training journalists in responsible AI use
  • comparing and applying global best practices
  • creating networking opportunities for media professionals
  • educating young people about ethical technology use

No text, analysis, visualization, or editorial decision is fully delegated to AI.
Humans carry full responsibility for accuracy, fairness, and ethics.

NarativAI stands firmly that no journalistic content — text, audio, video, photographs, or data — may be used to train AI systems without the creator’s explicit, informed consent.

We do not input sensitive information into insecure tools.
We respect privacy and work only with public, safe, or anonymized data.

NarativAI’s AI Guidelines are a living document. They will be regularly reviewed and updated to reflect technological developments, professional standards, and the evolving needs of journalism. As the industry changes, so will our policies — always with transparency, ethics, and public trust at the center.