Learning what not to do with AI is just as important as learning what to do. These cautionary guidelines can save your credibility.
1. Don't Publish AI-Generated Content Without Human Review
Publishing AI-generated content without thorough human review is one of the fastest ways to destroy your credibility and harm your audience.
Risk
AI-generated content can contain hallucinations (fabricated facts presented confidently), outdated information, biased framing, factual errors, and fabricated quotes or sources. Publishing such content without review can mislead the public, damage your reputation, and open you to legal liability.
Real-World Example
In 2023, CNET published dozens of AI-generated financial articles without adequate human review. Readers and competitors discovered numerous factual errors, including basic math mistakes in financial-advice pieces. The resulting scandal damaged CNET's credibility and forced the site to pause its AI content program.
How to Avoid This
Establish a mandatory multi-step review process: AI generates a draft, a reporter fact-checks all claims against primary sources, an editor reviews for accuracy and tone, and a final check ensures no hallucinated information slipped through. Never allow AI content to go directly from generation to publication.
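The review process above can be enforced mechanically rather than left to memory. Below is a minimal sketch, assuming a hypothetical `Draft` object and step names invented for illustration: publication is simply impossible until every mandatory human step has been signed off.

```python
from dataclasses import dataclass, field

# Hypothetical step names for the review pipeline described above.
REQUIRED_STEPS = ["ai_draft", "reporter_factcheck", "editor_review", "final_check"]

@dataclass
class Draft:
    title: str
    completed: set = field(default_factory=set)

    def sign_off(self, step: str) -> None:
        # Only recognized review steps may be recorded.
        if step not in REQUIRED_STEPS:
            raise ValueError(f"unknown review step: {step}")
        self.completed.add(step)

    def can_publish(self) -> bool:
        # Publication is allowed only when no required step is missing.
        return all(step in self.completed for step in REQUIRED_STEPS)

draft = Draft("Rate-cut explainer")
draft.sign_off("ai_draft")
draft.sign_off("reporter_factcheck")
assert not draft.can_publish()   # editor review and final check still missing
draft.sign_off("editor_review")
draft.sign_off("final_check")
assert draft.can_publish()
```

A gate like this makes the "never generation-to-publication" rule a property of the system rather than a matter of individual discipline.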
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "I'm one of the biggest AI advocates you'll meet, but even I would never hit 'publish' on unreviewed AI content. The technology is powerful and useful, but it's not reliable enough for unsupervised publishing. Think of it as a draft machine, not a publishing machine."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Every major AI publishing failure shares the same root cause: skipping human review. The pressure to publish faster is real, but the cost of publishing wrong is always higher. No efficiency gain justifies putting your newsroom's credibility at risk."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our readers don't care how fast we published — they care that we got it right. One viral correction does more damage to audience trust than a dozen delayed stories. The review step isn't a bottleneck; it's our quality guarantee."
2. Don't Feed Confidential Sources into AI Tools
Entering confidential source information into AI tools can expose your sources, compromise investigations, and violate the fundamental trust between journalists and their sources.
Risk
Cloud-based AI tools may store, log, or use your inputs for model training. Confidential source names, documents, or story details entered into these systems could be exposed through data breaches, legal subpoenas of AI company records, or inadvertent inclusion in training data that surfaces in other users' outputs.
Real-World Example
Samsung employees inadvertently leaked proprietary source code and internal meeting notes by pasting them into ChatGPT, where inputs could be retained and potentially used for model training. A similar breach involving confidential journalistic sources could endanger lives, especially in authoritarian contexts.
How to Avoid This
Create a strict 'never enter' list: source names, contact information, confidential documents, unpublished investigation details, and whistleblower communications. If you need AI help with sensitive material, use anonymization first or use local AI models that process data entirely on your own hardware.
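The "anonymize first" step can be a small local script that runs before any text leaves your machine. This is a minimal sketch (the function name and placeholder format are invented for illustration): sensitive terms are swapped for neutral placeholders, and the mapping stays on local disk so the newsroom can restore names after the AI step.

```python
import re

def anonymize(text: str, sensitive_terms: list[str]) -> tuple[str, dict]:
    """Replace sensitive identifiers with placeholders BEFORE any text
    reaches a cloud AI tool. Returns the cleaned text plus a local-only
    mapping for restoring the original terms afterwards."""
    mapping = {}
    for i, term in enumerate(sensitive_terms, start=1):
        placeholder = f"[REDACTED_{i}]"
        mapping[placeholder] = term
        # re.escape so names containing dots or dashes are matched literally.
        text = re.sub(re.escape(term), placeholder, text)
    return text, mapping

raw = "Maria Keller gave us the memo; call Maria Keller at 555-0100."
clean, mapping = anonymize(raw, ["Maria Keller", "555-0100"])
# clean == "[REDACTED_1] gave us the memo; call [REDACTED_1] at [REDACTED_2]."
```

Simple substitution like this only catches terms you list explicitly; for whistleblower material, prefer a local model or no AI at all, as the guideline says.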
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "This is the one area where I'm as cautious as Edmund. Innovation cannot come at the cost of source safety. Use local LLMs for sensitive work, anonymize everything, and when in doubt, don't use AI at all. Our sources' trust is more valuable than any efficiency gain."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Source protection isn't a guideline — it's a covenant. Journalists have gone to jail to protect sources. The idea that we'd casually enter their information into a corporate AI tool is unconscionable. This must be an absolute red line for every newsroom."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our sources are real people who trusted us with sensitive information. If word gets out that a newsroom leaked source details through AI tools, no source will ever trust that organization again. The reputational damage would be permanent."
3. Don't Rely on AI for Fact-Checking Alone
AI can help organize and plan your fact-checking process, but it cannot verify facts on its own. Treating AI output as verified information is a dangerous shortcut.
Risk
AI models generate responses based on patterns in training data, not by checking facts in real time. They can present false information with complete confidence, fabricate sources that don't exist, and miss context that changes the meaning of a claim. Relying on AI as your sole fact-checker will inevitably lead to published errors.
Real-World Example
A lawyer used ChatGPT to research legal precedents for a court filing. The AI generated multiple case citations that sounded authoritative but were entirely fabricated — the cases simply did not exist. The lawyer faced sanctions for presenting false information to the court.
How to Avoid This
Use AI only as a fact-check planning tool: to break down claims into verifiable components and suggest where to find authoritative sources. Then do the actual verification yourself using primary sources — official databases, original documents, direct expert interviews, and established fact-checking organizations.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "AI is brilliant at helping you figure out WHAT to fact-check and WHERE to look. But the actual checking? That's still a human job. Use AI to build your verification roadmap, then walk the road yourself."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "The moment we outsource fact-checking to AI, we've abandoned our most fundamental responsibility. AI can help us organize our verification process, but every fact must be checked against primary sources by a human journalist. There are no shortcuts to truth."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "I've seen how quickly a single factual error can go viral on social media. Our readers are our fact-checkers too — they will find our mistakes. Using AI to plan better fact-checking processes actually makes us more accurate, but only if we do the actual checking ourselves."
4. Don't Use AI-Generated Images Without Disclosure
Using AI-generated images in news coverage without clear disclosure misleads your audience and undermines the documentary function of photojournalism.
Risk
AI-generated images can be photorealistic and virtually indistinguishable from real photographs. Using them without disclosure blurs the line between documentation and fabrication, erodes trust in visual journalism, and can spread misinformation. Audiences may believe AI-generated scenes actually occurred.
Real-World Example
In 2023, an AI-generated image of an explosion near the Pentagon went viral on social media, briefly causing stock market fluctuations. News organizations that shared the image without verification contributed to real-world consequences from fabricated visual content.
How to Avoid This
Never use AI-generated images to illustrate news events. If you use AI-generated illustrations for feature or opinion content, clearly label them as AI-generated. Develop a newsroom policy that distinguishes between editorial illustration (where AI might be acceptable with disclosure) and news photography (where it never is).
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "AI image generation is incredible technology with legitimate creative uses, but it has no place in news photography. For illustrations, opinion pieces, or feature content, use it with clear labeling. For news coverage, stick to real photographs. Always."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Photojournalism's power lies in its truthfulness — the photograph as witness. AI-generated images destroy that covenant. I don't care how good the technology gets; a fabricated image is a fabricated image. Label everything, and never use AI images where readers expect documentation of reality."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our audience's ability to distinguish real from fake is already under assault. As a news organization, we have a responsibility to be part of the solution, not the problem. Clear labeling of AI images isn't just ethical — it's how we maintain our credibility in a post-truth landscape."
5. Don't Ignore AI Bias in Your Reporting Tools
AI tools carry biases from their training data — reflecting and sometimes amplifying societal inequities. Ignoring these biases means incorporating them into your journalism.
Risk
AI models trained on existing text data inherit the biases present in that data: racial stereotypes, gender assumptions, cultural blind spots, and geographic imbalances. If journalists use AI output without accounting for these biases, they risk reinforcing harmful stereotypes, underrepresenting marginalized communities, and producing skewed analysis.
Real-World Example
Researchers have demonstrated that AI language models associate certain professions with specific genders and ethnicities, generate more negative language when describing certain communities, and provide less detailed information about underrepresented regions and cultures. Using such biased outputs uncritically in journalism amplifies existing inequities.
How to Avoid This
Always critically evaluate AI output for bias. Ask: whose perspective is centered? Who is missing? Are stereotypes being reinforced? Cross-check AI-generated content about marginalized communities with sources from those communities. Build bias awareness into your AI training programs.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "AI bias isn't a bug to be fixed — it's a feature to be managed. Every AI tool carries the biases of its training data, and pretending otherwise is dangerous. I actively look for bias in every AI output and use it as a signal to dig deeper."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Bias has always been journalism's greatest challenge, and AI introduces new dimensions of it. We must be even more vigilant about representation and fairness when using AI tools, because algorithmic bias can be harder to detect than human bias."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our audience is diverse, and they notice when our coverage isn't. If AI tools are pushing us toward biased framing, we're failing the very communities we're supposed to serve. Bias checking should be as routine as spell checking."
6. Don't Let AI Replace Your Journalistic Instincts
Journalistic instinct — the ability to sense a story, read a room, or know when a source is being evasive — is a human skill that AI cannot replicate. Don't let AI convenience erode these essential abilities.
Risk
Over-reliance on AI for editorial decisions can atrophy the critical thinking, source evaluation, and news judgment skills that define professional journalism. Journalists who defer to AI recommendations without applying their own judgment become less effective over time, not more.
Real-World Example
Some newsrooms that heavily automated story selection based on algorithmic metrics found their coverage becoming increasingly narrow and sensational — optimized for clicks rather than public interest. When editors reclaimed editorial judgment, coverage quality and reader trust improved.
How to Avoid This
Use AI to inform your decisions, not make them. Regularly practice journalism without AI assistance to keep your skills sharp. When AI suggests a direction, ask yourself: does this align with my journalistic judgment? Would I have reached this conclusion on my own? Trust your training and experience.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "AI should amplify your instincts, not replace them. I've learned to use AI as a second opinion — when it disagrees with my gut feeling, that's often a signal to investigate further, not to override my judgment. Your nose for news is irreplaceable."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Journalistic instinct is built over years of practice, thousands of interviews, and hard-won experience. No algorithm can replicate the feeling that something doesn't add up, or the instinct to ask one more question. Protect these skills fiercely."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "The journalists our audience trusts most are the ones with strong voices and sharp instincts. AI can help us work faster, but our readers come to us for human insight, context, and judgment. Those are the things that make us indispensable."
7. Don't Assume AI Tools Are Always Accurate
AI tools present all output with equal confidence, whether it's correct or completely fabricated. Assuming accuracy without verification is a recipe for publishing errors.
Risk
AI models do not distinguish between accurate and inaccurate information in their outputs. They can fabricate citations, invent statistics, misattribute quotes, confuse similar events, and present outdated information as current. The confident tone of AI responses makes errors harder to catch without active verification.
Real-World Example
Multiple instances have been documented where ChatGPT and similar tools fabricated academic paper citations that sounded authentic but didn't exist, invented statistics about real topics, and confused details between similar historical events — all presented with the same confident, authoritative tone.
How to Avoid This
Treat every AI output as unverified until proven otherwise. Develop a 'trust but verify' mindset: use AI for speed, but never skip verification. Be especially skeptical of specific numbers, dates, quotes, and citations — these are where AI most frequently hallucinates.
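The categories named above (numbers, dates, quotes, citations) can be pulled out of a draft automatically so a human knows exactly what to verify. This is a minimal triage sketch, with a function name and regexes invented for illustration, not a verification tool: it only flags high-risk elements, it cannot check them.

```python
import re

def claims_to_verify(text: str) -> dict:
    """Extract the elements where AI output most often hallucinates,
    so each one can be checked against a primary source by a human."""
    return {
        # Integers and decimals, optionally with a trailing percent sign.
        "numbers": re.findall(r"\d+(?:[.,]\d+)*%?", text),
        # Anything inside straight double quotes.
        "quotes": re.findall(r'"([^"]+)"', text),
    }

ai_text = 'Unemployment fell to 3.4% in 2023. "We expected worse," she said.'
flags = claims_to_verify(ai_text)
# flags["numbers"] == ['3.4%', '2023']
# flags["quotes"] == ['We expected worse,']
```

A checklist like this turns "be especially skeptical of specifics" into a concrete worklist attached to every AI-assisted draft.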
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "I love AI tools, but I never trust them blindly. The most dangerous AI errors are the ones that sound completely plausible. I've trained myself to be especially skeptical of AI output that sounds too perfect — that's often when it's making things up."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Accuracy is not optional in journalism. AI tools fail the accuracy test frequently enough that no responsible journalist should treat their output as reliable without verification. Every number, every name, every claim must be checked. This is basic journalism, AI or not."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our readers hold us to a high standard of accuracy, and they should. When we publish an error that came from AI, our audience doesn't blame the AI — they blame us. The responsibility for accuracy always rests with the journalist, not the tool."
8. Don't Skip the Learning Curve — Invest in AI Training
Jumping into AI without proper training leads to misuse, errors, and frustration. Investing in structured AI education pays dividends in quality, efficiency, and responsible use.
Risk
Untrained journalists may misuse AI tools in ways that create ethical violations, publish AI errors they don't know how to catch, expose sensitive information to cloud services, or become frustrated and abandon potentially valuable tools. The learning curve exists for good reasons.
Real-World Example
A survey by JournalismAI found that newsrooms with formal AI training programs reported significantly fewer AI-related errors and higher journalist satisfaction than those that simply gave staff access to tools without guidance. The 'figure it out yourself' approach consistently produced worse outcomes.
How to Avoid This
Invest in structured AI training at all levels. Start with foundational literacy (what AI is and isn't), move to practical skills (prompt engineering, tool selection), and advance to critical evaluation (bias detection, verification workflows). Make training ongoing, not a one-time event.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "I know training feels slow when you're excited to dive in, but I've seen the difference between trained and untrained AI users. Trained journalists get better results, catch more errors, and find more creative applications. The investment pays for itself within weeks."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "We don't let reporters cover beats they haven't been trained for. Why would we let them use AI tools without training? Proper education in AI capabilities, limitations, and ethical guidelines is a prerequisite for responsible use, not an optional extra."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "The newsrooms getting the most value from AI are the ones that invested in training first. It's not about being slow — it's about being smart. Well-trained teams produce better content faster and with fewer errors. That's a win for our audience."
9. Don't Use AI to Manufacture Quotes or Sources
Using AI to generate fake quotes, fabricate sources, or create fictional expert opinions is journalistic fraud — full stop. It violates every ethical standard in the profession.
Risk
AI can generate convincing quotes attributed to real or fictional people. Using such fabricated quotes is no different from making up sources — it's fraud that can result in termination, legal action, and permanent damage to your career and your news organization's reputation.
Real-World Example
In 2023, a German magazine discovered that one of its award-winning reporters had fabricated sources and quotes in multiple stories, some aided by AI text generation. The reporter was fired, awards were rescinded, and the publication's credibility suffered significant damage despite the reporter acting alone.
How to Avoid This
Never use AI to generate quotes attributed to real people. Every quote in a published story must come from an actual interview, press conference, public statement, or documented source. If you use AI to draft possible interview questions or anticipated responses for preparation, clearly mark these as fictional and never publish them as real.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "This should go without saying, but I'll say it anyway: fabricating quotes is the line you never cross, with or without AI. AI makes it technically easier to create convincing fake quotes, which makes our ethical responsibility even greater."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "Fabricating quotes is the cardinal sin of journalism. AI does not change this. If anything, the ease with which AI can generate convincing quotes makes this an even more critical standard to enforce. Any journalist who uses AI to manufacture quotes should face the same consequences as one who fabricates them manually."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our relationship with our audience is built on the promise that when we put words in quotation marks, someone actually said them. Breaking that promise — whether through laziness, deadline pressure, or AI convenience — is an unforgivable betrayal of reader trust."
10. Don't Ignore Copyright Issues with AI-Generated Content
The copyright landscape around AI-generated content is complex and rapidly evolving. Ignoring these issues can expose your newsroom to legal liability and ethical challenges.
Risk
AI models are trained on vast datasets that may include copyrighted material. AI-generated content may inadvertently reproduce copyrighted text, mimic specific writers' styles, or generate images that closely resemble copyrighted works. The legal status of AI-generated content's copyright ownership remains unsettled in many jurisdictions.
Real-World Example
Major publishers including The New York Times have filed lawsuits against AI companies alleging copyright infringement through training data usage. Meanwhile, the U.S. Copyright Office has ruled that purely AI-generated works cannot be copyrighted, raising questions about ownership of AI-assisted journalism content.
How to Avoid This
Consult with legal counsel about copyright implications of AI use in your newsroom. Ensure AI-generated content is substantially transformed through human editing before publication. Avoid using AI to mimic specific writers' styles. Stay informed about evolving copyright law and court decisions regarding AI content.
Meet the Journalaism Team
Inka Johansson-Varela, The Pioneer — AI-Native Journalism: "Copyright law is struggling to keep up with AI technology, and that uncertainty is a risk we need to manage proactively. I recommend treating AI output as raw material that must be substantially transformed by human creativity before it's publishable."
Edmund Osei-Harrington, The Guardian — Editorial Standards & Ethics: "The copyright questions around AI are some of the most consequential legal issues facing journalism today. Until the law is settled, err on the side of caution. Human authorship, substantial transformation, and legal consultation should be standard practice."
Mila Santos-Kim, The Amplifier — Digital Audience & Engagement: "Our content is our product, and we need to be sure we actually own it. If there's any question about whether AI-generated elements in our work create copyright exposure, that's a question we need answered before publication, not after a lawsuit."