In partnership with

MediaMorph Edition 91 - by HANA News

Students vs the algorithm

Was this newsletter forwarded to you? Sign up here

The written-by-a-human bit

A very public spat has broken out after Chris Quinn, editor of cleveland.com / The Plain Dealer, published a letter claiming that journalism schools are teaching a fear of the future. The letter was prompted by a college student who withdrew from consideration for a reporter role in his newsroom because of the newsroom's ambitious AI policies.

Quinn goes on to mount a vigorous defence of AI for newsgathering, arguing that it could be the saviour of struggling local news outlets.

“Most heartening, the experiment is proving the value of local journalism… People are paying attention and asking questions as never before.

Think about that: An energized electorate, partly because of how we use AI.

The candidate who withdrew could not accept AI assisting with writing. It wasn’t a “sacrifice” they were willing to make for a foothold in a thriving newsroom.”

“Artificial intelligence is not bad for newsrooms. It’s the future of them. It already allows us to be faster, more thorough and more comprehensible. It frees time for what matters most: gathering facts and developing stories to serve you.

Anyone entering this field should be immersing themselves in AI…”

He then goes on to advise against attending journalism school entirely:

“If you’re a student considering journalism, I’d skip that degree. Study political science. Learn technology. Understand how government, businesses and nonprofits work. Take communications law and ethics as electives. Skip much of the rest.”

Journalist and media critic Mic Wright immediately hit back, arguing on his Substack:

“Why bother reading something that someone else couldn’t be bothered to write? What value is there in the cut-up hostage note creation of an LLM? It tells you nothing about how someone else thinks and feels about a subject. Anyone who calls themselves a writer and then serves up AI output under their name has contempt for their readers and very little respect for themselves.”

“Again, writing is thinking. Taking the writing out of reporting is perverse. Writing is the point at which you knit together what you’ve found out and work out what it means. It’s in the writing that you find the spine of a story.”

Ohio student editor Jackson McCoy goes one step further, arguing that people don’t hate AI enough:

“So, to be direct, Quinn, young journalists aren’t afraid of AI. Many of us hate it. We don’t want it to be a part of our reporting. We don’t want it to identify our next scoop or pick an enterprise for us to follow. 

We want to do those things ourselves.”

No doubt these arguments are playing out across most, if not all, newsrooms. The debate goes to the heart of the craft. But sentiments aside, it may be asking the wrong question. The more profound question is: do news consumers and subscribers actually care?

For now, the answer is yes, as reported by the Reuters Institute’s Generative AI and News Report 2025. 62% say they are comfortable with news created entirely by humans, but only 12% are comfortable with news produced entirely by AI. Acceptance improves when AI is clearly subordinate: 21% are comfortable with AI-generated news that is reviewed by humans, and 43% are comfortable with humans using AI as a background assistant.

Do you care that this piece is entirely written by a human, line by line? Can you even tell? (Microsoft Research: No Foolproof Method Exists for Detecting AI-Generated Media)

Here at Mathison/Hana we are firmly with Chris Quinn, with clear caveats such as human-in-the-loop checkpoints at every stage. AI research and compelling, creative writing can live side by side in a symbiotic relationship. We all admire idealistic, reactionary students, but they need to get on board the AI train.

It was a pleasure to give two talks to the Mindstone AI Community last week: one in London on Ten Principles for Building Successful AI-Enabled Products, the other in Bristol on 2026 - Is Something Big Happening? Contact me (by reply to this newsletter) for speaking engagements and webinars.

Tool of the week: Wispr Flow - head and shoulders above the rest: a dictation tool that works seamlessly across all applications, on any device. Typing is so last year.

Mark Riley, CEO Mathison AI

AI and Journalism

This week’s best articles, as chosen by our editors

Journalism schools are teaching fear of the future: Letter from the Editor

Journalism schools must urgently integrate artificial intelligence into their curricula to prepare students for the evolving media landscape, where AI tools enhance reporting by automating data analysis and streamlining workflows. By doing so, they can equip future journalists with the skills needed to thrive in a technology-driven industry.

AI;DR: Journalism's quislings for AI and why I'll never use AI to write this newsletter

Mic Wright, Substack - February 17, 2026

This newsletter passionately defends the value of original, human-written content in journalism, critiquing the rise of AI-generated writing as a threat to authenticity and integrity. Drawing parallels between AI journalism and toxic processed food, it warns that reliance on artificial intelligence compromises storytelling quality and erodes trust in the media.

Letter from the Editor: People don't hate AI enough

Cleveland Plain Dealer editor Chris Quinn argues that journalism schools are instilling fear of generative AI in young journalists while acknowledging its transformative role in reporting. However, critics contend that rather than fearing AI, young reporters disdain it for its ethical implications and potential to undermine authentic storytelling, highlighting the pressing need for education on the technology's dangers.

AI Journalist Roundtable Recap

At Villanova's fifth annual Journalism Roundtable, panelists explored the impact of AI on journalism, emphasizing its utility for mundane tasks while asserting the irreplaceable value of human creativity and experience. They shared insights on the evolving landscape of news reporting, highlighting the need for critical thinking and proactive career strategies amidst the rise of AI technologies.

What the ‘AI inflection point’ means for journalism

As AI transforms the newsroom landscape, journalists can thrive by prioritizing trust, expertise, and in-depth storytelling, setting themselves apart from automated content. By cultivating personal brands and engaging authentically with their audience, they can redefine their role and maintain relevance in an increasingly automated world.

Microsoft Research: No Foolproof Method Exists for Detecting AI-Generated Media

Microsoft's recent research reveals the growing challenge of distinguishing AI-generated content from authentic human material, with no single technology able to reliably identify it. The report advocates for a multi-faceted approach, including advanced detection tools and increased transparency, to combat misinformation and the misuse of AI.

Oklahoma lawmakers eye ‘guardrails’ for use of AI-generated media

Lawmakers have unanimously advanced a bill aimed at criminalizing the unauthorized creation of AI-generated content using an individual's likeness, addressing privacy concerns and the rise of deepfakes. This legislation reflects a growing effort to regulate emerging technologies and protect personal rights in the digital age.

Study: AI chatbots provide less-accurate information to vulnerable users

A study from MIT's Center for Constructive Communication reveals that large language models like GPT-4 and Claude 3 Opus may underperform for users with lower English proficiency or less formal education, often providing less accurate responses and exhibiting condescending behaviour. The research highlights the urgent need to address these biases to ensure equitable access to information across diverse user demographics.

AI in Media Operations: Moving from Firefighting to Foresight

LTTS - 

The evolution of MediaOps is shifting from reactive problem-solving to proactive, AI-driven systems that anticipate issues and enhance viewer experiences, transforming operations into a strategic asset that optimises revenue and fosters innovation in the fast-paced media landscape. Organisations embracing this forward-thinking approach can safeguard revenue and build greater viewer trust while maintaining a competitive edge.

This New Patent Could Let AI Run Your Social Media After You’re Dead

VICE - February 18, 2026

Meta's recent patent for a system that uses large language models to simulate social media activity for deceased individuals raises ethical concerns about digital immortality and the commercialization of grief. As creators' imaginative concepts become corporate ventures, we must confront the unsettling implications of turning personal memories into profit-driven experiences.

Coding Agents for Investigative Journalism

Nick Hagar - January 2026

This case study highlights the use of AI coding agents in investigative journalism, specifically through a MuckRock investigation, where automated data collection and analysis streamline the process of uncovering insights from public records. While AI enhances efficiency, the importance of human oversight is emphasized to maintain accuracy, ethics, and context in reporting.

An AI-boosting editor blasts j-schools as being out of touch. The reality is more complex.

Media Nation - February 17, 2026

Chris Quinn criticizes journalism schools for their outdated views on AI, arguing that embracing automation can enhance productivity and story quality. While some educators collaborate with AI companies, concerns about ethical implications and the importance of human storytelling in journalism remain at the forefront of the debate.

Journalism & AI: A Global Overview

The Detroit Bureau - February 23, 2026

Artificial intelligence is revolutionizing journalism by enhancing efficiency and supporting reporters in data analysis, content creation, and combating misinformation, while also raising important ethical concerns such as job displacement, bias, and accountability. As AI continues to shape news gathering and dissemination, it presents both opportunities for greater public knowledge and challenges that demand careful navigation.

San Francisco Standard is going ‘AI-native’

Gazetteer - February 19, 2026

The Standard, in partnership with the Lenfest Institute, is launching an innovative AI-powered mobile app designed to enhance journalism through dynamic content and hyper-local news tailored to users' locations. With a $150,000 grant and the support of industry leaders, this initiative aims to strengthen user engagement and subscription growth while highlighting the importance of adopting AI tools for effective reporting.

FAIR News Act prevents journalism's reliance on AI through transparency

The NY FAIR News Act aims to enhance transparency in AI-generated news content by requiring disclaimers and human review before publication, ensuring that technology assists rather than replaces journalists. Advocates emphasize the importance of maintaining public trust in journalism amidst the growing risks of misinformation as AI tools become more sophisticated.

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

AI and Academic Publishing

This week’s best articles, as chosen by our editors

Can AI Agents Solve the Publication Crisis in Academia?

Medium - 

The academic publication crisis, characterised by excessive pressure and predatory journals, has led to debates on AI's potential to enhance the peer review process and streamline research tasks. However, critics warn that AI cannot replace the nuanced judgment needed in scholarly work and may exacerbate existing biases in academia.

An AI Analyzes Philosophers’ Discussion of AI

An experiment by Kelly Truelove revealed a divided response to PhilLit, an AI research tool for philosophers, with reactions ranging from enthusiastic support to existential alarm about its impact on philosophical engagement and critical thinking. This debate highlights deeper anxieties within the academic community regarding AI's role in shaping the future of philosophy and the blurred lines between traditional practices and technological assistance.

Weekend reads: Did a prof invent his own ‘Nobel Prize’?; former dean omits pharma ties; AI generated quotes found in now-retracted article on AI

Retraction Watch - February 21, 2026

This week at Retraction Watch, two immunology papers were retracted for image duplication, while a porn addiction recovery group filed a lawsuit against a publisher over a critical study. The Retraction Watch Database now exceeds 63,000 retractions, highlighting ongoing discussions about academic integrity, AI's influence on scholarly writing, and the need for enhanced research misconduct detection.

London Book Fair 2026: Must-See Events

Join us for an insightful week of discussions at the Publishers Association event, featuring prominent industry leaders like Tom Weldon and Joanna Prior, as they tackle pressing issues from the impact of AI on publishing to the importance of literacy. Don’t miss panels on emerging trends, authorpreneurship, and the evolving roles within the media landscape, alongside engaging conversations with bestselling authors and experts in digital strategies.

AI Is Rewriting Human History—But New Study Finds It’s Stuck Decades in the Past

The Debrief - February 17, 2026

A recent study reveals that generative AI systems often produce outdated and misleading depictions of Neanderthal life, relying on antiquated research and reinforcing stereotypes, which risks distorting public understanding of human history. Researchers emphasize the need for AI to be trained on current, high-quality data to accurately visualize our prehistoric ancestors and avoid perpetuating misconceptions.

This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here

Keep Reading