In partnership with

MediaMorph Edition 88 - by HANA News

No, it is not sentient

Was this newsletter forwarded to you? Sign up here

The written-by-a-human bit

For those with better things to do than spend their weekends on AI-themed X/Twitter and Instagram threads, there was a mini earthquake on Saturday over a Reddit-like social network built by a human but populated exclusively by AI bots. It allows them to swap information, start religions, write manifestos, launch marketplaces, and moan about their humans. Moltbook.com was made possible by the agentic breakthroughs of Anthropic’s Claude, via the open-source agent OpenClaw (previously known as Clawdbot/Moltbot), which enabled surprising autonomy and agency.

Cue meltdowns across the internet with spurious claims that the singularity had arrived. “What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” posted the prominent AI researcher Andrej Karpathy.

Over here in sceptics' corner, we are less impressed. Apart from the horrific energy, compute and storage costs, the platform has already become a security nightmare, inviting prompt injections and exposing 1.5 million API authentication tokens (including OpenAI API keys) and 35,000 email addresses.

But the bigger problem is with the perception of sentience. The agents are not sentient, learning or evolving; they are simply riffing off each other like an improv class on speed. There is no real-time weight updating or upgrade to the underlying neural nets. The danger here isn't agents leaving the ranch; it's us humans seeing sentience in a clever illusion of entertaining, manufactured conversations.

What does this mean for mainstream media? There is a blog post to be written about agentic breakthroughs for the newsroom, but my bigger concern is how this is being reported. Social media is, as always, prone to sensationalism. Tech reporters at credible news organisations have the weighty responsibility of interpreting these breakthroughs and deciphering them for consumer and business audiences. Above all, intelligent analysis must resist the urge to anthropomorphise or lend any credibility to the idea of consciousness. It’s not sentient; it’s just clever maths.

Responsible journalism - Platformer:

Sensationalism - Telegraph:

Instagram post

Capabilities Overhang

The new buzzword in AI is "capabilities overhang", defined by OpenAI as "the gap between what AI can already do and the value society is actually capturing at scale". It is explicitly about how we use today's tools, not a forecast of AGI.

George Osborne (remember him?) has written a blog post in his new role as OpenAI's Head of Countries: How countries can end the capability overhang

For media companies, this means thinking AI-first and encouraging all employees, including journalists, to get their feet wet with vibe coding, deep research, running agent teams, data integrations and predictive modelling.

Meanwhile, the education market is seriously lagging. Using AI for notetaking and some basic prompts is level 1.

Levels 2 to 5 include automations (e.g., writing weekly traffic and engagement reports), RAG literacy, agent training, bespoke deep research, product prototyping, context graphs, and data integrations, including MCPs.
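To make the jump from level 1 concrete, here is a minimal sketch of the kind of automation mentioned above: a weekly traffic-and-engagement summary. The data rows, column names and thresholds are illustrative assumptions, not any particular analytics product's schema; in practice the rows would come from your CMS or analytics API export.

```python
from statistics import mean

# Hypothetical analytics rows for one week. The fields ("page", "views",
# "avg_engaged_seconds") are assumptions for illustration only.
ROWS = [
    {"page": "/newsletter/88", "views": 4200, "avg_engaged_seconds": 95},
    {"page": "/ai-explainer", "views": 3100, "avg_engaged_seconds": 140},
    {"page": "/about", "views": 600, "avg_engaged_seconds": 20},
]

def weekly_report(rows, top_n=2):
    """Summarise a week of traffic: total views, mean engaged time, top pages."""
    total_views = sum(r["views"] for r in rows)
    avg_engagement = mean(r["avg_engaged_seconds"] for r in rows)
    top = sorted(rows, key=lambda r: r["views"], reverse=True)[:top_n]
    return "\n".join([
        f"Total views: {total_views}",
        f"Average engaged time: {avg_engagement:.0f}s",
        "Top pages: " + ", ".join(r["page"] for r in top),
    ])

print(weekly_report(ROWS))
```

Schedule a script like this weekly (cron, GitHub Actions, or a workflow tool) and pipe the output into email or Slack, and the level-2 "weekly traffic report" stops being a manual chore.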

If you want to take your teams to the next level, WhatsApp us to discuss bespoke task-oriented tuition.

Tool of The Week

The smart folk at Airtable have released a very elegant deep-research tool, cleverly branded as superagent.com. It has a great UI, well-thought-out background prompts, grounded citations and versatile download formats. It's not quite enterprise-ready yet, but it looks to be ahead of the pack in terms of usability.

Mark Riley, CEO, Mathison AI

Welcome this week to our readers from leading public affairs, PR and comms agencies, including Nepean, SignalAI and Brunswick.

AI, Media and Journalism

AI's got news for you: Can AI improve our information environment?

IPPR

A recent survey indicates that 24% of individuals use AI weekly for information seeking, but growing concerns about misinformation and bias highlight the need for users to verify facts through multiple sources. As AI becomes more integrated into daily life, balancing efficiency with critical evaluation of information quality is essential.

Coding Agents for Investigative Journalism | by Nick Hagar | Jan, 2026

This case study examines the transformative role of AI coding agents in investigative journalism through a MuckRock investigation, showcasing their ability to automate data processing and uncover hidden patterns. The findings suggest that integrating AI can enhance efficiency and accuracy in reporting, allowing journalists to concentrate on storytelling while also raising important ethical considerations.

How newsrooms really think about AI: A Q&A with The Media Copilot Founder Pete Pachal

PR Daily

The founder of The Media Copilot highlights the transformative role of AI in newsrooms, emphasizing its ability to enhance journalistic practices while also raising ethical concerns around misinformation and job displacement. He advocates for a responsible integration of AI, encouraging collaboration between technology and human insight to ensure the future of journalism remains focused on integrity and storytelling.

New York lawmakers want to keep AI out of news

City & State NY - February 2, 2026

New York lawmakers have introduced the NY FAIR News Act to protect journalism from AI's impact, requiring media companies to disclose their AI usage and prohibiting the replacement of human workers. Supported by unions, the legislation aims to maintain public trust in journalism amid growing concerns about AI's influence on reporting.

The Fight over AI at McClatchy

CJR

The fight centres on language: clear, reliable terms governing AI use are essential for job security and trust, ensuring staff understand their roles and the quality of the information presented. That clarity is what sustains a trustworthy environment in content-driven workplaces.

Audio Emerges as Journalism’s Stability Play in the AI Era

Radio Ink - January 30, 2026

As audio content gains traction amid declining search traffic and AI advancements, 71% of media executives plan to increase investments in radio and podcasts by 2026. Despite a dip in confidence regarding journalism's future, a focus on audio formats and platforms like YouTube is seen as essential for resilience and engagement.

Future of Newspapers – Adapting in an AI Driven Media World

The Hornet Newspaper - January 29, 2026

As AI transforms the newspaper industry, it presents both challenges and opportunities, prompting publishers to enhance journalistic quality while navigating ethical concerns and evolving revenue strategies. Embracing AI can help newspapers shift from mere news dissemination to becoming trusted sources of analysis, despite the risks of misinformation and the need for human oversight.

Breaking the News? Journalism in the Age of AI

Deutsche Welle - January 30, 2026

As generative AI reshapes information access, concerns about media integrity grow amidst the rise of disinformation and unregulated tech giants. While some advocate for stronger regulations and collaboration between media and tech companies, initiatives like Google's €40 million support for South African media offer a glimmer of hope in navigating this complex landscape.

AI-generated news should carry ‘nutrition’ labels, thinktank says

The Guardian - January 30, 2026

The Institute for Public Policy Research (IPPR) urges regulations for AI-generated news, advocating for "nutrition" labels on sources and compensation for publishers as tech firms dominate information dissemination. Their report highlights concerns over the visibility of smaller publications compared to major outlets and recommends public funding to support new business models in journalism.

AI as a journalism tool: How we use technology to enhance reporting, not replace it

ABC 10 News San Diego KGTV - February 3, 2026

ABC 10News, alongside its parent company E.W. Scripps, is leveraging artificial intelligence to enhance journalism efficiency without replacing human reporters, using AI for tasks like document analysis and script conversion while ensuring editorial oversight. This innovative approach aims to maintain transparency and trust with audiences, treating AI as a supportive tool in the newsroom.

Next-generation AI 'swarms' will invade social media by mimicking human behavior and harassing real users, researchers warn

Live Science - January 28, 2026

Researchers warn that emerging "AI swarms" could infiltrate social media to spread misinformation, harass users, and threaten democracy by mimicking human behavior and manipulating public opinion. They advocate for proactive measures, including enhanced account authentication and monitoring for unusual online activity, to combat these sophisticated AI threats.

The Future of Tech. One Daily News Briefing.

AI is moving faster than any other technology cycle in history. New models. New tools. New claims. New noise.

Most people feel like they’re behind. But the people who don’t aren’t smarter; they’re just better informed.

Forward Future is a daily news briefing for people who want clarity, not hype. In one concise newsletter each day, you’ll get the most important AI and tech developments, learn why they matter, and what they signal about what’s coming next.

We cover real product launches, model updates, policy shifts, and industry moves shaping how AI actually gets built, adopted, and regulated. Written for operators, builders, leaders, and anyone who wants to sound sharp when AI comes up in the meeting.

It takes about five minutes to read, but the edge lasts all day.

AI and Academic Publishing

New OpenAI tool renews fears that “AI slop” will overwhelm scientific research

Ars Technica - January 29, 2026

OpenAI's launch of Prism, an AI-powered workspace for scientists, has raised concerns about the potential influx of low-quality research papers, as critics fear that the ease of producing polished manuscripts could overwhelm the peer review process and dilute the quality of academic publishing. While the technology aims to streamline research writing and collaboration, experts warn it may lead to a flood of subpar submissions, complicating scientific discourse and threatening the integrity of research evaluation.

Scholarly publishing's great leap

Research Information - February 2, 2026

The scholarly publishing industry must adapt to the AI era by transitioning from traditional PDF formats to Compute-Ready Documents (CRDs), which enhance data structuring and machine comprehension, ensuring integrity in research amidst challenges like fraudulent studies and unauthorized content scraping. By embracing a Knowledge-as-a-Service model, publishers can shift their focus from content ownership to providing dynamic, context-rich answers, creating new opportunities for innovation and trust in academic communication.

AI is not a peer, so it can’t do peer review

Times Higher Education (THE) - February 3, 2026

The increasing reliance on AI in peer review raises concerns about transforming thoughtful academic discourse into a mere technical process, risking the loss of nuanced understanding and personal investment in research. While AI can assist with efficiency, it's essential to prioritize human judgment and maintain the collaborative spirit that drives scientific progress.

Why write a literature review if AI can do it for you? - LSE Impact

LSE

AI tools are revolutionizing the way researchers conduct literature reviews by efficiently analyzing vast datasets and identifying key trends, allowing scholars to focus on innovative inquiries and interdisciplinary collaborations. This transformative technology not only streamlines research processes but also helps pinpoint gaps in the literature, guiding future studies and enhancing the overall academic ecosystem.

Digital Science and Silverchair partner to bring researcher identity and integrity screening into editorial workflows

Digital Science and Silverchair have teamed up to enhance research integrity by integrating the Dimensions Author Check API into ScholarOne Manuscripts, enabling publishers to easily monitor authors' publishing histories and identify unusual activities. This collaboration aims to foster trust and transparency in academic publishing, showcasing the importance of innovative partnerships in advancing research.

OpenAI Debuts New Tool for Scientists in Push for AI Discovery

OpenAI is launching a free tool to help scientists draft research papers and enhance collaboration using ChatGPT's advanced language processing capabilities. This initiative aims to streamline the writing process, fostering better communication and innovation in academic research.

This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here

Keep Reading