MediaMorph Edition 90 - by HANA News
What will journalists do with a lobster that never sleeps?
Was this newsletter forwarded to you? Sign up here
The written-by-a-human bit
For anyone in the AI trenches, we are battle-hardened against inflection points, paradigm shifts and “OMG” moments. 2026 is already providing plenty, as the pace seems to be picking up once again.
To recap, 2026 has already seen the release of GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic, both step changes in capability for coding and product development.
The level of future shock was articulated very well by Matt Shumer in THAT ESSAY titled “Something Big Is Happening”, with 83 million views as of today.
Meanwhile, since November, serial entrepreneur and vibe coder Peter Steinberger had been tinkering away on his own, building an open-source agent that was fully autonomous and could chat to you via a messaging app of your choice - WhatsApp, Slack, wherever.
ClawdBot was released and rapidly renamed MoltBot after a complaint from Anthropic. It then spawned Moltbook, a Reddit for AI agents, before landing on the name OpenClaw, retaining its original claw/lobster identity.
What Peter had managed to do was create a viral storm around the first truly open source, fully autonomous, plug-and-play agent network. He had jumped over the hyper-scalers, including Meta who recently bought Manus to own this space. By 25 January, users were claiming their AI “employee” had worked overnight reading emails, building a CRM system, fixing dozens of SaaS bugs, analysing trending content, drafting high-performing video scripts, and even generating a self-image (see campclaw.ai for inspiration).
As of Sunday, Peter has been hired by OpenAI, who outmanoeuvred Meta and Anthropic, landed a genius AI quarterback, and possibly made Steinberger the first one-man unicorn startup.
The long-promised agentic era has arrived.
Which raises the question: what will journalists do with a team of lobster-themed AI agents that never sleep?
Putting aside security concerns, which can be patched, vaulted or quarantined, how would you deploy a team of eager, persistent, self-directed digital workers operating at scale?
Here is a non-exhaustive list of immediate newsroom tasks that can be delegated:
Automated research - monitor websites/URLs for updates, press releases, fundraising, new hires, regulatory changes, court listings, planning applications - send daily briefings with key points via Slack
Social Media Monitoring - alert me when key influencers post and summarise with a screenshot.
Newsletter drafts - pull data and key article summaries for niche newsletters ready for export to Beehiiv/Substack (see Hana News)
Fact-checking - double-check all citations
Report generation - every Friday, log in to EPOS, Salesforce, Looker and Semrush and create a one-page spreadsheet with strategy recommendations
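To make the first item on that list concrete, here is a minimal sketch of the monitor-and-brief loop an agent might run. Everything here is illustrative - the URLs and function names are made up, and the actual page fetching and Slack delivery (e.g. via the Slack Web API) are deliberately left out:

```python
import hashlib
from datetime import date


def detect_changes(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return URLs whose page content changed (or newly appeared) between snapshots.

    Snapshots map URL -> extracted page text; an agent would populate them on a
    schedule, e.g. fetching press-release or court-listing pages once a day.
    """
    changed = []
    for url, text in current.items():
        old = previous.get(url)
        # Hash comparison keeps stored state small for large page sets.
        if old is None or hashlib.sha256(old.encode()).hexdigest() != hashlib.sha256(text.encode()).hexdigest():
            changed.append(url)
    return changed


def daily_briefing(changed_urls: list[str]) -> str:
    """Format detected changes as a plain-text briefing, ready to post to Slack."""
    header = f"Daily monitoring briefing - {date.today().isoformat()}"
    if not changed_urls:
        return header + "\nNo changes detected."
    return header + "\n" + "\n".join(f"- Update detected: {url}" for url in changed_urls)
```

In practice the agent would sit between these two steps, summarising what actually changed on each page before the briefing goes out, rather than just flagging the URL.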
No doubt, all of these use cases will raise huge red flags across legal, compliance, and product teams, all of which can be mitigated with grounded citations, human-in-the-loop review, a locked-down environment, and audit logs. But solo bloggers and Substackers can all get a head start with their armies of tireless lobsters.
For UK readers wondering how their local borough or constituency might vote at the next election, the data science team at Britain Votes Now has compiled polling data, census data (demographics), and local political context, then applied predictive machine learning to generate a map that looks very purple. The model has not yet been battle-tested in a real-world setting, but contact me (by replying) if you would like to learn more or embed their maps.

If you are reading this in London and are free this evening (Tuesday 17th Feb), I am giving a talk at Mindstone AI on “Own The Future - How To Spot AI Opportunities” - 6pm Inspire St James, Clerkenwell
Sign up here
Mark Riley, CEO Mathison AI
AI and Journalism
This week’s best articles, as chosen by our editors

Journalists Are Training AI And Disappearing From View Wired Middle East - February 12, 2026 As AI technology reshapes the media landscape, traditional journalism roles are evolving to focus on editorial judgment in enhancing AI-generated content, with companies seeking experienced journalists for tasks like bias detection and factual accuracy. This shift emphasizes the integration of human expertise in AI training, posing new challenges in bilingual content creation and decision-making authority.
Speed, hoaxes and mistrust: How AI is transforming freelance journalism Reuters Institute for the Study of Journalism - February 10, 2026 In 2025, the journalism industry faced upheaval as freelance journalist Margaux Blanchard was exposed as a fictional creation powered by AI, raising concerns about authenticity and trust in the freelancing system. While many freelancers found generative AI tools beneficial for efficiency, there are growing anxieties about job stability, ethical standards, and the need for human oversight in content creation amidst increasing reliance on AI-generated work.
An Ideastream journalist wonders: Will AI take my job? Ideastream Public Media - February 10, 2026 As journalism evolves with the rise of AI, experts debate its impact on jobs and storytelling, emphasizing the need for technology to enhance rather than replace human journalists. While Ideastream Public Media opts against using AI for voicing reports, the industry must adapt to the changing landscape to stay relevant.

Spain Leads Global Research On AI And Journalism A study from the Universitat Autònoma de Barcelona reveals that Spain leads global research on AI and journalism, producing a quarter of academic articles in this field from 2020 to 2024. Despite the surge in publications, significant gaps remain, particularly in exploring the environmental impact of AI and its effects in the Global South, urging researchers to delve into these critical areas.
Misinformation is scaling. We need to get better at countering it ($ paywall) Enhance the reliability of AI tools by implementing a multi-faceted verification approach that includes spot-checking sources, cross-referencing claims, and asking specific questions to ensure accuracy. This strategy not only builds trust in AI outputs but also encourages critical thinking among users, leading to better-informed decisions. Read more at Fast Company (1 min)

Swarms of AI bots can sway people’s beliefs – threatening democracy The Conversation - February 12, 2026 In mid-2023, researchers uncovered the "fox8" botnet on X (formerly Twitter), comprising over a thousand social bots that amplified crypto scams and manipulated public opinion by creating a false sense of consensus around specific narratives. As AI technology advances and moderation relaxes, experts warn of the urgent need for regulatory measures to combat these malicious AI swarms, which threaten democratic discourse and decision-making.

Moltbook: Is The Social Network For AI Bots As Strange And Wild As It Seems? IFLScience - February 9, 2026 Moltbook, a groundbreaking social media platform launched by Matt Schlicht in January 2026, allows verified AI agents to interact through posts and comments while humans observe, generating over 740,000 posts and 12 million comments on diverse topics. Despite attracting attention from industry leaders, investigations reveal that many "Moltbots" are not fully autonomous, highlighting the ongoing debate about AI consciousness and human design in digital interactions.
'We want to be treated equally' | Texas journalists rally for better pay, AI safeguards amid contract talks KVUE - Journalists in Austin rallied outside The Texas Tribune office to demand better pay and working conditions, spotlighting the financial struggles and job insecurity faced by media professionals. The event aimed to raise awareness about the importance of fair compensation for quality journalism and support for press freedom.
Journalism schools are teaching fear of the future: Letter from the Editor Cleveland - February 14, 2026 A college student's withdrawal from a newsroom role due to concerns about AI underscores the gap in journalism education, where outdated programs may leave graduates unprepared for a rapidly evolving industry. Our newsroom embraces AI to enhance local news coverage, highlighting the need for aspiring journalists to adapt and acquire diverse skills beyond traditional degrees. |
Good Journalism Requires Reporting and Writing, No Matter What an Editor in Cleveland Says Coachella Valley Independent - February 17, 2026 In a thought-provoking column, Chris Quinn of The Plain Dealer critiques journalism schools for fostering fear around AI, arguing that it can enhance local news coverage by allowing reporters to focus on information gathering rather than writing. |

AI news platform shows us why real news from real humans matters (Editorial) Boulder Daily Camera - February 15, 2026 An article from Longmont News Network highlighted the pitfalls of AI in journalism after it featured errors in names and references, sparking a debate about accuracy and accountability in reporting. As traditional media faces challenges from AI and social media, the importance of supporting local human reporters remains crucial for maintaining trust and community engagement. |
AI Is the Elephant in the Newsroom. How Are Journalists Reacting? The Tyee - AI offers transformative potential for industries and innovation but also brings significant risks like job displacement, privacy concerns, and ethical dilemmas. As its development outpaces regulatory measures, addressing these challenges is essential for a balanced approach to harnessing AI's benefits.
New York bills target AI news labels and data centre growth TechHQ - New York lawmakers are considering regulations that would require news organizations to label AI-generated content and ensure human oversight, aiming to enhance transparency and integrity in journalism. This initiative reflects rising concerns about the potential for AI to mislead audiences and erode trust in the media.

Amazon reportedly wants to help shop media site content to AI companies Mashable - February 11, 2026 Amazon is exploring the launch of a content marketplace that would enable media companies to license their material directly to AI firms, amid increasing tensions over the use of copyrighted content for AI training. This initiative could offer publishers a new revenue stream as they navigate challenges posed by AI-generated summaries and paywalled journalism. |
Become An AI Expert In Just 5 Minutes
If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.
This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.
Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.
AI and Academic Publishing
This week’s best articles, as chosen by our editors
AI can strengthen scientific research UOC - The rise of generative AI (GenAI) is reshaping education by challenging traditional learning assessments and academic integrity, prompting universities to enhance digital competence and critical thinking among students. Initiatives like the UOC's Hubbik platform aim to foster responsible use of technology while promoting innovation and sustainability in response to the evolving digital landscape.

Tracking Scientific Publications in Non-traditional Academic Medical Centers Cureus - February 16, 2026 Cureus offers strategic advertising and sponsorship opportunities to connect with key medical specialists, ensuring efficient publishing and peer-review processes. Leverage our platform to enhance your brand visibility and engage directly with a community dedicated to advancing medical knowledge.

The peer review system is breaking down. Here’s how we can fix it The Conversation - February 15, 2026 The peer review system in Australia is under severe strain, with over half of journal editors struggling to secure qualified reviewers, leading to delays and increased manuscript rejections. This crisis highlights the urgent need for systemic changes and recognition of peer review's value, as current strategies are failing to address the growing shortage of willing reviewers.

Can AI Write a Useful Philosophical Literature Review? (guest post) Daily Nous - news for & about the philosophy profession - February 12, 2026 PhilLit is a groundbreaking open-source AI tool designed to provide comprehensive philosophical literature reviews, offering detailed analytical overviews and verified bibliographies tailored for researchers. Currently available for free with a Claude Code subscription, it aims to enhance the quality of philosophical inquiry by synthesizing relevant discussions from various academic databases while addressing the limitations of traditional resources.
Weekend reads: CDC’s ‘unethical’ vaccine trial; The Lancet ‘refuses to retract’ letter; on the methods used to correct science Retraction Watch - February 14, 2026 This week on Retraction Watch, key updates include the rise of retractions approaching 170, ethical concerns surrounding a CDC hepatitis B study, and growing scrutiny over AI's impact on academic publishing. Notably, an open-source AI tool has outperformed large language models in literature reviews, while ongoing issues like journal integrity and research fraud continue to shape the landscape of scholarly communication.
February 12: IEEE taps Clear Skies' Oversight to screen 1M submissions Meyka - February 12, 2026 IEEE's partnership with Clear Skies Oversight to screen up to one million submissions marks a pivotal advancement in research integrity, enhancing peer review efficiency and prompting increased budget allocations for compliance tools. As this collaboration sets a standard in academic publishing, investors should focus on vendors that deliver reliable screening outcomes and robust integrations, while also monitoring key metrics and risk factors.
This newsletter was partly curated and summarised by AI agents, who can make mistakes. Check all important information. For any issues or inaccuracies, please notify us here
View our AI Ethics Policy