AI Overviews and Chatbots Threaten the Sustainability of Journalism

The development of artificial intelligence (AI) is changing the way people around the world access information, a shift that significantly affects journalism.
Illustration: Said Salmanović
Tools such as Google AI overviews and chatbots like ChatGPT provide users with ready-made, concise information within seconds, leaving them little reason to browse or click on the original journalistic content. As a result, media outlets that rely on traffic from search engines face serious challenges.
For years, Google, the world’s most popular internet search engine, has been a key source of traffic for many online media. However, with the introduction of Google AI overviews, the information a user is searching for is now presented in a short AI-generated summary drawn from multiple sources at the very top of the search page. All other links appear below this overview, leading to a reduction in traffic for those sites.
The consequences are already visible. At the end of July, The Guardian reported on an analysis by the British digital analytics company Authoritas, which showed that a website previously ranked first in search results could lose around 80% of its traffic for that query if it appeared below the AI overview. Media companies have therefore been warned of the potentially “devastating impact” of artificial intelligence on visits to their websites.
Although Google dismissed the study’s findings as “inaccurate and based on wrong assumptions and analysis”, it is clear that the rise of chatbots and AI summaries poses a fundamental threat to the way many online portals and websites operate.
Paying fair compensation to the media
Nikola Bačić, editor-in-chief of the portal Hercegovina.info, warns that this trend threatens the very sustainability of digital journalism.
“If AI platforms take over our work without bringing us an audience or compensating us in any way, that is a direct blow to the sustainability of journalism. We cannot expect quality and independent reporting to exist if journalists are reduced to free content suppliers for corporations making billions”, he said.
One solution, Bačić suggests, is for AI companies to pay compensation to media outlets for using their content. He recalls participating last year in drafting a document with colleagues from the Ethical Journalism Network, which clearly stated that use of media content by AI models should be conditional on media consent, mandatory attribution, protection from misuse, and a fair distribution of revenue.
He also points to the Paris Charter on AI and Journalism, initiated by Reporters Without Borders, and its calls to protect sources.
“However, while some large media outlets may already have secretly reached such arrangements, the question is where does that leave us, the small media? If this is not resolved through universal legal regulation, small newsrooms will remain isolated, and may even be forced to shut down”, Bačić says.
Milica Samardžić, executive director of the Association of Investigative Media “Umbrella,” also believes this issue requires legal regulation, with special attention to independent and local media. She highlights potential solutions, such as requiring large AI companies to pay fair compensation to media whose content they use for training or for generating answers, either through a usage-based payment model or through collective agreements negotiated by media associations, so that small outlets do not have to negotiate with global giants on their own.
“Alternatively, part of the revenue that AI companies earn from using media content could go into a special fund to finance independent and local media. After all, these media outlets invest significant resources, time, and professional expertise into producing journalistic content, and it cannot simply be treated as a free resource. And, of course, AI tools must be required to clearly and visibly cite sources with clickable links, so that some traffic is retained for the original outlets”, she says.
Changing audience habits
According to a recent Pew Research Center study, only a very small percentage of users in the United States click on a source listed in an AI overview. Most are satisfied with the brief information that Google immediately provides. The study found that users click on a source cited in an AI overview in only about one out of every 100 cases. Similar changes in audience habits are taking place elsewhere in the world.
Dejan Rakita, a journalist at Gerila.info, considers this practice extremely worrying, especially in societies with low levels of media literacy, such as Bosnia and Herzegovina.
“The public is already inundated with tabloids and portals spreading half-truths or outright disinformation, and now automated AI-driven information delivery is being added—without transparency, accountability, or professional standards. Institutions, including public broadcasters, largely fail to recognize their role in this process or take steps to follow and regulate it. If such tools are used without clear standards and proper source control, we risk further degrading the quality of public information”, Rakita said.
It is also worth noting that Bosnia and Herzegovina still lacks clear guidelines for the ethical use of artificial intelligence in the media, as well as any legislative or regulatory framework governing this area.
Media most vulnerable: those relying on advertising
The business model of today’s “traditional” online media is largely based on advertising. Users access content through search engines, social networks, or by directly opening a website, while media outlets earn money by displaying ads.
Website traffic has been significantly affected by changes in search and social media algorithms. According to Similarweb, the hardest hit have been sites offering content such as travel guides, health advice, and product reviews, which saw traffic from search engines drop by 55% between April 2022 and April 2025.
The news sector has also felt the impact. The independent Turkish outlet Gazete Duvar shut down after drastic changes to Google’s algorithms. The Wall Street Journal reported that organic traffic to HuffPost and The Washington Post has more than halved over the past three years, while Business Insider in May laid off 21% of its staff due to “an extreme decline in traffic beyond our control.”
Jan Žabka, journalist and co-founder of the local Czech outlet Okraj.cz, stresses that news portals relying solely on advertising are in serious trouble. But, he adds, this is also an opportunity to rethink what it means to be a journalist and how financial support can be secured in new ways.
“Fewer visits mean less money for journalism. In my opinion, it also means a decline in the analytical and reading skills of audiences, which they need to consume quality journalism”, he said.
For a small local media outlet like his, Žabka says, relying solely on traditional funding methods such as advertising is not viable.
“That’s why we look at this from a different perspective. Our work is no longer just writing articles—it’s also meeting people, explaining our work, and asking what our audience needs. By focusing on our community, we can earn the trust of an engaged group and gain financial support—not just for journalism, but for our humanity and our role as guides in an era of information chaos”, Žabka explains.
He adds that it is now crucial to rethink what it means to be a journalist.
“Is it just about providing information? Or is it a human task of uncovering and explaining complex topics, being with people and for people? I believe it is the latter. We will have to collaborate with AI tools, and they can help us be better journalists—in sorting data, transcription, and so on. But our goal should be to be better journalists as human beings.”
Not only an economic problem
Hungarian journalist Zsófia Fülöp, who works for the fact-checking outlet Lakmusz, believes that the introduction of AI tools such as Google overviews is problematic not only from an economic perspective.
“Our goal is not just to show where disinformation comes from, but to empower the audience to uncover it themselves. So, when an AI tool provides a short summary of a complex issue in a simple Google search, it does the opposite of what we are trying to achieve. If a Google search gives you a quick summary for a complex question, people tend to take it as accurate and true, and usually don’t click on the articles listed alongside it”, she says.
Fülöp admits that she sometimes uses AI summaries when doing research, but she always checks the sources and seeks additional references to support the information.
“We cannot expect the broader public to do that, and that is certainly a threat to journalism as a whole,” Fülöp warns.
The future of journalism in the AI age
Artificial intelligence has already become part of newsrooms and journalistic work, and audiences are increasingly using AI tools as well. Excluding AI from journalism is no longer an option; instead, newsrooms must find ways to use it so that it does not threaten the profession.
“We are already adapting and actively using AI tools in our work, from processing transcripts and visual content to translation, grammar corrections, research, and data analysis. At the same time, we are developing formats that AI cannot easily reduce to a few sentences, such as in-depth investigative series, local reports, and exclusive analyses. We see room for cooperation with AI platforms, but only under clear conditions that ensure the protection of our content, respect for copyright, and fair compensation for using our work”, says Nikola Bačić, editor of Hercegovina.info.
Bačić adds that their goal is not to fight technology but to use it for the benefit of both the newsroom and the audience, while preserving the journalist’s role, ethics, and standards.
Gerila.info held internal training on the use of AI tools in everyday work, focusing on ethical guidelines, accountability, and copyright protection. Journalist Dejan Rakita says he uses tools like ChatGPT only as an aid—to speed up certain technical processes, analyse material, translate, and organize information. He still believes that fieldwork, investigation, and authentic journalistic expression are irreplaceable.
“AI can be a useful ally, but never the foundation of our work. The focus remains on developing our own style, producing quality analysis, and on-the-ground reporting, not on automating content. AI is undoubtedly a phenomenon that will increasingly shape information, and it cannot be ignored. But precisely for that reason, we must openly discuss it—with the audience as well as within newsrooms. Education, transparency, and developing media literacy are key to preserving quality journalism in the digital age”, Rakita said.
While examples of AI integration into newsrooms in Bosnia and Herzegovina remain isolated, the potential of AI is being actively explored in EU countries. For instance, Lakmusz, where Zsófia Fülöp works, participated in a project called “TheCheck,” aimed at developing an AI chatbot to assist fact-checkers.
“It was a very interesting process: we tested the chatbot several times in our language, and the main challenge was that the chatbot could not independently find relevant facts to support its answers to user questions. So, I would say that as fact-checkers and journalists, we need to keep our eyes and minds open to AI tools, but also look at them critically, because they can help but also cause harm”, Fülöp concludes.
This article was produced with the financial support of the European Union. Its contents are the sole responsibility of the Mediacentar Foundation and do not necessarily reflect the views of the European Union.