
Media bias? ChatGPT draws on selected news sources

Just how biased is the news curated by LLMs?

Huge investment announcements by ChatGPT-maker OpenAI this week boosted tech optimism but there are worries that the AI-fuelled rally may have run too far
Image: — © AFP SEBASTIEN BOZON

Popular AI tools used by millions to access news are drawing on a narrow and inconsistent range of sources, often sidelining more trusted journalism and reshaping which voices are heard. This is according to new analysis from the Institute for Public Policy Research (IPPR).

The think tank analysed how four leading AI tools – ChatGPT, Google Gemini, Perplexity, and Google AI Overviews – respond to news queries. It found that the BBC, the UK’s most popular and trusted news outlet, was missing entirely from ChatGPT and Gemini, partly because of the opaque rules governing how AI tools access publishers’ content.

Other major news outlets, albeit ones classed as right-leaning, also received limited exposure on ChatGPT: The Telegraph was cited in just 4 per cent of answers, GB News in 3 per cent, the Sun in 1 per cent, and the Daily Mail in none at all. Of these, the last three are not widely regarded as reliable news sources, so their limited exposure is unlikely to trouble most fair-minded readers.

ChatGPT’s top source was the Guardian, cited in 58 per cent of responses and linked to far more than any other outlet, followed by Reuters, the Independent, and the Financial Times, each a well-respected news outlet.

By contrast, Google AI Overviews used the BBC as a source for 52.5 per cent of news queries, and Perplexity used the public broadcaster for 36 per cent. The Guardian was the most common source used by Gemini, appearing in 53 per cent of answers.

Blockages and licences

The think tank says there are several reasons why AI companies source news inconsistently. Some publishers, such as the Guardian, have licensing agreements with firms that own AI products like ChatGPT, while others — including the BBC — have sought to block AI companies from accessing their content.

Last year, the BBC threatened legal action against Perplexity for using its content without permission. While ChatGPT appears to be respecting the BBC’s wishes, this comes at a cost to the public: the UK’s most popular and trusted news outlet is absent from the country’s most widely used AI tool.

IPPR says this editorialisation by AI companies is creating a new generation of winners and losers. The disproportionate use of some outlets over others risks narrowing the range of perspectives users are exposed to, potentially amplifying particular viewpoints or agendas without users’ knowledge.

Future of journalism?

Additionally, the rise of AI could have serious consequences for the financial sustainability of quality journalism. News publishers have predicted a 43 per cent reduction in traffic from search engines over the next three years, threatening advertising and subscription revenues, particularly where AI companies reproduce content without payment in return.

The authors of the report say that AI is rapidly transforming the news ecosystem, with AI companies quickly emerging as the new gatekeepers of the internet and dominating the way the public now consumes news and information.

The think tank recommends:

  • Making AI companies pay for the news they use, by requiring fair payment and collective licensing deals that ensure a wide range of publishers are included
  • Introducing clear, standardised “nutrition labels” for AI news so the public can see where AI answers come from and how they’re shaped
  • Using public funding to protect independent news in the age of AI, by backing a BBC-led public interest AI news service.

Much of this will depend on the UK Government ruling out any new text and data mining exception in its March reports, as well as taking action on current data-gathering practices.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, with interests in history, politics, and current affairs.
