
Interventions needed to limit the spread of algorithm-led misinformation on social media

The research is helping to construct a framework to understand how algorithms influence social interactions.

Social media. — © AFP

Assessments of regular Twitter and Facebook users show that many people are unhappy with the overrepresentation of extreme political content, or with being served notifications about controversial topics, in their feeds.

Research scientists from Northwestern University have examined how the misalignment between social media algorithms, which are designed to boost user engagement, and the workings of human psychology can push platforms away from their intended effect and instead increase polarization and misinformation.


The researchers contend that humans are predisposed to learn from those they perceive as having prestige within their group. This learning bias is part of the imperative of cooperation in society, as well as being necessary for survival.

Yet within the artificial, diverse, and complex modern communities of social media, these learning biases become less effective: a person we are connected to online may not be trustworthy, and prestige on social media can be feigned.

Algorithms select information that boosts user engagement to increase advertising revenue. In other words, algorithms amplify the very information from which humans are biased to learn, oversaturating social media feeds with ‘Prestigious, Ingroup, Moral and Emotional (PRIME)’ information. This is regardless of the content’s accuracy or representativeness of a group’s opinions.
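The amplification mechanism can be illustrated with a toy sketch. The post pool, the engagement scores, and the `top_feed` ranker below are all invented for illustration, assuming only that PRIME-style posts tend to draw higher predicted engagement; this is not the platforms' actual ranking code:

```python
import random

random.seed(0)

# Toy content pool: PRIME-style posts (prestigious, in-group, moral,
# emotional) are a minority, but tend to attract higher engagement.
posts = (
    [{"kind": "PRIME", "engagement": random.uniform(0.6, 1.0)} for _ in range(20)]
    + [{"kind": "neutral", "engagement": random.uniform(0.0, 0.6)} for _ in range(80)]
)

def top_feed(posts, k=10):
    """Engagement-ranked feed: show the k posts predicted to get the most clicks."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)[:k]

feed = top_feed(posts)
prime_share_in_pool = sum(p["kind"] == "PRIME" for p in posts) / len(posts)
prime_share_in_feed = sum(p["kind"] == "PRIME" for p in feed) / len(feed)
print(prime_share_in_pool)  # 0.2 — PRIME posts are a fifth of all content
print(prime_share_in_feed)  # 1.0 — yet they fill the entire ranked feed
```

The ranking never checks accuracy or representativeness; it only optimizes predicted engagement, so the minority content type saturates what users actually see.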


Consequently, extreme political content or controversial topics are more likely to be amplified. When users are not exposed to outside opinions, they find themselves with a false understanding of the majority opinion of different groups.

To address this problem, the researchers think that social media users need to become more aware of how algorithms work and to understand why certain content shows up on their feed.

For example, a post may appear simply because the user’s friends are engaging with the content rather than the content itself being generally popular and newsworthy.

It would also be useful, the researchers state, if algorithms could be limited in terms of how much PRIME information they amplify and instead prioritize presenting users with a diverse set of content.
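As a sketch of what such a limit could look like, the hypothetical re-ranker below caps the share of PRIME-style posts in the final feed. The post pool, the scores, the `prime_cap` parameter, and the `diversified_feed` function are all illustrative assumptions, not an implementation proposed in the paper:

```python
import random

random.seed(0)

# Same toy pool as before: a minority of high-engagement PRIME-style posts.
posts = (
    [{"kind": "PRIME", "engagement": random.uniform(0.6, 1.0)} for _ in range(20)]
    + [{"kind": "neutral", "engagement": random.uniform(0.0, 0.6)} for _ in range(80)]
)

def diversified_feed(posts, k=10, prime_cap=0.4):
    """Engagement-sorted feed that bounds PRIME amplification: once PRIME posts
    reach prime_cap of the feed, remaining slots go to other content."""
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    max_prime = int(k * prime_cap)
    feed, prime_count = [], 0
    for post in ranked:
        if post["kind"] == "PRIME":
            if prime_count >= max_prime:
                continue  # cap reached: skip further PRIME posts
            prime_count += 1
        feed.append(post)
        if len(feed) == k:
            break
    return feed

feed = diversified_feed(posts)
prime_share = sum(p["kind"] == "PRIME" for p in feed) / len(feed)
print(prime_share)  # 0.4 — bounded by prime_cap instead of filling the feed
```

The feed remains engagement-driven within each content type; only the overall mix is constrained, which is one simple way a platform could trade a little engagement for a more diverse set of content.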

For social media firms, the advantages lie in improved reputations and in algorithms better aligned with a good user experience.

The research appears in the journal Trends in Cognitive Sciences, titled “Algorithm-Mediated Social Learning in Online Social Networks”.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author, and is also interested in history, politics, and current affairs.
