
October 2024

Tinius Digest

Monthly reports on changes, trends, and developments in the media industry.


About Tinius Digest

Tinius Digest gives you an overview of reports on and analyses of developments in the media industry, published once a month. Here are our most important findings from this month.

Feel free to share the content with colleagues and use it in meetings and presentations.

AI labelling weakens headlines' credibility

Researchers from the University of Zurich studied how people react to news headlines labelled as 'AI-generated'. The study consists of two experiments with nearly 5,000 participants from the US and UK.

Download the research paper.

Key findings

1

Labelling has a negative effect

Labelling headlines as 'AI-generated' reduced both their perceived accuracy and people's willingness to share them, regardless of whether the headlines were actually true or false or whether humans or AI created them. When human-written headlines were incorrectly labelled AI-generated, they were also perceived as less accurate.

2

Impact is smaller than that of 'false' labels

Labelling headlines as AI-generated had a negative effect (-2.7 per cent), but it was roughly a third of the size of the effect of explicitly marking them as false (-9.3 per cent). This suggests that while people are sceptical of AI-generated news, they do not automatically equate AI with falsehood.

3

Clearer AI labelling could mitigate scepticism

When participants were given explanations about AI’s role—such as AI only improving style or drafting text—scepticism was reduced. Without clarification, people tended to assume full automation, leading to greater distrust.

4

AI-generated labels do not affect overall trust in news

Exposure to AI-generated labels did not significantly lower participants’ general trust in news or journalism. The effect was limited to individually labelled headlines and did not reduce trust in unlabelled content, decrease overall trust in media, or increase general concerns about AI.
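
To make these effect sizes concrete, here is a minimal Python sketch of the comparison underlying them: mean perceived accuracy under each label condition against an unlabelled baseline. The ratings, scale, and condition names are invented for illustration; the actual experiments used far larger samples and more careful statistics.

```python
# Hypothetical sketch of a label-effect comparison, loosely mirroring
# the Zurich experiments. All ratings below are invented.
import statistics

# (label condition shown to participant, perceived-accuracy rating 0-100)
ratings = [
    ("none", 62), ("none", 58), ("none", 65),
    ("ai_generated", 57), ("ai_generated", 60), ("ai_generated", 55),
    ("false", 48), ("false", 52), ("false", 50),
]

def mean_rating(condition: str) -> float:
    return statistics.mean(r for label, r in ratings if label == condition)

baseline = mean_rating("none")
for condition in ("ai_generated", "false"):
    effect = mean_rating(condition) - baseline
    print(f"{condition}: {effect:+.1f} points vs. unlabelled baseline")
```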

Political bias in social media sanctions linked to misinformation sharing

Researchers from the University of Oxford, MIT, and Yale have investigated whether political bias in social media enforcement results from actual partisan discrimination or differences in misinformation sharing.

Download the research paper.

Key findings

1

Higher suspension rates for conservative users

A study of 9,000 politically active Twitter users during the 2020 US presidential election found that pro-Trump/conservative users were 4.4 times more likely to be suspended than pro-Biden/liberal users. However, this disparity did not in itself indicate partisan bias; it correlated with differences in misinformation-sharing behaviour.

2

Conservative users shared more low-quality news

The researchers found that pro-Trump/conservative users shared significantly more links to low-quality news sources than pro-Biden/liberal users. This trend persisted across multiple datasets from 2016 to 2023, covering Facebook, Twitter, and survey experiments in 16 countries.

3

Confirmed misinformation asymmetry

To test for bias in misinformation ratings, politically balanced groups of laypeople (including Republicans) assessed news quality. Even using this measure, conservative users still shared more low-quality news, refuting claims that professional fact-checkers unfairly target right-leaning sources.

4

More conservative bots

Pro-Trump/conservative users had a higher estimated likelihood of being bots or engaging in behaviours flagged by platform policies, such as sharing conspiracy theories or inciting violence. These factors likely influenced the suspension rates alongside misinformation sharing.
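
As a back-of-the-envelope illustration of the 4.4x figure from finding 1, the Python sketch below computes a raw suspension-rate ratio from invented counts. The study's own analysis goes further, relating the disparity to misinformation sharing and bot likelihood rather than treating the raw ratio as evidence of bias.

```python
# Hypothetical sketch of a suspension-rate ratio; all counts are invented
# so that the ratio happens to come out at 4.4x.
pro_trump = {"users": 4500, "suspended": 1584}
pro_biden = {"users": 4500, "suspended": 360}

rate_trump = pro_trump["suspended"] / pro_trump["users"]  # 0.352
rate_biden = pro_biden["suspended"] / pro_biden["users"]  # 0.080

print(f"Suspension rate ratio: {rate_trump / rate_biden:.1f}x")  # 4.4x
```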

AI usage doubles among Norwegian businesses

Statistics Norway has published surveys looking into the use of AI in Norwegian businesses.

Key findings

1

Doubling among businesses

21 per cent of Norwegian enterprises with at least ten employees use one or more AI tools, more than double last year's share (9 per cent).

2

Significant industry differences

The use of AI tools is highest in the information and communication sector (63%), followed by other services (33%) and wholesale and agency trade (21%). The industries with the lowest AI adoption are accommodation and food services (11%), construction (9%), and transport and storage (9%).

3

Geographical disparities

The proportion of businesses using AI tools is highest in Oslo (50%), Akershus (31%), and Rogaland (26%), and lowest in Troms (17%), Vestfold (16%), and Østfold (13%).

4

Young people leading the way

36 per cent of Norwegians have used AI tools in the past three months, with a significant age gap: 65 per cent of 16-24-year-olds report using AI tools, while the share steadily declines to just nine per cent among those aged 65-74.

Digital exclusion is growing among older Swedes

The Swedish Internet Foundation has published its annual study, The Swedes and the Internet. The study provides facts and insights into the development of internet usage in Sweden.

Download the report.

Key findings

1

Older Swedes are digitally excluded

While 95 per cent of Swedes use the internet daily, those who do not are almost exclusively senior citizens. The share of non-users increases fivefold among those over 75 compared to younger retirees (65–75 years), with older women particularly affected.

2

Many struggle with digital services

One in five adult Swedes needs help with digital tasks. Among those over 75, more than half require assistance installing mobile Bank-ID, managing passwords, and understanding technical terms.

3

Social media affects well-being—especially young women

Social media is widely used for socialising and information-sharing, but it also has negative effects. Many young women feel lonely, inadequate, and less attractive due to social media. Meanwhile, young men are the most frequent victims of online harassment—one in four men born in the 2000s has experienced cyberbullying.

4

Strong support for increased online surveillance

Most Swedes favour camera surveillance with facial recognition in public spaces. Nearly all believe the police should have access to private online conversations, and only four per cent prioritise personal privacy over crime prevention.

5

Digital payments are rising, but older people are falling behind

Nearly all Swedish adults use e-identification and digital payment services. Mobile Bank-ID is dominant (92 per cent use it). Swish remains the most popular payment app, but older Swedes struggle to adopt these technologies—25 per cent of retirees lack an e-ID.

Norwegians struggle to spot misinformation online

The Norwegian Media Authority has published a report examining the level of critical media literacy among Norwegians.

Download the report (in Norwegian).

Key findings

1

Social media primary source of false information

Two out of three Norwegians have encountered news stories online that they suspected were false. Social media is the primary source of this type of content, with 80 per cent of those who encountered suspicious news stating they saw it on social platforms.

2

Misinformation confidence gap

Despite widespread concerns about misinformation, only 13 per cent of respondents find it easy to determine whether online information is true or false. Younger people are significantly more confident in assessing information accuracy than older individuals, but this confidence does not necessarily correlate with actual skill.

3

Low awareness of AI-generated content

Many respondents struggle to recognise AI-generated images. When presented with different visuals, only a minority correctly identified which ones were created using artificial intelligence. This suggests that AI technology makes separating real from manipulated content increasingly difficult.

4

Widespread concern over societal impact

A large majority (80 per cent) worry that misinformation erodes public trust in authorities, politicians, and the media. Older respondents express greater concern than younger ones, particularly regarding the role of artificial intelligence in making it harder to verify the truth.

5

Online harassment discourages participation in public debate

13 per cent of Norwegians have received offensive or harassing comments online in the past six months. As a result, 16 per cent of those who experienced online harassment have stopped participating in online discussions, and 33 per cent have become more cautious about engaging in debates. Women are more likely than men to feel negatively impacted by online harassment.

Social media platforms fail to stop AI-powered bots

Researchers from the University of Notre Dame and Dublin City University investigated how well social media platforms enforce their policies against AI-powered bots. They tested eight major platforms—X (formerly Twitter), Instagram, Facebook, Threads, TikTok, Mastodon, Reddit, and LinkedIn—using an automated bot powered by OpenAI’s GPT-4 and DALL-E 3.

Download the research paper.

Key findings

1

All platforms fail to block AI-powered bots

Despite having policies against bots, all tested platforms failed to detect or stop the researchers’ automated bot. The bot successfully created accounts, logged in, and posted AI-generated content without being flagged or removed.

2

Meta provides some resistance but remains vulnerable

Facebook, Instagram, and Threads initially suspended test accounts for violating “account integrity” policies. However, after multiple attempts, the bot successfully created accounts and posted AI-generated content undetected. Meta’s stricter enforcement appeared to focus on repeated logins rather than detecting AI-generated content.

3

User-moderated platforms struggle the most

Mastodon and Reddit, which rely heavily on user moderation, were the easiest platforms for the bot to operate on. Without automated detection mechanisms, AI-powered bots can freely post and engage with users.

4

X and TikTok allow AI-generated content despite policies

Although X (formerly Twitter) and TikTok have explicit rules against non-API automation and undisclosed AI-generated content, the bot successfully posted on both platforms without intervention. TikTok required more CAPTCHA tests but ultimately failed to detect the bot’s activity.
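
To illustrate how low the technical bar is, here is a minimal sketch of the content-generation half of such a bot, assuming the OpenAI Python SDK (openai>=1.0). The study's bot also automated account creation, login, and posting on each platform, none of which is shown here.

```python
# Minimal sketch of AI content generation for a social media bot,
# assuming the OpenAI Python SDK. Account automation is omitted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Draft a short post with GPT-4.
post = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Write a short, upbeat post about autumn weather."}],
)
text = post.choices[0].message.content

# Generate an accompanying image with DALL-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic autumn park scene",
    size="1024x1024",
    n=1,
)

print(text)
print(image.data[0].url)
```

As the findings above suggest, generating the content is the trivial part; what the platforms failed at was detecting it.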

Advanced AI models may be less reliable

Researchers from the Universitat Politècnica de València, the University of Cambridge, and ValGRAI have investigated how increasing the size and instructability of large language models (LLMs) affects their reliability.

Download the research paper.

Key findings

1

Loss of reliability in larger models

As language models become larger and more instructable through scaling up (increasing size, data, computing) and shaping up (fine-tuning, human feedback), they paradoxically become less reliable from a human usage perspective. The latest models often fail at seemingly simple tasks while succeeding at more complex ones.

2

Decline in task avoidance

More recent AI models are less likely to avoid answering when uncertain. Instead of acknowledging uncertainty or limitations, newer models more frequently give plausible-sounding but incorrect answers. The paper terms this behaviour "ultracrepidarianism": models increasingly answer questions beyond their competence, leading to proportionally more failures.

3

Inconsistent performance

The models don't show consistent performance patterns based on task difficulty. They can sometimes solve very challenging problems while failing at much simpler ones in the same domain. This makes it difficult for users to develop reliable mental models of when to trust the system's outputs.
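
A simple way to picture the avoidance accounting from finding 2 is a three-way grading of model answers, sketched below in Python. The grading rule, hedge phrases, and example answers are invented; the paper's benchmarks and difficulty measures are far more elaborate.

```python
# Hypothetical sketch of correct/avoidant/incorrect grading, in the
# spirit of the reliability study. Grading heuristics are invented.
def classify(answer: str, gold: str) -> str:
    """Bucket a model answer as correct, avoidant, or incorrect."""
    hedges = ("i don't know", "i cannot", "not sure")
    if any(h in answer.lower() for h in hedges):
        return "avoidant"
    return "correct" if answer.strip().lower() == gold.lower() else "incorrect"

answers = [
    ("Paris", "Paris"),               # correct
    ("I don't know", "Ulaanbaatar"),  # avoidant: the safe failure mode
    ("Lisbon", "Madrid"),             # confidently wrong: the issue at hand
]
results = [classify(answer, gold) for answer, gold in answers]

for bucket in ("correct", "avoidant", "incorrect"):
    print(f"{bucket}: {results.count(bucket) / len(results):.0%}")
```

Under this accounting, the finding is that newer models shift answers from the 'avoidant' bucket into 'incorrect' rather than into 'correct'.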
