September 2024
Tinius Digest
About Tinius Digest
Tinius Digest reports on changes, trends, and developments in the media business at large. These are our key findings from last month.
Share the report with colleagues and friends, and use the content in presentations or meetings.
Contents
- Online ads have minimal impact on users' experience
- Widespread confusion over ads and editorial content
- AI models trained on Norwegian news articles
- AI fails to sway 2024 elections—but erodes trust
- AI challenges demand new media literacy
- Dialogue with AI chatbots reduces conspiracy beliefs by 20 per cent
Online ads have minimal impact on users' experience
Stanford, Carnegie Mellon, and Meta researchers have analysed the long-term effects of online advertising on consumer experience using a large-scale Facebook experiment. The experiment, initiated in 2013, assessed whether ads create significant discontent or annoyance for users.
Key findings
1
No measurable dissatisfaction with ads
The study found no significant difference in users' willingness to accept compensation for giving up Facebook between those who saw ads and those who did not. This suggests that the negative impact of ads is minor or offset by the benefits of discovering relevant products and services.
2
Small valuation differences between groups
The median monthly valuation of Facebook access was $31.95 without ads and $31.04 with ads, a gap of only $0.91. The experiment's minimum detectable difference was $3.18, roughly ten per cent of the median valuation, so any dissatisfaction caused by ads amounts to less than ten per cent of the value users place on access.
3
Less time on Facebook, but similar satisfaction
Users in the ads group spent 9.4 per cent less time on Facebook than those in the no-ads group, but this reduction was not matched by any measurable difference in overall satisfaction.
4
Consistency across regions and user types
The study found only small variations in the impact of advertising across regions and among users with different levels of engagement or length of time on the platform.
Widespread confusion over ads and editorial content
The Norwegian Media Authority has examined Norwegians’ ability to distinguish between commercial and editorial content.
Key findings
1
Struggles with differentiation
Most participants struggled to recognise editorial content, with 58 per cent mistakenly identifying an editorial article from E24 as commercial content.
2
Age-related differences
Younger respondents (aged 16-24) were better at distinguishing editorial content than older participants. Even so, many young people still struggled to identify commercial content correctly.
3
Effectiveness of clear labelling
When content was explicitly marked as commercial, 34 per cent of respondents still misclassified it as editorial. However, the more precise the labelling, the more likely respondents were to identify the content correctly.
AI models trained on Norwegian news articles
A coalition of media organisations in Norway has published a report exploring how generative artificial intelligence impacts copyright issues within editor-controlled media. The report assesses the current landscape and presents future scenarios for the media industry.
Key findings
1
Millions of URLs from Norwegian media
The report points out that current AI models, such as those developed by OpenAI, rely on extensive datasets that often incorporate content from editor-controlled media, raising concerns about the unauthorised use of copyrighted material. The public datasets used to train large language models contain several million URLs from Norwegian news media.
2
Protected by paywalls
Paywalls largely keep news articles out of these datasets, so they contain more articles from the public broadcaster NRK than from subscription-focused outlets such as VG and Aftenposten. In targeted searches, no paywalled news articles from Amedia newspapers were found in the Common Crawl, OpenWebText, or OpenWebText2 datasets, and only 39 such articles appeared in the mC4 dataset; these may have been published as free content and placed behind a paywall later.
3
Blocking scripts not respected
Even though TV 2 blocks AI companies from crawling its website, the report found over 57,000 URLs from tv2.no in the mC4 dataset alone. A sketch of how such domain counts can be spot-checked against a public crawl index follows this list.
4
Lag behind in AI readiness
Many Norwegian media organisations are slow to adopt AI. Some are experimenting with AI to streamline workflows, but most have yet to consider broader implications, including shifts in business models and copyright law.
5
Need for a unified legal framework
The report emphasises the need to create a legal framework that protects media content and recommends implementing stricter policies to prevent the unauthorised use of copyrighted material in AI models. The authors argue that without solid legislation, the media industry's intellectual property rights are in jeopardy.
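Counts like the tv2.no figure above can be spot-checked against Common Crawl's public URL index, from which datasets such as mC4 are derived (after heavy filtering, so the numbers will not match the report's). A minimal sketch, assuming the index's standard CDX query parameters; the crawl ID is an example and should be replaced with a current one:

```python
# Minimal sketch: count captures of a news domain in one public Common Crawl
# index. This illustrates the kind of lookup behind the report's URL counts;
# it is not a reproduction of the report's methodology or numbers.
import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-33-index"  # example crawl ID

def count_captures(domain: str, max_pages: int = 3) -> int:
    """Count indexed URL captures under a domain, one result page at a time."""
    total = 0
    for page in range(max_pages):
        resp = requests.get(
            INDEX,
            params={"url": f"{domain}/*", "output": "json", "page": page},
            timeout=60,
        )
        if resp.status_code != 200:  # past the last page, or index unavailable
            break
        # The CDX API returns one JSON record per line, one line per capture.
        total += sum(1 for line in resp.text.splitlines() if line.strip())
    return total

if __name__ == "__main__":
    print(count_captures("tv2.no"))
```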
AI fails to sway 2024 elections—but erodes trust
The Alan Turing Institute has published a report analysing the impact of AI-enabled influence operations during the UK, EU, and French elections in 2024.
Key findings
1
AI content did not significantly affect election outcomes
Only 16 instances of viral AI-enabled disinformation were identified during the UK general election, and 11 across the EU and French elections combined; none had a noticeable impact on the results. Most viral AI content reached users whose political views already aligned with the disinformation, limiting its influence on undecided voters.
2
Growing mistrust in online information
The study found that although AI did not sway voters at scale, its misuse seriously damaged public trust. Deepfakes created confusion, particularly when labelled as satire, leading people to doubt authentic and synthetic content alike. This scepticism threatens the integrity of the digital information space.
3
Emergence of new AI misuse tactics
A new tactic emerged in which deepfakes were taken as factual even when identified as parody or satire. This deepened public confusion and led to politicians being falsely accused of unethical behaviour. Such cases risk incentivising future political campaigns to exploit similar tactics without transparency.
4
Deepfake attacks on political figures
AI-generated deepfakes, including pornographic material targeting politicians, caused severe reputational and psychological harm. These attacks often incited online hate and harassment, particularly against female candidates, raising concerns over the safety and well-being of those targeted.
5
Limited evidence of state-sponsored interference
Although state-sponsored groups like those linked to Russia were identified as creators of some AI disinformation, they had minimal success in influencing these elections. Traditional methods, such as bot-driven comment campaigns, remained more effective in amplifying disinformation than AI-generated content.
AI challenges demand new media literacy
The European Audiovisual Observatory has published a report about the evolving role of media literacy, particularly in light of the rise of AI technologies.
Key findings
1
AI literacy becomes essential
As AI technologies become more integrated into media content creation, AI literacy becomes crucial. AI-generated content, such as deepfakes, makes it harder to discern authentic information from manipulated information, and the public needs to develop the skills to evaluate AI-driven content critically in order not to be misled by misinformation.
2
Empowering vulnerable groups
Media literacy efforts have traditionally focused on minors, but there is a growing emphasis on supporting and educating adults and older people. These groups, especially seniors, need help navigating the internet and identifying misleading content. Media literacy programs are being broadened to fill this need, ensuring that people of all ages are prepared for the complexities of AI and digital media.
3
Education for educators
Educators and parents need better skills to guide children towards critical media consumption, especially of AI-generated content. European training initiatives are addressing this gap to support media literacy in both schools and homes.
4
Formal and non-formal education efforts
Media literacy initiatives are incorporated into formal education systems and non-formal learning environments. Countries such as Ireland and Luxembourg have implemented national strategies that empower users with critical thinking skills. These initiatives are available through schools, libraries, and public campaigns.
5
Impact of AI on media literacy by design
The report suggests integrating "media literacy by design" into media services so that platforms promote digital literacy directly. It highlights Ofcom's work in the UK, where platforms such as TikTok and Google are exploring ways to embed media literacy features in their services, using overlays, notifications, and algorithmic interventions to prompt users to engage with content more thoughtfully.
Dialogue with AI chatbots reduces conspiracy beliefs by 20 per cent
Researchers from Cornell University and MIT Sloan have studied the effect of personalised conversations with AI chatbots on belief in conspiracy theories. The study used GPT-4 Turbo to engage participants in conversations tailored to their conspiracy beliefs.
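For illustration, the dialogue setup described above can be approximated in a few lines against OpenAI's chat API. This is a minimal sketch under stated assumptions: the model name matches the one the study used, but the prompt, function name, and single-turn shape are illustrative, not the study's materials.

```python
# Minimal sketch (not the study's code): ask a chat model to rebut a
# participant's own stated reasons for believing a conspiracy theory.
# The system prompt and single-turn structure are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunking_reply(belief: str, stated_evidence: str) -> str:
    """Generate a counter-argument tailored to the participant's evidence."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a factual, respectful interlocutor. Address the "
                    "person's specific evidence for the belief below and "
                    "present accurate counter-evidence."
                ),
            },
            {
                "role": "user",
                "content": f"Belief: {belief}\nMy evidence: {stated_evidence}",
            },
        ],
    )
    return response.choices[0].message.content

# Example turn in what the study ran as a multi-round dialogue:
# print(debunking_reply("The 1969 moon landing was staged",
#                       "The flag appears to wave in the footage"))
```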
Key findings
1
Significant reduction in belief
Across 2,190 participants, conversations with the AI reduced belief in conspiracy theories by 20 per cent on average. The effect persisted for at least two months, demonstrating the durability of AI-led debunking.
2
Effective across various conspiracies
The intervention worked across both classic and modern conspiracy theories, from the JFK assassination to COVID-19. Even deeply entrenched beliefs were successfully challenged.
3
Accurate and tailored responses
The AI personalised arguments based on participants’ stated evidence, with a 99.2 per cent accuracy rate in the information provided, further enhancing its effectiveness.
4
Spillover effects
The AI-driven conversations not only reduced belief in the targeted conspiracy but also lessened belief in other unrelated conspiracies, reflecting a broader shift in participants' conspiratorial mindset.
5
Positive influence on social behaviour
Participants exposed to the AI dialogues were more likely to disengage from or argue against other conspiracy believers, suggesting a change in their behavioural intentions towards misinformation.