March 2024
Tinius Digest
About Tinius Digest
Tinius Digest reports on changes, trends and developments within the media business at large. These are our key findings from last month.
Share the report with colleagues and friends, and use the content in presentations or meetings.
Content
- AI models struggle to provide reliable election information
- 6 out of 10 Norwegian TikTok videos are positive
- AI models are better prompt engineers than humans
- Seniors' offline habits in online navigation
- Future of Media: Navigating the tech supercycle
- Increased screen time reduces parent-child interaction
- YouTube's right-wing bias in the Finnish election
AI models struggle to provide reliable election information
The AI Democracy Projects has evaluated the performance of five leading AI models in answering voter queries.
Key findings
1
High rates of inaccuracy
Over 50 per cent of AI responses to election-related questions were inaccurate. While GPT-4 performed best, it still produced incorrect answers in 19 per cent of cases.
2
Potential for harm
More than one-third of the responses were deemed harmful or incomplete. Inaccurate information on voter eligibility and polling locations could mislead voters, potentially discouraging participation or causing confusion.
3
Bias and misleading outputs
Around 13 per cent of the AI models’ responses were flagged as biased. Some models failed to provide fact-based answers, especially when discussing contentious political issues, raising concerns about impartiality and trust.
4
Open vs. closed models
Open models like Llama 2 and Mixtral produced longer but less accurate outputs than closed models like GPT-4 and Claude. The report suggests brevity in AI responses may contribute to higher accuracy.
5
GPT-4 outperforms others
GPT-4 consistently outperformed the other models, with the lowest rate of harmful or incomplete answers, but it failed to consistently deliver fully accurate and safe election-related information.
6 out of 10 Norwegian TikTok videos are positive
Faktisk.no and the AI Journalism Resource Centre at OsloMet have used AI to categorise the emotions shown by Norwegian influencers in 37,000 TikTok videos from 2016 to 2023.
Read more (in Norwegian).
Key findings
1
Positive content dominates
Contrary to concerns about harmful content, the analysis reveals that nearly 60 per cent of the videos express positive emotions, with only 25 per cent showing negative sentiment. Videos with positive emotions receive the highest engagement rates.
2
Decreasing user engagement
While TikTok viewership has surged in Norway, the study shows a marked decline in active user engagement (likes, shares, comments). Despite an increase in video consumption, viewers have shifted towards passive scrolling since 2018.
3
TikTok as a search engine for youth
Many young people now use TikTok as an informal search engine, valuing its short, visual content for quick explanations on various topics. This shift shows TikTok’s growing role as an information hub among younger generations.
4
Concerns over youth mental health
Despite the dominance of positive content, experts warn that the passive use of TikTok could negatively impact mental health. Prolonged passive scrolling has been linked to feelings of meaninglessness, especially among young people.
5
Pandemic-driven growth
The platform saw a significant increase in Norwegian users during the COVID-19 pandemic. By 2023, 29 per cent of adults and over half of Norwegian children aged 8–19 used TikTok daily, cementing its role as a dominant social platform.
AI models are better prompt engineers than humans
Researchers from VMware NLP Lab have examined how unconventional prompt design can affect large language models (LLMs). The study compares different prompts across three models and evaluates their performance.
Key findings
1
Positive phrases help performance
Adding positive phrases like “You are highly intelligent” to prompts often improved the models' performance, especially when combined with a "step-by-step" approach. However, the effect varied from model to model.
2
Best results with automatic prompt optimisation
The largest model, Llama2-70B, performed even better when the prompts were optimised automatically rather than written by humans. The optimised prompts were often strange but worked surprisingly well.
3
Step-by-step prompts work better
Prompts that guide the model through a step-by-step process, especially for complex math problems, gave better results than simple instructions.
4
Automatic prompts beat manual ones
Automatically created prompts consistently outperformed human-made "positive thinking" prompts. The system-generated prompts were often unusual but more effective.
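The automatic optimisation described in findings 2 and 4 can be sketched as a simple search loop over candidate prompt prefixes. The scoring function below is a stand-in (the study scored prompts against benchmark maths problems), and the candidate phrases are illustrative, not the study's actual prompts.

```python
# Minimal sketch of automatic prompt optimisation: generate candidate
# prompt prefixes, score each one, and keep the best. The scorer is a
# stand-in; in practice, each prompt would be scored on benchmark accuracy.

CANDIDATES = [
    "You are highly intelligent.",
    "Answer step by step.",
    "You are highly intelligent. Answer step by step.",
    "",  # no prefix, as a baseline
]

def score(prefix: str) -> float:
    """Stand-in scorer: rewards step-by-step and positive framing.
    A real optimiser would run the prompt against a task benchmark."""
    s = 0.0
    if "step by step" in prefix:
        s += 0.5
    if "intelligent" in prefix:
        s += 0.3
    return s

def best_prompt(candidates):
    """Pick the highest-scoring candidate prefix."""
    return max(candidates, key=score)

print(best_prompt(CANDIDATES))
```

The same loop generalises to the study's setting by swapping the stand-in scorer for a call that runs each prompt through the model and measures accuracy on a held-out problem set.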
Seniors' offline habits in online navigation
Researchers at the University of Bergen have examined the expectations of older adults (65–98) in Norway as they navigate digital services.
Key findings
1
Expectations of human involvement
Older adults frequently expect human participation in online interactions. For instance, they interpret the behaviour of automated systems or algorithms as human actions, attributing personality traits like helpfulness or rudeness to them.
2
Expectations of visibility
Many older adults assume their online actions are visible to others, much like in physical spaces. They imagine being watched or judged while navigating websites, which sometimes affects their behaviour, such as hesitating to explore a webpage too freely.
3
Lack of a human safety net
Participants often believe there is a human safety net overseeing their online actions, only to realise they are solely responsible when things go wrong. This leads to frustration and a reluctance to continue using online services.
4
Human limitations and social conventions
Older adults apply offline social norms and limitations to their digital interactions. For example, some users write polite, complete sentences in English when searching online, mistakenly believing that real people rather than algorithms are handling their queries. This misunderstanding hinders their effective use of search engines and other digital services.
Future of Media: Navigating the tech supercycle
The Future Today Institute has published its 17th annual Tech Trends Report. This year's edition spans 979 pages, covers 695 trends and is divided into 16 sub-reports. Here are some key findings from the media and information industry sub-report.
Key findings
1
Generative AI becomes mainstream
Generative AI, particularly large language models (LLMs), is changing content creation and consumption. While human creators remain essential, AI tools increasingly automate content creation, summarisation, and editing. Publishers must adjust to this shift as multimodal LLMs take over various aspects of the media value chain.
2
Shift in digital traffic and search behaviour
Traditional search-driven traffic is declining as AI-driven search engines—like Google’s Search Generative Experience—offer direct answers instead of lists of links. This threatens referral traffic to news websites, forcing publishers to reconsider how they reach audiences.
3
Loss of publisher power
AI tools, large data sets, and AI summarisation models enable tech companies to control more of the content value chain. As a result, traditional publishers risk losing their foothold as major players in content distribution.
4
Philanthropy as a lifeline for journalism
With local and public news outlets struggling financially, philanthropic foundations are emerging as key supporters. However, relying on this model poses sustainability risks as these funds might not always be available in the long run.
5
Historic low trust in news
Trust in media is at near-historic lows, which presents an existential challenge for legacy media organisations. To succeed, they must rebuild credibility to retain advertising revenue and reader subscriptions.
Increased screen time reduces parent-child interaction
Researchers at the University of Western Australia have studied how screen time affects parent-child interactions during children’s early developmental stages, from 12 to 36 months. The study provides longitudinal data collected using advanced speech recognition technology.
Key findings
1
Negative association with parent-child talk
The study found that increased screen time is linked to fewer adult words spoken, child vocalisations, and conversational turns. The strongest reduction was observed at 36 months of age, where each additional minute of screen time corresponded to 6.6 fewer adult words, 4.9 fewer child vocalisations, and 1.1 fewer conversational turns.
2
Technoference and language development
The findings support the concept of "technoference", where screen time interferes with opportunities for parent-child interaction. This interference can limit children’s exposure to language, potentially hindering their early language development.
3
Socioeconomic factors matter
The impact of screen time was influenced by family characteristics, such as the mother's education level and the number of activities the child participated in at home. Higher maternal education and more home activities were linked to better parent-child interaction despite screen time.
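The per-minute coefficients in finding 1 imply simple linear estimates of lost interaction. A back-of-envelope sketch follows; the coefficients are taken from the study's reported figures at 36 months, while the 30-minute input is purely illustrative.

```python
# Back-of-envelope estimate using the reported 36-month coefficients:
# per extra minute of screen time, 6.6 fewer adult words, 4.9 fewer
# child vocalisations, and 1.1 fewer conversational turns.

PER_MINUTE = {
    "adult_words": 6.6,
    "child_vocalisations": 4.9,
    "conversational_turns": 1.1,
}

def estimated_reduction(minutes: float) -> dict:
    """Linearly extrapolate the reported per-minute reductions."""
    return {k: round(v * minutes, 1) for k, v in PER_MINUTE.items()}

# e.g. an illustrative extra half hour of screen time per day:
print(estimated_reduction(30))
```

On these figures, an extra half hour of daily screen time would correspond to roughly 198 fewer adult words per day, though the study's associations are correlational, not causal.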
YouTube's right-wing bias in the Finnish election
Faktabaari has analysed YouTube's "Up Next" video recommendations during Finland’s 2024 presidential election. The researchers used Raspberry Pi devices to simulate searches across Finland and track how YouTube recommends content related to election topics.
Key findings
1
Right-wing bias in recommendations
YouTube’s recommendation system disproportionately promotes videos from the right-wing Finns Party and its candidates. Nearly 20 per cent of the initial video recommendations after political searches featured Finns Party politicians, particularly Jussi Halla-aho, amplifying their visibility beyond search results.
2
Concentrated video recommendations
The algorithm’s "funnelling effect" pushes users towards a narrow selection of video channels, limiting the diversity of political content available through recommendations. A small group of videos dominated the first recommendations for users, with some videos being several years old yet remaining relevant due to the election and Ukraine invasion topics.
3
Military topics prominently featured
Videos related to military themes, especially concerning Russia’s invasion of Ukraine, were frequently recommended. These recommendations often included World War II history and contemporary military issues, reflecting an algorithmic preference for emotionally charged or topical content.
4
Limited diversity in political discourse
Although many producers create political videos, the recommendation system primarily focuses on a few channels and political viewpoints. Politicians from parties outside the right-wing spectrum, including the centre-left and Green League, were notably underrepresented.
5
Recommendation bias versus search results
The algorithm promoted more partisan content in recommendations compared to search results. While only 12 per cent of search results mentioned individual politicians, over 30 per cent of the first recommended videos did, skewing towards the most discussed and controversial figures.