May 2024
Tinius Digest
Monthly reports on changes, trends, and developments in the media industry.
About Tinius Digest
Tinius Digest gives you an overview of reports on and analyses of developments in the media industry, and is published once a month. Here are our most important findings from this month.
Feel free to share the content with colleagues and use it in meetings and presentations.
Contents
- Twitch’s impact on journalism and live news reporting
- AI chatbots struggle to deliver accurate, up-to-date news
- Facial recognition reveals your political orientation
- Meta’s news ban hits Canadian outlets, but users adapt
- Environmental journalists face rising violence worldwide
- AI-generated reviews fool both people and detection tools
Twitch’s impact on journalism and live news reporting
Researchers from the University of Oregon, the University of Houston-Clear Lake, and the University of North Carolina have studied Twitch's evolving role in digital journalism. The study includes three case studies: The Washington Post’s Twitch experiments, political commentator Hasan Piker, and the pro-QAnon Patriots’ Soapbox.
Key findings
1
Audience interaction redefines journalism
Twitch's defining feature—real-time audience interaction—has transformed how news is consumed and produced. The platform's live chat allows viewers to contribute directly to the broadcast, often providing real-time content that hosts respond to immediately. This blurs the line between professional journalism and user-generated content, with audiences and streamers co-creating news narratives.
2
New forms of liveness
Unlike traditional news broadcasts, Twitch fosters a sense of "liveness" built on community engagement. Hosts like Piker and the Patriots’ Soapbox encourage interaction and feedback from their audiences, which becomes integral to the live experience. This constant back-and-forth helps build a participatory news environment, unlike the more static news delivery of TV and print.
3
Twitch as a digital intermediary
The economic model of Twitch, including subscriptions, donations, and Bits (its virtual currency), affects how news content is produced. Independent streamers depend on audience support, monetising content while adjusting to the platform’s rules. In contrast, legacy outlets like The Washington Post are more resistant to adopting these monetisation tools, reflecting broader tensions between institutional journalism and platform dependency.
4
News and entertainment crossover
On Twitch, news content often blends with entertainment, particularly in channels like Piker’s, where current events are covered alongside gaming or meme commentary. This hybrid format is part of a broader trend on social platforms, where the line between factual reporting and entertainment becomes increasingly blurred, catering to a younger, more interactive audience.
5
The challenge for traditional journalism
Twitch presents a challenge for traditional media organisations as the platform's model complicates notions of journalistic authority and professionalism. On Twitch, streamers act as "journalistic strangers" who don’t adhere to established norms but can still build trust and authority with their audience through interactive, engaging formats.
AI chatbots struggle to deliver accurate, up-to-date news
The Reuters Institute for the Study of Journalism at the University of Oxford has examined how generative AI chatbots, including ChatGPT and Bard, respond to news-related queries. The report tests how reliably these chatbots provide current headlines from top news websites across ten countries.
Key findings
1
Inconsistent news responses
ChatGPT failed to provide current news headlines 54 per cent of the time, often responding with non-news messages such as “I’m unable to access this website.” Bard was even less reliable, delivering non-news responses 95 per cent of the time.
2
Low accuracy for current top stories
Only eight per cent of ChatGPT’s outputs matched the actual top stories from the queried websites. Bard’s accuracy was even lower at three per cent.
3
Old or misattributed headlines
Around 30 per cent of ChatGPT’s responses included headlines from real news outlets, but these were either old or unrelated to the top stories. Some headlines were misattributed, presenting stories from other outlets—a form of AI "hallucination."
4
Impact of website blocking
Whether news outlets block AI crawlers significantly affected chatbot performance: ChatGPT returned top headlines only 20 per cent of the time from websites that allow crawling, and rarely from those that block it (a minimal sketch of how such blocking can be checked follows after these findings).
5
Limited referrals to specific news stories
Most news-like responses from ChatGPT included a referral link, but only 10 per cent linked directly to specific articles, limiting users' ability to follow up on stories in detail.
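Blocking generally happens through a site's robots.txt file, which disallows specific crawler user agents such as OpenAI's GPTBot. As a rough, illustrative sketch of how such blocking can be checked, the Python snippet below uses the standard library's robots.txt parser; the site URL and the list of user agents are assumptions chosen for the example, not taken from the report.

```python
# Illustrative sketch: check whether a news site's robots.txt blocks common AI crawlers.
# The site URL and user-agent names are assumptions for the example, not from the report.
from urllib import robotparser

AI_USER_AGENTS = ["GPTBot", "Google-Extended", "CCBot"]  # commonly cited AI crawler user agents
SITE = "https://news.example.com"                        # hypothetical news site

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in AI_USER_AGENTS:
    verdict = "allowed" if parser.can_fetch(agent, f"{SITE}/") else "blocked"
    print(f"{agent}: {verdict} on {SITE}")
```

Outlets whose robots.txt disallows these agents correspond to the "blocked" websites referred to above, from which the chatbots rarely returned current headlines.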
Facial recognition reveals your political orientation
Researchers at Stanford University have published a study examining how facial recognition technology and human raters predict political orientation from neutral portraits.
Key findings
1
Facial recognition can predict political views
The facial recognition software was able to predict whether someone leaned liberal or conservative from a neutral facial image with a high level of accuracy. When combined with basic information such as age and gender, the accuracy improved further.
2
Works across different settings
The model used to predict political orientation also worked when applied to photos of politicians from the US, UK, and Canada. It was still effective even when people in the photos were smiling or posing naturally.
3
Conservatives tend to have more prominent lower faces
One of the study's more peculiar findings is that conservatives, on average, have larger lower facial areas (such as a broader jawline), while liberals tend to have smaller lower faces.
4
Privacy concerns
The ability of facial recognition technology to reveal personal details, like political views, from just a facial image raises privacy concerns. The researchers point to a need for stricter rules to protect people’s biometric data.
Meta’s news ban hits Canadian outlets, but users adapt
Researchers from McGill University and the University of Toronto have examined the effects of Meta’s decision to block news content on Facebook and Instagram in Canada. The study focuses on how this change impacted Canadian news outlets and Facebook users' engagement with political content.
Key findings
1
Massive drop in news engagement
After Meta's news ban, national news outlets saw a 64 per cent drop in Facebook engagement, while local news outlets lost 85 per cent. Nearly half of local outlets stopped posting on Facebook altogether.
2
Users find ways around the ban
Despite the news block, politically engaged Facebook users continued to discuss Canadian news by sharing screenshots of articles. Although fewer screenshots were posted than pre-ban news links, the overall engagement with these posts remained steady.
3
Political group activity unchanged
The number of posts and discussions within Canadian political Facebook Groups stayed consistent after the ban. Users continued to engage in discussions about current events by using alternative ways to share news content.
4
Local news hit hardest
Local news outlets, which relied heavily on Facebook for visibility, were particularly affected. Many stopped posting entirely, leading to reduced access to local news.
5
No increase in misinformation
Surprisingly, the amount of misinformation shared in political Facebook Groups decreased after the news ban. Users appeared to reduce their overall link sharing, which may have contributed to this decline.
Environmental journalists face rising violence worldwide
UNESCO has published a report highlighting the increasing violence and threats faced by environmental journalists.
Key findings
1
Rising attacks on environmental journalists
Between 2009 and 2023, at least 749 journalists and media outlets covering environmental issues were attacked in 89 countries. Over 300 of these attacks occurred in the last five years, marking a 42 per cent increase compared to the previous period. These attacks range from physical violence to legal threats.
2
Frequent state involvement
Half of the attacks on environmental journalists were committed by state actors, such as police and government officials. Private actors, including corporations and criminal groups, were responsible for a quarter of the attacks.
3
Physical violence on the rise
Physical attacks, including assaults, arbitrary arrests, and harassment, have more than doubled in the last five years. There were 183 physical incidents reported between 2019 and 2023, with journalists facing the most violence while covering protests.
4
Journalists targeted for reporting on mining and land conflicts
Topics like mining and land conflicts are particularly dangerous for reporters. Over 40 per cent of the threats were linked to the mining industry, with journalists often receiving death threats for exposing illegal activities.
5
Widespread impunity for attacks
Only five of the 44 journalists killed while covering environmental issues since 2009 saw their attackers convicted. This lack of justice reflects the ongoing challenges in holding perpetrators accountable.
AI-generated reviews fool both people and detection tools
Researchers at Yale University have published a study exploring whether humans and AI detectors can differentiate between GPT-4-generated and human-written online reviews.
Key findings
1
Humans can’t tell the difference
In two experiments, participants could not reliably distinguish between reviews written by humans and those generated by GPT-4. Even when offered financial rewards for accurate identification, participants were right only about half the time, no better than chance.
2
AI detectors also struggle
The study tested current AI detection tools, including GPT-4 itself and external software such as Copyleaks. Both failed to identify GPT-4-generated reviews, labelling most of them as human-written (a sketch of this kind of prompt-based check follows at the end of this section).
3
Implications for trust in online reviews
With AI-generated reviews becoming indistinguishable from human ones, consumers may lose trust in review platforms. This could lead to scepticism about the authenticity of online feedback, impacting consumer behaviour and business reputations.
4
Potential for manipulation
The ease of creating convincing AI-generated reviews may encourage businesses to flood review sites with fake positive or negative reviews, further undermining trust in online platforms and giving unfair advantages to those using AI for manipulation.
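The sketch below illustrates the kind of prompt-based check involved when GPT-4 itself is used as a detector. It is built on OpenAI's chat API; the prompt wording, model name, and sample review are assumptions made for the example, not the researchers' actual protocol.

```python
# Minimal sketch: asking GPT-4 to judge whether a review is human-written or AI-generated.
# Prompt wording, model name, and the sample review are illustrative assumptions,
# not the study's protocol. Requires the OPENAI_API_KEY environment variable to be set.
from openai import OpenAI

client = OpenAI()

review = (
    "Great little café. The espresso was rich, the staff were friendly, "
    "and the pastries tasted freshly baked. Will definitely come back."
)

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {
            "role": "user",
            "content": (
                "Was the following product review written by a human or generated by an AI? "
                "Answer with exactly one word: human or AI.\n\n"
                f"Review: {review}"
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

As the study found, this kind of check performs poorly in practice: GPT-4 labelled most AI-generated reviews as human-written.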