July 9, 2023
The Regulation Paradox
Regulating technology can often produce results opposite of what is intended.
By KJERSTI LØKEN STAVRUM, CEO Tinius Trust
On the morning of my written Bokmål exam in high school (quite some time ago), a local newspaper visited the gym hall where we were seated to do a story. I agreed to be photographed – in a somewhat staged thinking pose. The picture was published in the newspaper, and later I learned that it had also been used as an illustrative photo in other newspapers.
Illustrative photos are a well-established genre in newspapers and magazines. These images don't relate to the article's content, weren't taken for the specific piece they accompany, and the people depicted have no connection to the text. However, such images – like the one accompanying this text – create a frame or an atmosphere and invite the reader in. Without them, the page would be far less engaging.
Ethical Rules for AI
It's possible to create illustrative images using artificial intelligence (AI). You can describe what it should look like, and voilà—an AI-generated image emerges. This capability and the potential to create texts using AI have led editorial teams worldwide to develop specific ethical guidelines for AI use.
Transparency, or openness, towards readers is a key principle.
It's advisable to state that the image was created using AI, or that AI partially wrote this text, and so on.
If similar precision had been required in the past, one might have written something like: 'This picture features a girl from Askim, unaware that she is being used as an illustrative image for this text. She is, in any case, unrelated to the subject matter.' That would have been truthful, but I believe it would have put an end to the use of such illustrative images. The practice certainly wasn't ethical.
If AI-generated images are henceforth to be labelled, it may create a greater overall clarity regarding the use of illustrative images.
Before AI and ChatGPT made it possible for machines to contribute to the editorial process, it was common to use the internet and Google searches in the research phase. Yet it has never been customary to inform readers about this practice.
Now, many are keen to make it clear when they have used AI in their text. This can be done, but the accuracy of the information and the presence of sources for facts and claims will probably remain most important to readers. This means that the content should adhere to the principles outlined in the Code of Ethics of the Norwegian Press (point 3.2.), which states that the editorial team should be critical in choosing sources and verifying that the information is accurate. Moreover, according to the same point, 'good press practice involves striving for breadth and relevance in the choice of sources.' This implies that one must actually know where the content comes from.
Fear of Extinction
New technology always creates uncertainty. With ChatGPT, existential dread was born.
About a month ago (though it seems longer), 350 international technology leaders, researchers, and engineers signed a letter calling for a slowdown in AI development, arguing that AI could lead to the extinction of humanity. A grave concern, and one everyone could understand.
One of the signatories was X/Twitter CEO Elon Musk.
His signature might be likened to the cartoon character Bart Simpson trying to squeeze through a crowd gathered around a car accident, arguing that they should let him through because he arrived late.
Shortly after the appeal, Musk started a competitor to ChatGPT.
Received with Open Arms
Another letter signatory was Sam Altman, the CEO of OpenAI, the organization behind ChatGPT.
Altman advocates for AI regulation and has already testified in Congress. He has raised the alarm. For other tech leaders, it took years before they were summoned to Congress to account for their enormous market power.
Altman, however, was welcomed with open arms. Despite a significant knowledge gap between him and the politicians, they were on the same side. Congress was eager to be accommodating and to regulate, but the question was what exactly they should get hold of. Finally, one of them asked Altman directly: what would you regulate, if it were up to you? And Altman, unsurprisingly, suggested regulation in the form of licensing – which would ensure a smooth path for his OpenAI while potentially creating significant barriers for other companies.
A Show for the Public
This situation is reminiscent of when Meta/Facebook CEO Mark Zuckerberg also requested regulation, confidently assuming that any potential law would impact others more than Facebook. Additionally, he could benefit from goodwill generated by his positive stance towards regulation.
Altman has travelled the world throughout the year, speaking to packed audiences about the need for global AI regulation. However, as revealed in documents from the European Commission accessed by TIME magazine, behind the scenes of these lectures, he has been lobbying hard to dilute the EU's regulation of artificial intelligence. He and OpenAI have submitted text proposals to the EU's 'AI Act', which are now part of the final negotiations. Among other things, they have avoided being categorized as a company representing 'high risk' – a designation that would have imposed greater demands on the company for transparency, traceability, and human oversight.
Paradoxes
Regulation can always cut both ways. The term 'regulation paradox' was coined with the advent of the giant platform companies.
This concept warns that regulations may ultimately have the opposite effect of what was intended and increase the power of large companies, for three reasons:
- Often, only the largest companies have the resources to comply with the regulations.
- Compliance can be costly – a cost smaller companies may not be able to afford.
- Detailed requirements on what may be communicated give the large companies particular power over the space of expression.
So far, the large companies have fared well in the face of all attempts at regulation. And this is something Altman must have been aware of.
The world is always young when it comes to new technology. That's why ongoing, critical debate is crucial, and why principles for sound business practice should be established and expected to be followed. It's equally important to take care that regulation doesn't inadvertently hinder progress or favor some actors over others.
—
This op-ed was first published by Dagens Næringsliv on July 7, 2023.
The image for this article was generated by artificial intelligence.