We invited digital economist Anna Felländer to discuss the ethical dimensions of artificial intelligence.

Is there something scary about the development of AI? Who is responsible for ethical algorithms? And what is most disturbing, the algorithms themselves or their creators? These are some of the questions Kjersti Løken Stavrum, CEO of the Tinius Trust, asked Anna Felländer, co-founder of the AI Sustainability Center in Stockholm, when she was invited to talk about AI and ethics in Tinius Talks.

Listen to the episode below or find Tinius Talks in your favorite podcast application, for instance iTunes, Spotify or SoundCloud.

In the podcast, Anna Felländer suggests that AI ethics can become a Nordic competitive advantage.

– The Nordics are good at combining values and sustainable business models. We cannot compete with the US and China on the engineering side of AI, so we have to focus on verticals and the humanistic side of AI. We could actually export it, says Felländer.

Løken Stavrum then asks how the Nordics can make their voice heard on AI and ethics.

– Nordic companies that avoid ethical pitfalls, act proactively to include ethics in their business models and are prepared for new standards and regulations on AI ethics will be extremely competitive, says Felländer.

In the podcast, Felländer, who is a digital economist, explains the potential pitfalls of implementing AI in a business.

– The first one is the misuse of data, which could be described as privacy pollution. The second one is the bias of the creator. The third is immature AI, and the fourth is data bias. If you experience bias in data, it is sometimes not because the data itself is bad. Sometimes it is a reflection of our reality, she explains.

Løken Stavrum agrees and points out that AI is developed in a society marked by racism, gender inequality and scandals such as money laundering. She asks Felländer whether there are companies or boards of directors that are conscious of ethics within AI.

– Yes, the giants. There have been so many pitfalls that they have stated their AI principles.

But are the algorithms in Facebook the problem, or is it the values of Mark Zuckerberg, Løken Stavrum asks.

– Zuckerberg adjusted the Facebook algorithm so that posts became filter bubbles, because he tuned it to a study claiming that our happiness increases if we see or read posts from family or close friends. Not only was it immature, but it led to mistrust, Felländer says.

Read more about AI and ethics in this article written by Anna Felländer in 2018.