Lars Gudbrandsson, CEO, IIH Nordic
Published in Børsen on June 19, 2023
The development of AI – or artificial intelligence – is advancing at an unprecedented rate, and it seems as if the technology of the future we’ve been waiting for has arrived. And it’s already becoming part of our everyday lives.
The potential of the technology is huge, as generative artificial intelligence such as OpenAI’s ChatGPT can create new original content.
It creates both fear and anticipation when machines can ‘think for themselves’, but we need to be aware of a major weakness when using AI.
Generative AI is designed to learn from relationships between words and data and structure existing data. From there, it can generate new and original content, but it doesn’t give us any source references. We are therefore not given the opportunity to assess whether a statement created by AI is reliable or not, as opposed to when we read news articles from established media or find knowledge online ourselves. We can usually see which website or journalist wrote the content, and we can review it, complain or ask questions if we want to. It’s all part of our democratic way of life, but AI doesn’t give us that opportunity.
Machine learning and artificial intelligence have long been important tools in the data industry, but they are also being used extensively in other industries. AI can do a lot of good as it can be built into almost all the software we use and help us with tasks in IT, healthcare, construction and logistics, just to name a few.
It’s already a huge help today, but all use of technology comes with responsibility, and the lack of references is a major weak point for generative AI like ChatGPT.
AI is cynical and pessimistic
We also need to be aware that AI can have a negative bias and be both cynical and pessimistic.
That’s because it is a mirror of the reality we live in and the content we put online ourselves. When artificial intelligence vacuums the internet for answers, it also reflects the cynicism of the internet. So AI can seem harsh compared to direct dialogue between humans, and that can affect us.
The AI wave is hitting us at a time when we are already attracted to entertaining news that supports certain opinions while we are moving away from news based on facts and reliable sources. An example of this is of course Fox News and the use of ‘alternative facts’.
Fortunately, that’s not where we are in Denmark, but if we look at the latest press freedom report from Reporters Without Borders, Denmark has fallen from second to third place. It’s still a good position, but if we look back three years, we were 2.49 points higher, so we’re headed in the wrong direction.
A new study from Danske Medier shows that only 28% of us pay for a subscription to a newspaper or digital news media, while we’re spending tons of money on streaming services.
Thus, our loyalty to certain media and media types is fading. A positive detail in this study is that the largest proportion of those who pay for news to ensure ‘good journalism’ is found among young people between 18 and 29 – but it is still a minority that pays for news.
Social media, where the majority of us can be found, is also often source-free. Today, it is the preferred tool of many politicians when they need to launch new initiatives or want to get a message across without going through traditional media.
It is of course positive that so many people can have their say when they want to, and it contributes to a broad and open democratic debate, but it is also a difficult balancing act when sources are missing. It changes the terms of the social debate.
It is therefore more important than ever that we have common ground rules for good press ethics, good source criticism and, not least, good use of new technology.
It’s a trinity we need to keep in mind when using artificial intelligence, because we are going to use it far more than we already do today. But we must bring our democratic customs with us, and the area needs to be regulated.
The Minister of Digitalization must be involved, and so must the Danish Data Protection Agency, which must not be cut but strengthened. A place to start would be to indicate whether a text or chat is written by AI. And our new “friend” MyAI on Snapchat, for example, could get a stamp to make it clear that you’re not talking to a real person. There’s plenty of work to do, and the future isn’t waiting.