In recent years, we have seen only the tip of the iceberg when it comes to the potential of machines and computer programs.

Artificial intelligence or machine learning is changing the landscape.

 

As our societies become ever more complex structures, there comes a demand for understanding, controlling and processing data, whether personal or public.

Handling these growing masses of data can sometimes seem quite mundane.

But effective management is important for the flow of everyday tasks as well as for our safety.

In our digital age we increasingly depend on data management through our advances in artificial intelligence and machine learning.
This can make data operations better, safer, smoother and more reliable.

 

A.I. has many practical purposes for society, but it is also immensely important for us scientists and researchers.
With the computing power and creative use of A.I., we can open many closed doors.

However, as with many modern advances, there are risks and challenges.
While these vary greatly, all show that one thing is required: responsibility.

This will of course be a common thread throughout today's conference. After all, with rising societal complexity, the work of the media becomes ever more challenging.

 

I recently saw a TED Talk which also raised questions about the risks of our faith in algorithms and big data. The talk was given by mathematician Cathy O'Neil, who has a fascinating take on algorithms.

She calls them "weapons of math destruction."

The reason is mainly that algorithms are too often neither transparent nor understandable.

The complexity of algorithms can be used to mask what they do, especially from those of us who are not computer experts.

Indeed, this can be highly problematic, as the algorithms we live with become ever more important and influence us more than ever before.

 

But what happens when even the experts struggle to understand advanced algorithms and A.I.?

In September 2017 a controversial study was released. It drew much attention, as the researcher had experimented with an A.I. that could guess people's sexual orientation based on pictures.

The A.I. used an enormous amount of data gathered from online dating profiles to teach itself how to guess accurately.

On average it could guess the correct sexual orientation just under 90% of the time. Humans tend to be right about 60% of the time.

Naturally, the topic led to heated political and ethical debates about the value of this kind of research.

However, with regard to A.I. there was another very interesting aspect to discuss.

You see, the researcher could not explain how the A.I.'s neural network was so accurate in its conclusions.

What did the A.I. see that we humans cannot? The researcher could not tell us, as he had not been able to figure it out himself. Therefore we do not know what the machine learning process had taught itself.[1]

 

This presents a problem and a challenge for us, because as a society we depend on our ability to explain.
We demand that all decisions, and the rationale behind them, can be described and clarified.

What happens, then, if we rely on knowledge from A.I. in our decision making?
Well, some researchers are working on A.I. that explains other A.I., but issues like this will continue to arise.

If we are to be responsible, we must meet the need for transparency and for explanation.

If not, we risk not understanding our own work or the complexity of A.I.
This could challenge our course towards open research, open media and even our open democracy.

 

As academics we follow some basic and highly important principles.

Chief among them is the principle of verifiable science.

We must be able to verify and recreate a scientific theory in order to prove that it carries merit.

To do this, our research and science must be transparent.
One must be able to see and understand how an experiment has been carried out, and what logic lies behind a conclusion.

We operate in this way to arrive at qualified, research-based knowledge.

 

The same principles are closely connected to the basics of a democracy.

Policy decisions should be explainable and open for the public to understand.

So what happens when our societies become so complex that we can’t comprehend parts of them?
Does it diminish our ability to participate in a wide democratic debate?

 

Enter the media, and today's topic at the ViSmedia Conference.

Conveying knowledge through new outlets and visuals certainly faces challenges.

Of course there are issues of fake news and alternative facts. But our technological advances only add layers to these challenges.

This is shown by today’s speakers.
Researchers and journalists from five countries will contribute new perspectives on transparency.

I think the fact that we discuss their topics, as well as the examples I have mentioned, illustrates the need for Responsible Research and Innovation.

The European Union states that this implies that societal actors work together throughout the whole research and innovation process, in order to better align both the process and its outcomes with the values, needs and expectations of society.

With that in mind, this conference is a very important contribution, as it is built on the principles of Responsible Research and Innovation.

It is free, open to all, and I know that earlier conferences have connected students, people in the media and researchers.

It is also a meeting place for cross-disciplinary work, which is of great importance.
If we truly intend to meet the challenges of our time, we cannot do so with knowledge from individual disciplines alone.

We need to see each other, share and work together across our own academic boundaries.

I know that many, especially students, have benefited from earlier conferences, and I hope you will find this one to be even more beneficial.

With this, I wish you all an enlightening conference and thank you for your attention.

 

[The live performance of the speech given at the conference on the 26th of March, 2019, differs somewhat from the script]

 

[1] https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html