
How to make artificial agents a bit more like us

Hedvig Kjellström, KTH Royal Institute of Technology, Sweden.

Abstract:

There is currently a lively and important discussion in society about the dangers of artificial intelligence. While much of this debate focuses on general and human-like artificial intelligence, it can be argued that an overlooked but highly problematic aspect of computers is that they function in a fundamentally different way than the human brain. This means that they come to conclusions in a different manner than humans, and that it consequently will be difficult for a human to anticipate how they will be affected by the actions of an interacting agent. An example is social media, where the underlying computer engine has been trained with examples of human behavior and uses this knowledge to curate the flow of content to the user. In this way, the range of information the user receives is changed, which causes a change in the behavior of the user – the system has affected the user's perception of the world in an emergent manner. The conclusion is that we humans should not fear sentient artificial intelligence – which can understand us humans – but rather be more afraid of non-sentient artificial intelligence, which functions in a different way than humans. In my talk, I will describe how my research group works towards developing sentient artificial intelligence in different ways.

Presentation Slides