Although the concept of Artificial Intelligence (AI) has existed for more than half a century, it is only now that real-world systems have entered everyday life. It is argued that AI technology can carry out 'smart' tasks and automate repetitive work more efficiently and in a more personalized manner. That said, AI technology has also been accused of discriminating against groups based on gender, ethnicity, and religion, thus amplifying existing power inequalities. Rather than further outlining such discriminatory practices or biases, we are mainly interested in how people place their (dis)trust in AI and, in addition, in how people resist AI.
A technologically deterministic undertone remains prominent in everyday discourse, in which technologies are depicted as neutral, value-free, and autonomous. This would imply that biases and discriminatory practices by AI are not recognized as such and are therefore more easily accepted as 'natural' (or involuntary) rather than 'human' mistakes. This project will adopt a critical/interpretative research design to understand the social construction of trust in AI; people's levels of (dis)trust towards AI; and why, how, and which people resist AI.
Specifically, this research project will focus on a kind of AI that is advancing rapidly and is widely deployed in various contexts (labor, leisure, medical), i.e., AI-based decision support systems (ADSS).