Biased Artificial Intelligence: Openness, innovation and the remaking of mental health

Biased AI
01 September 2018 → 31 August 2020
European funding: framework programme
Research disciplines
  • Humanities
    • Ethics of technology
Keywords: mental health, AI
Project description

Following the trend to design Artificial Intelligence (AI) for Good (a plea to develop AI in ways that address pressing problems facing humanity), the focal question we will address is: What is the proper combination of open and proprietary elements in AI innovation in the field of mental health, given the need to design a system that both promotes innovation and is compatible with the ideals of human dignity, fundamental rights and freedoms, and cultural diversity?

Our concern in the context of mental health in particular is that proprietary AI may reproduce gender and racial stereotypes, which in turn may reinforce social stigma and raise barriers to proper diagnosis and treatment. On the other hand, this research does not entertain a model of innovation in which the proprietary stands in opposition to the open: rolling back private property does not necessarily favour the weak over the strong. To this end, we propose to conduct qualitative research to identify public and commercial demands, in order to rethink the governance of innovation in AI, focusing in particular on mental health.
Participant observation and interviews with key stakeholders will be conducted to elicit their views, taking into account not only economic but also ethical aspects (diversity, rights and freedoms). The governance question must account for public demands that are both technically feasible and commercially viable in the AI area.

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them.