The proposed research evaluates whether monolingual testing practice can provide a complete picture of multilingual (ML) pupils’ abilities in content-related areas (e.g. science). In traditional tests, these pupils receive the test items and respond to them in the school language only. The hypothesis to be tested is that monolingual testing practice crucially overlooks some of the pupils’ competencies, whereas ML testing may yield more accurate information on actual achievement levels. Little research is available on this topic (Menken & Shohamy 2013), yet it is an important matter, e.g. for researchers whose insights depend on test results, or for teachers who assess pupils’ achievements on a daily basis. Our aim is to contribute to closing the knowledge gap on alternative ways of testing: do ML forms of assessment provide a more complete picture of pupils’ actual abilities, and do they come with the expected effects of enhanced self-efficacy and improved well-being (factors held to contribute positively to achievement)? A mixed-methods design is used (a background questionnaire, ML tests in different formats, and interviews with pupils). The results will have an important impact on the scientific field of (language) testing, with implications for the design of high-stakes international tests (e.g. PISA) and for how their results are interpreted. In addition, there will be implications for everyday practices of language assessment in classrooms.