We are in daily contact with a multitude of black boxes, that is to say, systems that hide their internal logic from the user. Since the user cannot know what is going on inside such a system, many questions arise: in particular about the confidence that can be placed in these systems, but also from an ethical point of view, notably on issues of discrimination and the protection of the user's privacy.
The main goal of this master's thesis is to understand how a black box system, and more specifically an artificial intelligence, is brought to make a decision. To this end, we will analyze the different methods existing in the literature and apply them to several concrete cases, in order to determine whether explaining these black box systems would allow us to solve the main problems they face. The applications will therefore relate to questions of ethics, reliability, and performance.