AI Model Explainability User Research Study
This questionnaire will present you with a number of different tasks, ranging from recognising items in images to reading comprehension. For each task, we will show you a subset of the following information:
- Input (always shown)
- AI model output
- AI model explainability output
We will then ask a series of questions, such as how much you trust the model output and what you think the correct result should be. Please give your best guess, even if you are not sure. Some of the problems are intentionally very difficult, so don't worry if you don't know the answer.
Each problem will be accompanied by a short description explaining what it involves.
By completing this study, you will help us understand what makes AI models trustworthy and interpretable.
If you would like to know more about how we manage your data or about our research, please email: jonathan.frawley <at> durham.ac.uk.
We will first ask you a few short questions about yourself. This information is used only for the purposes of this study and will not be shared with others.
This questionnaire will take about 10 minutes to complete.
Start