TI - Effects of influence on user trust in predictive decision making
AU - Zhou, J
AU - Li, Z
AU - Yu, K
AU - Chen, F
AU - Wang, Y
AU - Hu, H
AB - This paper introduces fact-checking into Machine Learning (ML) explanation by presenting training data points to users as facts in order to boost user trust. We aim to investigate the influence of training data points and how they affect user trust, with the goal of enhancing ML explanation. We tackle this question by allowing users to check the training data points that have higher or lower influence on the prediction. A user study found that presenting influences significantly increases user trust in predictions, but only for training data points with higher influence values under the high model performance condition, where users can justify their actions with more similar facts.
JO - Conference on Human Factors in Computing Systems - Proceedings
PY - 2019/05/02
DA - 2019/05/02
DO - 10.1145/3290607.3312962
Y1 - 2019/05/02
Y2 - 2024/03/29
ER -