10-3b Have you provided an explanation to users about the inference results of the AI model?
• If the probability that each of the model's inference results matches the ground truth can be computed, it can be used to explain the model's final decision. You may also consider uncertainty, which indicates how confident an AI model is in its results, expressed as the variance of the output treated as a random variable, as an explanation of the inference results.
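As a minimal sketch of the idea above, the following Python example estimates a predictive probability and an uncertainty from the variance of repeated stochastic forward passes (as in Monte Carlo dropout). The `stochastic_predict` function is a hypothetical stand-in for a real model, simulated here with random noise; the function name, noise level, and class probabilities are illustrative assumptions, not part of the guideline.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(n_samples=100):
    """Hypothetical stochastic model: returns class-probability vectors
    that vary across forward passes (e.g. Monte Carlo dropout).
    Simulated here as noise around a fixed probability vector."""
    base = np.array([0.7, 0.2, 0.1])  # stand-in for real model output
    noise = rng.normal(0.0, 0.05, size=(n_samples, base.size))
    probs = np.clip(base + noise, 1e-6, None)
    return probs / probs.sum(axis=1, keepdims=True)

samples = stochastic_predict()
mean_probs = samples.mean(axis=0)        # predictive probability per class
uncertainty = samples.var(axis=0).sum()  # total variance as uncertainty
pred = int(mean_probs.argmax())
print(pred, round(float(mean_probs[pred]), 3), round(float(uncertainty), 5))
```

The mean over the sampled probability vectors serves as the predictive probability, and the summed per-class variance serves as a simple scalar uncertainty that can be reported alongside the prediction.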
• The following are examples of explanations of a model's inference results, depending on the predictive probability and the uncertainty of those results.
• If the predictive probability is lower than a threshold, or the uncertainty is higher than a threshold, the model's inference results must be explained so that users can recognize this. To derive the threshold, define the situations in which the AI model could cause problems and identify the key variables that determine whether a problem has occurred. Problematic situations here include not only circumstances that threaten users' lives or property but also cases where quality falls below expectations or the required standard.
• However, if uncertainty is very low, the result may instead reflect the model's overfitting. In this case, additional verification of the model for overfitting may be needed.
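The threshold-based explanations described above can be sketched as follows. The threshold values and message wording are hypothetical placeholders; in practice the thresholds would be derived from an analysis of the problematic situations defined for the specific service.

```python
# Hypothetical thresholds -- derive real values from an analysis of
# problematic situations for the target service.
PROB_THRESHOLD = 0.8                 # warn below this predictive probability
UNCERTAINTY_THRESHOLD = 0.05         # warn above this uncertainty
OVERFIT_SUSPECT_UNCERTAINTY = 1e-4   # suspiciously low uncertainty

def explain_result(pred_class, pred_prob, uncertainty):
    """Return a user-facing explanation string for one inference result."""
    parts = [f"Predicted class '{pred_class}' with probability {pred_prob:.2f}."]
    if pred_prob < PROB_THRESHOLD or uncertainty > UNCERTAINTY_THRESHOLD:
        # Low confidence or high uncertainty: make the user aware.
        parts.append("Warning: this result is uncertain; "
                     "please verify it before relying on it.")
    if uncertainty < OVERFIT_SUSPECT_UNCERTAINTY:
        # Very low uncertainty may indicate overfitting.
        parts.append("Note: uncertainty is unusually low; "
                     "additional checks for overfitting are recommended.")
    return " ".join(parts)

print(explain_result("cat", 0.65, 0.12))
```

For example, a high-probability, moderate-uncertainty result produces no warning, while a very-low-uncertainty result triggers the overfitting note.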