  • 10-1 Do you provide evidence that enables users to accept the process by which the model generated its inference results?
    • For system users to trust an AI model’s inference results and the AI system’s operation, they must be able to understand how the model arrived at its inferences. Ideally, users should be provided with an explanation and supporting evidence.

    • Consider reviewing and applying explainable AI (XAI) techniques that present evidence for the model’s decisions in a form humans can understand. XAI techniques such as surrogate models, attention mechanisms, and internal analysis methods can be selected depending on which factors need explanation and on the characteristics of the AI model (a minimal surrogate-model sketch follows this list).

    • Because the evidence behind an AI model’s inference results is not always explainable, an alternative to XAI may be needed to ensure the AI system’s transparency. Therefore, review the applicability of XAI technology first: if applying it is feasible, follow "10-1a"; if applying it is challenging, refer to "10-1b."
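
As a concrete illustration of the surrogate-model technique named above, the sketch below trains a shallow, interpretable decision tree to mimic a black-box classifier, so the tree’s rules can serve as human-readable evidence for the model’s decisions. The dataset, the specific model choices, and the fidelity check are illustrative assumptions, not part of the guideline.

```python
# Minimal surrogate-model sketch: approximate a black-box model with an
# interpretable one, then present the interpretable model's rules as
# evidence. All model/dataset choices here are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# The opaque model whose decisions need explaining.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Train the surrogate on the black box's *predictions*, not the true
# labels: the goal is to approximate the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen
# data. Low fidelity means the surrogate's explanations are untrustworthy.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")

# Human-readable decision rules that can be shown to users as evidence.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A fidelity check of this kind matters in practice: the surrogate’s explanations are only as trustworthy as its agreement with the black-box model, so a low fidelity score would argue for a different XAI technique or the alternative path described in "10-1b."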