• Consider virtual testing and real-world testing, depending on the AI system’s complexity and risks.
• According to UNESCO’s Recommendation on the Ethics of Artificial Intelligence, AI systems identified as posing potential risks to human rights should be broadly tested by stakeholders, including in real-world conditions where needed, as part of UNESCO’s Ethical Impact Assessment, before they are released to the market.
• Although real-world testing helps ensure accuracy, it may not be suitable when an AI system operates under complex conditions, since testing must be completed within a reasonable timeframe and budget. In addition, real-world testing of AI that physically interacts with humans raises safety concerns. In such cases, virtual testing should be performed instead.
• Design the test environment after determining which environment suits the system’s properties. Below are examples of considerations when designing a test environment (a minimal decision sketch follows this checklist):
✔ Are the AI system’s operating conditions complex, and do they change constantly?
✔ Does the AI system pose potential risks to human rights?
✔ Can the test be performed within a reasonable timeframe and budget?
✔ Could real-world testing harm entities in the environment (e.g. vehicles, buildings, animals, people)?
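• The sketch below is one illustrative way to turn the checklist above into a simple decision aid for choosing between virtual and real-world testing. The field names, the ordering of the checks, and the recommended outcomes are assumptions made for this example; they are not prescribed by UNESCO’s Recommendation or any specific standard.

```python
from dataclasses import dataclass

# Hypothetical checklist-driven helper (illustrative only): maps the design
# considerations above to a suggested test environment. All field names and
# decision rules are assumptions for this sketch.

@dataclass
class TestContext:
    complex_changing_conditions: bool   # Are operating conditions complex and constantly changing?
    human_rights_risk: bool             # Does the system pose potential risks to human rights?
    feasible_time_and_budget: bool      # Can real-world testing fit a reasonable timeframe and budget?
    may_harm_environment: bool          # Could real-world testing harm vehicles, buildings, animals, people?


def suggest_test_environment(ctx: TestContext) -> str:
    """Return a suggested test environment based on the checklist answers."""
    # Safety first: if real-world testing could cause harm, test virtually.
    if ctx.may_harm_environment:
        return "virtual"
    # Complex, constantly changing conditions or an infeasible timeframe/budget
    # also point toward virtual (simulated) testing.
    if ctx.complex_changing_conditions or not ctx.feasible_time_and_budget:
        return "virtual"
    # Systems posing potential risks to human rights should be broadly tested,
    # including in real-world conditions where that can be done safely.
    if ctx.human_rights_risk:
        return "real-world (broad stakeholder testing)"
    return "real-world"


if __name__ == "__main__":
    ctx = TestContext(
        complex_changing_conditions=True,
        human_rights_risk=True,
        feasible_time_and_budget=True,
        may_harm_environment=False,
    )
    print(suggest_test_environment(ctx))  # -> "virtual"
```

In practice, such a helper would only support, not replace, the documented judgment of the testers: the checklist answers themselves require case-by-case assessment.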