09-2 Do you have a defense technique in place against model evasion attacks?
• Model evasion attacks trick the AI model by minimally modifying the input data. The image domain is particularly vulnerable to adversarial attacks because a slight perturbation can change the model's prediction while remaining invisible to humans.
• Studies have explored mitigating attacks against text-processing AI models through adversarial training, and against speech-recognition AI algorithms through downsampling, local smoothing, and quantization. These are techniques worth considering for mitigating model evasion attacks.
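The speech-domain preprocessing defenses mentioned above can be sketched with plain NumPy. This is a minimal illustration, not a production implementation: the signal, perturbation magnitude, and parameter choices (window size, downsampling factor, number of quantization levels) are hypothetical. The idea in each case is to discard the fine-grained detail that adversarial perturbations live in, while preserving the coarse structure the model actually needs.

```python
import numpy as np

def local_smoothing(signal, k=3):
    """Replace each sample with the median of its k-sample neighborhood,
    suppressing isolated high-frequency adversarial spikes."""
    pad = k // 2
    padded = np.pad(signal, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(signal))])

def downsample(signal, factor=2):
    """Keep every `factor`-th sample, discarding fine-grained detail
    an attacker may have perturbed."""
    return signal[::factor]

def quantize(signal, levels=256):
    """Round amplitudes to a fixed grid of levels, so that small
    adversarial perturbations collapse onto the same quantized value."""
    return np.round(signal * (levels - 1)) / (levels - 1)

# Hypothetical example: a clean waveform plus a tiny adversarial perturbation.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))
adversarial = clean + rng.uniform(-0.004, 0.004, size=clean.shape)

# Chain the defenses: smooth first, then quantize the result.
defended = quantize(local_smoothing(adversarial))
```

In practice these transformations are applied to the audio input before it reaches the recognition model; the trade-off is that overly aggressive smoothing or quantization also degrades accuracy on clean inputs, so the parameters must be tuned per model.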