DOI: 10.18413/2518-1092-2022-8-2-0-7

PROTECTION AGAINST ADVERSARIAL ATTACKS ON AUDIO AND IMAGES IN ARTIFICIAL INTELLIGENCE MODELS USING THE SGEC METHOD

In the modern world, artificial intelligence (AI) systems increasingly face the risk of adversarial attacks on audio and images. This article explores this problem and presents the SGEC method as a means of minimizing these risks. Various types of attacks on audio and images are discussed, including label manipulation, white-box and black-box attacks, leakage through trained models, and hardware-level attacks. The main focus is on the SGEC method, which encrypts data and ensures its integrity within AI models. The article also examines other approaches to protecting audio and images, such as dual verification and ensemble methods, access restriction and data anonymization, and the use of provably robust AI models.
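The abstract does not describe SGEC's internals, but the encryption-plus-integrity idea it names can be illustrated with standard authenticated encryption. Below is a minimal Python sketch, assuming an AES-GCM-based scheme from the widely used `cryptography` package; the function names `protect` and `verify_and_unprotect` are hypothetical stand-ins, not the authors' actual interface.

```python
# Hypothetical illustration only: SGEC's actual construction is not published in
# this abstract, so standard authenticated encryption (AES-GCM) is substituted
# here to show the general "encrypt + verify integrity before inference" pattern.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(data: bytes, key: bytes) -> bytes:
    """Encrypt raw audio/image bytes and bind an integrity tag to them."""
    nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
    return nonce + AESGCM(key).encrypt(nonce, data, None)

def verify_and_unprotect(blob: bytes, key: bytes) -> bytes:
    """Decrypt and authenticate; raises cryptography.exceptions.InvalidTag
    if the stored payload was modified in any way."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    sample = b"\x00\x01\x02\x03"  # stands in for stored audio/image bytes
    blob = protect(sample, key)
    assert verify_and_unprotect(blob, key) == sample
    # Any adversarial bit-level modification of `blob` now fails
    # authentication instead of silently reaching the AI model.
```

The relevant design point is that the authentication tag covers the whole payload, so a perturbation injected into stored training or inference data is rejected at decryption time rather than propagating into the model.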
