<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2025-10-1-0-5</article-id><article-id pub-id-type="publisher-id">3747</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>STUDY OF APPROACHES TO DETECTING MOVING OBJECTS IN NOISY IMAGES</article-title><trans-title-group xml:lang="en"><trans-title>STUDY OF APPROACHES TO DETECTING MOVING OBJECTS IN NOISY IMAGES</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Abramov</surname><given-names>Kirill Vladislavovich</given-names></name><name xml:lang="en"><surname>Abramov</surname><given-names>Kirill Vladislavovich</given-names></name></name-alternatives><email>kirya_abramov_2002@bk.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Alexandrov</surname><given-names>Kirill Sergeevich</given-names></name><name xml:lang="en"><surname>Alexandrov</surname><given-names>Kirill Sergeevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Balabanova</surname><given-names>Tatiana Nikolaevna</given-names></name><name xml:lang="en"><surname>Balabanova</surname><given-names>Tatiana Nikolaevna</given-names></name></name-alternatives><email>sozonova@bsuedu.ru</email></contrib><contrib
contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Babenko</surname><given-names>Alexander Andreevich</given-names></name><name xml:lang="en"><surname>Babenko</surname><given-names>Alexander Andreevich</given-names></name></name-alternatives><email>babencko.alexander2011@yandex.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Burdanova</surname><given-names>Ekaterina Vasilyevna</given-names></name><name xml:lang="en"><surname>Burdanova</surname><given-names>Ekaterina Vasilyevna</given-names></name></name-alternatives></contrib></contrib-group><pub-date pub-type="epub"><year>2025</year></pub-date><volume>10</volume><issue>1</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2025/1/ИТ_НР_10_1_5.pdf" /><abstract xml:lang="ru"><p>The paper considers a neural network approach to removing rain noise from images; its application improves the quality of moving-object detection in adverse weather conditions. A generative adversarial network was chosen as the neural network architecture. The main idea of the processing aimed at removing the rain component is as follows: a rectangular region of 256 by 256 pixels is cropped at a random position from the original rain-degraded image. This fragment is fed to the generator, which removes the rain component from it. The processed fragment is then passed to the discriminator, which tries to determine whether the fragment it received was produced by the generator or is a reference (rain-free) sample. Thus, the better the discriminator distinguishes the two, the better the generator must restore the image, and vice versa. 
The paper presents two approaches to removing the rain component from images: one eliminating rain in the form of streaks, and one eliminating rain in the form of a curtain (fog) effect.</p></abstract><trans-abstract xml:lang="en"><p>The paper considers a neural network approach to removing rain noise from images; its application improves the quality of moving-object detection in adverse weather conditions. A generative adversarial network was chosen as the neural network architecture. The main idea of the processing aimed at removing the rain component is as follows: a rectangular region of 256 by 256 pixels is cropped at a random position from the original rain-degraded image. This fragment is fed to the generator, which removes the rain component from it. The processed fragment is then passed to the discriminator, which tries to determine whether the fragment it received was produced by the generator or is a reference (rain-free) sample. Thus, the better the discriminator distinguishes the two, the better the generator must restore the image, and vice versa. 
The paper presents two approaches to removing the rain component from images: one eliminating rain in the form of streaks, and one eliminating rain in the form of a curtain (fog) effect.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>framework</kwd><kwd>detection</kwd><kwd>convolutional neural network</kwd><kwd>generative adversarial network</kwd><kwd>diffusion neural network</kwd><kwd>discriminator</kwd><kwd>metric</kwd><kwd>diffusion model</kwd></kwd-group><kwd-group xml:lang="en"><kwd>framework</kwd><kwd>detection</kwd><kwd>convolutional neural network</kwd><kwd>generative adversarial network</kwd><kwd>diffusion neural network</kwd><kwd>discriminator</kwd><kwd>metric</kwd><kwd>diffusion model</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>He Zhang et al. Image De-raining Using a Conditional Generative Adversarial Network [Electronic resource]. URL: https://arxiv.org/pdf/1701.05957v4 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B2"><mixed-citation>Janusz K., Tadeus U. Video Quality Assessment: Some Remarks on Selected Objective Metrics [Electronic resource]. URL: https://ieeexplore.ieee.org/document/9238303 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B3"><mixed-citation>Kshitiz Garg, Shree K. Nayar. Vision and Rain [Electronic resource]. URL: https://www.cs.columbia.edu/CAVE/publications/pdfs/Garg_IJCV07.pdf (accessed: 06.02.2025).</mixed-citation></ref><ref id="B4"><mixed-citation>Syed Waqas Zamir et al. Multi-Stage Progressive Image Restoration [Electronic resource]. URL: https://openaccess.thecvf.com/content/CVPR2021/papers/Zamir_Multi-Stage_Progressive_Image_Restoration_CVPR_2021_paper.pdf (accessed: 06.02.2025).</mixed-citation></ref><ref id="B5"><mixed-citation>Prashant W. Patil et al.
Multi-weather Image Restoration via Domain Translation [Electronic resource]. URL: https://openaccess.thecvf.com/content/ICCV2023/papers/Patil_Multi-weather_Image_Restoration_via_Domain_Translation_ICCV_2023_paper.pdf (accessed: 06.02.2025).</mixed-citation></ref><ref id="B6"><mixed-citation>Phillip Isola et al. Image-to-Image Translation with Conditional Adversarial Networks [Electronic resource]. URL: https://arxiv.org/pdf/1611.07004 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B7"><mixed-citation>Rui Qian et al. Attentive Generative Adversarial Network for Raindrop Removal from A Single Image [Electronic resource]. URL: https://arxiv.org/pdf/1711.10098 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B8"><mixed-citation>Ruoteng Li et al. Heavy Rain Image Restoration: Integrating Physics Model and Conditional Adversarial Learning [Electronic resource]. URL: https://arxiv.org/pdf/1904.05050v1 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B9"><mixed-citation>Yanyan Wei et al. DerainCycleGAN: Rain Attentive CycleGAN for Single Image Deraining and Rainmaking [Electronic resource]. URL: https://arxiv.org/pdf/1912.07015 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B10"><mixed-citation>Yiyang Shen et al. Rethinking Real-world Image Deraining via An Unpaired Degradation-Conditioned Diffusion Model [Electronic resource]. URL: https://arxiv.org/pdf/2301.09430 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B11"><mixed-citation>Yuanbo Wen et al. From Heavy Rain Removal to Detail Restoration: A Faster and Better Network [Electronic resource]. URL: https://arxiv.org/pdf/2205.03553 (accessed: 06.02.2025).</mixed-citation></ref><ref id="B12"><mixed-citation>Ziwei Luo et al. Image Restoration with Mean-Reverting Stochastic Differential Equations [Electronic resource]. URL: https://www.researchgate.net/publication/367529695_Image_Restoration_with_Mean-Reverting_Stochastic_Differential_Equations (accessed: 06.02.2025).</mixed-citation></ref></ref-list></back></article>