<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2022-8-1-0-8</article-id><article-id pub-id-type="publisher-id">3036</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>COMPARISON OF EFFICIENCY OF DIFFERENT METHODS OF TRAINING NEURAL NETWORKS</article-title><trans-title-group xml:lang="en"><trans-title>COMPARISON OF EFFICIENCY OF DIFFERENT METHODS OF TRAINING NEURAL NETWORKS</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Chernykh</surname><given-names>Vladimir Sergeevich</given-names></name><name xml:lang="en"><surname>Chernykh</surname><given-names>Vladimir Sergeevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Zhikharev</surname><given-names>Alexander Gennadievich</given-names></name><name xml:lang="en"><surname>Zhikharev</surname><given-names>Alexander Gennadievich</given-names></name></name-alternatives><email>zhikharev@bsu.edu.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Fedoseev</surname><given-names>Artemy Dmitrievich</given-names></name><name xml:lang="en"><surname>Fedoseev</surname><given-names>Artemy Dmitrievich</given-names></name></name-alternatives></contrib><contrib 
contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Marton</surname><given-names>Nikita Andreevich</given-names></name><name xml:lang="en"><surname>Marton</surname><given-names>Nikita Andreevich</given-names></name></name-alternatives></contrib></contrib-group><pub-date pub-type="epub"><year>2023</year></pub-date><volume>8</volume><issue>1</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2023/1/ИТ_НР_81_8_VJyfF3D.pdf" /><abstract xml:lang="ru"><p>The paper considers several approaches to training multilayer fully connected neural networks. In particular, the authors developed an artificial neural network that recognizes images of the digits zero through six, and assembled a training set for it. The authors carried out a series of experiments applying different training methods to this network. The procedure for training the network with the classical genetic algorithm is described. The results showed that the genetic algorithm in its classical form is ineffective for this problem, since it takes significantly longer to train the network than the backpropagation algorithm does. A combined training method based on a genetic algorithm and gradient descent is also proposed; in the experiments its efficiency was close to that of the backpropagation algorithm. It follows that genetic algorithms are applicable to training artificial neural networks.</p></abstract><trans-abstract xml:lang="en"><p>The paper considers several approaches to training multilayer fully connected neural networks. In particular, the authors developed an artificial neural network that recognizes images of the digits zero through six, and assembled a training set for it. 
The authors carried out a series of experiments applying different training methods to this network. The procedure for training the network with the classical genetic algorithm is described. The results showed that the genetic algorithm in its classical form is ineffective for this problem, since it takes significantly longer to train the network than the backpropagation algorithm does. A combined training method based on a genetic algorithm and gradient descent is also proposed; in the experiments its efficiency was close to that of the backpropagation algorithm. It follows that genetic algorithms are applicable to training artificial neural networks.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>neural network</kwd><kwd>neural network training</kwd><kwd>fully connected neural network</kwd><kwd>method</kwd><kwd>genetic algorithms</kwd><kwd>optimization methods</kwd><kwd>error backpropagation algorithm</kwd><kwd>gradient descent</kwd><kwd>individual</kwd><kwd>neuron</kwd></kwd-group><kwd-group xml:lang="en"><kwd>neural network</kwd><kwd>neural network training</kwd><kwd>fully connected neural network</kwd><kwd>method</kwd><kwd>genetic algorithms</kwd><kwd>optimization methods</kwd><kwd>error backpropagation algorithm</kwd><kwd>gradient descent</kwd><kwd>individual</kwd><kwd>neuron</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Redko V.G. Evolution, neural networks, intelligence: Models and concepts of evolutionary cybernetics / V.G. Redko. – M.: Lenand, 2019. – 224 p.</mixed-citation></ref><ref id="B2"><mixed-citation>Zhikharev A.G., Korsunov N.I., Mamatov R.A., Shcherbinina N.V., Ponomarenko S.V. On the development of an adaptive educational platform using machine learning technologies // Economics. 
Computer science. 2022. V. 49. No. 4. P. 810-819.</mixed-citation></ref><ref id="B3"><mixed-citation>Deeney I.A., Zhikharev A.G., Klyuchnikov D.A., Shurukhina T.N., Gavrilova T.A. Some aspects of AI-technologies in education // Revista San Gregorio. – 2021. – Vol. 44. – P. 186-197.</mixed-citation></ref><ref id="B4"><mixed-citation>Khaikin S. Neural networks: full course / S. Khaikin. – M.: Dialectics, 2019. – 1104 p.</mixed-citation></ref><ref id="B5"><mixed-citation>Voronovsky G.K. Genetic algorithms, artificial neural networks and problems of virtual reality. – Kharkov: Osnova, 1997.</mixed-citation></ref><ref id="B6"><mixed-citation>Callan R. Neural networks: A quick guide / R. Callan. – M.: Williams I.D., 2017. – 288 p.</mixed-citation></ref><ref id="B7"><mixed-citation>Jones T. Programming artificial intelligence in applications / Translated from English by Osipov A.I. – M.: DMK Press, 2011. – 312 p.</mixed-citation></ref></ref-list></back></article>