<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2024-9-4-0-3</article-id><article-id pub-id-type="publisher-id">3665</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>COMPARISON OF THE STRUCTURE, EFFICIENCY, AND SPEED OF OPERATION OF FEEDFORWARD, CONVOLUTIONAL, AND RECURRENT NEURAL NETWORKS</article-title><trans-title-group xml:lang="en"><trans-title>COMPARISON OF THE STRUCTURE, EFFICIENCY, AND SPEED OF OPERATION OF FEEDFORWARD, CONVOLUTIONAL, AND RECURRENT NEURAL NETWORKS</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Shapalin</surname><given-names>Vitaliy Gennadiyevich</given-names></name><name xml:lang="en"><surname>Shapalin</surname><given-names>Vitaliy Gennadiyevich</given-names></name></name-alternatives><email>shapalinv@gmail.com</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Nikolayenko</surname><given-names>Denis Vladimirovich</given-names></name><name xml:lang="en"><surname>Nikolayenko</surname><given-names>Denis Vladimirovich</given-names></name></name-alternatives><email>dv.nikolaenko@yandex.ru</email></contrib></contrib-group><pub-date pub-type="epub"><year>2024</year></pub-date><volume>9</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" 
xlink:href="/media/information/2024/4/ИТ.НР.9_4_3.pdf" /><abstract xml:lang="ru"><p>This article examines the efficiency of fully connected, recurrent, and convolutional neural networks in the context of developing a simple model for weather forecasting. The architectures and working principles of fully connected neural networks, the structure of one-dimensional and two-dimensional convolutional neural networks, as well as the architecture, features, advantages, and disadvantages of recurrent neural networks&amp;mdash;specifically, simple recurrent neural networks, LSTM, and GRU, along with their bidirectional variants for each of the three aforementioned types&amp;mdash;are discussed. Based on the available theoretical materials, simple neural networks were developed to compare the efficiency of each architecture, with training time and error magnitude serving as criteria, and temperature, wind speed, and atmospheric pressure as training data. The training speed, minimum and average error values for the fully connected neural network, convolutional neural network, simple recurrent network, LSTM, and GRU, as well as for bidirectional recurrent neural networks, were examined. Based on the results obtained, an analysis was conducted to explore the possible reasons for the effectiveness of each architecture. Graphs were plotted to show the relationship between processing speed and error magnitude for the three datasets examined: temperature, wind speed, and atmospheric pressure. Conclusions were drawn about the efficiency of specific models in the context of forecasting time series of meteorological data.</p></abstract><trans-abstract xml:lang="en"><p>This article examines the efficiency of fully connected, recurrent, and convolutional neural networks in the context of developing a simple model for weather forecasting. 
The architectures and working principles of fully connected neural networks, the structure of one-dimensional and two-dimensional convolutional neural networks, as well as the architecture, features, advantages, and disadvantages of recurrent neural networks&amp;mdash;specifically, simple recurrent neural networks, LSTM, and GRU, along with their bidirectional variants for each of the three aforementioned types&amp;mdash;are discussed. Based on the available theoretical materials, simple neural networks were developed to compare the efficiency of each architecture, with training time and error magnitude serving as criteria, and temperature, wind speed, and atmospheric pressure as training data. The training speed, minimum and average error values for the fully connected neural network, convolutional neural network, simple recurrent network, LSTM, and GRU, as well as for bidirectional recurrent neural networks, were examined. Based on the results obtained, an analysis was conducted to explore the possible reasons for the effectiveness of each architecture. Graphs were plotted to show the relationship between processing speed and error magnitude for the three datasets examined: temperature, wind speed, and atmospheric pressure. Conclusions were drawn about the efficiency of specific models in the context of forecasting time series of meteorological data.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>LSTM</kwd><kwd>GRU</kwd><kwd>bidirectional recurrent neural networks</kwd><kwd>convolutional neural networks</kwd><kwd>keras</kwd><kwd>tensorflow</kwd></kwd-group><kwd-group xml:lang="en"><kwd>LSTM</kwd><kwd>GRU</kwd><kwd>bidirectional recurrent neural networks</kwd><kwd>convolutional neural networks</kwd><kwd>keras</kwd><kwd>tensorflow</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Ryndin A.A., Ulev V.P., Neural networks learning speed research. 
https://cyberleninka.ru/article/n/issledovanie-skorosti-obucheniya-neyronnyh-setey (accessed October 12, 2024) (in Russian).</mixed-citation></ref><ref id="B2"><mixed-citation>Bykov F.L., Tsaralov N.D., Modern practices of applying machine learning to weather forecasting. https://cyberleninka.ru/article/n/sovremennye-praktiki-primeneniya-mashinnogo-obucheniya-v-zadache-prognoza-pogody (accessed October 13, 2024) (in Russian).</mixed-citation></ref><ref id="B3"><mixed-citation>Loginom wiki: multilayer perceptron. https://wiki.loginom.ru/articles/multilayered-perceptron.html (accessed July 3, 2024) (in Russian).</mixed-citation></ref><ref id="B4"><mixed-citation>Murat H. Sazli, A brief review of feed-forward neural networks. https://www.researchgate.net/publication/228394623_A_brief_review_of_feed-forward_neural_networks (accessed July 3, 2024).</mixed-citation></ref><ref id="B5"><mixed-citation>Fully connected layers of neural networks in machine learning. https://habr.com/ru/articles/718044/ (accessed July 3, 2024) (in Russian).</mixed-citation></ref><ref id="B6"><mixed-citation>Gorbachevskaya E.N., Neural network classification. https://cyberleninka.ru/article/n/klassifikatsiya-neyronnyh-setey/viewer (accessed July 3, 2024) (in Russian).</mixed-citation></ref><ref id="B7"><mixed-citation>Kaggle weather dataset. https://www.kaggle.com/datasets/muthuj7/weather-dataset/data (accessed July 8, 2024).</mixed-citation></ref><ref id="B8"><mixed-citation>Vitaliy Shapalin, &amp;ldquo;Forecast&amp;rdquo; GitHub repository. https://github.com/ShapalinVitaliy/Forecast (accessed July 9, 2024).</mixed-citation></ref><ref id="B9"><mixed-citation>Keras official website. https://keras.io/ (accessed July 10, 2024).</mixed-citation></ref><ref id="B10"><mixed-citation>Simon J.D. Prince, Understanding Deep Learning. MIT Press, udlbook.com, 2023, p. 161.</mixed-citation></ref><ref id="B11"><mixed-citation>Habr, &amp;ldquo;Convolution&amp;rdquo;. 
https://habr.com/ru/articles/795223/ (accessed August 5, 2024) (in Russian).</mixed-citation></ref><ref id="B12"><mixed-citation>D.A. Marshalko, O.V. Kubansky, Architecture of convolutional neural networks. https://cyberleninka.ru/article/n/arhitektura-svyortochnyh-neyronnyh-setey/viewer (accessed August 5, 2024) (in Russian).</mixed-citation></ref><ref id="B13"><mixed-citation>1D convolutional neural networks and applications: A survey. https://www.sciencedirect.com/science/article/pii/S0888327020307846 (accessed August 8, 2024).</mixed-citation></ref><ref id="B14"><mixed-citation>Conceptual Understanding of Convolutional Neural Network &amp;ndash; A Deep Learning Approach. https://www.sciencedirect.com/science/article/pii/S1877050918308019 (accessed August 6, 2024).</mixed-citation></ref><ref id="B15"><mixed-citation>I. Goodfellow, Y. Bengio, A. Courville, Deep Learning. MIT Press, deeplearningbook.com, 2016, p. 373.</mixed-citation></ref><ref id="B16"><mixed-citation>RNN, LSTM, GRU and other recurrent neural networks. http://vbystricky.ru/2021/05/rnn_lstm_gru_etc.html (accessed July 12, 2024) (in Russian).</mixed-citation></ref><ref id="B17"><mixed-citation>Robin M. Schmidt, Recurrent Neural Networks (RNNs): A gentle Introduction and Overview. https://arxiv.org/abs/1912.05911 (accessed July 13, 2024).</mixed-citation></ref><ref id="B18"><mixed-citation>Recurrent Neural Networks (RNN) - The Vanishing Gradient Problem. https://www.superdatascience.com/blogs/recurrent-neural-networks-rnn-the-vanishing-gradient-problem (accessed July 13, 2024).</mixed-citation></ref><ref id="B19"><mixed-citation>Paul Werbos, Backpropagation through time: what it does and how to do it. https://www.researchgate.net/publication/2984354_Backpropagation_through_time_what_it_does_and_how_to_do_it (accessed July 12, 2024).</mixed-citation></ref><ref id="B20"><mixed-citation>LSTM &amp;mdash; neural network with Long Short-Term Memory. 
https://neurohive.io/ru/osnovy-data-science/lstm-nejronnaja-set/ (accessed July 29, 2024) (in Russian).</mixed-citation></ref><ref id="B21"><mixed-citation>A survey on long short-term memory networks for time series prediction. https://www.sciencedirect.com/science/article/pii/S2212827121003796 (accessed July 30, 2024).</mixed-citation></ref><ref id="B22"><mixed-citation>GRU recurrent blocks. An example of their implementation in a sentiment analysis task. https://proproprogs.ru/neural_network/rekurrentnye-bloki-gru-primer-realizacii-v-zadache-sentiment-analiza (accessed July 30, 2024) (in Russian).</mixed-citation></ref><ref id="B23"><mixed-citation>Bidirectional recurrent neural networks. https://proproprogs.ru/neural_network/bidirectional-rekurrentnye-neyronnye-seti (accessed July 30, 2024) (in Russian).</mixed-citation></ref><ref id="B24"><mixed-citation>Mike Schuster, Kuldip K. Paliwal, Bidirectional Recurrent Neural Networks. https://deeplearning.cs.cmu.edu/S24/document/readings/Bidirectional%20Recurrent%20Neural%20Networks.pdf (accessed July 30, 2024).</mixed-citation></ref></ref-list></back></article>