<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2024-9-4-0-6</article-id><article-id pub-id-type="publisher-id">3668</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>APPLICATION OF RECURRENT NEURAL NETWORK MODELS FOR GENERATING TEXT DATA OF SPECIFICATIONS OF ASSEMBLY DRAWINGS</article-title><trans-title-group xml:lang="en"><trans-title>APPLICATION OF RECURRENT NEURAL NETWORK MODELS FOR GENERATING TEXT DATA OF SPECIFICATIONS OF ASSEMBLY DRAWINGS</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Kolesnikov</surname><given-names>Vladimir Dmitrievich</given-names></name><name xml:lang="en"><surname>Kolesnikov</surname><given-names>Vladimir Dmitrievich</given-names></name></name-alternatives><email>kolesnikov_vladm@edu.bstu.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Kabalyants</surname><given-names>Petr Stepanovich</given-names></name><name xml:lang="en"><surname>Kabalyants</surname><given-names>Petr Stepanovich</given-names></name></name-alternatives><email>p.s.k@list.ru</email></contrib></contrib-group><pub-date pub-type="epub"><year>2024</year></pub-date><volume>9</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" 
xlink:href="/media/information/2024/4/ИТ.НР.9_4_6.pdf" /><abstract xml:lang="ru"><p>The article explores the capabilities of several recurrent neural network models for text data generation. Specifically, a classic recurrent neural network (RNN), a long short-term memory network (LSTM) and a generative adversarial network (GAN) are considered in the context of generating specification text for assembly drawings in the format approved by the state standard. To train the models, a Russian-language data set was used, expanded with additional records that simulate input data consisting of the drawing parts and the expected specification text. The input data set was divided into four groups of equal size according to three main factors: the number of input parts, their repeatability and grammatical complexity. For all four input groups, the GAN models achieved the highest ratio of error-free responses to all responses generated by the model, followed by the LSTM models and, finally, the RNN models. GAN-based models are therefore planned for use in future research on specification text generation.</p></abstract><trans-abstract xml:lang="en"><p>The article explores the capabilities of several recurrent neural network models for text data generation. Specifically, a classic recurrent neural network (RNN), a long short-term memory network (LSTM) and a generative adversarial network (GAN) are considered in the context of generating specification text for assembly drawings in the format approved by the state standard. To train the models, a Russian-language data set was used, expanded with additional records that simulate input data consisting of the drawing parts and the expected specification text. 
The input data set was divided into four groups of equal size according to three main factors: the number of input parts, their repeatability and grammatical complexity. For all four input groups, the GAN models achieved the highest ratio of error-free responses to all responses generated by the model, followed by the LSTM models and, finally, the RNN models. GAN-based models are therefore planned for use in future research on specification text generation.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>text generation</kwd><kwd>language models</kwd><kwd>neural networks</kwd><kwd>recurrent neural networks</kwd><kwd>generative adversarial networks</kwd><kwd>drawing specification</kwd></kwd-group><kwd-group xml:lang="en"><kwd>text generation</kwd><kwd>language models</kwd><kwd>neural networks</kwd><kwd>recurrent neural networks</kwd><kwd>generative adversarial networks</kwd><kwd>drawing specification</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="B1"><mixed-citation>Imran A.S., Yang R., Kastrati Z., Daudpota S.M., Shaiks S. The impact of synthetic text generation for sentiment analysis using GAN based models // Egyptian Informatics Journal. – 2022. – Volume 23. – Issue 3. – pp. 547-557.</mixed-citation></ref><ref id="B3"><mixed-citation>Subasi A. Practical Machine Learning for Data Analysis Using Python. Academic Press. – 2020. – pp. 91-202.</mixed-citation></ref><ref id="B5"><mixed-citation>Shrouti D., Bayas A., Joshi N., Misal M., Mahajan S., Gite S. Story Generation Using GAN, RNN and LSTM. In: Garg D., Rodrigues J.J.P.C., Gupta S.K., Cheng X., Sarao P., Patel G.S. (eds) Advanced Computing. IACC 2023. Communications in Computer and Information Science. – vol. 2053. – 2024. – 
https://doi.org/10.1007/978-3-031-56700-1_16, pp. 193-204.</mixed-citation></ref><ref id="B6"><mixed-citation>Li Y., Pan Q., Wang S., Yang T., Cambria E. A Generative Model for category text generation // Information Sciences. – 2018. – Volume 450. – pp. 301-315.</mixed-citation></ref><ref id="B7"><mixed-citation>Dychka I., Legeza V., Oleshenko L., Bohutskiy D. Advances in Computer Science for Engineering and Education III. – 2020. – pp. 344-346.</mixed-citation></ref><ref id="B8"><mixed-citation>Tee T.H., Bei Yeap B.Q., Gan K.H., Tan T.P. Learning to Automatically Generating Genre-Specific Song Lyrics: A Comparative Study. In: Villazón-Terrazas B., Ortiz-Rodriguez F., Tiwari S., Sicilia M.A., Martín-Moncunill D. (eds) Knowledge Graphs and Semantic Web. KGSWC 2022. Communications in Computer and Information Science. – vol. 1686. – 2022. – https://doi.org/10.1007/978-3-031-21422-6_5, pp. 62-75.</mixed-citation></ref><ref id="B9"><mixed-citation>Dhall I., Vashisth S. Micro-Electronics and Telecommunication Engineering. – 2020. – pp. 649-657.</mixed-citation></ref><ref id="B10"><mixed-citation>Goodfellow I.J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative Adversarial Nets. Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, 2014. – p. 9.</mixed-citation></ref><ref id="B11"><mixed-citation>Kusner M., Hernández-Lobato J.M. GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution, 2016. – p. 6.</mixed-citation></ref><ref id="B12"><mixed-citation>Yu L., Zhang W., Wang J., Yu Y. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Shanghai Jiao Tong University, 2016. – pp. 2852-2858.</mixed-citation></ref><ref id="B13"><mixed-citation>Zuev S.V., Kabalyanc P.S., Polyakov V.M. Detection of stream anomalies by means of the fractal dimension of the graph corresponding to the data processing neural network // Information Systems and Technologies. – 2021. – № 5(127). – pp. 31-38.</mixed-citation></ref></ref-list></back></article>