<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2024-9-3-0-6</article-id><article-id pub-id-type="publisher-id">3560</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>APPROACH TO VECTORIZATION OF DRAWINGS OF DESIGN DOCUMENTATION ON PAPER</article-title><trans-title-group xml:lang="en"><trans-title>APPROACH TO VECTORIZATION OF DRAWINGS OF DESIGN DOCUMENTATION ON PAPER</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name><name xml:lang="en"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name></name-alternatives><email>o.basov@acti.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Demin</surname><given-names>Oleg Dmitrievich</given-names></name><name xml:lang="en"><surname>Demin</surname><given-names>Oleg Dmitrievich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Noskov</surname><given-names>Dmitry Aleksandrovich</given-names></name><name xml:lang="en"><surname>Noskov</surname><given-names>Dmitry Aleksandrovich</given-names></name></name-alternatives></contrib></contrib-group><pub-date 
pub-type="epub"><year>2024</year></pub-date><volume>9</volume><issue>3</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2024/3/НР_ИТ_9_3_6.pdf" /><abstract xml:lang="ru"><p>This work proposes a solution to the problem of vectorizing and machine-interpreting design documentation drawings on paper, enabling automated transfer of images of products, parts, and assembly units into CAD systems. Several neural network architectures are proposed for detecting and recognizing the main elements of a drawing (frame, title block, specification, views, projections, and sections), inscriptions, dimension and extension lines, and the primitives that directly describe the product. For hierarchical and interconnected vectorization, a semantic segmentation mechanism based on a graph neural network is proposed. The results of implementing the main stages of the design-drawing vectorization task are presented.</p></abstract><trans-abstract xml:lang="en"><p>This work proposes a solution to the problem of vectorizing and machine-interpreting design documentation drawings on paper, enabling automated transfer of images of products, parts, and assembly units into CAD systems. Several neural network architectures are proposed for detecting and recognizing the main elements of a drawing (frame, title block, specification, views, projections, and sections), inscriptions, dimension and extension lines, and the primitives that directly describe the product. For hierarchical and interconnected vectorization, a semantic segmentation mechanism based on a graph neural network is proposed. 
The results of the implementation of the main stages of solving the problem of vectorization of design drawings are presented.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>design documentation</kwd><kwd>drawing</kwd><kwd>product</kwd><kwd>vectorization</kwd><kwd>graph neural network</kwd><kwd>deep reinforcement learning</kwd></kwd-group><kwd-group xml:lang="en"><kwd>design documentation</kwd><kwd>drawing</kwd><kwd>product</kwd><kwd>vectorization</kwd><kwd>graph neural network</kwd><kwd>deep reinforcement learning</kwd></kwd-group></article-meta></front></article>