<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2022-8-2-0-5</article-id><article-id pub-id-type="publisher-id">3145</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>THE METHOD OF IDENTIFYING INDIRECT SIGNS OF CORRUPTION ACTS BASED ON VIDEO RECORDINGS OF SPEECHES OF CIVIL SERVANTS</article-title><trans-title-group xml:lang="en"><trans-title>THE METHOD OF IDENTIFYING INDIRECT SIGNS OF CORRUPTION ACTS BASED ON VIDEO RECORDINGS OF SPEECHES OF CIVIL SERVANTS</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Krainovskikh</surname><given-names>Veronika Igorevna</given-names></name><name xml:lang="en"><surname>Krainovskikh</surname><given-names>Veronika Igorevna</given-names></name></name-alternatives><email>vikraynova@edu.hse.ru</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Komarova</surname><given-names>Alyona Alekseevna</given-names></name><name xml:lang="en"><surname>Komarova</surname><given-names>Alyona Alekseevna</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name><name xml:lang="en"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name></name-alternatives><email>oobasov@mail.ru</email></contrib></contrib-group><pub-date pub-type="epub"><year>2023</year></pub-date><volume>8</volume><issue>2</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2023/2/ИТ_НР_8.2_5_Ql0RJfM.pdf" /><abstract xml:lang="ru"><p>The problem of countering corruption in the public service remains highly relevant. It is assumed that when committing acts of corruption, people exhibit certain verbal and non-verbal signals by which signs of corruption can be identified. The article addresses violations of anti-corruption legislation in state and municipal institutions from the perspective of the psycho-emotional state of officials. It proposes applying machine learning methods to video and audio recordings of unscripted speeches by officials at various levels of government, in which they answer questions from journalists and the public, in order to recognize emotions and to detect aggression, uncertainty, and evasiveness in their answers. The results of the study may be useful to government agencies engaged in combating corruption, as well as to the public interested in the transparency and integrity of civil servants.</p></abstract><trans-abstract xml:lang="en"><p>The problem of countering corruption in the public service remains highly relevant. It is assumed that when committing acts of corruption, people exhibit certain verbal and non-verbal signals by which signs of corruption can be identified.
The article addresses violations of anti-corruption legislation in state and municipal institutions from the perspective of the psycho-emotional state of officials. It proposes applying machine learning methods to video and audio recordings of unscripted speeches by officials at various levels of government, in which they answer questions from journalists and the public, in order to recognize emotions and to detect aggression, uncertainty, and evasiveness in their answers. The results of the study may be useful to government agencies engaged in combating corruption, as well as to the public interested in the transparency and integrity of civil servants.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>anti-corruption</kwd><kwd>public administration</kwd><kwd>machine learning</kwd><kwd>verbal analysis</kwd><kwd>analysis of audio-video recordings</kwd><kwd>analysis of psycho-emotional state</kwd></kwd-group><kwd-group xml:lang="en"><kwd>anti-corruption</kwd><kwd>public administration</kwd><kwd>machine learning</kwd><kwd>verbal analysis</kwd><kwd>analysis of audio-video recordings</kwd><kwd>analysis of psycho-emotional state</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title>
<ref id="B1"><mixed-citation>Ovsyannikova V.V. On the question of the classification of emotions: categorical and multidimensional approaches // Financial analytics: problems and solutions. – 2013. – No. 37. – P. 43-48.</mixed-citation></ref>
<ref id="B2"><mixed-citation>Rogov E.I. The handbook of a practical psychologist: a textbook // Moscow: VL. – 1998. – P. 134-142.</mixed-citation></ref>
<ref id="B3"><mixed-citation>Samigullin T.R., Smirnov I.Z., Laushkina A.A. Determination of markers of aggressive human behavior based on the analysis of audio and text channels // Research result. Information technologies. – 2022. – Vol. 7. – No. 2. – P. 55-62.</mixed-citation></ref>
<ref id="B4"><mixed-citation>Bazarevsky V. et al. BlazeFace: Sub-millisecond neural face detection on mobile GPUs // arXiv preprint arXiv:1907.05047. – 2019.</mixed-citation></ref>
<ref id="B5"><mixed-citation>Burkhardt F. et al. A database of German emotional speech // Interspeech. – 2005. – Vol. 5. – P. 1517-1520.</mixed-citation></ref>
<ref id="B6"><mixed-citation>Chow A., Louie J. Detecting lies via speech patterns. – 2017.</mixed-citation></ref>
<ref id="B7"><mixed-citation>Devyatkin D.A. et al. Intelligent analysis of manifestations of verbal aggressiveness in network community texts // Scientific and Technical Information Processing. – 2014. – Vol. 41. – P. 377-389.</mixed-citation></ref>
<ref id="B8"><mixed-citation>Goupil L. et al. Listeners’ perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature // Nature Communications. – 2021. – Vol. 12. – No. 1. – P. 861.</mixed-citation></ref>
<ref id="B9"><mixed-citation>Gournay P., Lahaie O., Lefebvre R. A Canadian French emotional speech dataset // Proceedings of the 9th ACM Multimedia Systems Conference. – 2018. – P. 399-402.</mixed-citation></ref>
<ref id="B10"><mixed-citation>Korobov M. Morphological analyzer and generator for Russian and Ukrainian languages // Analysis of Images, Social Networks and Texts: 4th International Conference, AIST 2015, Yekaterinburg, Russia, April 9–11, 2015, Revised Selected Papers 4. – Springer International Publishing, 2015. – P. 320-332.</mixed-citation></ref>
<ref id="B11"><mixed-citation>Kossaifi J. et al. SEWA DB: A rich database for audio-visual emotion and sentiment research in the wild // IEEE Transactions on Pattern Analysis and Machine Intelligence. – 2019. – Vol. 43. – No. 3. – P. 1022-1040.</mixed-citation></ref>
<ref id="B12"><mixed-citation>Laushkina A., Smirnov I., Medvedev A., Laptev A., Sinko M. Detecting incongruity in the expression of emotions in short videos based on a multimodal approach // Cybernetics and Physics. – 2022. – P. 210-216.</mixed-citation></ref>
<ref id="B13"><mixed-citation>Luna-Jiménez C. et al. A Proposal for Multimodal Emotion Recognition Using Aural Transformers and Action Units on RAVDESS Dataset // Applied Sciences. – 2021. – Vol. 12. – No. 1. – P. 327.</mixed-citation></ref>
<ref id="B14"><mixed-citation>Marcolla F., de Santiago R., Dazzi R. Novel Lie Speech Classification by using Voice Stress // Proceedings of the 12th International Conference on Agents and Artificial Intelligence. – SCITEPRESS – Science and Technology Publications, 2020. – P. 742–749.</mixed-citation></ref>
<ref id="B15"><mixed-citation>McFee B. et al. librosa: Audio and music signal analysis in Python // Proceedings of the 14th Python in Science Conference. – 2015. – Vol. 8. – P. 18-25.</mixed-citation></ref>
<ref id="B16"><mixed-citation>Oviatt S. et al. (ed.). The Handbook of Multimodal-Multisensor Interfaces: Signal Processing, Architectures, and Detection of Emotion and Cognition. Volume 2. – Association for Computing Machinery and Morgan &amp; Claypool, 2018.</mixed-citation></ref>
<ref id="B17"><mixed-citation>Haq S., Jackson P.J.B. Multimodal Emotion Recognition / ed. Wang W. – IGI Global, 2010. – P. 398–423.</mixed-citation></ref>
<ref id="B18"><mixed-citation>Saeed H.H., Shahzad K., Kamiran F. Overlapping toxic sentiment classification using deep neural architectures // 2018 IEEE International Conference on Data Mining Workshops (ICDMW). – IEEE, 2018. – P. 1361-1366.</mixed-citation></ref>
<ref id="B19"><mixed-citation>Sinko M. et al. Method of constructing and identifying predictive models of human behavior based on information models of non-verbal signals // Procedia Computer Science. – 2022. – Vol. 212. – P. 171-180.</mixed-citation></ref>
<ref id="B20"><mixed-citation>Tenney I., Das D., Pavlick E. BERT rediscovers the classical NLP pipeline // arXiv preprint arXiv:1905.05950. – 2019.</mixed-citation></ref>
<ref id="B21"><mixed-citation>Tsai Y.H.H. et al. Learning factorized multimodal representations // arXiv preprint arXiv:1806.06176. – 2018.</mixed-citation></ref>
<ref id="B22"><mixed-citation>Tsai Y.H.H. et al. Multimodal transformer for unaligned multimodal language sequences // Proceedings of the conference. Association for Computational Linguistics. Meeting. – NIH Public Access, 2019. – Vol. 2019. – P. 6558.</mixed-citation></ref>
<ref id="B23"><mixed-citation>Tzirakis P. et al. End-to-end multimodal emotion recognition using deep neural networks // IEEE Journal of Selected Topics in Signal Processing. – 2017. – Vol. 11. – No. 8. – P. 1301-1309.</mixed-citation></ref>
<ref id="B24"><mixed-citation>The IMF estimated the losses to the world economy from corruption [Electronic resource]. – URL: https://www.rbc.ru/economics/18/09/2017/59bfead89a794704d063c4f0 (accessed: 03.04.2023).</mixed-citation></ref>
<ref id="B25"><mixed-citation>Brief description of the state of crime in the Russian Federation for January-June 2022 [Electronic resource]. – URL: https://xn--b1aew.xn--p1ai/reports/item/31209853/ (accessed: 03.04.2023).</mixed-citation></ref>
<ref id="B26"><mixed-citation>HSE has proposed using artificial intelligence to prevent corruption in the Russian Federation [Electronic resource]. – URL: https://tass.ru/ekonomika/14304599 (accessed: 03.04.2023).</mixed-citation></ref>
<ref id="B27"><mixed-citation>BERT for Sequence Classification (Question vs Statement) [Electronic resource]. – URL: https://sparknlp.org/2021/11/04/bert_sequence_classifier_question_statement_en.html (accessed: 15.04.2023).</mixed-citation></ref>
</ref-list></back></article>