<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2023-8-4-0-6</article-id><article-id pub-id-type="publisher-id">3302</article-id><article-categories><subj-group subj-group-type="heading"><subject>ARTIFICIAL INTELLIGENCE AND DECISION MAKING</subject></subj-group></article-categories><title-group><article-title>DEVELOPMENT OF MACHINE LEARNING METHODS AND A LIBRARY FOR INTERPRETABLE PREDICTIVE MODELING OF HUMAN BEHAVIOR DURING ONLINE PROFILING
</article-title><trans-title-group xml:lang="en"><trans-title>DEVELOPMENT OF MACHINE LEARNING METHODS AND A LIBRARY FOR INTERPRETABLE PREDICTIVE MODELING OF HUMAN BEHAVIOR DURING ONLINE PROFILING
</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Smirnov</surname><given-names>Ivan Zakharovich</given-names></name><name xml:lang="en"><surname>Smirnov</surname><given-names>Ivan Zakharovich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Medvedev</surname><given-names>Anatoly Andreevich</given-names></name><name xml:lang="en"><surname>Medvedev</surname><given-names>Anatoly Andreevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Samigulin</surname><given-names>Timur Ruslanovich</given-names></name><name xml:lang="en"><surname>Samigulin</surname><given-names>Timur Ruslanovich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Komarova</surname><given-names>Alena Alekseevna</given-names></name><name xml:lang="en"><surname>Komarova</surname><given-names>Alena Alekseevna</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Timoshchuk-Bondar</surname><given-names>Artyom Igorevich</given-names></name><name xml:lang="en"><surname>Timoshchuk-Bondar</surname><given-names>Artyom Igorevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Sinko</surname><given-names>Mikhail Vitalievich</given-names></name><name xml:lang="en"><surname>Sinko</surname><given-names>Mikhail Vitalievich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Laushkina</surname><given-names>Anastasia Alexandrov</given-names></name><name xml:lang="en"><surname>Laushkina</surname><given-names>Anastasia
Alexandrov</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Goffman</surname><given-names>Olga Olegovna</given-names></name><name xml:lang="en"><surname>Goffman</surname><given-names>Olga Olegovna</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name><name xml:lang="en"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name></name-alternatives><email>oobasov@mail.ru</email></contrib></contrib-group><pub-date pub-type="epub"><year>2023</year></pub-date><volume>8</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2023/4/ИТ_НР_8.4_6.pdf" /><abstract xml:lang="ru"><p>The study of individual psychological characteristics is important in education, management and administration, and in ensuring the safety of individuals and communities. Various tools exist for determining and analyzing personal characteristics, but they have a number of limitations. We present a solution that uses machine learning to extract and analyze facial and speech features from video recordings; it supports the study of eight individual psychological characteristics in an online digital profiling task. Users can apply the developed Expert library to derive new characteristics by combining its existing ML modules, covering a wide class of problems.</p></abstract><trans-abstract xml:lang="en"><p>The study of individual psychological characteristics is important in education, management and administration, and in ensuring the safety of individuals and communities. Various tools exist for determining and analyzing personal characteristics, but they have a number of limitations. 
We present a solution that uses machine learning to extract and analyze facial and speech features from video recordings; it supports the study of eight individual psychological characteristics in an online digital profiling task. Users can apply the developed Expert library to derive new characteristics by combining its existing ML modules, covering a wide class of problems.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>machine learning</kwd><kwd>open source</kwd><kwd>multimodal analysis</kwd><kwd>verbal and non-verbal signs</kwd></kwd-group><kwd-group xml:lang="en"><kwd>machine learning</kwd><kwd>open source</kwd><kwd>multimodal analysis</kwd><kwd>verbal and non-verbal signs</kwd></kwd-group></article-meta></front><back><ack><p>The research was carried out with the financial support of the Russian Science Foundation, Agreement No. 22-21-00604.</p></ack></back></article>