<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Научный результат. Информационные технологии</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2023-8-4-0-6</article-id><article-id pub-id-type="publisher-id">3302</article-id><article-categories><subj-group subj-group-type="heading"><subject>ИСКУССТВЕННЫЙ ИНТЕЛЛЕКТ И ПРИНЯТИЕ РЕШЕНИЙ</subject></subj-group></article-categories><title-group><article-title>РАЗРАБОТКА МЕТОДОВ МАШИННОГО ОБУЧЕНИЯ И БИБЛИОТЕКИ ИНТЕРПРЕТИРУЕМОГО ПРЕДСКАЗАТЕЛЬНОГО МОДЕЛИРОВАНИЯ ПОВЕДЕНИЯ ЧЕЛОВЕКА В ПРОЦЕССЕ ЕГО ОНЛАЙН-ПРОФАЙЛИНГА</article-title><trans-title-group xml:lang="en"><trans-title>DEVELOPMENT OF MACHINE LEARNING METHODS AND A LIBRARY OF INTERPRETABLE PREDICTIVE MODELING OF HUMAN BEHAVIOR DURING HIS ONLINE PROFILING</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Смирнов</surname><given-names>Иван Захарович</given-names></name><name xml:lang="en"><surname>Smirnov</surname><given-names>Ivan Zakharovich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Медведев</surname><given-names>Анатолий Андреевич</given-names></name><name xml:lang="en"><surname>Medvedev</surname><given-names>Anatoly Andreevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Самигулин</surname><given-names>Тимур Русланович</given-names></name><name xml:lang="en"><surname>Samigulin</surname><given-names>Timur Ruslanovich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Комарова</surname><given-names>Алёна Алексеевна</given-names></name><name xml:lang="en"><surname>Komarova</surname><given-names>Alena Alekseevna</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Тимощук-Бондарь</surname><given-names>Артём Игоревич</given-names></name><name xml:lang="en"><surname>Timoshchuk-Bondar</surname><given-names>Artyom Igorevich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Синько</surname><given-names>Михаил Витальевич</given-names></name><name xml:lang="en"><surname>Sinko</surname><given-names>Mikhail Vitalievich</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Лаушкина</surname><given-names>Анастасия Александров</given-names></name><name xml:lang="en"><surname>Laushkina</surname><given-names>Anastasia
Alexandrov</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Гофман</surname><given-names>Ольга Олеговна</given-names></name><name xml:lang="en"><surname>Goffman</surname><given-names>Olga Olegovna</given-names></name></name-alternatives></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Басов</surname><given-names>Олег Олегович</given-names></name><name xml:lang="en"><surname>Basov</surname><given-names>Oleg Olegovich</given-names></name></name-alternatives><email>oobasov@mail.ru</email></contrib></contrib-group><pub-date pub-type="epub"><year>2023</year></pub-date><volume>8</volume><issue>4</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2023/4/ИТ_НР_8.4_6.pdf" /><abstract xml:lang="ru"><p>Изучение индивидуально-психологических особенностей людей имеет важное значение в областях: образовании, менеджменте и управлении, обеспечении безопасности человека и сообществ. В решении задачи определения и анализа личностных особенностей существуют различные инструменты, однако они имеют ряд ограничений. Мы представляем решение, которое извлекает и с использованием машинного обучения анализирует признаки лица и речи человека из видеоряда, применимое для исследования восьми различных индивидуально-психологических характеристик в задаче цифрового онлайн-профайлинга. Пользователю предлагается использовать разработанную библиотеку Expert для получения новых характеристик путем применения и комбинации существующих ML-модулей для решения широкого класса задач.</p></abstract><trans-abstract xml:lang="en"><p>The study of individual psychological characteristics of people is important in the areas of education, management and administration, ensuring the safety of individuals and communities. 
There are various tools for determining and analyzing personal characteristics, but they have a number of limitations. We present a solution that extracts human facial and speech features from video footage and analyzes them with machine learning; it is applicable to the study of eight different individual psychological characteristics in an online digital profiling task. Users can employ the developed Expert library to obtain new characteristics by applying and combining the existing ML modules to solve a wide class of problems.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>машинное обучение</kwd><kwd>open source</kwd><kwd>мультимодальный анализ</kwd><kwd>вербальные и невербальные признаки</kwd></kwd-group><kwd-group xml:lang="en"><kwd>machine learning</kwd><kwd>open source</kwd><kwd>multimodal analysis</kwd><kwd>verbal and non-verbal cues</kwd></kwd-group></article-meta></front><back><ack><p>Исследование выполнено при финансовой поддержке Российского научного фонда, соглашение № 22-21-00604.</p></ack><ref-list><title>Список литературы</title><ref id="B1"><mixed-citation>Goupil L., Ponsot E., Richardson D. et al. Listeners&amp;rsquo; perceptions of the certainty and honesty of a speaker are associated with a common prosodic signature // Nat Commun. 2021. &amp;ndash; №12.</mixed-citation></ref><ref id="B2"><mixed-citation>Teixeira J. P., Oliveira C., Lopes C. Vocal Acoustic Analysis &amp;ndash; Jitter, Shimmer and HNR Parameters // Procedia Technology. &amp;ndash; 2013. &amp;ndash; V. 9. &amp;ndash; P. 1112-1122.</mixed-citation></ref><ref id="B3"><mixed-citation>Kirillov S., Lukyanov D. Evaluation of psycho-emotional status of robotic system operator in the Arctic // IOP Conference Series: Earth and Environmental Science. &amp;ndash; 2019. &amp;ndash; № 302.</mixed-citation></ref><ref id="B4"><mixed-citation>Rammstedt B., Danner D., Lechner C.
Personality, competencies, and life outcomes: results from the German PIAAC longitudinal study // Large-scale Assess Educ 5. &amp;ndash; 2017. &amp;ndash; №2.</mixed-citation></ref><ref id="B5"><mixed-citation>Anbesaw T., Zenebe Y., Asmamaw A., et al. Post-traumatic stress disorder and associated factors among people who experienced traumatic events in Dessie town, Ethiopia, 2022: A community based study // Frontiers in Psychiatry. 2022. &amp;ndash; №13.</mixed-citation></ref><ref id="B6"><mixed-citation>Reeve D. Psycho-Emotional Disablism: The Missing Link? // Routledge Handbook of Disability Studies. 1st ed., Chapter 7. 2012.</mixed-citation></ref><ref id="B7"><mixed-citation>Le Duc T., Huynh S., Vu T., et al. Personality Traits and Aggressive Behavior in Vietnamese Adolescents // Psychology Research and Behavior Management. &amp;ndash; 2023. &amp;ndash; №16. &amp;ndash; P. 1987-2003.</mixed-citation></ref><ref id="B8"><mixed-citation>Cheng S., Dawson J., Thamby J., et al. How do aggression source, employee characteristics and organisational response impact the relationship between workplace aggression and work and health outcomes in healthcare employees? A cross-sectional analysis of the National Health Service staff survey in England // BMJ Open. 2020. &amp;ndash; №10(8).</mixed-citation></ref><ref id="B9"><mixed-citation>Соколова М.С. Адаптация к собеседнику как составляющая позитивной коммуникации: конститутивные признаки // Актуальные проблемы филологии и педагогической лингвистики. 2017. &amp;ndash; №2(26).</mixed-citation></ref><ref id="B10"><mixed-citation>Данилин М. В. Методика обучения аудированию в условиях мультимодальной коммуникации с использованием аутентичных аудиовидеоматериалов (английский язык, среднее общее образование): дис. канд. пед. наук: 5.8.2. &amp;ndash; М., 2021. &amp;ndash; 173 с.</mixed-citation></ref><ref id="B11"><mixed-citation>Зобков В.А.
Уверенность человека в себе в ситуациях принятия решения // Вестник Костромского государственного университета. Серия: Педагогика. Психология. Социокинетика. 2018. &amp;ndash; №2.</mixed-citation></ref><ref id="B12"><mixed-citation>Ромек В.Г. Уверенность в себе как социально-психологическая характеристика личности: автореф. дис. канд. соц. псих. наук: 19.00.05. &amp;ndash; Ростов-на-Дону, 1997. &amp;ndash; 12 с.</mixed-citation></ref><ref id="B13"><mixed-citation>Кашапова Э.Р., Рыжкова М. В. Когнитивные искажения и их влияние на поведение индивида // Вестн. Том. гос. ун-та. Экономика. 2015. &amp;ndash; №2(30).</mixed-citation></ref><ref id="B14"><mixed-citation>Aneri R., Sonali J. Emotion Based Hate Speech Detection using Multimodal Learning // arXiv Computation and Language. 2022.</mixed-citation></ref><ref id="B15"><mixed-citation>Jianyuan G., Kai H., Han W., et al. CMT: Convolutional Neural Networks Meet Vision Transformers // Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2022. &amp;ndash; P. 12175-12185.</mixed-citation></ref><ref id="B16"><mixed-citation>EmoPy: a machine learning toolkit for emotional expression // thoughtworks URL: https://www.thoughtworks.com/ (дата обращения: 13.07.2023).</mixed-citation></ref><ref id="B17"><mixed-citation>Camillo L., Jiuqiang T., Hadon N., et al. EmoPy: a machine learning toolkit for emotional expression // arXiv Distributed, Parallel, and Cluster Computing. 2019.</mixed-citation></ref><ref id="B18"><mixed-citation>TextBlob: Simplified Text Processing // TextBlob URL: https://textblob.readthedocs.io/en/dev/ (дата обращения: 13.07.2023).</mixed-citation></ref><ref id="B19"><mixed-citation>Razzaq M.A., Hussain J., Bang J., et al. A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions // Sensors. &amp;ndash; 2023.
&amp;ndash; №23(9).</mixed-citation></ref><ref id="B20"><mixed-citation>Detoxify // github URL: https://github.com/unitaryai/detoxify (дата обращения: 13.07.2023).</mixed-citation></ref><ref id="B21"><mixed-citation>Boersma P., Van Heuven V. Speak and unSpeak with PRAAT // Glot International. &amp;ndash; 2001. &amp;ndash; V. 5. &amp;ndash; №9/10. &amp;ndash; P. 341-347.</mixed-citation></ref><ref id="B22"><mixed-citation>Gedas B., Heng W., Lorenzo T. Is Space-Time Attention All You Need for Video Understanding? // arXiv Computer Vision and Pattern Recognition. 2021.</mixed-citation></ref><ref id="B23"><mixed-citation>Grishchenko I., Ablavatski A., Kartynnik Y., et al. Attention Mesh: High-fidelity Face Mesh Prediction in Real-time // arXiv Computer Vision and Pattern Recognition. 2022.</mixed-citation></ref><ref id="B24"><mixed-citation>Samigulin T.R., Smirnov I.Z., Laushkina A.A. Determination of markers of aggressive human behavior based on analysis of audio and text channels // Scientific result. Information Technology. &amp;ndash; 2022. &amp;ndash; V. 7. &amp;ndash; №2. &amp;ndash; P. 56-61.</mixed-citation></ref><ref id="B25"><mixed-citation>Jacob D., Ming-Wei C., Kenton L., Kristina T. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding // arXiv preprint arXiv:1810.04805. 2019.</mixed-citation></ref><ref id="B26"><mixed-citation>Wen Z., Lin W., Wang T., Xu G. Distract Your Attention: Multi-Head Cross Attention Network for Facial Expression Recognition // Biomimetics 8. 2023. &amp;ndash; №2. &amp;ndash; P. 199.</mixed-citation></ref><ref id="B27"><mixed-citation>Peng Z., Lu Y., Pan S., Liu Y. Efficient Speech Emotion Recognition Using Multi-Scale CNN and Attention // ICASSP 2021 &amp;ndash; 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). &amp;ndash; 2021. &amp;ndash; P. 3020-3024.</mixed-citation></ref><ref id="B28"><mixed-citation>Amit G., Noura Al M., Steven B.
ExBERT: An External Knowledge Enhanced BERT for Natural Language Inference // arXiv Computation and Language. 2021.</mixed-citation></ref><ref id="B29"><mixed-citation>Shickel B., Scott S., Martin H., et al. Automatic Detection and Classification of Cognitive Distortions in Mental Health Text // IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE). 2019.</mixed-citation></ref><ref id="B30"><mixed-citation>Xuejiao Z., Chunyan M., Zhenchang X. Identifying Cognitive Distortion by Convolutional Neural Network based Text Classification // International Journal of Information Technology. 2017. &amp;ndash; №23.</mixed-citation></ref><ref id="B31"><mixed-citation>Simms T., Ramstedt C., Rich M., et al. Detecting Cognitive Distortions Through Machine Learning Text Analytics // 2017 IEEE International Conference on Healthcare Informatics (ICHI). &amp;ndash; 2017. &amp;ndash; P. 508-512.</mixed-citation></ref><ref id="B32"><mixed-citation>Beck A. Cognitive therapy and the emotional disorders // New York: New American Library. 1979. &amp;ndash; P. 374.</mixed-citation></ref><ref id="B33"><mixed-citation>Breiman L. Random Forests // Machine Learning. 2001. &amp;ndash; №45. &amp;ndash; P. 5&amp;ndash;32.</mixed-citation></ref><ref id="B34"><mixed-citation>Ferracane E., Durrett G., Li J., et al. Did they answer? Subjective acts and intents in conversational discourse // Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021. &amp;ndash; P. 1626&amp;ndash;1644.</mixed-citation></ref></ref-list></back></article>