<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20190208//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<article article-type="research-article" dtd-version="1.2" xml:lang="ru" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"><front><journal-meta><journal-id journal-id-type="issn">2518-1092</journal-id><journal-title-group><journal-title>Research result. Information technologies</journal-title></journal-title-group><issn pub-type="epub">2518-1092</issn></journal-meta><article-meta><article-id pub-id-type="doi">10.18413/2518-1092-2022-7-2-0-8</article-id><article-id pub-id-type="publisher-id">2803</article-id><article-categories><subj-group subj-group-type="heading"><subject>COMPUTER SIMULATION</subject></subj-group></article-categories><title-group><article-title>AN ALGORITHM FOR DETECTING NON-VERBAL MARKERS OF HUMAN BEHAVIOR ON VIDEO</article-title><trans-title-group xml:lang="en"><trans-title>AN ALGORITHM FOR DETECTING NON-VERBAL MARKERS OF HUMAN BEHAVIOR ON VIDEO</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Medvedev</surname><given-names>Anatoly Andreevich</given-names></name><name xml:lang="en"><surname>Medvedev</surname><given-names>Anatoly Andreevich</given-names></name></name-alternatives><email>anatolmdvdv@gmail.com</email></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="ru"><surname>Laptev</surname><given-names>Andrey Aleksandrovich</given-names></name><name xml:lang="en"><surname>Laptev</surname><given-names>Andrey Aleksandrovich</given-names></name></name-alternatives><email>nickname.avast@gmail.com</email></contrib></contrib-group><pub-date pub-type="epub"><year>2022</year></pub-date><volume>7</volume><issue>2</issue><fpage>0</fpage><lpage>0</lpage><self-uri content-type="pdf" xlink:href="/media/information/2022/2/НР_ИТ_72-8.pdf" /><abstract xml:lang="ru"><p>Microexpressions are unconscious, short-term non-verbal signals that make it possible to 
determine the emotional state of a person. Microexpressions occur when a person suppresses emotions or hides true intentions. Detecting non-verbal signals becomes an urgent task in situations where lying or withholding information leads to resource or financial losses or affects the safety and health of other people. The spread of online conferences opens up the possibility of programmatically processing the video channel of human speech to analyze emotions and behavior in order to identify the congruence or inconsistency of a person's statements. The article discusses computer vision and machine learning methods that make it possible to extract a person's face from a video channel and analyze it to determine non-verbal markers and emotional state. The facial landmark method (key points of the face), the classification of human emotions from facial landmarks, and the detection of blinking and head turns during speech are considered in detail.</p></abstract><trans-abstract xml:lang="en"><p>Microexpressions are unconscious, short-term non-verbal signals that make it possible to determine the emotional state of a person. Microexpressions occur when a person suppresses emotions or hides true intentions. Detecting non-verbal signals becomes an urgent task in situations where lying or withholding information leads to resource or financial losses or affects the safety and health of other people. The spread of online conferences opens up the possibility of programmatically processing the video channel of human speech to analyze emotions and behavior in order to identify the congruence or inconsistency of a person's statements. The article discusses computer vision and machine learning methods that make it possible to extract a person's face from a video channel and analyze it to determine non-verbal markers and emotional state. The facial landmark method (key points of the face), the classification of human emotions from facial landmarks, and the detection of blinking and head turns during speech are considered in detail.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>non-verbal signals</kwd><kwd>face detection</kwd><kwd>facial landmarks</kwd><kwd>eye tracking</kwd><kwd>emotion classification</kwd><kwd>machine learning</kwd></kwd-group><kwd-group xml:lang="en"><kwd>non-verbal signals</kwd><kwd>face detection</kwd><kwd>facial landmarks</kwd><kwd>eye tracking</kwd><kwd>emotion classification</kwd><kwd>machine learning</kwd></kwd-group></article-meta></front><back /></article>