Deception Detection in Videos

December 12, 2017 · Entered Twilight · 🏛 AAAI Conference on Artificial Intelligence

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"No code URL or promise found in abstract"
"Derived repo from GitHub Pages (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, _config.yml, auc_fun.m, build_gesture_classifer.m, finalexp.m, index.md, learn_CNNFV_gesture.m, test_GTgestureSearch.m, test_MFCC.m, test_Trans.m, test_dtfv_CV.m, test_gestureSearch.m, video_list.txt

Authors: Zhe Wu, Bharat Singh, Larry S. Davis, V. S. Subrahmanian
arXiv ID: 1712.04415
Category: cs.AI: Artificial Intelligence
Cross-listed: cs.CV
Citations: 126
Venue: AAAI Conference on Artificial Intelligence
Repository: https://github.com/doubaibai/DARE ⭐ 100
Last Checked: 7 days ago
Abstract
We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the score of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects which were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5%. When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present results of a user study to analyze how well average humans perform on this task, what modalities they use for deception detection, and how they perform if only one modality is accessible. Our project page can be found at https://doubaibai.github.io/DARE/.
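The abstract describes late fusion: per-modality classifier scores (IDT video features, predicted micro-expressions, audio MFCCs) are combined, and the fused score is evaluated by AUC. The sketch below illustrates that idea only; the averaging fusion, the `auc` and `fuse` helpers, and all numbers are illustrative assumptions, not the paper's actual pipeline or data.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formula:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample of each class")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def fuse(*modality_scores):
    """Simple late fusion: average each sample's scores across modalities."""
    return [sum(s) / len(s) for s in zip(*modality_scores)]


# Toy example: three modalities scoring four videos (label 1 = deceptive).
idt_scores   = [0.9, 0.2, 0.7, 0.4]  # e.g. IDT-based classifier
micro_scores = [0.8, 0.3, 0.6, 0.5]  # e.g. micro-expression classifier
mfcc_scores  = [0.7, 0.1, 0.8, 0.3]  # e.g. MFCC audio classifier
labels = [1, 0, 1, 0]

fused = fuse(idt_scores, micro_scores, mfcc_scores)
print(auc(fused, labels))  # -> 1.0 on this toy data
```

In practice a weighted combination or a second-stage classifier over the scores is common; plain averaging is just the simplest instance of score-level fusion.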
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Artificial Intelligence