A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities
September 19, 2019 · Entered Twilight · International Conference on Multimodal Interaction
"Last commit was 6.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: AvatarControl, README.md, imgs, videos
Authors
Deepali Aneja, Daniel McDuff, Shital Shah
arXiv ID
1909.08766
Category
cs.HC: Human-Computer Interaction
Cross-listed
cs.AI,
cs.CV,
cs.GR
Citations
39
Venue
International Conference on Multimodal Interaction
Repository
https://github.com/danmcduff/AvatarSim
⭐ 77
Last Checked
1 month ago
Abstract
Embodied avatars as virtual agents have many applications and provide benefits over disembodied agents: they allow non-verbal social and interactional cues to be leveraged, much as in human-to-human interaction. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip syncing (phoneme control), head gesture, and facial expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub: https://github.com/danmcduff/AvatarSim
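As a rough illustration of the puppet-style control the abstract describes, the sketch below composes a single avatar control message from facial action units (FACS AUs), a phoneme, and a head pose. All names here (`AvatarCommand`, the key=value wire format) are hypothetical; the actual AvatarSim Python interface may differ.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class AvatarCommand:
    """Hypothetical control message for an embodied avatar.

    Action units follow the Facial Action Coding System, e.g.
    AU6 = cheek raiser, AU12 = lip-corner puller (together, a smile);
    intensities are normalized to [0, 1].
    """
    action_units: Dict[int, float] = field(default_factory=dict)
    phoneme: Optional[str] = None                 # current phoneme/viseme
    head_pose: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # pitch, yaw, roll (deg)

    def serialize(self) -> str:
        """Flatten the command into a simple key=value wire format."""
        parts = [f"AU{k}={v:.2f}" for k, v in sorted(self.action_units.items())]
        if self.phoneme:
            parts.append(f"phoneme={self.phoneme}")
        parts.append("head=%.1f,%.1f,%.1f" % self.head_pose)
        return ";".join(parts)

# Example: a smile (AU6 + AU12) while mouthing the phoneme "AA"
cmd = AvatarCommand(action_units={6: 0.8, 12: 1.0}, phoneme="AA")
print(cmd.serialize())
# -> AU6=0.80;AU12=1.00;phoneme=AA;head=0.0,0.0,0.0
```

A conversational agent built on public speech APIs, as the paper suggests, would emit a stream of such commands: one per phoneme from a text-to-speech engine, blended with expression commands from a dialogue or emotion model.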
Similar Papers
In the same crypt: Human-Computer Interaction
Improving fairness in machine learning systems: What do industry practitioners need?
R.I.P. 👻 Ghosted
Identifying Stable Patterns over Time for Emotion Recognition from EEG
R.I.P. 👻 Ghosted
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
R.I.P. 👻 Ghosted
Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities
R.I.P. 👻 Ghosted