A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities

September 19, 2019 · Entered Twilight · 🏛 International Conference on Multimodal Interaction

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: AvatarControl, README.md, imgs, videos

Authors: Deepali Aneja, Daniel McDuff, Shital Shah
arXiv ID: 1909.08766
Category: cs.HC: Human-Computer Interaction
Cross-listed: cs.AI, cs.CV, cs.GR
Citations: 39
Venue: International Conference on Multimodal Interaction
Repository: https://github.com/danmcduff/AvatarSim ⭐ 77
Last Checked: 1 month ago
Abstract
Embodied avatars as virtual agents have many applications and provide benefits over disembodied agents, allowing non-verbal social and interactional cues to be leveraged in a similar manner to how humans interact with each other. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip syncing (phoneme control), head gesture, and facial expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub: https://github.com/danmcduff/AvatarSim
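
To make the "controlled like a puppet" idea concrete, here is a minimal Python sketch of what such an interface could look like. It is a hypothetical illustration, not the AvatarSim repository's actual API: the `AvatarController` class, the JSON-over-TCP message schema, the port, and the method names are all assumptions standing in for whatever transport the repo's AvatarControl module really uses. The FACS action-unit numbers and ARPAbet phonemes are standard, though.

```python
import json
import socket
from dataclasses import dataclass


@dataclass
class AvatarController:
    """Hypothetical puppet-style controller that sends JSON commands to a
    running avatar process over TCP. The message schema is illustrative,
    not the AvatarSim repository's documented protocol."""
    host: str = "127.0.0.1"
    port: int = 5005  # assumed port for the avatar's control listener

    def _send(self, message: dict) -> None:
        # One short-lived connection per command keeps the sketch simple.
        with socket.create_connection((self.host, self.port)) as sock:
            sock.sendall(json.dumps(message).encode("utf-8") + b"\n")

    def set_action_unit(self, au: int, intensity: float) -> None:
        """Drive a single facial action unit at intensity 0.0-1.0,
        e.g. AU12 (lip corner puller) for a smile."""
        self._send({"type": "au", "id": au,
                    "value": max(0.0, min(1.0, intensity))})

    def set_emotion(self, emotion: str, intensity: float = 1.0) -> None:
        """Select a cardinal emotion preset, e.g. 'joy' or 'anger'."""
        self._send({"type": "emotion", "name": emotion, "value": intensity})

    def speak_phonemes(self, phonemes: list[tuple[str, float]]) -> None:
        """Lip sync: a timed sequence of (phoneme, duration_in_seconds)
        pairs that the avatar maps to mouth shapes."""
        self._send({"type": "phonemes", "sequence": phonemes})


if __name__ == "__main__":
    avatar = AvatarController()
    avatar.set_emotion("joy", 0.8)      # expression via an emotion category
    avatar.set_action_unit(1, 0.5)      # or directly: AU1, inner brow raiser
    # Lip-sync "hello" as ARPAbet phonemes with per-phoneme timings.
    avatar.speak_phonemes([("HH", 0.08), ("AH", 0.12),
                           ("L", 0.07), ("OW", 0.15)])
```

A conversational agent in the abstract's sense would wrap the same controller: a public speech and dialogue API produces text plus phoneme timings, which are then forwarded to the avatar process.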
