TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data

January 13, 2020 · Entered Twilight · 🏛️ International Conference on Human Factors in Computing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, assets, config.json, data-analysis, demo_experiment_images, demo_face_sentinels.gif, generate-experiment-files, index.html

Authors: Anelise Newman, Barry McNamara, Camilo Fosco, Yun Bin Zhang, Pat Sukhum, Matthew Tancik, Nam Wook Kim, Zoya Bylinskii
arXiv ID: 2001.04461
Category: cs.HC (Human-Computer Interaction)
Citations: 20
Venue: International Conference on Human Factors in Computing Systems
Repository: https://github.com/turkeyes/codecharts-ui (⭐ 7)
Last Checked: 7 days ago
Abstract
Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention. We draw from four main classes of attention-capturing methodologies in the literature. ZoomMaps is a novel "zoom-based" interface that captures viewing on a mobile phone. CodeCharts is a "self-reporting" methodology that records points of interest at precise viewing durations. ImportAnnots is an "annotation" tool for selecting important image regions, and "cursor-based" BubbleView lets viewers click to deblur a small area. We compare these methodologies using a common analysis framework in order to develop appropriate use cases for each interface. This toolbox and our analyses provide a blueprint for how to gather attention data at scale without an eye tracker.
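The abstract's "cursor-based" BubbleView idea, where the image is shown blurred and each click reveals a small sharp region, can be illustrated with a minimal sketch. This is not the toolbox's actual implementation (the real interface is a web UI); the blur kernel, bubble radius, and grayscale list-of-lists image format here are illustrative assumptions.

```python
import math

def box_blur(img, radius=1):
    """Naive box blur over a 2D grayscale image given as a list of lists.
    Stands in for the stronger blur a BubbleView-style interface applies."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def bubble_reveal(original, blurred, cx, cy, r):
    """Return the blurred image with a sharp circular 'bubble' of radius r
    centered on the clicked point (cx, cy) -- the core BubbleView interaction."""
    h, w = len(original), len(original[0])
    return [[original[y][x] if math.hypot(x - cx, y - cy) <= r else blurred[y][x]
             for x in range(w)]
            for y in range(h)]

# Tiny demo: a single bright pixel stays sharp inside the bubble,
# while everything outside the click radius remains blurred.
img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 1.0
blurred = box_blur(img, radius=1)
revealed = bubble_reveal(img, blurred, cx=4, cy=4, r=2)
```

Logging the sequence of `(cx, cy)` click coordinates then yields the clickmap that the paper's common analysis framework compares against eye-tracking fixations.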
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Human-Computer Interaction