Farsight: Fostering Responsible AI Awareness During AI Application Prototyping

February 23, 2024 · Entered Twilight · 🏛 International Conference on Human Factors in Computing Systems

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .eslintrc.cjs, .fttemplates, .gitattributes, .github, .gitignore, .npmignore, .prettierrc, CONTRIBUTING.md, LICENSE, README.md, chrome-extension, index.html, lite, notebook-widget, notebooks, package.json, public, scss.d.ts, signal, src, tsconfig.json, vite.config.ts

Authors: Zijie J. Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, Michael Madaio
arXiv ID: 2402.15350
Category: cs.HC (Human-Computer Interaction)
Cross-listed: cs.AI, cs.CY, cs.LG
Citations: 74
Venue: International Conference on Human Factors in Computing Systems
Repository: https://github.com/PAIR-code/farsight ⭐ 28
Last Checked: 1 month ago
Abstract
Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://PAIR-code.github.io/farsight.
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Human-Computer Interaction