Improving fairness in machine learning systems: What do industry practitioners need?

December 13, 2018 · Declared Dead · 🏛 International Conference on Human Factors in Computing Systems

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner
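The scanner's verdict above ("No code URL or promise found in abstract") suggests a simple text check over the abstract. The PWNC Scanner's actual implementation is not shown here, so the following is only a hypothetical sketch of such a check; the regex patterns and the `is_ghosted` function name are illustrative assumptions, not the scanner's real code.

```python
import re

# Hypothetical sketch of a "ghosted" check: flag a paper when its abstract
# contains neither a code-hosting URL nor a promise to release code.
# (Illustrative patterns only; the real PWNC Scanner may differ.)
CODE_URL = re.compile(
    r"https?://(?:www\.)?(?:github\.com|gitlab\.com|bitbucket\.org)/\S+",
    re.IGNORECASE,
)
CODE_PROMISE = re.compile(
    r"\b(?:code|implementation)\s+(?:is|will be)\s+(?:available|released)\b",
    re.IGNORECASE,
)

def is_ghosted(abstract: str) -> bool:
    """Return True when no code link or code-release promise is found."""
    return not (CODE_URL.search(abstract) or CODE_PROMISE.search(abstract))
```

Run against this paper's abstract, which mentions interviews and surveys but no repository, such a check would report it as ghosted; an abstract containing, say, a github.com link would pass.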

Authors: Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miro Dudík, Hanna Wallach
arXiv ID: 1812.05239
Category: cs.HC (Human-Computer Interaction)
Cross-listed: cs.CY, cs.LG, cs.SE
Citations: 919
Venue: International Conference on Human Factors in Computing Systems
Last Checked: 1 month ago
Abstract
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Human-Computer Interaction

Died the same way – 👻 Ghosted