Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour
April 30, 2019 · Declared Dead · International Joint Conference on Artificial Intelligence
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Andrea Aler Tubella, Andreas Theodorou, Virginia Dignum, Frank Dignum
arXiv ID
1905.04994
Category
cs.OH: Other CS
Cross-listed
cs.AI, cs.CY
Citations
43
Venue
International Joint Conference on Artificial Intelligence
Last Checked
1 month ago
Abstract
Artificial Intelligence (AI) applications are being used to predict and assess behaviour in domains such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to improve people's lives, then people must be able to trust it, which means being able to understand what the system is doing and why. Although transparency is often seen as the requirement here, it may not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluating the moral bounds of an AI system based on monitoring its inputs and outputs. We place a "glass box" around the system by mapping moral values onto explicit, verifiable norms that constrain inputs and outputs, such that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system interprets and employs the relevant abstract moral values, and can calibrate their trust accordingly. Moreover, by operating at this higher level we can check the system's compliance with different interpretations of the same value. These advantages will have an impact on the well-being of AI system users at large, building their trust and providing them with concrete knowledge of how systems adhere to moral values.
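The glass-box idea described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): abstract values are made concrete as checkable norms over inputs and outputs, and the wrapped system itself is treated as an opaque black box. All class and function names here (`Norm`, `GlassBox`, the toy scoring model) are invented for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Norm:
    """A concrete, verifiable interpretation of an abstract moral value."""
    value: str                       # the abstract value, e.g. "privacy"
    description: str                 # the concrete norm in plain language
    predicate: Callable[[Any], bool] # returns True when the norm is satisfied

class GlassBox:
    """Wraps an arbitrary system and checks its inputs/outputs against norms.

    The system's internals are never inspected; only its I/O is monitored,
    so the same wrapper applies to a neural network or an agent-based system.
    """
    def __init__(self, system: Callable[[Any], Any],
                 input_norms: List[Norm], output_norms: List[Norm]):
        self.system = system
        self.input_norms = input_norms
        self.output_norms = output_norms
        self.violations: List[str] = []

    def __call__(self, x: Any) -> Any:
        for n in self.input_norms:
            if not n.predicate(x):
                self.violations.append(f"input violates {n.value}: {n.description}")
        y = self.system(x)
        for n in self.output_norms:
            if not n.predicate(y):
                self.violations.append(f"output violates {n.value}: {n.description}")
        return y

# Toy example: a credit-scoring model must not see a protected attribute,
# and must return a score in [0, 1].
no_protected = Norm("privacy", "input must not include 'gender'",
                    lambda x: "gender" not in x)
bounded_score = Norm("accountability", "score must lie in [0, 1]",
                     lambda y: 0.0 <= y <= 1.0)

model = lambda applicant: min(1.0, applicant.get("income", 0) / 100_000)
boxed = GlassBox(model, [no_protected], [bounded_score])

score = boxed({"income": 50_000})
print(score, boxed.violations)  # 0.5 []
```

Because the norms are explicit objects with human-readable descriptions, stakeholders can read exactly how each abstract value has been interpreted, and swapping in a different `Norm` list checks the same system against a different interpretation of the same value.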
Similar Papers
In the same crypt: Other CS
DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car (Ghosted)
Pragmatic inference and visual abstraction enable contextual flexibility during visual communication (Ghosted)
Design and Implementation of a Novel Compatible Encoding Scheme in the Time Domain for Image Sensor Communication (Ghosted)
Detecting Plagiarism based on the Creation Process (Ghosted)
automan: a simple, Python-based, automation framework for numerical computing (Ghosted)
Died the same way: Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System