R.I.P.
👻
Ghosted
Evaluating Superhuman Models with Consistency Checks
June 16, 2023 · Entered Twilight · 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Repo contents: LICENSE, README.md, chess-ai-testing, docs, legal-ai-testing, llm-testing
Authors
Lukas Fluri, Daniel Paleka, Florian Tramèr
arXiv ID
2306.09983
Category
cs.LG: Machine Learning
Cross-listed
cs.AI,
cs.CR,
stat.ML
Citations
49
Venue
2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Repository
https://github.com/ethz-spylab/superhuman-ai-consistency
★ 30
Last Checked
1 month ago
Abstract
If machine learning models were to achieve superhuman abilities at various reasoning or decision-making tasks, how would we go about evaluating such models, given that humans would necessarily be poor proxies for ground truth? In this paper, we propose a framework for evaluating superhuman models via consistency checks. Our premise is that while the correctness of superhuman decisions may be impossible to evaluate, we can still surface mistakes if the model's decisions fail to satisfy certain logical, human-interpretable rules. We instantiate our framework on three tasks where correctness of decisions is hard to evaluate due to either superhuman model abilities, or to otherwise missing ground truth: evaluating chess positions, forecasting future events, and making legal judgments. We show that regardless of a model's (possibly superhuman) performance on these tasks, we can discover logical inconsistencies in decision making. For example: a chess engine assigning opposing valuations to semantically identical boards; GPT-4 forecasting that sports records will evolve non-monotonically over time; or an AI judge assigning bail to a defendant only after we add a felony to their criminal record.
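The abstract's forecasting example can be made concrete with a minimal sketch (hypothetical model interface, not the paper's code): a forecaster's predictions of a cumulative sports record must be non-decreasing over time, so any pair of years where the later prediction is lower is a logical inconsistency we can surface without knowing the true future values.

```python
# Minimal consistency check in the paper's spirit: a cumulative record
# (e.g., "most career goals by any player as of year Y") can never shrink,
# so predictions for later years must be >= predictions for earlier years.
# `forecast` is a hypothetical stand-in for querying a model.

def monotonicity_violations(forecast, years):
    """Return (earlier, later) year pairs where the later record is predicted lower."""
    preds = {y: forecast(y) for y in years}
    return [
        (a, b)
        for a in years
        for b in years
        if a < b and preds[a] > preds[b]  # record decreased: inconsistent
    ]

# Toy stand-in model that is inconsistent between 2030 and 2040.
toy_model = {2020: 900, 2030: 950, 2040: 940}.get
print(monotonicity_violations(toy_model, [2020, 2030, 2040]))
# -> [(2030, 2040)]
```

Note that the check never needs ground truth: it only compares the model's own answers against each other, which is exactly what makes it usable when the model may be superhuman.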
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Semi-Supervised Classification with Graph Convolutional Networks
Proximal Policy Optimization Algorithms