An Unethical Optimization Principle

November 12, 2019 · Declared Dead · 🏛 Royal Society Open Science

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Nicholas Beale, Heather Battey, Anthony C. Davison, Robert S. MacKay
arXiv ID: 1911.05116
Category: q-fin.RM
Cross-listed: cs.LG, stat.ML
Citations: 7
Venue: Royal Society Open Science
Last Checked: 1 month ago
Abstract
If an artificial intelligence aims to maximise risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion $\eta$ of available unethical strategies is small, the probability ${p_U}$ of picking an unethical strategy can become large; indeed, unless returns are fat-tailed, ${p_U}$ tends to unity as the strategy space becomes large. We define an Unethical Odds Ratio Upsilon ($\Upsilon$) that allows us to calculate ${p_U}$ from $\eta$, and we derive a simple formula for the limit of $\Upsilon$ as the strategy space becomes large. We give an algorithm for estimating $\Upsilon$ and ${p_U}$ in finite cases and discuss how to deal with infinite strategy spaces. We show how this principle can be used to help detect unethical strategies and to estimate $\eta$. Finally we sketch some policy implications of this work.
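The abstract's claim can be illustrated with a short Monte Carlo sketch. This is not the paper's algorithm: the model below (standard-normal returns, with unethical strategies receiving a hypothetical additive edge `delta`) is an assumption chosen purely for illustration. The odds ratio $\Upsilon$ is computed in the natural way as the odds of picking an unethical strategy divided by the odds of one being present, i.e. $\Upsilon = \frac{p_U/(1-p_U)}{\eta/(1-\eta)}$.

```python
import numpy as np

def estimate_pU(n_strategies=1000, eta=0.01, delta=0.5,
                trials=10_000, seed=0):
    """Monte Carlo estimate of p_U: the probability that the
    return-maximising strategy is unethical.

    Hypothetical toy model (not from the paper): each strategy's
    risk-adjusted return is N(0, 1); a fraction `eta` of strategies
    is unethical and receives an extra edge `delta`.
    """
    rng = np.random.default_rng(seed)
    n_unethical = int(round(eta * n_strategies))
    hits = 0
    for _ in range(trials):
        returns = rng.standard_normal(n_strategies)
        returns[:n_unethical] += delta  # unethical edge
        # The optimiser blindly picks the best return; count how
        # often that pick falls in the unethical block.
        if np.argmax(returns) < n_unethical:
            hits += 1
    return hits / trials

def odds_ratio(p_u, eta):
    """Unethical Odds Ratio: odds(p_U) divided by odds(eta)."""
    return (p_u / (1 - p_u)) / (eta / (1 - eta))

if __name__ == "__main__":
    eta = 0.01
    p_u = estimate_pU(eta=eta)
    print(f"eta = {eta}, estimated p_U = {p_u:.3f}, "
          f"Upsilon ~ {odds_ratio(p_u, eta):.1f}")
```

Even with a modest edge, the estimated $p_U$ comes out well above $\eta$, which is the "disproportionately likely" effect the abstract describes; under light (Gaussian) tails the effect strengthens as `n_strategies` grows.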

📜 Similar Papers

In the same crypt: q-fin.RM

Died the same way: 👻 Ghosted