What Risk Managers Can Learn From Science Fiction
Aaron Brown
Regulations and government guidelines for AI are flawed and do not address the key risks presented by this complex, fast-evolving technology. Financial risk managers who want to measure and mitigate current AI risks can benefit more from reading science fiction than from trying to follow official pronouncements.
Runaway artificial intelligence has been a major concern of science fiction at least since the 1909 publication of E. M. Forster’s The Machine Stops, but it took 114 years for the threat to receive serious official attention.
On January 26, 2023, the National Institute of Standards and Technology released its AI Risk Management Framework. Many other documents followed, most recently President Biden’s October 30 executive order on Safe, Secure, and Trustworthy Artificial Intelligence. Two days later, 28 countries and the European Union signed the Bletchley Declaration on AI Safety.
Unfortunately, none of these official documents, nor any others I have seen, focuses on AI’s essential threats, and none incorporates professional risk management best practices. For the moment, at least, risk managers will do better to consult the discussions in science fiction than to rely on official standards.