Observers of the US stock market will recall a curious event in early 2010, when a cascade of algorithm-driven high-frequency trades momentarily erased roughly one trillion dollars in market value. The market recovered in a matter of minutes, but the head of the US Securities and Exchange Commission is warning that a similar event driven by artificial intelligence (AI) technology may cause a more serious crisis by the end of the decade.
“I do think we will in the future have a financial crisis,” said SEC chair Gary Gensler in an interview with UK media. “In the after action reports people will say ‘Aha! There was either one data aggregator or one model… we’ve relied on.’ Maybe it’s in the mortgage market. Maybe it’s in some sector of the equity market.”
Although AI influences financial markets through a variety of implementations, Gensler notes that a small number of underlying models, such as the one powering OpenAI's ChatGPT, currently drive most of these tools. Compounding the problem is the proprietary, closed-source nature of such models, which prevents public scrutiny.
The variety of industries involved also contributes to the dilemma, Gensler noted, because AI tools developed in Silicon Valley may fall outside the typical reach of government financial regulatory bodies.
“I’ve raised this at the Financial Stability Board. I’ve raised it at the Financial Stability Oversight Council. I think it’s really a cross-regulatory challenge,” said Gensler.
The European Union has already drafted broad regulations governing the use of AI, but the United States has been slower to act. A more modest rule was proposed in July requiring firms to disclose “conflicts of interest” regarding their use of predictive analytics tools.
Broader regulatory action, including the SEC's proposed rule requiring publicly traded companies to disclose information about their carbon emissions, has been met with legal challenges and pushback from Republican Party officials. Such an environment may prove an impediment to more comprehensive action by the US financial regulator.
19 September 2023, 18:22 GMT
AI technology’s ability to convincingly simulate real human interaction has led to concerns in a number of areas. A recent survey of financial fraud and risk professionals found that the use of AI tools for fraudulent online banking activity was a growing and recognized problem.
Meanwhile, the disquieting ability of AI to simulate phone calls from friends and loved ones in distress has generated significant headlines.
“The very technology that empowers us may also imperil us,” said author and privacy expert Nick Shevelyov. “Everything is accelerating. The technologies used to defend against [fraud] are getting better, but also just the proliferation of false identities are also increasing.”