Deregulating AI under Trump could pose risks to financial markets

As Canada moves toward stronger AI regulation with the proposed Artificial Intelligence and Data Act (AIDA), its southern neighbour appears to be taking the opposite approach.

AIDA, part of Bill C-27, aims to establish a regulatory framework to improve AI transparency, accountability and oversight in Canada, although some experts have argued it doesn’t go far enough.

Meanwhile, United States President Donald Trump is pushing for AI deregulation. In January, Trump signed an executive order aimed at eliminating any perceived regulatory barriers to “American AI innovation,” replacing former president Joe Biden’s prior executive order on AI.

Notably, the U.S. was also one of two countries — along with the U.K. — that didn’t sign a global declaration in February to ensure AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy.”

Eliminating AI safeguards leaves financial institutions vulnerable. This vulnerability can increase uncertainty and, in a worst-case scenario, increase the risk of systemic collapse.

The power of AI in financial markets

AI’s potential in financial markets is undeniable. It can improve operational efficiency, perform real-time risk assessments, generate higher income and forecast economic shifts before they happen.

My research has found that AI-driven machine learning models not only outperform conventional approaches in identifying financial statement fraud, but also in detecting abnormalities quickly and effectively. In other words, AI can catch signs of financial mismanagement before they spiral into a disaster.

In another study, my co-researcher and I found that AI models like artificial neural networks and classification and regression trees can predict financial distress with remarkable accuracy.

An illustration of an artificial neural network. The neural network takes in three input features, processes them through two hidden layers and produces a binary prediction based on the activations of the neurons in the output layer.
(Sana Ramzan and Mark Eshwar Lokanan), Author provided (no reuse)

Artificial neural networks are brain-inspired algorithms. Similar to how our brain sends messages through neurons to perform actions, these neural networks process information through layers of interconnected “artificial neurons,” learning patterns from data to make predictions.
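
A minimal sketch of this idea, using only Python’s standard library, might look like the following. It mirrors the network in the figure above (three inputs, two hidden layers, one output read as a probability); the weights are random and the input values are invented for illustration, not taken from the study.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases, activation):
    # Each neuron computes a weighted sum of its inputs plus a bias,
    # then passes the result through an activation function.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 3 input features, two hidden layers of 4 neurons each,
# and a single sigmoid output interpreted as a probability of distress.
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
W3 = [[random.gauss(0, 1) for _ in range(4)]]
b1, b2, b3 = [0.0] * 4, [0.0] * 4, [0.0]

def predict(features):
    h1 = layer(features, W1, b1, math.tanh)   # first hidden layer
    h2 = layer(h1, W2, b2, math.tanh)         # second hidden layer
    return layer(h2, W3, b3, sigmoid)[0]      # output probability

p = predict([0.5, -1.2, 0.3])  # e.g. three scaled financial ratios
print(p)
```

In practice, the weights would be learned from historical data rather than drawn at random; training is what lets the network recognize the patterns that precede distress.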

Similarly, classification and regression trees are decision-making models that divide data into branches based on important features to identify outcomes.
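
A hand-written two-level tree makes the branching idea concrete. The feature names and thresholds below are invented for illustration; a real classification and regression tree learns its split points from training data.

```python
def classify(firm):
    # Each branch tests one feature against a threshold and either
    # descends to another test or returns an outcome (a leaf).
    if firm["debt_to_equity"] > 2.0:
        if firm["interest_coverage"] < 1.5:
            return "distressed"
        return "watchlist"
    if firm["operating_margin"] < 0.0:
        return "watchlist"
    return "healthy"

result = classify({"debt_to_equity": 3.1,
                   "interest_coverage": 0.8,
                   "operating_margin": 0.05})
print(result)  # "distressed"
```

The appeal of tree models is that every prediction can be traced back through the exact sequence of tests that produced it.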

Our artificial neural network models predicted financial distress among Toronto Stock Exchange-listed companies with a staggering 98 per cent accuracy. This suggests AI has immense potential to provide early warning signals that could help avert financial downturns before they start.

However, while AI can simplify manual processes and lower financial risks, it can also introduce vulnerabilities that, if left unchecked, could pose significant threats to economic stability.

The risks of deregulation

Trump’s push for deregulation could result in Wall Street and other major financial institutions gaining significant power over AI-driven decision-making tools with little to no oversight.

When profit-driven AI models operate without appropriate ethical boundaries, the consequences could be severe. Unchecked algorithms, especially in credit evaluation and trading, could worsen economic inequality and generate systemic financial risks that traditional regulatory frameworks cannot detect.

Algorithms trained on biased or incomplete data may reinforce discriminatory practices. In lending, for instance, biased AI models can deny loans to marginalized groups, widening wealth and inequality gaps.

The New York Stock Exchange is seen in New York in February 2025.
(AP Photo/Seth Wenig)

In addition, AI-powered trading bots, which are capable of executing rapid transactions, could trigger flash crashes in seconds, disrupting financial markets before regulators have time to respond. The flash crash of 2010 is a prime example: high-frequency trading algorithms aggressively reacted to market signals, causing the Dow Jones Industrial Average to drop by 998.5 points in a matter of minutes.

Furthermore, unregulated AI-driven risk models might overlook economic warning signals, resulting in substantial errors in monetary control and fiscal policy.

Striking a balance between innovation and safety depends on the ability of regulators and policymakers to reduce AI hazards. The 2008 financial crisis offers a cautionary example: many risk models, earlier forms of AI, failed to anticipate a national housing market crash, leading regulators and financial institutions astray and exacerbating the crisis.

A blueprint for financial stability

My research underscores the importance of integrating machine learning methods within strong regulatory systems to improve financial oversight, fraud detection and prevention.

Durable and reasonable regulatory frameworks are required to turn AI from a potential disruptor into a stabilizing force. By implementing policies that prioritize transparency and accountability, policymakers can maximize the advantages of AI while lowering the risks associated with it.

A federally regulated AI oversight body in the U.S. could serve as an arbitrator, much as Canada’s Digital Charter Implementation Act, 2022 proposes establishing an AI and Data Commissioner. Operating with the checks and balances inherent to democratic structures would help ensure fairness in financial algorithms and prevent biased lending policies and concealed market manipulation.

President Donald Trump signs an executive order relating to AI in the Oval Office of the White House on Jan. 23, 2025, in Washington.
(AP Photo/Ben Curtis)

Mandating transparency through explainable AI standards, guidelines aimed at making AI systems’ outputs more understandable to humans, would require financial institutions to open the “black box” of AI-driven decisions.
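
One simple form of explainability can be sketched as follows: for a linear scoring model, each feature’s contribution to the final score can be reported directly, turning an opaque decision into an itemized one. The feature names and weights here are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical linear risk model: weight * feature value = contribution.
weights = {"debt_to_equity": 0.9, "operating_margin": -1.4, "liquidity": -0.6}

def explain(firm):
    # Break the overall score into per-feature contributions,
    # so a reviewer can see exactly what drove the decision.
    contributions = {f: weights[f] * firm[f] for f in weights}
    return sum(contributions.values()), contributions

score, parts = explain({"debt_to_equity": 2.5,
                        "operating_margin": -0.1,
                        "liquidity": 0.4})

# Report the largest drivers of the score first.
for feature, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"total risk score: {score:+.2f}")
```

Real explainable-AI techniques extend this idea to non-linear models, but the goal is the same: a regulator or customer should be able to ask why a score came out the way it did.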

Machine learning’s predictive capabilities could help regulators identify financial crises in real time using early warning signs, similar to the model my co-researcher and I developed in our study.

However, this vision doesn’t end at national borders. Globally, the International Monetary Fund and the Financial Stability Board could establish AI ethical standards to curb cross-border financial misconduct.

Crisis prevention or catalyst?

Will AI be the key to foreseeing and stopping the next economic crisis, or will the lack of regulatory oversight trigger a financial disaster? As financial institutions continue to adopt AI-driven models, the absence of strong regulatory guardrails raises pressing concerns.

Without proper safeguards in place, AI is not just a tool for economic prediction — it could become an unpredictable force capable of accelerating the next financial crisis.

The stakes are high. Policymakers must act swiftly to regulate the increasing impact of AI before deregulation opens the path for an economic disaster.

Without decisive action, the rapid adoption of AI in finance could outpace regulatory efforts, leaving economies vulnerable to unforeseen risks and potentially setting the stage for another global financial crisis.

The post “Trump’s push for AI deregulation could put financial markets at risk” by Sana Ramzan, Assistant Professor in Business, University Canada West was published on 03/26/2025 by theconversation.com