AI Risks in Health and Finance: When Errors Matter

Chris Billingham
March 18, 2026
Blog

You're probably already using AI in some form, or at least thinking about it. And you should be. The potential is real. But if you work in healthcare or finance, the gap between "impressive demo" and "reliable in production" carries consequences that go well beyond a bad quarter or a frustrated customer. In these industries, errors cost money, trust, and sometimes lives.

There's good news, though: you don't have to choose between adopting AI and managing risk. But you do need to understand where things go wrong, and why the right checks and balances matter more here than anywhere else.

Healthcare: When the Algorithm Gets It Wrong, Patients Pay

In March 2025, ECRI, the independent nonprofit that monitors healthcare safety, named insufficient governance of artificial intelligence as the number two patient safety concern for the year. That's not a fringe concern buried in a footnote. It sits just behind medical gaslighting on a list informed by incident data, scientific literature, and expert analysis. ECRI warned that AI-generated medical errors could lead to misdiagnoses and inappropriate treatment decisions, causing injury or death, and that staff may struggle to identify when errors are actually attributable to AI.

The regulatory picture hasn't kept pace, either. A study published in JAMA Health Forum in August 2025 examined recalls of AI-enabled medical devices cleared by the FDA. Researchers found that 43.4% of recalls occurred within the first year of device clearance, which is roughly double the rate for all 510(k) devices. The vast majority of recalled devices had not undergone clinical trials, and devices without reported clinical validation were associated with larger recalls and more recall events per device. Publicly traded companies manufactured 53.2% of AI-enabled devices but accounted for 91.8% of recalls.

Meanwhile, the ongoing legal battle over the nH Predict algorithm continues to highlight the human cost of unchecked AI in healthcare decision-making. In September 2025, a federal judge denied UnitedHealth's request to limit discovery in the class action lawsuit alleging the insurer used the AI tool to override physicians' recommendations and deny post-acute care to elderly Medicare Advantage members. The plaintiffs allege that more than 90% of appealed denials were reversed, yet the tool remained in use.

Finance: Speed Without Oversight Is Just Fast Failure

On April 7, 2025, the Warsaw Stock Exchange suspended all trading for approximately 75 minutes after a flood of automated, high-frequency trading orders overwhelmed the exchange during a period of extreme global volatility. The WIG20 index plunged as much as 7% intraday before the halt was imposed. Bloomberg subsequently reported that the exchange began reviewing its algorithmic trading regulations in the aftermath, noting that algorithmic and high-frequency strategies accounted for 18.4% of Warsaw's equity trading volumes in the prior year.

It's a reminder that in financial markets, automated systems don't just execute faster; they fail faster, too. When multiple algorithms react to the same signals simultaneously, without adequate circuit breakers or human oversight, a minor data error can escalate into a market-wide disruption in minutes.
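The circuit-breaker idea above can be sketched in a few lines. This is a simplified illustration, not how any real exchange implements its halts: the 7% threshold and five-minute rolling window are assumptions chosen to echo the Warsaw example, and the `CircuitBreaker` class and its methods are hypothetical names.

```python
import time
from collections import deque


class CircuitBreaker:
    """Halts trading when the tracked price falls more than `max_drop`
    (fractional, e.g. 0.07 = 7%) from its peak within a rolling window.
    Thresholds are illustrative, not any real venue's rules."""

    def __init__(self, max_drop=0.07, window_seconds=300):
        self.max_drop = max_drop
        self.window_seconds = window_seconds
        self.ticks = deque()   # (timestamp, price) pairs inside the window
        self.halted = False

    def on_tick(self, price, now=None):
        """Record a price tick; return True once trading should halt."""
        now = time.time() if now is None else now
        self.ticks.append((now, price))
        # Evict ticks that have aged out of the rolling window.
        while self.ticks and now - self.ticks[0][0] > self.window_seconds:
            self.ticks.popleft()
        peak = max(p for _, p in self.ticks)
        if peak > 0 and (peak - price) / peak >= self.max_drop:
            self.halted = True   # a real venue would also notify members
        return self.halted
```

The point of the sketch is the design choice, not the arithmetic: the halt decision lives outside the trading algorithms themselves, so it still fires when every strategy is chasing the same signal.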

The Theme Isn't Caution. It's Confidence.

None of this means you should avoid AI. These technologies are transforming both industries for good reasons, and standing still carries its own risks. But there's a meaningful difference between moving fast and moving with confidence.

The pattern across every example above is the same: systems deployed without sufficient testing, monitoring, or human oversight. AI that enters production without proper validation. Outputs that nobody checks until something breaks. Governance frameworks that haven't caught up to the tools they're meant to govern.

The teams that will get the most from AI in healthcare and finance are the ones that build verification into their workflows from the start — not as an afterthought. That means understanding what your models are doing, testing them before and after deployment, and maintaining the kind of visibility that lets you catch problems early rather than explaining them later.
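One concrete form that post-deployment visibility can take is tracking how often human reviewers reverse an automated decision. This is a minimal sketch under assumed thresholds, not a production monitoring system; the `OverrideMonitor` class and its 25%/50-sample alert settings are hypothetical, chosen only to illustrate the idea behind the nH Predict example, where plaintiffs allege more than 90% of appealed denials were reversed.

```python
class OverrideMonitor:
    """Tracks how often human reviewers reverse an automated decision.
    A persistently high reversal rate suggests the model, not the
    reviewers, is wrong. Thresholds here are illustrative assumptions."""

    def __init__(self, alert_rate=0.25, min_samples=50):
        self.alert_rate = alert_rate      # reversal rate that triggers review
        self.min_samples = min_samples    # avoid alerting on tiny samples
        self.reviewed = 0
        self.reversed = 0

    def record(self, was_reversed):
        """Log one human-reviewed decision."""
        self.reviewed += 1
        if was_reversed:
            self.reversed += 1

    @property
    def reversal_rate(self):
        return self.reversed / self.reviewed if self.reviewed else 0.0

    def should_alert(self):
        """True when enough decisions have been reviewed and the
        reversal rate sits at or above the alert threshold."""
        return (self.reviewed >= self.min_samples
                and self.reversal_rate >= self.alert_rate)
```

The useful property is that the check is cheap and continuous: you don't need to know in advance *how* the model will fail, only that humans keep disagreeing with it, which is exactly the signal a 90% appeal-reversal rate would have surfaced early.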

It's not about slowing down. It's about knowing that what you're shipping actually works.