Healthcare innovation is advancing rapidly as healthcare systems speed up adoption of AI technologies. In 2025, healthcare AI spending hit $1.4 billion, nearly tripling 2024’s investment.
Regulation around this technology is also intensifying. Over 250 bills have been introduced across 47 states, and additional frameworks are beginning to emerge from non-profit organizations. The latest comes from the Joint Commission, which, earlier this year and in partnership with the Coalition for Health AI, released new guidance for deploying AI tools in healthcare.
With new policy incoming, a noticeable divide has emerged between Silicon Valley’s rapid-iteration culture and markets with stricter oversight, where healthcare AI tools are classified as medical devices requiring higher compliance standards from the very start. The divergence raises important questions about whether speed-focused development models can adequately address healthcare’s complex privacy and safety requirements.
What is now clear is that healthcare AI in the United States has progressed to a point where traditional, HIPAA-style compliance alone is no longer adequate. The next phase of regulation and market expectation will require continuous, medical-grade AI governance, and companies that don’t adapt now will be left behind.
Why now? The market has already moved on
HIPAA was built for 1990s healthcare to enable health insurance portability, administrative simplification, and patient data protection. It focuses on records, not intelligence, and was designed for humans, not machines. As a result, it contains outdated assumptions about data flows: namely, that data is static, stored in silos, and accessed only by a small, known group of people. It applies only to covered entities and business associates, leaving gaps for consumer health apps, AI foundation model providers, shadow IT, and analytics platforms. And it relies on protecting patient data through restriction of access and de-identification.
The result is that HIPAA currently protects some data, misses many risks, and encourages checkbox compliance rather than systemic safety. Contrast how HIPAA changes slowly, through regulation and guidance, with AI, which evolves weekly, iteratively, and through emergent behavior. With the rapid uptake of AI technologies in health, HIPAA simply cannot keep pace with foundation models, multimodal systems, agentic workflows, and real-time clinical co-pilots. Studies have shown that AI models frequently degrade in performance after deployment, yet there are no formal policies requiring real-time monitoring to detect issues like drift and bias.
As a result, healthcare systems are demanding assurances regulators haven't yet codified. Historically, regulation in the US has followed the market. Today, more than half of U.S. health systems are already deploying AI across clinical and operational workflows, often well ahead of formal regulatory guidance. From my vantage point as chief medical officer at one of the fastest-growing healthcare AI companies, I can see the market is already asking deeper questions. Over the last few months at Heidi, we've seen an influx of customers probing model governance, monitoring, and risk ownership. In other words, customers are moving faster than regulation to deploy these technologies safely and responsibly.
A regulatory divide
From a global perspective, the US has one of the more liberal markets for AI, a defining trait that is increasingly becoming a liability. While many startups benefit from this flexibility, the lack of oversight has created a gap between how AI tools are built and how they are ultimately expected to perform to ensure safety and reliability in high-risk industries like healthcare.
In markets such as Europe, many governments have adopted strict frameworks for AI tools used in care delivery. For example, in the UK, ambient voice AI is currently regulated as a medical device. The same technology, when deployed in the US, operates in a largely unregulated environment. This same regulatory determination applies to medical knowledge and evidence platforms used for clinical decision support.
This regulatory divergence is catalyzing a broader cultural reckoning for healthcare AI startups in the United States. The famous Silicon Valley mantra of "move fast and break things" simply does not translate to medical environments, where breaking things can put patients at serious risk.
I acknowledge that AI governance is not glamorous or fun work; it requires an investment of money, time, and talent, plus a ton of patience. But as health systems become more sophisticated buyers, stronger governance is inevitable.
In this next era of healthcare AI in the US, I see healthcare AI startups having two choices: invest in the necessary resources required for medical-grade AI, or exit healthcare altogether in favor of less regulated markets.
What HIPAA 2.0 must get right
As healthcare AI becomes increasingly ingrained in clinical workflows, the industry is approaching what many are beginning to think of as a "HIPAA 2.0" moment. This will not necessarily be a formal rewrite of the law itself, but rather a fundamental shift in expectations from healthcare systems around what compliance is expected to cover in practice. The question is whether this next phase meaningfully addresses AI governance, or simply layers incremental security requirements onto an already outdated framework. At stake is not just data security, but patient trust in the care they receive.
There will no doubt be pressure to keep requirements minimal and leave the core of these issues largely untouched. Large platform providers and cloud vendors may resist governance mandates that introduce friction or slow deployment. But the cost of inaction is much higher. Patients must trust that their data is handled responsibly, providers must trust that AI systems behave reliably, and health systems must trust that governance mechanisms will surface problems before they cause harm.
HIPAA 2.0 must recognize that AI risk is not static. Privacy and security safeguards are necessary, but they do nothing to address how AI systems behave once deployed: how models drift over time, how bias emerges, or how errors propagate at scale. Without explicit attention to these dynamics, HIPAA risks becoming obsolete once again.
As oversight tightens, the startups that succeed will not be those that avoided regulation the longest, but those that used it as a forcing function to build safer, more reliable systems from the outset.
What the future looks like
Looking forward, the best healthcare AI companies will build tools from the ground up that go beyond traditional checklists and HIPAA compliance alone. This new checklist looks more like:
- Continuous monitoring vs. one-time audits – Real-time observability into model performance, hallucination rates, and failure patterns
- Transparency as a requirement – Model cards, explainability documentation, and rigorous testing across bias, accent, and language variation
- Governance becoming collaborative – Shared standard operating procedures, joint risk ownership, and ongoing partnership models instead of off-the-shelf software delivery
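To make the first item on that checklist concrete, here is a minimal sketch of what continuous post-deployment monitoring can look like in code. This is an illustration, not any vendor's actual stack: the class name, window size, and tolerance threshold are all assumptions chosen for the example, and a production system would add statistical tests, subgroup breakdowns for bias, and alert routing.

```python
from collections import deque

class DriftMonitor:
    """Rolling check that a deployed model's error rate hasn't drifted
    far above the rate measured at validation sign-off.

    Illustrative only: thresholds and window size are example values."""

    def __init__(self, baseline_error_rate: float,
                 window_size: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_error_rate      # error rate at deployment sign-off
        self.window = deque(maxlen=window_size)  # most recent outcomes (1 = error)
        self.tolerance = tolerance               # allowed absolute increase

    def record(self, is_error: bool) -> None:
        """Log one reviewed model output (e.g. a flagged hallucination)."""
        self.window.append(1 if is_error else 0)

    def current_error_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def drifted(self) -> bool:
        # Only raise an alert once the window holds enough samples.
        if len(self.window) < self.window.maxlen:
            return False
        return self.current_error_rate() > self.baseline + self.tolerance

# Example: a model signed off at a 2% error rate; a run of recent
# failures pushes the rolling rate to 10%, tripping the drift alert.
monitor = DriftMonitor(baseline_error_rate=0.02, window_size=100, tolerance=0.05)
for _ in range(90):
    monitor.record(False)
for _ in range(10):
    monitor.record(True)
print(monitor.drifted())  # True: 0.10 exceeds 0.02 + 0.05
```

The point of the sketch is the contrast with a one-time audit: the check runs continuously against live outputs, so degradation surfaces in days rather than at the next annual review.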
The most important part of the transition is the recognition by all stakeholders that the current generation of AI systems is not static. These are complex, adaptive systems that must be managed across various clinical contexts. The next generation of responsible healthcare IT leaders, both within health systems and the companies that serve them, will be those who treat governance as a core capability that is built intentionally into products, pursued by buyers, and collaboratively managed in real-world use.
Photo: Ildo Frazao, Getty Images
Dr. Simon Kos is an internationally recognised leader in digital health, working in senior executive roles for over twenty years. He is a registered medical practitioner who has practiced critical care medicine in Australia. He holds an MBBS from UNSW, an MBA from AGSM and is a Fellow of the Australian Institute of Digital Health (FAIDH). Significant past roles include global chief medical officer of Microsoft based in Seattle, CEO of Next Practice, Physician Executive with Cerner, and the co-chair of the Global Commission to end the Diagnostic Odyssey for Children with a Rare Disease. He is currently the global chief medical officer at Heidi, co-founder of Lumyra AI, an advisor to organisations and an investor in digital health start-ups.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
