It’s an all-too-common problem in radiology: A patient goes in for imaging. A new AI diagnostic tool flags a pulmonary nodule. The radiologist recommends follow-up imaging. A report is filed, inboxes fill up, and the patient’s next step depends on a fragile chain of handoffs.
Weeks later, nothing has happened.
If that nodule turns out to be cancerous? Now it's silently growing while the follow-up recommendation sits forgotten at the bottom of an inbox or a worklist. The patient's chances of a good outcome steadily dwindle, and the hospital has opened itself up to liability.
This fundamental failing comes in the last mile of imaging care, the pivotal point at which patients are either connected to the follow-up care they need or, far too frequently, lost in the system.
Why “finding it” ≠ “fixing it”
Radiology is in a paradoxical era. AI is getting better at flagging nodules, lesions, and other incidental findings, but many hospitals still struggle with what happens next. The handoff from radiology report to EHR order to scheduled follow-up to completed exam is often manual, fragmented, and brittle.
That brittleness has consequences. Follow-up recommendations are common enough to be operationally meaningful, yet follow-through is inconsistent. Follow-up completion can be as low as roughly 30% in some settings, meaning that for most of those patients, the recommendation never becomes care.
At the same time, advanced imaging is producing more incidental findings. RSNA has reported incidental findings in 20% to 40% of advanced imaging studies. Imaging also kicks off many downstream clinical journeys, which means small failures in follow-through can scale into wide operational drag.
Each incidental finding can trigger a cascade: clarifying clinical relevance, communicating with the right team, ordering, prior authorization, scheduling, reminders, and documenting completion. If the system cannot manage that cascade reliably, it generates pockets of manual labor, frustrating data management, and wasted time and talent across clinicians, staff, and patients. For every poorly designed and under-resourced step, there is a patient at risk of falling through the cracks.
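To make that cascade concrete, here is a minimal sketch (in Python, with invented names, not any particular vendor's system) of what it looks like when each handoff is an explicit, owned step rather than an implicit inbox message:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Step(Enum):
    # The cascade described above, as explicit steps (names are illustrative).
    CLARIFY_RELEVANCE = auto()
    NOTIFY_CARE_TEAM = auto()
    PLACE_ORDER = auto()
    PRIOR_AUTH = auto()
    SCHEDULE = auto()
    REMIND_PATIENT = auto()
    DOCUMENT_COMPLETION = auto()

@dataclass
class Handoff:
    step: Step
    owner: str          # the role accountable for this step
    done: bool = False

def next_pending(cascade: list[Handoff]) -> Handoff | None:
    # The current point of failure risk is simply the first undone step.
    return next((h for h in cascade if not h.done), None)
```

The value of making the chain explicit is that the point of failure is always inspectable instead of buried in someone's inbox.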
Defining the diagnostic alert trap
This is where point AI tools often create more work than expected.
Many imaging AI products excel at the front end. They detect, prioritize, and generate insights. Too often, their outputs land as alerts or worklists that are not tied to ordering, scheduling, escalation, and closure. Health systems respond with manual scaffolding: spreadsheets, shared inboxes, and recurring huddles to reconcile what the tool found with what the EHR shows. The tool may improve detection, but it also expands the downstream work that humans must stitch together.
New worklists also create a familiar trap. A queue of flagged studies is not a follow-up program. Without clear ownership and a defined closure event, the list becomes a backlog. Clinicians spend time triaging and re-triaging instead of moving patients to the next step. Dashboards can show risk. They do not complete follow-up.
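One way to see the difference is in the data each artifact carries. A hypothetical comparison, not any product's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorklistItem:
    # What many point tools emit: "look at this," with no owner,
    # no deadline, and no definition of done.
    study_id: str
    finding: str

@dataclass
class FollowUpRecord:
    # The same finding plus the fields that turn a queue into a program.
    study_id: str
    finding: str
    owner: str                 # who is accountable right now
    due: date                  # when escalation should trigger
    closure_event: str | None  # e.g., "follow-up CT completed"; None = still open
```

A flagged study with no owner, no due date, and no closure definition is a backlog entry by construction; the added fields are what make escalation and closure possible.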
Governance problems add another layer of work. “AI governance” is often discussed as paperwork. In practice, it is about safety. Safety is the strategy. Who sees an AI output, where it is displayed, and how it is labeled determines whether clinicians trust it or double-check everything. When outputs appear in the wrong context, teams add manual review steps to protect patients. That protection is necessary, but it is also a sign that workflow design and role-based controls were missing.
Why orchestration matters more than another model
The last mile breaks because detection is being optimized in isolation. What radiology needs is orchestration.
Healthcare doesn't need more AI for the sake of AI. In fact, the industry already has more data than it has doctors to act on it. What healthcare needs is a better way to use that data by coordinating humans, workflows, and systems. Orchestration allocates humans deliberately. It reserves clinician time, the scarcest currency in healthcare, for the moments that require expertise and empathy, and automates the friction that does not. Many organizations are missing a system of record for follow-up: a reliable layer that connects the report to the action and tracks the outcome.
That distinction matters to patients. Most patients will tolerate automated scheduling outreach if it is clear, timely, and easy to respond to. They do not want automation when they are frightened by a concerning finding and need a person to explain what it means and what comes next. A workflow that cannot distinguish between those moments wastes clinician time on logistics or inserts automation where it erodes patient trust.
Orchestration also requires risk-based triage. Imaging volume is rising, and radiologist capacity is not rising at the same pace. The result is a needle-in-a-haystack problem. Health systems need fewer vague recommendations that expand demand without benefit and more specificity about what is actionable, by when, and why. If every ambiguous phrase becomes a task, follow-up programs drown in low-value work and miss the cases that matter.
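A hedged sketch of what risk-based triage can look like in practice, with the phrases and thresholds invented for illustration: ambiguous language is routed for clarification rather than silently becoming a task, while specific, time-bound recommendations become tracked work with a deadline.

```python
from datetime import timedelta

def triage(recommendation: str) -> tuple[str, timedelta | None]:
    # Illustrative rules only; a real program would tune these with
    # clinical governance, not string matching.
    text = recommendation.lower()
    if any(p in text for p in ("consider", "may be", "if clinically indicated")):
        return ("route_for_clarification", None)
    if "3 month" in text:
        return ("create_tracked_task", timedelta(days=90))
    if "6 month" in text:
        return ("create_tracked_task", timedelta(days=180))
    return ("manual_review", None)

# e.g., triage("Recommend chest CT in 3 months to reassess nodule")
#       -> ("create_tracked_task", timedelta(days=90))
```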
Radiology leaders are already pushing the field in this direction. The American College of Radiology has developed a quality measure set aimed at closing the completion loop on radiology follow-up recommendations for noncritical actionable incidental findings, emphasizing communication, tracking, and follow-up to completion.
The next wave of radiology AI value will come from closed-loop follow-up. An actionable finding should start a pathway with a measurable end state: the follow-up exam is completed, the referral is completed, or the recommendation is clinically resolved with a documented rationale. The system should track to completion, surface overdue exceptions early, and provide an auditable record of what happened.
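In system terms, "closed loop" is a small state machine: a defined set of terminal states, an overdue check, and an audit trail. A minimal sketch, with names assumed for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class State(Enum):
    OPEN = "open"
    ORDERED = "ordered"
    SCHEDULED = "scheduled"
    COMPLETED = "completed"   # follow-up exam or referral completed
    RESOLVED = "resolved"     # clinically resolved, rationale documented

TERMINAL = {State.COMPLETED, State.RESOLVED}

@dataclass
class FollowUp:
    finding_id: str
    due: datetime
    state: State = State.OPEN
    audit: list[tuple[datetime, str]] = field(default_factory=list)

    def transition(self, new_state: State, note: str) -> None:
        # Log every change: this is the auditable record of what happened.
        self.audit.append((datetime.now(),
                           f"{self.state.value} -> {new_state.value}: {note}"))
        self.state = new_state

def overdue(items: list[FollowUp], now: datetime) -> list[FollowUp]:
    # Surface exceptions early: anything past due and not yet closed.
    return [f for f in items if f.state not in TERMINAL and now > f.due]
```

The point is not this particular code; it is that "closed" has a machine-checkable definition, so overdue items surface as exceptions rather than sitting silently in a queue.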
For health system leaders, a few moves separate AI pilots from measurable reliability:
- Define the closure event. Decide what counts as closed, where it is documented, and how it is measured.
- Assign ownership across handoffs. Follow-up spans radiology, ordering clinicians, scheduling, and patient outreach.
- Demand integration that removes work. If a tool creates a parallel worklist, require a clear path from output to action inside existing workflows.
- Treat governance as patient safety. Build role-based visibility, clear labeling, and audit trails into the operating model.
Radiology will remain a proving ground for clinical AI. The organizations that win this phase will be those that stop optimizing the find and start operationalizing the finish.
Angela Adams, RN, has spent more than a decade applying AI to improve healthcare outcomes. She began her career as a critical care nurse at Duke University Medical Center, where she grew increasingly frustrated with the inefficiencies in patient care. Driven to make a broader impact, she looked to the emerging healthcare AI segment for solutions that would help patients while making clinicians more effective and efficient at solving complex medical issues. She helped advance AI adoption and overcome skepticism at companies like Jvion (acquired by Lightbeam Health Solutions), where she applied deep machine learning to lower nosocomial event rates and prevent patient deterioration. She went on to create her most recent solution at Inflo Health, where she focuses on missed follow-up radiology appointments.
