Healthcare organizations investing in AI are watching Anthropic’s recent policy decisions with more than passing interest. The company has relaxed internal safety standards on one front while losing defense contracts for holding firm on another. These moves raise a question that matters to clinical and administrative leaders alike: what stable, trustworthy policy posture can be expected from commercial AI firms operating under real-world constraints?
Anthropic finds itself at the center of two controversies: its decision to relax internal safety standards, and its rigidity on military applications, which has cost it defense contracts. What Anthropic should be doing is maturing from a values-only posture into a pragmatic balance between values and market viability.
Decades of research on stakeholder trust suggest that such a pragmatic approach is not only defensible, but also more trustworthy. American customers, for instance, expect companies to charge a fair premium, make a reasonable profit, and thrive. Most customers experience a brand through price, reliability, and service, not mission statements, and they do not expect companies to be driven by the exclusive mission to serve customer needs at any cost to themselves.
Anthropic’s constitution already reflects this balance. It is not a purely values-driven document. It favors the user’s interests within a set of constraints designed to ensure fairness and maintain trust, not to pursue social aims that compromise the business. But it also runs the other way. The constitution explicitly instructs Claude to “respect the operator’s rights to make reasonable product decisions without requiring justification.” Operator business interests are not merely tolerated; they are structurally protected. That is exactly the right disposition for a commercial entity.
Anthropic would be well served to extend this pragmatic approach to its decisions regarding military use of its technology, and across its future path of growth. Given the firm’s investments in AI solutions for the healthcare market, and the natural alignment between its constitution and the patient-focused ethic driving healthcare AI, the firm’s long-term posture will be closely watched by healthcare organizations and their associated ecosystems.
The trust case for pragmatism
There is a widespread assumption that the most trustworthy companies are the most selfless ones, that bending over backwards for customers, even at a company’s own expense, builds the deepest loyalty. The evidence says otherwise. Extensive research into how consumers, employees, and investors form trust in commercial partners points to a consistent finding: stakeholders expect win-win relationships. They want competence and benevolence, not altruism and self-sacrifice.
A company that prices fairly when costs drop, that shares the upside of favorable conditions while retaining enough to thrive, is the partner people trust. Conversely, a company that signals it will sacrifice its own viability for abstract principles actually undermines confidence. Stakeholders begin to question competence: can this organization sustain itself? Will it be around to honor its commitments? For employees, a company that performs well demonstrates both that it knows what it is doing and that it cares about the people who depend on it. For an investment community evaluating an IPO, the calculus is even more direct.
Safety standards: maturation, not retreat
When Anthropic was a startup, developing its own internal safety standards, ahead of regulation and ahead of competitors, was both principled and strategic. But standards that exist in isolation from the market eventually become liabilities. My research suggests that company purpose is rarely known or understood by consumers, let alone a driver of their purchases. If it were, Walmart and Apple would not have become the behemoths that they are.
Competitors are not following comparable standards; regulators have signaled they do not want them, at least not in the form Anthropic adopted unilaterally. The positioning challenge is one of communication, not substance: Anthropic has moved from a startup’s internally generated standards to a maturing company’s market-informed ones, and that should be seen as a sign of growth, not capitulation.
Military use and the water’s edge
The harder case is military use, but the same framework should apply. The military says it intends lawful applications. The question Anthropic should be asking is: where does responsibility end? This is what I call the water’s edge of pragmatism, and it is not unique to AI.
Consider pharmaceuticals. A company develops a legitimate medication that is subsequently abused. It takes reasonable steps to discourage misuse, including prescribing guidelines, monitoring programs, and public education, but it does not refuse to manufacture the drug. Or consider the smartphone: its camera was designed for photography, yet it can be used for intrusive surveillance. Apple does not disable the camera. These companies do what they can to dissuade abuse, but they do not pre-select who may purchase their products based on hypothetical misuse scenarios.
The water’s edge must be drawn at reasonable safeguards. Pre-emptive policing of downstream use will actually hurt the reputation of AI providers; it will be seen as overreach, even when done for ostensibly values-driven reasons. Can the foundational AI company audit every customer’s business strategy? Should it? The answer, in most cases, is no.
What this means strategically
First, product usage is extraordinarily difficult to monitor before the fact, whether the product is an AI model, a pharmaceutical, or a camera. No company should assume the obligation, or the right, to police use preemptively.
Second, by staking out explicit values positions on contested terrain, Anthropic risks creating unnecessary fissures in its market. Research consistently shows that most customers do not know a company’s stated purpose, and most do not want to be seen as endorsing a political position simply by using a product. As Anthropic’s visibility grows, especially as it approaches a potential IPO, this risk intensifies. The company should be cutting across the market’s naturally occurring divides (political, ideological, cultural), not deepening them.
Third, unilateral restraint does not produce the outcome its proponents hope for. If Anthropic declines to serve a particular customer segment, the Pentagon being the case in point, that segment does not go unserved; it turns to competitors. The better path is advocating for coordinated industry standards, where restraint is collective and therefore effective, while respecting customer sovereignty and freedom.
Fourth, and most directly, commercial viability itself is a governance asset. A thriving Anthropic has more influence over industry norms, regulatory conversations, and safety research than one that has ceded market position in the name of principle.
Anthropic’s competitive posture can never be purity. It was, and should remain, credibility: the credibility of a company that takes safety seriously, builds trust with every stakeholder group, and does so while operating as a viable, competitive commercial enterprise. That is the pragmatic-trust position, and it is the one worth aligning with.
Image: created by the author
Deepak Sirdeshmukh, MS, Ph.D., is Co-founder and CEO of Sensal Health, offering IoT and AI-based solutions that help clinical research organizations, pharma companies, and healthcare providers monitor and improve patient compliance with complex medication regimens. Deepak holds an MS in Pharmaceutical Administration and a PhD in Marketing (Consumer Behavior) from The Ohio State University. His research has appeared in the Journal of Marketing, Journal of Marketing Research, Journal of Consumer Research, Journal of the Academy of Marketing Science, the Journal of Consumer Psychology, Academy of Management Journal, Journal of the American Academy of Dermatology, and other peer-reviewed journals. He writes for STAT, MedCity News, and Pharmaceutical Executive, and publishes on patient trust at Psychology Today. Deepak speaks on consumer and patient trust, behavioral change, pragmatic innovation, and resilient value propositions.
This post appears through the MedCity Influencers program. Anyone can publish their perspective on business and innovation in healthcare on MedCity News through MedCity Influencers.
