The Logic of the Commons: Why Communities in Health Form, Fracture, and Sometimes Flourish

The hardest problem in collective life isn't finding people who share a goal. It's getting them to pay the cost of pursuing it together.

BY GEOFFREY W. SMITH
FEBRUARY 25, 2026

Introduction: After the Market Fails


Last month’s essay in this series argued that the healthcare market cannot heal itself — that the structural failures Kenneth Arrow identified in 1963 have not been corrected by six decades of policy effort, and that the political economy of reform is itself part of the disease. The concentrated interests that benefit from today's arrangements—pharmaceutical companies, hospital systems, private equity-backed physician groups, pharmacy benefit managers fused with the insurers they nominally serve—have consistently outmaneuvered the diffuse interests of patients. The market, left to itself, optimizes for extraction. The political process, shaped by organized lobbying, has largely accommodated it.


This essay takes up where that one left off, but shifts the lens. If markets fail and governments are captured, what remains? The answer, imperfect and underestimated, is community—the sometimes fractious attempt by groups of people to manage shared problems through institutions they build themselves. The question is not whether community is an adequate substitute for functional markets or effective states; it is not. The question is what makes communities capable of producing genuine collective goods at all, why so many fail to do so, and what the examples of communities that have succeeded in healthcare and biomedicine reveal about the conditions for that success.


The initial intellectual scaffolding here comes from the same Mancur Olson whose work on concentrated interests and diffuse public responses shaped the earlier analysis. But where last month's essay drew on Olson's logic to explain how organized minorities defeat unorganized majorities in the political arena, this one starts at the foundation: Olson's earlier and more fundamental argument about the structural obstacles to collective action of any kind, an argument that turns out to be both more pessimistic and, in a curious way, more generative than it first appears.


In 1965, Olson published a slim book with an enormous argument. The Logic of Collective Action proposed something that felt counterintuitive then and remains unsettling now: that a group of people who share a common interest will, as a default condition, not act to advance it. Rational individuals, Olson argued, will free-ride on the efforts of others whenever they can—consuming the benefits of collective goods without bearing their share of the cost. The larger the group, the worse the problem. The more diffuse the benefit, the more powerful the incentive to defect.


The insight was so clean it was almost cruel. It punctured the sentimental idea that shared interest naturally produces collective action, and replaced it with a cold structural logic: absent what Olson called “selective incentives”—rewards for participating, penalties for not—most groups will fail to organize around their common good, or will organize in ways that serve narrow subsets of their members rather than the whole. It explained why labor unions required closed shops, why professional associations gatekept licensing, why lobbying organizations proliferated while broader civic institutions decayed. The book was framed as economics. It read like a diagnosis of civilization.


Sixty years later, Olson's framework still has remarkable explanatory power—not least in one of the domains where collective action problems are most consequential and most frequently ignored: the governance of health.

The Problem Is Always the Same

Strip it of its warmer connotations and consider what a community actually is: a group of people with some overlapping interests and some shared costs. The overlap in interests is what makes collective action potentially valuable. The shared costs are what make it hard.


Healthcare communities—patients with a shared diagnosis, researchers working in a common field, hospitals serving a common geography, insurers and providers locked in mutual dependence—have enormous potential for collective gain. Shared data makes diagnosis faster and more accurate. Coordinated care protocols reduce error. Collaborative research compresses the timeline from discovery to treatment. The benefits are real and large and, crucially, non-excludable: the insights from a well-curated patient registry benefit everyone in that disease community, whether they contributed data to it or not.


This is precisely Olson's problem. When a good is non-excludable—when you cannot prevent people from enjoying it once it exists—the rational individual calculus tilts sharply toward free-riding. Why contribute your patient data, your institutional resources, your scientific attention, when you can wait for others to build the commons and then extract value from it without cost? In a world of sophisticated actors, many of whom are also competitive actors, this is not a hypothetical failure mode. It is the default.


The history of biomedical collaboration is littered with ambitious consortia that dissolved into recrimination, data-sharing agreements that were signed and then quietly ignored, and research networks that generated enormous transaction costs while producing surprisingly modest science. The reasons are almost always some variant of the same story: heterogeneous incentives, asymmetric contributions, and the structural logic of free-riding reasserting itself against even the best-designed governance frameworks.

When It Works: The Selective Incentive in Disguise

But communities do sometimes work. Olson knew this—his framework was not a counsel of despair but a diagnostic tool. The question was always: what breaks the free-rider equilibrium?


A strong example from the biomedical world is the Structural Genomics Consortium (SGC), a public-private partnership that has, for two decades, been doing something the pharmaceutical industry's incentive structure should make impossible: pre-competitively generating and openly publishing structural data on drug targets. The SGC brings together pharmaceutical companies that are, in any other context, fierce competitors. It asks them to fund and share research that each, acting alone, would prefer to keep proprietary. And it works.


The mechanism is Olsonian at its core. The selective incentive is not money but access. Members get early visibility into results, privileged relationships with the academic investigators, and the ability to shape the research agenda toward targets that matter to their portfolios. The shared good (open protein structure data) is real, and the public benefits enormously from it. But the private good that keeps members at the table is real too. Olson would not be surprised. He would recognize the architecture immediately.


Another good example comes from the rare disease community. Groups like the Cystic Fibrosis Foundation (CFF) pioneered a model of patient-driven research funding and venture philanthropy that has since become a template for dozens of disease communities. CFF did not simply raise money and distribute grants. It took equity positions in the drugs it helped fund. It imposed milestones. It controlled the research agenda with a discipline that no academic funder would have been permitted (or dared) to exercise. The result was ivacaftor, a transformative therapy for a subset of CF patients, brought to market years faster than the conventional academic-pharmaceutical pipeline likely would have managed.


What made this possible? The cystic fibrosis community had several unusual properties that allowed it to escape the collective action trap. The patient population was small enough to be organized. The disease was well-understood at the molecular level, lowering scientific risk. CFF had over time built the kind of institutional trust that allowed it to act as a credible central coordinator, what Olson might have called a political entrepreneur capable of bearing disproportionate organizing costs in exchange for disproportionate credit. And crucially, the members of the community—patients, parents, physicians, and researchers—had interests that were genuinely aligned in ways that communities built around economic interest rarely are. When your child or your patient has the disease, the selective incentive is not a discount on membership dues. It is survival.

When It Fails: The Coordination Problem in Real Time

The pandemic offered a global seminar in the failure modes of health community governance, conducted at a pace that left no time for gentler lessons.


The development of COVID-19 vaccines was, in its early stages, a genuine triumph of organized collective action: unprecedented data sharing, accelerated regulatory coordination, and the COVAX facility's attempt to create a pooled procurement mechanism for global vaccine access. Scientists published preprints within days of obtaining results. Genomic sequences were shared internationally in hours. The structural biology of the spike protein was collaborative work that crossed institutional and national lines with a fluency that would have been inconceivable a generation earlier.


And then the community fractured along exactly the lines Olson would have predicted. Wealthy nations, having funded vaccine development directly or through advance purchase agreements, claimed the selective benefits that funding had bought. They secured priority access. They invoked national interest. The logic of the club asserted itself over the logic of the commons. COVAX, designed as a mechanism to prevent precisely this outcome, was outbid and outmaneuvered by bilateral agreements that offered pharmaceutical companies better terms and faster timelines. By the time the Omicron variant emerged in southern Africa, a region whose population remained largely unvaccinated (in significant part because wealthy nations had hoarded doses), the collective action failure had become tangible as an epidemiological event. The community had not held. The free-rider problem had, in a terrible inversion, manifested as a first-mover-advantage problem: those who could grab, grabbed.


The lesson was not that international health governance is impossible. It was that governance structures that rely on voluntary compliance and shared norms, without meaningful selective incentives or enforcement mechanisms, are fragile precisely when they are most needed. Crisis concentrates interest. Concentrated interest produces defection.

The Ostrom Correction

Olson's framework was powerful but incomplete, and the correction came from a political scientist who would eventually win the Nobel Prize in economics for it. Elinor Ostrom spent decades studying communities that had successfully managed common-pool resources (fisheries, forests, irrigation systems, groundwater basins) and found that they followed neither the logic of privatization nor the logic of state management that economists had assumed were the only alternatives to collective failure.


Instead, they had developed their own governance institutions: rules created by the users themselves, monitoring systems that relied on peer observation rather than external enforcement, graduated sanctions that allowed communities to discipline defectors without destroying relationships, and the legitimacy that came crucially from having crafted the rules collectively rather than having them imposed from above.


Ostrom's “design principles” for successful commons governance read like a checklist for what most biomedical research consortia get wrong. Boundaries are often unclear—who is in the community? Whose interests count? Rules are frequently imposed by funders or governments rather than developed by participants. Conflict resolution mechanisms are inadequate or absent. Monitoring is weak. And the cost of organizing at all is borne disproportionately by whoever has the deepest pockets and the strongest interest in collective action—which, in biomedicine, is usually the pharmaceutical industry, whose interests are real but not always identical to those of patients or the public.


The communities that work tend to have internalized something like Ostrom's principles without necessarily having read them. They have defined membership clearly enough that obligations can be enforced. They have developed norms of reciprocity strong enough to survive individual defection. They have found ways to make contribution visible and legible — to allow reputation to function as a selective incentive in communities too large for direct monitoring. And they have, almost always, had the luck to begin with a founding moment of genuine shared threat or genuine shared opportunity that allowed norms of solidarity to take root before competitive pressures reasserted themselves.

The Quiet Infrastructure of Health

There is a version of this argument that ends in pessimism: collective action problems are structural, the logic of defection is powerful, and the communities that succeed do so through a combination of unusual circumstance and heroic institutional design that cannot be generalized. This is not quite wrong, but it is not quite right either.


What it misses is the extraordinary amount of successful community governance in health that happens below the level of visibility—not in the high-profile consortia and global health initiatives, but in the quiet infrastructure of professional norms, clinical protocols, reporting systems, and shared data standards that make modern medicine possible at all.


Physicians share case reports. Hospitals report adverse events. Epidemiologists maintain disease registries that serve the entire research community. Pharmacists flag drug interactions. Emergency departments coordinate during mass casualty events with a fluency that their competitive economic relationship would never predict. None of this is altruism. It is the product of decades of institutional design—licensing requirements, accreditation standards, professional culture, legal obligation—that have succeeded in aligning individual and collective interest well enough to produce something that functions like a commons.


The selective incentives are real, even when they are subtle. The physician who shares a case report gains professional recognition. The hospital that participates in a quality improvement network gains benchmarking data it could not otherwise obtain. The researcher who contributes to a shared dataset gains access to analytic tools and collaborative opportunities that solo investigation would foreclose. Olson is always lurking in the architecture. The communities that endure are the ones that have found ways to make his logic work for them rather than against them.

What We Owe Each Other, and How We Collect It

The deeper question behind all of this is not really organizational but moral. Collective action problems exist because individuals face a structural incentive to defect from arrangements that benefit the community. What the design of successful communities reveals is that this incentive is not insuperable, but overcoming it requires more than good intentions. It requires architecture.


The most durable health communities that have produced the most benefit over the longest periods tend to share a characteristic that is easy to overlook: they have been explicit about what they are asking of their members, and they have built the governance machinery to make asking meaningful. They have defined the commons clearly enough to defend it. They have made contribution legible enough to reward it. They have found ways to make free-riding costly enough to discourage it, not through punishment alone but through the construction of a culture in which contribution is the norm and defection is the deviation.


This is not a natural equilibrium. It is an achievement. It requires what Elinor Ostrom called “institutional development” and what the rest of us might call political will—the willingness to invest in the unglamorous work of rule-making, norm-setting, and conflict resolution that makes collective life possible.


Mancur Olson gave us a map of the trap. Elinor Ostrom showed us that people had been escaping it for centuries, in places economists hadn't thought to look. The communities of health (patient advocates, researchers, clinicians, public health officers) are building and rebuilding those escapes every day, in contexts where the stakes are not abstract. The question is whether the institutions we build are adequate to the logic that undermines them.


Usually, the answer is: not quite, but closer than before. That is less than a triumphant conclusion. But for anyone who has spent time watching communities try to hold themselves together against the gravity of self-interest, it is, perhaps, enough.
