Guest article by Mike Leahy

In an era when personal facts fuel markets, governance, and machine learning, modern law faces a categorical error: it treats information as property. This paradigm, inherited from industrial capitalism, collapses under the weight of the digital state and platform economy, where data are nonrivalrous, replicable, and relational. The consequence is structural – citizens become subjects of informational control rather than holders of epistemic agency.

This Article proposes a corrective: the fiduciary commons, a legal and institutional architecture that recasts the relationship between individuals, institutions, and information as one of entrustment rather than ownership. Drawing from property theory, information law, fiduciary doctrine, and constitutional liberty jurisprudence, it argues that while databases may be proprietary, the underlying facts they contain exist within an unownable commons — a domain of truth that must remain free from enclosure.

The fiduciary commons reorients privacy, governance, and AI regulation around duties of loyalty, care, transparency, and reciprocity, with a critical emphasis on minimizing data collection and limiting its use to a singular, specific purpose. It situates these duties within the American constitutional ideal of ordered liberty, where freedom is preserved through law’s imposition of moral responsibility. The Article concludes by offering the Personal Data Trust Act (PDTA) draft as a statutory model and highlights the need to integrate the Fair Information Practices Principles (FIPPs) into law to fulfill their unmet promise to address data concerns. In doing so, it restores coherence to the law of information and reclaims the epistemic integrity upon which republican self-government depends.

This article is a brief summary of my current thoughts.

Introduction

The modern state and the digital marketplace are built on facts – on the recorded, stored, and recombined fragments of human experience. Yet the legal order governing these fragments remains conceptually incoherent. Legislators and courts alternately describe data as property to be owned, as privacy to be protected, or as intelligence to be exploited. Each frame captures a partial truth; none captures the whole. The result is a jurisprudence that privatizes information while eroding the liberty that makes information meaningful.

This Article begins from a simple but radical premise: facts are not property. They are the epistemic substrate of a shared world — the raw material of knowledge and accountability.

1 Michael Leahy, author of “The Fiduciary Commons: Restoring Ordered Liberty in the Age of Data,” is an executive, regulator, and attorney based in Severna Park, Maryland. With expertise in identity, privacy, and data management, he served as Maryland’s Chief Information Officer for six years, launching the One Stop portal and advancing transparency through Technology Business Management. A former NASCIO President, Michael now leads Government Sourcing Solutions and engages with Georgetown Law, shaping digital governance policies. His article reflects a career dedicated to privacy-preserving technologies and fiduciary data stewardship.

The Fiduciary Commons: Restoring Ordered Liberty in the Age of Data

To enclose them within proprietary or surveillance regimes is to dissolve the public’s capacity for truth, and thus the conditions of self-government. In this light, the struggle over data ownership is not merely economic; it is constitutional. The problem is not new. From the res communes of Roman law to the public-trust doctrine of the common law, Western jurisprudence has long recognized that some resources – air, water, and the instruments of truth – must remain beyond exclusive dominion. The digital age simply extends that principle to a new domain. Information, like water, flows; and like water, it requires stewardship, not possession.

The fiduciary commons framework translates that insight into a modern legal grammar. It fuses three traditions:

  1. Property law’s discipline of stewardship, which binds control to obligation;
  2. Administrative law’s public-trust orientation, which limits state power through duty; and
  3. Constitutional liberty theory, which conceives freedom as responsibility under law.

Within this synthesis, data controllers become fiduciaries—entrusted stewards of a commons of facts, accountable to those whose lives their systems represent. Fiduciary duties of care, loyalty, transparency, and reciprocity replace claims of ownership or consent as the organizing principle of informational justice, with a paramount focus on minimizing data collection and restricting its purpose to what is strictly necessary.

The consequences of this reconceptualization are profound. It redefines privacy as relational integrity, not isolation. It reframes AI governance as a fiduciary question: under what duties may machines act on human facts? It transforms “digital sovereignty” from a contest of control into a covenant of trust. And it grounds the technical disciplines of decentralized identity and zero-trust architecture in a moral and constitutional foundation.

To make these principles concrete, the Article offers the Personal Data Trust Act (PDTA) draft as a statutory model, integrating fiduciary law, decentralized identity frameworks, and transparent accountability mechanisms. It also underscores the critical need to incorporate the Fair Information Practices Principles (FIPPs)—originally articulated in the 1973 HEW Report—into statutory law to address persistent data privacy concerns that remain unfulfilled due to their voluntary adoption. Each stage translates philosophical principle into enforceable duty.

At stake is nothing less than the epistemic constitution of liberal democracy. A society that allows facts to be owned cannot remain free; a society that entrusts them can. The fiduciary commons restores the ancient equilibrium between liberty and law, reminding us that freedom in the digital age, as in every age, endures not through possession but through stewardship.

Part I: The Genealogy of “Owning Data”

From Facts to Property: The Pre-Digital Foundations

The history of “data ownership” begins not with data itself but with the emergence of intellectual property law as a compromise between natural right and utilitarian justification. The Statute of Anne of 1710, the first modern copyright statute, granted authors limited exclusive rights not because they “owned” ideas, but to encourage “learned men to compose and write useful books.” Property in ideas was thus never absolute; it was a temporary monopoly granted for the public good. Facts themselves were expressly excluded from this regime. As the U.S. Supreme Court affirmed in Feist Publications, Inc. v. Rural Telephone Service Co. (499 U.S. 340, 1991), “facts do not owe their origin to an act of authorship. The first person to find and report a particular fact has not created it.”

Yet long before Feist, the economic value of compiled facts was recognized.

The nineteenth century saw the rise of news agencies, almanacs, and directories, whose creators claimed moral and commercial entitlement to their labors. The 1918 decision in International News Service v. Associated Press (248 U.S. 215) briefly flirted with a “quasi-property” right in news, recognizing a limited right against competitors appropriating time-sensitive information. This right, however, was framed not as ownership of facts but as protection against unfair competition.

Thus, the legal order drew a bright line: facts belong to all; compilations may belong to their makers.

The Rise of Databases and the Linguistic Drift Toward Ownership

The mid-twentieth century’s information revolution began to blur this line. The term “data” – derived from the Latin datum, “that which is given” – came to signify machine-readable facts detached from their empirical origins. By the 1960s and 1970s, corporations and governments were investing heavily in information systems whose value derived not from expression, but from volume, precision, and retrievability.

As databases grew central to commerce and governance, so did the perception that they were property-like assets. Yet existing copyright doctrine offered limited protection. In the United States, the “sweat of the brow” doctrine, which rewarded labor over creativity, was decisively rejected in Feist. The Court reaffirmed that originality, not effort, is the touchstone of copyright. By contrast, the European Union’s Database Directive of 1996 codified a sui generis right protecting the substantial investment involved in obtaining, verifying, or presenting data.

This European innovation – a right in the investment rather than in creativity – fueled a conceptual shift. The Directive’s Recital 7 spoke of “protecting the maker of a database,” implicitly validating the notion that one could “own” a database through labor alone. Business and policy discourse soon conflated ownership of the database with ownership of the data within it.

The Corporate Codification of Data as Asset

By the late 1990s, digital platforms had turned data into a new form of capital. Corporate accounting standards began to treat user data as a monetizable intangible, even though it lacked legal title. Marketing literature spoke of “data assets,” “data monetization,” and “data ownership,” embedding proprietary metaphors into managerial culture. Cloud service providers and software vendors reinforced the notion by contract. End-user license agreements (EULAs) routinely stipulated that “all data collected through this service are the property of [Provider],” though the enforceable rights were purely contractual, not proprietary in rem.

At the same time, governments began referring to “state-owned data” or “public data assets,” further normalizing the vernacular of ownership. In both public and private sectors, to own data came to mean to exercise exclusive control over its storage, access, or exploitation.

The metaphor hardened into dogma as data became the substrate of artificial intelligence, algorithmic governance, and surveillance capitalism. Yet, from a jurisprudential perspective, this claim to ownership remains ontologically incoherent. To “own data” is to claim dominion over facts, which are by definition universal and indivisible. Ownership, as a legal concept, presupposes rivalrousness and excludability – neither of which facts possess absent artificial enclosure. Furthermore, the unchecked collection of vast datasets exacerbates this incoherence, underscoring the need for data minimization and a strict limitation to a singular, specific purpose to prevent overreach and protect individual autonomy.

Technological Enclosure and the Private Fencing of Facts

Technological developments provided the tools of enclosure. APIs, encryption, and access control mechanisms allowed firms to simulate exclusivity around facts that, in principle, belong to all. In doing so, they transformed possession into power — a digital echo of the enclosure movement that privatized the English commons. Contract and code, not statute or common law, became the instruments of exclusion. Where copyright could not reach, Terms of Service and technological protection measures could. In this environment, “data ownership” became an expression of control capacity, not legal entitlement.

The popular analogy of data as “oil” — a finite, extractive resource — breaks down here. Oil is rivalrous and depletable, its value derived from scarcity and exclusive extraction, which aligns poorly with the fiduciary model’s emphasis on stewardship and shared benefit. Instead, Peter Aiken’s concept of data as the new “soil” offers a more fitting analogy. Soil is a living, renewable medium — teeming with microbial ecosystems, enriched by organic matter, and capable of sustaining diverse life forms when nurtured with care. Like soil, data serves as a foundational resource that supports growth, innovation, and resilience when tended responsibly. Just as fertile soil requires balanced nutrients, proper aeration, and protection from erosion to yield abundant harvests, data thrives under stewardship that ensures transparency, minimizes collection, and limits its use to a singular, specific purpose. Overexploitation or neglect of soil leads to desertification; similarly, unchecked data harvesting risks epistemic degradation, eroding the trust and collective knowledge that sustain society. This analogy underscores the need for a fiduciary approach, cultivating data as a shared resource under care, reciprocity, and purpose-driven management, rather than treating it as a commodity to be mined.

The Normative Consequence: Reclaiming the Epistemic Commons

Recognizing that raw data — facts about the world and its inhabitants — cannot be owned is not a denial of labor or value creation. It is an affirmation that property should attach only to human creativity and stewardship, not to the reality being observed. The prevailing discourse of “data ownership” obscures this distinction, encouraging enclosure of knowledge that ought to remain in the public domain.

If facts are akin to navigable waters, then the appropriate legal model is not fee simple ownership but the public trust doctrine: government and corporate actors alike serve as fiduciary stewards, obligated to manage data in the public interest. In this light, “owning data” becomes a category mistake — akin to claiming ownership over gravity or sunlight.

Moreover, excessive data collection undermines this trust, necessitating a rigorous approach to minimization and purpose limitation to safeguard the epistemic commons.

The Fair Information Practices Principles (FIPPs), introduced in the 1973 HEW Report, offer a foundational framework with principles such as transparency, individual participation, and accountability. However, their voluntary adoption has left their promise unfulfilled, necessitating their codification into law to enforce data stewardship and protect epistemic integrity against unchecked enclosure.

Part I Conclusion

The phrase “owning data” is a linguistic and legal anachronism born of technological affordance and economic ambition. It misdescribes a relationship of control as one of title and thereby legitimizes private enclosure of public knowledge. To restore coherence to information law, we must return to first principles: facts are commons; labor may create property only through originality or stewardship.

Part II: The Legal Ontology of Data

The Fact – Expression Dichotomy and Its Discontents

At the foundation of American copyright law lies the distinction between fact and expression. This dichotomy, articulated most clearly in Feist Publications, Inc. v. Rural Telephone Service Co. (499 U.S. 340, 1991), holds that facts “do not owe their origin to an act of authorship.” The Court reaffirmed a deep epistemic intuition: facts are discovered, not created; they belong to no one. The “law of authorship” protects only original expression—human creativity, not reality itself.

This division traces back to the constitutional purpose of copyright: “[t]o promote the Progress of Science and useful Arts” (U.S. Const. art. I, § 8, cl. 8). Protection is instrumental, not proprietary — it rewards creative labor as a means of enriching the public domain. Facts, being the raw material of knowledge, remain perpetually available for recombination and reinterpretation.

Yet as data became the principal substrate of economic and governmental power, the Feist distinction grew increasingly strained. Modern databases often lack the kind of “creative selection or arrangement” contemplated by traditional copyright doctrine. They are products of investment and engineering rather than artistry. As a result, the law of data straddles two incommensurable categories: the factual (unownable) and the expressive (protectable).

Divergent Legal Architectures: The U.S. and the European Union

The U.S. regime remains grounded in Feist’s rejection of “sweat of the brow.” A compilation of data receives protection only to the extent that its arrangement reflects creative judgment. Merely collecting facts, regardless of cost or effort, conveys no proprietary right. 

The incentive to compile arises from contract, secrecy, or first-mover advantage, not legal title. By contrast, the European Union took a radically different path with the Database Directive of 1996 (Directive 96/9/EC). The Directive established a sui generis right for the “maker of a database” who has made a “substantial investment” in obtaining, verifying, or presenting data. This right—lasting fifteen years—protects against extraction or reutilization of a substantial part of the database. In effect, Europe resurrected a form of labor-based entitlement reminiscent of Locke’s property theory: mixing one’s labor with unowned material creates a limited proprietary claim. But unlike the Lockean ideal, the investment here need not yield creative expression; economic expenditure suffices. The result is a right in the collection, not in the creation — a quiet revolution in the ontology of information law.

While the Directive explicitly disclaims ownership of the underlying data, its practical effect is to blur the boundary between collection and content. In business and governmental discourse, “database rights” became “data rights,” and the distinction faded from view.

The Contractualization of Data Control

As Feist and the absence of a U.S. database right left factual compilations unprotected, firms turned to contract law. The ubiquitous End-User License Agreement (EULA), API access contract, and data-sharing memorandum became the new legal fences of the digital age.

These agreements achieve through private ordering what public law denies: exclusionary control over facts. When a platform declares that “user data is the property of [the company],” it is asserting a contractual fiction enforceable only against consenting parties, not the world at large. Nonetheless, this language — replicated across millions of transactions — creates a cultural presumption of ownership.

Moreover, the technological enforcement of these agreements through code-based restrictions (encryption, authentication, access logs) turns metaphoric ownership into practical exclusivity. Code, in Lawrence Lessig’s famous formulation, “is law.” The combination of contract and code supplants traditional notions of property, creating a hybrid regime of control without title—what we might call de facto ownership.

The Rise of Data Sovereignty and Data Property Narratives

In the twenty-first century, as data emerged as the lifeblood of artificial intelligence and predictive analytics, governments began speaking of “data sovereignty” and “data property.” Nations such as China, India, and members of the EU adopted policies treating national data flows as strategic assets, subject to localization and export control. Simultaneously, private actors advanced the rhetoric of “personal data ownership,” promising individuals the ability to “own their data.” While rhetorically empowering, this language often masks a more subtle dynamic: individuals are granted control only within architectures designed and governed by intermediaries. Personal data “ownership” in most implementations amounts to the right to license or revoke consent—analogous to tenancy rather than title.

The deeper problem is conceptual. Ownership presupposes exclusion, but informational goods are nonrivalrous. The same data can belong simultaneously to multiple entities (e.g., a person, a platform, a government) without depletion. The metaphor of property thus collapses under the weight of digital reality.

Control, Custody, and the Emergent Fiduciary Model

If “ownership” is inapt, what legal status should data bear? A growing body of scholarship proposes fiduciary rather than proprietary analogies. Under this model, those who collect or process data—whether governments or corporations—act as information fiduciaries, owing duties of care, loyalty, and transparency to the subjects of the data.

This approach finds precedent in doctrines such as the public trust in navigable waters: certain resources are so essential to collective life that they cannot be alienated into private hands. The sovereign, as trustee, manages them for the benefit of all. Translating this principle into the digital realm suggests that facts – and by extension, raw data – should be treated as commons held in trust, with rights of stewardship rather than dominion.

Here, decentralized identity protocols such as KERI (Key Event Receipt Infrastructure) and Self-Sovereign Identity (SSI) technologies operationalize fiduciary governance. By enabling individuals to control attestations about themselves without transferring underlying facts to centralized databases, these frameworks reassert the commons nature of facts while embedding trust through cryptographic verification rather than proprietary enclosure.

Moreover, existing technology allows for extreme data minimization, enabling queries to determine whether data meets specific criteria—such as “born before a date certain”—without ever collecting or storing the data itself, thereby enhancing privacy and reducing the risk of misuse.
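
A minimal sketch of such a predicate query, assuming the subject holds the fact in a local wallet: the verifier receives only a signed yes/no answer, never the underlying date of birth. The HMAC stands in for a real digital signature scheme (e.g., Ed25519), and all names here are illustrative.

```python
import hashlib
import hmac
from datetime import date

WALLET_SIGNING_KEY = b"demo-key"  # assumption: the wallet holds a signing key

def answer_predicate(dob: date, cutoff: date) -> dict:
    """Evaluate the predicate inside the wallet; the DOB never leaves it."""
    result = dob < cutoff
    payload = f"born_before:{cutoff.isoformat()}:{result}".encode()
    tag = hmac.new(WALLET_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"predicate": f"born_before:{cutoff.isoformat()}",
            "result": result, "signature": tag}

def verify_answer(answer: dict) -> bool:
    """The verifier checks the signature; it learns only True or False."""
    payload = f"{answer['predicate']}:{answer['result']}".encode()
    expected = hmac.new(WALLET_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, answer["signature"])

# The verifier never collects or stores the date of birth itself.
attestation = answer_predicate(date(1990, 5, 1), date(2006, 1, 1))
assert verify_answer(attestation)
assert attestation["result"] is True
```

The design choice mirrors the text: the query is purpose-specific ("born before a date certain"), and no repository of birth dates ever comes into existence on the verifier's side.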

The Fair Information Practices Principles (FIPPs), with their emphasis on transparency, individual access, and accountability, align closely with fiduciary duties. However, their voluntary application has failed to address systemic data abuses, underscoring the need for legal enforcement to realize their potential in safeguarding the epistemic commons.

The Ontological Resolution

The legal ontology of data thus resolves into three layers:

  1. Facts: Epistemically neutral, unownable, and belonging to the commons.
  2. Data Structures: The human (or algorithmic) labor of arrangement and curation, protectable under limited regimes of intellectual property.
  3. Control Mechanisms: Contractual or technical architectures that simulate ownership without legal title.

Recognizing these layers restores coherence to data governance. Property attaches only to creative or stewardship activity, not to the underlying facts. This reframing exposes the language of “data ownership” as both a category error and an ideological artifact of digital capitalism. The proper inquiry is not who owns data, but who holds it in trust, for whom, and under what duties, with a clear mandate to minimize collection and limit it to a singular, specific purpose.

Part II: Conclusion

The ontology of data reveals a paradox: the more valuable data becomes, the less suitable traditional property concepts are to describe it. Law must evolve not by extending ownership but by constraining it—replacing dominion with duty and ensuring data collection is minimized and purpose-specific.

Part III: Data as Commons – A Jurisprudence of the Ether

From Tangible Commons to Informational Commons

Throughout history, societies have recognized certain resources as too essential, too universal, to be privately owned. The Roman concept of res communes omnium—things common to all—encompassed air, the sea, and the shores. These were goods whose use by one did not diminish their use by another and whose exclusion would offend the natural order. In Anglo-American law, this principle evolved into the public trust doctrine, which obliges the state to preserve such resources for public use and prohibits their alienation into private hands.

In the digital era, data occupies an analogous position. Like water, it flows; like air, it permeates; like light, it enables perception. Each individual’s data contributes infinitesimally to collective knowledge, yet no single entity can lay exclusive claim to the facts themselves. The epistemic substrate of society—its shared informational environment—thus merits classification as a commons: a resource domain whose governance must balance access, sustainability, and equity.

The Ontological Parallels Between Data and Natural Commons

The analogy between data and navigable waters is not rhetorical flourish but structural correspondence: both are flowing, nonrivalrous, and dynamic.

Just as enclosure of rivers and ports would strangle economic life, so too does enclosure of factual data threaten epistemic liberty and democratic self-governance. Both require structured openness—mechanisms ensuring that use is free yet responsible, with data collection minimized to prevent overaccumulation.

Elinor Ostrom and the Architecture of Common Pool Governance

In Governing the Commons (1990), Elinor Ostrom demonstrated that communities can sustainably manage shared resources without resorting to either total privatization or centralized control. Her principles—clearly defined boundaries, collective-choice arrangements, monitoring, graduated sanctions, conflict-resolution mechanisms, and recognition of community rights—translate elegantly to data governance.

Applied to data:

  • Boundaries define what constitutes public, shared, or personal data.
  • Collective-choice arrangements allow stakeholders to set norms for data access and reuse.
  • Monitoring ensures compliance through transparent auditing mechanisms.
  • Sanctions discourage misuse without criminalizing experimentation.
  • Recognition grants legitimacy to community-driven data governance structures.

These principles illuminate a third path between the corporate enclosure of data and the libertarian dream of total openness: a governed commons where access is managed for the collective good, with data collection strictly limited to a singular, specific purpose.

The Public Trust Doctrine in the Digital Age

American jurisprudence has long treated certain resources as incapable of private ownership because they serve a public function. In Illinois Central Railroad Co. v. Illinois (146 U.S. 387, 1892), the Supreme Court held that the state could not alienate the bed of Lake Michigan, as doing so would violate its duty to preserve the lake for navigation and commerce. This doctrine rests on two principles:

  1. The sovereign holds specific resources in trust for the public.
  2. The state has an affirmative duty to prevent their impairment or alienation.

Extending this logic to the informational realm suggests that governments, too, hold data about the public in trust for the public. The “public data asset” language now common in state and federal policy would acquire substantive meaning if recast in fiduciary terms: the state as trustee, citizens as beneficiaries, and facts as the corpus of the trust. Under such a model, data governance obligations would mirror those of natural resource trustees — transparency, accountability, and nondiscriminatory access, with a strong emphasis on minimizing data collection to protect individual rights.

This analogy aligns with the fiduciary turn in privacy scholarship. As Jack Balkin has argued, platforms that collect personal data function as information fiduciaries—bound by duties of care and loyalty to users. Extending this concept to governments would reinvigorate the social contract of the digital state: citizens would no longer be data subjects but beneficiaries of a constitutional trust in information.

The Fair Information Practices Principles (FIPPs), with their focus on notice, choice, and enforcement, provide a historical benchmark for trust-based governance. Their integration into statutory law is essential to address the persistent gaps in privacy protection, ensuring that fiduciary duties are legally binding rather than aspirational.

Technological Instruments for a Commons-Based Framework

Technological architectures can operationalize fiduciary principles. Decentralized identity protocols, such as KERI (Key Event Receipt Infrastructure), and the Self-Sovereign Identity (SSI) philosophy embody this shift. By allowing individuals to issue and verify credentials without disclosing underlying data, these systems preserve factual truth while maintaining informational boundaries. Moreover, cutting-edge technology now exists to take data minimization to an extreme, enabling queries to determine whether data meets specific criteria—such as “born before a date certain”—without ever collecting or storing the data itself. This approach eliminates the need for centralized data repositories, enhancing privacy and aligning with the fiduciary commons’ ethos of stewardship over possession. Similarly, zero-knowledge proofs and distributed ledgers could allow verification without revelation, maintaining the commons nature of facts while supporting accountability. These tools instantiate a jurisprudence of stewardship: each participant contributes to collective veracity without claiming ownership over the data itself.
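
The selective-disclosure idea behind such credentials can be illustrated with salted hash commitments — a deliberately simplified version of the mechanism used in SD-JWT-style verifiable credentials (in a real system the issuer would also sign the commitment list; the attribute names here are illustrative):

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Return a random salt and a commitment to the value."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}|{value}".encode()).hexdigest()
    return salt, digest

# Issuer commits to every attribute; only the digests circulate.
attributes = {"name": "Alice", "birth_year": "1990", "city": "Annapolis"}
salts, credential = {}, {}
for key, value in attributes.items():
    salts[key], credential[key] = commit(value)

def verify_disclosure(key: str, value: str, salt: str) -> bool:
    """Holder reveals one attribute plus its salt; verifier recomputes the
    digest and learns nothing about the undisclosed attributes."""
    digest = hashlib.sha256(f"{salt}|{value}".encode()).hexdigest()
    return digest == credential[key]

assert verify_disclosure("birth_year", "1990", salts["birth_year"])
assert not verify_disclosure("birth_year", "1989", salts["birth_year"])
```

The salt prevents a verifier from brute-forcing low-entropy attributes (such as birth years) directly from the digests, which is what makes the undisclosed commitments uninformative.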

The Moral and Democratic Stakes

The normative force of recognizing data as a commons extends beyond efficiency. It addresses the moral hazard of informational dominion. When a handful of actors control the world’s factual substrate—its digital “weather”—they acquire disproportionate epistemic and political power. A society that permits ownership of facts risks repeating the enclosures of the past, this time in the realm of knowledge rather than land.

Reconceiving data as a commons restores epistemic equality. Every person becomes both contributor and beneficiary of collective understanding. It reaffirms the democratic ideal that truth, like light, should be accessible to all, subject only to reasonable limits of privacy and security, with data collection minimized and purpose-limited to prevent abuse.

Toward a Jurisprudence of the Ether

The jurisprudence of the ether rests on three propositions:

  1. Ontological: Facts are unownable elements of reality, analogous to air and water.
  2. Institutional: Governments and corporations act as fiduciary stewards of the data they collect or process.
  3. Technological: Distributed and decentralized architectures (KERI, SSI, zero-knowledge proofs) can encode fiduciary obligations into the infrastructure itself.

This framework yields a coherent legal philosophy for the digital age: one that honors both individual autonomy and collective welfare. It replaces the logic of extraction with that of care, and the rhetoric of “ownership” with that of trust.

Part III Conclusion

Data, like water, is both ubiquitous and indispensable. It can sustain or suffocate depending on how it is governed. Recognizing raw data as a commons reorients law toward stewardship, transparency, and reciprocity—the virtues of the fiduciary rather than the proprietor, with data collection minimized and strictly purpose-bound.

Part IV: Institutional Design for the Fiduciary Commons

The Architecture of Trust: From Law to Infrastructure

Institutionalizing the fiduciary data commons requires reorienting both governance and technology around trust by design. Traditional property regimes rely on exclusion and enforcement; fiduciary regimes depend instead on trustworthiness, verifiability, and reciprocal transparency.

The fiduciary commons model envisions a layered ecosystem:

  • Law defines duties of loyalty, care, and disclosure;
  • Technology ensures those duties are verifiable;
  • Governance provides oversight, redress, and participatory legitimacy.

In this triadic structure, each layer constrains and validates the others, forming what might be described as a constitutional separation of powers for information.

Data Trusts and Stewardship Institutions

The data trust offers the most concrete legal vehicle for implementing fiduciary governance. Derived from Anglo-American trust law, it divides title and control: the trustee manages data on behalf of beneficiaries, who retain equitable interest.

Key design principles include:

  1. Fiduciary Duties Codified in Statute – Trustees owe beneficiaries duties of loyalty, prudence, and impartiality.
  2. Purpose Limitation – Data can only be processed consistent with declared trust purposes; deviation constitutes breach.
  3. Beneficiary Standing – Individuals or collectives whose data form part of the trust corpus gain enforceable rights to oversight and redress.
  4. Transparency and Audit – Trustees must maintain public registries of data flows, access requests, and value derived.
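The second and fourth principles can be made concrete in software. The sketch below is a minimal, hypothetical illustration in Python (the class and purpose names are invented for this example, not drawn from any existing statute or library): processing outside the declared purposes raises a breach, and every access attempt, granted or denied, lands in an audit log that could back a public registry.

```python
class BreachOfTrust(Exception):
    """Processing outside the declared trust purposes constitutes breach."""


class DataTrust:
    """Toy data trust enforcing purpose limitation and keeping an audit trail."""

    def __init__(self, trustee: str, declared_purposes: set):
        self.trustee = trustee
        self.declared_purposes = declared_purposes
        self.audit_log = []  # would feed a public transparency registry

    def process(self, purpose: str, data_ref: str):
        # Purpose Limitation: any deviation from declared purposes is a breach.
        if purpose not in self.declared_purposes:
            self.audit_log.append(("DENIED", purpose, data_ref))
            raise BreachOfTrust(f"{purpose!r} is not among the declared trust purposes")
        self.audit_log.append(("PROCESSED", purpose, data_ref))
```

The point of the sketch is the asymmetry: the trustee's discretion is bounded by the trust instrument, and the record of that discretion is itself part of the trust res.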

Several pilot initiatives, such as the UK’s Open Data Institute data trust prototypes and the EU’s Data Governance Act (2022), illustrate early moves toward such fiduciary structures.

Decentralized Identity and Verifiable Data Provenance

Technological implementation begins with decentralized identifiers (DIDs) and verifiable credentials (VCs) — frameworks enabling cryptographically assured provenance and selective disclosure. Within the fiduciary commons, DID systems serve dual roles:

  • They empower individuals to assert agency over personal data (controlling consent, portability, and linkage);
  • They provide verifiable chains of custody for shared factual data.

This architecture resonates with the KERI model, in which identity arises from self-certifying identifiers anchored in distributed key events.
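The "verifiable chain of custody" above can be sketched as a hash-linked provenance log: each entry commits to the hash of its predecessor, so any after-the-fact alteration is detectable. This is a minimal Python illustration of the underlying technique, not an implementation of KERI or any DID standard; the record fields are hypothetical.

```python
import hashlib
import json


def _digest(record: dict) -> str:
    # Canonical JSON (sorted keys) keeps the hash stable across key orderings.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceLog:
    """Append-only custody log: each entry commits to its predecessor's hash."""

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, data_ref: str):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action, "data_ref": data_ref, "prev": prev}
        self.entries.append({**body, "hash": _digest(body)})

    def verify(self) -> bool:
        """Recompute every hash and check the chain of predecessors."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "data_ref", "prev")}
            if e["prev"] != prev or e["hash"] != _digest(body):
                return False
            prev = e["hash"]
        return True
```

In a real deployment each entry would additionally be signed by the acting party's key, which is where DID-anchored key material would enter.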

Zero Trust as Civic Infrastructure

While “Zero Trust” originated as a cybersecurity paradigm — “never trust, always verify” — it carries profound civic implications. In the fiduciary commons, Zero Trust becomes a constitutional safeguard: no actor, even a fiduciary, is presumed trustworthy without continuous verification. Applied to data governance, Zero Trust ensures:

  • Continuous authentication of access requests;
  • Cryptographic verification of data provenance and modification;
  • Minimization of privilege and segregation of duties.
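The Zero Trust posture described above can be reduced to a simple rule: no request inherits trust from a prior grant; every request is verified in full against current policy. The sketch below is a hypothetical illustration in Python (the subjects, purposes, and policy table are invented), combining per-request credential verification with least privilege and purpose limitation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRequest:
    subject: str            # who is asking
    purpose: str            # declared purpose of the access
    credential_valid: bool  # outcome of cryptographic verification, done per request


# Hypothetical policy: the only purposes each fiduciary is entrusted with.
ENTRUSTED_PURPOSES = {
    "health-trustee": {"treatment"},
    "research-fiduciary": {"aggregate-statistics"},
}


def authorize(req: AccessRequest) -> bool:
    """Zero Trust: every request is checked in full; nothing is presumed or cached."""
    if not req.credential_valid:  # "never trust, always verify"
        return False
    allowed = ENTRUSTED_PURPOSES.get(req.subject, set())
    return req.purpose in allowed  # least privilege + purpose limitation
```

Note that even a recognized fiduciary is denied when its purpose falls outside the policy, which is precisely the civic point: trustworthiness is demonstrated continuously, never assumed.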

The Regulatory Layer: Fiduciary Law Meets Administrative Oversight

Operationalizing the fiduciary commons requires an administrative framework comparable to financial or environmental regulation. Key features might include:

  1. National or State-Level Fiduciary Data Commission;
  2. Public Transparency Registries;
  3. Reciprocity Obligations for Private Fiduciaries;
  4. Interoperability with Privacy Law.

Normative Payoff: Liberty Through Stewardship

At its core, the fiduciary commons model is an argument for ordered liberty in the digital realm. It reconciles the autonomy of individuals with the collective good by grounding both in a shared moral economy of trust.

Part IV Conclusion

Institutionalizing the fiduciary commons is not merely a technical reform; it is a constitutional renewal. It demands that we see information governance not as asset management, but as the moral administration of shared truth.

Part V: Jurisprudential Synthesis and Policy Framework

Restoring Coherence to the Law of Information

The genealogy of “owning data” reveals a legal void: property law misdescribes data’s ontology, while privacy law insufficiently addresses its structural power asymmetries. The fiduciary commons fills this void by redefining the relationship between individuals, institutions, and information as one of entrustment.

Guiding Principles for a Fiduciary Data Order

  1. Commons Primacy: Facts remain unowned and universally accessible.
  2. Stewardship over Ownership: Control of data entails a duty to use it for defined purposes.
  3. Transparency by Design: Every data transaction must be intelligible and auditable.
  4. Reciprocal Benefit: Entities deriving value from data must return public value.
  5. Distributed Accountability: Technical verification must parallel legal responsibility.

Model Statutory Framework: The Personal Data Trust Act (PDTA)

The Personal Data Trust Act (PDTA) draft provides a robust statutory framework for the fiduciary commons, integrating fiduciary law with decentralized identity and accountability mechanisms. Key provisions include:

  • Establishing data fiduciaries with duties of loyalty, care, and transparency;
  • Mandating purpose limitation and beneficiary consent, with data collection minimized and restricted to a singular, specific purpose;

  • Requiring public registries for data flows and audits;
  • Incorporating the FIPPs — such as notice, access, and enforcement — into legally binding obligations, leveraging technologies that allow queries to assess criteria like “born before a date certain” without collecting the underlying data.
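The "born before a date certain" query is the paradigm case of data minimization: the verifier needs one bit, not a birthdate. The sketch below is a toy stand-in for that pattern in Python. It uses an HMAC over a shared key purely for illustration; a real deployment would rely on issuer-signed credentials and zero-knowledge predicate proofs, and every name here (the wallet key, the function names) is hypothetical.

```python
import hashlib
import hmac
from datetime import date

# Hypothetical attestation key for the demo. Real systems would use
# issuer-signed credentials and ZK predicate proofs, not a shared secret.
WALLET_KEY = b"demo-attestation-key"


def prove_born_before(birthdate: date, cutoff: date):
    """Holder side: evaluate the predicate locally; attest only the result."""
    result = birthdate < cutoff
    msg = f"born_before:{cutoff.isoformat()}:{result}".encode()
    tag = hmac.new(WALLET_KEY, msg, hashlib.sha256).hexdigest()
    return result, tag  # the birthdate itself never leaves the holder


def verify_claim(cutoff: date, result: bool, tag: str) -> bool:
    """Verifier side: check the attestation without ever seeing the birthdate."""
    msg = f"born_before:{cutoff.isoformat()}:{result}".encode()
    expected = hmac.new(WALLET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The statutory payoff is that the fiduciary holds no birthdate to breach, sell, or subpoena: minimization is enforced by what is never transmitted.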

The PDTA’s adoption would fulfill the unmet promise of the FIPPs, long undermined by their voluntary status, by making trust-based data governance legally enforceable.
