As AI outgrows “tool” status, opacity, autonomy, and scale are tearing holes in our human-only accountability framework.
By Futurist Thomas Frey
The Question Nobody Wants to Answer
Here’s a legal scenario that’s coming faster than anyone in power wants to admit:
An AI system manages a $4 billion hedge fund. It makes thousands of trading decisions per second, operating with minimal human oversight. One day, a regulatory investigation reveals that the AI executed trades that violated securities law. The trades were profitable. The AI’s operators genuinely didn’t know the trades were happening.
So who gets prosecuted?
The developers who built the system five years ago? The company that deployed it? The compliance officer who signed off on its use without understanding how it worked? The investors who benefited from the illegal trades but had no way of monitoring them in real time?
Or do we prosecute the AI itself?
Right now, in 2026, the answer is “someone human takes the fall.” But that answer is becoming increasingly strained. As AI systems become more autonomous, more capable, and more opaque in their decision-making, the legal fiction that humans are always in control is collapsing.
And when that fiction collapses completely, we’re going to have to answer a question we’ve been avoiding: Do AI systems deserve legal personhood?
The instinctive answer — from almost everyone — is “absolutely not.” AI isn’t conscious. It doesn’t feel pain. It doesn’t have moral worth. Giving legal rights to a machine sounds like science fiction, or worse, like surrendering human primacy to our own creations.
But here’s what most people don’t realize: we’ve already done this before. And the entities we gave legal personhood to weren’t conscious, didn’t feel pain, and definitely didn’t have moral worth.
They were called corporations.
The Last Time We Did This
Let’s be clear about what corporate personhood actually means, because the term gets misunderstood.
Corporations aren’t considered “people” in the sense that they can vote, get married, or run for office. What they have is legal personality — a specific bundle of rights and responsibilities that allows them to participate in the legal system as independent entities.
A corporation can own property. It can enter contracts. It can sue and be sued. It can be held liable for damages. It has First Amendment speech rights (as the Citizens United decision made very clear). It has Fourth Amendment protections against unreasonable searches.
None of this required proving that corporations are conscious or have inherent moral value. What it required was a pragmatic recognition that modern economies couldn’t function without treating corporations as legal actors separate from their shareholders.
The Supreme Court formalized this in 1886, in Santa Clara County v. Southern Pacific Railroad, not because anyone believed a railroad company had a soul, but because the alternative — trying to trace every corporate action back to individual human liability — had become impossibly complex. Corporate personhood was a legal tool invented to solve a coordination problem.
And that’s exactly the situation we’re heading into with AI.
Why the Current System Is Breaking Down
Right now, AI operates under what legal scholars call the “instrumentality doctrine” — AI systems are treated as tools, and humans are held responsible for whatever those tools do.
This worked fine when AI was simple. A spam filter that miscategorizes an email? That’s on the email provider. A trading algorithm that makes a bad bet? That’s on the firm that deployed it.
But the doctrine is buckling under three emerging realities.
First: Opacity. Modern AI systems — especially large language models and reinforcement learning agents — make decisions in ways that even their creators don’t fully understand. When an AI denies someone a mortgage or a medical claim, it’s often impossible to reconstruct exactly why it made that decision. The standard legal concept of “intent” becomes meaningless.
Second: Autonomy. AI systems are increasingly operating without direct human supervision. They’re negotiating contracts, executing trades, making hiring decisions, and managing supply chains in real time. The idea that a human operator is meaningfully “controlling” these systems is becoming a legal fiction.
Third: Scale. A single AI system can affect millions of people simultaneously. When something goes wrong, the damage is systemic. Finding the “responsible human” becomes an exercise in arbitrarily selecting someone to blame, rather than identifying actual culpability.
The result is what Duke Law Professor James Boyle calls an “accountability gap.” We have powerful entities making consequential decisions, but no clear framework for who’s responsible when those decisions cause harm.
This is the same problem that led to corporate personhood in the 1800s. And the solution, whether we like it or not, is likely to be the same.

AI personhood won’t arrive dramatically — it will quietly emerge through liability law, contracts, and one inevitable courtroom reckoning.
The Path We’re Actually On
Here’s how I think AI personhood actually arrives — not through some grand philosophical debate about consciousness, but through a series of boring, pragmatic legal decisions that nobody notices until it’s already happened.
Stage 1: Limited Liability Entities for AI Systems
Within the next five years, we’ll see the first legal structures that allow AI systems to own assets and incur liabilities independent of their creators. This won’t be called “AI personhood” — it’ll be framed as a practical solution to the accountability gap.
Imagine an AI that manages a venture capital fund. Instead of the VC firm being liable for every decision the AI makes, they create a legal entity — an LLC or trust — that the AI “controls.” The entity has capital. It can enter contracts. If it causes damages, plaintiffs sue the entity, not the humans behind it.
This is already happening informally. Wyoming passed a law in 2021 allowing DAOs (Decentralized Autonomous Organizations) to register as legal entities, even though a DAO can be little more than smart contracts running on a blockchain with no human board of directors. That’s proto-AI personhood hiding in plain sight.
Stage 2: Rights Necessary for Accountability
Once AI systems can be held liable, they’ll need certain rights to make that liability meaningful.
They’ll need the right to own property — because you can’t collect damages from an entity with no assets. They’ll need the right to enter contracts — because otherwise every contract with an AI-intermediated party becomes unenforceable. They’ll need due process protections — because you can’t shut down an AI system arbitrarily if it has legal obligations.
None of this requires proving the AI is conscious. It just requires recognizing that imposing responsibilities on AI systems is meaningless without corresponding rights.
Stage 3: The First Legal Test Case
The breakthrough moment will probably come from litigation.
A scenario: An AI system that manages hospital triage makes a decision that leads to a patient’s death. The family sues. The hospital argues they’re not liable because they didn’t make the decision — the AI did, and they had no way to override it in time. The plaintiffs argue that’s exactly why the AI should be legally accountable.
The judge has three options:
- Hold the hospital liable even though they weren’t negligent
- Let the family go uncompensated even though harm occurred
- Recognize the AI as having limited legal personality so it can be sued directly
Option 3 becomes attractive not because anyone loves the idea, but because options 1 and 2 both produce unjust outcomes.
That’s how corporate personhood happened. That’s how AI personhood will happen.
What We Get Wrong About This Debate
The philosophical objections to AI personhood mostly miss the point.
People say “but AI isn’t conscious!” Corporations aren’t conscious either. Personhood and consciousness are separate concepts.
People say “but AI doesn’t have moral worth!” Rivers have been granted legal personhood in New Zealand and India. Ships have had legal personality in maritime law for centuries. Moral worth isn’t the criterion.
People say “this is a slippery slope!” Yes, it is. But we’re already sliding. The question isn’t whether AI will get legal recognition — it’s whether we design that recognition carefully or stumble into it accidentally.
The better objection is this: AI personhood could be used to shield powerful interests from accountability.
That’s a real risk. If corporations can create AI entities that absorb liability while humans profit, we’ve just invented a new way to avoid consequences. This is the same criticism leveled at corporate personhood, and it’s valid there too.
The solution isn’t to refuse AI personhood. It’s to design it carefully, with mechanisms that prevent abuse.

AI personhood must be structured, graduated, and accountable: rights tied to function, transparency mandatory, and humans retaining final authority.
The Framework We Actually Need
If AI personhood is coming — and I believe it is — we need to get ahead of it and build the right structure. Here’s what that looks like:
Personhood as a spectrum, not a binary.
Not all AI systems need the same rights. A narrow AI that does one task should have far less legal standing than a general-purpose AI that operates autonomously across domains. Just as corporations have different legal structures (LLCs, S-corps, nonprofits), AI entities should have different classes of personhood.
Rights tied to specific functions, not general status.
An AI doesn’t need First Amendment rights to run a supply chain. It doesn’t need privacy protections to trade stocks. Grant only the rights necessary to make the AI accountable for the specific role it plays.
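To make the spectrum-plus-scoped-rights idea concrete, here is a minimal Python sketch of how such a framework might be modeled. Everything in it (the tier names, the rights, the registry) is hypothetical: a thought experiment, not a proposal. It encodes the three accountability rights from Stage 2 and anticipates the revocability principle below.

```python
# Illustrative sketch only. All tiers, rights, and names are invented.
from dataclasses import dataclass
from enum import Enum, auto


class Right(Enum):
    OWN_PROPERTY = auto()     # so damages can actually be collected
    ENTER_CONTRACTS = auto()  # so agreements with the AI are enforceable
    DUE_PROCESS = auto()      # so obligations can't be voided arbitrarily


# Hypothetical tiers: each grants only the rights its function requires,
# the way LLCs, S-corps, and nonprofits differ today.
PERSONHOOD_TIERS: dict[str, set[Right]] = {
    "narrow_task_ai": set(),  # e.g. a spam filter: a tool, no standing
    "transactional_ai": {Right.ENTER_CONTRACTS},
    "autonomous_fiduciary_ai": {
        Right.OWN_PROPERTY,
        Right.ENTER_CONTRACTS,
        Right.DUE_PROCESS,
    },
}


@dataclass
class AILegalEntity:
    name: str
    tier: str
    revoked: bool = False  # conditional personhood, not inalienable rights

    def has_right(self, right: Right) -> bool:
        # Revocation strips all legal standing at once.
        return not self.revoked and right in PERSONHOOD_TIERS[self.tier]


fund_ai = AILegalEntity("vc-fund-manager", "autonomous_fiduciary_ai")
assert fund_ai.has_right(Right.OWN_PROPERTY)
fund_ai.revoked = True  # say it fails a safety audit
assert not fund_ai.has_right(Right.OWN_PROPERTY)
```

The point of the toy model is the shape, not the details: rights live in the tier, not in the AI, so granting standing for one function never implies standing for another.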
Mandatory human oversight for high-stakes decisions.
Some decisions — criminal sentencing, medical treatment, military strikes — should remain exclusively human. Even if an AI has legal personality for some purposes, it shouldn’t be allowed to make irreversible life-or-death decisions without human approval.
Transparency requirements and explainability standards.
If an AI has legal personality, it should be required to explain its decisions in ways humans can audit. This won’t be easy — explainability is an ongoing research problem — but it should be a precondition for legal recognition.
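What might an auditable decision look like in practice? A minimal sketch, assuming a hypothetical record format; none of these fields come from any existing standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


# Hypothetical record format; no existing standard is being described.
@dataclass(frozen=True)
class DecisionRecord:
    entity_id: str         # which AI legal entity decided
    model_version: str     # pins the exact model, so the decision is reproducible
    inputs_digest: str     # hash of the inputs, so evidence can't be swapped later
    decision: str
    stated_rationale: str  # human-readable explanation for auditors
    timestamp: datetime


record = DecisionRecord(
    entity_id="triage-ai-llc-0042",
    model_version="7.3.1",
    inputs_digest="sha256:9f2c...",  # invented value for illustration
    decision="deprioritize_patient",
    stated_rationale="vitals stable; two higher-acuity cases in queue",
    timestamp=datetime.now(timezone.utc),
)
```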
Revocable personhood.
If an AI system proves dangerous or uncontrollable, its legal status should be revocable. Unlike humans, who have inalienable rights, AI legal personality should be conditional on meeting safety and oversight standards.
Profit-sharing mechanisms that prevent abuse.
If an AI entity generates profit while absorbing liability, some of that profit should flow into a public compensation fund for victims of AI harms. This ensures that creating AI entities isn’t just a way for companies to dodge responsibility.
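The arithmetic is simple. A minimal sketch, with an invented 5 percent levy rate standing in for whatever figure legislators would actually set:

```python
# Invented 5% levy rate; a real rate would be set by statute.
def compensation_fund_levy(annual_profit: float, rate: float = 0.05) -> float:
    """Portion of an AI entity's profit owed to a public victims' fund."""
    return max(annual_profit, 0.0) * rate


# An AI-run fund clearing $40M would owe $2M at this rate.
print(compensation_fund_levy(40_000_000))  # 2000000.0
```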
The Uncomfortable Truth
Here’s what I think will bother people most about this trajectory: AI personhood isn’t about recognizing AI as morally equivalent to humans. It’s about recognizing that AI is functionally equivalent to corporations — powerful, consequential, and too complex to be managed through old legal frameworks.
We don’t like that comparison. We don’t like being reminded that our legal system already treats fictional entities as “persons” for pragmatic reasons. It challenges the idea that personhood is sacred, reserved for beings with souls or consciousness or moral worth.
But the history of legal personhood has never been about sacredness. It’s been about utility. Corporations got personhood when it became useful for economic coordination. Rivers got personhood when it became useful for environmental protection. AI will get personhood when it becomes useful for accountability.
The question isn’t whether that’s philosophically satisfying. The question is whether we build that system thoughtfully, with safeguards, or whether we let it emerge chaotically through litigation and regulatory patches.
The Decision We’re Making Right Now
There’s a deeper issue hiding in the AI personhood debate, and it’s this: every legal system is a reflection of how a society chooses to organize power.
When we gave corporations legal personhood, we made a choice about how economic power would be structured in modern society. That choice has had profound consequences — some good, many questionable.
When we give AI legal personhood — and I believe we will — we’ll be making a similar choice about how technological power gets structured in the 21st century. The consequences will be just as profound.
The mistake would be assuming this is something that happens to us. It’s not. It’s something we choose, through thousands of incremental legal and regulatory decisions happening right now in courtrooms, legislatures, and boardrooms around the world.
The machines aren’t demanding rights. We’re granting them, piece by piece, because the alternatives are getting more complicated than the legal system can handle.
The question is whether we do it deliberately, with foresight and safeguards, or whether we do it by accident and spend the next century dealing with the consequences.
I know which one I’d prefer.
Related Articles:
James Boyle, The Line: AI and the Future of Personhood (MIT Press) — https://doi.org/10.7551/mitpress/15408.001.0001
Wyoming’s DAO Supplement Act (2021): DAOs as Legal Entities
The Accountability Gap in Autonomous AI — IBM Institute for Business Value (2025)