The world didn’t wait for weapons manufacturers to self-regulate warfare. It built a treaty. We need the same architecture here.
By Futurist Thomas Frey
Part 4 of 4: The Framework We Have to Build
In 1864, twelve nations gathered in Geneva and signed an agreement that had never existed before in human history.
They weren’t naive. They weren’t under the illusion that war would stop or that the agreement would be universally honored. They were practical people who had watched the industrialization of warfare produce suffering on a scale that previous generations hadn’t imagined, and who understood that the tools of war had outpaced the moral frameworks governing their use. They decided that some lines had to be drawn before the next conflict, not after. That certain protections had to be established in advance, not negotiated in the wreckage of their violation.
The Geneva Conventions didn’t eliminate war. They didn’t eliminate atrocity. What they did was create a shared framework that established, at the level of international agreement, what was and wasn’t acceptable — and gave that framework enough institutional weight that violations became matters of global consequence rather than local discretion.
We need the same architecture for robots.
Not a government regulation from a single country that other countries will ignore. Not a corporate ethics board that reports to executives whose bonuses depend on shipping product. Not a voluntary industry pledge that means whatever the signatories need it to mean when a lucrative contract appears. A multinational framework with genuine teeth, built before the incidents that make it urgent, that separates the robots designed to care for human life from the machines designed to threaten it.
And in 2026, this conversation can no longer stop at humanoid robots. Because the challenge has already expanded well beyond bipedal machines. It includes quadruped dog-bots that can be weaponized with an attachment that takes minutes to install. It includes autonomous drones that can identify and engage targets without a human in the decision loop. It includes warehouse automation systems that share core AI architectures with military targeting platforms. The physical form is irrelevant. The question is what values are encoded in the behavior, and whether those values are verifiable and binding.
What the Framework Has to Separate
Before you can build the treaty, you have to name what it’s separating.
The fundamental distinction is not between “good robots” and “bad robots,” or between civilian and military applications in the simple sense. Military robotics has legitimate uses — logistics, reconnaissance, bomb disposal, search and rescue in contested environments — that don’t require the ability to harm. The distinction is more precise than military versus civilian.
It is the distinction between machines designed with harm avoidance as a foundational constraint, and machines designed without it.
A care robot, properly designed, has harm avoidance baked into its architecture at the level of its physical parameters, its decision logic, and its override systems. It cannot apply more force than a human hand. It cannot move faster than a human caregiver. It cannot make irreversible decisions without human confirmation. These are not software preferences that can be updated away. They are structural commitments.
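To make the distinction concrete, here is a minimal sketch of what a structural commitment might look like in code. Every name and number below is invented for illustration; the real guarantee would live in hardware (current-limited actuators, mechanical governors), which no software example can fully capture.

```python
from dataclasses import dataclass

# A hypothetical care-category envelope. "frozen=True" is the software
# analogue of a structural commitment: the limits cannot be reassigned at
# runtime. In a real robot the equivalent guarantee would be physical,
# not a line of updatable code.
@dataclass(frozen=True)
class CareEnvelope:
    max_contact_force_n: float       # hard ceiling on force against a human body
    max_motion_speed_mps: float      # hard ceiling on end-effector speed
    irreversible_actions: frozenset  # action classes requiring human sign-off

# Illustrative values only; not drawn from any published standard.
CARE_CLASS_A = CareEnvelope(
    max_contact_force_n=50.0,
    max_motion_speed_mps=0.25,
    irreversible_actions=frozenset({"administer_medication", "apply_restraint"}),
)
```

The frozen declaration makes the point of the architecture: the limits are not a setting that a later update can quietly relax.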
A combat-capable robot, properly designed, has harm avoidance removed from its architecture in specific, intentional ways. It can apply lethal force. It can act at machine speed in situations where human speed would be insufficient. It can, in its most autonomous configurations, make engagement decisions without human confirmation.
These are not two points on a continuum. They are opposite design philosophies. And a framework that enforces the separation has to operate at the level of design and architecture, not just intent and use.
The same applies to drones. A last-mile delivery drone and an autonomous combat drone share propulsion systems, navigation technology, and computer vision. But their design architectures differ in exactly the way described above. A delivery drone is physically incapable of the kind of harm an armed drone is capable of — not because of a software setting, but because of what it is built to do and built with. That architectural difference is what the framework has to preserve and certify.
The same applies to quadruped dog-bots. Ghost Robotics’ Vision 60 platform and Boston Dynamics’ Spot are, at the mechanical level, similar designs. They become categorically different depending on whether they are equipped with a sensor payload for environmental monitoring or a weapons attachment for force projection. The hardware modification is trivial. The ethical difference is not. A framework that allows the same platform to be sold into both markets without structural separation is a framework that solves nothing.

“Do no harm” must be engineered—force limits, autonomy boundaries, and strict separation. Without enforceable design rules, care robots remain trust claims, not trusted systems.
What “Do No Harm” Actually Means in Machine Behavior
The Geneva Conventions had to grapple with translating moral principles into operational rules. What does “protecting civilians” actually mean when armies are moving through villages? What counts as a “medical facility” that cannot be targeted? The work of the Conventions was largely the work of making abstractions specific enough to be enforceable.
A framework for robots faces the same challenge. “Do no harm” sounds simple. Encoded in machine behavior, it is extraordinarily complex.
It means defining maximum force parameters — physical limits on what a care-category robot can do to a human body, verified through certification testing, not just manufacturer assertion. A robot that can apply enough force to break a bone is not a care robot, regardless of what its marketing says. A robot that can move fast enough to injure a person who stumbles into its path is not a care robot. These are measurable properties. They can be tested and certified.
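Because these are measurable properties, certification reduces, at its core, to a pass/fail comparison between bench measurements and declared ceilings. A minimal sketch, with hypothetical default limits that are not drawn from any published standard:

```python
def certify_care_class(measured_force_n: float, measured_speed_mps: float,
                       limit_force_n: float = 50.0,
                       limit_speed_mps: float = 0.25) -> bool:
    """Pass only if bench-measured maxima stay inside the declared envelope.

    The ceilings here are illustrative. In practice the measurements would
    come from instrumented physical testing (force plates, motion capture),
    not from the manufacturer's own telemetry.
    """
    return (measured_force_n <= limit_force_n
            and measured_speed_mps <= limit_speed_mps)

# A platform that can exert 120 N of contact force fails,
# whatever its marketing says.
assert certify_care_class(measured_force_n=120.0, measured_speed_mps=0.2) is False
```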
It means defining autonomy ceilings — limits on what decisions a care-category robot can make without human confirmation. A care robot should not be able to administer medication, apply physical restraint, or make any decision with irreversible consequences for a human without a human in the loop. These are architectural constraints, not software policies.
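In code, an autonomy ceiling is a gate: no action flagged as irreversible executes without an attached, auditable human decision. Another hypothetical sketch; the action names and approval record are invented for illustration:

```python
# Hypothetical registry of action classes with irreversible consequences.
IRREVERSIBLE_ACTIONS = frozenset({"administer_medication", "apply_restraint"})

def authorize(action: str, human_approval_id: str | None) -> bool:
    """Block any irreversible action that lacks a logged human approval.

    The approval id stands in for an auditable record of who confirmed
    the action; autonomy stops at this boundary.
    """
    if action in IRREVERSIBLE_ACTIONS:
        return human_approval_id is not None
    return True  # reversible actions may proceed autonomously

assert authorize("administer_medication", None) is False
assert authorize("administer_medication", "nurse-0417") is True
assert authorize("adjust_room_lighting", None) is True
```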
It means defining deployment separation — a requirement that platforms certified as care robots not be capable of weapons integration without physical modification that would be detectable and would void the certification. This is the equivalent of dual-use export controls, applied at the product design level. A platform that can accept a weapons attachment with a fifteen-minute modification is not, in any meaningful sense, a care robot. It is a care robot waiting to become something else.
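One way to bind a certification to a physical configuration is to fingerprint the hardware manifest at certification time and treat any mismatch as voiding the certificate. The sketch below uses a plain hash over a hypothetical manifest; a real scheme would need cryptographic attestation rooted in tamper-resistant hardware, which a software hash alone cannot provide.

```python
import hashlib
import json

def manifest_fingerprint(manifest: dict) -> str:
    """Deterministic fingerprint of the robot's declared hardware manifest."""
    canonical = json.dumps(manifest, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Fingerprint recorded by the certifying body at certification time.
certified = {"platform": "quadruped-v6", "payload": "lidar-survey-module"}
certified_fp = manifest_fingerprint(certified)

# A fifteen-minute payload swap changes the manifest, and the mismatch
# voids the certification.
in_field = {"platform": "quadruped-v6", "payload": "weapons-mount"}
assert manifest_fingerprint(in_field) != certified_fp
```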
It means defining data separation — prohibitions on sharing behavioral data, operational logs, or training datasets between care-category and combat-capable systems. The AI architectures underlying care robots and combat robots should not be the same architecture trained on different data. They should be developed under different principles, with different safety validation requirements, and the data that shapes their behavior should not flow between them.
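In practice, data separation is a provenance rule: every dataset carries a category label assigned at collection time, and a training pipeline refuses anything from the other category. A minimal sketch, with hypothetical names:

```python
from dataclasses import dataclass

CARE, COMBAT = "care", "combat-capable"

@dataclass(frozen=True)
class Dataset:
    name: str
    category: str  # provenance label attached at collection time

def admit(pipeline_category: str, dataset: Dataset) -> bool:
    """A training pipeline may only ingest data from its own category."""
    return dataset.category == pipeline_category

assert admit(CARE, Dataset("eldercare-gait-logs", CARE)) is True
assert admit(CARE, Dataset("targeting-telemetry", COMBAT)) is False
```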
None of these definitions are easy. All of them will require serious technical, legal, and ethical work. But the work is doable, and it needs to start before the incidents that make it urgent rather than after.

Robotics needs a neutral convening force—like a Geneva moment—to set enforceable norms. Without it, trust remains undefined and accountability optional.
Who Convenes This
The Geneva Conventions were convened by Switzerland, a neutral nation with both the credibility and the motivation to serve as an honest broker. The initial signatories were twelve European nations. The framework grew over subsequent decades through additional conventions and protocols.
A robotics framework needs a similar convening structure. It needs a party with enough credibility to gather stakeholders who don’t fully trust each other, enough neutrality to be seen as an honest broker, and enough institutional weight to give the resulting agreement meaning.
Several candidates are plausible. The International Committee of the Red Cross has already begun engaging seriously with the questions of autonomous weapons and humanitarian law. The IEEE — the world’s largest professional organization for engineers — has an existing ethics framework for autonomous systems and the technical credibility to define what architectural separation actually requires. The United Nations has existing structures for arms control that could be extended to autonomous systems. A coalition of smaller nations with no major military robotics programs has both the motivation and the credibility to initiate the process without being perceived as acting in its own military interest.
What’s needed is not consensus from the start. The Geneva Conventions didn’t require universal agreement to be meaningful. They required enough signatories with enough credibility that the framework established a norm — a shared understanding of what the world considered acceptable — and that violations carried real reputational and diplomatic costs even for non-signatories.
The same architecture applies here. A framework signed by a meaningful coalition of nations and major robotics manufacturers — one that establishes clear certification categories, verifiable architectural standards, and real consequences for misrepresentation — creates a norm even if not every actor honors it. It establishes what the civilized world considers acceptable. It gives consumers, regulators, and investors a reference point that currently doesn’t exist.
What the Industry Has to Decide
The robotics industry is at a decision point that it is not yet facing directly.
The companies building care robots have a profound commercial interest in the existence of a framework like this — not because they want to be regulated, but because the alternative is an incident that destroys the trust the entire care market depends on, and no individual company has the power to prevent that incident from happening. The framework is in their interest. The separation is in their interest. The certification is in their interest, because certification creates a signal they can use to earn the trust they need.
The companies building military and dual-use platforms have a different calculus. The framework asks them to accept limits on their product’s applicability, to invest in architectural separation that costs money, and to give up the option of selling the same platform into both markets without restriction. That is a real cost, and they will resist it.
But they should consider what the alternative looks like. Absent a framework, the incident described in the previous column is not merely a possibility; it is a certainty. And when it happens, the regulatory response will not be thoughtful, technically informed, or proportionate. It will be reactive, politically driven, and likely to harm the legitimate applications of robotic technology far more than a proactive framework would.
Reactive regulation is almost always worse than proactive frameworks. The pharmaceutical industry learned this. The aviation industry learned this. The nuclear industry learned this. The robotics industry has the opportunity to learn it before the lesson is imposed, but the window for choosing to learn it is not unlimited.

With real standards, robots earn trust—not just function. Separate care from combat, certify behavior, and the future becomes safe enough to fully embrace.
What Gets Built in the World Where This Works
I want to end this series not with the problem but with the possibility.
A world in which a genuine Geneva Convention for robots exists — in which care robots are architecturally separated from combat systems, certified to verifiable standards, and governed by a multinational framework with real teeth — is a world in which the full promise of care robotics can actually be realized.
In that world, the elderly woman living alone can have a robot companion that her family trusts, because the trust is not based on marketing claims but on verified architectural commitments and independent certification. The sleep-deprived parent can accept help from a machine at 2 in the morning because the framework that governs that machine’s behavior is the same framework that governs the behavior of every certified care robot on Earth — not the preference of the company that built it, revisable in the next software update.
In that world, the drone that delivers your package and the drone that monitors your elderly parent’s wandering behavior in a memory care facility are verifiably, architecturally different from the drone that can be equipped for combat — and that difference is enforced by a framework with enough weight to mean something.
In that world, the quadruped robot that inspects your home’s foundation for damage is not, in any sense that matters, the same machine as the weaponized dog-bot in military footage. The difference is not just in what they’re used for. It’s in what they’re built to be.
Isaac Asimov saw the need for this in 1942 and tried to articulate it in fiction because the serious conversation wasn’t happening anywhere else. He imagined three simple laws, and then spent the rest of his career showing why simple laws weren’t enough — why the real work was in the details, the edge cases, the places where principles meet complexity.
We are living in the moment he was writing toward. The robots are real. The stakes are real. The absence of a framework is real.
The Geneva Conventions were born in the recognition that some things are too important to be left to individual actors to decide on their own, in their own interest, without accountability to anything larger than themselves.
Robots that live with our families and robots that can harm human beings are too important for that.
The world built a treaty before. It can build one again. The question is whether the robotics industry, and the governments that have the power to convene this conversation, will choose to build it before the incidents that make it unavoidable — or after.
History suggests we usually wait for the incidents.
This series has been an argument for not waiting.
Related Reading
The International Committee of the Red Cross on Autonomous Weapons
International Committee of the Red Cross — The ICRC’s formal position on autonomous weapons systems and the application of international humanitarian law — the most credible existing foundation for the kind of framework this column proposes
IEEE Ethically Aligned Design: A Framework for Autonomous Systems
IEEE — The most technically rigorous existing framework for encoding ethical principles in autonomous system design — the engineering foundation on which architectural certification standards could be built
Lessons from Arms Control: What Robotics Governance Can Learn from Nuclear, Chemical, and Biological Weapons Treaties
RAND Corporation — A comparative analysis of how previous dual-use technology governance frameworks were built, what made them work, and what the robotics industry can learn from the history of international agreements that managed dangerous technologies before catastrophe forced the issue

