
Two robots built on the same technology: one designed to care, one designed for combat. When they share origins, the industry faces a paradox it has yet to acknowledge, let alone resolve.

Trust in robots can only be built incrementally. But it can be destroyed in a single afternoon.

By Futurist Thomas Frey

Part 3 of 4: The Military Paradox Nobody Will Discuss


Let me describe two robots.

The first is designed for eldercare. It moves slowly and deliberately through a home, helps a 78-year-old woman with limited mobility get from her bed to her chair, reminds her to take her medication, detects if she falls, and calls for help if she does. It is gentle by design. Its physical parameters are constrained specifically to prevent it from applying more force than a human hand would use. Its entire architecture is built around one principle: do not harm the person in your care.

The second is designed for military reconnaissance and force projection. It can move quickly across difficult terrain, carry a significant payload, identify targets using computer vision, and, in its more advanced configurations, make or assist with engagement decisions in contested environments. It is capable by design. Its physical parameters are optimized for effectiveness in situations where the humans nearby may be adversaries. Its architecture is built around a completely different principle: accomplish the mission.

Both of these robots exist right now. Both are being actively developed and in some cases deployed. Both use similar foundational technologies — the same locomotion research, the same computer vision systems, the same advances in battery technology and actuator design that have driven the whole field forward.

And both are being developed, in many cases, by the same companies. Or by companies that share investors, share talent, share research lineages, and operate in the same public conversation about the future of robotics.

That is the military paradox. And the robotics industry is not discussing it honestly.

The Funding Reality

To understand why this matters, you need to understand where robotics development money actually comes from.

The Defense Advanced Research Projects Agency has been one of the most important funders of fundamental robotics research for decades. DARPA's robotics challenges in the 2010s produced technology that directly seeded the current generation of humanoid platforms. Boston Dynamics, whose Atlas robot is the most recognizable humanoid in the world, spent years under the ownership of Google and then SoftBank before being sold to Hyundai, but its foundational development was heavily defense-funded: Atlas was originally built for the DARPA Robotics Challenge, and earlier platforms such as BigDog and LS3 were developed under military programs.

The US Army has active programs evaluating robotic platforms for logistics, reconnaissance, and combat support. The Defense Department’s vision of the future battlefield includes robotic systems operating alongside human soldiers. The investment flowing into defense robotics is enormous and accelerating, and it is not cleanly separated from the investment flowing into consumer and care robotics. The research is connected. The talent moves between sectors. The companies that win defense contracts build capabilities that transfer.

None of this is secret. It is all documented in public filings, press releases, and conference presentations. What is not being said publicly — at least not in the consumer-facing conversations about the wonderful future of robot caregivers and domestic helpers — is what the convergence of these two development tracks means for the trust that the entire industry depends on.

What Footage Does

Trust is not a technical property. It cannot be engineered into a product the way you engineer payload capacity or battery life. It is a social property — something that exists in the relationship between a technology and the public that encounters it. And it is profoundly asymmetric in how it is built and destroyed.

Building trust in a technology takes years. It requires consistent, reliable, incident-free performance across millions of interactions, in environments that matter to real people, witnessed by enough people that the positive evidence accumulates in public consciousness. It requires the absence of dramatic failures. It requires time.

Destroying trust in a technology can take minutes. It requires one incident, clearly documented, that is frightening enough to crystallize the fears that were always present but suppressed by the weight of positive experience.

Aviation spent decades building the trust that makes billions of people comfortable getting on commercial aircraft. A single high-profile crash, handled badly, can create a confidence crisis that grounds fleets and reshapes industry dynamics for years. The trust is real and hard-won. The vulnerability is permanent.

The robotics industry has not spent decades building public trust. It is in the early stages of that process. The positive experiences are limited to relatively small populations of early adopters, researchers, and industrial users. The general public’s relationship with humanoid robots is still primarily mediated by science fiction, product demonstrations, and news coverage — all of which create impressions, but none of which create the deep experiential trust that comes from living with a technology over time.

Now consider what happens when footage appears — and it will appear, because it always does — of a military robot causing harm. Not a weapon failing to discriminate properly in a war zone thousands of miles away. Something closer. Something that looks, to a person watching it on a phone screen, like the same kind of robot that companies have been telling us will help with our elderly parents and our young children.

The human brain is not equipped to parse the difference between a Boston Dynamics robot deployed in an eldercare demonstration and a Boston Dynamics robot deployed in a military context. It sees the machine. It sees what the machine did. It draws the conclusion that machines of that type do that kind of thing.

That is not irrational. That is how trust works.

Mixing military and care robots blurs trust. If the same technology serves both harm and help, the public will not separate the two, and trust collapses.

The Branding Problem That Isn’t Being Named

Several robotics companies are actively pursuing both markets simultaneously — or selling the same underlying platform into both tracks. Figure AI, founded in 2022 and now one of the most heavily funded humanoid robotics companies in the world, has announced partnerships with both BMW for manufacturing and the US military. Sanctuary AI is working on general-purpose robots for commercial environments. Ghost Robotics — which makes quadruped robots physically similar to Boston Dynamics’ Spot — has supplied platforms to the US Air Force and been photographed with weapons attachments. The images went viral. The consumer robotics industry noticed and said almost nothing publicly.

The challenge for the industry is structural, not incidental. Military robotics and care robotics are not merely different products. They are, in the deepest sense, antithetical products. One is optimized for keeping humans safe through force limitation and harm avoidance. The other is optimized for operational effectiveness in environments where harm is the context. The values embedded in these two design tracks are not merely different — they are opposed.

When the same corporate family, or the same underlying technology, is visible in both tracks, the public’s ability to maintain the distinction breaks down. And the public’s ability to maintain that distinction is the entire foundation on which the care robotics market is built.

A parent deciding whether to trust a robot with their child is not running a technical analysis of that specific robot’s safety architecture. They are asking a simpler, more human question: do robots in general feel safe? Is this a technology that is fundamentally oriented toward human wellbeing, or is it a technology that is fundamentally a tool of power, and the care applications are just one version of that tool?

Right now, the honest answer to that question is: we’re not sure. And “we’re not sure” is not a foundation for the kind of trust that care robotics requires.

The Incident That Changes Everything

I want to be specific about the scenario I am describing, because vagueness lets the industry dismiss this concern as speculative.

The scenario is not a hypothetical future event. It is a near-certainty given current trajectories. Here is the shape of it.

A military or law enforcement robot — a real, deployed system, not a prototype — is involved in an incident that causes civilian harm. Or a weapons-equipped quadruped robot appears in footage from a conflict zone operating in a way that the watching public finds disturbing. Or a security robot in a domestic context behaves in a way that is aggressive enough to generate viral footage. Or a military demo video is released that shows a humanoid robot performing actions that, out of context, look alarming.

The footage spreads. Because footage always spreads. The coverage does not carefully distinguish between military and care applications, between quadrupeds and humanoids, between combat robots and eldercare robots. It covers robots. The public discussion does not carefully distinguish either. The comment sections do not distinguish. The legislation that follows does not distinguish.

And the care robotics companies that have spent years building toward the moment when ordinary families trust these machines in their homes will find that the floor has dropped out from under their market. Not because their product failed. Because a different product, built on the same general technology, failed in a way that was visible, frightening, and impossible to contextualize away.

The trust destruction will be rapid. The trust rebuilding will take years. And the people who will suffer most from that lost decade are not the investors. They are the elderly people who needed a robot helper and couldn’t get one because the public turned against the category. The families who could have been supported and weren’t. The caregivers who could have been helped and weren’t.

Care robots and combat robots cannot be allowed to blur together. Without clear separation, a single incident could collapse trust across the entire industry before any safeguards exist.

What the Industry Is Choosing Not to Do

The solution is not for robotics companies to stop taking defense contracts. The defense dollars are real, the applications are legitimate in their own context, and unilateral disarmament in the face of competitive pressure is not a realistic ask.

The solution is structural separation — a clear, public, verifiable commitment to maintaining the difference between care robots and combat robots at the level of design, deployment, branding, and governance. Not a press release. Not a corporate ethics policy that can be quietly revised when a lucrative contract appears. An architecture that makes the distinction real, visible, and durable.

That architecture does not currently exist. The industry has not built it because building it would require acknowledging the problem, and acknowledging the problem would require saying publicly what most people in the industry know privately: that the military and care robotics tracks are in fundamental tension with each other, that the tension is a threat to the care robotics market’s long-term viability, and that nobody has figured out how to resolve it.

The companies in this space are one incident away from a crisis they are not prepared for. The incident will not be something they caused. It will be something that happened somewhere else, in a different context, with a different product. But on a small screen, viewed by a frightened public that doesn't know the difference between what was built for a battlefield and what was built for a nursery, it will look enough like their product to bring the whole category down.

That day is coming. The framework to survive it doesn’t exist yet.

Next: A Geneva Convention for Robots — The world didn’t wait for weapons manufacturers to self-regulate warfare. It built a treaty. What would a binding international framework for robot ethics actually look like — who convenes it, who signs it, and what does “do no harm” mean when encoded in machine behavior?

Related Reading

The Pentagon’s Push for Autonomous Weapons — and What It Means for Everyone Else

RAND Corporation — A rigorous analysis of the current state of military robotics development, the pace of autonomy in defense systems, and the governance questions that dual-use technology raises for both military and civilian applications

When Robots Go to War: The Public Trust Implications of Military Robotics

IEEE Spectrum — How the public perception of military robotic platforms shapes attitudes toward consumer and care robotics — and why the industry’s silence on this connection is a structural vulnerability

The Dual-Use Dilemma: How Defense Funding Shapes Civilian Technology — and Its Risks

Brookings Institution — The history and current dynamics of defense-funded research flowing into civilian applications, the governance frameworks that have and haven’t worked, and what the robotics industry can learn from previous dual-use technology crises
