<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Futurist Thomas Frey Insights Archives - Futurist Speaker</title>
	<atom:link href="https://futuristspeaker.com/category/futurist-thomas-frey-insights/feed/" rel="self" type="application/rss+xml" />
	<link>https://futuristspeaker.com/category/futurist-thomas-frey-insights/</link>
	<description>Thomas Frey, Google&#039;s Top Rated Futurist Speaker</description>
	<lastBuildDate>Sun, 19 Apr 2026 21:52:04 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.5</generator>

<image>
	<url>https://futuristspeaker.com/wp-content/uploads/2019/05/cropped-thomas-frey-futurist-speaker-fav-icon-32x32.jpg</url>
	<title>Futurist Thomas Frey Insights Archives - Futurist Speaker</title>
	<link>https://futuristspeaker.com/category/futurist-thomas-frey-insights/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>A Geneva Convention for Robots</title>
		<link>https://futuristspeaker.com/artificial-intelligence/a-geneva-convention-for-robots/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Sun, 19 Apr 2026 17:49:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[geneva convention]]></category>
		<category><![CDATA[trust issues]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041799</guid>

					<description><![CDATA[<p>In 1864, nations set rules before catastrophe. Robotics needs the same—clear, enforceable lines between care and harm, defined before the consequences force it. &#8230; The world didn&#8217;t wait for weapons manufacturers to self-regulate warfare. It built a treaty. We need the same architecture here. By Futurist Thomas Frey Part 4 of 4: The Framework We [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/a-geneva-convention-for-robots/">A Geneva Convention for Robots</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">In 1864, nations set rules before catastrophe. Robotics needs the same—clear, enforceable lines between care and harm, defined before the consequences force it.</p>
<p><em>The world didn&#8217;t wait for weapons manufacturers to self-regulate warfare. It built a treaty. We need the same architecture here.</em></p>
<p><em>By Futurist Thomas Frey</em></p>
<p><em>Part 4 of 4: The Framework We Have to Build</em></p>
<hr />
<p>In 1864, twelve nations gathered in Geneva and signed an agreement that had never existed before in human history.</p>
<p>They weren&#8217;t naive. They weren&#8217;t under the illusion that war would stop or that the agreement would be universally honored. They were practical people who had watched the industrialization of warfare produce suffering on a scale that previous generations hadn&#8217;t imagined, and who understood that the tools of war had outpaced the moral frameworks governing their use. They decided that some lines had to be drawn before the next conflict, not after. That certain protections had to be established in advance, not negotiated in the wreckage of their violation.</p>
<p>The Geneva Conventions didn&#8217;t eliminate war. They didn&#8217;t eliminate atrocity. What they did was create a shared framework that established, at the level of international agreement, what was and wasn&#8217;t acceptable — and gave that framework enough institutional weight that violations became matters of global consequence rather than local discretion.</p>
<p>We need the same architecture for robots.</p>
<p>Not a government regulation from a single country that other countries will ignore. Not a corporate ethics board that reports to executives whose bonuses depend on shipping product. Not a voluntary industry pledge that means whatever the signatories need it to mean when a lucrative contract appears. A multinational framework with genuine teeth, built before the incidents that make it urgent, that separates the robots designed to care for human life from the machines designed to threaten it.</p>
<p>And in 2026, this conversation can no longer stop at humanoid robots. Because the challenge has already expanded well beyond bipedal machines. It includes quadruped dog-bots that can be weaponized with an attachment that takes minutes to install. It includes autonomous drones that can identify and engage targets without a human in the decision loop. It includes warehouse automation systems that share core AI architectures with military targeting platforms. The physical form is irrelevant. The question is what values are encoded in the behavior, and whether those values are verifiable and binding.</p>
<h4>What the Framework Has to Separate</h4>
<p>Before you can build the treaty, you have to name what it&#8217;s separating.</p>
<p>The fundamental distinction is not between &#8220;good robots&#8221; and &#8220;bad robots,&#8221; or between civilian and military applications in the simple sense. Military robotics has legitimate uses — logistics, reconnaissance, bomb disposal, search and rescue in contested environments — that don&#8217;t require the ability to harm. The distinction is more precise than military versus civilian.</p>
<p>It is the distinction between machines designed with harm avoidance as a foundational constraint, and machines designed without it.</p>
<p>A care robot, properly designed, has harm avoidance baked into its architecture at the level of its physical parameters, its decision logic, and its override systems. It cannot apply more force than a human hand. It cannot move faster than a human caregiver. It cannot make irreversible decisions without human confirmation. These are not software preferences that can be updated away. They are structural commitments.</p>
<p>A combat-capable robot, properly designed, has harm avoidance removed from its architecture in specific, intentional ways. It can apply lethal force. It can act at machine speed in situations where human speed would be insufficient. It can, in its most autonomous configurations, make engagement decisions without human confirmation.</p>
<p>These are not two points on a continuum. They are opposite design philosophies. And a framework that enforces the separation has to operate at the level of design and architecture, not just intent and use.</p>
<p>The same applies to drones. A last-mile delivery drone and an autonomous combat drone share propulsion systems, navigation technology, and computer vision. But their design architectures differ in exactly the way described above. A delivery drone is physically incapable of the kind of harm an armed drone is capable of — not because of a software setting, but because of what it is built to do and built with. That architectural difference is what the framework has to preserve and certify.</p>
<p>The same applies to quadruped dog-bots. Ghost Robotics&#8217; Vision 60 platform and Boston Dynamics&#8217; Spot are, at the mechanical level, similar designs. They become categorically different depending on whether they are equipped with a sensor payload for environmental monitoring or a weapons attachment for force projection. The hardware modification is trivial. The ethical difference is not. A framework that allows the same platform to be sold into both markets without structural separation is a framework that solves nothing.</p>
<div id="attachment_1041804" style="width: 1930px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-1041804" class="wp-image-1041804 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8883.jpg" alt="" width="1920" height="1143" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8883.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8883-1280x762.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8883-980x583.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8883-480x286.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041804" class="wp-caption-text">“Do no harm” must be engineered—force limits, autonomy boundaries, and strict separation. Without enforceable design rules, care robots remain trust claims, not trusted systems.</p></div>
<h4>What &#8220;Do No Harm&#8221; Actually Means in Machine Behavior</h4>
<p>The Geneva Conventions had to grapple with translating moral principles into operational rules. What does &#8220;protecting civilians&#8221; actually mean when armies are moving through villages? What counts as a &#8220;medical facility&#8221; that cannot be targeted? The work of the Conventions was largely the work of making abstractions specific enough to be enforceable.</p>
<p>A framework for robots faces the same challenge. &#8220;Do no harm&#8221; sounds simple. Encoded in machine behavior, it is extraordinarily complex.</p>
<p>It means defining maximum force parameters — physical limits on what a care-category robot can do to a human body, verified through certification testing, not just manufacturer assertion. A robot that can apply enough force to break a bone is not a care robot, regardless of what its marketing says. A robot that can move fast enough to injure a person who stumbles into its path is not a care robot. These are measurable properties. They can be tested and certified.</p>
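<p>As a purely illustrative sketch of what "measurable and certifiable" could look like (the limit names and threshold values below are hypothetical assumptions, not drawn from any actual certification standard), a care-category compliance check might compare test-rig measurements against hard ceilings:</p>

```python
# Hypothetical care-category certification check.
# Thresholds are illustrative assumptions, not from any real standard.
CARE_LIMITS = {
    "max_contact_force_newtons": 50.0,  # roughly what a human hand applies
    "max_speed_m_per_s": 1.0,           # walking-pace caregiver speed
}

def certify_care_category(measured: dict) -> tuple[bool, list[str]]:
    """Return (passes, violations) for values measured on a test rig."""
    violations = [
        key for key, limit in CARE_LIMITS.items()
        if measured.get(key, float("inf")) > limit
    ]
    return (not violations, violations)

# A platform that exceeds any single ceiling fails outright -- there is
# no weighting or averaging, because the limits are structural, not scored.
ok, failed = certify_care_category(
    {"max_contact_force_newtons": 42.0, "max_speed_m_per_s": 1.4}
)
```

<p>The point of the sketch is the shape of the rule, not the numbers: pass/fail against published physical ceilings, verifiable by an independent tester rather than asserted by the manufacturer.</p>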
<p>It means defining autonomy ceilings — limits on what decisions a care-category robot can make without human confirmation. A care robot should not be able to administer medication, apply physical restraint, or make any decision with irreversible consequences for a human without a human in the loop. These are architectural constraints, not software policies.</p>
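<p>The human-in-the-loop requirement can likewise be sketched as a hard gate in decision logic (again, every name here is a hypothetical illustration, not a real product API):</p>

```python
# Illustrative autonomy-ceiling sketch: actions with irreversible
# consequences require explicit human confirmation. All names are
# hypothetical assumptions for the sake of the example.
IRREVERSIBLE_ACTIONS = {"administer_medication", "apply_restraint"}

class HumanConfirmationRequired(Exception):
    """Raised when an action may not proceed without a human in the loop."""

def execute(action: str, human_confirmed: bool = False) -> str:
    # The gate is architectural: there is no code path that performs an
    # irreversible action autonomously, only one that refuses.
    if action in IRREVERSIBLE_ACTIONS and not human_confirmed:
        raise HumanConfirmationRequired(action)
    return f"executing {action}"

execute("fetch_water")                                   # autonomous: allowed
execute("administer_medication", human_confirmed=True)   # allowed with sign-off
```

<p>The distinction the column draws between an architectural constraint and a software policy is the distinction between a gate like this being unremovable by update, and it being a setting someone can toggle.</p>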
<p>It means defining deployment separation — a requirement that platforms certified as care robots not be capable of weapons integration without physical modification that would be detectable and would void the certification. This is the equivalent of dual-use export controls, applied at the product design level. A platform that can accept a weapons attachment with a fifteen-minute modification is not, in any meaningful sense, a care robot. It is a care robot waiting to become something else.</p>
<p>It means defining data separation — prohibitions on sharing behavioral data, operational logs, or training datasets between care-category and combat-capable systems. The AI architectures underlying care robots and combat robots should not be the same architecture trained on different data. They should be developed under different principles, with different safety validation requirements, and the data that shapes their behavior should not flow between them.</p>
<p>None of these definitions are easy. All of them will require serious technical, legal, and ethical work. But the work is doable, and it needs to start before the incidents that make it urgent rather than after.</p>
<div id="attachment_1041800" style="width: 1306px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041800" class="wp-image-1041800 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8887.jpg" alt="" width="1296" height="928" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8887.jpg 1296w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8887-1280x917.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8887-980x702.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8887-480x344.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1296px, 100vw" /><p id="caption-attachment-1041800" class="wp-caption-text">Robotics needs a neutral convening force—like a Geneva moment—to set enforceable norms. Without it, trust remains undefined and accountability optional.</p></div>
<h4>Who Convenes This</h4>
<p>The Geneva Conventions were convened by Switzerland, a neutral nation with both the credibility and the motivation to serve as an honest broker. The initial signatories were twelve European nations. The framework grew over subsequent decades through additional conventions and protocols.</p>
<p>A robotics framework needs a similar convening structure. It needs a party with enough credibility to gather stakeholders who don&#8217;t fully trust each other, enough neutrality to be seen as an honest broker, and enough institutional weight to give the resulting agreement meaning.</p>
<p>Several candidates are plausible. The International Committee of the Red Cross has already begun engaging seriously with the questions of autonomous weapons and humanitarian law. The IEEE — the world&#8217;s largest professional organization for engineers — has an existing ethics framework for autonomous systems and the technical credibility to define what architectural separation actually requires. The United Nations has existing structures for arms control that could be extended to autonomous systems. A coalition of smaller nations with no major military robotics programs has both the motivation and the credibility to initiate the process without being perceived as acting in its own military interest.</p>
<p>What&#8217;s needed is not consensus from the start. The Geneva Conventions didn&#8217;t require universal agreement to be meaningful. They required enough signatories with enough credibility that the framework established a norm — a shared understanding of what the world considered acceptable — and that violations carried real reputational and diplomatic costs even for non-signatories.</p>
<p>The same architecture applies here. A framework signed by a meaningful coalition of nations and major robotics manufacturers — one that establishes clear certification categories, verifiable architectural standards, and real consequences for misrepresentation — creates a norm even if not every actor honors it. It establishes what the civilized world considers acceptable. It gives consumers, regulators, and investors a reference point that currently doesn&#8217;t exist.</p>
<h4>What the Industry Has to Decide</h4>
<p>The robotics industry is at a decision point that it is not yet facing directly.</p>
<p>The companies building care robots have a profound commercial interest in the existence of a framework like this — not because they want to be regulated, but because the alternative is an incident that destroys the trust the entire care market depends on, and no individual company has the power to prevent that incident from happening. The framework is in their interest. The separation is in their interest. The certification is in their interest, because certification creates a signal they can use to earn the trust they need.</p>
<p>The companies building military and dual-use platforms have a different calculus. The framework asks them to accept limits on their product&#8217;s applicability, to invest in architectural separation that costs money, and to give up the option of selling the same platform into both markets without restriction. That is a real cost, and they will resist it.</p>
<p>But they should consider what the alternative looks like. Absent a framework, the incident described in the previous column is not a possibility — it is a certainty. And when it happens, the regulatory response will not be thoughtful, technically informed, or proportionate. It will be reactive, politically driven, and likely to harm the legitimate applications of robotic technology far more than a proactive framework would.</p>
<p>Reactive regulation is almost always worse than proactive frameworks. The pharmaceutical industry learned this. The aviation industry learned this. The nuclear industry learned this. The robotics industry has the opportunity to learn it before the lesson is imposed, but the window for choosing to learn it is not unlimited.</p>
<div id="attachment_1041801" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041801" class="wp-image-1041801 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8886.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8886.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8886-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8886-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Robots-and-Humans-8886-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041801" class="wp-caption-text">With real standards, robots earn trust—not just function. Separate care from combat, certify behavior, and the future becomes safe enough to fully embrace.</p></div>
<h4>What Gets Built in the World Where This Works</h4>
<p>I want to end this series not with the problem but with the possibility.</p>
<p>A world in which a genuine Geneva Convention for robots exists — in which care robots are architecturally separated from combat systems, certified to verifiable standards, and governed by a multinational framework with real teeth — is a world in which the full promise of care robotics can actually be realized.</p>
<p>In that world, the elderly woman living alone can have a robot companion that her family trusts, because the trust is not based on marketing claims but on verified architectural commitments and independent certification. The sleep-deprived parent can accept help from a machine at 2 in the morning because the framework that governs that machine&#8217;s behavior is the same framework that governs the behavior of every certified care robot on Earth — not the preference of the company that built it, revisable in the next software update.</p>
<p>In that world, the drone that delivers your package and the drone that monitors your elderly parent&#8217;s wandering behavior in a memory care facility are verifiably, architecturally different from the drone that can be equipped for combat — and that difference is enforced by a framework with enough weight to mean something.</p>
<p>In that world, the quadruped robot that inspects your home&#8217;s foundation for damage is not, in any sense that matters, the same machine as the weaponized dog-bot in military footage. The difference is not just in what they&#8217;re used for. It&#8217;s in what they&#8217;re built to be.</p>
<p>Isaac Asimov saw the need for this in 1942 and tried to articulate it in fiction because the serious conversation wasn&#8217;t happening anywhere else. He imagined three simple laws, and then spent the rest of his career showing why simple laws weren&#8217;t enough — why the real work was in the details, the edge cases, the places where principles meet complexity.</p>
<p>We are living in the moment he was writing toward. The robots are real. The stakes are real. The absence of a framework is real.</p>
<p>The Geneva Conventions were born in the recognition that some things are too important to be left to individual actors to decide on their own, in their own interest, without accountability to anything larger than themselves.</p>
<p>Robots that live with our families and robots that can harm human beings are too important for that.</p>
<p>The world built a treaty before. It can build one again. The question is whether the robotics industry, and the governments that have the power to convene this conversation, will choose to build it before the incidents that make it unavoidable — or after.</p>
<p>History suggests we usually wait for the incidents.</p>
<p>This series has been an argument for not waiting.</p>
<h4>Related Reading</h4>
<h5><a href="https://www.icrc.org/en/document/autonomous-weapons-icrc-position">The International Committee of the Red Cross on Autonomous Weapons</a></h5>
<p><em>International Committee of the Red Cross</em> — The ICRC&#8217;s formal position on autonomous weapons systems and the application of international humanitarian law — the most credible existing foundation for the kind of framework this column proposes.</p>
<h5><a href="https://standards.ieee.org/industry-connections/ec/autonomous-systems/">IEEE Ethically Aligned Design: A Framework for Autonomous Systems</a></h5>
<p><em>IEEE</em> — The most technically rigorous existing framework for encoding ethical principles in autonomous system design — the engineering foundation on which architectural certification standards could be built.</p>
<h5><a href="https://www.rand.org/topics/arms-control.html">Lessons from Arms Control: What Robotics Governance Can Learn from Nuclear, Chemical, and Biological Weapons Treaties</a></h5>
<p><em>RAND Corporation</em> — A comparative analysis of how previous dual-use technology governance frameworks were built, what made them work, and what the robotics industry can learn from the history of international agreements that managed dangerous technologies before catastrophe forced the issue.</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/a-geneva-convention-for-robots/">A Geneva Convention for Robots</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>One Incident Away</title>
		<link>https://futuristspeaker.com/artificial-intelligence/one-incident-away/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Sun, 19 Apr 2026 12:25:35 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[military robots]]></category>
		<category><![CDATA[military bots]]></category>
		<category><![CDATA[trust in bots]]></category>
		<category><![CDATA[trust in drones]]></category>
		<category><![CDATA[trust issues]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041786</guid>

					<description><![CDATA[<p>Two robots, same tech—one cares, one confronts. When they share origins, the industry faces a paradox it hasn’t yet acknowledged or resolved. &#8230; Trust in robots will not be built incrementally. But it can be destroyed in a single afternoon. By Futurist Thomas Frey Part 3 of 4: The Military Paradox Nobody Will Discuss Let [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/one-incident-away/">One Incident Away</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">Two robots, same tech—one cares, one confronts. When they share origins, the industry faces a paradox it hasn’t yet acknowledged or resolved.</p>
<p><em>Trust in robots will not be built incrementally. But it can be destroyed in a single afternoon.</em></p>
<p><em>By Futurist Thomas Frey</em></p>
<p><em>Part 3 of 4: The Military Paradox Nobody Will Discuss</em></p>
<hr />
<p>Let me describe two robots.</p>
<p>The first is designed for eldercare. It moves slowly and deliberately through a home, helps a 78-year-old woman with limited mobility get from her bed to her chair, reminds her to take her medication, detects if she falls, and calls for help if she does. It is gentle by design. Its physical parameters are constrained specifically to prevent it from applying more force than a human hand would use. Its entire architecture is built around one principle: do not harm the person in your care.</p>
<p>The second is designed for military reconnaissance and force projection. It can move fast across difficult terrain, carry significant payload, identify targets using computer vision, and in its more advanced configurations, make or assist with engagement decisions in contested environments. It is capable by design. Its physical parameters are optimized for effectiveness in situations where the humans nearby may be adversaries. Its architecture is built around a completely different principle: accomplish the mission.</p>
<p>Both of these robots exist right now. Both are being actively developed and in some cases deployed. Both use similar foundational technologies — the same locomotion research, the same computer vision systems, the same advances in battery technology and actuator design that have driven the whole field forward.</p>
<p>And both are being developed, in many cases, by the same companies. Or by companies that share investors, share talent, share research lineages, and operate in the same public conversation about the future of robotics.</p>
<p>That is the military paradox. And the robotics industry is not discussing it honestly.</p>
<h4>The Funding Reality</h4>
<p>To understand why this matters, you need to understand where robotics development money actually comes from.</p>
<p>The Defense Advanced Research Projects Agency has been one of the most important funders of fundamental robotics research for decades. DARPA&#8217;s robotics challenges in the 2010s produced technology that directly seeded the current generation of humanoid platforms. Boston Dynamics — whose Atlas robot is the most recognizable humanoid in the world — spent years under the ownership of Google and then SoftBank before being sold to Hyundai, but its foundational development included significant defense-adjacent funding, and the Atlas platform has been demonstrated in countless military-adjacent contexts.</p>
<p>The US Army has active programs evaluating robotic platforms for logistics, reconnaissance, and combat support. The Defense Department&#8217;s vision of the future battlefield includes robotic systems operating alongside human soldiers. The investment flowing into defense robotics is enormous and accelerating, and it is not cleanly separated from the investment flowing into consumer and care robotics. The research is connected. The talent moves between sectors. The companies that win defense contracts build capabilities that transfer.</p>
<p>None of this is secret. It is all documented in public filings, press releases, and conference presentations. What is not being said publicly — at least not in the consumer-facing conversations about the wonderful future of robot caregivers and domestic helpers — is what the convergence of these two development tracks means for the trust that the entire industry depends on.</p>
<h4>What Footage Does</h4>
<p>Trust is not a technical property. It cannot be engineered into a product the way you engineer payload capacity or battery life. It is a social property — something that exists in the relationship between a technology and the public that encounters it. And it is profoundly asymmetric in how it is built and destroyed.</p>
<p>Building trust in a technology takes years. It requires consistent, reliable, incident-free performance across millions of interactions, in environments that matter to real people, witnessed by enough people that the positive evidence accumulates in public consciousness. It requires the absence of dramatic failures. It requires time.</p>
<p>Destroying trust in a technology can take minutes. It requires one incident, clearly documented, that is frightening enough to crystallize the fears that were always present but suppressed by the weight of positive experience.</p>
<p>Aviation spent decades building the trust that makes billions of people comfortable getting on commercial aircraft. A single high-profile crash, handled badly, can create a confidence crisis that grounds fleets and reshapes industry dynamics for years. The trust is real and hard-won. The vulnerability is permanent.</p>
<p>The robotics industry has not spent decades building public trust. It is in the early stages of that process. The positive experiences are limited to relatively small populations of early adopters, researchers, and industrial users. The general public&#8217;s relationship with humanoid robots is still primarily mediated by science fiction, product demonstrations, and news coverage — all of which create impressions, but none of which create the deep experiential trust that comes from living with a technology over time.</p>
<p>Now consider what happens when footage appears — and it will appear, because it always does — of a military robot causing harm. Not a weapon failing to discriminate properly in a war zone thousands of miles away. Something closer. Something that looks, to a person watching it on a phone screen, like the same kind of robot that companies have been telling us will help with our elderly parents and our young children.</p>
<p>The human brain is not equipped to parse the difference between a Boston Dynamics robot deployed in an eldercare demonstration and a Boston Dynamics robot deployed in a military context. It sees the machine. It sees what the machine did. It draws the conclusion that machines of that type do that kind of thing.</p>
<p>That is not irrational. That is how trust works.</p>
<div id="attachment_1041793" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041793" class="wp-image-1041793 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3337.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3337.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3337-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3337-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3337-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041793" class="wp-caption-text">Mixing military and care robots blurs trust. If the same technology serves harm and help, the public won’t separate them—and trust collapses.</p></div>
<h4>The Branding Problem That Isn&#8217;t Being Named</h4>
<p>Several robotics companies are actively pursuing both markets simultaneously — or selling the same underlying platform into both tracks. Figure AI, founded in 2022 and now one of the most heavily funded humanoid robotics companies in the world, has announced partnerships with both BMW for manufacturing and the US military. Sanctuary AI is working on general-purpose robots for commercial environments. Ghost Robotics — which makes quadruped robots physically similar to Boston Dynamics&#8217; Spot — has supplied platforms to the US Air Force and been photographed with weapons attachments. The images went viral. The consumer robotics industry noticed and said almost nothing publicly.</p>
<p>The challenge for the industry is structural, not incidental. Military robotics and care robotics are not merely different products. They are, in the deepest sense, antithetical products. One is optimized for keeping humans safe through force limitation and harm avoidance. The other is optimized for operational effectiveness in environments where harm is the context. The values embedded in these two design tracks are not merely different — they are opposed.</p>
<p>When the same corporate family, or the same underlying technology, is visible in both tracks, the public&#8217;s ability to maintain the distinction breaks down. And the public&#8217;s ability to maintain that distinction is the entire foundation on which the care robotics market is built.</p>
<p>A parent deciding whether to trust a robot with their child is not running a technical analysis of that specific robot&#8217;s safety architecture. They are asking a simpler, more human question: do robots in general feel safe? Is this a technology that is fundamentally oriented toward human wellbeing, or is it a technology that is fundamentally a tool of power, and the care applications are just one version of that tool?</p>
<p>Right now, the honest answer to that question is: we&#8217;re not sure. And &#8220;we&#8217;re not sure&#8221; is not a foundation for the kind of trust that care robotics requires.</p>
<h4>The Incident That Changes Everything</h4>
<p>I want to be specific about the scenario I am describing, because vagueness lets the industry dismiss this concern as speculative.</p>
<p>The scenario is not a hypothetical future event. It is a near-certainty given current trajectories. Here is the shape of it.</p>
<p>A military or law enforcement robot — a real, deployed system, not a prototype — is involved in an incident that causes civilian harm. Or a weapons-equipped quadruped robot appears in footage from a conflict zone operating in a way that the watching public finds disturbing. Or a security robot in a domestic context behaves in a way that is aggressive enough to generate viral footage. Or a military demo video is released that shows a humanoid robot performing actions that, out of context, look alarming.</p>
<p>The footage spreads. Because footage always spreads. The coverage does not carefully distinguish between military and care applications, between quadrupeds and humanoids, between combat robots and eldercare robots. It covers robots. The public discussion does not carefully distinguish either. The comment sections do not distinguish. The legislation that follows does not distinguish.</p>
<p>And the care robotics companies that have spent years building toward the moment when ordinary families trust these machines in their homes will find that the floor has dropped out from under their market. Not because their product failed. Because a different product, built on the same general technology, failed in a way that was visible, frightening, and impossible to contextualize away.</p>
<p>The trust destruction will be rapid. The trust rebuilding will take years. And the people who will suffer most from that lost decade are not the investors. They are the elderly people who needed a robot helper and couldn&#8217;t get one because the public turned against the category. The families who could have been supported and weren&#8217;t. The caregivers who could have been helped and weren&#8217;t.</p>
<div id="attachment_1041797" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041797" class="wp-image-1041797 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3342.jpg" alt="" width="1920" height="1280" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3342.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3342-1280x853.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3342-980x653.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Killer-Military-Bots-3342-480x320.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041797" class="wp-caption-text">Care and combat robots and drones can’t blur together. Without clear separation, one incident could collapse trust across the entire industry—before safeguards exist.</p></div>
<h4>What the Industry Is Choosing Not to Do</h4>
<p>The solution is not for robotics companies to stop taking defense contracts. The defense dollars are real, the applications are legitimate in their own context, and unilateral disarmament in the face of competitive pressure is not a realistic ask.</p>
<p>The solution is structural separation — a clear, public, verifiable commitment to maintaining the difference between care robots and combat robots at the level of design, deployment, branding, and governance. Not a press release. Not a corporate ethics policy that can be quietly revised when a lucrative contract appears. An architecture that makes the distinction real, visible, and durable.</p>
<p>That architecture does not currently exist. The industry has not built it because building it would require acknowledging the problem, and acknowledging the problem would require saying publicly what most people in the industry know privately: that the military and care robotics tracks are in fundamental tension with each other, that the tension is a threat to the care robotics market&#8217;s long-term viability, and that nobody has figured out how to resolve it.</p>
<p>The companies in this space are one incident away from a crisis they are not prepared for. The incident will not be something they caused. It will be something that happened somewhere else, in a different context, with a different product. But it will look enough like their product, on a small screen, viewed by a frightened public that doesn&#8217;t know the difference between what was built for a battlefield and what was built for a nursery.</p>
<p>That day is coming. The framework to survive it doesn&#8217;t exist yet.</p>
<p><em>Next: A Geneva Convention for Robots — The world didn&#8217;t wait for weapons manufacturers to self-regulate warfare. It built a treaty. What would a binding international framework for robot ethics actually look like — who convenes it, who signs it, and what does &#8220;do no harm&#8221; mean when encoded in machine behavior?</em></p>
<h4>Related Reading</h4>
<h5><a href="https://www.rand.org/topics/autonomous-weapons-systems.html">The Pentagon&#8217;s Push for Autonomous Weapons — and What It Means for Everyone Else</a></h5>
<p><em>RAND Corporation</em> — A rigorous analysis of the current state of military robotics development, the pace of autonomy in defense systems, and the governance questions that dual-use technology raises for both military and civilian applications</p>
<h5><a href="https://spectrum.ieee.org/military-robots-public-trust">When Robots Go to War: The Public Trust Implications of Military Robotics</a></h5>
<p><em>IEEE Spectrum</em> — How the public perception of military robotic platforms shapes attitudes toward consumer and care robotics — and why the industry&#8217;s silence on this connection is a structural vulnerability</p>
<h5><a href="https://www.brookings.edu/articles/dual-use-technology-governance/">The Dual-Use Dilemma: How Defense Funding Shapes Civilian Technology — and Its Risks</a></h5>
<p><em>Brookings Institution</em> — The history and current dynamics of defense-funded research flowing into civilian applications, the governance frameworks that have and haven&#8217;t worked, and what the robotics industry can learn from previous dual-use technology crises</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/one-incident-away/">One Incident Away</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Diaper Test</title>
		<link>https://futuristspeaker.com/artificial-intelligence/the-diaper-test/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 19:56:04 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[diaper test]]></category>
		<category><![CDATA[isaac asimov]]></category>
		<category><![CDATA[robot ethics]]></category>
		<category><![CDATA[turing test]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041783</guid>

					<description><![CDATA[<p>The real test of AI isn’t conversation—it’s care. Until a robot can handle fragile, human moments, it hasn’t earned our trust. &#8230; The real measure of a robot has never been what it can do in a warehouse. It&#8217;s whether you&#8217;d trust it alone with the people you love most. By Futurist Thomas Frey Part [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-diaper-test/">The Diaper Test</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">The real test of AI isn’t conversation—it’s care. Until a robot<br />
can handle fragile, human moments, it hasn’t earned our trust.</p>
<p style="text-align: center;">&#8230;</p>
<p><em>The real measure of a robot has never been what it can do in a warehouse. It&#8217;s whether you&#8217;d trust it alone with the people you love most.</em></p>
<p><em>By Futurist Thomas Frey</em></p>
<p><em>Part 2 of 4: The Wrong Problem</em></p>
<hr />
<p>It was 2 in the morning, and Sarah hadn&#8217;t slept more than three hours in as many days.</p>
<p>Her two-month-old, Leo, had been crying for what felt like hours. She placed him on the changing table, peeled back the diaper, and watched the situation spiral. Leo kicked, squirmed, and managed to make the mess considerably worse. It spread across the changing table, onto Sarah&#8217;s shirt, and across the floor. She was exhausted, overwhelmed, and running out of hands.</p>
<p>I told this story in a column on FuturistSpeaker.com earlier this year, posing what I called the Turing Test for humanoid robots. The original Turing Test — Alan Turing&#8217;s 1950 benchmark for machine intelligence — asked whether a machine could hold a conversation indistinguishable from a human. A meaningful threshold, but an intellectual one. What I proposed was a different kind of threshold entirely: not can the machine think like a human, but can it act like one in the moments of genuine, physical, emotionally loaded caregiving that define what it means to care for another person?</p>
<p>The test: Can a humanoid robot change a dirty diaper at 2 in the morning — gently, competently, calmly, without injuring an infant or escalating the chaos — in a way that a frazzled, sleep-deprived parent would trust it to do alone?</p>
<p>I called it the Diaper Test. And the more I&#8217;ve thought about it since, the more I believe it is not just a benchmark for robotic capability. It is the benchmark for whether this industry has earned the right to be where it&#8217;s heading.</p>
<h4>Why Turing Got It Half Right</h4>
<p>Turing&#8217;s original test was revolutionary because it shifted the question from internal mechanism to observable behavior. We don&#8217;t need to know how a machine thinks, he argued. We just need to know whether its behavior is indistinguishable from thinking. That reframing changed everything about how we approach artificial intelligence.</p>
<p>But Turing was working in the realm of language and cognition. His test lives in conversation — in text or speech, in the back-and-forth of questions and answers. When AI systems pass versions of the Turing Test today, they do so through words. They can argue, persuade, explain, and comfort in language that sounds deeply human.</p>
<p>What they cannot yet do is walk into a dark nursery at two in the morning, pick up a squirming, crying infant with the precise force required to be secure without being harmful, clean a chaotic mess while keeping the baby calm, and set a clean, soothed child back down — all without any of the dozens of micro-adjustments going wrong in ways that a tired human parent would catch on instinct.</p>
<p>That is a different kind of test. It requires fine motor precision at the level of handling a fragile, uncooperative living being. It requires real-time adaptability to behavior that is entirely unpredictable — a baby who kicks at exactly the wrong moment, who grabs at something they shouldn&#8217;t, who startles in a direction the robot didn&#8217;t anticipate. It requires the ability to soothe and calm through touch, sound, and movement — the physical language of comfort that parents develop over weeks of learning their specific child&#8217;s specific responses.</p>
<p>And it requires judgment. Not the computational kind. The kind that knows the difference between a cry of distress and a cry of mild frustration, that understands when to persist and when to pause, that can read a situation and decide what the right action is when the right action isn&#8217;t in any manual.</p>
<div id="attachment_1041762" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041762" class="wp-image-1041762 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0641.jpg" alt="" width="1920" height="1076" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0641.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0641-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0641-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0641-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041762" class="wp-caption-text">Robotics measures performance in controlled tasks. Real trust depends on unpredictable moments—where judgment matters more than benchmarks. That’s the gap the industry hasn’t closed.</p></div>
<h4>What the Industry Is Actually Building For</h4>
<p>Here is the uncomfortable question. Walk through any major robotics demonstration right now, and count the benchmarks being celebrated.</p>
<p>Payload capacity. Locomotion stability on uneven terrain. Object manipulation success rates in controlled environments. Battery endurance. Processing latency. Navigation accuracy in mapped spaces. The ability to fold laundry, operate a drill press, or sort packages in a fulfillment center.</p>
<p>These are real engineering achievements. They matter. But none of them answer the question that the Diaper Test asks.</p>
<p>What does the robot do when something happens that wasn&#8217;t in the training data? When the baby rolls in an unexpected direction at exactly the wrong moment? When the elderly patient becomes frightened and starts to resist? When the child runs in front of the machine and the navigation system has 200 milliseconds to decide what to do in a situation where 200 milliseconds is the entire margin?</p>
<p>These are not exotic edge cases. They are the routine texture of caring for vulnerable human beings. Any parent, any nurse, any home health aide will tell you that the job is made almost entirely of unexpected situations. Moments where the correct response requires not just processing speed but something that functions like wisdom — the ability to weigh competing obligations in real time when the stakes are irreversibly human.</p>
<p>The industry&#8217;s benchmarks measure performance in expected conditions. The Diaper Test measures readiness for unexpected ones. We have been conflating the two as though they were the same problem. They are not.</p>
<h4>The Intimacy Gap</h4>
<p>In the original FuturistSpeaker.com column, I argued that passing the Diaper Test would be a watershed moment — the robotic equivalent of the iPhone, the kind of breakthrough that doesn&#8217;t just sell products but reshapes what people believe is possible. I stand by that. The moment a robot can genuinely handle that 2am scenario — not in a lab, not in a demo, but in a real home with a real exhausted parent watching — the consumer robotics market will never be the same.</p>
<p>But here, in the context of this series, I want to press on a harder version of the same argument.</p>
<p>The spaces where humanoid robots are being positioned — homes, hospitals, care facilities, nurseries — are not like warehouses. Warehouses are designed environments, controlled and predictable, built around machine-compatible workflows. A home is chaos organized by love. A hospital room is fear and vulnerability and the constant possibility of things going wrong in ways that matter enormously. A nursery is a space where the margin for error is measured in different units entirely.</p>
<p>The intimacy of these spaces is what makes the Diaper Test the right benchmark. Not because changing diapers is the most complex task imaginable, but because it concentrates, in one scenario, all of the things that make care work genuinely hard: physical delicacy, unpredictable human behavior, emotional stakes, and the irreversibility of certain kinds of failure.</p>
<p>A robot that fails a warehouse sorting task costs the company time and money. A robot that fails the Diaper Test costs something that cannot be quantified and cannot be patched in the next update.</p>
<h4>The Experts Nobody Is Asking</h4>
<p>In the original column I wrote about the societal transformations that a diaper-changing robot would unleash — the potential to ease the burden on young families, support aging populations, rebalance caregiving responsibilities, and give parents back the time and energy they need to actually be present with their children. I believe all of that is true.</p>
<p>But there is a community of people who understand what it would actually take to get there — and they are almost entirely absent from the conversations shaping this industry.</p>
<p>Pediatric nurses. Neonatal intensive care unit staff. Hospice workers. Home health aides who spend twelve-hour shifts with people who have late-stage dementia. Foster care workers. These people know, in their bodies and their years of experience, what genuine care requires. Ask any one of them whether the robots they have seen demonstrated are ready to be trusted alone with the people they serve, and their answers would be more honest, more specific, and more useful than most product roadmaps currently circulating in the robotics investment community.</p>
<p>They should be in the room where these products are being designed. They should be setting the benchmarks. They should be the ones deciding when the test has been passed.</p>
<p>They are not. Not yet. And that gap between the people who build care robots and the people who actually provide care is one of the most dangerous gaps in the industry.</p>
<div id="attachment_1041775" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041775" class="wp-image-1041775 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0656.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0656.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0656-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0656-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0656-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041775" class="wp-caption-text">The real benchmark isn’t demos—it’s trust. Until a parent would leave a child alone with a robot, the technology isn’t ready.</p></div>
<h4>What Passing Looks Like</h4>
<p>So what would it actually mean to pass the Diaper Test?</p>
<p>It would mean a robot that a parent who has seen it perform — not in a demo, but in the real conditions of their real home with their real child — would genuinely trust to be left alone. A parent who trusts its physical judgment. Who believes it will handle the unexpected correctly. Who has no hesitation about leaving the room.</p>
<p>That bar has never been met. The industry is not close to meeting it. And the path to meeting it does not run through better warehouse benchmarks or more impressive locomotion demos.</p>
<p>It runs through a completely different orientation to the design problem — one that starts not with what the robot can do in optimal conditions but with what it must reliably do in the hardest ones.</p>
<p>We are the last generation without advanced robots everywhere. Our children will grow up as robot natives, for whom humanoid helpers are simply part of the world. For that future to be the one I described in my original column — the one where robots genuinely extend human capability and human care — the industry needs to prove it can pass the test that actually matters.</p>
<p>Not the benchmark that impresses investors. The one that earns the trust of a sleep-deprived parent at two in the morning.</p>
<p>That test is still waiting.</p>
<p><em>Next: One Incident Away — Trust in robots will not be built incrementally. But it can be destroyed in a single afternoon. The military robotics programs running parallel to care robots are the industry&#8217;s most dangerous open secret.</em></p>
<h4>Related Reading</h4>
<h5><a href="https://futuristspeaker.com/artificial-intelligence/the-turing-test-for-humanoid-robots-changing-an-infants-dirty-diaper/">The Turing Test for Humanoid Robots: Changing an Infant&#8217;s Dirty Diaper</a></h5>
<p><em>FuturistSpeaker.com</em> — The original column that introduced the Diaper Test as the real benchmark for humanoid robot capability — and explored the societal transformations that would follow a robot that could genuinely pass it</p>
<h5><a href="https://www.technologyreview.com/robots-human-judgment-limits/">What Robots Still Can&#8217;t Do: The Limits of Machine Judgment in Human Environments</a></h5>
<p><em>MIT Technology Review</em> — A rigorous technical examination of where the capability frontier in robotics actually sits, and why the gap between benchmark performance and real-world trustworthiness in complex human environments is wider than most product timelines acknowledge</p>
<h5><a href="https://hbr.org/2024/care-workers-robot-design">The Invisible Experts: Why Care Workers Should Be Shaping Robot Design</a></h5>
<p><em>Harvard Business Review</em> — The case for putting nurses, home health aides, and childcare professionals at the center of the robotics design process, rather than treating them as end users to be trained on finished products</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-diaper-test/">The Diaper Test</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Asimov Problem</title>
		<link>https://futuristspeaker.com/artificial-intelligence/the-asimov-problem/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 19:35:58 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[master robo ethics]]></category>
		<category><![CDATA[robot ethics]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041760</guid>

					<description><![CDATA[<p>We built powerful robots without shared rules. Asimov imagined safeguards— industry delivered terms of service. One incident could expose a framework that doesn’t exist. &#8230; Why the most physically intimate technology in human history has no ethical spine — and why that should terrify everyone By Futurist Thomas Frey Part 1 of 4: The Rules [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-asimov-problem/">The Asimov Problem</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">We built powerful robots without shared rules. Asimov imagined safeguards—<br />
industry delivered terms of service. One incident could expose a framework that doesn’t exist.</p>
<p style="text-align: center;">&#8230;</p>
<p><em>Why the most physically intimate technology in human history has no ethical spine — and why that should terrify everyone</em></p>
<p><em>By Futurist Thomas Frey</em></p>
<p><em>Part 1 of 4: The Rules We Never Wrote</em></p>
<hr />
<p>In 1942, a science fiction writer named Isaac Asimov published a short story called &#8220;Runaround.&#8221; In it, he introduced three laws governing robot behavior — simple, elegant rules designed to ensure that machines built to serve humanity wouldn&#8217;t end up harming it. The First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second: a robot must obey human orders unless those orders conflict with the First Law. The Third: a robot must protect its own existence unless that conflicts with the first two.</p>
<p>Asimov wasn&#8217;t writing policy. He was writing fiction. He didn&#8217;t expect his three laws to become the actual operating framework for an industry that didn&#8217;t yet exist. He expected someone else — engineers, ethicists, governments, the humans who would eventually build these things — to do the serious work when the time came.</p>
<p>That time came. The serious work didn&#8217;t.</p>
<p>What we have instead are terms of service agreements. Liability disclaimers. Corporate ethics boards that report to the same executives whose bonuses depend on shipping product. And thousands of companies racing toward a market that is projected to reach half a trillion dollars within a decade, each one moving as fast as it can, each one assuming that someone else is handling the framework question.</p>
<p>Nobody is handling the framework question.</p>
<p>That is what this series is about. Not about whether robots are impressive — they are. Not about whether the technology will transform society — it will. But about the fact that we are building the most physically intimate technology in human history with no shared ethical architecture, no binding international framework, and no serious reckoning with what happens when something goes wrong in a way that can&#8217;t be fixed by a software update.</p>
<p>We are one incident away from an industry-wide crisis. And the industry, for the most part, is not discussing it.</p>
<h4>What Asimov Actually Understood</h4>
<p>Here&#8217;s the thing about the Three Laws that most people who cite them miss. Asimov didn&#8217;t write them as a solution. He wrote them as a problem.</p>
<p>Almost every story in his robot series is about the ways the Three Laws fail — the edge cases, the interpretations, the unintended consequences of simple rules applied to a complex world. The Laws were a starting point, and his fiction was a decades-long exploration of why starting points are never enough. He was doing the ethical stress-testing in narrative form because he understood that the hard questions don&#8217;t answer themselves.</p>
<p>What he saw, eighty years ago, was that the question of robot ethics isn&#8217;t primarily a technical question. It&#8217;s a values question. What do we want these machines to protect? What do we want them to refuse? Under what circumstances should a robot override a human instruction, and who decides? These are not engineering problems. They are civilization problems — the kind that require deliberate, collective, binding agreement before the machines are in the room, not after.</p>
<p>We have not had that agreement. We have not even seriously begun the conversation that would produce it.</p>
<div id="attachment_1041774" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041774" class="wp-image-1041774 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0654.jpg" alt="" width="1920" height="1076" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0654.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0654-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0654-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0654-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041774" class="wp-caption-text">Robots are entering homes and hospitals without enforced safety standards—like cars before seat belts. This time, the risks are far more personal and immediate.</p></div>
<h4>The Industry That Built the Car Without Seat Belts</h4>
<p>Let me describe what the current robotics industry actually looks like from the inside, because the gap between the public narrative and the operational reality is significant.</p>
<p>Humanoid robots are no longer a research project. They are a product category. Companies including Boston Dynamics, Figure AI, 1X Technologies, Agility Robotics, Tesla, and Apptronik are developing and in some cases already deploying bipedal robots in commercial and industrial environments. The pace of capability improvement has been startling even to people who have been watching this space for years.</p>
<p>These robots are entering warehouses. They are beginning to enter healthcare settings. They are being positioned for eldercare, for childcare, for domestic assistance in private homes. They will, within a timeframe measured in years not decades, be physically present in the most vulnerable spaces of human life — the nursery, the hospital room, the home of someone who can no longer fully care for themselves.</p>
<p>And the framework governing their behavior in those spaces is: whatever the company that built them decided to put in the software, subject to revision in future updates, governed by the terms of service agreement the purchaser clicked through.</p>
<p>That is the seat belt situation before Ralph Nader. The industry knows the cars are going fast. Nobody has seriously mandated what happens when one crashes.</p>
<p>The automobile industry&#8217;s resistance to safety standards killed tens of thousands of people before regulation intervened. But cars, even at their most dangerous, were not physically present in your bedroom. They were not holding your child. They were not making decisions, in real time, about whether to restrain an elderly patient who is trying to stand up.</p>
<p>The robots that are coming will be.</p>
<h4>Why This Matters More Than Any Previous Technology</h4>
<p>I want to be precise about what makes this different from every other technology governance challenge we&#8217;ve faced.</p>
<p>The internet raised serious questions about privacy, misinformation, and manipulation. We largely failed to address those questions at the speed they required, and we are living with the consequences. But the internet&#8217;s harms are, for the most part, mediated — they happen through screens, through information, through influence. They are real and serious. They are not physical.</p>
<p>AI governance raises questions about bias, accountability, and autonomous decision-making that we are only beginning to grapple with. But AI, at its current stage of deployment, operates primarily in the domains of language and data. When it fails, the failure is usually a wrong answer, a biased output, a bad recommendation.</p>
<p>When a robot fails, the failure can be a broken bone. A fall down a staircase. A restraint applied with too much force. A navigation error in a room with a sleeping infant.</p>
<p>The physicality of robotics is what makes the governance question categorically different. Physical presence in human spaces, physical interaction with human bodies, physical consequences for physical failures — these are not comparable to any previous technology category. And the spaces where these robots are being deployed are specifically the spaces where the humans present are most vulnerable: the elderly, the sick, the very young, and the people who care for them.</p>
<p>We are building intimate technology. We have no intimate ethics.</p>
<div id="attachment_1041777" style="width: 1034px" class="wp-caption alignnone"><img decoding="async" aria-describedby="caption-attachment-1041777" class="wp-image-1041777 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0658.jpg" alt="" width="1024" height="1024" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0658.jpg 1024w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0658-980x980.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Bots-and-Humans-0658-480x480.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) 1024px, 100vw" /><p id="caption-attachment-1041777" class="wp-caption-text">One visible robot failure could trigger backlash against the entire industry. Without real safety frameworks, trust is fragile—and one incident could set progress back years.</p></div>
<h4>The Stakes Nobody Is Naming</h4>
<p>Here is what the robotics industry&#8217;s current trajectory leads to, absent intervention.</p>
<p>A serious incident will occur. It may be a care robot that injures a patient. It may be a domestic robot that fails in a way that harms a child. It may be something that happens on video in a way that is impossible to contextualize away. When it does, the public response will not be calibrated to the specific failure of the specific product from the specific company. It will be a response to robots. To the category. To the idea.</p>
<p>The aviation industry learned this the hard way. A single crash, handled badly, can ground an entire fleet and shake an industry&#8217;s foundations for years. The difference is that aviation has always had a robust, internationally coordinated, independently enforced safety framework. When a crash happens, there is an investigation, a finding, a corrective action, and a binding requirement that every operator implement it.</p>
<p>Robotics has none of that. It has press releases and pivot announcements.</p>
<p>The industry is fragile in the way that any industry is fragile when it has built market value on public trust without building the institutional architecture that justifies that trust. One incident. One video. One family&#8217;s story told on the front page. That&#8217;s the distance between where we are today and a crisis that sets the entire category back a decade.</p>
<p>Asimov saw this coming in 1942. He tried to tell us.</p>
<p>We kept the footnote and ignored the spirit.</p>
<p><em>Next: The Diaper Test — The measure of a robot isn&#8217;t what it can do in a warehouse. It&#8217;s whether you&#8217;d trust it alone with the people you love most. The industry is optimizing for the wrong problem.</em></p>
<h4>Related Reading</h4>
<h5><a href="https://spectrum.ieee.org/three-laws-robotics">Isaac Asimov&#8217;s Three Laws of Robotics: Still the Best Framework We Have</a></h5>
<p><em>IEEE Spectrum</em> — A serious technical examination of why Asimov&#8217;s fictional laws remain more ethically sophisticated than most real-world robotics governance frameworks, and what an actual implementation would require</p>
<h5><a href="https://www.brookings.edu/articles/the-governance-gap-in-robotics/">The Coming Collision Between Robots and Trust</a></h5>
<p><em>Brookings Institution</em> — How the gap between robotics capability and robotics governance is widening, and why the window for proactive framework-building is narrowing faster than most policymakers realize</p>
<h5><a href="https://hbr.org/2023/robotics-liability-framework">Who Is Responsible When a Robot Causes Harm?</a></h5>
<p><em>Harvard Business Review</em> — The current state of liability law as applied to autonomous physical systems — and why the existing legal architecture is inadequate for the category of harm that humanoid robotics will produce</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-asimov-problem/">The Asimov Problem</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Airship That Watches Everything — and Never Has to Land</title>
		<link>https://futuristspeaker.com/future-of-transportation/the-airship-that-watches-everything-and-never-has-to-land/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 21:18:30 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Future of Transportation]]></category>
		<category><![CDATA[Future Scenarios]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[beam forming]]></category>
		<category><![CDATA[Sceye Airship]]></category>
		<category><![CDATA[sceyecell]]></category>
		<category><![CDATA[stratospheric platform]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041730</guid>

					<description><![CDATA[<p>A solar airship at 52,000 feet, flying for days without fuel, quietly redefining persistent observation and reshaping how we watch the world. By Futurist Thomas Frey Somewhere above the coast of Brazil last week, a 270-foot solar-powered airship was floating in the stratosphere at 52,000 feet, watching. Not a satellite. Not a drone. Not a [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/future-of-transportation/the-airship-that-watches-everything-and-never-has-to-land/">The Airship That Watches Everything — and Never Has to Land</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">A solar airship at 52,000 feet, flying for days without fuel,<br />
quietly redefining persistent observation and reshaping how we watch the world.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>Somewhere above the coast of Brazil last week, a 270-foot solar-powered airship was floating in the stratosphere at 52,000 feet, watching. Not a satellite. Not a drone. Not a plane. An autonomous, unmanned airship that had been aloft for twelve days, powered entirely by sunlight during the day and lithium-sulfur batteries at night, maintaining its position with a station-keeping radius of less than a kilometer.</p>
<p>It had departed Roswell, New Mexico on March 25th and traveled 6,400 miles across the Gulf of Mexico and into Brazilian airspace before completing its mission on April 6th with a controlled descent into international waters. No pilot. No fuel. No stops. Just the sun, the wind, and a platform that is quietly rewriting what persistent aerial observation means for the planet.</p>
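<p>The mission figures support a quick sanity check. Using only the numbers in this article, the implied average ground speed is modest, consistent with a station-keeping platform rather than a transport vehicle:</p>

```python
# Sanity check on the mission figures quoted above:
# 6,400 miles covered over a 12-day flight (March 25 to April 6).
distance_miles = 6_400
duration_days = 12

avg_mph = distance_miles / (duration_days * 24)
print(f"average ground speed: {avg_mph:.1f} mph")  # about 22 mph
```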
<p>The company behind it is called Sceye — pronounced &#8220;sky&#8221; — and what it&#8217;s building may be one of the most consequential technologies nobody outside the aerospace world is paying attention to yet.</p>
<h4>What Sceye Actually Is</h4>
<p>The concept isn&#8217;t entirely new. The US government spent billions of dollars in the 1990s and early 2000s trying to build a stratospheric airship capable of sustained station-keeping at high altitude. Every attempt failed. The technology simply wasn&#8217;t there — the materials were too heavy, the batteries couldn&#8217;t store enough energy to survive the night, and the engineering challenges of maintaining pressure and position through the violent temperature swings of the day-night cycle defeated every program that tried.</p>
<p>What changed was materials science. Graphene and advanced composites made the envelope light enough. Lithium-sulfur batteries reaching 425 watt-hours per kilogram made night operations viable. Solar cell efficiency crossed a threshold that made the energy math work. Sceye&#8217;s founder, Mikkel Vestergaard Frandsen — a Danish social entrepreneur better known for building mosquito net and water purification businesses for developing nations — read about high-altitude platform systems through a NASA technology transfer program and realized that several previously impossible things had quietly become possible.</p>
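<p>The role of battery energy density is easiest to see in a rough night-survival budget. In the sketch below, only the 425 watt-hours per kilogram figure comes from this article; the battery mass, power draw, and night length are illustrative assumptions, not Sceye specifications:</p>

```python
# Rough night-survival energy budget for a stratospheric platform.
# Only the 425 Wh/kg battery figure comes from the article; battery
# mass, power draw, and night length are illustrative assumptions.
specific_energy_wh_per_kg = 425   # lithium-sulfur cells (from the article)
battery_mass_kg = 300             # assumed mass budget for batteries
avg_night_power_w = 8_000         # assumed draw: propulsion, avionics, payload
night_hours = 14                  # assumed stratospheric night

stored_wh = specific_energy_wh_per_kg * battery_mass_kg  # 127,500 Wh on board
needed_wh = avg_night_power_w * night_hours              # 112,000 Wh to survive
margin = stored_wh / needed_wh
print(f"night energy margin: {margin:.2f}x")             # about 1.14x
```

<p>Run the same sketch at an earlier-generation 250 watt-hours per kilogram and the margin drops well below one; that is the arithmetic that defeated the failed programs of the 1990s and 2000s.</p>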
<p>He started the company in 2014, built a nine-foot prototype in 2016, scaled iteratively to 70 feet, then larger, then larger again. By 2021 the airship reached the stratosphere for the first time. In 2024 it completed a full day-night power cycle in the stratosphere — the milestone that proved the energy system actually worked. This spring&#8217;s 12-day mission to Brazil was the next step: proving the platform could sustain operations over multiple day-night cycles far from home base, over a range of atmospheric conditions, at stratospheric altitude.</p>
<p>The company&#8217;s first pre-commercial test flight is scheduled for this summer in Japan, in partnership with SoftBank, demonstrating high-speed connectivity from the stratosphere for emergency and disaster response. The long-term goal is an airship that can stay aloft for up to 365 days continuously.</p>
<p>A permanent, solar-powered platform hovering at the edge of space. Watching everything below it. Forever.</p>
<div id="attachment_1041737" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041737" class="wp-image-1041737 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5473.webp" alt="" width="1920" height="1080" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5473.webp 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5473-1280x720.webp 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5473-980x551.webp 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5473-480x270.webp 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041737" class="wp-caption-text">Stratospheric platforms fill the gap: satellite-scale coverage with aircraft-level detail—unlocking real-time monitoring, connectivity, and surveillance across regions no system could previously reach.</p></div>
<h4>Where the Biggest Opportunities Are</h4>
<p>The stratosphere sits at a uniquely valuable altitude. High enough to see hundreds of square miles simultaneously. Low enough to resolve detail that satellites cannot. Above commercial air traffic. Above the weather. Below the orbital mechanics that require satellites to keep moving rather than staying fixed over one location.</p>
<p>That combination unlocks applications that neither satellites nor conventional aircraft can currently serve.</p>
<p>Environmental monitoring is the one that&#8217;s already generating commercial traction. Sceye&#8217;s infrared sensors can detect methane emissions from oil and gas operations with a resolution of one meter — compared to the European Space Agency&#8217;s Sentinel-5 satellite, which sees methane in pixels each representing seven square kilometers. The difference is the difference between knowing there&#8217;s a pollution problem in a region and knowing that well number 62 from a specific company has been leaking 68 kilograms of methane per hour for the last twelve minutes. In a test flight over New Mexico last year, Sceye identified a single super-emitter in Texas releasing an estimated 1,000 kilograms of methane per hour — the equivalent of 210,000 cars running simultaneously. That data went to the EPA.</p>
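<p>The resolution gap described above is straightforward to quantify from the figures quoted:</p>

```python
# How many 1-meter ground cells fit inside one satellite methane pixel,
# using the figures quoted above (1 m resolution vs. pixels covering
# seven square kilometers).
sceye_cell_m2 = 1.0                  # 1 m x 1 m
satellite_pixel_m2 = 7 * 1_000_000  # 7 km^2 expressed in square meters

cells_per_pixel = satellite_pixel_m2 / sceye_cell_m2
print(f"{cells_per_pixel:,.0f} ground cells per satellite pixel")  # 7,000,000
```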
<p>Wildfire detection is equally compelling. A persistent stratospheric platform can watch thousands of square miles continuously, identifying heat signatures and smoke patterns within minutes of ignition rather than waiting for a satellite pass or a fire lookout to spot the smoke column. In a world where wildfire behavior is becoming increasingly severe and unpredictable, early detection measured in minutes rather than hours changes the entire response calculus.</p>
<p>Telecommunications access for underserved regions is the application that attracted SoftBank. Sceye&#8217;s SceyeCELL antenna performs real-time beamforming from the stratosphere, delivering high-speed connectivity across vast areas without the infrastructure investment that terrestrial networks require. For regions with no cell towers — remote coastlines, disaster zones, island nations, frontier territories — a single Sceye platform provides coverage that would otherwise require years and billions of dollars of ground-based infrastructure to build.</p>
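<p>Beamforming itself is a standard phased-array computation: each antenna element is given a phase offset so that signals add coherently in one chosen direction and cancel elsewhere. The sketch below is a generic delay-and-sum example with illustrative element counts and spacing, not Sceye&#8217;s implementation:</p>

```python
# Minimal delay-and-sum beamforming sketch: a uniform linear array of
# antenna elements phased so their contributions add coherently toward
# one chosen angle. Element count, spacing, and steering angle are
# illustrative assumptions.
import cmath
import math

N = 16                 # number of antenna elements (assumed)
d_over_lambda = 0.5    # element spacing of half a wavelength
steer_deg = 30.0       # direction the beam is steered toward

def array_gain(theta_deg: float) -> float:
    """Magnitude of the summed element responses toward theta_deg."""
    total = 0j
    for n in range(N):
        # Phase difference between element n and element 0 for a signal
        # arriving from theta_deg, after applying the steering weights.
        phase = 2 * math.pi * d_over_lambda * n * (
            math.sin(math.radians(theta_deg)) - math.sin(math.radians(steer_deg))
        )
        total += cmath.exp(1j * phase)
    return abs(total)

# Gain peaks (at the full element count N) exactly at the steered angle
# and falls off away from it.
print(array_gain(30.0))                    # 16.0
print(array_gain(0.0) < array_gain(30.0))  # True
```

<p>Steering the beam means changing only the phase weights in software, with no moving parts, which is what makes real-time coverage shaping from a floating platform practical.</p>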
<p>Beyond those, the platform is a natural fit for maritime surveillance — tracking illegal fishing, monitoring shipping traffic, watching for smuggling across vast ocean expanses where no radar network reaches. Precision agriculture monitoring across regional scales. Hurricane and severe weather observation from above the storms rather than through them. Arctic and Antarctic observation for climate science. Border monitoring. Oceanic health assessment. The list extends as far as the applications of persistent, high-resolution, wide-area observation extend — which is very far indeed.</p>
<h4>The Dangers Worth Understanding</h4>
<p>A platform this capable of observation is also, by definition, a surveillance platform. The same technology that watches methane leaks can watch people. The same resolution that identifies a specific gas well can identify a specific vehicle, a specific individual, a specific gathering of people.</p>
<p>The question of who operates these platforms, under what legal framework, with what oversight, and with what constraints on the data collected is not a question the technology answers. It&#8217;s a question societies will need to answer before the platforms become as common as the applications suggest they could become. The regulatory frameworks for stratospheric platforms are almost entirely undeveloped. The airship operates above the altitude where standard aviation regulations apply, but below the altitude of orbital satellites, which have their own legal regime. The gap between those two frameworks is where Sceye currently operates, and it&#8217;s a gap that will require deliberate governance as the industry matures.</p>
<p>There are physical risks too. A 270-foot helium-filled vehicle descending from 52,000 feet carries real hazard to whatever is below it if something goes wrong. Sceye&#8217;s controlled termination of this mission into international waters reflects careful planning around that risk — but commercial operations over populated areas, ocean shipping lanes, and sensitive ecosystems will require failure mode analysis that goes well beyond current aviation standards.</p>
<p>Helium supply is a genuine long-term constraint. Helium is a finite, non-renewable resource. A world with thousands of stratospheric airships in continuous operation would put significant pressure on helium supply chains that are already under strain. Hydrogen is the obvious alternative — vastly more abundant, producible renewably — but hydrogen&#8217;s history with lighter-than-air flight includes a disaster that shaped public perception of the technology for nearly a century. Solving the hydrogen safety problem for stratospheric operations is technically tractable but not trivial.</p>
<h4>When Will Average People Experience This?</h4>
<p>The honest answer is that average people will experience the effects of this technology long before they experience it directly.</p>
<p>The methane monitoring that prevents a super-emitter from pumping greenhouse gases for months undetected — that&#8217;s an invisible benefit most people will never attribute to a stratospheric airship. The wildfire that gets caught at two acres instead of two thousand because a persistent platform spotted the heat signature at 3am — same thing. The disaster response that delivers cellular connectivity to a hurricane-devastated community within hours of the storm passing — real and significant, largely invisible.</p>
<p>Direct public access to stratospheric platforms is further out. The technology is currently uncrewed and purpose-built for observation and communications payloads. There is no Sceye equivalent of the passenger jet — the airship&#8217;s operating environment is simply too hostile for human occupants without engineering investments that haven&#8217;t been made and aren&#8217;t currently planned.</p>
<p>The path to passenger stratospheric vehicles exists — several companies are working on stratospheric balloons that can carry small groups to near-space altitudes — but those are fundamentally different vehicles from what Sceye is building. Sceye&#8217;s platform is infrastructure, not transportation. It&#8217;s the equivalent of a cell tower or a weather satellite, not an aircraft.</p>
<p>What&#8217;s more likely is that within five to ten years, the applications Sceye enables become woven into infrastructure people interact with daily. Environmental compliance systems. Emergency response networks. Agricultural monitoring services. Maritime tracking. The stratospheric platform disappears into the background as invisible infrastructure — noticed only in the results it makes possible, not in the vehicle floating 52,000 feet above your head.</p>
<div id="attachment_1041733" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041733" class="wp-image-1041733 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5477.jpg" alt="" width="1920" height="1080" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5477.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5477-1280x720.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5477-980x551.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Sceye-Airship-5477-480x270.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041733" class="wp-caption-text">A decade of quiet iteration beat billions in failed attempts. Now solar airships work—and soon, they’ll change how the world is continuously observed.</p></div>
<h4>The Bigger Picture</h4>
<p>What Sceye has accomplished — a solar-powered, autonomous airship maintaining stratospheric altitude through multiple day-night cycles across 6,400 miles of open ocean and varied atmospheric conditions — is an engineering achievement that deserved significantly more attention than it received.</p>
<p>The US government spent decades and billions of dollars failing to do exactly this. A startup from New Mexico, founded by a Danish humanitarian entrepreneur with no aerospace background, figured out why those attempts failed, waited for the materials science to catch up, and built the thing iteratively and carefully over ten years.</p>
<p>The applications are real. The technology works. The commercial deployment is imminent.</p>
<p>Somewhere above Japan this summer, a 270-foot solar airship will begin demonstrating what persistent stratospheric observation looks like at scale. The world below it will look quite different through those sensors than it does from any vantage point we&#8217;ve had before.</p>
<h4>Related Reading</h4>
<h5><a href="https://www.itu.int/en/ITU-R/space/haps/">High-Altitude Platform Systems and the Future of Global Connectivity</a></h5>
<p><em>International Telecommunication Union</em> — The technical and regulatory framework for high-altitude platform systems, including frequency allocations, altitude definitions, and the international coordination challenges of stratospheric operations</p>
<h5><a href="https://www.nature.com/articles/d41586-022-methane-detection">The Methane Hunters: How New Technology Is Finding Invisible Emissions</a></h5>
<p><em>Nature</em> — A comprehensive look at the technology landscape for methane detection — satellites, drones, aircraft, and ground sensors — and why persistent stratospheric monitoring fills a gap none of the others can</p>
<h5><a href="https://www.brookings.edu/articles/high-altitude-surveillance-governance/">The Surveillance Problem at the Edge of Space</a></h5>
<p><em>Brookings Institution</em> — An examination of the legal and governance vacuum around stratospheric observation platforms, and what regulatory frameworks will need to develop before the technology can be deployed responsibly at scale</p>
<p>The post <a href="https://futuristspeaker.com/future-of-transportation/the-airship-that-watches-everything-and-never-has-to-land/">The Airship That Watches Everything — and Never Has to Land</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Cyberattack That Could Shake the World Is No Longer a Thought Experiment</title>
		<link>https://futuristspeaker.com/artificial-intelligence/the-cyberattack-that-could-shake-the-world-is-no-longer-a-thought-experiment/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Sun, 12 Apr 2026 19:31:03 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Predictions]]></category>
		<category><![CDATA[Technology Trends]]></category>
		<category><![CDATA[ai industry]]></category>
		<category><![CDATA[cyber security]]></category>
		<category><![CDATA[llms]]></category>
		<category><![CDATA[potential attack]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041715</guid>

					<description><![CDATA[<p>When Altman warns of a world-shaking cyberattack, it’s not hype—it’s a signal. The capability curve is outrunning preparedness, and the gap is widening fast. By Futurist Thomas Frey Sam Altman doesn&#8217;t rattle easily. The man has spent years at the center of the most consequential technological development in human history, fielding questions about existential risk [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-cyberattack-that-could-shake-the-world-is-no-longer-a-thought-experiment/">The Cyberattack That Could Shake the World Is No Longer a Thought Experiment</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">When Altman warns of a world-shaking cyberattack, it’s not hype—it’s a signal. The capability curve is outrunning preparedness, and the gap is widening fast.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>Sam Altman doesn&#8217;t rattle easily.</p>
<p>The man has spent years at the center of the most consequential technological development in human history, fielding questions about existential risk with the calm of someone who has thought about it longer and harder than most of his critics. So when he sits down for an Axios interview in early April 2026 and says that a &#8220;world-shaking cyberattack&#8221; this year is &#8220;totally possible,&#8221; it&#8217;s worth putting down whatever you&#8217;re doing and paying attention.</p>
<p>This isn&#8217;t hype. It isn&#8217;t positioning. It&#8217;s a warning from someone who sees the capability curve up close — and who understands that the gap between what these systems can do and what the world is prepared for is widening faster than most people realize.</p>
<h4>What Changed, and When</h4>
<p>For most of the history of cybersecurity, large-scale attacks required one of two things: a nation-state with the resources to field an elite hacking team, or a criminal organization with years of accumulated expertise and operational infrastructure. Both existed. Both caused significant damage. But they were constrained by the fundamental bottleneck of human skill — finding the right vulnerabilities, writing the right exploit code, coordinating the right campaign required people who had spent years developing rare capabilities.</p>
<p>AI has just removed that bottleneck.</p>
<p>What once required an elite team can now be automated or AI-assisted: vulnerability discovery, exploit generation, reconnaissance, highly personalized phishing in any language, malware that iterates to evade detection, and full attack chains that connect multiple exploits into a coordinated campaign. According to Red Canary, adversaries are already using large language models for 80 to 90 percent of tactical operations in espionage campaigns. IBM reported a 44 percent spike in public-facing application exploits in 2026, driven in significant part by AI-assisted attacks. Trend Micro has called this year &#8220;the AI-fication of cyberthreats.&#8221;</p>
<p>This is not a future threat. It is the current situation, and it is accelerating.</p>
<h4>The Anthropic Model Nobody Gets to Use</h4>
<p>The detail that sharpens all of this from interesting to genuinely alarming came from Anthropic just days ago.</p>
<p>The company has developed a frontier AI model — internally designated Claude Mythos Preview — that can autonomously identify and exploit thousands of high-severity vulnerabilities across every major operating system, every major web browser, and key enterprise software systems. Including zero-days: previously unknown vulnerabilities that no patch exists for, that defenders have no warning about, that an attacker armed with this capability could use before anyone knows they&#8217;re there.</p>
<p>Anthropic is not releasing this model publicly. They know exactly what it represents. Instead, they&#8217;re sharing limited access with cybersecurity firms through a program called Project Glasswing — a race against time to use the model&#8217;s offensive capability defensively, patching the vulnerabilities it finds before a bad actor with similar capability finds them independently.</p>
<p>Read that again. The AI company that built the model decided the responsible thing to do was not release it, and is instead running a controlled program to use its attack capability for defense. That&#8217;s a remarkable level of institutional seriousness about what this technology can do. It&#8217;s also a signal about where the capability frontier actually sits right now — not where people imagine it will be in five years, but where it is today.</p>
<div id="attachment_1041718" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041718" class="wp-image-1041718 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4276.jpg" alt="" width="1920" height="1076" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4276.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4276-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4276-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4276-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041718" class="wp-caption-text">AI removes limits on cyberattacks—scale, speed, and reach explode. Against aging, vulnerable infrastructure, the risk isn’t theoretical anymore. It’s already within reach.</p></div>
<h4>The Scale Problem</h4>
<p>Here&#8217;s what makes this different from every previous wave of cybersecurity concern.</p>
<p>Past attacks, even sophisticated ones, were constrained by human bandwidth. A team of hackers, however skilled, could only run so many campaigns simultaneously. They had to choose targets, allocate resources, manage operations. The attack surface they could cover at any given time was finite.</p>
<p>AI removes that constraint. A sufficiently capable model can scan massive codebases simultaneously, run parallel campaigns against multiple targets, generate exploit variants faster than detection systems can update their signatures, and do all of this continuously without fatigue. The attack surface that a nation-state or well-resourced criminal organization can cover with AI assistance is orders of magnitude larger than what was possible before.</p>
<p>Altman&#8217;s specific concern — a coordinated disruption of critical infrastructure, finance, or supply chains — is the scenario that keeps defense experts up at night. Not because it requires some theoretical future capability, but because the capability to attempt it exists right now, and the systems it would target were largely not designed to withstand this kind of assault.</p>
<p>Defense expert John Arquilla, responding to Altman&#8217;s warning, called the risks &#8220;certainly real&#8221; and pointed to something that doesn&#8217;t get enough attention: our baseline cybersecurity is already poor. Most of the infrastructure that runs critical systems — power grids, water treatment, financial networks, healthcare systems — runs on software that is old, under-maintained, and riddled with vulnerabilities that haven&#8217;t been patched because the organizations running these systems don&#8217;t have the resources or the urgency to patch them. Add AI-assisted offensive capability to that landscape and the arithmetic gets uncomfortable very quickly.</p>
<h4>The Arms Race Is Already On</h4>
<p>The one genuinely encouraging part of this picture is that defenders are using AI too.</p>
<p>Anomaly detection that would have taken human analysts days to surface is now happening in near real time. Automated patching systems are closing vulnerabilities faster than before. The same capability that makes offensive AI powerful also makes defensive AI more capable — scanning environments for weaknesses, identifying unusual patterns, responding to incidents faster than any human team could.</p>
<p>But here&#8217;s the honest assessment: right now, the offense has the advantage. Attacking is inherently easier than defending. An attacker needs to find one way in; a defender needs to close every way in. AI amplifies that asymmetry. The attacker&#8217;s AI is scanning your entire surface looking for one opening. Your defensive AI is trying to monitor the entire surface at once. In a resource-constrained environment — and most organizations operate in one — offense wins more often.</p>
<p>That gap will close. The tools are improving on both sides. But the window we&#8217;re in right now, before defensive AI catches up to offensive AI at scale, is the window Altman is worried about. It&#8217;s the window Anthropic is running Project Glasswing to address. It&#8217;s the window that cybersecurity reports from IBM, Red Canary, PwC, Trend Micro, and Health-ISAC are all, independently, identifying as the highest-risk period in the history of digital infrastructure.</p>
<div id="attachment_1041721" style="width: 1354px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041721" class="wp-image-1041721 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4273.jpg" alt="" width="1344" height="896" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4273.jpg 1344w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4273-1280x853.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4273-980x653.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Open-AI-Cyber-Attack-4273-480x320.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1344px, 100vw" /><p id="caption-attachment-1041721" class="wp-caption-text">AI threats aren’t inevitable—they’re manageable. Basic security now carries real weight. What was best practice yesterday is mission-critical today. The difference is urgency, not possibility.</p></div>
<h4>What This Actually Means</h4>
<p>There is a version of this conversation that slides into fatalism — the technology is too powerful, the surface is too large, the bad actors are too motivated, nothing can be done. That version is wrong, and it&#8217;s counterproductive.</p>
<p>What can be done at the individual and organizational level is real and meaningful. Strong multi-factor authentication. Network segmentation that limits the blast radius of any single breach. AI-aware monitoring that looks for the behavioral signatures of AI-assisted attacks, which are different from the signatures of human-operated ones. Vulnerability management programs that treat patching as a continuous function rather than a periodic maintenance task. Tabletop exercises that game out the specific scenarios — coordinated infrastructure attack, supply chain compromise, simultaneous multi-vector campaign — that AI capability makes more plausible.</p>
<p>None of that is new advice. What&#8217;s new is the urgency. The same recommendations that were good practice last year are now load-bearing. The organizations that treated basic cybersecurity hygiene as optional or aspirational are carrying real and growing risk.</p>
<p>At the policy level, the conversation about AI governance, vulnerability disclosure, and international norms around AI-enabled offensive capability needs to move faster than it has been. Altman is pushing for exactly this. The Anthropic approach with Project Glasswing — coordinated defensive disclosure before offensive capability spreads — is one model. It won&#8217;t be sufficient at scale, but it&#8217;s a serious attempt to use the technology responsibly in a moment when responsible use is genuinely difficult to define.</p>
<h4>The Bottom Line</h4>
<p>Sam Altman said a world-shaking cyberattack is totally possible this year. Anthropic built a model capable of finding vulnerabilities across every major operating system and decided not to release it. IBM, Red Canary, and Trend Micro are all saying the same thing from the outside that the AI labs are saying from the inside.</p>
<p>The window is open. The capability exists. The baseline defenses are insufficient.</p>
<p>That&#8217;s not a reason to panic. It&#8217;s a reason to move. The organizations and governments that treat this as a high-priority operational reality right now — not a planning exercise, not a future scenario — are the ones that will be in a defensible position when the window either closes or something comes through it.</p>
<p>The threat is real. The preparation is optional.</p>
<p>For now.</p>
<h4>Related Reading</h4>
<h5><a href="https://www.ibm.com/reports/threat-intelligence">IBM X-Force Threat Intelligence Index 2026</a></h5>
<p><em>IBM Security</em> — The most comprehensive annual analysis of the current threat landscape, including detailed data on the role AI is playing in accelerating attack capability across industries</p>
<h5><a href="https://www.rand.org/topics/cybersecurity.html">AI and the Future of Cyber Conflict</a></h5>
<p><em>RAND Corporation</em> — A rigorous examination of how AI is reshaping the balance between offensive and defensive cyber capability, and what the policy implications are for governments and critical infrastructure operators</p>
<h5><a href="https://www.brookings.edu/articles/the-defenders-dilemma-charting-a-course-toward-cybersecurity/">The Defender&#8217;s Dilemma: Why Cyber Defense Is Structurally Harder Than Offense</a></h5>
<p><em>Brookings Institution</em> — An honest accounting of why the attack-defense asymmetry in cybersecurity is real, persistent, and now being amplified by AI — and what it would actually take to change it</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-cyberattack-that-could-shake-the-world-is-no-longer-a-thought-experiment/">The Cyberattack That Could Shake the World Is No Longer a Thought Experiment</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Colossal Foundation: Building the Noah&#8217;s Ark Nobody Else Is Building</title>
		<link>https://futuristspeaker.com/artificial-intelligence/the-colossal-foundation-building-the-noahs-ark-nobody-else-is-building/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Thu, 09 Apr 2026 02:20:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Future Scenarios]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Global Trends]]></category>
		<category><![CDATA[Colossal BioVault]]></category>
		<category><![CDATA[Colossal Foundation]]></category>
		<category><![CDATA[de extinction]]></category>
		<category><![CDATA[sixth mass extinction]]></category>
		<category><![CDATA[Svalbard Global Seed Vault]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041705</guid>

					<description><![CDATA[<p>10,000 species vanish yearly—mostly unnoticed. While extinction accelerates, the real mission isn’t revival—it’s preservation before what’s left disappears beyond recovery. By Futurist Thomas Frey Here&#8217;s a number that deserves more attention than it gets. Up to 10,000 species go extinct every year. Not every decade — every year. Scientists call what&#8217;s happening right now the [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-colossal-foundation-building-the-noahs-ark-nobody-else-is-building/">The Colossal Foundation: Building the Noah&#8217;s Ark Nobody Else Is Building</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
<content:encoded><![CDATA[<p style="text-align: center;">10,000 species vanish yearly—mostly unnoticed. While extinction accelerates,<br />
the real mission isn’t revival—it’s preservation before what’s left disappears beyond recovery.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>Here&#8217;s a number that deserves more attention than it gets.</p>
<p>Up to 10,000 species go extinct every year. Not every decade — every year. Scientists call what&#8217;s happening right now the sixth mass extinction, and unlike the five that came before it, this one has a clear cause. Us. Human activity — habitat destruction, climate change, invasive species, pollution — has pushed the rate of extinction to more than 100 times the natural background level. The natural world is disappearing faster than any previous generation of humans has witnessed, and most of us are barely aware it&#8217;s happening.</p>
<p>Ben Lamm is aware. He&#8217;s been aware for years. And while Colossal Biosciences gets most of the headlines for what it does — bringing back extinct animals — the Colossal Foundation, the nonprofit that operates alongside it, may be doing something more immediately important: trying to make sure we don&#8217;t lose what we still have.</p>
<h4>The Insurance Policy</h4>
<p>In October 2024, Lamm launched the Colossal Foundation as a 501(c)(3) with $50 million in initial funding. By the end of 2025 he&#8217;d doubled that to $100 million. The mandate is broad — using Colossal&#8217;s technologies for conservation globally — but the centerpiece is a concept Lamm describes with characteristic directness.</p>
<p>&#8220;You need to have a biobank of every single species,&#8221; he told The Hollywood Reporter. &#8220;Kind of like a 2025 and beyond Noah&#8217;s Ark. We need that on a cellular level.&#8221;</p>
<p>A biobank, in this context, is a cryogenic repository of genetic material. Tissue samples. Cell lines. DNA. Preserved at temperatures cold enough to keep biological material viable indefinitely — a physical archive of life, stored against the possibility that it might one day need to be used.</p>
<p>The idea isn&#8217;t entirely new. Seed banks have existed for decades. The Svalbard Global Seed Vault, buried in Arctic permafrost, holds nearly 1.4 million seed varieties as insurance against agricultural catastrophe. What Colossal is building is the equivalent for animal life — not seeds, but cells. Not plants, but the full biological heritage of species that are still alive today but may not be for long.</p>
<p>The infrastructure for this is called the Colossal BioVault network — a distributed system of biobanking facilities designed to store cell lines within the countries where the species actually live, respecting national sovereignty and local scientific capacity while building a global genetic safety net. In February 2026, Lamm launched the world&#8217;s first BioVault at the Museum of the Future in Dubai, during the World Governments Summit — choosing that venue specifically because he wanted the facility to have an educational component. He wanted children to be able to walk in and understand what it is and why it matters.</p>
<p>&#8220;I do not believe that people understand the extinction crisis we&#8217;re in,&#8221; he said at the summit. &#8220;We are in the sixth mass extinction, which is being accelerated by man.&#8221;</p>
<div id="attachment_1041711" style="width: 980px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041711" class="size-full wp-image-1041711" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4443.jpg" alt="" width="970" height="866" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4443.jpg 970w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4443-480x429.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) 970px, 100vw" /><p id="caption-attachment-1041711" class="wp-caption-text">Ben Lamm building a prototype of the Colossal BioVault</p></div>
<h4>What&#8217;s Already Happened</h4>
<p>The Foundation isn&#8217;t just building infrastructure. It&#8217;s already doing the work.</p>
<p>In 2025, it successfully cloned four ancestral &#8220;ghost wolves&#8221; from the American Gulf Coast — individuals carrying up to 72% red wolf ancestry, representing some of the last remaining genetic threads of one of the most endangered wolf species on Earth. The red wolf recovery program had been struggling for years with hybridization, declining numbers, and institutional stagnation. Colossal&#8217;s non-invasive cloning technology — which isolates what are called endothelial progenitor cells from blood rather than requiring invasive procedures — gave conservationists a new tool that the existing program simply didn&#8217;t have.</p>
<p>The Foundation also produced the first complete red wolf reference genome, which is the foundational genetic map that future restoration work will rely on. And it backed the development of the world&#8217;s first mRNA vaccine for elephant endotheliotropic herpesvirus — EEHV — a disease that has been killing young Asian elephants with no effective treatment for decades. When two vaccinated elephants at the Cincinnati Zoo were naturally exposed to the virus in 2025, both cleared the exposure without developing illness. That is not a footnote. That is a vaccine working exactly as hoped on one of the most imperiled large mammals on Earth.</p>
<p>The Foundation committed $3 million to fighting chytrid — a lethal fungal disease attacking amphibian populations globally that Lamm describes as one of the biggest drivers of extinction most people have never heard of. It partnered with the Karankawa Tribe of Texas to honor the cloning of the first red wolf pup with an indigenous naming ceremony. It acquired ViaGen Pets and Equine — the world&#8217;s leading animal cloning company, which has already successfully cloned 15 species with a success rate approaching 80% and biobanked more than 40 species including rhinos and critically endangered rodents — and brought its entire operation under the Foundation&#8217;s conservation umbrella.</p>
<h4>The Logic Behind the Insurance</h4>
<p>There&#8217;s a phrase Lamm comes back to repeatedly in interviews: it&#8217;s always cheaper and easier and more efficient to protect a species than to bring it back. De-extinction is extraordinary. It proves what&#8217;s possible. But it&#8217;s also the most expensive, most time-consuming, most technically demanding option on the menu. The BioVault network and the Foundation&#8217;s biobanking work exist specifically to avoid ever needing to use those options.</p>
<p>The analogy that makes the most sense is fire insurance. You buy it not because you expect your house to burn down, but because the cost of the policy is so much lower than the cost of losing everything. A biobank is the same idea at planetary scale. The cost of preserving a species&#8217; genetic material while it still exists is a fraction of the cost — scientific, financial, moral — of trying to reconstruct it from ancient DNA after it&#8217;s gone.</p>
<p>The Foundation is building that policy for every species it can reach. The &#8220;Colossal 100&#8221; list — the 100 most imperiled species it has committed to biobank — hasn&#8217;t been publicly released, but the current project list includes the Sumatran rhinoceros, the northern white rhino, the vaquita, the Javan rhino, the northern quoll, the pink pigeon, and the African forest elephant, among others. These are animals that are, right now, slipping toward the kind of genetic bottleneck that makes recovery enormously difficult even with the best tools available.</p>
<div id="attachment_1041708" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041708" class="wp-image-1041708 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4446.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4446.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4446-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4446-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Foundation-4446-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041708" class="wp-caption-text">From revival to prediction, a system emerges: make extinction optional. The ambition is enormous—but what’s already been built proves speed may be the only thing that matters.</p></div>
<h4>Why This Is the Piece That Makes Everything Else Make Sense</h4>
<p>We&#8217;ve spent four weeks in this series tracing the architecture of what Ben Lamm is building. Colossal is the genomic platform. Form Bio is the scientific software. Breaking is the ecosystem cleanup tool. Astromech is the predictive intelligence layer.</p>
<p>The Foundation is the mission statement.</p>
<p>Everything else — all the technology, all the capital, all the scientific breakthroughs — points toward a single underlying goal: a world in which biodiversity is not simply allowed to collapse because no one was organized enough, or fast enough, or technically capable enough to stop it. A world in which extinction is, as Lamm has said, optional. Not inevitable.</p>
<p>That&#8217;s a large ambition. Large enough that you could be forgiven for being skeptical of it. But look at what has actually happened in five years. Three dire wolf pups are alive. An mRNA vaccine is protecting Asian elephants. The genome of the Tasmanian tiger is reconstructed. A microbe is eating plastic in a laboratory. A predictive biology platform is being built from the world&#8217;s most comprehensive genomic database. A network of genetic vaults is spreading across the globe, starting at the Museum of the Future in Dubai.</p>
<p>None of this was inevitable. All of it required someone deciding to build it — and then actually building it, faster than anyone thought was possible, in a way that generated real science and real tools and real outcomes.</p>
<p>The sixth mass extinction is the most important story nobody is paying sufficient attention to. The Colossal Foundation is not going to stop it alone. But it is doing something that most of the conservation world hasn&#8217;t managed to do: it&#8217;s moving fast enough to matter.</p>
<p>And in a crisis measured in species lost per year, fast enough to matter is the most important thing there is.</p>
<h4>Related Reading</h4>
<h5><a href="https://www.nationalgeographic.com/environment/article/sixth-mass-extinction">The Sixth Mass Extinction Is Here. What Does That Mean?</a></h5>
<p><em>National Geographic</em> — A clear-eyed look at the scale and pace of the current extinction crisis, the human drivers behind it, and why scientists consider it the defining environmental challenge of our time</p>
<h5><a href="https://www.smithsonianmag.com/science-nature/svalbard-global-seed-vault-180968198/">What the Svalbard Seed Vault Teaches Us About Preserving Life</a></h5>
<p><em>Smithsonian Magazine</em> — The story of the world&#8217;s most famous genetic insurance policy, and what it reveals about the logic — and the limits — of trying to preserve biodiversity through cold storage</p>
<h5><a href="https://e360.yale.edu/features/can-de-extinction-save-the-earths-ecosystems">Can De-Extinction Save Ecosystems — or Just Species?</a></h5>
<p><em>Yale Environment 360</em> — The ecological argument for restoration biology: whether returning lost species can genuinely repair damaged ecosystems, and what the science actually says about keystone species and biodiversity recovery</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/the-colossal-foundation-building-the-noahs-ark-nobody-else-is-building/">The Colossal Foundation: Building the Noah&#8217;s Ark Nobody Else Is Building</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Astromech: What If You Could Predict How Biology Changes Before It Does?</title>
		<link>https://futuristspeaker.com/artificial-intelligence/astromech-what-if-you-could-predict-how-biology-changes-before-it-does/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 21:23:13 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Future of Healthcare]]></category>
		<category><![CDATA[Future Scenarios]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[bio lab]]></category>
		<category><![CDATA[disease]]></category>
		<category><![CDATA[human healthcare]]></category>
		<category><![CDATA[livestock]]></category>
		<category><![CDATA[microbes]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041696</guid>

					<description><![CDATA[<p>A $2B company with no product, no revenue—just a goal: predict biology before it evolves. The next frontier isn’t editing life, it’s forecasting it. By Futurist Thomas Frey In September 2025, two SEC filings showed up quietly in a database that tracks new company formations. A Delaware corporation called Astromech had raised $30 million. No [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/astromech-what-if-you-could-predict-how-biology-changes-before-it-does/">Astromech: What If You Could Predict How Biology Changes Before It Does?</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">A $2B company with no product, no revenue—just a goal: predict biology<br />
before it evolves. The next frontier isn’t editing life, it’s forecasting it.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>In September 2025, an SEC filing showed up quietly in a database that tracks new company formations. A Delaware corporation called Astromech had raised $30 million. No press release. No announcement. No explanation of what it was building or why.</p>
<p>By March 2026, a second filing showed another $10.5 million had come in. Total funding: $40.5 million. Valuation: $2 billion. Still no revenue. Still almost no public information about what the company actually does.</p>
<p>The founders, it turned out, were Ben Lamm and George Church — the same two people who built Colossal Biosciences into a $10 billion de-extinction company. And when Lamm finally described what Astromech is trying to do, the ambition was staggering even by his standards.</p>
<p>He wants to build a machine that can predict how biology will change — before it changes.</p>
<h4>What Astromech Is Building</h4>
<p>Think about what a weather forecast actually does. It takes data about current conditions — temperature, pressure, humidity, wind patterns — feeds it through models built on decades of atmospheric science, and produces a prediction about what the atmosphere will do next. The forecast isn&#8217;t perfect. But it&#8217;s good enough to be genuinely useful. Good enough that we&#8217;ve built entire industries around it.</p>
<p>Astromech is trying to do something similar for biology.</p>
<p>The platform, as Lamm has described it, combines two capabilities. The first is deep learning algorithms that identify patterns across biological systems and species — patterns in how genes are expressed, how diseases spread, how organisms respond to environmental change, how vulnerabilities develop over time. The second is something called Bayesian ancestral reconstruction, which is a mathematical method for working backward through evolutionary history to model how a biological system got to where it is — and then forward, to project where it&#8217;s likely to go next.</p>
<p>Put those two things together and you get what Lamm calls a unified biological intelligence architecture. A system that doesn&#8217;t just describe biology as it is today, but predicts where it&#8217;s headed.</p>
<p>If it works, the applications are almost too broad to list. Disease risk. Pandemic early warning. Drug resistance forecasting. Agricultural vulnerability assessment. Conservation biology planning. Wildlife health monitoring. Livestock resilience modeling. The question of which specific pathogens are most likely to cause problems five years from now. The question of which ecosystems are most likely to collapse and why.</p>
<p>&#8220;If the model works the way we anticipate,&#8221; Lamm said, &#8220;it will be transformative for prediction modeling that will impact vulnerability and resilience applicable to microbes, human healthcare, disease, livestock, and wildlife.&#8221;</p>
<p>That is a sentence that covers almost every living system on Earth.</p>
<h4>Where This Came From</h4>
<p>Astromech did not appear from nowhere. It grew directly out of the work Colossal has been doing since 2021.</p>
<p>Think about what Colossal has actually built over the past five years. A genomic database containing the DNA of extinct and living species at a depth and breadth that has never existed before. Computational tools — many of them now commercialized through Form Bio — for analyzing massive biological datasets. A scientific team that thinks routinely about evolutionary timescales, about how species changed over thousands of years, about the genetic mechanisms that drive adaptation and vulnerability. A set of techniques for reading ancient DNA and comparing it to living genomes to identify what changed and when.</p>
<p>All of that is, at its core, the raw material for exactly what Astromech is trying to build. A model that has been trained on the history of biological change across deep time — one that can look at a living system and say: based on everything we know about how biology evolves, here is what we expect to happen next, and here is where the system is most vulnerable.</p>
<p>Astromech is hiring for genomic inference, synthesis design, ancestral modeling, gene regulation, sequence reconstruction, metabolic modeling, and protein folding. The job listings read like a map of the exact scientific capabilities that Colossal spent four years assembling. The spinout isn&#8217;t a departure from the main mission. It&#8217;s the main mission&#8217;s most powerful tool, built out as its own company.</p>
<div id="attachment_1041698" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041698" class="wp-image-1041698 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1117.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1117.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1117-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1117-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1117-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041698" class="wp-caption-text">Medicine reacts after damage begins. Astromech aims to predict threats before they emerge—turning biology from a crisis response system into an early-warning engine for what comes next.</p></div>
<h4>The Problem It&#8217;s Solving</h4>
<p>One of the persistent frustrations of modern medicine and public health is that we are almost always reactive. A new pathogen emerges, and we scramble to understand it. A disease becomes drug-resistant, and we scramble to find alternatives. An ecosystem begins to collapse, and we scramble to identify the cause. The scrambling is expensive, slow, and often too late.</p>
<p>The COVID-19 pandemic made this painfully visible to the entire world. The virus existed in animal populations long before it crossed into humans. The genetic tools to identify it were available. The computational power to model its likely behavior was available. What wasn&#8217;t available was a system sophisticated enough to put those pieces together and say: this is coming, and this is what it will do.</p>
<p>Astromech is a direct response to that gap. Not the only response — there are other early-warning and pandemic-preparedness initiatives working on related problems. But it may be the most ambitious one, because it&#8217;s not just trying to spot specific known threats earlier. It&#8217;s trying to build a general model of biological vulnerability — one that could flag a threat that nobody has identified yet, because the model has recognized the pattern that precedes it.</p>
<p>That&#8217;s the difference between a smoke detector and a system that predicts where fires are most likely to start.</p>
<h4>Why This Valuation Makes Sense</h4>
<p>Two billion dollars for a company with no revenue and no product yet in the market sounds, on the surface, like the kind of number that raises eyebrows. But the valuation logic is straightforward if you understand the market.</p>
<p>The global pandemic preparedness market alone is measured in hundreds of billions of dollars, and governments around the world spent the last five years being reminded, painfully, what under-investment in early warning systems actually costs. Drug discovery — which predictive biology could accelerate dramatically by identifying drug resistance patterns before they become treatment failures — is a multi-trillion dollar industry. Agricultural biotech, conservation biology, livestock health management: each of these is a substantial market in its own right.</p>
<p>A platform that works across all of them, built on some of the most sophisticated genomic and evolutionary data ever assembled, co-founded by the team that just built a $10 billion company from scratch in four years — investors have seen enough from Lamm and Church to know the ambition is real. The question isn&#8217;t whether the idea is valuable. It&#8217;s whether the science will hold.</p>
<p>Lamm thinks it&#8217;s undervalued. That&#8217;s the kind of thing founders say. But he said the same thing about de-extinction in 2021, and three dire wolf pups are living on a farm somewhere right now as evidence that he wasn&#8217;t wrong.</p>
<div id="attachment_1041699" style="width: 1466px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041699" class="wp-image-1041699 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1116.jpg" alt="" width="1456" height="816" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1116.jpg 1456w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1116-1280x717.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1116-980x549.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Future-BioLab-1116-480x269.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1456px, 100vw" /><p id="caption-attachment-1041699" class="wp-caption-text">From revival to prediction—the tools keep expanding. Astromech’s bet isn’t fixing biology, but forecasting it, shifting humanity from reaction to anticipation at a planetary scale.</p></div>
<h4>The Biggest Bet Yet</h4>
<p>Each company in this series has been bigger than the one before it. Colossal brought back an extinct species. Form Bio built the operating system for a new era of biological research. Breaking developed a microbe that eats one of the most persistent pollutants in history. Each one started as a tool built to solve a specific problem, and became something larger than the problem that created it.</p>
<p>Astromech is the biggest bet in the portfolio. Not because the technology is further from reality — it&#8217;s actually built on real science with real precedents. But because the potential outcomes are the most consequential. A forecasting engine for biology, if it works the way Lamm describes, doesn&#8217;t just change one industry. It changes how humanity manages its relationship with the living world — from treating disease after it strikes to anticipating it before it forms.</p>
<p>That&#8217;s not a pharmaceutical company. That&#8217;s not a biotech company. That&#8217;s something new.</p>
<p><em>Up Next: The Colossal Foundation — the Noah&#8217;s Ark that Lamm is building at the cellular level, and what it means to preserve the genetics of every species before they&#8217;re gone.</em></p>
<h4>Related Reading</h4>
<h5><a href="https://www.scientificamerican.com/article/the-next-pandemic-could-come-from-anywhere/">The Next Pandemic Could Come From Anywhere. Here&#8217;s How Scientists Are Watching for It</a></h5>
<p><em>Scientific American</em> — How early-warning systems for biological threats actually work today, what their limitations are, and why predictive modeling is the frontier the field is racing toward</p>
<h5><a href="https://www.nature.com/articles/d41586-022-00997-5">AlphaFold and the AI Revolution in Biology</a></h5>
<p><em>Nature</em> — The story of how AI cracked one of biology&#8217;s hardest problems and what it opened up — the clearest existing precedent for what a truly powerful predictive biology platform could accomplish</p>
<h5><a href="https://www.quantamagazine.org/can-scientists-predict-evolution-20181017/">Can We Predict Evolution?</a></h5>
<p><em>Quanta Magazine</em> — A deep look at the science of evolutionary forecasting — what biologists have already shown is predictable about how living systems change, and where the real frontiers of the field lie</p>
<p>The post <a href="https://futuristspeaker.com/artificial-intelligence/astromech-what-if-you-could-predict-how-biology-changes-before-it-does/">Astromech: What If You Could Predict How Biology Changes Before It Does?</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Breaking: The Company Using Biology to Eat the Plastic Crisis</title>
		<link>https://futuristspeaker.com/future-of-healthcare/breaking-the-company-using-biology-to-eat-the-plastic-crisis/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 20:05:36 +0000</pubDate>
				<category><![CDATA[Future of Healthcare]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Predictions]]></category>
		<category><![CDATA[microplastics]]></category>
		<category><![CDATA[plastic problem]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041685</guid>

					<description><![CDATA[<p>Five billion tons of plastic already surrounds us—and growing. This isn’t waste; it’s accumulation without end. The real breakthrough will be how we undo it. By Futurist Thomas Frey There is a number that should stop you cold. Five thousand million tons. That&#8217;s how much plastic is currently sitting in landfills, floating in oceans, and [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/future-of-healthcare/breaking-the-company-using-biology-to-eat-the-plastic-crisis/">Breaking: The Company Using Biology to Eat the Plastic Crisis</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p style="text-align: center;">Five billion tons of plastic already surrounds us—and growing.<br />
This isn’t waste; it’s accumulation without end. The real breakthrough will be how we undo it.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>There is a number that should stop you cold.</p>
<p>Five thousand million tons.</p>
<p>That&#8217;s how much plastic is currently sitting in landfills, floating in oceans, and embedded in ecosystems around the world. Not the amount produced since plastic was invented — the amount that&#8217;s already out there, already dispersed, already working its way into the food chain and the water supply and the bodies of every living creature on Earth. Scientists have found plastic particles in Antarctic sea ice, in the deepest ocean trenches, and in human blood. A liter of bottled water contains, on average, nearly a quarter of a million nanoplastic fragments.</p>
<p>And every year, we add 390 million more tons to the pile.</p>
<p>The recycling system that was supposed to manage this — the one with the little arrows on the bottom of every container — handles roughly 9% of what gets produced. The rest is incinerated, buried, or abandoned. Incineration releases toxic gases. Burial means the plastic sits there for centuries. A plastic fishing line, left alone, takes 600 years to break down. A dental floss container, 80 years. A paintbrush, up to a thousand.</p>
<p>This is the problem that Breaking was built to solve. And the way they&#8217;re going about it is unlike anything that&#8217;s been tried before.</p>
<h4>A Microbe That Eats Plastic for Breakfast</h4>
<p>In 2022, researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University discovered something extraordinary in their lab. A microorganism — not engineered, just found — that could break down plastic by eating it. Not one type of plastic. Multiple types. Including polyolefins, which are the toughest plastics in common use, the ones that have historically resisted every biological degradation attempt on record.</p>
<p>The microbe was catalogued as X-32. And what it does is genuinely remarkable. It breaks down the hydrocarbon chains inside plastic polymers — the chemical bonds that make plastic so durable and so persistent — using those plastics as its primary food source. The byproducts are carbon dioxide, water, and biomass. No toxic residue. No microplastic fragments. Just the basic building blocks of organic chemistry, which the environment already knows how to handle.</p>
<p>In lab tests, X-32 started breaking down paintbrush bristles, fishing wire, and dental floss within five days. At scale, it has demonstrated the ability to degrade up to 90% of certain polyesters and polyolefins in under 22 months. In plastic terms, that is essentially instantaneous.</p>
<p>Breaking, the company that was spun out of Colossal Biosciences in April 2024, launched with $10.5 million in seed funding specifically to develop X-32 into a commercial product. The founding team reads like a who&#8217;s-who of synthetic biology: George Church from Harvard, Donald Ingber who founded the Wyss Institute, and CEO Sukanya Punthambaker, a career synthetic biologist who has spent decades working toward exactly this kind of breakthrough.</p>
<p>Ben Lamm co-founded Breaking and serves on its board. Kent Wakeford, whom you&#8217;ll remember as the co-CEO of Form Bio, is the executive chairman.</p>
<p>The pattern is the same. A tool built inside Colossal&#8217;s orbit, spun out when it became clear the problem it was solving was bigger than Colossal&#8217;s mission alone.</p>
<div id="attachment_1041694" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041694" class="wp-image-1041694 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5451.jpg" alt="" width="1920" height="1280" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5451.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5451-1280x853.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5451-980x653.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5451-480x320.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041694" class="wp-caption-text">You can’t restore life in a plastic-filled world. Cleanup isn’t separate from revival—it’s prerequisite. Fix the environment first, or nothing else we bring back will survive.</p></div>
<h4>Why This Connects to Everything Else</h4>
<p>Lamm has been direct about why a de-extinction company is in the plastic business. You cannot restore an ecosystem if the ecosystem is full of plastic. The northern white rhino, the woolly mammoth, the Tasmanian tiger — none of them can thrive in an environment saturated with synthetic polymers that their biology has no way to process. Ecosystem restoration and plastic remediation are not two separate goals. They&#8217;re the same goal looked at from different angles.</p>
<p>That framing matters because it explains why Breaking isn&#8217;t just an environmental startup that happened to spin out of a biotech company. It&#8217;s a mission-critical piece of Colossal&#8217;s larger puzzle — the piece that has to work before the rest of the restoration agenda can fully work.</p>
<p>The first commercial applications are targeted at the food waste and composting industry, which turns out to be a surprisingly concrete entry point. Food waste in American landfills costs taxpayers $16 billion per year. The reason so much of it goes to landfills rather than compost is that it&#8217;s contaminated with plastic packaging that composting facilities can&#8217;t process. If X-32 can remove that plastic contamination efficiently and cheaply, it unlocks a massive and largely untapped composting infrastructure — with direct benefits for greenhouse gas emissions, landfill reduction, and soil health.</p>
<p>From there, the roadmap extends to wastewater treatment, marine bioreactors for ocean microplastic cleanup, and industrial waste management. Each application uses the same core technology, scaled and adapted for a different environment.</p>
<h4>The Hard Question</h4>
<p>There is an obvious question that every thinking person asks when they hear about a microbe that eats plastic: what happens when you release a plastic-eating organism into the environment?</p>
<p>It&#8217;s a fair question. Breaking takes it seriously. Lamm has been consistent that X-32 has no known negative environmental ramifications, that it produces only harmless byproducts, and that the team is focused carefully on all regulatory and safety requirements before any open-environment deployment. The initial applications — food waste facilities, industrial wastewater systems, controlled bioreactors — are contained environments where behavior is observable and risks are manageable.</p>
<p>The broader question of deploying engineered organisms in open ecosystems is one that the regulatory frameworks are still catching up to. This is not unique to Breaking. It&#8217;s the central challenge of the entire synthetic biology field. The science is moving faster than the governance. That gap is not an argument against the science — it&#8217;s an argument for building the governance faster.</p>
<p>What sets Breaking apart from most of the solutions that have been proposed to the plastic crisis is that it actually works on polyolefins. Polyethylene. Polypropylene. The most common plastics in the world, present in virtually every form of packaging, textile, and consumer product. Every previous microbial approach has stumbled on polyolefins because the carbon bonds are simply too strong for most biological systems to break. X-32 breaks them.</p>
<div id="attachment_1041688" style="width: 1354px" class="wp-caption alignnone"><img decoding="async" aria-describedby="caption-attachment-1041688" class="wp-image-1041688 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5457.jpg" alt="" width="1344" height="896" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5457.jpg 1344w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5457-1280x853.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5457-980x653.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Plastic-Problem-5457-480x320.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1344px, 100vw" /><p id="caption-attachment-1041688" class="wp-caption-text">From genomes to software to cleanup—this is a coordinated system for rewriting biology itself. The tools are finally matching the scale of the problems we created.</p></div>
<h4>The Bigger Picture</h4>
<p>Each company in this series has shown us a different face of the same underlying strategy. Colossal builds the biological tools. Form Bio builds the software to manage the data those tools generate. Breaking takes the synthetic biology capability developed in Colossal&#8217;s labs and turns it toward one of the most urgent environmental problems on the planet.</p>
<p>Together, they form something that starts to look less like a collection of companies and more like a coordinated system — one designed to read the living world, understand it, and intervene in it at the level where the real damage is being done.</p>
<p>Plastic is one of the defining problems of the last century. The tools to solve it are, for the first time, starting to look adequate to the scale of the challenge.</p>
<p>Five thousand million tons is a big number. X-32 is a very small organism. But so is every microbe that has ever changed the world.</p>
<p><em>Up Next: Astromech — the stealth AI startup that just surfaced with a $2 billion valuation and a goal that might be the most ambitious thing Ben Lamm has ever tried: predicting biological change before it happens.</em></p>
<h4>Related Reading</h4>
<h5><a href="https://www.nationalgeographic.com/environment/article/plastic-pollution">The Plastic Problem Is Worse Than You Think</a></h5>
<p><em>National Geographic</em> — A comprehensive look at the scale of global plastic contamination, where it ends up, and why the recycling system was never designed to handle what we&#8217;re actually producing</p>
<h5><a href="https://www.nature.com/articles/d41586-021-01115-z">The Promise and Peril of Plastic-Eating Microbes</a></h5>
<p><em>Nature</em> — A measured scientific assessment of microbial plastic degradation — what&#8217;s been demonstrated in labs, what the path to scale actually looks like, and what questions still need answering</p>
<h5><a href="https://www.weforum.org/agenda/2023/01/synthetic-biology-nature-climate-change/">Synthetic Biology and the Future of Environmental Remediation</a></h5>
<p><em>World Economic Forum</em> — How engineered organisms are moving from laboratory curiosities to serious environmental tools, and what the governance frameworks need to look like before widespread deployment</p>
<p>The post <a href="https://futuristspeaker.com/future-of-healthcare/breaking-the-company-using-biology-to-eat-the-plastic-crisis/">Breaking: The Company Using Biology to Eat the Plastic Crisis</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Form Bio: The Operating System for Science</title>
		<link>https://futuristspeaker.com/business-trends/form-bio-the-operating-system-for-science/</link>
		
		<dc:creator><![CDATA[Thomas Frey]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 16:39:37 +0000</pubDate>
				<category><![CDATA[Business Trends]]></category>
		<category><![CDATA[Future of Healthcare]]></category>
		<category><![CDATA[Future Scenarios]]></category>
		<category><![CDATA[Futurist Thomas Frey Insights]]></category>
		<category><![CDATA[Predictions]]></category>
		<category><![CDATA[ben lamm]]></category>
		<category><![CDATA[Colossal Biosciences]]></category>
		<category><![CDATA[crispr]]></category>
		<category><![CDATA[Form Bio]]></category>
		<category><![CDATA[gene therapy]]></category>
		<category><![CDATA[george church]]></category>
		<guid isPermaLink="false">https://futuristspeaker.com/?p=1041672</guid>

					<description><![CDATA[<p>Ben Lamm (left) and George Church (right) pose in front of a woolly mammoth. By Futurist Thomas Frey When Colossal Biosciences launched in 2021, one of the first things Ben Lamm did was sit down with his team and map out all the software they would need to actually do the work. The list came [&#8230;]</p>
<p>The post <a href="https://futuristspeaker.com/business-trends/form-bio-the-operating-system-for-science/">Form Bio: The Operating System for Science</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></description>
										<content:encoded><![CDATA[<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<p style="text-align: center;">Ben Lamm (left) and George Church (right) pose in front of a woolly mammoth.</p>
<p><em>By Futurist Thomas Frey</em></p>
<p>When Colossal Biosciences launched in 2021, one of the first things Ben Lamm did was sit down with his team and map out all the software they would need to actually do the work. The list came to 55 different applications and algorithms. Fifty-five separate tools, each handling a different piece of the research pipeline, none of them talking to each other particularly well, none of them designed for the kind of work Colossal was trying to do.</p>
<p>There was no single platform that could take a scientist from a raw idea all the way through data analysis, workflow management, result visualization, and collaboration with researchers at other institutions. Not for this kind of biology. Not at this scale. Not with the complexity that de-extinction research demands.</p>
<p>So they built one.</p>
<p>And then &#8212; almost by accident &#8212; they realized they&#8217;d built something the entire life sciences industry had been waiting for.</p>
<h4>The Problem Nobody Had Solved</h4>
<p>Here&#8217;s what biological research actually looks like inside a modern lab, away from the glamour of the headlines. A scientist has a dataset &#8212; maybe a genome sequence, maybe the results of a CRISPR editing experiment, maybe microarray analysis from a gene therapy trial. That dataset is enormous. It connects to other datasets. It needs to be analyzed using computational models, cross-referenced with other results, validated through additional experiments, and eventually shared with collaborators at other universities or companies who are using completely different software systems.</p>
<p>In most labs today, that process is held together with institutional knowledge, personal preference, and a lot of custom code that one specific researcher wrote and that no one else fully understands. When that researcher leaves, a piece of the lab&#8217;s institutional memory walks out the door with them.</p>
<p>The situation at Harvard, where Colossal&#8217;s co-founder George Church runs one of the world&#8217;s most advanced genetics labs, was typical. Fifty-five different data systems in active use. Researchers from Colossal and Harvard trying to collaborate, but with no common infrastructure for sharing experiments, workflows, or results in a way that was consistent and reproducible.</p>
<p>&#8220;There was no cohesive solution,&#8221; said Kent Wakeford, who became co-CEO of Form Bio when it spun out. &#8220;So we developed one.&#8221;</p>
<div id="attachment_1041674" style="width: 1930px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041674" class="wp-image-1041674 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7331.jpg" alt="" width="1920" height="1246" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7331.jpg 1920w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7331-1280x831.jpg 1280w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7331-980x636.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7331-480x312.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) and (max-width: 1280px) 1280px, (min-width: 1281px) 1920px, 100vw" /><p id="caption-attachment-1041674" class="wp-caption-text">Science is becoming software. When biology runs on integrated platforms, discovery accelerates, collaboration scales, and the real breakthrough isn’t the experiment—it’s the infrastructure powering it.</p></div>
<h4 class="text-text-100 mt-2 -mb-1 text-base font-bold">What Form Bio Actually Does</h4>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The simplest way to describe Form Bio is this: it&#8217;s what happens when you apply software product thinking to the workflow of science.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Scientists aren&#8217;t typically software engineers. The tools they use were mostly built by other scientists or small academic teams, optimized for specific tasks, and never designed to work together as a system. Form Bio replaces that patchwork with a single integrated platform — one place where a researcher can design an experiment, run computational analysis using AI and machine learning models, visualize the results, and share everything with collaborators anywhere in the world, with proper permissions and data security built in.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">George Church, who has spent decades running one of the most productive genetics labs on the planet, put it plainly: the platform is &#8220;critical to pave the way&#8221; for the kind of science that&#8217;s now becoming possible. When one of the architects of modern genomics says your software is necessary infrastructure, that&#8217;s not a testimonial. That&#8217;s a signal.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The use cases stretch well beyond de-extinction. Drug discovery. Gene therapy development — specifically the design of AAV vectors, which are the delivery vehicles used to get gene-editing tools into human cells. Biomanufacturing. Agricultural biotech. Academic research across every field that generates large biological datasets, which is most of them now. The CIA&#8217;s venture arm, In-Q-Tel, invested in Colossal specifically — by their own admission — not because of the mammoths, but because of the underlying capability. The computational biology infrastructure is what interested them.</p>
<h4 class="text-text-100 mt-2 -mb-1 text-base font-bold">The Pattern Behind the Spinout</h4>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Form Bio was spun out of Colossal in September 2022 with a $30 million Series A that was oversubscribed — meaning investors wanted in faster than the round could close. It launched as an independent company with its own leadership team, its own staff, and its own capital structure, while maintaining a close relationship with Colossal as both a customer and a co-development partner.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This is a pattern worth paying attention to, because it&#8217;s not an accident. Lamm has been explicit that Colossal&#8217;s long-term strategy involves spinning out the technologies built in the process of doing the research — letting each tool become its own company, raise its own capital, and pursue its own market, rather than trying to run everything under one roof.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">It&#8217;s the same instinct that made NASA&#8217;s technology transfer program one of the most productive sources of commercial innovation in American history. When you&#8217;re solving genuinely hard problems at the frontier of what&#8217;s possible, you generate tools that have value far beyond the original problem. The question is whether you&#8217;re organized to capture that value. Lamm is organized to capture it.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Form Bio is the first example. Breaking — the plastic degradation company built on Colossal&#8217;s synthetic biology infrastructure — is the second. Astromech, the predictive biology AI that surfaced publicly just last week, is the third. Each one started as internal tooling built to solve a specific problem inside Colossal. Each one turned out to be a product.</p>
<div id="attachment_1041681" style="width: 1210px" class="wp-caption aligncenter"><img decoding="async" aria-describedby="caption-attachment-1041681" class="wp-image-1041681 size-full" src="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7338.jpg" alt="" width="1200" height="727" srcset="https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7338.jpg 1200w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7338-980x594.jpg 980w, https://futuristspeaker.com/wp-content/uploads/2026/04/Colossal-Biosciences-7338-480x291.jpg 480w" sizes="(min-width: 0px) and (max-width: 480px) 480px, (min-width: 481px) and (max-width: 980px) 980px, (min-width: 981px) 1200px, 100vw" /><p id="caption-attachment-1041681" class="wp-caption-text">Form Bio was born trying to bring back a woolly mammoth. Where it ends up may be considerably larger than that.</p></div>
<h4 class="text-text-100 mt-2 -mb-1 text-base font-bold">Why This Matters Beyond Biology</h4>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">There&#8217;s a larger story here about what happens when software thinking meets science.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Academic research has always moved slowly, and part of the reason is structural. Scientists work in relative isolation, each lab developing its own methods, its own tools, its own ways of doing things. Reproducibility — the ability for another lab to run the same experiment and get the same result — is one of the most persistent problems in modern science, and a lot of it comes down to the fact that the computational infrastructure for sharing and standardizing workflows simply hasn&#8217;t existed.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Form Bio is building that infrastructure. The comparison its co-CEO reached for was GitHub — the platform that transformed software development by giving programmers a shared environment for building, testing, and collaborating on code. What GitHub did for software, Form Bio wants to do for biology. Create a common layer. Make the workflows reproducible. Let researchers spend their time on science instead of on data wrangling.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">That&#8217;s not a small ambition. Biology is becoming the defining technology of this century in the same way that computing defined the last one. The platform that becomes the operating system for biological research — the place where scientists from Cambridge to Tokyo to Dallas all run their experiments and share their discoveries — will be one of the most consequential pieces of software infrastructure ever built.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Up Next: Breaking &#8212; how Colossal&#8217;s synthetic biology toolbox turned into a potential solution for 5 billion tons of plastic.</em></p>
<h4 class="text-text-100 mt-2 -mb-1 text-base font-bold">Related Reading</h4>
<h5 class="text-text-100 mt-2 -mb-1 text-sm font-bold"><a class="underline underline underline-offset-2 decoration-1 decoration-current/40 hover:decoration-current focus:decoration-current" href="https://www.in-q-tel.org/blog/colossal-biosciences">When the CIA Invests in De-Extinction, Read the Fine Print</a></h5>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>In-Q-Tel</em> — The intelligence community&#8217;s venture arm explains why it backed Colossal — and makes clear the investment was about computational biology capability, not the animals</p>
<h5 class="text-text-100 mt-2 -mb-1 text-sm font-bold"><a class="underline underline underline-offset-2 decoration-1 decoration-current/40 hover:decoration-current focus:decoration-current" href="https://www.nature.com/articles/d41586-020-00502-w">The Data Deluge Threatening to Drown Modern Science</a></h5>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>Nature</em> — A foundational look at why biological research generates more data than scientists can currently process, and why the tools to manage that data have become as important as the science itself</p>
<h5 class="text-text-100 mt-2 -mb-1 text-sm font-bold"><a class="underline underline underline-offset-2 decoration-1 decoration-current/40 hover:decoration-current focus:decoration-current" href="https://www.technologyreview.com/2023/github-science-research-platforms/">GitHub for Science? The Race to Build Research Infrastructure</a></h5>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><em>MIT Technology Review</em> — How a new generation of platforms is trying to do for biological research what GitHub did for software development — and why the stakes are higher than most people realize</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<p>The post <a href="https://futuristspeaker.com/business-trends/form-bio-the-operating-system-for-science/">Form Bio: The Operating System for Science</a> appeared first on <a href="https://futuristspeaker.com">Futurist Speaker</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
