The Genesis Mission and the Healthcare Gap

By John Ferguson

On November 24, 2025, President Trump signed an Executive Order launching the Genesis Mission, a coordinated national effort to build an integrated AI platform leveraging federal scientific datasets, national laboratory supercomputers, and public-private partnerships. The stated goal: dramatic acceleration of scientific discovery across domains critical to national competitiveness. 

The initiative explicitly invokes the Manhattan Project as its model. This comparison deserves serious attention not because it’s wrong, but because it’s revealing. 

As a surgeon who has spent 25 years in clinical practice while simultaneously building AI systems for healthcare, I find myself genuinely impressed by several elements of this initiative and genuinely concerned by what it omits. The Genesis Mission represents real progress in federal AI infrastructure. It also reveals, with unusual clarity, where our national priorities actually lie. 

What the Genesis Mission Gets Right 

Credit where it’s due: this Executive Order addresses several longstanding problems in federal AI development. 

First, the unified platform architecture. Federal scientific data has historically been fragmented across agencies, locked in incompatible formats, and inaccessible to researchers who could use it productively. Section 3’s establishment of the American Science and Security Platform, integrating DOE supercomputers, cloud-based AI environments, and standardized data access, represents a genuine infrastructure advancement. The emphasis on “digitization, standardization, metadata, and provenance tracking” suggests someone actually understands the data quality problems that plague AI development. 

Second, the content-controlled approach. The Mission’s focus on “federally curated” datasets and “domain-specific foundation models” aligns with what I’ve learned building medical AI: reliability comes from constraining your knowledge sources, not from training on everything and hoping guardrails catch the errors. Garbage in, garbage out, but when the garbage can affect national security or scientific validity, maybe don’t train on garbage. 

Third, the intellectual property framework. Section 5(c)’s requirement for “clear policies for ownership, licensing, trade-secret protections, and commercialization” addresses a persistent barrier to academic-industry collaboration. Researchers and companies need to know the rules before they invest resources. 

Fourth, the security architecture. The emphasis on supply chain security, cybersecurity standards, and “highest standards of vetting and authorization” reflects appropriate seriousness about protecting sensitive research infrastructure. 

These are not trivial accomplishments. Federal AI initiatives have often failed precisely because they lacked unified infrastructure, quality data pipelines, clear IP rules, and adequate security. The Genesis Mission appears to have learned from those failures. 

The Healthcare Omission Isn’t Accidental 

Section 4(a) identifies the priority domains for the Mission’s “national science and technology challenges”: advanced manufacturing, biotechnology, critical materials, nuclear fission and fusion energy, quantum information science, and semiconductors. 

Healthcare is absent. Clinical medicine is absent. Patient care is absent. 

Biotechnology made the list, but biotechnology is not healthcare. Biotechnology encompasses drug discovery, agricultural applications, industrial processes, and biological manufacturing. It’s the domain where pharmaceutical companies develop compounds, not where physicians care for patients. The distinction matters because biotechnology produces patentable products with clear commercial pathways, while clinical medicine produces outcomes in individual humans with messy liability implications. 

This omission is not an oversight. It’s a design choice that reveals something important about how we conceptualize AI risk. 

Consider the domains that did make the list. Semiconductors: if an AI-designed chip fails, it doesn’t work. Nuclear energy: heavily regulated with clear liability frameworks and institutional accountability. Quantum computing: still largely theoretical with failure modes that affect research timelines, not human bodies. Advanced manufacturing: product liability law provides established mechanisms for accountability. 

Now consider clinical medicine. If an AI system contributes to a diagnostic error, a treatment decision, or a care pathway that harms a patient, who is accountable? The algorithm? The developer? The hospital that deployed it? The physician who relied on it? The federal platform that trained it? 

The Genesis Mission’s architects appear to have recognized, perhaps unconsciously, that healthcare AI presents accountability challenges that the current framework cannot resolve. Rather than address those challenges, they simply excluded the domain. 

The Accountability Void 

The Executive Order uses the word “autonomous” repeatedly. Section 3(a)(vi) references “autonomous and AI-augmented experimentation and manufacturing.” Section 3(e) directs the Secretary to review capabilities for “AI-directed experimentation and manufacturing, including automated and AI-augmented workflows.” 

Autonomous. AI-directed. Automated. 

The Order extensively addresses protection of assets: intellectual property, data security, export controls, cybersecurity, classification requirements. It establishes clear accountability for protecting valuable federal resources from unauthorized access or misuse. 

What the Order does not address, at any point and in any section, is accountability for outcomes that harm humans. There is no mention of liability frameworks for AI-directed decisions. No discussion of what happens when autonomous systems produce results that damage health, safety, or welfare. No mechanism for determining responsibility when the algorithm optimizes for the wrong objective function. 

This asymmetry is striking. The Genesis Mission carefully protects data, IP, and infrastructure. It does not protect people. 

In my world, clinical medicine, this gap would be disqualifying. I carry malpractice insurance because consequences are real and accountability is non-negotiable. When I make a decision that affects a patient, there is a clear chain of responsibility. If I’m wrong, mechanisms exist to identify the failure, compensate the harmed party, and prevent recurrence. 

AI systems operating under the Genesis Mission framework would have no such accountability. They can be “autonomous” without bearing consequences. They can be “AI-directed” without the people directing them bearing responsibility for where they lead. 

Perhaps this works for semiconductor design. It cannot work for medicine. 

The Manhattan Project Problem 

The Executive Order explicitly frames the Genesis Mission as comparable to “the Manhattan Project that was instrumental to our victory in World War II.” This framing deserves scrutiny. 

The Manhattan Project was a crash program to build weapons of unprecedented destructive power under conditions of wartime urgency. It operated with minimal external oversight, extraordinary secrecy, and explicit prioritization of speed over deliberation. It succeeded in its technical objectives. It also produced outcomes (Hiroshima, Nagasaki, the nuclear arms race) that humanity has spent eighty years attempting to manage. 

When I first read “Genesis Mission,” I thought: creation. New beginnings. The biblical resonance suggested building something generative, life-affirming. 

The Manhattan Project comparison reframes that interpretation. This is not primarily about creation. It’s about power. About building capabilities so consequential that the entire competitive landscape must reckon with them. About winning a race where the stakes justify compressed timelines and reduced deliberation. 

For semiconductors and quantum computing, perhaps this framing is appropriate. National competitiveness in foundational technologies genuinely matters. 

For healthcare, it would be catastrophic. Clinical medicine cannot operate on Manhattan Project timelines. Drug development already suffers from pressure to accelerate approval processes; the consequences include withdrawn medications, unexpected side effects, and patient harm. Diagnostic AI systems rushed to deployment have failed spectacularly: dermatology algorithms that couldn’t recognize melanoma on darker skin, sepsis predictors that generated thousands of false alarms, imaging systems that missed cancers visible to trained radiologists. 

The Manhattan Project built weapons. Healthcare AI, if done poorly, builds weapons too; they just discharge inside hospital rooms rather than over cities. 

The Sensing Gap the Platform Cannot Close 

There’s a deeper issue that the Genesis Mission’s architecture cannot address, regardless of how much computing power or curated data it provides. 

Human clinicians possess approximately 10 billion sensory receptors continuously sampling their environment. When I examine a patient, I’m not just processing the data in their chart. I’m integrating visual cues: skin color, respiratory effort, the subtle signs of distress that don’t translate to vital sign monitors. I’m registering olfactory information: the fruity breath of diabetic ketoacidosis, the distinctive smell of certain infections. I’m sensing the quality of a handshake, the tremor that suggests anxiety or pathology, the way a patient guards their abdomen. 

This sensory integration isn’t a nice-to-have supplement to clinical reasoning. It’s foundational. Evolution spent 3.8 billion years debugging these systems through the harsh reality of consequences; organisms that failed to sense threats accurately got eaten, and their genes didn’t propagate. The result is a threat-detection and pattern-recognition apparatus of extraordinary sophistication, continuously refined by real-world feedback. 

AI systems face no such selection pressure. They can be confidently wrong at scale because being wrong doesn’t kill them. They can optimize for measurable metrics while missing everything that matters but resists quantification. They can achieve impressive benchmark performance on structured datasets while failing completely when confronted with the messy, multimodal reality of actual patients. 

The Genesis Mission’s platform can provide computing resources and curated data. What it cannot provide is the sensing capability that makes clinical judgment possible. No amount of foundation model training bridges this gap, because the gap isn’t about processing power; it’s about the fundamental architecture of intelligence itself. 

Until AI systems have to metaphorically wrestle a velociraptor for dinner, until they face real consequences for being wrong, they will lack the evolutionary pressure that produced human judgment. Recognizing this isn’t technophobia. It’s engineering realism. 

What a Healthcare-Inclusive Framework Would Require 

If a future iteration of the Genesis Mission were to include clinical medicine, and I believe it eventually must, the framework would need substantial additions. 

First, explicit liability mechanisms. Any AI system that influences clinical decisions must operate within a clear accountability framework. That means one of three things: the developer accepts liability for system failures (which would require malpractice-equivalent insurance); the deploying institution accepts liability (which requires meaningful control over system behavior); or the clinician accepts liability (which requires the ability to meaningfully override or reject AI recommendations). The current ambiguity, in which AI systems influence decisions but no one is responsible for the influence, is untenable. 

Second, clinical validation requirements distinct from technical benchmarks. An AI system can achieve excellent performance on retrospective datasets while failing in prospective clinical deployment. The Genesis Mission’s current framework emphasizes computational capability and data access. Healthcare AI requires validation in actual clinical environments, with actual patients, over timelines sufficient to detect delayed harms. 

Third, human-in-the-loop requirements that aren’t cosmetic. The phrase “AI-augmented” can mean anything from “provides information to human decision-makers” to “makes decisions that humans rubber-stamp.” Healthcare AI must augment clinicians, not replace them. This requires system designs where human judgment remains genuinely central, not nominally present (a minimal sketch of what that might look like follows these five requirements). 

Fourth, equity requirements with enforcement mechanisms. AI systems trained on data from academic medical centers serving affluent populations will perform poorly on underserved populations with different disease presentations, different social determinants, and different healthcare access patterns. A federal healthcare AI initiative must either ensure representative training data or explicitly constrain deployment to validated populations. 

Fifth, content control at the source level. My own work on medical AI emphasizes operating on curated, validated knowledge sources—not because it’s more impressive, but because it’s more reliable. A healthcare AI trained on the general internet will confidently reproduce the misinformation, outdated guidance, and commercially motivated content that saturates online health information. Federal healthcare AI should operate on peer-reviewed, professionally curated content with clear provenance. 
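To make the third requirement concrete, here is a minimal sketch in Python, with hypothetical names, of a non-cosmetic human-in-the-loop design. The point is structural, not clever: an AI recommendation is inert by default, and downstream systems cannot act on anything that lacks an explicit, named clinician sign-off.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIRecommendation:
    # Hypothetical structure; field names are illustrative, not a real standard.
    patient_id: str
    suggestion: str                    # e.g. "start empiric antibiotics"
    rationale: str                     # evidence the model cites for the suggestion
    reviewed_by: Optional[str] = None  # clinician who made the call
    accepted: Optional[bool] = None    # None until a human decides

def sign_off(rec: AIRecommendation, clinician: str, accept: bool) -> AIRecommendation:
    # Record an explicit human decision; the default state is "undecided", never "approved".
    rec.reviewed_by = clinician
    rec.accepted = accept
    return rec

def release_order(rec: AIRecommendation) -> str:
    # Downstream systems may act only on recommendations a clinician has accepted.
    if rec.accepted is not True or rec.reviewed_by is None:
        raise PermissionError("No clinician sign-off; recommendation remains advisory only.")
    return f"Order for {rec.patient_id}: {rec.suggestion} (signed: {rec.reviewed_by})"

The design choice that matters is the default: absent an affirmative human decision, nothing happens.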
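For the fifth requirement, content control at the source level, an equally minimal sketch, again in Python with hypothetical source names: every document carries provenance metadata, retrieval is restricted to an approved curated corpus, and when no curated evidence exists the system declines rather than falling back to the open internet.

from dataclasses import dataclass

# Hypothetical whitelist of curated, provenance-tracked sources.
APPROVED_SOURCES = {"peer_reviewed_corpus", "federal_curated_dataset"}

@dataclass
class Document:
    text: str
    source: str          # where the content came from
    last_reviewed: str   # when a curator last validated it (ISO date)

def retrieve_evidence(query: str, corpus: list[Document]) -> list[Document]:
    # Only documents from approved sources are eligible; everything else is invisible to the system.
    hits = [
        d for d in corpus
        if d.source in APPROVED_SOURCES and query.lower() in d.text.lower()
    ]
    if not hits:
        # Declining is a feature: no curated evidence means no answer,
        # not a confident answer drawn from uncurated content.
        raise LookupError("No curated evidence found for this query.")
    return hits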

A Path Forward 

None of this is meant to suggest that the Genesis Mission is misguided or that federal AI infrastructure investment is inappropriate. The initiative addresses real problems and represents genuine progress. 

But the healthcare omission matters. The accountability void matters. The Manhattan Project framing matters. 

If we are building national AI infrastructure that will eventually touch healthcare—and we are, regardless of what this Executive Order explicitly includes—we need to address the gaps now, not after autonomous systems are deployed at scale in clinical environments. 

The Genesis Mission demonstrates that federal coordination of AI resources is possible. The next step is ensuring that coordination includes the domain where AI consequences are most immediately human: the domain where success means someone goes home to their family, and failure means they don’t. 

Semiconductors are important. Quantum computing is important. Nuclear energy is important. 

Keeping people alive is also important. Perhaps it’s time our national AI priorities reflected that. 
