
On November 24, 2025, President Trump signed an Executive Order launching the Genesis Mission, a coordinated national effort to build an integrated AI platform leveraging federal scientific datasets, national laboratory supercomputers, and public-private partnerships. The stated goal: dramatic acceleration of scientific discovery across domains critical to national competitiveness.
The initiative explicitly invokes the Manhattan Project as its model. This comparison deserves serious attention, not because it’s wrong, but because it’s revealing.
As a surgeon who has spent 25 years in clinical practice while simultaneously building AI systems for healthcare, I find myself genuinely impressed by several elements of this initiative, and genuinely concerned by what it omits. The Genesis Mission represents real progress in federal AI infrastructure. It also reveals, with unusual clarity, where our national priorities actually lie.
What the Genesis Mission Gets Right
Credit where it’s due: this Executive Order addresses several longstanding problems in federal AI development.
First, the unified platform architecture. Federal scientific data has historically been fragmented across agencies, locked in incompatible formats, and inaccessible to researchers who could use it productively. Section 3’s establishment of the American Science and Security Platform, integrating DOE supercomputers, cloud-based AI environments, and standardized data access, represents a genuine infrastructure advancement. The emphasis on “digitization, standardization, metadata, and provenance tracking” suggests someone actually understands the data quality problems that plague AI development.
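The Order doesn’t specify a schema, but as a rough sketch of what provenance-tracked dataset metadata could look like in practice (every field name and value here is my own illustration, not anything drawn from the Order):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class DatasetRecord:
    """Illustrative provenance record for a federally curated dataset."""
    dataset_id: str
    source_agency: str          # originating agency, e.g. "DOE" (hypothetical value)
    collection_method: str      # how the raw data was produced
    content_hash: str           # fingerprint so downstream users can verify integrity
    transformations: list[str] = field(default_factory=list)  # ordered processing steps
    curated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(raw_bytes: bytes) -> str:
    """Content hash that travels with the record through every pipeline stage."""
    return hashlib.sha256(raw_bytes).hexdigest()

record = DatasetRecord(
    dataset_id="materials-spectra-0001",   # hypothetical identifier
    source_agency="DOE",
    collection_method="x-ray diffraction, instrument-logged",
    content_hash=fingerprint(b"...raw instrument output..."),
    transformations=["unit normalization", "outlier removal v2"],
)
```

The point of the hash and the ordered transformation list is that any researcher downstream can answer “where did this number come from?” without trusting the pipeline blindly.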
Second, the content-controlled approach. The Mission’s focus on “federally curated” datasets and “domain-specific foundation models” aligns with what I’ve learned building medical AI: reliability comes from constraining your knowledge sources, not from training on everything and hoping guardrails catch the errors. Garbage in, garbage out; when the garbage can affect national security or scientific validity, maybe don’t train on garbage.
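In code, the constraint is almost embarrassingly simple, which is part of why it works. A minimal sketch, assuming a hypothetical allow-list of curated source tags (none of these names come from the Order):

```python
# Sketch: admit training documents only from curated, allow-listed sources
# that carry provenance. Source tags here are placeholders, not a real registry.
CURATED_SOURCES = {"doe-curated", "peer-reviewed-journal", "agency-validated"}

def admit_for_training(doc: dict) -> bool:
    """Reject anything whose provenance is unknown or outside the allow-list."""
    source = doc.get("source_tag")
    has_provenance = doc.get("content_hash") is not None
    return source in CURATED_SOURCES and has_provenance

corpus = [
    {"source_tag": "peer-reviewed-journal", "content_hash": "ab12...", "text": "..."},
    {"source_tag": "scraped-web", "content_hash": None, "text": "..."},  # filtered out
]
training_set = [d for d in corpus if admit_for_training(d)]
```

The design choice is deliberate: the default is rejection, and admission requires both a trusted source and verifiable provenance.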
Third, the intellectual property framework. Section 5(c)’s requirement for “clear policies for ownership, licensing, trade-secret protections, and commercialization” addresses a persistent barrier to academic-industry collaboration. Researchers and companies need to know the rules before they invest resources.
Fourth, the security architecture. The emphasis on supply chain security, cybersecurity standards, and “highest standards of vetting and authorization” reflects appropriate seriousness about protecting sensitive research infrastructure.
These are not trivial accomplishments. Federal AI initiatives have often failed precisely because they lacked unified infrastructure, quality data pipelines, clear IP rules, and adequate security. The Genesis Mission appears to have learned from those failures.
The Healthcare Omission Isn’t Accidental
Section 4(a) identifies the priority domains for the Mission’s “national science and technology challenges”: advanced manufacturing, biotechnology, critical materials, nuclear fission and fusion energy, quantum information science, and semiconductors.
Healthcare is absent. Clinical medicine is absent. Patient care is absent.
Biotechnology made the list, but biotechnology is not healthcare. Biotechnology encompasses drug discovery, agricultural applications, industrial processes, and biological manufacturing. It’s the domain where pharmaceutical companies develop compounds, not where physicians care for patients. The distinction matters because biotechnology produces patentable products with clear commercial pathways, while clinical medicine produces outcomes in individual humans with messy liability implications.
This omission is not an oversight. It’s a design choice that reveals something important about how we conceptualize AI risk.
Consider the domains that did make the list. Semiconductors: if an AI-designed chip fails, it simply doesn’t work, and the failure is caught on a test bench, not in a human body. Nuclear energy: heavily regulated, with clear liability frameworks and institutional accountability. Quantum computing: still largely theoretical, with failure modes that affect research timelines, not human bodies. Advanced manufacturing: product liability law provides established mechanisms for accountability.
Now consider clinical medicine. If an AI system contributes to a diagnostic error, a treatment decision, or a care pathway that harms a patient, who is accountable? The algorithm? The developer? The hospital that deployed it? The physician who relied on it? The federal platform that trained it?
The Genesis Mission’s architects appear to have recognized, perhaps unconsciously, that healthcare AI presents accountability challenges that the current framework cannot resolve. Rather than address those challenges, they simply excluded the domain.
The Accountability Void
The Executive Order uses the word “autonomous” repeatedly. Section 3(a)(vi) references “autonomous and AI-augmented experimentation and manufacturing.” Section 3(e) directs the Secretary to review capabilities for “AI-directed experimentation and manufacturing, including automated and AI-augmented workflows.”
Autonomous. AI-directed. Automated.
The Order extensively addresses protection of assets: intellectual property, data security, export controls, cybersecurity, classification requirements. It establishes clear accountability for protecting valuable federal resources from unauthorized access or misuse.
What the Order does not address, at any point, in any section, is accountability for outcomes that harm humans. There is no mention of liability frameworks for AI-directed decisions. No discussion of what happens when autonomous systems produce results that damage health, safety, or welfare. No mechanism for determining responsibility when the algorithm optimizes for the wrong objective function.
This asymmetry is striking. The Genesis Mission carefully protects data, IP, and infrastructure. It does not protect people.
In my world, clinical medicine, this gap would be disqualifying. I carry malpractice insurance because consequences are real and accountability is non-negotiable. When I make a decision that affects a patient, there is a clear chain of responsibility. If I’m wrong, mechanisms exist to identify the failure, compensate the harmed party, and prevent recurrence.
AI systems operating under the Genesis Mission framework would have no such accountability. They can be “autonomous” without bearing consequences. They can be “AI-directed” without anyone bearing responsibility for where they are directed.
Perhaps this works for semiconductor design. It cannot work for medicine.
The Manhattan Project Problem
The Executive Order explicitly frames the Genesis Mission as comparable to “the Manhattan Project that was instrumental to our victory in World War II.” This framing deserves scrutiny.
The Manhattan Project was a crash program to build weapons of unprecedented destructive power under conditions of wartime urgency. It operated with minimal external oversight, extraordinary secrecy, and explicit prioritization of speed over deliberation. It succeeded in its technical objectives. It also produced outcomes (Hiroshima, Nagasaki, the nuclear arms race) that humanity has spent eighty years attempting to manage.
When I first read “Genesis Mission,” I thought: creation. New beginnings. The biblical resonance suggested building something generative, life-affirming.
The Manhattan Project comparison reframes that interpretation. This is not primarily about creation. It’s about power. About building capabilities so consequential that the entire competitive landscape must reckon with them. About winning a race where the stakes justify compressed timelines and reduced deliberation.
For semiconductors and quantum computing, perhaps this framing is appropriate. National competitiveness in foundational technologies genuinely matters.
For healthcare, it would be catastrophic. Clinical medicine cannot operate on Manhattan Project timelines. Drug development already suffers from pressure to accelerate approval processes; the consequences include withdrawn medications, unexpected side effects, and patient harm. Diagnostic AI systems rushed to deployment have failed spectacularly: dermatology algorithms that couldn’t recognize melanoma on darker skin, sepsis predictors that generated thousands of false alarms, imaging systems that missed cancers visible to trained radiologists.
The Manhattan Project built weapons. Healthcare AI, if done poorly, builds weapons too; they just discharge inside hospital rooms rather than over cities.
The Sensing Gap the Platform Cannot Close
There’s a deeper issue that the Genesis Mission’s architecture cannot address, regardless of how much computing power or curated data it provides.
Human clinicians possess approximately 10 billion sensory receptors continuously sampling their environment. When I examine a patient, I’m not just processing the data in their chart. I’m integrating visual cues: skin color, respiratory effort, the subtle signs of distress that don’t translate to vital sign monitors. I’m registering olfactory information: the fruity breath of diabetic ketoacidosis, the distinctive smell of certain infections. I’m sensing the quality of a handshake, the tremor that suggests anxiety or pathology, the way a patient guards their abdomen.
This sensory integration isn’t a nice-to-have supplement to clinical reasoning. It’s foundational. Evolution spent 3.8 billion years debugging these systems through the harsh reality of consequences; organisms that failed to sense threats accurately got eaten, and their genes didn’t propagate. The result is a threat-detection and pattern-recognition apparatus of extraordinary sophistication, continuously refined by real-world feedback.
AI systems face no such selection pressure. They can be confidently wrong at scale because being wrong doesn’t kill them. They can optimize for measurable metrics while missing everything that matters but resists quantification. They can achieve impressive benchmark performance on structured datasets while failing completely when confronted with the messy, multimodal reality of actual patients.
The Genesis Mission’s platform can provide computing resources and curated data. What it cannot provide is the sensing capability that makes clinical judgment possible. No amount of foundation model training bridges this gap, because the gap isn’t about processing power; it’s about the fundamental architecture of intelligence itself.
Until AI systems have to metaphorically wrestle a velociraptor for dinner, until they face real consequences for being wrong, they will lack the evolutionary pressure that produced human judgment. Recognizing this isn’t technophobia. It’s engineering realism.
What a Healthcare-Inclusive Framework Would Require
If a future iteration of the Genesis Mission were to include clinical medicine, and I believe it eventually must, the framework would need substantial additions.
First, explicit liability mechanisms. Any AI system that influences clinical decisions must operate within a clear accountability framework. This means either: the developer accepts liability for system failures (which would require malpractice-equivalent insurance), or the deploying institution accepts liability (which requires they have meaningful control over system behavior), or the clinician accepts liability (which requires they can meaningfully override or reject AI recommendations). The current ambiguity, where AI systems influence decisions but no one is responsible for the influence, is untenable.
Second, clinical validation requirements distinct from technical benchmarks. An AI system can achieve excellent performance on retrospective datasets while failing in prospective clinical deployment. The Genesis Mission’s current framework emphasizes computational capability and data access. Healthcare AI requires validation in actual clinical environments, with actual patients, over timelines sufficient to detect delayed harms.
Third, human-in-the-loop requirements that aren’t cosmetic. The phrase “AI-augmented” can mean anything from “provides information to human decision-makers” to “makes decisions that humans rubber-stamp.” Healthcare AI must augment clinicians, not replace them. This requires system designs where human judgment remains genuinely central, not nominally present, as in the sketch below.
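What “genuinely central” might mean structurally: the model’s output is a recommendation object that cannot reach the chart until a named clinician acts on it, and the action itself is logged. This is a minimal sketch; every class, field, and function name here is hypothetical, not drawn from the Order or any existing system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Literal

@dataclass
class AIRecommendation:
    """Advisory output only; it has no pathway to act on its own."""
    patient_id: str
    suggestion: str
    model_version: str

@dataclass
class ClinicianDecision:
    """The unit that actually enters the record: a named human's action."""
    recommendation: AIRecommendation
    clinician_id: str
    action: Literal["accept", "modify", "reject"]
    rationale: str
    decided_at: str

audit_log: list[ClinicianDecision] = []

def commit_order(decision: ClinicianDecision) -> None:
    """Only a clinician decision, never a raw model output, is committed."""
    assert decision.action in ("accept", "modify", "reject")
    audit_log.append(decision)  # accountability trail: who decided, and why

rec = AIRecommendation("pt-001", "start empiric antibiotics", "sepsis-model-v3")
commit_order(ClinicianDecision(rec, "dr-smith", "modify",
                               "narrower-spectrum agent given culture history",
                               datetime.now(timezone.utc).isoformat()))
```

The design choice is that there is no code path from model to order; the clinician’s identity and rationale are mandatory fields, not optional annotations.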
Fourth, equity requirements with enforcement mechanisms. AI systems trained on data from academic medical centers serving affluent populations will perform poorly on underserved populations with different disease presentations, different social determinants, and different healthcare access patterns. A federal healthcare AI initiative must either ensure representative training data or explicitly constrain deployment to validated populations; an enforcement mechanism could be as simple as the gate sketched below.
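As a sketch of what enforcement could look like (the metric, the 0.85 floor, and the group labels are all illustrative assumptions, not regulatory proposals): a deployment gate that fails whenever any subgroup’s performance drops below a floor, rather than averaging the problem away.

```python
# Sketch: block deployment unless performance holds in every subgroup,
# not just in aggregate. Thresholds and group labels are illustrative.

def subgroup_sensitivity(labels, predictions, groups):
    """Sensitivity (true-positive rate) computed separately per subgroup."""
    results = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        positives = [i for i in idx if labels[i] == 1]
        if not positives:
            continue  # no positive cases in this group; nothing to measure
        hits = sum(1 for i in positives if predictions[i] == 1)
        results[g] = hits / len(positives)
    return results

def clears_equity_gate(per_group: dict, floor: float = 0.85) -> bool:
    """Fail the gate if any subgroup falls below the floor."""
    return all(rate >= floor for rate in per_group.values())

labels      = [1, 1, 0, 1, 1, 0, 1, 1]
predictions = [1, 0, 0, 1, 1, 0, 1, 1]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = subgroup_sensitivity(labels, predictions, groups)
print(rates, "deployable:", clears_equity_gate(rates))
# Group A misses a positive case (sensitivity ~0.67), so the gate fails
# even though aggregate performance looks acceptable.
```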
Fifth, content control at the source level. My own work on medical AI emphasizes operating on curated, validated knowledge sources, not because it’s more impressive, but because it’s more reliable. A healthcare AI trained on the general internet will confidently reproduce the misinformation, outdated guidance, and commercially motivated content that saturates online health information. Federal healthcare AI should operate on peer-reviewed, professionally curated content with clear provenance.
A Path Forward
None of this is meant to suggest that the Genesis Mission is misguided or that federal AI infrastructure investment is inappropriate. The initiative addresses real problems and represents genuine progress.
But the healthcare omission matters. The accountability void matters. The Manhattan Project framing matters.
If we are building national AI infrastructure that will eventually touch healthcare (and we are, regardless of what this Executive Order explicitly includes), we need to address the gaps now, not after autonomous systems are deployed at scale in clinical environments.
The Genesis Mission demonstrates that federal coordination of AI resources is possible. The next step is ensuring that coordination includes the domain where AI consequences are most immediately human: the domain where success means someone goes home to their family, and failure means they don’t.
Semiconductors are important. Quantum computing is important. Nuclear energy is important.
Keeping people alive is also important. Perhaps it’s time our national AI priorities reflected that.



