
Abstract
Current discourse on artificial intelligence strategy presents a binary choice between sophisticated prediction engines and explanatory intelligence systems. Through analysis of practical AI implementation in medicine, we identify a third philosophical approach: constrained competence through controlled information sources. This approach sidesteps debates about artificial general intelligence by focusing on information quality rather than processing capability. We explore the theoretical foundations of this framework, drawing parallels to Plato’s allegory of the cave and Herbert Simon’s bounded rationality, and propose a hierarchical credibility system analogous to evidence-based medicine’s levels of evidence. This philosophical framework suggests that the path to reliable AI in high-stakes domains lies not in pursuing greater intelligence but in implementing systematic constraints on knowledge sources.
Introduction
The contemporary debate about artificial intelligence development has crystallized around a perceived dichotomy: should organizations invest in scaling prediction capabilities or developing explanatory intelligence? This framing, while capturing important tensions in AI development, obscures a fundamental question about the relationship between information quality and intelligent behavior. Through examination of practical AI deployments in healthcare, we propose that the most pressing challenge is not how AI processes information but what information it processes.
The distinction matters because it reframes the AI alignment problem from one of values and reasoning to one of epistemology and information curation. Rather than asking whether AI can think or understand, we ask whether AI has access to reliable information and can recognize gradients of credibility.
The Philosophical Precedent
The question of whether intelligence emerges from processing capability or information quality has deep philosophical roots. Plato’s allegory of the cave presents prisoners whose entire reality consists of shadows on a wall. When freed, the prisoners don’t gain new cognitive abilities; they gain access to better information. Their processing remains constant; their dataset expands from shadows to actual objects.
This ancient insight finds modern expression in Herbert Simon’s bounded rationality, which acknowledges that rational agents operate within limits of available information, cognitive capacity, and time. Simon’s framework suggests that intelligence is not about optimization but about satisficing within constraints. Applied to artificial intelligence, this implies that reliable behavior might emerge more readily from constraining information sources than from expanding processing power.
The frame problem in AI, articulated by McCarthy and Hayes, asks how an intelligent agent determines what information is relevant to a given situation. Current approaches attempt to solve this computationally through ever-larger models. The constrained competence approach solves it curatorially by pre-determining relevance through controlled knowledge bases.
Three Philosophical Approaches to AI
Prediction: Intelligence as Pattern Recognition
The prediction paradigm views intelligence as sophisticated pattern matching across vast datasets. Large language models exemplify this approach, trained on internet-scale text to predict probable completions. The underlying assumption is that sufficient pattern recognition approximates understanding. Scale becomes the primary driver of capability.
Explanation: Intelligence as Reasoning
The explanatory intelligence paradigm, advocated by those concerned with AI’s brittleness, seeks systems that can articulate their reasoning and adapt to novel scenarios. This approach assumes that true intelligence requires not just pattern matching but the ability to generate and test hypotheses, to explain why rather than just predict what. The focus shifts from scale to interpretability.
Constrained Competence: Intelligence as Curated Knowledge
We propose a third approach that sidesteps the prediction-explanation debate entirely. Constrained competence treats intelligence not as a property of the processing system but as a property of the information-processing relationship. By controlling information sources, we can achieve reliable behavior without solving artificial general intelligence.
This approach acknowledges that in high-stakes domains, the quality of decisions depends more on the quality of information than on the sophistication of reasoning. A medical AI with access only to peer-reviewed literature will make better clinical recommendations than a more sophisticated AI trained on the entire internet, including medical misinformation.
The Medical Case Study
In medical practice, the implementation of constrained competence takes concrete form. One author’s AI assistant operates exclusively on a curated database of peer-reviewed medical literature. This system cannot engage in creative medical theorizing, nor can it draw on any deep understanding of biological systems. Instead, it excels at retrieving and synthesizing validated medical knowledge.
The key insight from this implementation is that reliability emerges from constraint rather than capability. By preventing the system from accessing unreliable information, we eliminate entire categories of failure modes. The AI cannot hallucinate treatments that don’t exist because it only knows about treatments in its curated database.
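To make the constraint concrete, consider a minimal sketch, in Python, of an assistant that answers only from a curated database and declines otherwise. The entries, the keyword-overlap matching, and every name below are illustrative assumptions, not the system described above:

from dataclasses import dataclass

@dataclass
class CuratedEntry:
    source: str   # label of the validated source
    text: str     # the validated claim or summary

# A toy curated database; real entries would come from peer-reviewed literature.
CURATED_DB = [
    CuratedEntry("hypothetical systematic review #1",
                 "treatment A reduces mortality in patients with condition X"),
    CuratedEntry("hypothetical clinical guideline #2",
                 "treatment B is first-line therapy for condition Y"),
]

def answer(query: str) -> str:
    """Answer only from the curated database; decline when no entry supports the query."""
    query_terms = set(query.lower().split())
    best, best_overlap = None, 0
    for entry in CURATED_DB:
        overlap = len(query_terms & set(entry.text.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = entry, overlap
    if best is None or best_overlap < 3:  # crude relevance threshold, purely illustrative
        return "No validated source in the curated database addresses this question."
    return f"{best.text} [source: {best.source}]"

print(answer("does treatment A reduce mortality in condition X"))
print(answer("can detox supplements cure cancer"))

The second query returns a refusal rather than a plausible-sounding guess: the failure mode is removed by construction, not by more sophisticated reasoning.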
This mirrors how medical education actually functions. Medical students don’t learn by reading everything ever written about health; they learn from carefully selected textbooks and journals. The curriculum represents a conscious constraint on information sources, prioritizing quality over quantity.
Hierarchical Credibility: A Formal Framework
Building on the medical example, we propose a hierarchical credibility framework applicable across domains. Just as evidence-based medicine recognizes that not all studies carry equal weight, AI systems should recognize gradients of information reliability.
The hierarchy might be structured as follows:
Level 1: Primary sources with highest credibility (systematic reviews, regulatory filings, supreme court decisions)
Level 2: Validated secondary sources (peer-reviewed studies, appellate rulings, audited statements)
Level 3: Professional observations (case reports, expert opinions, industry analyses)
Level 4: Indirect evidence (news reports, conference proceedings, preprints)
Level 5: Unverified claims (social media, blogs, forums)
Level 6: Known misinformation (retracted papers, conspiracy theories, fraudulent sources)
This hierarchy enables AI systems to communicate uncertainty appropriately. Rather than presenting all information with equal confidence, the system can indicate when recommendations rest on solid evidence versus speculation.
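As an illustration, the hierarchy could be encoded as a simple lookup that attaches a confidence label to every claim a system relays. The level assignments mirror the list above, but the label wording and the refusal rule are illustrative assumptions rather than a validated calibration:

from enum import IntEnum

class Credibility(IntEnum):
    PRIMARY = 1         # systematic reviews, regulatory filings, supreme court decisions
    VALIDATED = 2       # peer-reviewed studies, appellate rulings, audited statements
    PROFESSIONAL = 3    # case reports, expert opinions, industry analyses
    INDIRECT = 4        # news reports, conference proceedings, preprints
    UNVERIFIED = 5      # social media, blogs, forums
    MISINFORMATION = 6  # retracted papers, conspiracy theories, fraudulent sources

# How each level might be surfaced to the user (illustrative wording).
CONFIDENCE_LABEL = {
    Credibility.PRIMARY: "strong evidence",
    Credibility.VALIDATED: "validated evidence",
    Credibility.PROFESSIONAL: "professional observation",
    Credibility.INDIRECT: "indirect evidence; interpret cautiously",
    Credibility.UNVERIFIED: "unverified claim; not suitable for decisions",
    Credibility.MISINFORMATION: "known misinformation; excluded",
}

def present(claim: str, level: Credibility) -> str:
    """Refuse to relay known misinformation; otherwise label the claim's support."""
    if level == Credibility.MISINFORMATION:
        return "This claim originates from a discredited source and is not reported."
    return f"{claim} ({CONFIDENCE_LABEL[level]})"

print(present("Treatment A is recommended for condition X.", Credibility.PRIMARY))
print(present("Treatment B might help condition Y.", Credibility.UNVERIFIED))

The point of the sketch is not the particular labels but the discipline: every statement carries its place in the hierarchy, so speculation can never masquerade as evidence.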
Implications for AI Development
The constrained competence approach suggests several departures from current AI development practices:
First, instead of training on all available data, development should begin with careful curation of high-quality sources. The goal is not to maximize training data but to optimize information quality.
Second, evaluation metrics should prioritize reliability over capability. A system that correctly refuses to answer questions outside its knowledge domain may be more valuable than one that generates plausible but potentially incorrect responses.
Third, the path to artificial general intelligence may be less important than the path to artificial specialized competence. Rather than building systems that can do everything, we should build systems that excel within carefully defined boundaries.
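The second departure, rewarding reliability over raw capability, can be made concrete with a toy evaluation metric under which abstention outscores confident error. The specific penalty values below are illustrative assumptions; only their ordering matters:

ABSTAIN = None  # sentinel meaning "the system declined to answer"

def reliability_score(predictions, ground_truth):
    """Average score per item: +1 correct, 0 abstained, -2 incorrect."""
    total = 0.0
    for pred, truth in zip(predictions, ground_truth):
        if pred is ABSTAIN:
            total += 0.0
        elif pred == truth:
            total += 1.0
        else:
            total -= 2.0
    return total / len(ground_truth)

# An overconfident system that always answers versus one that abstains when unsure.
truth         = ["a", "b", "c", "d"]
overconfident = ["a", "b", "x", "y"]          # two confident errors
constrained   = ["a", "b", ABSTAIN, ABSTAIN]  # abstains outside its knowledge

print(reliability_score(overconfident, truth))  # (1 + 1 - 2 - 2) / 4 = -0.5
print(reliability_score(constrained, truth))    # (1 + 1 + 0 + 0) / 4 =  0.5

Under such a metric the constrained system wins despite answering fewer questions, which is exactly the behavior a high-stakes domain should reward.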
The Epistemological Dimension
The constrained competence approach represents a fundamental epistemological position: knowledge is not generated through reasoning alone but through interaction with reliable information. This challenges the implicit assumption in much AI research that intelligence can bootstrap itself from raw data.
Consider Searle’s Chinese Room argument, which contends that syntactic manipulation cannot create semantic understanding. The constrained competence approach sidesteps this debate by acknowledging that understanding may be less important than access to validated knowledge. The system doesn’t need to understand medicine if it can reliably access medical knowledge.
This position aligns with externalist theories of knowledge that locate intelligence not solely within cognitive agents but in their relationship with information environments. The quality of that environment becomes as important as the sophistication of the agent.
Challenges and Objections
Critics might argue that constrained competence merely postpones the intelligence problem. Who determines what information is reliable? How do we handle novel situations not covered by existing knowledge? Doesn’t this approach limit innovation and discovery?
These concerns are valid but not fatal. The determination of information quality can follow established practices in each domain, such as peer review in science, precedent in law, and audit in finance. Novel situations can be addressed through explicit communication of uncertainty rather than confident speculation. Innovation emerges from recombining validated knowledge rather than generating unfounded theories.
A more fundamental objection concerns the scalability of curation. Can we really manually curate all knowledge domains? The answer may be that we don’t need to; we need only curate the domains where reliability matters more than creativity.
Conclusion
The debate between prediction and explanation in AI development presents a false choice. Both approaches assume that intelligence emerges from processing capability rather than the quality of information. The constrained competence approach suggests that in high-stakes domains, the path to reliable AI lies not in building systems that can think better but in building systems with better things to think about.
This philosophical shift has practical implications. Instead of investing billions in scaling models to achieve artificial general intelligence, organizations might achieve better outcomes by investing in information curation and credibility assessment. The future of AI might depend less on computational breakthroughs than on epistemological frameworks that acknowledge the fundamental relationship between information quality and intelligent behavior.
The question is not whether AI can achieve human-level intelligence but whether it can achieve human-level discrimination about information sources. That capability, the ability to recognize that not all information is equally valid, may be more important than any amount of reasoning power.
In the end, the prisoners in Plato’s cave didn’t need better cognitive abilities; they needed better information. The same may be true for artificial intelligence.



