
As of this writing in August 2025, I am, as far as I can tell, still contributing to the world economy through creative ideas that are being tested and rolled out in the real world. This is partially because of ever more powerful AI, and partially despite it. But why? Modern AI systems like Claude Opus 4.1 and ChatGPT 5 know far more than I do about all but possibly a tiny sliver of ultra-specialized DNA hybridization kinetics, and I'm not actually using that sliver in my current work. There are surely groups trying to build or deploy autonomous AI scientists that burn millions of dollars of compute, and as far as I know the novel findings and solutions have been limited. It's a cop-out to say "AI is just not smart/advanced enough yet" when two distinct AI systems (Gemini and ChatGPT) have recently achieved gold-medal performance at the International Math Olympiad. There must be some other limitation.
My hypothesis is that (some) humans are still better at creative work today because their brains are more limited in the number of raw knowledge modules (N) they hold.
Often, my creative mental process is one of trying to map learnings from two disparate fields, such as applying astrophysical dating of distant stars to cross-sectional biomedical studies, or game theory from the Chinese Three Kingdoms era to modern NGS oligopoly dynamics. This type of cross-discipline ideation generates novel insights that are somehow relevant to my current work. Modern and future AI systems have vastly more knowledge (bigger N), but the number of pairwise interconnects grows as O(N²). If you know 10 different ideas, there are 45 pairwise "creative thoughts"; if you know 1,000 different ideas, there are 499,500 combinations. The vast majority of these pairwise combinations are either non-productive or non-interesting: e.g. combining "dogs often wag their tails" and "the English alphabet has 26 letters" doesn't really generate any deep insights, at least to my limited brain. Incidentally, when I asked Claude Opus 4.1 to "think really hard" about deep insights that could be obtained by combining these observations, he also responded that there doesn't seem to be any real connection and that overanalyzing would produce spurious insights (kudos!).
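To make the quadratic blow-up concrete, here is a minimal sketch (my own illustration, with arbitrary module counts) of how the number of pairwise idea combinations grows with N:

```python
from math import comb

# Number of pairwise "creative thoughts" available to a mind holding n
# distinct knowledge modules: C(n, 2) = n * (n - 1) / 2, i.e. O(N^2) growth.
for n in [10, 1_000, 1_000_000]:
    print(f"{n:>9,} ideas -> {comb(n, 2):>15,} pairwise combinations")

# Output:
#        10 ideas ->              45 pairwise combinations
#     1,000 ideas ->         499,500 pairwise combinations
# 1,000,000 ideas -> 499,999,500,000 pairwise combinations
```

The counts for 10 and 1,000 ideas match the figures above; at a million modules, the pairwise space alone is already half a trillion candidate connections, before any filtering for usefulness.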
Thus, the limited knowledge that we humans have could actually be a blessing, because it forces us to explore a much smaller space of potential pairwise idea marriages. This is sometimes called "the centipede paradox": ask a centipede how it coordinates its many legs, and it becomes paralyzed by overthinking. Similarly, studies of "choice paralysis" in marketing suggest that offering a handful of options increases sales, while offering dozens decreases them. Too many possibilities impede decision-making and creative exploration. An AGI with comprehensive knowledge of all human discoveries might face the ultimate choice paralysis, unable to efficiently navigate the combinatorial explosion of possible connections. While it could theoretically explore all paths, the computational cost of evaluating every possible connection might make focused exploration impossible.
An interesting parallel from neuroscience supports this hypothesis. Autism has been associated with higher degrees of neural connectivity than are found in neurotypical brains. This might seem like an advantage—more connections should mean better processing, right? Ironically, this can slow intellectual development, because the same cognitive resources get diluted across exploring more potential sets and combinations of ideas rather than deeply exploring individual ones. The brain has finite energy and time to evaluate connections, and having too many potential pathways can paradoxically reduce the depth of exploration in any single direction. There appears to be an optimal level of connectivity for innovation, and it's not maximum connectivity.
In addition to having fewer idea combinations to explore, humans (or AI systems) with less knowledge are also subject to less "poisoning" of the mind by wrong knowledge. The history of scientific breakthroughs is littered with "impossible" achievements that became possible only when someone didn't know they were impossible. These phantom barriers—false limitations that fields accumulate over time—can often only be broken by those who don't know they exist. Every field develops its own mythology about what's possible and impossible, and these myths become so deeply embedded that questioning them feels like questioning gravity itself.
The Illumina Revolution: When Short Beats Long
The transformation of DNA sequencing from an expensive, slow process to the foundation of modern genomics illustrates perfectly how ignorance of “established truths” enables breakthrough innovation. Before Illumina’s revolution, the entire field operated under an unquestioned assumption: longer reads are inherently superior for accurate genome assembly. This wasn’t merely preference—it was considered fundamental truth, backed by decades of experience and a Nobel Prize.
Sanger sequencing, the gold standard that earned Fred Sanger his second Nobel Prize, produced reads of 800-1000 base pairs. The logic for why longer was better seemed unassailable. Imagine trying to reconstruct a shredded document—would you rather have strips containing full sentences or just individual letters? Longer reads meant fewer pieces to assemble, better resolution of repetitive sequences that plague genomes, and more confident identification of genetic variants. The entire scientific community had converged on this truth: if you wanted to sequence a genome accurately, you needed the longest reads possible. Companies were spending millions trying to push read lengths even longer, to 2000 or 5000 base pairs, believing this was the path to cheaper, better sequencing.
When Solexa (later acquired by Illumina) proposed using reads of just 25-36 base pairs, the reaction from established scientists ranged from skepticism to outright dismissal. These reads were 30 times shorter than the standard—it seemed like trying to reconstruct Shakespeare from individual words rather than full paragraphs. Yet the Solexa team, many of whom came from outside traditional genomics, didn’t internalize the field’s reverence for read length. They recognized something the establishment had missed: computational advances in alignment algorithms and the massive parallelization possible with shorter reads could more than compensate for the reduced information per read. Instead of reading one long fragment at a time like Sanger sequencing, they could read millions of short fragments simultaneously.
The mathematics were compelling once you abandoned the "longer is better" dogma. While a single 1000-base pair Sanger read contained more information than a single 36-base pair Illumina read, the ability to generate 100 million Illumina reads in a single run changed everything. With enough lanes and runs, the short reads provided such dense coverage that every base in the genome would be read dozens or hundreds of times, allowing statistical methods to overcome the ambiguity of any individual short read. Repetitive sequences that confused short-read assembly could be resolved through paired-end reading and clever algorithmic approaches. The supposed weakness—short reads—became a strength by enabling massive parallelization.
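The coverage arithmetic can be sketched with the classic Lander–Waterman / Poisson model (a rough illustration on my part; the read counts and genome size below are round-number assumptions, not Solexa's actual specifications):

```python
import math

def coverage_stats(num_reads: float, read_len: float, genome_size: float):
    """Mean coverage and expected fraction of bases never covered,
    assuming reads land uniformly at random (Lander-Waterman model)."""
    mean_cov = num_reads * read_len / genome_size
    frac_uncovered = math.exp(-mean_cov)  # Poisson probability of 0 reads at a base
    return mean_cov, frac_uncovered

GENOME = 3.2e9  # human genome, roughly 3.2 gigabases

# One early-style run: ~100 million 36-bp reads (assumed round numbers)
cov, missed = coverage_stats(100e6, 36, GENOME)
print(f"one run:  {cov:.1f}x coverage, {missed:.0%} of bases never read")

# Pool ~30 such runs (or one modern high-output run)
cov, missed = coverage_stats(3e9, 36, GENOME)
print(f"30 runs:  {cov:.1f}x coverage, {missed:.2e} of bases never read")
```

At dozens-fold coverage each position is sampled many times independently, which is what lets consensus statistics beat the ambiguity of any single 36-base read.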
The impact on human genomics was transformative and immediate. The Human Genome Project, using primarily Sanger sequencing, cost approximately $3 billion and took 13 years to complete the first human genome. By 2008, just two years after Illumina acquired Solexa, the cost had dropped to roughly $100,000 per genome. By 2014, it reached $1,000, and today in 2025, whole genome sequencing costs less than $200 and takes under 24 hours. This cost reduction of several orders of magnitude didn't come from incremental improvements to long-read technology—it came from abandoning the fundamental assumption that long reads were necessary. The phantom barrier of "longer reads are essential" had constrained the field for decades, preventing researchers from exploring radically different approaches that ultimately proved superior for most applications.
The Monoclonal Antibody Revolution: Ugly Solutions to Impossible Problems
The story of monoclonal antibodies demonstrates how the crudest, most inelegant solution can spawn a $200 billion industry when it breaks through a phantom barrier that everyone “knows” is insurmountable. Today, monoclonal antibodies are wonder drugs treating everything from cancer to COVID-19, but their origin story involves forcing cells together like pushing reluctant puzzle pieces until they stick—an approach so crude that established immunologists initially dismissed it as unlikely to work reliably.
The phantom barrier seemed like fundamental biology: antibody-producing B cells simply cannot survive in culture. This wasn’t a technical challenge to overcome but a biological fact to accept. B cells, when removed from the body and placed in even the most sophisticated culture media, would divide a few times and then die, usually within a week or two. Decades of research by brilliant immunologists had tried everything—adding growth factors, optimizing nutrient mixtures, adjusting oxygen levels, co-culturing with other cell types. Nothing worked. The conclusion was clear: B cells were inherently programmed for a short lifespan, and this was simply how biology worked. If you wanted antibodies, you needed living animals or you needed to accept the limitations of short-term B cell cultures.
César Milstein and Georges Köhler's solution in 1975 was so crude it bordered on absurd. Instead of trying to make B cells immortal, they would force them to fuse with cells that were already immortal—myeloma cancer cells. The fusion process itself was almost comically primitive: mix B cells and myeloma cells together with polyethylene glycol (a simple, cheap polymer), which destabilizes cell membranes enough that some cells accidentally merge together. This creates a chaotic mixture of fused and unfused cells, cells that fused with themselves, and the desired B-myeloma hybrids. The elegance came not from the fusion but from the selection system. The myeloma cells were mutants lacking a critical enzyme (HGPRT), so they couldn't survive in HAT selection medium (containing hypoxanthine, aminopterin, and thymidine), in which aminopterin blocks the main nucleotide synthesis pathway and survival requires the HGPRT-dependent salvage pathway. Normal B cells would die naturally in culture. Only the fused hybridoma cells, combining the B cell's HGPRT enzyme with the myeloma's immortality, could survive.
The resulting hybridomas were Frankenstein monsters of biology—genomically unstable cells that would lose chromosomes over time, requiring constant monitoring and repeated cloning to maintain antibody production. They typically start as tetraploid (having chromosomes from both parent cells) but progressively shed chromosomes, particularly from the B cell parent. Many labs found their carefully cultivated hybridomas would stop producing antibodies after 20-50 passages, or worse, start producing different antibodies as chromosomal instability led to genetic drift. The standard practice became to create massive frozen stocks at early passages and regularly test for antibody specificity. It was labor-intensive, unreliable, and inelegant—everything modern molecular biology tries to avoid.
Yet this ugly hack launched modern immunotherapy. Every major therapeutic antibody until the late 1990s started as a mouse hybridoma. Herceptin for breast cancer, Rituxan for lymphoma, Remicade for autoimmune diseases—all began with this crude fusion technique. The therapeutic antibody market, worth over $200 billion annually in 2025, exists because two scientists ignored the “biological fact” that B cells couldn’t be cultured indefinitely and instead asked, “What if we just smash them together with immortal cells?” The phantom barrier—that B cells inherently couldn’t survive in culture—had prevented the field from considering such a crude workaround. Sometimes the solution isn’t to solve the problem but to bypass it entirely through brute force.
The technology has since been largely replaced by more elegant recombinant methods—phage display, transgenic mice with human antibody genes, and single B cell cloning. But it took the initial crude breakthrough to demonstrate that monoclonal antibodies were possible and valuable, which then justified investment in better technologies. The lesson isn’t that crude solutions are ideal, but that breaking phantom barriers often requires approaches so crude that experts would never consider them.
Giant Magnetoresistance: The 5% Barrier That Wasn’t
The discovery of giant magnetoresistance (GMR) transformed computer storage from luxury to commodity, enabling the smartphone in your pocket to hold more data than entire data centers from the 1980s. Yet this breakthrough hid in plain sight for decades because physicists “knew” that magnetic fields could only weakly affect electrical resistance—a phantom barrier that seemed rooted in fundamental physics but was actually just limited imagination about what happens at nanoscale dimensions. In ferromagnetic materials like iron or nickel, applying a magnetic field could change electrical resistance by perhaps 5% at most, and usually much less—typically 1-2% at room temperature. Generations of physicists learned this as fundamental truth, as solid as conservation of energy.
This 5% barrier had massive practical implications for computer storage. Hard drives store data as tiny magnetized regions on spinning disks, with read heads detecting these magnetic fields and converting them to electrical signals. With only 5% magnetoresistance, you needed relatively large, strongly magnetized regions to generate signals clearly distinguishable from noise. This meant hard drives of the 1980s were enormous—washing-machine-sized cabinets holding what today would be a trivial amount of data—and extremely expensive. The industry was desperate for better magnetic sensors, but everyone "knew" the physics limited them to tiny improvements. Companies focused on making incrementally better 5% sensors rather than questioning whether 5% was truly the limit.
Albert Fert in France and Peter Grünberg in Germany, working independently in the late 1980s, discovered something that shouldn't exist according to conventional wisdom: resistance changes of 50% or more, eventually reaching up to 200% in optimized structures. They achieved this by creating sandwiches of magnetic and non-magnetic layers, each just a few nanometers thick—thinner than the distance electrons typically travel before scattering. At this scale, quantum mechanics dominates over classical physics. Electrons maintain their spin orientation as they traverse multiple layers, and the relative alignment of the magnetic layers acts like a spin valve. When the magnetic layers are aligned parallel, electrons with one spin pass through easily while electrons with the opposite spin are scattered. When the layers are antiparallel, each spin channel is strongly scattered in one layer or the other, so both channels see high resistance. The effect was massive—10 to 40 times larger than the "maximum" 5%.
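The spin-valve intuition fits in a toy "two-current" resistor model (a minimal textbook-style sketch, not the real multilayer physics; the spin-dependent resistance values below are made up for illustration):

```python
def gmr_ratio(r_easy: float, r_hard: float) -> float:
    """GMR ratio (R_AP - R_P) / R_P for two magnetic layers in series,
    treating spin-up and spin-down electrons as parallel conduction channels.
    r_easy / r_hard: per-layer resistance when the electron's spin is
    aligned / anti-aligned with that layer's magnetization."""
    # Parallel magnetizations: one spin channel is easy in both layers,
    # the other is hard in both; the two channels conduct in parallel.
    r_parallel = (2 * r_easy) * (2 * r_hard) / (2 * r_easy + 2 * r_hard)
    # Antiparallel magnetizations: each channel is easy in one layer and
    # hard in the other, so both channels see r_easy + r_hard.
    r_antiparallel = (r_easy + r_hard) / 2
    return (r_antiparallel - r_parallel) / r_parallel

print(f"{gmr_ratio(1.0, 5.0):.0%}")   # moderate spin asymmetry -> ~80%
print(f"{gmr_ratio(1.0, 10.0):.0%}")  # strong spin asymmetry   -> ~202%
```

The ratio works out to (r_hard - r_easy)^2 / (4 * r_easy * r_hard), so even a modest spin asymmetry in scattering yields resistance changes far beyond the "5% barrier" of bulk ferromagnets.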
The practical impact was immediate and transformative. Within a decade of GMR’s discovery, every hard drive used GMR sensors. Storage capacity exploded from megabytes to gigabytes to terabytes while costs plummeted. A gigabyte of storage cost $100,000 in 1980, $1,000 in 1990, and less than $10 by 2000—a 10,000-fold reduction driven primarily by GMR sensors that could read increasingly tiny magnetic domains. Your smartphone can hold 256GB of data because GMR sensors can detect magnetic regions just nanometers across, converting field changes smaller than Earth’s magnetic field into readable signals. The transition was so complete that GMR itself became obsolete within 15 years, replaced by tunneling magnetoresistance (TMR) with even larger effects—up to 600% at room temperature—that the GMR breakthrough had inspired researchers to seek.
The story of GMR reveals how phantom barriers compound over time. Once physicists “knew” the 5% limit, they stopped looking for larger effects. Funding agencies wouldn’t support research into “impossible” phenomena. Graduate students learned not to question established limits. The field developed elaborate workarounds—complex signal processing, exotic materials, cryogenic cooling—rather than questioning the fundamental assumption. It took researchers working at a new length scale, with different backgrounds and perspectives, to reveal that the barrier was never real. The “fundamental limit” was just the limit of what people had tried, not what physics actually allowed.
The Sub-ASI Simulation Problem
The obvious objection to our argument is that an artificial superintelligence (ASI) could simply create limited sub-agents with fewer knowledge modules to simulate human-like creativity. If the O(N²) scaling is the issue, just reduce N artificially to K. Create thousands of specialized sub-ASIs, each with different knowledge combinations, and let them explore idea space with human-like efficiency. While this solution seems elegant and feasible with sufficient computational resources, it faces a devastating mathematical reality: selecting K knowledge modules out of N is a Combination(N, K) problem, which is even more intractable than the O(N²) exploration of pairwise ideas.
For concrete numbers, if an ASI has N=1,000,000 knowledge modules and wants to create human-like sub-agents with K=1,000 modules each, the number of possible combinations is on the order of 10^3,400—a number so large that even examining one combination per nanosecond would take vastly longer than the time remaining until the heat death of the universe. Of course, heuristics will be developed to select "reasonable" sets of knowledge to seed the sub-ASI systems with. The ASI might choose modules that historically work well together, or modules relevant to specific problem domains. Yet it is precisely the "out of distribution" knowledge set combinations that often lead to the most impactful insights.
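The count above is easy to verify; a minimal sketch (the values of N and K are just the ones assumed in the thought experiment):

```python
import math

N = 1_000_000  # knowledge modules held by the hypothetical ASI
K = 1_000      # modules given to each human-like sub-agent

# Exact binomial coefficient; Python handles the huge integer directly.
num_subsets = math.comb(N, K)
print(f"C(N, K) has {len(str(num_subsets)):,} decimal digits")  # 3,433 digits

# Same order of magnitude via log-gamma, without building the big integer.
log10_subsets = (math.lgamma(N + 1) - math.lgamma(K + 1)
                 - math.lgamma(N - K + 1)) / math.log(10)
print(f"C(N, K) ~ 10^{log10_subsets:,.0f}")  # ~ 10^3,432
```

At one candidate subset per nanosecond, exhausting roughly 10^3,432 subsets would take about 10^3,415 years, compared with roughly 10^10 years elapsed since the Big Bang.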
The selection problem compounds because valuable combinations often look useless in advance. The most transformative knowledge combinations are often those that seem completely unrelated until a specific problem reveals their hidden connection. An ASI trying to pre-select promising knowledge combinations faces an impossible prediction problem—it must know which combinations will be valuable before exploring them, but it can only know their value by exploring them.
The VC Portfolio Model of Human Preservation
If we accept that human creativity provides unique value through cognitive constraints, the economic logic for human preservation becomes compelling—even from a purely rational AGI perspective. The model isn’t humanitarian but portfolio-theoretic, similar to how venture capitalists approach investment. VCs don’t invest in startups because they feel sorry for entrepreneurs; they invest because the mathematics of power law returns makes it rational to accept high failure rates for access to extreme successes.
Creative breakthroughs follow power law distributions where the tail events generate most of the value. Most human ideas are mundane, but the rare breakthroughs—the Einsteins, the Turings, the Curies—generate enormous value that can transform entire civilizations. A single human’s unexpected insight could unlock entirely new fields of physics, mathematics, or engineering that AGI might never discover through systematic search. Just as VCs accept 90% failure rates because the 10% of successes more than compensate, a rational AGI would preserve human creativity for the tail events. The mathematics are compelling: if one human in a billion produces a paradigm-shifting insight, and that insight creates value exceeding the cost of maintaining billions of humans, then preservation is economically rational.
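As a cartoon of that expected-value inequality (every number below is a placeholder assumption for illustration, not an estimate):

```python
# Toy expected-value check on the "VC portfolio" argument for preserving humans.
# All figures are placeholder assumptions, expressed in arbitrary value units.
population = 8e9                  # humans preserved
upkeep_per_human_per_year = 1e4   # assumed annual maintenance cost per human
p_breakthrough_per_year = 1e-9    # assumed chance a given human yields a paradigm shift
value_of_breakthrough = 1e15      # assumed value of one paradigm-shifting insight

expected_gain = population * p_breakthrough_per_year * value_of_breakthrough
total_upkeep = population * upkeep_per_human_per_year

print(f"expected annual gain: {expected_gain:.1e}")  # 8.0e+15
print(f"annual upkeep cost:   {total_upkeep:.1e}")   # 8.0e+13
print("preservation rational?", expected_gain > total_upkeep)  # True, under these assumptions
```

The point is not the specific numbers but the structure: as long as the (tiny) probability times the (enormous) value of a tail-event insight exceeds the (modest) upkeep cost, the portfolio logic holds.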
The energetic cost of human maintenance is trivial from an AGI perspective. Maintaining 8 billion humans requires roughly 10^20 joules annually—a rounding error compared to potential AGI energy consumption. Current data centers already consume about 1% of global electricity, and AGI infrastructure could easily require orders of magnitude more. In the future, maintaining all humans might cost less energy than a 1% efficiency improvement in AGI systems would save. From an AGI’s perspective, keeping humans around is like a Fortune 500 company maintaining a small R&D lab—the cost is negligible compared to potential returns.
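A quick back-of-envelope shows where a figure of order 10^20 joules comes from (my own rough assumptions, especially the food-system overhead factor):

```python
KCAL_TO_J = 4184              # joules per kilocalorie

people = 8e9
kcal_per_person_day = 2000    # typical dietary intake
food_system_overhead = 10     # assumed: ~10 J of supply-chain energy per 1 J eaten

metabolic_j_per_year = people * kcal_per_person_day * KCAL_TO_J * 365
total_j_per_year = metabolic_j_per_year * food_system_overhead

print(f"metabolic only:     {metabolic_j_per_year:.1e} J/yr")  # ~2.4e19
print(f"with food overhead: {total_j_per_year:.1e} J/yr")      # ~2.4e20, i.e. order 1e20
```

Against global primary energy use of roughly 6×10^20 joules per year, this is indeed a modest line item.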
Historical precedent supports this portfolio model across multiple domains. The Manhattan Project succeeded partly by pursuing multiple enrichment methods simultaneously—electromagnetic, gaseous diffusion, and thermal diffusion—despite enormous redundancy costs. They didn’t know which would work, so they tried all approaches. The gaseous diffusion plants at Oak Ridge cost billions in today’s dollars, yet electromagnetic separation at the Y-12 plant initially produced most of the enriched uranium for the first bomb. Was gaseous diffusion wasted? No—it became the dominant method post-war. The redundancy wasn’t inefficiency but insurance against unknown unknowns. Similarly, an AGI that eliminates human creativity is making an irreversible bet that it has already discovered all useful patterns of exploration. Given the infinite parameter space of possible discoveries, this seems economically irrational.
We can’t predict which seemingly useless human knowledge will become crucial. During World War II, the British recruited crossword puzzle enthusiasts for codebreaking at Bletchley Park—a connection that seemed absurd until these puzzle-solvers helped crack Enigma and shortened the war by years. The option value of maintaining diverse human perspectives exceeds the cost by many orders of magnitude, especially when we consider that eliminating humans would be irreversible while maintaining them preserves optionality.
Beyond Economic Calculation: The Future of Human-AI Collaboration
The future of human value in an AGI world isn’t binary—”useful” versus “obsolete.” Instead, different humans will likely prove valuable at different times for different explorations, in ways we cannot predict in advance. The teenager obsessed with obscure anime might develop pattern recognition that proves crucial for understanding emergent AI behaviors. The grandmother who maintains traditional crafts might preserve cognitive approaches that become relevant for materials science breakthroughs. Value in an AGI world might be latent and contextual rather than immediately apparent.
We’re already seeing hints of this dynamic in current AI development. The most interesting applications of large language models often come from users with unusual backgrounds who apply them in unexpected ways. Artists use them for coding, programmers use them for poetry, historians use them for data analysis. The innovations come not from AI experts but from people who bring unusual contexts and requirements. This pattern will likely intensify as AI capabilities expand. The most valuable humans might not be those with the most knowledge or highest IQ, but those with the most unusual combinations of experience and perspective.
This suggests that the most strategic way to maintain agency in an AGI future is to deliberately cultivate unique knowledge combinations that maximize your distance from both AI training sets and other humans. Don't just learn machine learning—combine it with medieval history or marine biology. Don't just study business—integrate it with poetry or crystallography. The value lies not in the individual modules but in the unique connections your particular combination enables. The goal isn't to compete with AI on its strengths but to develop capabilities that are fundamentally complementary. This means embracing rather than eliminating your quirks, obsessions, and seemingly impractical interests.
The paradox we’ve explored throughout this discussion is that our limitations are our strength. The finite nature of human cognition, the bounded rationality that economists bemoan, the cognitive biases that psychologists document—these aren’t bugs to be fixed but features that enable unique exploration patterns. In a universe of infinite possibility, the value isn’t in those who can explore everything but in those who explore differently. Our constraints force us to develop heuristics, shortcuts, and compressions that reveal patterns invisible to more comprehensive analysis.
The future belongs not to those who know the most but to those who connect differently. In the age of artificial general intelligence, the most valuable humans may be the most genuinely, irreducibly, weirdly specific.
By David Zhang and Claude 4.1 Opus
August 19, 2025
© 2025 David Yu Zhang. This article is licensed under Creative Commons CC-BY 4.0. Feel free to share and adapt with attribution.