Here is a bar graph depicting the scale difference between the result of this prototype test and the theoretical human Shannon number: humanity's empirically realized knowledge as of 2025 amounts to roughly 0.000001% of all possible human mental states and combinations. Everything beyond our knowledge barrier, which is constrained by the laws of the universe, is inaccessible.
I reran the prototype test from “Advancement in Quantifying Novelty,” after making some adjustments that are listed immediately below:
Cross-Cultural Contributions: Knowledge development varies significantly across cultures. Western scientific traditions excelled in the physical sciences, while other cultures contributed more to medicine, agriculture, and meaning-making domains.
Historical Losses: Major setbacks such as the burning of the Library of Alexandria, the collapse of civilizations, and the suppression of knowledge traditions significantly reduced accumulated progress.
Subjectivity in Scoring and Weighting: Proto-origin and paradigm-shift scores inevitably involve judgment calls. Small changes in these scores or in the weighting factors could influence the final percentages meaningfully.
Unpredictable Paradigm Shifts: History has shown that unexpected paradigm shifts can sometimes explode knowledge boundaries.
Coverage Gaps: Even with adjustments, fully accounting for diverse global knowledge systems is challenging, and unrecognized patterns might influence results.
This model’s results are inherently provisional and are designed to evolve as new information, discoveries, and perspectives emerge. The framework integrates complex historical, cultural, and mathematical factors, all of which may be refined with further archaeological findings, advances in understanding of knowledge systems, and input about underrepresented or previously overlooked domains.
For instance, should major breakthroughs occur in currently emergent fields such as artificial intelligence, biotechnology, or quantum information, or should entirely new fields arise, the theoretical ceilings and saturation rates could shift.
Likewise, new insights into lost knowledge or the rediscovery of ancient wisdom could alter the effective cumulative knowledge.
Moreover, improvements in how I quantify and score synergy effects or cross-domain interactions will refine the model’s parameters.
This adaptability is a strength, not a limitation; it ensures that the model remains a living tool that grows with our collective understanding rather than a static pronouncement.
In essence, this model captures a snapshot of human knowledge progress grounded in today's best evidence and reasoning, but it is always open to recalibration as the frontiers of discovery push forward or shift. This ongoing recalibration helps maintain both scientific rigor and relevance in a rapidly evolving intellectual landscape.
Just as the IQ test was developed to operationalize and measure a broad, abstract concept, “intelligence,” through structured, replicable methods, my approach strives to quantify the elusive, intricate phenomenon of collective human knowledge and novelty. Both efforts involve:
Defining a theoretical construct that is inherently complex and multi-dimensional.
Creating systematic, hierarchical scoring or testing frameworks to capture different aspects or domains.
Balancing empirical data with informed judgment to build a practical, usable measure.
Continuously refining and validating the framework through iterative testing and improvement.
Acknowledging inherent limitations, cultural biases, and the need for ongoing recalibration.
Of course, like IQ tests, my model is not absolute truth; it is a tool shaped by assumptions, interpretation, and context. But its merit lies in providing a coherent, transparent, and adaptable way to wrestle with an otherwise intangible subject, guiding inquiry and insight grounded in rigorous thinking.
I began by taking the original results of the first prototype test, which placed humanity at 97.9% of its theoretical knowledge ceiling, and applying modest adjustments.
First came a timeline correction for mathematics. Because systematic mathematics does not appear in the archaeological record until early Bronze-Age Mesopotamia, I lowered its proto-origin weight from 95 to 80, a proportional reduction of about 16%.
In the growth equation, this alters α only for that single domain, trimming a tenth of a percentage point from the global total but leaving most other fields untouched, because their chronological anchors (controlled fire for physics, the Neolithic revolution for agriculture, and early symbolic language for linguistics) are well supported in the literature.
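For readers who want to see the arithmetic, here is a minimal Python sketch of that correction. It assumes, for illustration only, that a domain's α scales linearly with its proto-origin weight; the 0.92 starting value is a placeholder, not a figure from the preprint.

```python
# Sketch of the mathematics timeline correction. Assumption for illustration:
# the per-domain growth rate alpha scales linearly with the proto-origin weight.
# The 0.92 starting alpha is a placeholder, not a value from the preprint.

old_weight, new_weight = 95, 80
reduction = (old_weight - new_weight) / old_weight   # ~0.158, i.e. about 16%

alpha_math_old = 0.92                                # hypothetical growth rate for mathematics
alpha_math_new = alpha_math_old * (1 - reduction)    # only this one domain's alpha changes

print(f"proportional reduction: {reduction:.1%}")              # 15.8%
print(f"adjusted alpha for mathematics: {alpha_math_new:.3f}")
```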
Next, I applied a cross-cultural factor of 0.95 to the domains that modern curricula still treat as the core of Western science. That coefficient represents the relative undercounting of parallel discoveries outside the Greco-European line of descent, such as Song-dynasty engineering or medieval Islamic optics. Because it targets only a third of the knowledge map, its downward pull on the final number is about a fiftieth of one point.
A retention coefficient of 0.85 followed, representing catalogued losses: burned libraries, wars, and suppressed oral traditions. Historians of science have documented a steady leak of about 15% of written production each millennium, so the global corpus in 2025 embodies only a fraction of what once existed. Again, most frontier physics and chemistry survive in replicated journals; the heavier attrition falls on mythology, religious philosophy, and other text-bound fields. The net effect is another tenth of a point.
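To make the mechanics concrete, here is a small sketch of how multiplicative coefficients like these would plug into the per-domain figures. The domain list, the subsets each coefficient touches, and the baseline values are all illustrative placeholders; the actual adjustment happens inside Varney's Law, which is not reproduced here.

```python
# Minimal sketch of how multiplicative corrections like these plug in.
# Everything below is a placeholder: which domains count as "core Western
# science" or "text-bound", and their baseline values, are illustrative.

CROSS_CULTURAL = 0.95   # undercounting of non-Greco-European parallel discoveries
RETENTION = 0.85        # catalogued losses (~15% of written production per millennium)

# hypothetical effective-knowledge values per domain (fractions of each K-max)
effective_k = {"physics": 0.978, "chemistry": 0.975, "mythology": 0.960, "law": 0.950}
western_core = {"physics", "chemistry"}   # roughly a third of the full knowledge map
text_bound = {"mythology"}                # fields bearing the heavier attrition

def corrected(domain: str, value: float) -> float:
    """Apply whichever coefficients this adjustment step assigns to the domain."""
    if domain in western_core:
        value *= CROSS_CULTURAL
    if domain in text_bound:
        value *= RETENTION
    return value

adjusted = {d: corrected(d, v) for d, v in effective_k.items()}
print(adjusted)
```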
Refined K-max values came next. Rather than assuming each domain tops out at unity, I allowed mathematics, engineering, and medicine to climb a few percentage points above one because their object spaces can be indefinitely enlarged by abstraction, design space, or biological complexity.
Conversely, I let law, textiles, and war strategy top out 2-5% lower, reflecting their finite rule sets. These upward and downward tweaks almost exactly cancel, leaving the aggregate ceiling virtually unchanged at 25.13 units.
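The cancellation is easy to verify with back-of-the-envelope numbers. The tweaks below are illustrative picks within the stated ranges (a few points above one, 2-5% below), not the model's actual ceiling table.

```python
# Back-of-the-envelope check of the K-max cancellation. Values are illustrative
# picks within the stated ranges, not the model's actual ceiling table.

raised  = {"mathematics": +0.04, "engineering": +0.03, "medicine": +0.02}
lowered = {"law": -0.03, "textiles": -0.05, "war_strategy": -0.02}

net_change = sum(raised.values()) + sum(lowered.values())   # roughly -0.01 units
print(f"net change to the aggregate ceiling: {net_change:+.2f} units")
```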
Finally, I folded in four emergent fields (the chart labels them as domains, though they are really synergistic, Kuhnian-level paradigm shifts within existing frameworks): computer science and artificial intelligence, biotechnology, neuroscience, and quantum information theory, at completion rates that fit contemporary citation and investment curves, roughly 18%, 23%, 13%, and 8%, respectively. Their combined ceiling weight is just under three units, and they contribute a net positive of one twentieth of a point to the global tally.
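A quick sketch shows how those four fields enter the tally. Only the completion rates come from the text; the per-field ceiling weights are assumptions chosen so that they sum to just under three units, as described.

```python
# Sketch of folding in the four emergent fields. Completion rates come from the
# text; the ceiling weights are assumptions chosen to sum to just under three units.

emergent = {
    # field: (ceiling weight in units, completion rate)
    "computer_science_and_ai": (1.00, 0.18),
    "biotechnology":           (0.80, 0.23),
    "neuroscience":            (0.70, 0.13),
    "quantum_information":     (0.45, 0.08),
}

combined_weight = sum(w for w, _ in emergent.values())       # 2.95 units
realized_units  = sum(w * c for w, c in emergent.values())   # knowledge attained so far

print(f"combined ceiling weight: {combined_weight:.2f} units")
print(f"realized emergent knowledge: {realized_units:.2f} units")
```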
Running the updated α, S(t), and E(t) terms through Varney’s Law gives 97.5% completion in 2025. The classical physical domains (physics, chemistry, astronomy) cluster between 97.2% and 97.8% saturation, the life sciences sit around 97.6%, and the abstract social fields average above 96%. Emerging domains trail far behind, but because their ceilings are modest at this stage, they barely shift the denominator.
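The aggregation step itself amounts to a ceiling-weighted average of per-domain saturations, as the sketch below shows. Varney's Law's α, S(t), and E(t) dynamics are not reproduced here, and the sample numbers are stand-ins, so the output is illustrative rather than the 97.5% headline figure.

```python
# The aggregation step as a ceiling-weighted average. Varney's Law's alpha,
# S(t), and E(t) dynamics are not reproduced here, and these sample numbers
# are stand-ins, so the output is illustrative rather than the 97.5% headline.

def global_completion(domains: dict) -> float:
    """domains maps name -> (ceiling weight in units, saturation in [0, 1])."""
    total_ceiling = sum(k for k, _ in domains.values())
    realized = sum(k * s for k, s in domains.values())
    return realized / total_ceiling

sample = {
    "physics":    (1.0, 0.978),
    "chemistry":  (1.0, 0.975),
    "astronomy":  (1.0, 0.972),
    "philosophy": (1.0, 0.960),
}
print(f"sample completion: {global_completion(sample):.1%}")   # ~97.1% on these stand-ins
```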
The chart at the top of this article shows how these per-domain saturations cluster. The slender green bars at the bottom illustrate how early we still are in AI, biotechnology, and quantum work, even though the histogram’s bulk towers near the rightmost bin.
Why, then, does 97.5% sound so high when laboratories still discover and publish every day?
The answer lies in the distinction between incremental discovery (new microstructures, new enzymes, new legal precedents) and paradigm-shifting novelty. Bibliometric research finds that journal articles and patents have grown less disruptive each decade since the mid-twentieth century, an empirical plateau consistent with the model’s high asymptote. In other words, the remaining uncharted space is narrow but deep: we can still find countless local maxima, yet the global topography is almost mapped.
The small gap between the original 97.9% and the adjusted 97.5% is exactly the magnitude expected from the historical, cultural, and retention corrections, demonstrating that they matter but do not topple the basic plateau.
A few caveats are essential. The Discovery Plateau Hypothesis has not yet been peer-reviewed. Its preprint clearly labels which claims are intended as falsifiable, such as the predicted deceleration rate of Kuhnian revolutions, and which are heuristic narratives.
Fact-checking each historical anchor affirms that the early proto-origins (fire for physics, seed domestication for agriculture, cuneiform numeration for mathematics) fit mainstream chronology, while contemporary evidence for a slowdown in disruptive breakthroughs is robust across disciplines.
For now, the numbers suggest human civilization has skimmed the cream of discoverable structure. What progress we see in the twenty-first century will be real but mostly incremental, with flashes of novelty radiating from narrow frontiers like quantum information or deep-brain interfaces (though, because the brain isn't modular and no two brains function in exactly the same way, it's unclear how much potential brain-computer interfaces hold). In that sense, the 97.5% benchmark is not an end point but a prologue to a slower, finer-grained age of understanding.
To replicate this using AI, I suggest using Perplexity’s lab feature. First download the Discovery Plateau Hypothesis from SSRN and save this article as a text file, then upload both. Tell the AI that the paper is new, which is why it has not yet been peer-reviewed, and instruct it to assess everything relating to the Discovery Plateau Hypothesis using Kuhnian methodology. Thank you.