The Human Shannon Number: Quantifying the Limits of Individual and Collective Novelty
(This article is a further chipping away at the problem of trying to quantify novelty. To get a better idea of previous attempts at doing so, please read ‘How Do We Quantify Novelty?’)
An Important Step Toward Quantifying Novelty
Every human brain contains approximately 86 billion neurons, each capable of forming thousands of synaptic connections. Conservative estimates suggest that the human brain can store about 2.5 petabytes of information, or roughly 2 × 10¹⁶ bits.
From information theory, the number of possible states of a system with N bits is 2^N. Therefore, the theoretical number of distinct configurations of a single human brain, its personal Shannon number, is:

2^(2 × 10¹⁶) ≈ 10^(6 × 10¹⁵)
This number is so large that it dwarfs the famous Shannon number for chess (~10¹²⁰), illustrating the immense combinatorial potential of the human mind.
Now scale this up. Approximately 117 billion humans have ever lived on Earth. Summing the information capacity of all these brains gives a total of roughly 2.34 × 10²⁷ bits. The corresponding number of possible states for humanity’s collective brains is:

2^(2.34 × 10²⁷) ≈ 10^(7 × 10²⁶)
This astronomical number represents the theoretical upper limit of all possible human thoughts, memories, experiences, and creative combinations that could ever exist throughout history. It provides a fundamental quantitative baseline for understanding and measuring novelty.
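These exponents can be checked directly from the bit counts. A minimal Python sketch, using the 2 × 10¹⁶ bits-per-brain and 117-billion-humans figures from above (everything else is plain arithmetic):

```python
import math

BITS_PER_BRAIN = 2e16   # ~2.5 petabytes, as estimated above
HUMANS_EVER = 117e9     # approximate number of humans who have ever lived

def states_exponent(bits: float) -> float:
    """Base-10 exponent of 2**bits, i.e. log10 of the number of possible states."""
    return bits * math.log10(2)

personal = states_exponent(BITS_PER_BRAIN)                   # ~6.0 × 10^15
collective = states_exponent(BITS_PER_BRAIN * HUMANS_EVER)   # ~7.0 × 10^26

print(f"Personal Shannon number:   10^{personal:.2e}")
print(f"Collective Shannon number: 10^{collective:.2e}")
```

Working in log10 is necessary here: the numbers themselves are far too large to represent directly as floats.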
Here's why these numbers are key to quantifying novelty:
Quantifying Novelty as a Fraction of Capacity
Novelty (new, unpredictable, or surprising information distinct from what is already known) requires a comparison against this total capacity. By framing novelty as a ratio or fraction of the Shannon capacity, we can meaningfully say how much truly new information remains versus what has already been encoded or discovered. This is the foundation for quantifying the “novelty ratio” that the Discovery Plateau Hypothesis (DPH) uses to model diminishing returns over time.
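The novelty ratio described above can be made concrete as the fraction of capacity not yet encoded. A hypothetical sketch (the function and variable names are mine, not part of the DPH formalism):

```python
def novelty_ratio(known_bits: float, capacity_bits: float) -> float:
    """Fraction of the Shannon capacity still available for new information."""
    if not 0 <= known_bits <= capacity_bits:
        raise ValueError("known_bits must lie between 0 and capacity_bits")
    return 1.0 - known_bits / capacity_bits

# Example: a system that has already encoded 1% of its capacity
print(novelty_ratio(known_bits=1e14, capacity_bits=1e16))  # → 0.99
```

A ratio near 1 means the space of possible new information is nearly untouched; a ratio near 0 means the reservoir is close to saturation.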
Modeling Knowledge Saturation and Discovery Limits
Using the Shannon numbers as finite ceilings allows models like Varney’s Law of Information to mathematically describe how cumulative discovery approaches saturation. As humanity accumulates knowledge, the information reservoir fills, and the rate of novel discovery slows asymptotically toward zero. Knowing these Shannon limits is a step toward a rigorous framework to study when and how novelty plateaus emerge.
Grounding Abstract Concepts in Physical Reality
Shannon numbers link abstract information theory to the physical and biological realities of human cognition and population. They ensure that novelty quantification is not just a conceptual construct but tied to measurable brain capacities and demographic data, making predictions testable and empirically grounded.
Implications for Technology and Civilization
Recognizing these limits highlights why natural discovery faces hard constraints, motivating reliance on synthetic novelty sources (AI, simulations) and domain expansions (space, quantum realms). The Shannon number quantifies the scale of the challenge civilizations face in harvesting truly new information.
The individual and collective human Shannon numbers provide essential numerical anchors that define the finite “space” in which novelty exists and evolves. They enable precise framing, measurement, and modeling of novelty depletion, underpinning the Discovery Plateau Hypothesis and helping humanity understand its current knowledge frontier and future limits.
Consider this challenge: In our hyperconnected age, how do we distinguish between:
Truly novel ideas (genuinely new combinations)
Recombinant innovation (new arrangements of existing elements)
Pseudo-novelty (superficial variations on known themes)
The answer lies in understanding information-theoretic limits.
Personal Novelty Measurement
We can estimate individual novelty generation by tracking:
Unique concept combinations in your thinking
Cross-domain knowledge bridging (connecting disparate fields)
Original problem-solving approaches not found in your inputs
Creative synthesis producing outcomes unpredictable from inputs alone
Collective Novelty: The Species-Level Picture
Humanity's collective Shannon number reveals even more striking patterns:
The Redundancy Problem
Despite our vast collective capacity, human knowledge shows enormous redundancy:
Most people learn the same basic concepts
Cultural and linguistic barriers create isolated knowledge silos
Historical knowledge gets lost or distorted over generations
Scientific breakthroughs often occur simultaneously in multiple locations
This suggests we're utilizing perhaps 10⁻²⁰ or less of our species' theoretical novelty-generation capacity.
Quantifying Collective Innovation
At the species level, we can measure novelty through:
Kuhnian paradigm shifts per decade
Cross-cultural knowledge synthesis rates
Emergent complexity in human systems and institutions
The Discovery Plateau Connection
The human Shannon number directly supports the Discovery Plateau Hypothesis. If our collective informational capacity is finite, however vast, then our ability to convert novelty into knowledge must eventually plateau.
We're already seeing evidence:
Individual Level
Expertise saturation: Experts in narrow fields report diminishing insights.
Creative blocks: Artists and innovators struggle with "everything's been done."
Information overload: More data doesn't equal more understanding, which raises the question: does understanding generate more novelty in individual thought?
Collective Level
Scientific convergence: Independent researchers reach similar conclusions.
Innovation clustering: Breakthrough discoveries happen in concentrated periods.
Diminishing returns: Exponentially increasing research costs per major discovery.
Putting Numbers to Novelty: Varney's Law
Now that we have Shannon numbers for human capacity, how do we actually measure novelty generation? Enter Varney's Law—a mathematical framework that models how discovery slows as systems approach their informational limits:
dK/dt = α × S(t) × e^(-βE(t)) × (1 - K(t)/K_max)
Here K_max is your personal Shannon number, and we can now measure:
Individual Parameters You Can Track
K(t) - Your Knowledge State:
Personal Knowledge Units: Discrete concepts and skills you've mastered
Cross-domain Connections: Novel links you've made between different fields
Synthesis Complexity: The number of disparate elements you can successfully combine
S(t) - Your Search Capacity:
Cognitive Bandwidth: Hours spent in deliberate learning
Attention Diversity: Number of domains you actively explore
Information Processing Rate: New concepts absorbed per unit time
E(t) - Your Exposure Level:
Cumulative Learning Hours: Total time invested in knowledge acquisition
Domain Saturation: How deeply you know your primary fields
Redundancy Score: Percentage of "new" inputs that repeat existing knowledge
The Critical β Factor: Why Learning Gets Harder
The β coefficient measures "exposure drag"—how your accumulated knowledge slows down novelty conversion. This varies by person and field:
High β individuals: Deep experts who filter everything through existing frameworks
Low β individuals: Cognitive shape-shifters who stay mentally flexible
Field-specific β: Mathematics (high) vs. Art (low) vs. Technology (medium)
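The dynamics of the equation above can be sketched with a simple Euler integration. All parameter values below (α, β, S, K_max) are illustrative placeholders, not calibrated estimates; the point is the qualitative shape: rapid early gains, then a plateau as K(t) approaches K_max and exposure drag compounds.

```python
import math

def varney_step(K, S, E, alpha, beta, K_max, dt=1.0):
    """One Euler step of dK/dt = α × S × e^(−βE) × (1 − K/K_max)."""
    dK = alpha * S * math.exp(-beta * E) * (1.0 - K / K_max)
    return K + dK * dt

# Illustrative run: constant search capacity S; exposure E grows one unit per step.
K, E = 0.0, 0.0
K_max = 1000.0
history = []
for t in range(200):
    K = varney_step(K, S=5.0, E=E, alpha=1.0, beta=0.01, K_max=K_max)
    E += 1.0
    history.append(K)

early_gain = history[19] - history[0]    # knowledge gained in the first 20 steps
late_gain = history[-1] - history[-20]   # knowledge gained in the last 20 steps
print(f"first 20 steps: +{early_gain:.1f}, last 20 steps: +{late_gain:.1f}")
```

Even with constant effort (S fixed), growth decelerates because both the exposure term e^(−βE) and the saturation term (1 − K/K_max) shrink over time.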
Civilization Monitoring: At the species level, the same law reads:

Species_Knowledge_Growth = α × Global_R&D × e^(−β × Cultural_Knowledge) × (1 − Human_Knowledge / Species_Shannon_Limit)
Signs of Plateauing
Diminishing Returns: More effort yields fewer insights.
Citation Recursion: Your new ideas increasingly reference old ones.
Confirmation Bias Creep: Filtering inputs through established patterns.
Cross-domain Blindness: Inability to see connections between fields.
The beauty of having Shannon numbers is that they give us concrete upper bounds.
Prototype Model: A Multi-Scale Test Applied Over Deep Time (300,000 BCE to 3000 CE)
The multi-scale DPH model I’m developing to measure where we stand on the Discovery Plateau integrates:
Historical population and resource dynamics
Hierarchical tiers of discovery magnitudes
Rarity exponents guiding discovery frequency
Per-person novelty quotas, efficiency, and biocapacity scaling
Varney’s Law saturation dynamics
The good news is that this model can be calibrated with empirical data wherever possible (neuroscience, demography, historical innovation rates), while reasonable assumptions can guide unmeasured parameters. But it’s not the whole picture just yet. I still have to score novelty by tier group, because not everything that can be learned carries the same novelty value.
Going back 300,000 years takes us to a time when the first anatomical humans appeared. The Kuhnian paradigm shifts that change humanity’s understanding of reality are not the same thing as mega events like discovering the controlled use of fire, developing a language, developing agriculture, creating a numerical system, and creating a writing system. Without these mega events, Kuhnian paradigm shifts would not be possible, and categories like this need to be weighted accordingly so we can correctly quantify novelty.
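The tier weighting described above can be sketched as a simple weighted score. The tier names and weights here are placeholders of my own choosing, purely to illustrate that a mega event should contribute far more novelty than a routine finding:

```python
# Hypothetical tier weights: mega events (fire, language, agriculture,
# numerals, writing) outweigh Kuhnian paradigm shifts, which in turn
# outweigh routine findings. The values are illustrative only.
TIER_WEIGHTS = {
    "mega_event": 1000.0,
    "paradigm_shift": 100.0,
    "routine_finding": 1.0,
}

def weighted_novelty(discoveries: dict[str, int]) -> float:
    """Sum of discovery counts, weighted by tier."""
    return sum(TIER_WEIGHTS[tier] * count for tier, count in discoveries.items())

score = weighted_novelty({"mega_event": 1, "paradigm_shift": 3, "routine_finding": 50})
print(score)  # → 1350.0
```

With this scheme, a single mega event outweighs three paradigm shifts and fifty routine findings combined, which is the kind of asymmetry the deep-time model needs to capture.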
To read the Discovery Plateau Hypothesis for free, click here.
If you like what you’ve read, consider buying the Discovery Plateau Hypothesis Amazon ebook as a way of leaving a tip. Or simply write a review.