From Darwin to Data
How AI Continues Humanity’s Greatest Reframing
Throughout history, humans have struggled with being decentered. There is a long pattern of resisting the idea that humanity is part of larger systems—ecosystems, planetary forces, technologies, even unconscious drives—preferring to imagine ourselves as exceptional, sovereign, and in control. But history is full of humbling events that disrupted that narrative.
The Copernican revolution dismantled the belief that Earth was the center of the universe. Evolution placed humans squarely within the animal kingdom. The discovery of the unconscious mind challenged the idea that people are fully rational or self-knowing. The climate crisis continues to expose the limits of control and the consequences of denial. Each of these moments offered disorientation—but also a chance to recalibrate.
Artificial intelligence joins this lineage of humbling forces. It poses a profound challenge to assumptions about cognition, creativity, and decision-making. But as with previous disruptions, this humbling offers not just a threat to human exceptionalism but an invitation to rethink what it means to be human.
Evolution: A Turning Point
The theory of evolution dismantled the idea that humans were specially created, set apart from the rest of life. Instead, it revealed that human beings are animals—biological entities shaped by the same forces as every other living thing. It was a seismic shift, threatening religious dogma and cultural narratives of superiority.
But over time, this humbling perspective enriched human understanding. It gave rise to new scientific breakthroughs, medical progress, ecological awareness, and even new forms of awe. Accepting that humans are part of a larger system expanded scientific and cultural understanding, and it opened new forms of thriving within ecological and medical systems.
AI and the Expansion of Cognitive Systems
Artificial intelligence presents a similar disruption. Machines now perform tasks once seen as distinctly human: composing music, writing code, diagnosing illness, translating language, generating images, and mimicking conversation. They are rapidly advancing into territory once considered the core of human identity—creativity, judgment, and thought itself.
This shift doesn’t only affect labor or productivity. It challenges the narrative that intelligence is uniquely or even dominantly human. The boundary between natural and artificial cognition grows less clear. What has been called thinking may be more replicable, transferable, and malleable than previously imagined.
But AI isn’t just another category of tool. It belongs to a growing set of technologies that can be called choicetech—systems designed not only to act, but to shape how people decide, sort, and evaluate. Choicetech includes dating apps, recommendation engines, smart assistants, search algorithms, data dashboards, and social feeds. These tools don’t merely perform tasks—they structure the decision-making environment itself. They influence what is seen, prioritized, and considered, often invisibly.
The Pressure of Too Much
When every interface offers hundreds of options—jobs, partners, meals, identities—it can feel like empowerment. But beneath that abundance lies something more complex: decision fatigue. The pressure to always make the right choice leads to stress, avoidance, and shallow judgment.
This is choice inflation in action: when environments multiply options faster than people can meaningfully process them. More choices do not always lead to more freedom. Often, they create overwhelm. Worse, they obscure how the architecture of those choices—the ranking systems, filters, defaults—nudges behavior and limits perspective. When your social feed defines what ‘success’ looks like before you can decide for yourself, that’s choicetech at work—not just suggesting options, but shaping your criteria before you’re even aware of it.
Human decisions are not just being supplemented by AI and other choicetech—they are being conditioned by their architectures. The internet, once seen as a triumph of human knowledge, has laid bare our cognitive limits, our vulnerability to misinformation, and the cracks in collective reasoning. It has revealed that access to information is not the same as wisdom or power. These realities humble the long-held ideal of progress through information alone.
This new landscape doesn’t just call for faster or smarter decision-making—it reveals the need to reassess what matters most. The pressure of too much isn’t only a cognitive challenge; it’s a philosophical one. Choicetech systems force a reckoning not only with human limitations but with deeper priorities: What is worth choosing? What is worth knowing? What is worth preserving? To meet these questions with intention, it may help to learn how to live well, and collaborate wisely, within a hybrid human-tech world: one where influence is shared, change is constant, and clarity depends less on control than on orientation within a larger system.
Other Ways of Seeing
This discomfort with being decentered is largely a Western legacy. Not all worldviews frame humanity as separate or superior. Many Indigenous cosmologies, such as those of the Anishinaabe or Māori, locate humans within a broader ecological and spiritual network. Taoism and Buddhism emphasize impermanence, interdependence, and the illusion of separateness.
These traditions offer a counterweight to the fear of losing centrality. They suggest that humility is not diminishment—it is maturity. From these perspectives, being one part of a greater whole isn’t a threat. It’s a source of wisdom. While no society fully escapes human-centric behavior, many of these cosmologies have retained frameworks of interdependence and relationality, offering alternative reference points even where cultural practice has been uneven. Even within Christian traditions, some theological currents—though less culturally dominant—stress humility and interconnectedness, highlighting the possibility of relational ethics within historically human-centered systems.
Humility as Strategy
Rather than resisting these humblings, this moment could be met with curiosity and imagination. The stories told now will shape the systems built—and the kinds of humans that emerge within them.
To navigate this moment wisely, it may help to:
Accept the Humbling: Recognize that being part of something greater does not diminish human worth. It situates humanity in context.
Celebrate Humanity: Emphasize traits that remain essential—empathy, adaptability, ethical discernment, and collective imagination.
Foster Collaboration: Use AI as a creative partner, not a competitor. Let technology expand human range, not replace human essence.
Promote Equity: Ensure that the development of AI and choicetech includes diverse voices, cultural perspectives, and global concerns.
Cultivate Curiosity: Instead of optimizing for efficiency alone, let this moment invite new questions, new ethics, and new forms of meaning.
A Connected Future
Just as the theory of evolution reshaped the understanding of life, artificial intelligence invites a rethinking of cognition, creativity, and choice. These are not threats to human relevance. They are invitations to grow beyond outdated myths of control and superiority.
In today’s data-driven world, meaningful decisions require updated strategies. The future doesn’t belong to the most optimized. It belongs to those willing to adapt, collaborate, and reimagine what it means to thrive in a world of systems—human and non-human, biological and digital, old and emerging.
We’ve never had so much power—and never needed humility more. It may be the only way to remain fully human in a world that’s changing fast.