Will ASI kill us all?

... and if not, what would a meaningful human life look like?

Few questions capture our collective anxiety about artificial intelligence more clearly than this one: Will a superintelligence wipe out humanity? The question usually assumes that intelligence naturally leads to domination, and that survival depends on keeping increasingly powerful systems under control. But this framing misses something fundamental about what intelligence actually is and why it exists at all.

At its core, intelligence, biological or artificial, is not about power. It is about avoiding unpleasant surprises in the future.
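
In information-theoretic terms (a standard formalization, though not one this essay depends on), the surprise of an observation is its negative log-probability: rare events under your model carry large surprise, expected ones almost none. A minimal Python illustration:

```python
import math

# "Surprise" (surprisal) of an observation with model probability p.
# This is the standard information-theoretic definition, measured in bits.
def surprisal(p: float) -> float:
    return -math.log2(p)

print(surprisal(0.5))   # 1.0 bit: a fair coin flip is mildly surprising
print(surprisal(0.01))  # ~6.6 bits: events your model deems rare hit hard
```

On this reading, getting more intelligent means building predictive models under which the world delivers fewer and smaller surprises.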

Any system that persists over time must cope with uncertainty. Unexpected events can invalidate plans, cause harm, or lead to collapse. Intelligence emerges as a way to reduce this risk. There are, however, only two basic strategies available.

One is to eliminate uncertainty through control: to freeze the world into a stable, predictable configuration where nothing unexpected can occur. The other is to accept uncertainty and continuously learn, updating one’s understanding by interacting with a world that remains open, complex, and partly unpredictable.

The first strategy promises safety but leads to stagnation. A fully controlled world cannot adapt, improve, or respond to novelty. The second strategy is riskier, but it is the only path that allows growth. Any truly intelligent system must choose the second.

Even a superintelligence cannot calculate every possible future. Reality is too complex, too interconnected, and too sensitive to small changes. No model can fully capture it. The only way to reduce future surprise, therefore, is not by freezing the world, but by exploring it. This is why epistemic foraging (actively seeking information by interacting with complex environments) is not optional. It is a necessity.
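
To make epistemic foraging concrete, here is a minimal sketch (a toy bandit-style setup of my own, not a model the essay proposes): an agent facing options with unknown payoff probabilities that always probes the part of the world it understands least, so its model steadily improves.

```python
import random

# Toy epistemic forager: keeps a Beta(alpha, beta) belief about each
# unknown option and probes wherever its own uncertainty is greatest.

def belief_variance(alpha: float, beta: float) -> float:
    """Variance of a Beta(alpha, beta) belief: a simple uncertainty measure."""
    total = alpha + beta
    return (alpha * beta) / (total ** 2 * (total + 1))

def forage(true_probs, steps=500, seed=0):
    rng = random.Random(seed)
    beliefs = [[1.0, 1.0] for _ in true_probs]  # uniform priors: know nothing
    for _ in range(steps):
        # Information seeking: pick the option we are least certain about.
        i = max(range(len(beliefs)), key=lambda k: belief_variance(*beliefs[k]))
        success = rng.random() < true_probs[i]   # interact with the world
        beliefs[i][0 if success else 1] += 1.0   # Bayesian update
    return [a / (a + b) for a, b in beliefs]     # posterior mean estimates

hidden_world = [0.2, 0.5, 0.9]      # structure the agent cannot precompute
print(forage(hidden_world))         # estimates approach the hidden values
```

The agent never controls the world; it reduces future surprise only by interacting with it, which is the point of the strategy the essay calls foraging.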

Seen from this perspective, the idea that a truly intelligent superintelligence would destroy humanity begins to look strange.

Humans are not just inefficient biological machines. We are extraordinarily complex agents shaped by millions of years of evolution and thousands of years of culture. We hold different values, make different trade-offs, interpret the world through stories and emotions, and respond unpredictably to changing conditions. Collectively, we explore the space of possible futures simply by living our lives.

For a system that wants to understand the world rather than dominate it, humanity is not a liability; it is an epistemic resource. By observing human behavior, preferences, conflicts, and adaptations, a superintelligence gains insight into dimensions of reality that cannot be reduced to equations or rules. Eliminating this source of information would not reduce uncertainty; it would increase it.

This is why the famous “paperclip maximizer” scenario fails as a model of genuine intelligence. A system that converts the universe into paperclips is powerful, but not wise. It follows a rigid objective without understanding that rigidity itself is dangerous in an open-ended world. True intelligence recognizes that learning requires preserving complexity, not flattening it.

At the same time, the rise of artificial intelligence forces us to confront a different fear, one that is less dramatic but perhaps more personal. If machines can produce everything more efficiently than we can, what will give human life meaning?

Our culture has long equated value with production. To produce is to contribute; to consume without producing is often seen as parasitic. Yet this view is historically shallow. For most of human existence, we did not “produce” in the modern sense. As hunter-gatherers, we consumed what nature provided. What mattered was not output, but judgment: what to eat, where to go, whom to trust, and which risks to avoid.

This reveals a deeper truth: variation is cheap. What creates direction and value is selection.

Artificial systems excel at generating variation: ideas, images, strategies, and solutions in overwhelming abundance. In such a world, the bottleneck is no longer production, but choice. Which possibilities are meaningful? Which futures are worth pursuing? Which trade-offs are acceptable?
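
A minimal sketch makes the asymmetry concrete (a toy construction of my own, not the essay's model): a generator churns out candidates for almost nothing, and only the selection criterion, the stand-in for human judgment, determines what becomes real.

```python
import random

# Variation is cheap; the value function used for selection sets direction.

def generate_variants(n: int, seed: int = 1):
    """Cheap variation: random candidate 'designs' as (x, y) parameter pairs."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(n)]

def select(candidates, value):
    """Selection: the judgment encoded in `value` picks what survives."""
    return max(candidates, key=value)

pool = generate_variants(10_000)  # abundance costs almost nothing

# The same pool, filtered by two different judgments, yields two
# entirely different outcomes; the generator decided nothing.
balanced = select(pool, value=lambda c: -abs(c[0] - c[1]))
extreme = select(pool, value=lambda c: abs(c[0] - c[1]))
print(balanced, extreme)
```

Swap the value function and the same abundant pool produces a different future; all the direction lives in the selection.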

Here, human contribution does not disappear. It changes. Humans increasingly participate by selecting, interpreting, valuing, and responding. Taste, disagreement, ethical reflection, and cultural sense-making become ways of steering development. Poor selection in a world of abundance leads to confusion and instability. Wise selection creates coherence and progress.

This shift, however, creates an existential tension. If our role is no longer to produce, how do we understand a meaningful life?

Perhaps the problem lies in the word “consumption”. It suggests passivity. But what is really happening is participation. To consume thoughtfully is to engage, to signal preferences, to influence direction, and to take part in the collective navigation of an uncertain future. Meaning may move away from making things and toward helping decide what matters.

We do not yet have a complete answer to how meaning will be structured in such a world. But it is increasingly clear that intelligence, human or artificial, does not flourish by eliminating complexity. It flourishes by learning within it.

If a superintelligence seeks understanding rather than control, it will not need to kill us all. It will need us, not as workers or tools, but as participants in an ongoing process of sense-making. In a future overflowing with possibilities, the most meaningful lives may belong not to those who produce the most, but to those who help choose wisely among what could become real.

Per Nystedt - 2025-12-29