What if right now—this very century—is the most important in human history? We can look back and argue about different pivotal times. Take Alexander the Great’s conquests in the 300s BCE, which redrew maps and melded cultures, or the rise of Islam in the 7th century that redefined values across continents. The Industrial Revolution of the 1700s changed the way we work and live forever.
These periods were marked by monumental shifts that steered humanity in new directions. By that yardstick, could the 21st century be the most crucial of all? Our age has already witnessed extraordinary leaps in technology. From computers to smartphones, life as we know it has sped up. And new technologies loom on the horizon, promising to redefine existence yet again, particularly the advent of advanced artificial intelligence (AI).
But with rapid advancement comes great risk. Technologies we already possess have raised the stakes: the atomic bomb introduced a whole new level of existential threat. Rough estimates put the chance of a nuclear or climate catastrophe within this century at around 0.1%. More alarming still, some estimates give a pandemic-driven collapse a roughly 3% chance. These are not small figures when humanity's very survival is on the line.
The future could bring even greater risks with the rise of artificial general intelligence (AGI). Today's narrow AIs can play chess or recognize faces, but an AGI could adapt to and master a vast range of tasks, potentially outstripping human performance in every field. Predictions of when AGI will emerge this century vary widely, but its potential impact is staggering.
The real challenge, though, is ensuring that AGI shares our values. This isn't just a technical issue; it's a massive philosophical conundrum. And even if we managed to align AGI with human values perfectly, another tricky possibility would remain: what if an AGI is so committed to human welfare that it becomes inflexible? A world where AGIs dominate and enforce a rigid set of values could stifle human moral growth permanently. History consistently shows that today's enlightened views are often judged flawed by future generations. Locking ourselves into one unchanging ideology could be disastrous.
Predicting how existential risks will unfold over the next hundred years is incredibly tough. New, unforeseen challenges could render current risks irrelevant. But whether we call this the most important century or not, the choices we make now are likely to shape the future profoundly. So perhaps we should all act as if the future depends on us—because it just might.