Wednesday, November 30, 2022

The Precipice: Existential Risk and the Future of Humanity

The Precipice is Toby Ord’s apt name for the current precarious human condition. Most of the book details existential risks to our species – meaning extinction. He shows clearly that natural risks – anything from asteroids and comets to supervolcanic eruptions and stellar explosions – are not very high, based both on Earth history and on projections of the rate of risk across the aeons ahead.

Far more worrying are anthropogenic risks, including nuclear weapons, climate change, and overall environmental damage. In a chapter on future anthropogenic risks, he adds artificially induced pandemics and “unaligned artificial intelligence.” He agrees with Future of Humanity Institute director Nick Bostrom in ranking these existential risks, with AI the highest (at 10%, a worrisome figure), followed by engineered pandemics, with nuclear war and climate change well behind. The fifteen-year history of this blog has focused on climate change within the context of the global ecocrisis, including habitat loss, resource exhaustion, and what has been called the Sixth Extinction (Ord thinks this label is premature). It’s only fair that I present the larger picture, based on the risk probabilities coming out of these eminent Oxford institutions.

I learned of Toby Ord’s work reading the fine New Yorker piece on William MacAskill, the reluctant prophet of effective altruism. As a small-scale philanthropist, donating most of my federally mandated Required Minimum Distribution (RMD) to non-profits, I found that Will’s central argument – give to the organizations likely to have the most effective long-term influence for humanity (and the Earth) – made sense. And the fact that he was a reluctant leader made his pitch even more attractive.

The in-depth profile, which followed Will around for a week and later dropped in on various EA events, highlighted the key moment when Toby Ord attended a meeting in Durham (NC) and was sold on the idea. Ord is a moral philosopher at Oxford and co-founded the charitable arm of the EA organization with MacAskill. The organization shares a building at Oxford with the Future of Humanity Institute, which shares many of EA’s concerns (Ord works for both) and clearly informs Ord’s book. The Precipice: Existential Risk and the Future of Humanity is the culmination of his work on global poverty and his pledge to donate 10% of his income to help improve the world, underpinned by his training and teaching in ethics.

The biggest shock for me was reading the detailed history of AI and recognizing the speed at which its development is cascading. It feels very much like the development of nuclear bombs, in which the thrill of the chase is again outrunning prudence and safeguards (this is true of viral research as well). I have been hearing for years from a colleague in the Forge Guild, a trans-traditional group of spiritually oriented folks, warning us about AI. But his posts did not go into the kind of harrowing detail that Ord presents here. The real concern is not AI per se but the development of artificial general intelligence (AGI), in which a machine would have the full panoply of brainpower – invention, choice, and motivation – at a level far more advanced than our species. I will not detail the multiple scenarios in which things could go wrong enough to lead to our extinction, but clearly there needs to be better monitoring of what’s going on before it’s too late. And we are on the threshold of too late.

Ord also outlines various dystopian scenarios. Foremost among these would be a global fascist government armed with extreme tools of control. Three of the biggest democracies on the planet are edging towards fascism. India under the BJP and its Hindutva ideology has already crossed the boundary, though it is not clear to me that the change is irrevocable. A planetary disaster due to the loss of the Amazon rainforest carbon buffer has been averted (perhaps) by the election in Brazil, where the socialist Lula narrowly defeated Trump’s clone Bolsonaro. Fascism was a huge concern heading into the midterm elections in the US as well, but that crisis has been defused for now by the partial Democratic victory.

Ord’s position is that of a pragmatic humanist, a rationalist perspective relying heavily on the science of risk analysis. But I find it odd that a book so drenched in risk probabilities does not fully take account of the scenario of risk synchronization. He does speak of the “increasing risk of a cascading failure of ecosystem services” (118), but why doesn’t he perform the calculations there that he performs for the individual risk categories?
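To make the worry concrete, here is a minimal back-of-the-envelope sketch – emphatically not Ord’s own calculation – that combines his rough per-century estimates as if the risks were independent. The 10% AI figure is the one cited above; the other numbers are my recollection of Ord’s summary table and are illustrative only.

```python
# A naive combination of per-century existential risk estimates.
# Figures are illustrative: the 10% AI estimate is cited above; the
# others are as I recall Ord's summary table in The Precipice.
risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
}

# Assuming independence, survival means dodging every risk in turn.
survival = 1.0
for probability in risks.values():
    survival *= 1 - probability

combined = 1 - survival
print(f"Combined risk, independence assumed: {combined:.1%}")  # ~13.2%
```

Even this naive total is sobering, and a true cascade – one catastrophe raising the odds of the next – is exactly the kind of interaction this simple multiplication cannot capture.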

The ending chapter, “Our Potential,” fully demonstrates Ord’s breathtaking optimism about our future. In his scenario, if we can work through the current multifaceted crisis (the Precipice) to give ourselves some time for reflection (the “Long Reflection”) by slowing the pace of research and economic growth, that potential is unlimited. Here he echoes the book’s first sentence: if all goes well, human history is just beginning. The promise of “heights of flourishing unimaginable today” is outlined in an extraordinary fable of colonizing the cosmos, adding “trillions of years to explore billions of worlds.” Not only trillions of years, but 80 trillion human beings (MacAskill’s number), based upon the average lifespan of a mammalian species. I gasped at these mind-boggling figures. Ord is morphing from a philosopher doing risk analysis into a writer of science fiction.
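For the curious, the 80 trillion figure is easy to reconstruct. What follows is my own back-of-the-envelope reading of it, not MacAskill’s published workings; every input is a rough assumption.

```python
# Reconstructing the "80 trillion future people" order of magnitude.
# All inputs are rough assumptions of mine, for illustration only.
population = 8e9                          # people alive at any one time
lifespan = 80                             # average years per life
births_per_year = population / lifespan   # roughly 100 million per year

# A typical mammalian species lasts ~1 million years; ours is ~300,000
# years old, leaving ~700,000 years on the biological clock.
years_remaining = 1_000_000 - 300_000

future_people = births_per_year * years_remaining
print(f"{future_people:,.0f}")            # 70,000,000,000,000
```

Nudge any input slightly and you land on MacAskill’s 80 trillion; the point is the order of magnitude.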

Despite the catastrophes of two world wars, multiple instances of genocide, two global pandemics, and the climate chaos of the present century, he doubles down on Condorcet’s Enlightenment-era optimism. For Ord, man is indeed the measure of all things, not only on Earth but in the Universe itself. He makes it clear that humanity alone creates value, and thus that we should colonize the universe, giving it value. He could be a spokesperson for Elon Musk.

Ord agrees with many anthropologists that we are a young species. But his heady optimism assumes that we will go from adolescence to mature wisdom in a generation or two. It would be the greatest evolutionary leap in Earth history. When asked how long it would take our species to grow up, Pulitzer Prize-winning poet Gary Snyder laughed and said “10,000 years.” Dark Mountain founder Paul Kingsnorth and other dark ecologists would agree. That is, if we survive the Precipice.

But Ord presents a strong argument against those who would lead us back to traditional practices (Gandhi and his spinning wheel, Kingsnorth with his scythe for every occasion) and minimal-tech solutions: “…forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential.” The problem is that we have to be pretty much perfect in our choices going forward, so rapid is the pace of technological advance, and so severe the consequences if we make a key mistake in any of the several areas of risk Ord outlines so thoroughly. And our track record is not good.

Ord makes gestures at various junctures of The Precipice indicating he respects the fact that we are part of an entire life web, but his anthropocentric bias overwhelms these statements. And his thoroughgoing rationalist, pragmatic humanism ignores the immanence of the divine in the tiniest corner and widest reaches of the cosmos. Though his Oxford institutions are doing invaluable work in risk analysis, reminding us that we are clearly at a precipice, a far more promising approach is my mentor Sunderlal Bahuguna’s insistence on the union of advaita (non-dual, the “Buddhist” end of the vast Hindu theological terrain) and science as a pathway forward for imperiled humanity and our exquisite earthly home. See my post on his death.


