Fromm believed that science and religious faith are responses to the same problem: the anxiety stemming from the modern condition of doubt.

It is this promise of certainty that makes these algorithms so appealing—and also what makes them a threat to the edifice of liberal humanism.

We already defer to machine wisdom to recommend books and restaurants and potential dates; as the programs become more powerful, and the available data on us more vast, we may allow them to guide us in whom to marry, what career to pursue, whom to vote for. “Whereas humanism commanded: ‘Listen to your feelings!’” Harari argues, “Dataism now commands: ‘Listen to the algorithms! They know how you feel.’”

For Joy, the matter is deeply personal: “I may be working to create tools which will enable the construction of the technology that may replace our species,” he realizes. “How do I feel about this? Very uncomfortable.” Joy worries not only about the potential for authoritarianism, but about the existential threats these machines pose. Machine intelligence, as he sees it, is a unique risk—more dangerous even than nuclear warfare, since it can, theoretically, develop the ability to self-replicate, at which point humans will lose control. Joy concludes his essay by arguing that the risks of artificial intelligence outweigh its potential usefulness and that its pursuit should be abandoned; humans need to find some other outlet for their creative energies. His conclusion was radical at the time, but seems even more so today—essentially, Joy proposes that humans limit their pursuit of knowledge.

Since Joy issued his warning, nearly twenty years have passed. The risks he feared have not been addressed; in fact, the stakes have only been raised. If so few in the industry share his concern, it’s perhaps because they see themselves, like Dostoevsky’s Grand Inquisitor and Kaczynski’s “good shepherds,” as custodians of a spiritual mission. The same Kaczynski passage that Joy confronted is quoted at length in Ray Kurzweil’s book The Age of Spiritual Machines. Kurzweil, who is now a director of engineering at Google, claims he agrees with much of Kaczynski’s manifesto but that he parts ways with him in one crucial area. While Kaczynski concluded that these technologies were not worth the gamble, Kurzweil believes they are: “Although the risks are quite real, my fundamental belief is that the potential gains are worth the risk.” But here, Kurzweil is forced to admit that his belief is just that—a belief. Just as Ivan Karamazov is forced to concede that his rationalism rests on an act of faith, Kurzweil admits that his conviction is “not a position I can easily demonstrate.”

There is a robust school of theologians who have conceived of a God with attenuated powers. They understood that a truly humanistic faith demands a deity with such limits. This doctrine requires that humans relinquish their need for certainty—and for an intelligence that can provide definitive answers—and instead accept life as an irreducible mystery. If science persists in its quest for superintelligence, it might learn much from this tradition.