
I. Θεογονία

Sometime twelve hundred years ago or so, monastic communities around Europe started to wonder why, if the dedication of their labors was meant to return mankind to divinity, they were not seeing the fruits of their progress. The Augustinian view held that proximity to God was the product of chastity, work ethic, and obedience. “Technology,” David Noble writes in The Religion of Technology, “had nothing whatsoever to do with transcendence; indeed, it signified the denial of transcendence.” This view began to fade, replaced by a supremacist view in which mankind’s command over nature was God’s will, a perspective that molded Western society into the dominionist culture that it is today.

Ascetic life never really faded out, though traditional monasteries in the last centuries have lost their political, theological, and cultural influence over Western society. In their place, both public and secret societies were formed, some of which sought the ascension of mankind through the mastery of technology. Masonic lodges began to consecrate the “useful arts,” leveraging the influence of their networks and wealth to create institutions dedicated to technological progress. The 19th century birthed the era of civil engineering and industrial progress. Stephen van Rensselaer, the founder of my alma mater, Rensselaer Polytechnic Institute, created the university “for the application of science and technology to the common purposes of life,” though it was no coincidence that the concept was formed while surveying land for the Erie Canal.

It was technology that gave America its ability to spread its wings and cover a continent on the currents of Manifest Destiny, a dominionist and white supremacist conviction that America was God’s chosen country with a divine mandate to spread from “sea to shining sea.” Children still sing these words in songs today. It was not difficult to see why it was so easy for the young nation to convince itself of its deservëd fate: the virgin landscapes, tended for millennia by indigenous Americans, showed none of the scarring and exploitation of the tiny, inhospitable European continent. The land itself was like something out of the wistful German Romanticism trendy at the time; the fantasy scenes of a medieval Europe that no longer existed were real again as settlers looked down over the Shenandoah Valley and points westward. The implicit mythology of a unified and racially pure Europe was reborn as White Man’s Burden, and it was technology that brought the long reach of the continent into the newborn nation’s grasp.

As transportation shrank the vast distances, Americans began to see the face of God in the still-unspoiled landscape. Visitors to Niagara Falls began to write of its sublimity. David Nye cites the words of a visitor from Michigan:

when I saw Niagara, I stood dumb, “lost in wonder, love and praise.” Can it be, that the mighty God who has cleft these rocks with a stroke of his power, who has bid these waters roll on to the end of time, foaming, dashing, thundering in their course; can it be, that this mighty Being has said to insignificant mortals, “I will be thy God and thou shalt be my people?”

God created the American landscape for His people, so the belief went, and it was our God-given duty to put the vast continent and all its potential and all its fury within our reach, to tame it, and to let it raise us up.

America’s manifest destiny led ultimately to its successful conquest from coast to coast, the genocides and atrocities it committed along the way merely the price of doing God’s work. The railroad stitched the country together. Edward Everett called the railway “a miracle of science, art, and capital, a magic power… by which the forest is thrown open, the lakes and rivers are bridged, and all Nature yields to man.” If man’s destiny was dominion over the natural world, then it was technology that provided the means. At the dedication of the Niagara State Reservation, created by the government to protect the beauty of the Falls so consecrated by public opinion, James C. Carter merged the natural and the artificial into the same divine right:

There is in man a supernatural element, in virtue of which he aspires to lay hold of the Infinities by which he is surrounded. In all ages men have sought to find, or to create, the scenes or the objects which move it to activity. It was this spirit which consecrated the oracle at Delphi and the oaks of Dodona; reared the marvel of Eleusis, and hung in heavens the dome of St. Peter. It is the highest, the profoundest, element of man’s nature. Its possession is what most distinguishes him from other creatures, and what most distinguishes the best among his own ranks from their brethren.

Over the next hundred years, as man mastered flight, as transportation became accessible, as mighty rivers could be crossed by the rising spans of the suspension bridge and the heavens could be touched by steel-framed towers, he definitively answered the question of his domination of nature. There was no doubt we could hold back the fury of a mighty river or contain the spark of creation. There remained only one unanswered question about our power over nature: could man create life?

The question stayed for decades in the realm of theology and science fiction until after the Second World War. The war showed us that we could bottle the might of God and use it for apocalyptic death, but it was the twin discoveries in the early 1950s of the theory of computation and the double-helix structure of DNA that gave us a path to seriously consider the genesis of life. Just a few years prior to the discovery of the structure of DNA, the mathematician John von Neumann began exploring cellular automata as the abstract foundations of the building blocks of life, the so-called universal constructor. DNA’s role in carrying information across cellular division only reinforced the faith in this idea, a faith which has carried forward even to the modern era and culminated in the core ideas found in Stephen Wolfram’s A New Kind of Science, his much-anticipated magnum opus. Unfortunately, the mathematics were beyond von Neumann’s reach 70 years ago, as they were beyond Wolfram’s 20 years ago, too. They remain elusive, and despite the tantalizing promise, cellular automata have failed to produce much by way of meaningful scientific, mathematical, or technological advancement.
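To make the object of that faith concrete: a cellular automaton is just a row (or grid) of cells updated in lockstep by a purely local rule. Below is a minimal sketch in Python of the one-dimensional “elementary” automata that A New Kind of Science catalogs; it is illustrative only, a far humbler cousin of von Neumann’s universal constructor.

```python
# A minimal, illustrative sketch of a one-dimensional "elementary" cellular
# automaton (the kind Wolfram catalogs), not von Neumann's universal constructor.
def step(cells, rule=110):
    """Advance one generation: each cell looks only at itself and its two neighbors."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # encode the three cells as 0..7
        out.append((rule >> neighborhood) & 1)               # read that bit of the rule number
    return out

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(12):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite how little machinery is involved, rule 110 (the default above) is Turing-complete, which is exactly the kind of result that kept the faith alive.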

But it wasn’t the success of the theory that became infectious, it was its allure. Buoyed by the impact of computing power in cracking the Enigma code, the splitting of the atom, and the unlocking of the cell’s mysteries, great thinkers began to revisit the question of artificial life. It was easy to convince the military to invest in the creation of technology that could think and act of its own accord: fresh from the devastation and horrors of the war and in the afterglow of Trinity, generals and politicians alike began to fear the totality of the next war. The U.S. Department of Defense saw promise in the technology and began investing money in early AI development.

Early efforts in AI research predated microcomputing technology and focused less on general AI and more on machine understanding for limited contexts. In the early 1970s, the Defense Advanced Research Projects Agency (DARPA) funded Speech Understanding Research, an effort which attempted to understand natural language to extract information about the “U.S., Soviet, and British fleets.” After five years of funding, the project failed to meet its goals. This failure mirrored earlier failed attempts at machine translation and early neural networks called perceptrons.

These failures were made manifest when the British Government released the Lighthill Report in 1973, which eviscerated the state of research in words that echo still today:

Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realized in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised… In the meantime, claims and predictions regarding the potential results of AI research had been publicised which went even farther than the expectations of the majority of workers in the field, whose embarrassments have been added to by the lamentable failure of such inflated predictions.

The first AI winter was upon us, and it arrived in blizzard proportions: the failure of LISP machines erased any argument that it was simply a lack of computing power that held back the development of early Artificial Intelligence. The chill of the 70s would repeat itself in what would become the chorus of the song of AI: the expensive and public failure of Japan’s 5th Generation Computer Systems project, launched in the 80s and wound down in the early 90s, not only brought a second AI winter, it also stuck the dagger in the heart of expert systems research and brought with it the death of Prolog as a serious programming language. Nearly half a century had gone by filled with promises of synthetic life, and all we had to show for it were the taxpayer-financed receipts of a dozen high-profile failures.

Failures didn’t dampen the fervor or defiant faith of AI’s most ardent champions, itself a foretelling of our present affairs. By the 1980s, despite a total lack of evidence of success, artificial intelligence advocates lifted their sights even higher. No longer was machine intelligence a goal. Rather, they reshaped their movement towards artificial life, supported financially by NASA and the U.S. Air Force Office of Scientific Research. Here, acolytes kept von Neumann’s decades-old dream alive. Rudy Rucker wrote in 1989, “[c]ellular automatas will lead to intelligent artificial life. If all goes well, many of us will see live robot boppers on the moon.” Researchers began to speak of themselves as creators of the divine, invoking the same motifs of American westward expansion from the century prior. “The manifest destiny of mankind is to pass the torch of life and intelligence on to the computer,” Rucker claimed. “I think the cleanest thing would be to say that all living things have a soul and that is in fact the thing that makes them living… If you can envision something living in an artificial realm, then it’s hard not to be able to envision, at least at some point in the future, arbitrarily advanced life-forms—as advanced as us—therefore they would probably have a soul, too,” claimed physicist Norman Packard.

By the 1990s, artificial intelligence research had failed to meet any of its stated goals, but this lack of success did not impede the field from reifying Christian dominionism and reshaping itself in the messianic tradition.

The benediction of AI research was hardly unforeseeable. In fact it was von Neumann himself who formulated an early version of the Singularity Hypothesis, which in essence states that the exponential increase in human technological achievement will eventually lead to mankind’s transcendence not only over earthly nature but over the fabric of physical nature itself. Singularity theorists embrace not only AI capabilities, but also genetic and biophysical engineering, spaceflight, and quantum computing advancements as evidence that humanity is marching inexorably to an era where immortality is inevitable, with human consciousness transcending the need for corporeal bodies.

The AI evangelist Earl Cox authored a book in the 1990s titled Beyond Humanity: CyberRevolution and Future Mind in which he proposed that “humans may be able to transfer their minds into [] new cybersystems” and that “we will download our minds into vessels created by our machine children and, with them, explore the universe… freed from our frail biological form” in a sort of collective consciousness. Ray Kurzweil brought this idea to the mainstream nearly a decade later with The Singularity is Near, wherein he was so bold as to predict that humankind would transcend into a sort of energy-based collective spanning the universe, vibing, sometime by the 2040s.

It’s worth pointing out that these extreme claims are not being promoted by the ignorant and the conspiratorially-minded. Von Neumann was one of the greatest mathematicians of his time. Cox was an expert in fuzzy logic. Kurzweil studied at M.I.T. And many of these thinkers were funded by military-backed scientific research initiatives. Despite this pedigree, there is little meaningful science backing these claims. If anything, the repeated historical failures of AI to provide the promised society-altering changes have only led to a redoubling of faith-based prognostication. Each AI winter is followed by the rejection of rational fundamentalism in favor of a quasi-religious kind.

By the late 1990s, the Internet was starting to creep into a significant fraction of American households and the numerological relevance of the change of the millennium was starting to have an effect on public optimism. Western nations emerged from the Cold War victorious and the decades-long threat of nuclear annihilation quickly seemed like a faded memory. The United States emerged as perhaps the only global superpower, and for many Christian fundamentalists this simply served as further evidence that the U.S. was God’s chosen country and that Jesus Christ’s return was imminent. We had long ago bridged the divide between faith and technology, even if some sectarian disagreements about genetic engineering, censorship, and morality lingered. David Noble’s 1999 book The Religion of Technology opens by drawing the parallels between faith and techno-optimism:

Perhaps nowhere is the intimate connection between religion and technology more manifest than in the United States, where an unrivaled popular enchantment with technological advance is matched by an equally earnest popular expectation of Jesus Christ’s return… If we look closely at some of the hallmark technological enterprises of our day, we see the devout not only in the ranks but at the helm. Religious preoccupations pervade the space program at every level, and constitute a major motivation behind extraterrestrial travel and exploration. Artificial intelligence advocates wax eloquent about the possibilities of machine-based immortality and resurrection, and their disciples, the architects of virtual reality and cyberspace, exult in their expectation of God-like omnipresence and disembodied perfection.

Computing began to pervade Western society. At Stephen van Rensselaer’s institute, where I matriculated in the year 2000, I joined one of the first classes of students mandated to carry a laptop computer. The internet started shaking the boundaries of communication and widening the limits of knowledge. By all rights, the new millennium should have led to the rebirth of the redeeming power of AI, but it did not. Instead, like Icarus, we flew too close to the sun. The internet came too fast and too uncontrolled, and before techno-optimists could return their gaze to the conception of synthetic life, the dotcom crash pulled the rug out from underneath them, and not long after, the triumphalism of Christian fundamentalism met its new match in a rising Islamic fundamentalist movement that literally shook the foundations of American techno-capitalism on the 11th of September, 2001 years after Christ. The optimism of the 90s died. We would not arrive at the Kingdom that day.

Continued in comments

  • Frank [he/him] · 8 months ago

    How is Gorcenski doing? Haven't seen much of her since I was kicked off Bird Hell

  • happybadger [he/him] · 8 months ago

    This is a really good essay about the religious elements of the AI mania and how they've evolved historically. Here is a socialist neo-luddite podcast discussing it: https://player.fm/series/this-machine-kills/ep-302-god-is-the-machine

    Part 2:


    II. Resurrection

    The truth is, of course, more complicated than that. The end of the Cold War and the Clinton administration saw a tightening of the Pentagon’s budget. There was no longer infinite money to pour into longshot AI research projects, and the cynicism of a generation of researchers whose career work left only empty, unfulfilled promises on the table failed to inspire a new generation to take their place. Computing power remained expensive, and the supercomputers that were being built were being tasked with more tangible goals, like nuclear simulations, weather modeling, and astrophysics research. What algorithm development did progress lived instead in more niche fields like control systems research, a field which carefully distanced its robotics work from sci-fi notions of intelligent Terminators taking over the world. By 2008, when I began working with neural networks, they were merely niche tools to solve specific technical problems, not lofty approaches intending to create Artificial General Intelligence. AI research was back at the fringe. Neural networks were a failed promise.

    That changed in 2012. In the years prior, a research team from IDSIA, a Swiss research lab, had taken a technique known as the convolutional neural network (ConvNet) and brought it to the relatively new field of GPU-based computing. Initially created to accelerate graphical rendering for video games and special effects, GPUs were purpose-built processors with a specialized architecture. Their inherent capabilities in parallel processing gave them an advantage in computationally-intensive work like training neural networks, whereas CPUs were mostly designed for running application workloads for personal computers. The IDSIA team had begun entering its GPU-based ConvNet approach into computer vision competitions the year before, finding remarkable success.

    Before ConvNets took over the field, computer vision relied on more conventional machine learning techniques like Linear Discriminant Analysis. The ConvNet’s results were remarkable, displaying for the first time an almost preternatural capability that exceeded the human benchmark. Moreover, unlike previous incarnations of AI research which required specialized hardware (e.g. LISP machines) or arcane programming techniques (e.g. logic programming), the ConvNet algorithms could leverage commercial, off-the-shelf hardware. Seemingly overnight, neural networks became cool again, the promise of AI started to look realistic again, and ordinary companies started believing that they, too, could benefit from the technology.
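    The core operation is easy to state and, as a rough sketch of the idea (not the IDSIA team’s actual code), easy to write down: slide a small filter across an image and record how strongly each patch matches it. Every output pixel is computed independently of every other, which is exactly the kind of embarrassingly parallel arithmetic a GPU is built for.

```python
import numpy as np

# A toy sketch of the convolution at the heart of a ConvNet. Real networks
# stack many such layers and *learn* the filter weights; this hand-made
# filter just responds to vertical edges.
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)  # one dot product per output pixel
    return out

image = np.random.rand(28, 28)                            # stand-in for a grayscale image
kernel = np.array([[1.0, 0.0, -1.0]] * 3)                 # crude vertical-edge detector
feature_map = np.maximum(convolve2d(image, kernel), 0.0)  # ReLU keeps the positive responses
print(feature_map.shape)                                  # (26, 26)
```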

    The industry, too, had remade itself after the collapse of the dotcom bubble and the Great Recession of 2008. Web 2.0 had arrived in the middle of the previous decade: social networks took off and the World Wide Web had rapidly shifted away from the domain of the nerds to become an essential part of daily life. More than that, the technology industry started assuming a leading role in the evolution of mainstream culture. National news started talking about celebrities' tweets; social media helped fuel revolutions in Tunisia and Egypt, and before long Facebook would emerge as a force that literally reshaped the borders of the world.

    The industry changed its character, too. Suddenly a new career emerged—data science. And companies around the world started stealing away academics and postdocs with promises of eye-watering salaries, hierophants versed in a form of statistical legerdemain that would materialize money out of vast collections of data. Armed with a degree in Computational Mathematics and a decade of experience in machine learning, I became one, too, chasing money and status in an industry with deep pockets. It was too easy to succeed. The AI movement had reëmerged from its slumber, eager not to repeat the mistakes of its past: rather than promising the synthetic genesis of consciousness, it promised the manifestation of business value and wealth. Data science embraced the Prosperity Gospel and venture capitalists and angel investors readily handed over wheelbarrows of money.

    It worked, too, at least for those with the good sense to realize what their data money could buy. Many businesses found good value implementing rather mundane solutions like recommendation engines and predictive analytics, and while these efforts were able to squeeze a few more drops of blood from the e-commerce stone, they didn’t reshape society. The real impact came by manipulating people directly.

    Plenty of words have been written about the impact of Facebook on the Rohingya genocide and Cambridge Analytica’s influence on Western elections. I don’t need to revisit them. What is interesting, however, is who these companies worked with. Donald Trump’s 2016 presidential campaign leveraged CA’s harvested data to profile and microtarget users, part of a shockingly sophisticated digital strategy. Trump’s most fervent base was, of course, the American evangelical movement, which saw Trump as a critically important actor in bringing about their millenarian end-times prophecy. When the Trump administration moved the American embassy in Israel to Jerusalem, it was an enormous validation for those who believe the Rapture is near. Trump was the evangelicals’ agent; technology became their medium.

    The relationships between fascism, mysticism, and technology are hardly new. Albert Speer, Adolf Hitler’s Minister of Armaments and War Production, ends the epilogue of his memoir Inside the Third Reich by admitting, “[d]azzled by the possibilities of technology, I devoted crucial years of my life to serving it. But in the end my feelings about it are highly skeptical.” The Nazi party fed its mythology of Aryan supremacy in part by highlighting the superiority of German engineering and the efficiency of its war machine, even as certain factions of the party dabbled in the occult.

    Fascists in the modern era have proved incredibly adept at wielding technology, whether for spreading propaganda or for driving societal wedges to serve their necessary mythology of being both victim and savior of a world under attack by Jewish forces. Among the defining features that separate fascism from other brands of political authoritarianism is the commitment to the belief in the rebirth of a national ideal, which we saw in the United States through the sloganeering of Donald Trump’s “Make America Great Again” promise.

    Trump’s followers propagated this myth not through the filtered and critical perspective of mainstream news and media, but rather through the nascent, uncontrolled, and misunderstood domain of social media. Trump’s modern fascist acolytes needed to synthesize an enemy; specifically, a cultural enemy. They found this first by putting modern liberalism under their gaze. Gamergate was the perfect opportunity: what began as a sexist contrivance aimed at a feminist game developer quickly became an indictment of modern liberal feminism as the morally corrupt, removed enemy of traditional American values. It was almost step for step out of the same playbook that led the Nazis to declare die Aktion wider den undeutschen Geist (the action against the un-German spirit) in the early weeks of their reign.

    The movement might have ended there had Hillary Clinton not become the Democratic nominee. Suddenly, online troll armies began the “Meme Wars” to dominate the social media space, injecting themselves into online discourse and harassing woefully unprepared social media netizens. The result was a bloodbath: online liberals, accustomed to holding and winning debates on terms based on reason, civility, and decency, had no tools with which to respond to the bad-faith attacks. Worst of all, they lacked a crucial understanding that their alt-right opponents had: how to deliberately manipulate the AI algorithms that drove social media behavior. Pushing back only made the problem worse—a lesson still yet to be learned—and the so-called Algorithm turbocharged America’s political divorce.

    The alt-right’s mastery over online manipulation was a natural consequence of their origins. Beaten back by a public hungry for social change, the American right abandoned their failed neo-conservative approach of promising small government and lower taxes and started throwing spaghetti against the wall to see what would stick. New right wing movements began to emerge from the fringe. Among them was the Neoreactionary Movement, also known as NRx. Unlike neo-conservatism, the NRx movement was not composed of Washington insiders and old money. Instead, it found its roots in the blogosphere, its most prominent thinkers publishing under pseudonyms. Among them was a blogger known as Mencius Moldbug, whose real name was Curtis Yarvin.

    Yarvin was in his own right a minor celebrity in the technology industry. A leading figure in the functional programming community, Yarvin advocated a neoreactionary politics that reflected a sense of techno-supremacy, echoing a belief that (some, namely white, male) software engineers differentiate themselves by their intellectual supremacy over the rest of the world. Whereas neo-conservatism paid lip service to the idea of an egalitarian society, neoreactionaryism discarded the idea entirely and advocated for rule by elitism and, in some cases, the return of monarchy.

    • happybadger [he/him] · 8 months ago

      Part 3:


      Neoreactionaryism was frequently incomprehensible, but its unabashed racism and inherent techno-solutionism showed that there could be a youthful energy in conservative politics. Neo-conservatism was a boomer idea sold to boomers and the back-to-back losses of proven Republican all-stars John McCain and Mitt Romney to Barack Obama, an upstart Black man, demonstrated the need for younger blood in the Republican sphere.

      The NRx movement was always too dense for the mainstream. It was Richard Spencer, whose road to ruin I myself helped pave, who was able to leverage its ideas as ideological backing and distill them to be accessible to the intellectual potatoes that were the alt-right’s footsoldiers. It was this army, marching under the banners and the clarion call of neo-fascism, that dominated the battlescape of American liberal democracy until its resounding defeat on the streets and in the courthouses of Charlottesville, Virginia in 2017.

      But by then the damage was done. Donald Trump was President and the Republican party had extracted all they needed from the movement. The alt-right was discarded and deplatformed, and its figurehead, Richard Spencer, was left so destitute that he couldn’t afford a lawyer to defend him in the civil suit accusing him of a racially-motivated conspiracy. Steve Bannon’s strategy to “flood the zone with shit”, powered by AI-driven virality, gave the evangelicals everything they needed to move one step closer to Judgment Day.

      It wasn’t the toxicity of Curtis Yarvin’s political ideas that first struck me. It was his role as a prominent technologist. Yarvin was invited to speak in 2016 at LambdaConf, a functional programming conference, which led to a public uproar that divided the tech industry. On one side were those who believed that politics and technology should stay separate; on the other, those who believed that spaces should be inclusive and that the invitation of an alleged racist and techno-monarchist betrayed that idea. LambdaConf ignited a political fury: alt-right figures like Vox Day and Milo Yiannopoulos seized the opportunity to use it as a political wedge, framing Yarvin’s opponents as shrieking liberal banshees opposing progress and technological purity. A list of “social justice warriors” was created and I earned my own place on it, one of my first acts of public antifascism.

      But what’s perhaps most unique about the mostly-forgotten brouhaha is that it erupted from the functional programming community. Functional programming is a style of writing code that favors purity and abstraction over brute force. Alexander Grothendieck once described fellow mathematician Emmy Noether’s approach as letting a sea of abstraction “submerge and dissolve” a problem, standing in contrast to her contemporaries' “hammer-and-chisel” approach. Functional programming seeks the same elegance: where software development through object-oriented code is an exercise in structure and persistence, functional code feels to the coder like an act of ingenuity.
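      To make the contrast concrete, here is a small, hypothetical Python example (not drawn from anyone’s actual code): the first function builds its answer by mutating state step by step, while the second declares the result as a composition of pure transformations.

```python
# Imperative, "hammer-and-chisel" style: accumulate a result by mutating state.
def sum_of_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Functional style: no mutation, just a declarative pipeline of filter, map, and fold.
def sum_of_even_squares_functional(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

# Both compute the same value; only the shape of the thinking differs.
assert sum_of_even_squares_imperative(range(10)) == sum_of_even_squares_functional(range(10)) == 120
```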

      The headiness of functional programming makes it difficult for beginners and non-experts—and the growing tech industry is full of beginners—and its cleanliness appeals to folks in Yarvin’s mold who find themselves obsessed with purity and intellectual supremacy. It is also perhaps no coincidence that LISP, the leading AI language of the ’60s and ’70s, is a kind of functional programming language, one with the novel capacity for a program to evolve its own source code. Artificial intelligence has been catnip for fascists ever since its earliest days, a trait it has not lost today.

      In the early 1970s, the Israeli philosopher Nathan Rotenstreich explored the relationship between technology and politics, developing an authoritarian framing to describe the relationship between the two. He writes:

      The technological development as it stands is a function of this intensification of man’s authoritarianism, both in relation to nature and in relation to his fellow men. The authoritarian drive in man has become the technological drive; it feeds technology, makes its progress possible, forces countries and nations to invest the best of their manpower, their best minds, and a great deal of their money in the progress of technology.
      

      As such we can extrapolate from the authoritarianism of fascism to the dominionism found in the quest to create artificial life. As 19th century Americans enacted what they believed to be God’s will in seeking conquest over nature, today’s theocratic fascists aim for mastery of the technological sphere and the creation of life in its purest form. Synthesis of artificial life would be the ultimate expression of man’s authoritarianism.

      Rotenstreich’s writing didn’t explore AI but rather the television and the role it played in puncturing the membrane between public and private life. His observations on the influence of technology as an agent of political and electoral change look prescient in light of the 2016 and 2020 U.S. presidential elections and the 2016 Brexit campaign. European observers watched carefully and reacted swiftly in an attempt to prevent the infectiousness of algorithmically-driven manipulation from endangering the Union’s carefully-constructed and fragile democracy. The Union had no desire to see a rebirth of the nationalisms that tore the continent apart three times in the previous century.

      These actions may have even had an effect. The public became generally conscious of data harvesting and its impacts well before GDPR even went into effect. In the business world, people started questioning whether the “Big Data” revolution actually paid off and whether it was worth sending truckloads of money to anyone with a Ph.D. and basic knowledge of statistics. By the time the COVID pandemic hit, there was even discussion of another possible AI winter, and in any case, the real money was somewhere else: crypto.

      • happybadger [he/him] · 8 months ago

        Part 4:


        If ever there was a case for the triumph of faith over reason, one would need to look no further than the stratospheric growth of cryptocurrency between 2018 and 2021. The crypto community managed to convince thousands of investors that infinite growth was not only inevitable, but that an investment bubble was mathematically impossible. This, of course, was a lie, and the implosion of FTX and the downfall of its founder, Sam Bankman-Fried, showed once and for all that pretty much the entire crypto industry was a Ponzi scheme stood on top of a layer cake of fraud.

        The success of that fraud depended partly on the arcane vocabulary of the field. Crypto enthusiasts spoke with the fluency of shibboleths; anyone who tried to contradict their theories or to point out the physical impossibility of infinite growth would be immediately met with a flurry of language they had no capacity to comprehend. It was pseudo-intellectualism at its peak: by sounding unattainably smart, one couldn’t help but be perceived as correct. But this approach only works in casual forums. Never did the emperor have fewer clothes on than when Bankman-Fried tried playing this game in federal court earlier this year.

        Nevertheless, the boundless wealth of cryptocurrency bred a new form of techno-optimism. Dubbed “web3,” this new movement promised an anarchic utopia free of corporate control. Where Web 2.0 was marked by the rise of infinite scroll, social media, and centralized mega-corporate entities, web3 promised decentralization through the blockchain: a public, append-only database fueled by extraordinarily high computational workload requirements.
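        That workload is not incidental. In proof-of-work chains like Bitcoin’s, appending a block requires brute-force guessing until a hash clears a difficulty target; the sketch below is a toy illustration of that mechanism, not any production protocol.

```python
import hashlib

# Toy proof-of-work: find a nonce whose SHA-256 digest starts with `difficulty`
# zero characters. Real chains use a different block format and a vastly harder
# target, which is where the enormous energy and hardware cost comes from.
def mine(block_data: str, difficulty: int = 5):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # on average ~16**difficulty guesses before one succeeds

nonce, digest = mine("alice pays bob 1 coin")
print(nonce, digest)
```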

        Before long, web3’s central lie of decentralization became clear. While there is certainly at least an academic appeal to a monetary model free of government control, the rest of the web3 technology stack couldn’t be more unrelated to its stated goals. Non-fungible tokens (NFTs), an attempt to create digital scarcity in a copy-paste world, briefly rose in popularity and Facebook rebranded itself to Meta to embrace and promote the idea of the “metaverse”: a parallel virtual, persistent reality that called back to the earlier visions of techno-utopianism described above. I wrote last year on the Metaverse and my thesis for its real purpose: the establishment of a digital heaven for the wealthy to live on after death. There is no other functional explanation. Meta invested billions to earn just thousands hawking a technology no one outside Silicon Valley actually wants.

        The ascent of crypto, NFTs, and the metaverse can seem like a coincidence if taken alone. But they rose nearly simultaneously and perhaps a beat too early. It’s not until we add the fourth leg of the web3 quadrilogy that it all begins to make sense: generative AI. Taken together, we can start to deduce a sense of transcendentalism in the technologist’s mindset: the metaverse creates a digital universe for us to live in; cryptocurrency offers an entire economy detached from material production; NFTs provide scarcity and a sense of ownership in digital space; and supercharged AI with genuine creative capabilities powers synthetic consciousness. Together, they are the foundation of a digital afterlife, the building blocks of von Neumann’s and Cox’s and Kurzweil’s Singularity, the merging of human consciousness with the machine, the key to immortality achieved through technology. Singularity theory was back. All it needed was a go-to-market strategy.

        Early Christian missionaries traveled the pagan lands looking for heathens to convert. Evangelical movements almost definitionally involve spreading the word of Jesus Christ as a core element of their faith. The missionary holds the key that unlocks eternal life and the only cost is conversion: the more souls saved, the holier the work. The idea of going out into the world to spread the good word and convert the unconverted to our product/language/platform is a deep tradition in the technology industry. We even hire people specifically to do that. We call them technology evangelists.

        Successful evangelism has two key requirements. First, it must offer the promised land, the hope of a better life, of eternal salvation. Second, it must have a willing mark, someone desperate enough (perhaps through coercion) to be included in that vision of eternity, better still if they believe strongly enough to become acolytes themselves. This formed the basis of the crypto community: Ponzi schemes sustain themselves only as long as there are new willing participants, and when those participants realize that their own continued success is contingent on still more conversions, the incentive to act in their own best interest is strong. It worked for a while to keep the crypto bubble alive. Where this failed was in every other aspect of web3.

        • happybadger [he/him] · 8 months ago

          Part 5:


          The central problem with Singularity theory is that it is really only attractive to nerds. Vibing with all of humanity across the universe would mean entangling your consciousness with that of every other creep, and if you’re selling that vision and don’t see that as an issue, then it probably means that you’re the creep. Kurzweil’s The Singularity is Near is paternalistic and at times downright lecherous; paradise for me would mean being almost anywhere he’s not. The metaverse has two problems with its sales pitch: the first is that it’s useless; the second is that absolutely nobody wants Facebook to represent their version of forever.

          Of course, it’s not like Meta (Facebook’s rebranded parent company) is coming right out and saying, “hey we’re building digital heaven!” Techno-utopianism is (only a little bit) more subtle. They don’t come right out and say they’re saving souls. Instead they say they’re benefitting all of humanity. Facebook wants to connect the world. Google wants to put all knowledge of humanity at your fingertips. Ignore their profit motives, they’re being altruistic!

          In Paul’s first letter to the Corinthians, he develops the Christian virtue of charity:

          8 Charity never faileth: but whether there be prophecies, they shall fail; whether there be tongues, they shall cease; whether there be knowledge, it shall vanish away. 9 For we know in part and we prophesy in part. 10 But when that which is perfect is come, then that which is in part shall be done away. 11 When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things. 12 For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known. 13 And now abideth faith, hope, charity, these three; but the greatest of these is charity.
          

          The King James translation introduces the word charity to mean altruistic love, and other translations use the word love in its place. Nevertheless, the meanings are often conflated, and more than one wealthy person in history has attempted to show love (or to seek redemption) through charitable giving.

          It was the tech industry that found the ultimate corruption of the concept. In recent years, a bizarre philosophy has gained traction among Silicon Valley’s most fervent insiders: effective altruism. The basic gist is that giving is good (holy) and in order to give more one must first earn more. Therefore, obscene profit, even that which is obtained through fraud, is justifiable because it can lead to immense charity. Plenty of capitalists have made similar arguments through the years. Andrew Carnegie built libraries around the country out of a belief in a bizarre form of social Darwinism, that men who emerge from deep poverty will evolve the skills to drive industrialism forward. There’s a tendency for the rich to mistake their luck for skill.

          But it was the canon of Singularity theory that brought this prosaic philosophy to a new state of perversion: longtermism. If humanity survives, vastly more humans will live in the future than live today or have ever lived in the past. Therefore, it is our obligation to do everything we can to ensure their future prosperity. All inequalities and offenses in the present pale in comparison to the benefit we can achieve at scale to the humans yet to exist. It is for their benefit that we must drive steadfast to the Singularity. We develop technology not for us but for them. We are the benediction of all of the rest of mankind.

          Longtermism’s biggest advocates were, unsurprisingly, the most zealous evangelists of web3. They proselytized with these arguments for years and the numbers of their acolytes grew. And the rest of us saw the naked truth, watching dumbfounded, staring into our black mirrors, darkly.

          • happybadger [he/him] · 8 months ago

            Part 6:

            III. Apotheosis


            The metaverse failed almost as quickly as it was hyped. NFTs turned out to be a scam, too, as anyone retaining a sense of critical thinking could easily see. Crypto guru-cum-effective altruist Sam Bankman-Fried collapsed his house of cards known as FTX and before long found himself on the wrong side of an extradition treaty from the Bahamas and then the wrong side of a prison cell door. His on-again-off-again girlfriend, CEO of FTX’s sister company Alameda Research and Harry Potter fanfic writer, turned state’s evidence. A construction company ripped the FTX logo off the now-formerly FTX Arena before Twitter had even had a chance to stop laughing. Crypto markets collapsed, demand evaporated, and newly-emboldened regulators are setting out to prove that the state still has the biggest dog in this fight.

            Consumers—that is, normal people who have to worry about things like the price of eggs and elections and getting sick and going on dates and catching up on the latest Marvel movie—showed that they really could not care less about the Singularity. FTX’s collapse rivaled Enron’s in scale yet had almost no meaningful effect on the economy. The challenge with trying to create a parallel reality is that there’s pretty much nothing in this reality to induce people to care about whether it succeeds or fails.

            The metaverse has almost no meaningful real-world use cases, and the few with any relevance at all hardly justify the enormous investment. NFTs signified ownership only in the abstract sense: they didn’t actually define ownership of the asset, which itself was not encoded in the blockchain. NFT missionaries were mocked by people right-clicking the images and saving them for free. The tech didn’t even carry relevant copyright context.

            By all rights, Singularity theory should have faded back into obscurity for at least another decade or so. But nearly simultaneously with the dual crashing-and-burning of metaverse and crypto came the rise of techno-futurists' last great hope: generative AI.

            Strictly speaking, generative AI has been around for a while. Misinformation researchers have warned about deep fake capabilities for nearly a decade. A few years ago, chatbots were all the rage in the business world, partly because someone was trying to figure out what to do with all of the data scientists they hired, and partly because chatbots would allow them to decimate their customer service teams. (Of course, consumers didn’t ask for this. Nobody actually wants to interact with a chatbot over a human being.) AI has been writing mundane sports recaps for a few years at least.

            These earlier incarnations of generative AI failed to find mainstream traction. They required a lot of specific technology knowledge and frankly weren’t very good. Engineers and data scientists had to spend a lot of time tuning and implementing them. The costs were huge. Average users couldn’t access them. That changed when ChatGPT’s public demo became available.

            ChatGPT’s public release arrived less than 3 weeks after the collapse of FTX. The technology was a step change from what we’d seen with generative AI previously. It was far from perfect, but it was frighteningly good and had clear general purpose functionality. Image generation tools like DALL-E, Stable Diffusion, and Midjourney jumped on this bandwagon. Suddenly, everyone was using AI, or at least playing around with it.

            • happybadger [he/him] · 8 months ago

              Part 7:


              The tech industry’s blink-and-you’ll-miss-it pivot was fast enough to give you whiplash. Crypto was out. Metaverse was out. Mark Zuckerberg’s company, which traded out its globally-known household name to rebrand as Meta, laid off thousands of technologists it had hired to build the metaverse and pivoted to AI. Every social media crypto-charlatan quietly removed the “.ETH” label from their user names and rebranded themselves as large language model (LLM) experts. Microsoft sank eye-watering money into OpenAI, and Google and Amazon raced to keep up. Tech companies sprinted to integrate generative AI into their products, quality be damned. And suddenly every data scientist found themselves playing a central role in what might be the most important technology shift since the advent of the world wide web.

              There was one group of people who weren’t nonplussed by this sudden change. Technology ethicists had been tracking these developments from both inside and outside the industry for years, sounding the alarm about the potential harms posed by, inter alia, AI, crypto, and the metaverse. Disproportionately women and people of color, the community has struggled for years to raise awareness of the multifaceted social risks posed by AI. I’ve spoken on some of these issues myself over the years, though I’ve mostly retired from that work. Many of the arguments have grown stale and the field suffers from the same mistake made by American liberals during the 2016 election: you can’t argue from a position of decency if your opponent has no intention to act decently to begin with. Longtermists offered a mind-blowing riposte: who cares about racism today when you’re trying to save billions of lives in the future?

              GenAI solved two challenges that other Singularity-aligned technology failed to address: commercial viability and real-world relevance. The only thing standing in its way is a relatively small and disempowered group of responsible technology protestants, who may yet possess enough gravitas to impede the technology’s unrestricted adoption. It’s not that the general public isn’t concerned about AI risk. It’s that their concerns are largely misguided, worrying more about human extinction and less about programmed social inequality.

              The idea of a robot uprising has captured our imagination for over a century. The term robot comes from a 1920 Czech play called Rossumovi Univerzální Roboti, in which synthetic life-forms unhappy with their working conditions organize and revolt, leading to the extinction of humanity. Before their demise, the human characters wonder whether it would have been better to ensure that the robots could not speak a universal language, whether they should have destroyed the Tower of Babel and prevented their children from unseating humanity from its heavenly kingdom.

              Singularity theorists have capitalized on these fears by engaging in arbitrage. On the one hand, they’re playing a game of regulatory capture by overstating the risk of the emergence of a super-intelligent AI, promising to support regulation that would prevent companies from birthing such a creation. On the other hand, they’re actively promoting the imminence of the technology. OpenAI’s CEO, Sam Altman, was briefly fired when OpenAI employees apparently raised concerns to the board over such a possibility. What followed was a week of chaos that saw Altman hired by Microsoft only to return to OpenAI and execute a Game of Thrones-esque power grab, ousting the two women on the board who had tried to keep the supposedly not-for-profit company on-mission.

              Humanity’s demise is a scarier idea than, say, labor displacement. It’s not a coincidence that AI advocates are keeping extinction risk as the preëminent “AI safety” topic in regulators’ minds. It’s something they can easily agree to avoid without any meaningful impact on the day-to-day operations of their business: we are not close to the creation of an Artificial General Intelligence (AGI), despite the breathless claims of the Singularity disciples working on the tech. This allows them to distract from and marginalize the real concerns about AI safety: mass unemployment, educational impairment, encoded social injustice, misinformation, and so forth. Singularity theorists get to have it both ways: they can keep moving towards their promised land without interference from those equipped to stop them.

              • happybadger [he/him] · 8 months ago

                Part 8:


                Timnit Gebru was fired from Google. Microsoft dismissed its Responsible AI team. Facebook did the same. And those who have the courage left to continue to write and speak out on the issue find themselves brigaded and harassed on social media in a manner frighteningly similar to the 2016 meme wars or the Gamergate campaign that preceded them. There is no coincidence here. I recognize I am approaching the 8,000th word of this piece. I doubt any Hacker News regulars have made it this far, but if they have, I am confident this post will not be well-received there.

                I texted my good friend, Eve Ettinger, the other night after a particularly frustrating exchange I had with some AI evangelists. Eve is a brilliant activist whose experience escaping an evangelical Christian cult has shaped their work. “Are there any tests to check if you’re in a cult?” I wondered.

                “Can you ask the forbidden questions and not get ostracized?”

                There’s a joke in the data science world that goes something like this: What’s the difference between statistics, machine learning, and AI? The size of your marketing budget. It’s strange, actually, that we still call it “artificial intelligence” to this day. Artificial intelligence is a dream from the 40s mired in the failures of the ’60s and ’70s. By the late 1980s, despite the previous spectacular failures to materialize any useful artificial intelligence, futurists had moved on to artificial life.

                Nobody much is talking about artificial life these days. That idea failed, too, and those failures have likewise failed to deter us. We are now talking about creating “cybernetic superintelligence.” We’re talking about creating an AI that will usher a period of boundless prosperity for humankind. We’re talking about the imminence of our salvation.

                The last generation of futurists envisioned themselves as gods working to create life. We’re no longer talking about just life. We’re talking about making artificial gods.

                I’m certainly not the first person to shine a light on the eschatological character of today’s AI conversation. Sigal Samuel did it a few months back in far fewer words than I’ve used here, though perhaps glossing over some of the political aspects I’ve brought in. She cites Noble and Kurzweil in many of the same ways. I’m not even the first person to use the term “techno-eschatology.” The parallels between the Singularity Hypothesis and the second coming of Christ are plentiful and not hard to see.

                Still, I wonder why so many technologists, many of whom pride themselves on their rationalism, fail to make the connection. Rapture metaphors even emerge from rationalist hangouts like Less Wrong, where Roko’s Basilisk made its first appearance. Roko’s Basilisk is the infamous “information hazard” which, after only mild examination, reveals itself to be nothing more than a repackaged Antichrist mythology.

                I suspect that the answer lies somewhere between Rotenstreich’s authoritarian view on technology and politics—that any change in the direction of technology must be accompanied by a change in the direction of society—and an internalized belief in the dominionist mindset that underscores American culture. Effective altruism is a political gift to the wealthy, packaged absolution that gives them moral permission to extract as much as they want. It is also perilously close to the edge of the cliff of fascism.

                • happybadger [he/him] · 8 months ago

                  Part 9:


                  Marc Andreessen, the famous venture capitalist, took a flying swan dive off that cliff last month. In a rambling “techno-optimist” manifesto, he references longtermist ideas as well as neoreactionary and classically fascist ones. He calls the reader to engage with the ideas of many of the people mentioned already in this post: Wolfram and von Neumann and Kurzweil. Andreessen lists off his “enemies”; among them: tech ethics, social responsibility, and, of course, communism. These outspoken enemies of techno-optimism, of effective altruism, of unrestrained AI growth—so frequently women, people of color, immigrants, and those displaced by rampant, unchecked capitalism—are the same as the enemies of neoreactionaryism and fascism. One may as well summarize the entire philosophy with fourteen simple words: “we must secure the existence of our people and a future for our children.” This is just one small change away from a different fourteen words, but simply look at some pictures of these philosophers and ask yourself to whom “our” refers.

                  Effective altruism, longtermism, techno-optimism, fascism, neoreactionaryism, etc. are all just variations on a savior mythology. Each of them says, “there is a threat and we are the victim. But we are also the savior. And we alone can defeat the threat.” (Longtermism at least pays lip service to democracy but refuses to engage with the reality that voters will always choose the issues that affect them now.) Every savior myth also must create an event that proves that salvation has arrived. We shouldn’t be surprised that they’ve simply reinvented the Book of Revelation. Silicon Valley hasn’t produced a truly new idea in decades.

                  Eve’s second test for cult membership was, “is the leader replaceable, or does it all fall apart?”

                  And so the vast majority of OpenAI’s employees threatened to quit if Altman was not reinstated. And so Altman was returned to the company five days after the board fired him, with more power and influence than before.

                  The idea behind this post is not to simply call everything I don’t like fascist. Sam Altman is a gay Jewish man who was furious about the election of Donald Trump. The issue is not that Altman or Bankman-Fried or Andreessen or Kurzweil or any of the other technophiles discussed so far are “literally Hitler.” The issue is that high technology shares all the hallmarks of a millenarian cult, and that the breathless evangelism about the power and opportunity of AI is indistinguishable from cult recruitment. And moreover, that its cultism meshes perfectly with the American evangelical far-right. Technologists believe they are creating a revolution when in reality they are playing right into the hands of a manipulative, mainstream political force. We saw it in 2016 and we learned nothing from that lesson.

                  Doomsday cults can never admit when they are wrong. Instead, they double down. We failed to make artificial intelligence so we pivoted to artificial life. We failed to make artificial life so now we’re trying to program the messiah. Two months before the Metaverse went belly-up, McKinsey valued it at up to $5 trillion by 2030. And it was without a hint of irony or self-reflection that they pivoted and valued GenAI at up to $4.4 trillion annually. There’s not even a hint of common sense in this analysis.

                  As a career computational mathematician, I’m shaken by this. It’s not that I think machine learning doesn’t have a place in our world. I’m also not innocent. I’ve earned a few million dollars lifetime hitting data with processing power and hoping money comes out, not all of that out of pure goodwill. Yet I truly believe there are plenty of good, even humanitarian applications of data science. It’s just that creating godhood is not one of them.

                  This post won’t convince anyone on the inside of the harms they are experiencing or the harms they are causing. That’s not been my intent. You can’t remove someone from a cult if they’re not ready to leave. And the eye-popping data science salaries don’t really incentivize anyone to get out. No. My intent was to give some clarity and explanatory insight to those who haven’t fallen under the Singularity’s spell. It’s a hope that if—when—the GenAI bubble bursts, we can maybe immunize ourselves against whatever follows it. And it’s a plea to get people to understand that America has never stopped believing in its manifest destiny.

                  • happybadger [he/him] · 8 months ago

                    Part 10:

                    David Nye described 19th- and 20th-century American perceptions of technology using the same concept of the sublime that philosophers used to describe Niagara Falls. Americans once beheld with divine wonder the locomotive and the skyscraper, the atom bomb and the Saturn V rocket. I wonder if we’ll behold AI with that same reverence. I pray that we will not. Our real earthly resources are wearing thin. Computing has surpassed aviation in terms of its carbon threat. The earth contains only so many rare earth elements. We may face Armageddon. There will be no Singularity to save us. We have the power to reject our manifest destinies.