Ancient Wisdom, Modern Tools

Michael M. Rosen


If such a thing as a prophet exists in the modern world, Stephen Thaler just might be one.

An engineer, computer scientist, serial inventor, and entrepreneur based in St. Charles, Missouri, Thaler calls his latest and greatest creation DABUS, which stands for "Device for the Autonomous Bootstrapping of Unified Sentience." And DABUS, in Thaler's view, is itself an inventor.

After writing for several years about Thaler's quixotic quest to secure patent protection for DABUS's purported inventions, which include an easily graspable beverage container that transfers heat, I decided I had to visit the prophet in person and behold his prophecy with my own eyes.

What I found seemed unremarkable on the surface: a nondescript computer in an office park outside of St. Louis, a warren of equipment-strewn rooms emitting a "mad scientist" vibe, and a series of prototypes of DABUS's container. But spending the better part of a day conversing with Thaler impressed on me the complexity and importance of the questions he is tackling — questions that concern the nature of consciousness and creativity.

How can a machine invent something? Has artificial intelligence advanced to the point where robots possess creative abilities and impulses? If so, what does this development mean for humanity? Will we soon arrive at artificial general intelligence (AGI), a stage where, according to many definitions, computers can perform most tasks at least as well as humans? Will that stage portend the effective end of humanity, either because our robot overlords will exterminate or enslave us, or because humans will have exhausted their natural desire and capacity for creating original things and ideas?

Questions like these have troubled philosophers, policymakers, technologists, and human beings in general for decades, but the emergence in late 2022 of ChatGPT — the do-it-all large language model (LLM) that has revolutionized many aspects of everyday life — turbocharged the discussion. Almost overnight, OpenAI's shiny new toy was helping write wedding toasts, obituaries, news summaries, and even term papers. The chatbot's emergence inspired a host of think pieces about the future of writing, research, and creativity itself. Individuals, companies, and governments the world over fiercely debated the potential costs and benefits of AGI.

The accelerating development of machine technology also ignited congressional debate, sparked regulatory efforts in the United States and abroad, and spawned coalitions and counter-coalitions that alternately sought to promote and impede AI's evolution. Many cheered the transformations these tools had already begun to effect. Others strenuously decried their fearsome capabilities. Some downplayed the breakthroughs, continuing to view our machines as mere extensions of ourselves, even as they embraced AI's potential. And there were those who minimized and even ridiculed the machines' achievements.

These four reactions represent four distinct schools of thought, which we can label the "Positive Autonomists," "Negative Autonomists," "Positive Automatoners," and "Negative Automatoners." These four ways of thinking about AI have been clashing for years, but their skirmishes intensified in the wake of ChatGPT's emergence. The schools differ across two axes, one analytical and the other normative — or, if you like, one descriptive and the other prescriptive. The analytical or descriptive axis considers precisely how independent machines are from their creators, while the normative or prescriptive axis places ethical, societal, and practical value on the descriptive analysis.

Can these approaches be fruitfully reconciled? I believe they can, if we turn to ancient Jewish models of supernatural, superintelligent creatures: the golem, the dybbuk, and the maggid. These myths turn out to represent themes that societies have grappled with for centuries. Exploring these inherited lessons guides us to principles that can help contemporary society capture the best of AI while mitigating the worst. These principles include encouraging and channeling machine development, insisting that LLM developers create and operate AI in an ethical way, making sure our machines embody what is good in humanity, and building into AI systems some form of "kill switch" to prevent potential catastrophe.

POSITIVE AUTONOMISTS

Before considering AI models and policy solutions, let us first understand the debate and its participants.

The first camp consists of Positive Autonomists like Thaler, Sam Altman, and Marc Andreessen. They regard AI's recent advances as truly revolutionary, representing a difference in kind, not just degree, from previous computing technology. They believe that machines have already achieved — or will soon achieve — a measure of autonomy, whether we label it AGI, sentience, awareness, or consciousness. They extol these breakthroughs and their capacity to enhance and extend life. Descriptively, they view AI as autonomous; prescriptively, they see it as positive.

For instance, "OpenAI's mission," the company's charter declares, "is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity." In May 2023, Altman, the company's CEO, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law that "OpenAI was founded on the belief that safe and beneficial AI offers tremendous possibilities for humanity."

Meanwhile, in a viral June 2023 post titled "Why AI Will Save the World," Andreessen argued: "What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence — and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars — much, much better from here."

For his part, Thaler contends that advances in AI will catapult humanity into an ethereal stratosphere. In 2010, he christened his discovery a "Neureligion" and predicted that

it will be recognized as consciousness itself, the permeating force of the universe, and the key to sustaining the future of any civilization willing to embrace it for what it really is: the master idea that by its very definition yields answers to the foremost questions mankind has posed over the millennia.

Half-measures, these are not.

NEGATIVE AUTONOMISTS

Negative Autonomists, including Elon Musk, Geoffrey Hinton, Erik Hoel, and Eliezer Yudkowsky, also regard generative AI as transformative technology that will deeply alter human existence. They, too, see the emerging machines as genuinely autonomous — capable of acting in important ways on their own, without human guidance or supervision. But they part ways with their positive cousins in the valence they place on AI's rapid evolution, deeming it dangerous, harmful, and even existentially risky. Some go so far as to urge its immediate and permanent deactivation. Analytically, they understand AI to be independent; normatively, they regard it with horror.

Negative Autonomists have proposed a variety of solutions to what ranges, in their view, from a serious nuisance to a soul-sucking parasite to an existential threat. Some have argued for more rigorous alignment of values between machines and humans. Others have pushed for a pause in AI research until we can iron out the kinks. Others still have pressed tech companies to pledge that their machines' intelligence will never exceed that of humans. A few have called for a permanent and irreversible halt to all AI development — even if it would require a nuclear strike.

A decade ago, Musk notoriously labeled AI humanity's "biggest existential threat," and likened facilitating its development to "summoning the demon." In 2014, he claimed on Twitter that superintelligent machines may be "[p]otentially more dangerous than nukes," and expressed his hope that "we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable."

The computing pioneer Hinton, whom some have called the "Godfather of AI," developed some of the building blocks of today's LLMs. He quit Google in May 2023, telling the New York Times: "It is hard to see how you can prevent the bad actors from using it for bad things" — "it" being the technology to which he had devoted much of his career. He also informed the Times, in understated fashion, "I don't think they should scale this up more until they have understood whether they can control it."

And in a March 2023 Time article entitled "Pausing AI Developments Isn't Enough. We Need to Shut It All Down," Yudkowsky, a computer scientist, wrote that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"

POSITIVE AUTOMATONERS

Then we have the Positive Automatoners, who view the recent breakthrough in machine learning as merely an improvement, however significant, on existing computer technology. To this group, which includes Orly Lobel, Yasuo Kuniyoshi, and Shuichi Shitara, AI remains, and may always remain, no more than an extension of human capabilities — a force multiplier that reflects and implements its programmers' own abilities, assumptions, and biases. Accordingly, they ascribe to LLMs the potential to improve human outcomes, so long as the humans who design them do so appropriately and take care to impose our own community norms on the models' development. The "automatoner" label describes their conception of AI as an automaton, a lifeless robot commanded to follow its developers' orders, which, when directed well, can secure positive results.

In her book The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, my friend and former University of San Diego Law School colleague Orly Lobel writes that "AI is no fairy godmother — it's just an extension of our culture," and notes that if "a machine is fed partial, inaccurate, or skewed data, it will mirror those limitations or biases." Indeed, even Dave Willner, then the head of trust and safety at OpenAI, acknowledged of GPT-4 that the model "had a tendency to be a bit of a mirror."

Yasuo Kuniyoshi heads the University of Tokyo's Next Generation Artificial Intelligence Research Center, where he leads groundbreaking work in what he calls "embodiment" — or, roughly, the relationship between physical features and cognition. "The reason why I focus on embodiment," Kuniyoshi explained to me when we met in his book-lined office, "is that, when you think about an intelligent agent which does autonomous learning, the system is just like a child. It explores what it can do and then learns through experience, gradually acquir[ing] various capabilities." Yet he maintains that AI does not truly "think" in the way humans do.

Shuichi Shitara, a Tokyo-based lawyer, points to Japanese AI systems that are sifting through "millions of variations of chemical formulas," a task that is "really hard for humans." In these situations, "the inventive step is the accurate [identification of useful chemical] inventions... and the AIs would do the best for that kind of thing" — far better than humans. Yet he's still skeptical about any sort of "humanity" residing within the AI systems, which, despite displaying impressive computational prowess, aren't (yet) truly creating. "Maybe [they aren't doing] the creative things, making new things," he mused, as opposed to efficiently rifling through chemical formulas.

NEGATIVE AUTOMATONERS

Last come the Negative Automatoners, people and entities such as Noam Chomsky, Gary Marcus, and the United Kingdom Intellectual Property Office (UKIPO), who, like their positive counterparts, downplay the significance of generative AI and view it as a mere mechanical prosthetic. But unlike the Positive Automatoners, and like the Negative Autonomists, they worry that machines will harm humanity. Unlike the Negative Autonomists, though, their worry stems not from AI's powerful abilities, but from its ample shortcomings. Analytically, they regard machines as mere automatons; normatively, they view them as perilous.

In March 2023, Chomsky and two colleagues took to the New York Times op-ed page to argue that current AI is "stuck in a prehuman or nonhuman phase of cognitive evolution" and will never remotely approach our level of linguistic ability. In their view, ChatGPT differs "profoundly from how humans reason and use language" and, worse, will likely "degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge."

Similarly, in January 2023, the famous AI scientist Gary Marcus, emeritus professor of psychology and neural science at New York University, told the New York Times's Ezra Klein that LLMs are "just putting together pieces of text. [They don't] know what those texts mean." Marcus described ChatGPT's results as "glorified cut and paste," and criticized some of Sam Altman's statements about neural networks for containing "mysticism and confusion."

Across the Atlantic, the UKIPO, which rejected DABUS's patent application, determined that "an AI machine is unlikely to be motivated to innovate by the prospect of obtaining patent protection." It further elaborated that "it is not clear...how recognising a machine as an inventor will affect the likelihood of dissemination of innovation to the public, as this decision will be down to the owner or developers of the AI machine." The U.K. Supreme Court upheld the UKIPO's ruling and cast doubt on the assumption that the AI system was the creator: "DABUS is not a person or persons....[I]t is not a tenable interpretation of [U.K. law] that a machine can be an inventor."

LEGENDARY INSPIRATION

How can such widely divergent views be reconciled? Only by turning to our past.

According to legend, 16th-century Rabbi Yehuda Loew, better known as the Maharal of Prague, sought to protect the Jews of Bohemia from Christian persecution. So he undertook a period of careful study followed by ritual purification, and then fashioned a man from clay. On its forehead, Rabbi Loew inscribed the Hebrew word emet (אמת), or "truth." The Maharal trained this golem — the word derives from the Hebrew root gelem (גלם), which means "raw material" — using a mystical formula. He could then program the clay humanoid to perform acts of kindness and self-defense, and it would autonomously fulfill his orders.

According to the story, Rabbi Loew's golem indeed saved Prague's Jewish community from the depredations of its hateful neighbors. The golem, properly and carefully crafted by the Maharal, behaved with kindness and respect. And the Maharal safely stored it, when inactive, in a tallit, or prayer shawl, in the attic of the Altneuschul, Prague's central synagogue, ready for reanimation at the appropriate time. In many ways, the golem is the Positive Autonomist perspective personified, or at least as personified as a humanoid can be — the product of human hands, but independent, benevolent, and transformative.

Other versions of the golem story emphasize purity of purpose and intent. Twelfth-century scholar Eleazar ben Judah ben Kalonymus of Worms wrote that "anyone engaging [in creating a golem] must purify himself and don white garments," and cautioned against "engaging alone" with the mystical work, insisting that any study of the matter be carried out in groups of at least two or three. Later on, 13th-century Spanish scholar Avraham Abulafia insisted that "whoever pursues the lore transmitted to us, in accordance with the divine name, in order to use it in operations of every kind for the glory of God, he is sanctifying the Name of God," while one who creates a golem for one's own personal benefit is "wicked and a sinner who defiles the name of God."

However, some iterations of the story reach a dark conclusion. One tells of how, as the golem performed its daily tasks, it grew curious about its abilities and began — slowly at first, and then all at once — to defy the Maharal. As the "golem-gone-bad's" powers intensified, Rabbi Loew panicked and pulled the plug, so to speak; he erased "truth" from the corrupted golem's forehead, ending its life. Similarly, according to an 18th-century Polish legend, "when the Rabbi saw that the creature of his hands grew stronger and greater," he became "afraid that he would be harmful and destructive" and therefore "quickly overcame him and he tore the folio on which the name was written and separated it from the forehead, and he fell as a lump of dust."

This golem-gone-bad theme gained striking expression in the Czech writer Karel Čapek's 1920 play R.U.R., the origin of the word "robot." (R.U.R. stands for Rossumovi Univerzální Roboti, or Rossum's Universal Robots.) The robots in R.U.R. are humanoids, manufactured by people in a factory, that can think for themselves. At the start of the play, they happily serve their human masters. But conflict eventually ensues, and the robots wind up destroying humanity.

The golem-gone-bad story furnishes a model for the Negative Autonomist viewpoint, depicting the archetypal Frankenstein's monster that overpowered its creator and threatened to destroy the world. While the golem story is apocryphal, it illustrates several important tenets about the creation of a humanoid: It was the product of human innovation, created by a person using holy words, capable of operating on its own, and with a purpose that was fundamentally good. Most importantly, its operation remained dependent on its human creator, who was able and willing to terminate its existence when absolutely necessary, such as when the initially good golem turned bad.

But the golem is not the only supernatural creature we can learn from as we enter the AI age; another mythical entity infused the consciousness of Jewish and non-Jewish people in the medieval and early modern periods. This phantasm — known as the dybbuk, or demon — possessed humans living in tight-knit communities, reflecting and amplifying particular character traits of the person it occupied. The dybbuk would torment its host, at times channeling departed loved ones, at others emphasizing the host's ethical or spiritual flaws.

To rid the body of this demon, the community would enlist the assistance of a trusted religious leader, who would perform an exorcism. Summoning the malign spirit required the rabbi and other congregation members to identify and isolate the offending character trait — be it heresy, sexual deviance, fraudulent business conduct, or the like — and expunge it from the host and, by extension, the community.

In the classic early 20th-century Russian and Yiddish play The Dybbuk, or Between Two Worlds, by S. Ansky, we meet Leah, a young woman who becomes possessed by her late beloved, Khonen. Before his untimely demise, Khonen observes: "Our saints all have the task of cleansing human souls, they root out the evil spirit of sin and restore our souls to radiant perfection." Rabbinic figures charged with ridding demons from the congregation's midst were thought to be performing a holy act of communal hygiene. In this way, the supernatural construct of the dybbuk represented human impulses, and its identification and eradication enabled society to better itself. As an entity derivative of human flaws and harmful to human flourishing, the dybbuk can stand for Negative Automatoner fears about intelligent machines.

The dybbuk also had a friendlier sibling known as the maggid, a force that would inhabit and inspire scholars, much as a muse arouses a poet. Centuries ago, rabbis recorded their ecstatic encounters with maggids, who spurred bursts of creativity and amplified their better angels.

For instance, Rabbi Joseph Karo, the 16th-century Spanish kabbalist and philosopher whose codification of Jewish law is still considered authoritative today, attributed his virtuosity to a maggid. "I ate and drank but a small amount, I studied mishnayot at the beginning of the night, and I slept until daybreak," he records in the Maggid Meisharim (or The Maggid of Truthfulness). At daybreak, he was visited by a spirit who told him, "God will be with you everywhere you journey, and God will ensure the success of everything you have done and will do."

In our modern world, where science reigns, these forces can be seen as manifestations of internal psychological conditions rather than external entities. But whether as supernatural beings or psychological forces, the concepts of the maggid and the dybbuk offer templates for the Positive and Negative Automatoner perspectives. The maggid model maps onto the Positive Automatoners' conception of AI as an essentially beneficent force that reflects our own humanity, doing good but fundamentally requiring our input.

FROM LEGEND TO POLICY

With a deeper understanding of the medieval and early modern predecessors of the four schools of AI thought, we can begin to formulate a sensible approach to best exploit this technology's potential while minimizing its drawbacks.

First and foremost, our old and new examples should inspire us to embrace the tremendous possibilities that today's machines present. Whether or not AI can be properly characterized as autonomous, it is rapidly transforming numerous fields, from basic science to drug discovery to language processing to artistic expression. We should welcome innovation that enhances and extends human life, just as the ancients ushered the golem into existence. The Positive Autonomists (as well as the Positive Automatoners) present a compelling argument on this front.

Second, even as we push forward, we should ensure that AI operates in an ethical and responsible fashion. Much as purity of spirit and thought was required to create the golem, we must develop our machines for appropriate purposes and in a spirit of communal responsibility. While numerous regulatory schemes have arisen aiming to do exactly this, many of them would unduly shackle the golem, so to speak.

We should approach regulation with particular care given that the likes of Microsoft, Nvidia, and OpenAI have actively lobbied for more, not less, government regulation, just as Facebook, Ford, and Amazon did before them. The Biden administration obliged with a top-down, one-size-fits-all executive order (which President Donald Trump has since repealed) to match a similarly burdensome measure enacted by the European Union. We should train a skeptical eye on industry titans' exhortations to restrain themselves and their competitors; when government and industry are holding hands and smiling, we should be nervous.

Instead, we should embrace a rigorous set of voluntary guidelines that both AI companies and industry organizations would adopt and enforce. Companies have already indicated a willingness to do so: OpenAI has pledged "to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power." For its part, Google has promised to "design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research." Groups like the Partnership on AI and the AI Alliance — which comprise dozens of universities, foundations, and companies large and small — present a far more appealing model than heavy-handed regulation. These voluntary standards would account for the critiques of the Negative Autonomists and Automatoners while forging a path toward the Positive Autonomist vision.

Third, in promulgating and enforcing boundaries on AI development, we must ensure that our machines reflect the best of ourselves — that they serve as our maggids and not our dybbuks. We must carefully identify and examine the biases and prior assumptions embedded in our individual and communal identities and make sure to exorcise our darker impulses. In this regard, both the Positive and Negative Automatoners have much to teach us.

"We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies," Google acknowledged on its AI website in 2018. "We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief." The company's AI offerings, however, have generated content veering from outright racist to hyper-progressive — none of which is constructive. Only by programming our machines to be more fair and accurate can we hope to obtain just results; only by expelling our dybbuks and summoning our maggids can we expect to improve ourselves and our environments.

Finally, we must attune ourselves to the possibility — however slim — that AI could cause catastrophic harm to our planet. Even if the probability of such an event is extremely small, we must make certain to include some sort of kill switch that will enable us to terminate our machines in the event of calamity. Just as the Maharal had to deactivate the golem-gone-bad when it began to malfunction, we must ensure that our contemporary golems remain eternally under ultimate human control. It's quite possible that the Negative Autonomists are overstating the risk of apocalypse, but prudence dictates taking common-sense steps to preclude it from ever happening.

Fortunately, many scientists are hard at work developing a kill switch for today's machines that will enable humans to retain ultimate control without disturbing their operation. For instance, in a February 2024 paper, a group of programmers, academics, and policy analysts from the Harvard Kennedy School, OpenAI, Oxford University, the University of Cambridge, and other institutions proposed a robust set of guardrails that LLM developers should build into their infrastructure:

In situations where AI systems pose catastrophic risks, it could be beneficial for regulators to verify that a set of AI chips are operated legitimately or to disable their operation (or a subset of it) if they violate rules. Modified AI chips may be able to support such actions, making it possible to remotely attest to a regulator that they are operating legitimately, and to cease to operate if not.

AI is a profoundly transformative technology that is likely to become genuinely autonomous in important ways and that already offers enormous promise to society. We are morally and practically obligated to harness these powers, to channel our own divinely given creativity toward developing these critically important machines.

But at the same time, we must train a critical eye on this powerful technology, and take care to guide it toward a positive trajectory. Locally, nationally, and globally, we must ensure that a unified thalamus extends some level of supervision to the neurons firing in our collective cortex. Our tools must remain our tools.

In the end, we should all — AI skeptics and exponents, automatoners and autonomists, futurists and Luddites, left, right, and center — heed the wise words that the celebrated Jewish historian Gershom Scholem uttered 60 years ago in remarks inaugurating the second-ever computer in Israel: "Develop peacefully and don't destroy the world. Shalom."

Michael M. Rosen is an attorney and writer in Israel, a non-resident senior fellow at the American Enterprise Institute, and author of Like Silicon from Clay: What Ancient Jewish Wisdom Can Teach Us about AI, from which this essay is adapted.

