How to Regulate Artificial Intelligence

Dean Woodley Ball

Spring 2024

Since the release of generative artificial-intelligence (AI) software like Midjourney and ChatGPT in the last few years, a growing chorus of advocates has emerged, offering proposals for how to regulate this new technology.

To everyone from academics and journalists to policymakers and AI researchers themselves, the imperative for regulation seems obvious. They assert not only a need to regulate, but also a need to do so quickly, with experts warning that other countries are "ahead" of the United States on the regulatory front. "As the EU and China lead the race to regulate AI," writes Columbia Law School professor Anu Bradford in Foreign Affairs, "Washington needs to decide whether it wants to have a role in building the digital world of the future." Also in Foreign Affairs, political scientist Ian Bremmer and Inflection AI CEO Mustafa Suleyman argue:

Governments are already behind the curve. Most proposals for governing AI treat it as a conventional problem amenable to the state-centric solutions of the twentieth century: compromises over rules hashed out by political leaders sitting around a table. But that will not work for AI.... That means starting from scratch, rethinking and rebuilding a new regulatory framework from the ground up.

Of course, political leaders hashing out rules while sitting around a table have accomplished — albeit imperfectly and unevenly — a great deal over the broad sweep of human history; it's not clear we should discount their efforts from the outset.

What's more, building a new regulatory framework from first principles would not be wise, especially with the urgency these authors advocate. Rushing to enact any major set of policies is almost never prudent: Witness the enormous amount of fraud committed through the United States' multi-trillion-dollar Covid-19 relief packages. (As of last August, the Department of Justice had brought fraud-related charges against more than 3,000 individuals and seized $1.4 billion in relief funds.)

AI capabilities may be expanding quickly, but not at the rate the virus initially spread throughout the world, and not at a rate that would justify an emergency response. We have more time than many AI-regulation proponents would lead us to believe.

A recent paper from Google's DeepMind found that the most effective way to communicate with large language models like ChatGPT is to begin one's inquiry with reminders like "[t]ake a deep breath and work on this problem step-by-step." We would be wise to heed this same advice as we consider the regulation of AI.

WHAT IS ARTIFICIAL INTELLIGENCE?

In broad strokes, AI is not a technology with discrete, bounded applications; it is a general-purpose technology whose long-term applications are unbounded and hence unknown.

Economists define general-purpose technologies as new methods of production that have a wide range of uses, giving them the potential to transform entire economies. The most primitive examples include stone tools and the wheel. Economists Timothy Bresnahan and Manuel Trajtenberg, who coined the term in 1992, identified the steam engine and the electric motor as archetypal general-purpose technologies. Other examples include the printing press and the internet.

AI is the most general of general-purpose technologies. We make use of it every time we search Google, take a picture with our phones, or check social media. It has been used to master chess, improve photography and video, organize supply chains, and help cure diseases, to name a few of its innumerable applications. It will eventually suffuse and fundamentally enable almost everything human beings find useful, much as electricity has done.

AI's capacity to revolutionize our economy was apparent to industry observers well before the state-of-the-art image- and text-generation products emerged in 2022. These more recent applications of AI have sparked our curiosity (and fear) because they are far more general purpose than previous AI systems. OpenAI's GPT-4, for instance, has gained the capacity to reason and construct an elementary world model, as demonstrated by its ability to play chess and even draw rudimentary images after training purely on large amounts of text. This suggests that the models are not simply mimicking human communication patterns, but beginning to gain higher-order cognitive capabilities.

We do not know exactly how far these circumscribed and still rather primitive reasoning abilities will develop. Indeed, one of the most difficult things about AI is that it is a field of live science. The creators of these systems often do not fully understand how they work or what capabilities the models they are working on will have once they are complete. But we do know the ultimate aim of firms such as OpenAI and Google's DeepMind: the creation of artificial general intelligence, machines that can perform all economically valuable cognitive work as well as or better than the best humans. General purpose, indeed.

The uncertain and seemingly unbounded applications of this technology have provoked a combination of awe and fear in almost everyone paying attention. Some of the most fearful are those who occupy positions of prestige in our institutions of government, education, and the media. Their employers will be on the long list of institutions that AI may indelibly change. Some of their institutions will be weakened, others strengthened; some will be made less relevant, and some will crumble altogether. It is no surprise, then, that we hear the loudest calls for urgent regulation of AI from these quarters.

Of course, many AI researchers have called for regulation as well. In some cases, they may be motivated by a cynical desire to create regulatory burdens that make it harder for startups and other businesses to compete with their firms. For the most part, though, their pleas for regulation probably stem from real fears about AI's as-yet-unknown applications and abilities.

Fear is an understandable reaction to the sudden surge of AI capabilities we have recently witnessed. Those who have raised alarms fall broadly into two groups, with the first believing that a sufficiently capable AI will develop its own preferences that will ultimately clash with those of humans. One prominent AI safety advocate, Eliezer Yudkowsky, believes that the "most likely result" of continuing AI research in its current form is that "literally everyone on Earth will die."

This may sound like the domain of science fiction, and for the time being, that is where it should remain: No empirical evidence or mathematical model exists to support these claims (tempting though it may be to extrapolate from our decades of exposure to films like 2001: A Space Odyssey or The Matrix). While we cannot and should not discount this view outright, we also should not risk destroying a nascent general-purpose technology at best, or triggering a world war at worst, because of a hypothetical concern with no evidence to support it.

The second group believes that AI-initiated doom will come from humans' misuse of the technology. Human beings have misused every tool we have ever invented, and intelligence is in some sense the most dangerous tool we possess. Governments, corporations, and individuals have already used AI to harmful ends, and will no doubt continue to do so. The possibilities for its misuse are innumerable.

PROPOSED SOLUTIONS

Both types of AI doomsayers have put forth policy proposals to tidy up what they see as the mess these pesky AI researchers have created. One prominent example is to initiate research pauses or bans. If this were a game, advocates argue, now would be an ideal moment for halftime.

But the world is not a game. The ability to conduct this research is already quite dispersed, and becoming more so by the day. Competitive pressures between firms and great powers alike will keep the incentives to maintain this research strong. Furthermore, any such pause or ban would ultimately require violence to enforce, even among allied countries. It is far easier to see how that could go wrong in the short term than it is to see how AI research itself could.

Other proposals are perhaps more realistic. One of the most common is to create national or worldwide regulatory agencies that would control AI deployment and install various other guardrails to mitigate potential harms. An example of such a proposal, from U.S. senators Josh Hawley and Richard Blumenthal, focuses on developing formal channels of information sharing, and would require people developing AI models beyond an established threshold of computing power to register their AI with a government agency. Their proposal would also mandate public transparency about the technical details of powerful AI models, as well as require that human users always be informed that they are interacting with an AI system.

These regulatory schemes raise some difficult questions. For instance, will the agency envisioned by this proposal regulate all models beyond a static threshold of capability, or will that threshold be adjusted as the technology advances? Additionally, the line between AI and human labor can be quite blurry, and will undoubtedly become even more so in the future. How would a disclosure or "watermarking" requirement apply in such circumstances?

A much more troubling aspect of this proposal is that it includes an accountability regime that could severely harm the AI field:

Congress should ensure that A.I. companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Where existing laws are insufficient to address new harms created by A.I., Congress should ensure that enforcers and victims can take companies and perpetrators to court, including clarifying that Section 230 does not apply to A.I.

In the extreme, this would mean that any "cognizable harm" caused with the use of AI would result in liability not only for the perpetrator of the harm, but for the manufacturer of the product used to perpetrate the harm. This is the equivalent of saying that if I employ my MacBook and Gmail account to defraud people online, Apple and Google can be held liable for my crimes.

Crimes are often complex and require the interplay of different tools to achieve. If a near-future burglar were to use an AI-powered computer-vision system to probe my home for potential weaknesses, would the maker of that system be liable for the burglar's behavior? Or imagine that a white-collar criminal uses a language model to write a seemingly benign memo that was, unbeknownst to the model, being used in furtherance of a crime — to write a fraudulent investor presentation, for instance. Would the maker of that language model be financially liable, even though the model could not have known that it was aiding in a crime?

It's unlikely that a court would hold an AI company liable for such things. A poorly crafted liability law, however, could create serious uncertainty that may deter investment and dynamism in the field.

The crux of the matter is that AI will act as an extension of our own will, and hence of our own intelligence. If a person uses AI to harm others or otherwise violate the law, that person is guilty of a crime. Adding the word "AI" to a crime does not constitute a new crime, nor does it necessarily require a novel solution.

In general, outside of extreme tail risks like the creation of a synthetic bioweapon, AI must be understood as a general-purpose tool whose users are responsible — legally, ethically, and in most other important senses — for how they choose to use it. The United States already has a robust set of principles, judicial precedent, and laws that govern the liability of firms for the harms caused by their products in the real world. While this system is imperfect and may need to be updated to reflect the propagation of AI in some circumstances, it is not apparent that a radically different approach is required.

Indeed, to a certain extent, we already regulate AI — in the form of all the laws that currently exist. AI may one day be able to autonomously drive vehicles in the vast majority of circumstances; we already have rules of the road. AI may one day play a key role — perhaps the leading one — in developing new medicines and medical treatments; we already have a sprawling regulatory apparatus, a body of case law, and a plethora of interest groups and research organizations that study and evaluate the safety and effectiveness of new medicines. AI, however capable and technically marvelous, will not upend our existing institutions and legal structures overnight.

To be sure, our institutions, organizations, and laws will have to evolve, sometimes non-trivially, to address the novel questions AI raises. But it is impossible to know the necessary direction and extent of that evolution until specific AI tools exist and have begun to diffuse throughout society. Attempting to craft a grand regulatory regime for AI now would be tantamount to creating something like Europe's General Data Protection Regulation in the months after the first web browser was launched in 1990.

The fact that a new tool or technology exists is merely the first step in the process of innovation. Indeed, much of the actual "innovation" in technology takes place not in the invention itself, but in its diffusion throughout society. The internet, to take one example, has led to the development of a host of new industries and other innovations that no one could have foreseen: Who in 1990 could have predicted that the World Wide Web would provide a launchpad for bloggers, gaming influencers, social-media gurus, and gig-working Uber drivers?

It is through diffusion that we discover the utility, or lack thereof, of new technologies. We need to go through such a process to mediate AI, to develop new informal and formal norms related to its use, to learn where the weak spots are that would benefit from new laws, and to discover the true function of the technology before we begin attempting to regulate it. With that concrete experience, we can react to misuse or maladaptations with the law as appropriate.

THE AMERICAN WAY

The transformative potential of AI has inspired crucial questions, such as how to ensure that the technology is not biased, that it does not spread misinformation, and that it is only used for "good" or "productive" purposes more generally. Exploring those questions quickly leads to the sort of epistemic and moral questions we have been discussing and debating for millennia: What is the truth, what is the good, and who gets to decide?

Such conversations are essential, but we will fundamentally constrain them if they take place primarily within the debate over what government should be doing about AI today. History makes clear that government is not always an ideal arbiter of what is productive or even good. Indeed, we have found, through millennia of conflict and hard work, that answering those questions in a centralized fashion tends to lead to corruption, oppression, and authoritarianism. We must grapple with that reality, not seek to wash it away; attempting to wash it away will drastically limit, or perhaps even outlaw, future progress in this profoundly promising field.

One of the tenets of the American experiment is that it is not the state's unmediated responsibility to answer fundamental epistemic and moral questions for us. Uncomfortable as the boundlessness of the future may be, we discover the truth and the good organically, and often with no specific plan to do so.

The general-purpose nature of AI should give us a key intuition for how it will ultimately be governed. Because the uses and implications of general-purpose technologies are so wide-ranging, we will have to rely on society as a whole — all human institutions, habits, customs, preferences, norms, and our entire corpus of laws and regulations — to mediate them. There is no Department of Electricity or Department of the Internet to police every conceivable use of those technologies; such things would be obvious epistemic boondoggles. A similar apparatus for governing AI would lead to obtuse regulation at best and opportunities for tyranny at worst.

None of this is to suggest that there is no role for the state to take today. State efforts to make AI more "legible," to borrow a term from James Scott, through means like reporting requirements for the data centers that perform large-scale model training, may be a good investment of resources. Fortunately, President Biden's October 2023 Executive Order on AI does just this. Policymakers should also explore requiring such facilities to adopt Know Your Customer (KYC) procedures to mitigate the risk of an adversarial nation or non-state actor making use of their computing capacity. Ideally, such efforts will help policymakers understand the AI landscape in more concrete detail without burdening firms beyond reporting basic facts about their activities. This reporting regime could also help the state guard against models going off the rails in the dangerous ways envisioned by Eliezer Yudkowsky and other AI opponents.

Additionally, the AI field currently lacks robust standards for evaluating model performance, reliability, and safety. Government institutions, such as the National Institute of Standards and Technology, can help us develop such standards, though they may need to be industry-specific. These standards could then be required in some high-risk settings, such as medicine, and incentivized elsewhere, perhaps through government procurement rules. This solution is far more flexible and practical than imposing one-size-fits-all rules on all models, which would be challenging to enforce given the widespread proliferation of AI capabilities throughout the world.

FEAR ITSELF

AI labs are engaged in an effort to recreate in silicon the mechanisms of the human mind. Regardless of whether they succeed, this is surely among the boldest scientific quests undertaken in the history of our species.

That history makes it clear that all technologies are used for both good and evil purposes. And while we tend to focus on the potential for AI to be used for evil, there is much potential for good that we overlook.

AI may one day enable a handful of people to conduct as much productive labor as a newsroom once did, transforming local journalism. It may give individuals and businesses a new ally to help them keep up with the torrent of rules and regulations their governments promulgate — an ally that can understand and act on immense volumes of text and never grows tired or frustrated. Highly customizable tutors with fluency in all domains of knowledge may one day help educate our children, supplementing and perhaps replacing the unaccountable government actors to whom most of us entrust this vital task. Just as the possibilities for misuse are innumerable, so too is the potential for AI to bring about a profoundly richer world for ourselves and our children.

There is thus a future to hope for — not simply one to fear. To forget this is to lose faith in the process of tool-building, innovation, and technology diffusion that has undergirded almost all material progress. If we forget this — if we lose hope and succumb to fear — regulation that causes stagnation in technological progress will seem tolerable, perhaps even desirable. The crucial balance to strike, then, is between hopeful optimism and candid recognition of the stark challenges we face with this and any new general-purpose technology.

These challenges are primarily technical and scientific, but they imply others, too. As AI diffuses throughout society, the need for new norms, new organizations, new alliances — in short, institution building — will become apparent. Surely the law has a role to play here. But much of this work will naturally fall to each of us doing our own small part in enriching the collective order. Perhaps we will wish to canonize in law — or arrest — some of the new norms we will organically create, but those norms must exist before they can be reinforced or mitigated by the law. So while there is a role for policy, our politicians should settle for a supporting role for now. Their humility will pay dividends for us all.

The British philosopher Michael Oakeshott once likened the human condition to sailing on a "boundless and bottomless sea." We can view the reality Oakeshott describes as a crisis, but it is surely more life affirming to accept and even embrace this fundamental truth. AI, like so many other challenges in human history, should be seen not as a tsunami to run away from, but as an invitation for humanity to be bold, to sail the boundless sea with courage. Each of us gets to choose whether to accept that invitation or to run from it. No matter which path we decide to pursue, we should bear in mind Oakeshott's wisdom: We have already set sail.

Dean Woodley Ball is the senior program manager for the Hoover Institution’s State and Local Governance Initiative and the author of the Substack Hyperdimensional.

