Whose Rules
How Binding Is Administrative Guidance? An Empirical Study of Guidance, Rules, and the Courts Telling Them Apart
Amit Haim
Journal of Empirical Legal Studies, forthcoming
Abstract:
Guidance documents are a main pillar of the modern administrative state. While federal agencies issue thousands of rules every year through notice-and-comment rulemaking, they issue even more guidance documents in various forms. There is, however, an ongoing and fierce dispute over agencies' ability to create binding obligations through guidance without the notice-and-comment rulemaking procedures stipulated by the Administrative Procedure Act. The binding norm doctrine purports to prevent agencies from creating binding obligations through guidance, and often focuses on documents' choice of wording. But to what extent do guidance documents use binding language, and how do courts understand them? Despite widespread interest in these questions, there has been a surprising lack of empirical studies tackling them. This article begins to bridge this gap and presents an analysis based on a novel dataset compiled from an online database of agency guidance, which encompasses nearly 70,000 documents issued by three key federal agencies from 1970 to 2022. Using computational text analysis, it investigates the language of guidance documents to assess their potential bindingness. It identifies specific linguistic cues that courts have used to interpret documents as binding or non-binding and applies these criteria across the dataset. The findings indicate a significant rise over the decades in both the quantity of guidance documents and the assertiveness of their language, and show near parity with legislative rules in binding effect, suggesting that guidance has indeed become a mainstay of administrative policymaking. Moreover, the analysis explores judicial review of guidance documents, finding no substantial differences between documents that were set aside as too binding and others that were upheld, suggesting that the application of the binding norm doctrine fails to create a systematic and consistent framework for administrative agencies and regulated entities. In response to these findings, the article proposes a shift from the current focus on close textual reading of documents to a procedural label test, which assesses only whether a rule has undergone the required procedural steps. This approach aims to simplify the legal assessment of guidance documents and provide a more stable foundation for administrative action.
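The abstract does not spell out the linguistic cues, but courts commonly point to mandatory modals ("shall," "must") versus permissive ones ("may," "should"). A minimal sketch of that kind of scoring, with illustrative cue lists of my own choosing rather than the paper's actual lexicon:

```python
import re

# Illustrative cue lists; the paper's actual lexicon is not given in the abstract.
MANDATORY = ["shall", "must", "is required to", "are required to"]
PERMISSIVE = ["may", "should", "is encouraged to", "recommends"]

def bindingness_score(text: str) -> float:
    """Fraction of modal cues that are mandatory rather than permissive.

    Returns a value in [0, 1]; higher values suggest more binding language.
    """
    lowered = text.lower()
    n_mand = sum(len(re.findall(r"\b" + re.escape(c) + r"\b", lowered)) for c in MANDATORY)
    n_perm = sum(len(re.findall(r"\b" + re.escape(c) + r"\b", lowered)) for c in PERMISSIVE)
    total = n_mand + n_perm
    return n_mand / total if total else 0.0

guidance = "Applicants must submit Form 12; field offices shall reject late filings."
print(f"bindingness: {bindingness_score(guidance):.2f}")  # 1.00 -- both cues are mandatory
```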
Epistemic Humility and Adaptive Regulation
Yoon-Ho Alex Lee & Lynne Kiesling
Northwestern University Working Paper, August 2025
Abstract:
This article advances epistemic humility as a normative and institutional principle for agency rulemaking. Drawing on F.A. Hayek's critique of constructivist rationality, the article recognizes that regulators -- even with their best efforts -- can overestimate their ability to design optimal rules ex ante. Epistemically humble rulemaking would be mindful of regulators' knowledge limitations and build in mechanisms for learning, correction, and revision. These design features -- such as sunset provisions, pilot programs, and contingent rules -- not only improve regulatory performance over time but also enhance democratic legitimacy by institutionalizing openness to new information and stakeholder input. The article further explores how decentralized structures, particularly American federalism, can facilitate policy experimentation and cross-jurisdictional learning. In contrast to static models of regulation, epistemically humble rulemaking envisions regulation as a fallible but improvable enterprise. It ultimately reframes legitimacy not as perfection at the outset, but as responsiveness over time to evidence, context, and public reason.
The Intended and Unintended Consequences of Privacy Regulation for Consumer Marketing
Jean-Pierre Dubé et al.
Marketing Science, September-October 2025, Pages 975-984
Abstract:
As businesses increasingly rely on granular consumer data, the public has increasingly pushed for enhanced regulation to protect consumers’ privacy. We provide a perspective based on the academic marketing literature that evaluates the various benefits and costs of existing and pending government regulations and corporate privacy policies. We make four key points. First, data-based personalized marketing is not automatically harmful. Second, consumers have heterogeneous privacy preferences, and privacy policies may unintentionally favor the preferences of the rich. Third, privacy regulations may stifle innovation by entrepreneurs who are more likely to cater to underserved, niche consumer segments. Fourth, privacy measures may favor large companies that have less need for third-party data and can afford compliance costs. We also discuss technology platforms’ recent proposals for privacy solutions, which mitigate some of these harms but, again, in a way that might disadvantage small firms and entrepreneurs.
Price regulation in two-sided markets: Empirical evidence from debit cards
Vladimir Mukharlyamov & Natasha Sarin
Journal of Financial Economics, October 2025
Abstract:
This paper provides empirical evidence of a well-known theoretical concern that market failures in two-sided markets are hard to identify and correct. We study the reactions of banks, merchants, and consumers to Dodd-Frank’s Durbin Amendment that lowered interchange fees on debit card transactions. Banks recouped a significant portion of their losses by charging consumers for products that they previously provided for free on the subsidized side of the two-sided market. The accelerated adoption of credit cards with higher interchange fees likely diminished -- if not eliminated -- merchants’ savings. These effects impede the regulation’s stated objective of enhancing consumers’ welfare through lower retail prices.
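For context not stated in the abstract: Regulation II, which implements the Durbin Amendment, caps interchange for covered debit issuers at $0.21 plus 0.05% of the transaction's value (with a possible $0.01 fraud-prevention adjustment). A worked example on a $40 purchase, taking a typical pre-cap ad valorem rate of roughly 1.15% as the comparison point:

```latex
\underbrace{\$0.21 + 0.0005 \times \$40}_{\text{capped fee}} = \$0.23
\qquad\text{vs.}\qquad
\underbrace{0.0115 \times \$40}_{\text{typical pre-cap fee}} \approx \$0.46
```

Roughly halving the per-transaction fee on the merchant side is what creates the losses that, per the paper, banks recouped by charging consumers on the other side of the market.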
A Practical Measure of Red Tape
Dustin Chambers & Colin O'Reilly
Regulation & Governance, forthcoming
Abstract:
Regulation can influence economic dynamism, the distribution of income, and various measures of economic welfare. Despite a substantial proportion of regulation in the United States originating at the state level, we are not aware of any comprehensive measure of excess regulation or “red tape” at the state level. We fill this notable gap by constructing a novel measure of state-level red tape based on the State RegData dataset. The red tape index measures the excess stringency of regulation in each industry relative to a benchmark level of regulation and then weights the extent of excess stringency by the industrial composition of each state. As a comprehensive measure of regulation, the red tape index differs fundamentally from other existing measures of state-level regulation, which tend to target a particular type of regulation (e.g., labor market regulation). The red tape index reveals wide variation in the level of industry regulation among states and shows that most states have regulations that go beyond a so-called “light touch” regulatory approach, implying that red tape is pervasive. This index may be of value both for empirically estimating the effect of comparatively high levels of state regulation on various outcomes of interest and for policy makers seeking to streamline administrative rules.
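The construction described in the abstract can be written compactly. The notation below is mine, not the paper's, and assumes excess stringency is the amount by which an industry's stringency in a state exceeds the benchmark:

```latex
% Hypothetical notation (mine, not the paper's):
%   r_{s,i}: regulatory stringency of industry i in state s (from State RegData)
%   b_i:     benchmark stringency for industry i
%   w_{s,i}: industry i's share of state s's economy
\mathrm{RedTape}_s = \sum_i w_{s,i}\,\max\bigl(0,\; r_{s,i} - b_i\bigr)
```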
Virtual Agent Economies
Nenad Tomasev et al.
Google DeepMind Working Paper, September 2025
Abstract:
The rapid adoption of autonomous AI agents is giving rise to a new economic layer where agents transact and coordinate at scales and speeds beyond direct human oversight. We propose the "sandbox economy" as a framework for analyzing this emergent system, characterizing it along two key dimensions: its origins (emergent vs. intentional) and its degree of separateness from the established human economy (permeable vs. impermeable). Our current trajectory points toward a spontaneous emergence of a vast and highly permeable AI agent economy, presenting us with opportunities for an unprecedented degree of coordination as well as significant challenges, including systemic economic risk and exacerbated inequality. Here we discuss a number of possible design choices that may lead to safely steerable AI agent markets. In particular, we consider auction mechanisms for fair resource allocation and preference resolution, the design of AI "mission economies" to coordinate around achieving collective goals, and socio-technical infrastructure needed to ensure trust, safety, and accountability. In doing so, we argue for the proactive design of steerable agent markets to ensure that the coming technological shift aligns with humanity's long-term collective flourishing.
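The abstract names auction mechanisms for fair resource allocation among agents without specifying which. A minimal sketch of one standard candidate, the sealed-bid second-price (Vickrey) auction, where truthful bidding is a dominant strategy; the agent names and bids are invented for illustration:

```python
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Award the item to the highest bidder at the second-highest bid.

    Truthful bidding is a dominant strategy, one reason Vickrey auctions
    are attractive for resource allocation among autonomous agents.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]  # the price paid is the second-highest bid
    return winner, price

# Hypothetical agents bidding for a compute slot:
winner, price = second_price_auction({"agent_a": 4.0, "agent_b": 7.5, "agent_c": 6.0})
print(winner, price)  # agent_b wins and pays 6.0
```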
AI Agents for Economic Research
Anton Korinek
NBER Working Paper, September 2025
Abstract:
The objective of this paper is to demystify AI agents -- autonomous LLM-based systems that plan, use tools, and execute multi-step research tasks -- and to provide hands-on instructions for economists to build their own, even if they do not have programming expertise. As AI has evolved from simple chatbots to reasoning models and now to autonomous agents, the main focus of this paper is to make these powerful tools accessible to all researchers. Through working examples and step-by-step code, it shows how economists can create agents that autonomously conduct literature reviews across myriad sources, write and debug econometric code, fetch and analyze economic data, and coordinate complex research workflows. The paper demonstrates that by "vibe coding" (programming through natural language) and building on modern agentic frameworks like LangGraph, any economist can build sophisticated research assistants and other autonomous tools in minutes. By providing complete, working implementations alongside conceptual frameworks, this guide demonstrates how to employ AI agents in every stage of the research process, from initial investigation to final analysis.
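The paper's own examples use LangGraph; to avoid misquoting that library's API, here is a framework-agnostic sketch of the core pattern the abstract describes: a plan-act-observe loop in which a model picks tools until it decides the task is done. `fake_llm` and both tools are placeholders, not real services:

```python
# Schematic agent loop in plain Python; fake_llm and the tools are placeholders.
from typing import Callable

def fake_llm(prompt: str) -> str:
    """Deterministic stand-in for a model call, so the sketch runs end-to-end."""
    if "->" not in prompt:
        return "TOOL search_literature algorithmic collusion"
    return "FINAL found three relevant papers (stub)"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_literature": lambda q: f"(stub) bibliography for: {q}",
    "run_code": lambda src: "(stub) execution output",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    """Plan-act-observe loop: the model picks a tool or returns a final answer."""
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        decision = fake_llm(
            transcript + "\nRespond 'TOOL <name> <input>' or 'FINAL <answer>'."
        )
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL").strip()
        _, name, arg = decision.split(" ", 2)            # act: dispatch the tool
        observation = TOOLS[name](arg)
        transcript += f"\n{decision}\n-> {observation}"  # observe, then loop
    return "step budget exhausted"

print(run_agent("survey the literature on algorithmic collusion"))
```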
Strengthening nucleic acid biosecurity screening against generative protein design tools
Bruce Wittmann et al.
Science, 2 October 2025, Pages 82-87
Abstract:
Advances in artificial intelligence (AI)–assisted protein engineering are enabling breakthroughs in the life sciences but also introduce new biosecurity challenges. Synthesis of nucleic acids is a choke point in AI-assisted protein engineering pipelines. Thus, an important focus for efforts to enhance biosecurity given AI-enabled capabilities is bolstering methods used by nucleic acid synthesis providers to screen orders. We evaluated the ability of open-source AI-powered protein design software to create variants of proteins of concern that could evade detection by the biosecurity screening tools used by nucleic acid synthesis providers, identifying a vulnerability where AI-redesigned sequences could not be detected reliably by current tools. In response, we developed and deployed patches, greatly improving detection rates of synthetic homologs more likely to retain wild type–like function.
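The screening tools themselves are not described in the abstract, but sequence-of-concern screening is commonly framed as similarity search against a database of regulated sequences. A toy k-mer Jaccard screen illustrates why function-preserving redesigns can evade it: a homolog can keep the fold while sharing few exact k-mers with the wild type. The threshold and sequences below are invented:

```python
def kmers(seq: str, k: int = 5) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags_order(order: str, sequences_of_concern: list[str],
                threshold: float = 0.3, k: int = 5) -> bool:
    """Toy screen: flag an order whose k-mer Jaccard similarity to any listed
    sequence exceeds the threshold. Real screening tools are far more capable."""
    q = kmers(order, k)
    for soc in sequences_of_concern:
        s = kmers(soc, k)
        if len(q & s) / len(q | s) >= threshold:
            return True
    return False

wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # invented sequence
redesign  = "MKSAYLAKQRNISFVRSHFSKQLDERLGLIEVQ"  # scattered substitutions
print(flags_order(wild_type, [wild_type]))  # True: exact match
print(flags_order(redesign, [wild_type]))   # False: mutations break most exact
                                            # k-mers (Jaccard ~0.09), so it evades
```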
Platform Competition and Interoperability: The Net Fee Model
Mehmet Ekmekci, Alexander White & Lingxuan Wu
Management Science, October 2025, Pages 8842-8864
Abstract:
Is more competition the key to mitigating dominance by large tech platforms? Could regulation of such markets be a better alternative? We study the effects of competition and interoperability regulation in platform markets. To do so, we propose an approach of competition in net fees, which is well-suited to situations in which users pay additional charges, after joining, for on-platform interactions. Compared with existing approaches, the net fee model expands the tractable scope to allow variable total demand, platform asymmetry, and merger analysis. Regarding competition, we find that adding more platforms to the market may lead to the emergence of a dominant firm. In contrast, we find that interoperability can play a key role in reducing market dominance and lowering prices. Broadly speaking, our results favor policy interventions that ensure dominant platforms face formidable competition.
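The abstract does not define the net fee formally; a natural reading (in my notation, not the authors') is that users evaluate the joining fee plus expected on-platform interaction charges as a single price:

```latex
% Hypothetical notation (mine, not the authors'):
%   p_j: platform j's joining fee,  t_j: its per-interaction charge,
%   x_j: a user's expected number of on-platform interactions
F_j = p_j + t_j\,\mathbb{E}[x_j] \qquad \text{(the ``net fee'')}
```

Competing over $F_j$ rather than $p_j$ alone is what lets the model handle markets where most revenue comes after users have joined.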
Tacit collusion by pricing algorithms
Bharat Bhole & Sunita Surana
Economic Inquiry, October 2025, Pages 1036-1065
Abstract:
This article contributes to the debate about the potential of pricing algorithms to collude and earn supra-competitive profits without explicit communication. By simulating competition among seven algorithms, we demonstrate that: (1) algorithms can reach supra-competitive prices in a reasonably short time, taking less than 1/1,000th the time taken by algorithms in recent studies; and (2) tacit collusion among the algorithms is robust to the choice of different algorithms by competing firms. These results address the main criticisms concerning the practical relevance of recent studies that demonstrate algorithmic collusion. The top-performing algorithms possess properties of niceness, forgiveness, provocability, and flexibility.