How much regulation is too much regulation?
The Paris Artificial Intelligence Action Summit 2025 ended yesterday with the U.S. and UK refusing to sign a 100-nation statement of regulatory priorities, which aims to create a global AI sector that is “human rights based, human-centric, ethical, safe, secure and trustworthy,” and to address global inequalities in AI capacity. The U.S. and UK reportedly backed out over concerns that the EU’s approach to regulation – in the words of Vice President J.D. Vance in a speech to the Summit – “strangles” AI development and amounts to “tightening the screws on U.S. tech companies.”
We asked: how much regulation is too much regulation? And should we foster a regulatory environment in which different countries regulate AI according to their local needs, or a one-size-fits-all approach? CITP scholars shared their perspectives on this urgent question.
A Global Difference in Values
Different countries have different values, argued CITP Director Arvind Narayanan. “There is a pervasive but annoying U.S. tendency to opine on regulations in the EU and elsewhere through the lens of American values. Generally speaking, Europeans are more risk-averse and have different preferences than Americans on the balance between rights and economic growth.”
Signs point to some sort of convergence between EU and U.S. approaches, but it remains to be seen how much. Also speaking at the summit, French President Emmanuel Macron acknowledged criticisms that EU regulation might sometimes deter investors, promising to “simplify” regulations and “resynchronize” with the rest of the world.
How much can EU regulators learn from the American experience? Narayanan thinks they can justifiably rely on their own. “I think the question of whether the EU ‘overregulates’ is for Europeans to decide. American technologists’ views on the matter tend to reflect their self-interest rather than Europeans’ interests, and should be considered irrelevant. In general, I think it’s great that different regions — including but not limited to the U.S., the EU, and China — have very different approaches to tech regulation.”
CITP Tech Policy Clinic lead Mihir Kshirsagar agreed, noting that the same argument cuts both ways. “There is a tendency for Americans to also use European experiences to inform U.S. domestic policy debates while ignoring the different values context.” This appeal to the EU as a north star may come from people “who want more regulations” – but not always.
New York Times tech reporter Kevin Roose wrote a dispatch from the Summit, saying that to him, watching EU regulators try to keep up with fast-moving AI developments is “…like watching policymakers on horseback, struggling to install seatbelts on a passing Lamborghini.”
Kshirsagar called the analogy “deeply unhelpful.” He explained, “It suggests this false notion that new tech needs new regs, when in fact it may just need what we apply to all tech — that it respect human values and can be made accountable.” If Roose’s argument is that we must give up on regulating tech because regulators can’t write rules fast enough, it ignores the possibility that existing rules and standards already apply to AI.
Regulation is often cast as the enemy of growth – an image so common that a recent Economist cover declared a “revolt against regulation,” with the words bursting out of red tape.
Princeton political scientist (and responsible computing curriculum lead) Steven Kelts doesn’t think it has to be that way. He said he wants to give “two cheers” for regulation. “There’s a belief that the absence of regulation is freedom. It is, in a sense. In another sense, the absence of regulation is uncertainty. Where regulations aren’t clear and well-specified, and there isn’t a professional civil service, rules are applied inconsistently and the door is open to corruption.”
Kelts said it makes sense to him that people “…want sensible regulations, and not too much.” But that thought shouldn’t go so far that it turns into a complete rejection of regulation.
What Macron called simplicity is important, but “…more important is clear regulations with limited discretion.”
> For another perspective on AI regulation commentary from the Paris AI Action Summit, read the February 11, 2025 TechTake – “Balancing Free Speech & Safety: Envisioning a Human-Centered First Amendment for AI Regulation”.
TechTakes is a series where we ask members of the CITP community to comment on tech and tech policy-related news. TechTakes is moderated by Steven Kelts, CITP Associated Faculty and lecturer in the Princeton School of Public and International Affairs (SPIA), and Lydia Owens, CITP Outreach and Programming Coordinator.