Why Should the National R&D Strategy Prioritize Diffusion Over Innovation?


Yesterday, researchers at Princeton’s AI Lab and CITP submitted comments to the National Science Foundation on the 2025 National AI Research & Development (R&D) Strategic Plan. Recent advances in artificial intelligence (AI), particularly foundation models, are poised to have transformative effects on society. The question is not whether AI will reshape our economy and institutions—it is whether we can guide that transformation to serve broad public interests or allow the benefits to concentrate among a few well-resourced actors. Our submission makes four core arguments about where the Federal government should prioritize AI research funding.

Investments in Diffusion 

Our first recommendation centers on a distinction that shapes how technologies transform economies: the difference between innovation and diffusion. We risk focusing too heavily on winning the innovation race while neglecting how innovations actually spread through the economy and get adopted at scale.

This isn’t just an academic concern. Jeffrey Ding’s analysis of technological revolutions shows that countries that excel at adopting and adapting technologies often surpass those that pioneered them. AI appears to follow this pattern: despite rapid advances in AI methods, diffusion is proceeding slowly. The classic example is electrification—despite the invention of the dynamo, factories saw no productivity gains for roughly 40 years, because realizing the benefits required redesigning entire factory systems. We see a similar story in today’s productivity data: despite all the excitement about AI capabilities, we haven’t seen significant economy-wide productivity gains, largely because diffusion across industries remains limited.

We argue the Federal government should prioritize investments in what we call the “complements of automation”—the infrastructure, capabilities, and institutional adaptations that enable productive AI adoption. This includes comprehensive AI literacy programs, accelerated digitization of government services, energy infrastructure improvements, and careful evidence-based adoption of AI in government itself.

The last point deserves emphasis. There is a historic opportunity for governments to model thoughtful AI adoption—avoiding both extremes of rushing inadequately tested systems into deployment and moving so slowly that citizens turn to private alternatives for basic services. This requires funding pilot programs, rigorous testing protocols, and sharing of best practices across agencies. We are convening a workshop at Princeton in June to assist state governments with advancing a similar agenda.

Supporting Open Models 

Our second recommendation focuses on supporting open models to democratize access to AI technology. The parallel to open source software is instructive—according to recent estimates, open source represents over $8 trillion in value and comprises 96% of commercial software. Openness in AI can provide similar benefits.

Open models offer an antidote to the growing concentration of AI capabilities among a few tech companies. They have already enabled vast amounts of research that could not happen without access to model internals. At Princeton, research groups have used open models extensively to explore questions ranging from the limits of prediction to applications in chemistry and materials science.

Open models also support reproducible research—a contrast to closed model developers who often deprecate older models, making research based on those systems impossible to reproduce. They increase transparency, lower barriers for diverse stakeholders to shape AI’s future, and enable more services built by and for communities whose needs larger companies may not address.

The policy implication is clear: the Federal government must make substantial, sustained investments in computing infrastructure that supports universities’ research using open models.

Augmentation Over Automation

Our third recommendation is that the NSF prioritize research on AI’s workforce impact. Popular discourse around AI often fixates on mass unemployment. However, recent research suggests more gradual effects: AI will change the nature of work across occupations rather than simply eliminating or creating jobs. The federal strategy needs to support systematic, longitudinal research into this transformation, focusing on four key areas: understanding productivity impacts across occupations, designing AI to promote augmentation over automation, developing effective workforce training programs, and creating approaches that preserve worker agency.

The last point reflects lessons from deployment failures. AI tools imposed without worker input often worsen conditions—gig workers face wage discrimination and opaque pay structures that undermine fairness. But co-design efforts show promise. The Writers Guild and Screen Actors Guild secured contract protections ensuring AI augments rather than replaces their work.

Foundational Research at Universities

Finally, we emphasize how universities, with their commitment to public service and interdisciplinary expertise, stand ready to serve as essential partners with the Federal government in realizing the societal benefits of AI. The NSF has long recognized the value of universities conducting research that is less likely to happen in commercial settings. It should continue to support basic, foundational research with a long-term horizon—research that helps us understand how complex technical systems work and how to improve them.

The NSF’s docket has over 12,000 comments, reflecting broad public engagement with these questions. Our contribution highlights the need for foundational research and prioritizes investments in diffusion, open access, and workforce transformation, to avoid the risk of AI’s benefits concentrating among large technology companies. We welcome feedback from researchers, policymakers, and practitioners working to ensure AI development serves the public interest.

Mihir Kshirsagar runs CITP’s first-of-its-kind interdisciplinary technology policy clinic that gives students and scholars an opportunity to engage directly in the policy process. Most recently, he served in the New York Attorney General’s Bureau of Internet & Technology as the lead trial counsel in cutting edge matters concerning consumer protection law and technology. Before law school he was a policy analyst at the Electronic Privacy Information Center in Washington, D.C., educating policy makers about the civil liberties implications of new surveillance technologies. 
