The Promise of “Public AI”: Lessons from a National Gathering of State Leaders

Blog Post Author: Mihir Kshirsagar

Shaping the Future of AI Conference Report authored by: Mihir Kshirsagar and Sophie Luskin

Photography by Sameer Khan / Fotobuddy Photography

[Photo: a panelist holding a microphone, speaking to the audience]

This summer, over 120 policymakers, researchers, and government leaders gathered in Princeton to wrestle with a deceptively simple question: How can artificial intelligence truly serve the public interest? Our report from the “Shaping the Future of AI” conference aims to capture both the enormous potential and the stubborn challenges facing “Public AI” – systems designed with public accountability baked in from the start, rather than retrofitted as an afterthought. The conference was organized by the Center for Civic Futures (formerly CPSAI), InnovateUS, The Gov Lab, the National Governors Association, the Center for Public Sector AI, the Office of the Chief AI Strategist for New Jersey, the New Jersey AI Hub, and Princeton’s Center for Information Technology Policy (CITP).

The conference identified four key strategic choices that cut to the heart of how governments can responsibly deploy AI. Unlike the breathless coverage we often see about AI in government, these choices acknowledge genuine trade-offs in implementing these complex systems.

[Photo: three attendees engaged in conversation]

1. Incrementalism vs. Transformational Investment

The tension between cautious incrementalism and transformational investment is perhaps most telling. States can pursue modest pilots within existing budgets – a choice that avoids political risk but may miss AI’s transformative potential. Or they can commit to sustained, multi-year investments that could fundamentally reshape how government works, but require political courage and sustained commitment across electoral cycles.

2. Back-Office vs. Public-Facing Innovation

Similarly, the choice between internal operations and public-facing innovation reflects a deeper question about the government’s role. Back-office AI applications – automated document processing or internal search tools – offer more reliable outcomes with less public scrutiny. But citizen-facing AI services, while riskier and more complex, could fundamentally improve how people interact with their government.

3. Minimum Viable Governance vs. Comprehensive Capacity-Building

[Photo: conference participants listening to a speaker]

The most intriguing tension involves governance timelines: “minimum viable governance” frameworks that enable faster AI adoption versus comprehensive capacity-building that delays implementation but could result in more public accountability. This choice forces states to confront whether they’re willing to accept some governance gaps in exchange for early-mover advantages.

4. Hype vs. Transparency

Perhaps most importantly, the report acknowledges that Public AI implementation occurs against a backdrop of declining trust in government institutions. As state leaders emphasized, constituents respond better to honest acknowledgment of challenges than overselling capabilities that don’t yet exist. This creates a difficult but essential dynamic: governments must be transparent about AI limitations while still demonstrating its value. Rather than promising that AI will magically fix problems, successful implementation requires careful attention to governance frameworks, technical limitations, workforce impacts, and public trust considerations that are often underestimated in initial planning. As Anne-Marie Slaughter noted at the conference, “AI doesn’t just transform how government does things – it can transform what government does and what government in a democracy can be.” 

Realizing this potential requires navigating the strategic choices outlined in the report based on each jurisdiction’s specific context rather than assuming one-size-fits-all solutions. It requires building organizational capacity for rapid technological change while maintaining public accountability. And it requires treating Public AI as infrastructure for innovation and economic development, not just a cost-cutting tool. The conversation is just beginning, but at least it’s a conversation grounded in reality rather than hype. 

Read the conference report in full.

[Photo: four attendees standing in a circle, talking]

Mihir Kshirsagar runs CITP’s first-of-its-kind interdisciplinary technology policy clinic, which gives students and scholars an opportunity to engage directly in the policy process. Most recently, he served in the New York Attorney General’s Bureau of Internet & Technology as the lead trial counsel in cutting-edge matters concerning consumer protection law and technology, and obtained one of the largest consumer payouts in the State’s history. Before law school, he was a policy analyst at the Electronic Privacy Information Center in Washington, D.C., educating policymakers about the civil liberties implications of new surveillance technologies.

Sophie Luskin is an Emerging Scholar at CITP, where she studies regulation, issues, and impacts around generative AI for companionship, social and peer media platforms, age assurance, and consumer privacy. Her writing on these topics has appeared in a variety of outlets, including Corporate Compliance Insights, National Law Review, Lexology, Whistleblower Network News, Tech Policy Press, and CITP’s blog.
