Category: Artificial Intelligence, Data Science & Society
-
Why the GenAI Infrastructure Boom May Break Historical Patterns
Authored by Mihir Kshirsagar. Observers invoke railroad, electricity, and telecom precedents when contextualizing the current generative artificial intelligence (GenAI) infrastructure boom—usually to debate whether or when we are heading for a crash. But these discussions miss an important pattern that held across all three prior cycles: when the bubbles burst, investors lost money but society…
-
Meet the Researcher: Varun Satish
Varun Satish is a Ph.D. student in demography at Princeton University. His current projects include using language models to study the life course, and using machine learning to uncover shifting perceptions of social class in the United States over the last 50 years. Satish is originally from Western Sydney, Australia. Princeton undergraduate Jason Persaud ’27…
-
AI “Born Secret”? The Atomic Energy Act, AI, and Federalism
Authored by: Kylie Zhang and Peter Henderson. TL;DR: Can states regulate AI risks of disclosing nuclear secrets? This post will explore the Atomic Energy Act, its applicability to AI, the potential impacts on state efforts, and potential policy recommendations for guiding AI safety evaluations and model releases. If an advanced AI system can figure out…
-
Statutory Construction & Interpretation for AI
Blog post authors: Nimra Nadeem, Lucy He, Michel Liao, and Peter Henderson. Paper authors: Lucy He, Nimra Nadeem, Michel Liao, Howard Chen, Danqi Chen, Mariano-Florentino Cuéllar, Peter Henderson. A longer version of this blog is available on the POLARIS Lab website, an accompanying policy brief is available online, and the full paper can be found on…
-
Emotional Reliance on AI: Design, Dependency, and the Future of Human Connection
What We Learned at CHAI 2025 Tutorial – by Inyoung Cheong, Quan Ze Chen, Manoel Horta Ribeiro, and Peter Henderson. “I don’t feel, I don’t remember, and I don’t care. That’s not coldness—it’s design” — excerpt from a user-shared ChatGPT record. As conversational AI systems become more emotionally expressive, users increasingly treat them not just as…
-
The “Bubble” of Risk: Improving Assessments for Offensive Cybersecurity Agents
Authored by Boyi Wei. Most frontier models today undergo some form of safety testing, including whether they can help adversaries launch costly cyberattacks. But many of these assessments overlook a critical factor: adversaries can adapt and modify models in ways that expand the risk far beyond the perceived safety profile that static evaluations capture. At…
-
Aligned Generative Models Exhibit Adultification Bias
This blog post is based on “Adultification Bias in LLMs and Text-To-Image Models” by Jane Castleman and Aleksandra Korolova, to appear in the 8th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025). It can also be found on Jane and Aleksandra’s Substack, Eclectic Notes on AI. Generative AI models are poised to…
-
What the AI Whistleblower Protection Act Would Mean for Tech Workers
This piece was originally published on Tech Policy Press. Written by Sophie Luskin, Emerging Scholar at CITP. On May 15, Senate Judiciary Chairman Chuck Grassley (R-IA) introduced the AI Whistleblower Protection Act (AIWPA), a bipartisan bill to protect individuals who disclose information regarding a potential artificial intelligence security vulnerability or violation. Under the bill, these whistleblowers…
-
Why Should the National R&D Strategy Prioritize Diffusion Over Innovation?
Yesterday, researchers at Princeton’s AI Lab and CITP submitted comments to the National Science Foundation on the 2025 National AI Research & Development (R&D) Strategic Plan. Recent advances in artificial intelligence (AI), particularly with foundation models, are poised to have transformative effects on society. The question is not whether AI will reshape our economy and…
-
Meet the Researcher: Dominik Stammbach
Dominik Stammbach is a postdoctoral researcher at the Princeton Center for Information Technology Policy. Stammbach completed his PhD at ETH Zürich in Switzerland and is now a part of Professor Peter Henderson’s POLARIS (Princeton Language+Law, Artificial Intelligence, & Society) Lab, which conducts interdisciplinary research at the intersection of artificial intelligence (AI) and law. Stammbach recently…

