Category: Artificial Intelligence, Data Science & Society
-
The Limits of Data Filtering in Bio-Foundation Models
Blog Authors: Boyi Wei, Matthew Siegel, and Peter Henderson
Paper Authors: Boyi Wei*, Zora Che*, Nathaniel Li, Udari Madhushani Sehwag, Jasper Götting, Samira Nedungadi, Julian Michael, Summer Yue, Dan Hendrycks, Peter Henderson, Zifan Wang, Seth Donoughe, Mantas Mazeika
This post is modified and cross-posted between Scale AI and Princeton University. The original post can be…
-
Meet the Researcher: Jane Castleman
Jane Castleman is a Master’s student in the Department of Computer Science at Princeton University. Castleman’s research centers on the fairness, transparency, and privacy of algorithmic systems, particularly in the context of generative AI and online platforms. She recently sat down with Princeton undergraduate Jason Persaud ’27 to discuss her research interests and gave some…
-
Why the GenAI Infrastructure Boom May Break Historical Patterns
Authored by Mihir Kshirsagar Observers invoke railroad, electricity, and telecom precedents when contextualizing the current generative artificial intelligence (GenAI) infrastructure boom—usually to debate whether or when we are heading for a crash. But these discussions miss an important pattern that held across all three prior cycles: when the bubbles burst, investors lost money but society…
-
Meet the Researcher: Varun Satish
Varun Satish is a Ph.D. student in demography at Princeton University. His current projects include using language models to study the life course, and using machine learning to uncover shifting perceptions of social class in the United States over the last 50 years. Satish is originally from Western Sydney, Australia. Princeton undergraduate Jason Persaud ’27…
-
AI “Born Secret”? The Atomic Energy Act, AI, and Federalism
Authored by: Kylie Zhang and Peter Henderson TL;DR: Can states regulate the risk of AI disclosing nuclear secrets? This post explores the Atomic Energy Act, its applicability to AI, the potential impacts on state efforts, and policy recommendations for guiding AI safety evaluations and model releases. If an advanced AI system can figure out…
-
Statutory Construction & Interpretation for AI
Blogpost authors: Nimra Nadeem, Lucy He, Michel Liao, and Peter Henderson Paper authors: Lucy He, Nimra Nadeem, Michel Liao, Howard Chen, Danqi Chen, Mariano-Florentino Cuéllar, Peter Henderson A longer version of this blog is available on the POLARIS Lab website, an accompanying policy brief is available online, and the full paper can be found on…
-
Emotional Reliance on AI: Design, Dependency, and the Future of Human Connection
What We Learned at CHAI 2025 Tutorial – by Inyoung Cheong, Quan Ze Chen, Manoel Horta Ribeiro, and Peter Henderson “I don’t feel, I don’t remember, and I don’t care. That’s not coldness—it’s design” — Excerpt from a user-shared ChatGPT record As conversational AI systems become more emotionally expressive, users increasingly treat them not just as…
-
The “Bubble” of Risk: Improving Assessments for Offensive Cybersecurity Agents
Authored by Boyi Wei Most frontier models today undergo some form of safety testing, including whether they can help adversaries launch costly cyberattacks. But many of these assessments overlook a critical factor: adversaries can adapt and modify models in ways that expand the risk far beyond the safety profile that static evaluations capture. At…
-
Aligned Generative Models Exhibit Adultification Bias
This blog post is based on “Adultification Bias in LLMs and Text-To-Image Models” by Jane Castleman and Aleksandra Korolova, to appear in the 8th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025). The blog post can also be found on Jane and Aleksandra’s Substack, Eclectic Notes on AI. Generative AI models are poised to…
-
What the AI Whistleblower Protection Act Would Mean for Tech Workers
This piece was originally published on Tech Policy Press. Written by Sophie Luskin, Emerging Scholar at CITP. On May 15, Senate Judiciary Chairman Chuck Grassley (R-IA) introduced the AI Whistleblower Protection Act (AIWPA), a bipartisan bill to protect individuals who disclose information regarding a potential artificial intelligence security vulnerability or violation. Under the bill, these whistleblowers…