
Sophie Luskin is an Emerging Scholar at Princeton's Center for Information Technology Policy (CITP), where she researches the regulation, issues, and impacts surrounding generative AI for companionship, social and peer media platforms, age assurance, and consumer privacy, with the goal of protecting users and promoting responsible deployment. Her research spans policy, legal, journalistic, and communications spaces. Luskin’s writing on these topics has appeared in a variety of outlets, including Corporate Compliance Insights, National Law Review, Lexology, Whistleblower Network News, Tech Policy Press, and the CITP Blog. Recently, Luskin sat down with Princeton undergrad Jason Persaud to discuss her research interests.
Jason Persaud: Could you begin by telling us a little bit about yourself and some of the work that you do here at the CITP?
Sophie Luskin: I am a researcher in the Emerging Scholars Program here at CITP. I mostly work with Mihir Kshirsagar through the center’s Tech Policy Clinic, but I have ongoing projects with various people connected to the Center. This is my second year at CITP, and before starting here I was working at a whistleblower law firm. I began there as a communications fellow, and the role grew into explicit tech policy work. So I feel like that really informs my research interests.
I see my interests as a mixture of public interest work and consumer protection. It’s exciting to work on that specifically within tech policy, because it has been my area of interest for a while, even if it was less explicit before I came here.
Jason: Nice. Could you talk a little bit more about how your background has informed your current research?
Sophie: I got into tech policy because I had an interesting path through history. At the University of California, Davis [UC Davis], I had a professor named Omnia El Shakry, and many of her classes’ themes around colonialism and global interconnectivity centered on technologies of control, like surveillance. Those were major themes throughout history, and ones that I could connect to social media and the internet. Then I discovered, in my junior year of undergrad, that UC Davis had a science and technology studies program, and so I minored in science and technology studies from there.
And then I ended up at the law firm because, when I was interviewing, I saw that it was the broadest opportunity I had to explore different areas of interest, and they were excited that I was interested in tech policy.
Jason: Okay, so, you mentioned right before [the interview] that you just came from a meeting about an AI project. Could you talk more about that?
Sophie: Yeah, so this project is a survey of AI mental health chatbot products. It specifically looks at the language they use to market themselves, so claims like ‘24/7 availability,’ ‘non-judgmentality,’ ‘personalization’ (gets to know you), and so on. What’s interesting is that this is a widely discussed topic in the news right now, because there have been cases where sycophancy has impacted people’s mental health, livelihoods, and more.
Those stories involve general-purpose products; they are coming out of interactions with OpenAI’s ChatGPT. But when people talk about why users are turning to these tools, they cite ‘24/7 availability,’ ‘non-judgmental,’ and things like that, and that’s not necessarily the language coming from the companies and products themselves. So the project is trying to identify themes across the major mental health products, the ones designed to be tools for that purpose, and to analyze what language they use and how it may still be harmful.
Jason: Could you tell us a little bit more about another project you’re working on?
Sophie: Aside from the therapy chatbot project, I am working on a survey with Madelyne Xiao and Mihir, inspired by New York’s SAFE for Kids Act.
It’s about people’s preferences around age assurance methods. The act is designed to prevent kids from being fed algorithmic personalized feeds without parental consent. So, to be shown that kind of feed without parental consent, a user would have to prove they are over 18.
If they’re under 18, they’ll still have access, but it would be a chronological feed. So, it’s not like they’d be cut off from the product entirely – it’s just steering them away from features that are deemed harmful or addictive.
Our angle is: this is going to happen; the act has passed, and now they’re looking into implementation. What are the ways people are most comfortable with age assurance being conducted, and why? What demographic features relate to that?
Specifically, we’re trying to get at whether people are most comfortable with biometric methods – like face scanning or voice analysis to estimate age – or with “hard” verification, like uploading a photo of a driver’s license. And beyond those methods, where do they want that verification to occur? On each platform? Within a device’s operating system? At the app store level? In the browser?
We want to know: when people are fully informed of their options, what do they choose? That way implementation can be as smooth as possible, because there’s going to be a lot of tension around this. That project is currently in the design stage. It complements a year-long course from last year in which three SPIA juniors (now seniors) wrote a report on age assurance methods and where in the tech stack they can be performed, submitted as a comment to the New York Attorney General’s office. We actually just submitted that recently.
Jason: Great, thank you for giving us an opportunity to discuss your work with you.
Jason Persaud is a Princeton University junior majoring in Operations Research & Financial Engineering (ORFE), pursuing minors in Finance and Machine Learning & Statistics. He works at the Center for Information Technology Policy as a Student Associate. Jason helped launch the Meet the Researcher series at CITP in the spring of 2025.

