
Jane Castleman is a Master’s student in the Department of Computer Science at Princeton University. Her research centers on the fairness, transparency, and privacy of algorithmic systems, particularly in the context of generative AI and online platforms. She recently sat down with Princeton undergraduate Jason Persaud ’27 to discuss her research interests and to offer some perspective on her own time as a Princeton undergrad.
Jason Persaud: Could you begin by telling us a little bit about yourself and the work you do here?
Jane Castleman: I’m a second-year Master’s student in computer science, working with Professor Aleksandra Korolova. I work on fairness, privacy, and transparency in online and algorithmic systems, mostly through audits and evaluations.
Jason: Nice, and congratulations on your most recent award – please do tell us more about that.
Jane: Yeah, thank you. I was recently selected as one of the Siebel Scholars, and I was honestly really surprised. The award is mostly for academics and research, and it’s definitely an honor to be picked. I think it’s really exciting that we have two CITP researchers represented on the list. Policy research, especially in the computer science community, ranges in technicality, but it feels good to have that kind of research validated as being just as important as other types of research.

Jason: Could you talk about a project that you’ve been working on recently?
Jane: I’m working on a couple of projects. One of them right now is investigating the fairness and validity of decision making by LLMs [large language models], specifically in hiring and medical decision making. There are a lot of evaluations of whether these decisions are fair and whether they change under different demographic attributes, but there’s less research on whether the decisions are valid: are they made using the right pieces of information, and do we understand why they were made? So we’re trying to use a new type of evaluation to understand that a little bit better.
Jason: How do you see your work informing policymakers in terms of accountability in generative AI?
Jane: Yeah, that’s a good question. It’s always hard to think about the policy impact. For a standard computer science paper, you kind of have to rewrite it, or know from the beginning that you want it to have a policy impact.
I especially learned this in Jonathan Mayer’s class, Computer Science, Law, and Public Policy. Something I’ve been trying to keep in mind is – on the solution side – to make sure that a solution is actually scalable and able to be implemented without sacrificing a lot of efficiency and utility, because otherwise there’s not really an incentive for developers to adopt it.
On the accountability side, something I’ve been thinking a lot about is how evaluations can be more efficient and how we can do them over longer periods of time. Right now, a lot of accountability comes from media pressure. You’ll see research papers get picked up by Bloomberg or The Verge – popular tech reporting outlets – and that provides some pressure.
But it’s really hard because the next model comes out and then the companies claim to have solved the problem, and it would be great if they have. But it just kind of goes in this repeating cycle. And so without efficient evaluations to hold companies accountable, it’s really difficult.
“If you’re an undergrad, don’t be afraid to just talk to people and take hard classes.”
Jason: What advice would you give to undergrad students who are interested in some of the work that you do?
Jane: So I was actually an undergrad at Princeton before my Master’s. I studied computer science, but I don’t think you have to come from computer science. Something I’ve been thinking a lot about in grad school is: don’t be afraid that something will be too hard. I know at Princeton there’s a lot of pressure to get really good grades, and sometimes that means taking easier classes because you think you’ll get a better grade.
But I definitely regret not challenging myself as much, especially now that I’m in grad school and have to take those hard classes. So really try to take as many difficult courses as you can. The advice people usually give is to be technically minded when entering the policy space.
I think that broadens the range of tools you can use – to make policy changes and to incentivize them. You can use these technical skills to say, ‘hey, this is impossible because I can prove it’s impossible,’ or, ‘hey, I built something that’s actually scalable and efficient.’
I guess that’s also advice for myself. But if you’re an undergrad, don’t be afraid to just talk to people and take hard classes.
Jason Persaud is a Princeton University junior majoring in Operations Research & Financial Engineering (ORFE), pursuing minors in Finance and Machine Learning & Statistics. He works at the Center for Information Technology Policy as a Student Associate. Jason helped launch the Meet the Researcher series at CITP in the spring of 2025.