Policy Student Taskforce Report on Age Assurance Techniques Now Available to the Public

Last year, three Princeton University undergraduate students in SPIA’s Princeton Policy Advocacy Clinic — Sabrina Johnston, Sander McComiskey, and Jeana Raphael — wrote a report titled “Age Assurance Techniques and the New York SAFE for Kids Act” for a policy taskforce taught by Mihir Kshirsagar. The report analyzes eight prominent age assurance techniques to assess whether they could qualify as “commercially reasonable and technically feasible” mechanisms under New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act of 2024. I served as course advisor for these students throughout the term and in the writing of this report.

We are making this report broadly available as a public resource for governing and enforcement bodies, as it provides a technical and commercial analysis of these mechanisms and their alignment with the Act’s values.


About the SAFE for Kids Act

New York’s SAFE for Kids Act prohibits social media platforms from providing addictive feeds to children under 18 without parental consent, and requires platforms to use commercially reasonable and technically feasible methods to determine user age. The New York Attorney General (NYAG) is responsible for promulgating regulations that identify qualifying techniques for limiting children’s access to “addictive feeds.” The Act defines addictive feeds as those that personalize the content displayed on the basis of the user’s activity on that platform. This means that the Act limits access to a specific design feature rather than to specific content. Its prohibitions are therefore “content-neutral.”

The 2024 law tasks the NYAG’s office with identifying qualifying age-determination techniques and specifying the accuracy those techniques must achieve to satisfy platforms’ obligations. The Act gives the NYAG broad authority to constrain and compel firm conduct — authority that extends beyond ordering platforms to use a specific technique to compelling more complex schemes involving third-party signaling or cooperation.

What the Report Analyzes

The report evaluates the feasibility and effectiveness of age assurance techniques on both technical performance and broader values-based criteria. The technical criteria include: false positives and negatives, ease of circumvention, data collection and protection practices, the burden on users, commercial feasibility, and the bias, equity, and accessibility of the product.

The values-based criteria include protection of minors from harm, adult user access, privacy and anonymity, the burden on platforms and users, and disproportionate impact. The report also analyzes who should be responsible for age assurance — social media platforms, app stores, browsers, devices, or operating systems — and whether the techniques should be implemented in-house or by third parties.

A resource for regulators and enforcement

The report serves as a resource for regulators and enforcement bodies seeking to understand the stakeholder landscape, how age assurance can be conducted, and where to place responsibility. More recently, the Knight-Georgetown Institute released a technical assessment of online age assurance, another resource offering a careful evaluation of the risks and benefits of different approaches.

Sophie Luskin is a CITP Emerging Scholar studying the regulation, issues, and impacts of generative AI for companionship, social and peer media platforms, age assurance, and consumer privacy, with the aim of protecting users and promoting responsible deployment. Luskin began her career in whistleblower law, where she specialized in public interest tech whistleblowing, and she continues to advocate in that space.
