Tag: AI
-
CITP Comments on AI Accountability
Recently, the White House opened a number of opportunities for the public to comment on the growing field of accountability for artificial intelligence (AI) systems. The National Telecommunications and Information Administration (NTIA), the Executive Branch agency that is principally responsible for advising the President on telecommunications and information policy issues, launched a comment process that…
-
How the National AI Research Resource can steward the datasets it hosts
Last week I participated on a panel about the National AI Research Resource (NAIRR), a proposed computing and data resource for academic AI researchers. The NAIRR’s goal is to subsidize the spiraling costs that have put many types of AI research out of reach of most academic groups. My comments on the panel…
-
CITP Case Study on Regulating Facial Recognition Technology in Canada
Canada, like many jurisdictions in the United States, is grappling with the growing usage of facial recognition technology in the private and public sectors. This technology is being deployed at a rapid pace in airports, retail stores, and social media platforms, and by law enforcement – with little oversight from the government. To help address this…
-
Calling for Investing in Equitable AI Research in Nation’s Strategic Plan
By Solon Barocas, Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan In response to the Request for Information on the Update of the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”), we submitted comments suggesting how the Strategic Plan’s government funding priorities should focus resources to address societal issues such as…
-
Bridging Tech-Military AI Divides in an Era of Tech Ethics: Sharif Calfee at CITP
In a time when U.S. tech employees are organizing against corporate-military collaborations on AI, how can the ethics and incentives of military, corporate, and academic research be more closely aligned on AI and lethal autonomous weapons? Speaking today at CITP was Captain Sharif Calfee, a U.S. Naval Officer who serves as a surface warfare officer.…
-
Princeton Dialogues on AI and Ethics: Launching case studies
Summary: We are releasing four case studies on AI and ethics, as part of the Princeton Dialogues on AI and Ethics. The impacts of rapid developments in artificial intelligence (“AI”) on society—both real and not yet realized—raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide…
-
The Rise of Artificial Intelligence: Brad Smith at Princeton University
What will artificial intelligence mean for society, jobs, and the economy? Speaking today at Princeton University is Brad Smith, President and Chief Legal Officer of Microsoft. I was in the audience and live-blogged Brad’s talk. CITP director Ed Felten introduces Brad’s lecture by saying that the tech industry is at a crossroads. With the rise…
-
AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton
How does AI apply to mental health, and why should we care? Today the Princeton Center for IT Policy hosted a talk by Adam Miner, an AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and…
-
Getting serious about research ethics: AI and machine learning
[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence, and specifically machine learning algorithms, raises concerns about fundamental societal values such as employment, privacy, and non-discrimination. While these algorithms promise to optimize social and economic processes, research in this area has…
-
Language necessarily contains human biases, and so will machines trained on language corpora
I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and that the paradigm of training machine learning models on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at…