Category: Artificial Intelligence, Data Science & Society
-
Why the Singularity is Not a Singularity
This is the first in a series of posts about the Singularity, that notional future time when machine intelligence explodes in capability, changing human life forever. Like many computer scientists, I’m a Singularity skeptic. In this series I’ll be trying to express the reasons for my skepticism–and workshopping ideas for an essay on the topic…
-
AI and Policy Event in DC, December 8
Princeton’s Center for Information Technology Policy (CITP) recently launched an initiative on Artificial Intelligence, Machine Learning, and Public Policy. On Friday, December 8, 2017, we’ll be in Washington, DC talking about AI and policy. The event is at the National Press Club, from 12:15 to 2:15pm. Lunch will be provided for those who…
-
AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton
How does AI apply to mental health, and why should we care? Today the Princeton Center for IT Policy hosted a talk by Adam Miner, an AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and…
-
Getting serious about research ethics: AI and machine learning
[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence, and specifically of machine learning algorithms, raises concerns about fundamental societal values and issues such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has…
-
Multiple Intelligences, and Superintelligence
Superintelligent machines have long been a trope in science fiction. Recent advances in AI have made them a topic for nonfiction debate, and even planning. And that makes sense. Although the Singularity is not imminent–you can go ahead and buy that economy-size container of yogurt–it seems to me almost certain that machine intelligence will surpass ours eventually, and quite…
-
Language necessarily contains human biases, and so will machines trained on language corpora
I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and that the paradigm of training machine learning models on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at…
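To give a flavor of the idea, here is a minimal sketch (not the paper’s code or its exact test) of how associations can be read off word vectors learned from text: with toy, made-up 4-dimensional embeddings standing in for vectors trained on a large corpus, cosine similarity shows which attribute a word sits closer to.

# Toy illustration: do word vectors associate a word more with
# "pleasant" or "unpleasant"? The vectors below are invented for
# the example; real studies use embeddings trained on large corpora.
import numpy as np

vectors = {
    "flower":     np.array([0.9, 0.1, 0.0, 0.2]),
    "insect":     np.array([0.1, 0.9, 0.1, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.0, 0.1]),
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    # Positive: closer to "pleasant"; negative: closer to "unpleasant".
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

print("flower:", round(association("flower"), 3))   # positive: leans pleasant
print("insect:", round(association("insect"), 3))   # negative: leans unpleasant

Because the embeddings are learned from human-written text, whatever associations the text encodes (pleasant/unpleasant, gendered, racialized) show up in these similarity scores, which is the mechanism the paper examines.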
-
Robots don’t threaten, but may be useful threats
Hi, I’m Joanna Bryson, and I’m just starting as a fellow at CITP, on sabbatical from the University of Bath. I’ve been blogging about natural and artificial intelligence since 2007, increasingly with attention to public policy. I’ve been writing about AI ethics since 1998. This is my first blog post for Freedom to Tinker. Will…
-
If Robots Replace Lawyers, Will Politics Calm Down?
[TL;DR: Probably not.] A recent essay from law professor John McGinnis, titled “Machines v. Lawyers,” explores how machine learning and other digital technologies may soon reshape the legal profession, and by extension, how they may change the broader national policy debate in which lawyers play such key roles. His topic and my life seem closely related: After law…
-
Robots and the Law
Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town so I dropped by and heard an interesting discussion. Here’s the official announcement: Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and…