Category: Artificial Intelligence, Data Science & Society
-
Singularity Skepticism 4: The Value of Avoiding Errors
[This is the fourth in a series of posts. The other posts in the series are here: 1 2 3.] In the previous post, we did a deep dive into chess ratings, as an example of a system to measure a certain type of intelligence. One of the takeaways was that the process of numerically…
-
Singularity Skepticism 3: How to Measure AI Performance
[This is the third post in a series. The other posts are here: 1 2 4] On Thursday I wrote about progress in computer chess, and how a graph of Elo rating (which I called the natural measure of playing skill) versus time showed remarkably consistent linear improvement over several decades. I used this to argue…
-
Singularity Skepticism 2: Why Self-Improvement Isn’t Enough
[This is the second post in a series. The other posts are here: 1 3 4] Yesterday, I wrote about the AI Singularity, and why it won’t be a literal singularity, that is, why the growth rate won’t literally become infinite. So if the Singularity won’t be a literal singularity, what will it be? Recall that…
-
Why the Singularity is Not a Singularity
This is the first in a series of posts about the Singularity, that notional future time when machine intelligence explodes in capability, changing human life forever. Like many computer scientists, I’m a Singularity skeptic. In this series I’ll be trying to express the reasons for my skepticism–and workshopping ideas for an essay on the topic…
-
AI and Policy Event in DC, December 8
Princeton’s Center for Information Technology Policy (CITP) recently launched an initiative on Artificial Intelligence, Machine Learning, and Public Policy. On Friday, December 8, 2017, we’ll be in Washington DC talking about AI and policy. The event is at the National Press Club, at 12:15-2:15pm on Friday, December 8. Lunch will be provided for those who…
-
AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton
How does AI apply to mental health, and why should we care? Today the Princeton Center for IT Policy hosted a talk by Adam Miner, an AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and…
-
Getting serious about research ethics: AI and machine learning
[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence and specifically machine learning algorithms causes concern for some fundamental values in society, such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has…
-
Multiple Intelligences, and Superintelligence
Superintelligent machines have long been a trope in science fiction. Recent advances in AI have made them a topic for nonfiction debate, and even planning. And that makes sense. Although the Singularity is not imminent–you can go ahead and buy that economy-size container of yogurt–it seems to me almost certain that machine intelligence will surpass ours eventually, and quite…
-
Language necessarily contains human biases, and so will machines trained on language corpora
I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at…
-
Robots don’t threaten, but may be useful threats
Hi, I’m Joanna Bryson, and I’m just starting as a fellow at CITP, on sabbatical from the University of Bath. I’ve been blogging about natural and artificial intelligence since 2007, increasingly with attention to public policy. I’ve been writing about AI ethics since 1998. This is my first blog post for Freedom to Tinker. Will…