Tag: machine learning
-
Toward Trustworthy Machine Learning: An Example in Defending against Adversarial Patch Attacks (2)
By Chong Xiang and Prateek Mittal. In our previous post, we discussed adversarial patch attacks and presented our first defense algorithm, PatchGuard. The PatchGuard framework (small receptive field + secure aggregation) has become the most popular defense strategy over the past year, subsuming a long list of defense instances (Clipped BagNet, De-randomized Smoothing, BagCert, Randomized…
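To make the "small receptive field + secure aggregation" recipe concrete, here is a minimal illustrative sketch, not the papers' actual implementation: a backbone with small receptive fields (e.g., BagNet) emits class evidence at every spatial location, and a clipping-style aggregation in the spirit of Clipped BagNet bounds how much the few locations an adversarial patch can touch are able to shift the final prediction. The function name, shapes, and clipping threshold below are assumptions for illustration only.

```python
import numpy as np

def clipped_aggregation(local_logits, clip_max=1.0):
    """Illustrative secure aggregation over per-location class scores.

    local_logits: array of shape (H, W, num_classes) holding the class
    evidence produced at each spatial location by a small-receptive-field
    model (e.g., BagNet). Clipping each local score bounds the influence
    that an adversarial patch, which overlaps only a few locations, can
    exert on the aggregated prediction.
    """
    clipped = np.clip(local_logits, 0.0, clip_max)   # bound per-location evidence
    return clipped.sum(axis=(0, 1))                  # aggregate across all locations

# Hypothetical usage: a 24x24 grid of local logits over 10 classes.
rng = np.random.default_rng(0)
local_logits = rng.normal(size=(24, 24, 10))
prediction = int(np.argmax(clipped_aggregation(local_logits)))
```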
-
Toward Trustworthy Machine Learning: An Example in Defending against Adversarial Patch Attacks
By Chong Xiang and Prateek Mittal. Thanks to the stunning advancement of Machine Learning (ML) technologies, ML models are increasingly being used in critical societal contexts, such as in the courtroom, where judges look to ML models to determine whether a defendant is a flight risk, and in autonomous driving, where driverless vehicles are…
-
How the National AI Research Resource can steward the datasets it hosts
Last week I participated in a panel about the National AI Research Resource (NAIRR), a proposed computing and data resource for academic AI researchers. The NAIRR’s goal is to subsidize the spiraling costs of many types of AI research, which have put such research out of reach of most academic groups. My comments on the panel…
-
Calling for Investing in Equitable AI Research in Nation’s Strategic Plan
By Solon Barocas, Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan. In response to the Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”), we submitted comments providing suggestions for how the Strategic Plan’s government funding priorities should focus resources to address societal issues such as…
-
What Are Machine Learning Models Hiding?
Machine learning is eating the world. The abundance of training data has helped ML achieve amazing results for object recognition, natural language processing, predictive analytics, and all manner of other tasks. Much of this training data is very sensitive, including personal photos, search queries, location traces, and health-care records. In a recent series of papers,…
-
Getting serious about research ethics: AI and machine learning
[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence, and specifically of machine learning algorithms, raises concerns about fundamental societal issues such as employment, privacy, and discrimination. While these algorithms promise to optimize social and economic processes, research in this area has…
-
Language necessarily contains human biases, and so will machines trained on language corpora
I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and the paradigm of training machine learning models on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at…
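As a rough illustration of how such biases can be quantified, here is a minimal sketch (the helper names and toy vectors are assumptions, not the paper's code): word embeddings trained on large corpora let us measure how much more strongly a target word associates with one set of attribute words than with another via cosine similarity, the basic building block of word-embedding association tests.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, attrs_a, attrs_b, vec):
    """Differential association of `word` with attribute sets A and B:
    mean cosine similarity to A minus mean cosine similarity to B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in attrs_a]) -
            np.mean([cosine(vec[word], vec[b]) for b in attrs_b]))

# Toy random embeddings stand in for pretrained vectors (e.g., GloVe).
rng = np.random.default_rng(0)
vec = {w: rng.normal(size=50) for w in ["flower", "insect", "pleasant", "unpleasant"]}

# A positive score means "flower" sits closer to "pleasant" than to "unpleasant".
score = association("flower", ["pleasant"], ["unpleasant"], vec)
```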
-
How do we decide how much to reveal? (Hint: Our privacy behavior might be socially constructed.)
[Let’s welcome Aylin Caliskan-Islam, a graduate student at Drexel. In this post she discusses new work that applies machine learning and natural-language processing to questions of privacy and social behavior. — Arvind Narayanan.] How do we decide how much to share online given that information can spread to millions in large social networks? Is it always our…