

March 19 @ 5:00 pm - 7:00 pm

Stephanie Dick, University of Pennsylvania


Stephanie Dick is an Assistant Professor of History and Sociology of Science at the University of Pennsylvania. Prior to joining the faculty, she was a Junior Fellow in the Harvard Society of Fellows. She holds a PhD in History of Science from Harvard University. She is a historian of mathematics, computing, and artificial intelligence. Her first book project, Making Up Minds: Proof and Computing in the Postwar United States, tracks early efforts to automate mathematical proof and the many controversies about minds and machines that surrounded them. Her second project explores the early introduction of computing to domestic policing in the United States, including databasing practices and automated identification tools.

ABSTRACT: In 1969, the New York State Police Department became the first in America to create a centralized and standardized computerized law enforcement database. NYSIIS – the New York State Identification and Intelligence System – was developed in the wake of the 1957 Apalachin Meeting of organized crime figures in New York State. Sergeant Edgar Croswell, whose infamous Apalachin raid resulted in over 60 arrests, described the numerous “information circulation” bottlenecks and failures that had slowed his investigation. He reported that some of the main suspects were the subjects of “as many as two hundred separate official police files in a surrounding area of several hundred miles,” and called for more efficient and centralized file-sharing. The resulting NYSIIS system was heralded as a “scientific breakthrough” in policing that would improve objectivity and accuracy, especially in identifying individuals who had multiple encounters with law enforcement. However, like many supposedly “innovative” technologies, NYSIIS was in fact deeply conservative, serving to reinforce a social order that subjected different parts of the population differentially to surveillance and policing. In this short presentation, I will describe the system and then turn to the 1974 Congressional hearings that followed. The hearings were meant to investigate whether people’s rights were being violated by law enforcement databasing practices. However, the inquiry operated entirely within the logic of databasing, surveillance, and automated identification at work in systems like NYSIIS, never questioning its underlying vision of policing or the policed. I use this case to demonstrate how technical choices can foreclose legal, ethical, and political ones, and how critique, when it happens on the terms of that which it critiques, can strengthen more than check technical power.

Paul Dourish, UC-Irvine


Paul Dourish is Chancellor’s Professor of Informatics in the Donald Bren School of Information and Computer Sciences at UC Irvine, with courtesy appointments in Computer Science and Anthropology. He is also an Honorary Professorial Fellow in Computing and Information Systems at the University of Melbourne. His research focuses primarily on understanding information technology as a site of social and cultural production; his work combines topics in human-computer interaction, social informatics, and science and technology studies. He is the author of several books, most recently “The Stuff of Bits: An Essay on the Materialities of Information” (MIT Press, 2017). He is a Fellow of the ACM, a Fellow of the BCS, a member of the SIGCHI Academy, and a recipient of the AMIA Diana Forsythe Award and the CSCW Lasting Impact Award.
ABSTRACT: The technical community’s response to the challenges of ethics in AI has been to turn towards fairness, accountability, and transparency as ways of opening up AI decision-making to human scrutiny. These properties share two characteristics: first, they look internally to the constitution of technical arrangements; second, they gesture towards quantitative assessments of impact. I will explore how we might instead ground a notion of ethics and AI in a collective and relational model rooted in feminist ethics of care.

Kate Klonick, St. John’s University


Kate Klonick is an Assistant Professor at St. John’s University Law School and an Affiliate Fellow at Yale Law School’s Information Society Project. Her research on networked technologies’ effect on social norm enforcement, freedom of expression, and private governance has appeared in the Harvard Law Review, New York Times, New Yorker, The Atlantic, The Guardian and numerous other publications.

Creating Global Governance for Online Speech: The Development of Facebook’s Oversight Board, 129 YALE L. J. (forthcoming 2020)

ABSTRACT: For a decade and a half, Facebook has dominated the landscape of digital social networks and has evolved to become one of the most powerful arbiters of online speech. Twenty-four hours a day, seven days a week, over 2.5 billion users leverage the platform to post, share, discuss, react to, and access content from all over the globe. Through a system of semi-public rules called “Community Standards,” Facebook has created a body of “laws” and a system of governance to administer those rules, which dictate what users may say on the platform. As its immense private power over the public right of speech has become more visible, Facebook has come under intense pressure to become more accountable and transparent—not only in how it creates its fundamental policies for speech, but in how it enforces them. In answer to years of entreaty from the press, advocacy groups, and users, CEO and Founder Mark Zuckerberg announced in November 2018 that Facebook would create an independent oversight body. The institution’s express purpose was to serve as an appellate review of user content and to make content moderation policy recommendations to Facebook. This Article empirically documents the creation of what is now called the Facebook Oversight Board. It is the first time a private transnational company has voluntarily ceded a portion of its core policy and product decisions to a self-regulating independent entity. The Article begins with a detailed history of content moderation and online speech at Facebook and then describes the 18-month process of creating the Board, a massive endeavor in terms of both philosophical aims and practical articulation.
Finally, this Article analyzes the Oversight Board creation process and the final decisions for Board formation to facilitate public understanding of the Board’s role in online governance, its chances for success, its potential impact on industry standards, and how it can be leveraged by users to create accountability around issues of private governance of global online speech.

D. Fox Harrell, MIT


D. Fox Harrell, Ph.D., is Professor of Digital Media & Artificial Intelligence in the Comparative Media Studies Program and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. He is the director of the MIT Center for Advanced Virtuality. His research explores the relationship between imagination and computation and involves inventing new forms of VR, computational narrative, videogaming for social impact, and related digital media forms. The National Science Foundation has recognized Harrell with an NSF CAREER Award for his project “Computing for Advanced Identity Representation.” Dr. Harrell holds a Ph.D. in Computer Science and Cognitive Science from the University of California, San Diego. His other degrees include a Master’s degree in Interactive Telecommunications from New York University’s Tisch School of the Arts, and a B.S. in Logic and Computation and a B.F.A. in Art (electronic and time-based media) from Carnegie Mellon University – each with highest honors. He has worked as an interactive television producer and as a game designer. His book Phantasmal Media: An Approach to Imagination, Computation, and Expression was published by the MIT Press (2013).


