Online participation:
https://zoom.us/j/95603412698?pwd=lWi9j5gEC1WV5VKIx8RbQOYKsaCJIQ.1
Meeting ID: 956 0341 2698
Passcode: 135860
Prof. Dr. Aaron Mendon-Plasek, Purdue University
Subjectivity as a solution to human fallibility: political knowledge, machine learning efficacy, and the reorientation of 1980s artificial intelligence around extra-evidential judging
How should we narrate the development of artificial intelligence? And how do these stories constrain and motivate which descriptions of society, science, and self are seen as credible? By the mid-1980s the term “machine learning” was commonplace but connoted a startling hodgepodge of contradictory methods, modes of valuing, and criteria for evaluation. Efforts to understand the reconfiguration of artificial intelligence around the use of neural networks have downplayed this heterogeneity in favor of stories in which a few researchers, labs, or subcommunities compel consensus regarding the superiority of some problem-solving strategies over others. This talk, in contrast, seeks to explain the history of machine learning by centering disagreement, in which simultaneous yet disparate conceptions of “machine learning” within and across computing communities of practice were essential to the reorganization of AI around the study of ill-defined problems. To do this, I wade into a thicket of different pattern recognition problems that linked artificial intelligence researchers to other workers engaged in a dizzying array of inquiries. Taking the arguments, research, and career of Michael Satosi Watanabe as an illustrative example, I demonstrate that disputes regarding the efficacy of pattern recognition research, including the proper interpretation and application of such techniques to particular problems, leveraged nominalist-inflected conceptions of knowledge as a strategy for resolving political and social questions given imperfect or incorrect information. Such strategies, and the specific learning programs that sought to implement them, were seen to be more effective precisely because they were seen as subjective.
In asserting that any classification required “extra-experimental” and “extra-logical” judgment that could never be justified by logic alone, these researchers sought to apply such strategies to explicit questions about democratic governance, scientific development, and individual autonomy, even as their efforts reorganized the field of artificial intelligence.
Bio:
Aaron Mendon-Plasek is an Assistant Professor in the Department of History at Purdue University. His first book project, tentatively titled The Ill-Defined World: A History of Machine Learning and Novel Political Knowledge, examines how little-known transnational communities of researchers sought to build learning machines that linked “efficacy” to visions of subjectivity, and how these efforts remade contemporary AI, scientific inquiry, and political representation. His work has been supported by various organizations, including the National Science Foundation, Columbia University, Purdue University, and the Charles Babbage Institute. Prior to his appointment at Purdue, he held a postdoctoral fellowship with the Information Society Project at Yale University, where he remains an Affiliate Fellow. He holds seven degrees, including a PhD in history from Columbia University, an MA in humanities and social thought from New York University, and an MFA in writing from the School of the Art Institute of Chicago.
Image: Deutsches Museum