Ulf Hashagen / Helmuth Trischler (Deutsches Museum)
Florian Müller, Dinah Pfau, Helen Piel, Rudolf Seising, Jakob Tschandl (IGGI Project, Deutsches Museum)
A Network for Learning Machines – Karl Steinbuch’s “Kybernetik” and the Modelling of Human and Mechanic Intelligence in Early German AI
Christian Vater (Karlsruhe Institute of Technology)
Before ‘Artificial Intelligence’, there was ‘Cybernetics’. This was the approach that stuck most in post-war Germany, West and East, and it was picked up by the communications engineer Karl Steinbuch at the Technical University of Karlsruhe (today KIT). He understood it as a universal approach for modelling information-processing systems – human, animal and machine alike – and called his concept “Informatik”. This article highlights three aspects of his work: (a) his definition and use of models in theory; (b) his practical development of models in two and three dimensions, drawn and built; (c) his research network, both in print and in person, reflecting a transnational and trans-disciplinary research style. He connected the Western and Eastern hemispheres of the Cold War world, crossing ideological borders and the academic boundaries of the “Two Cultures”. Later in life, he became a ‘public scientist’, advocating a ‘reasonable’ control of public affairs based on a scientific model of society. In this he failed.
Sensa(c)tion: Modelling Intelligence in Sensor-Actor-Systems
Christiane Heibach (University of Regensburg)
In his canonical 1948 study Cybernetics, Norbert Wiener states the equivalence between automata and biological systems, both of which dispose of “sense organs, effectors and the equivalent of a nervous system” (2nd ed. 1961, p. 43). Following this analogy, British cyberneticists started several experiments during the 1960s that aimed to simulate the human nervous system – partly relating it to artificial senses, partly skipping the ‘detour’ via sensory perception. All of these systems perform a very basic form of intelligence, applying the rather simple stimulus-response model, while current (much more complex) sensor-actor systems develop towards specifically non-human technological epistemologies and thus seem to veer away from cybernetics’ isomorphism between technical and biological systems.
The proposed contribution starts from the assumption that sensor-actor systems refer to different notions of intelligence, depending on their complexity. In discussing this issue from the 1960s to the present, it will be of particular interest which lines of tradition can be drawn between European cyberneticists and the developers of successive perceiving systems.
From Theoretical Physics to Cybernetics, AI (and beyond). The strange case of the Italian path to Information Sciences and Technology
Settimo Termini (DMI Università degli Studi di Palermo. Accademia Nazionale di Scienze, Lettere e Arti di Palermo)
The paths AI has followed to establish itself as a crucial driving force at the frontier of innovation have varied across places and periods, as the “dynamical” name of the conference strongly suggests. It should also be observed that, in order to fully grasp its impact, it is important to take into account all the general conditions under which new scientific activities began developing.
The case of Italy adds something very specific and, in a sense, unusual: the predominant (and, perhaps, overwhelming) presence, in the first years, of physicists. Specifically, of theoretical physicists.
This fact can be looked at from a historical point of view, trying to answer such questions as why this happened in only one country, and whether this specific circumstance had a visible impact on the type of research done.
There is, however, also another aspect which goes beyond the historical interest: can the conclusions drawn from this historical case be useful for studying and forecasting the development and role of AI in present-day society?
Focusing on these points, the talk will briefly present some comments on these questions.
Blurred vision. Computer Vision between Computer and Vision
Birgit Schneider (University of Potsdam)
How much human vision is in computer vision, and how much of it is analogy? What kind of concept of human vision does it take to think computer vision? What does computer vision "see"? These questions are the focus of this paper, which tries to approach the 'seeing' of computer vision with the heuristic method of visual disorder by looking at European approaches in the field. After the perceptron model had been used at the end of the 1950s to introduce the functioning of an artificial neural network as a seeing machine, including the inductive idea of a learning rule, this branch of research came to a standstill. The book that sparked renewed interest in neural networks for the emerging field of computer vision was a 1982 cognitive science book on human vision. It was entitled “Vision – A Computational Investigation into the Human Representation and Processing of Visual Information” and was written by the British neuroscientist and psychologist David Marr. The chapter will contextualize this work and its impact and analyze the analogies of seeing in humans and machines in the early times of computer vision.
Next Frontiers of Machine Vision & Learning and the Digital Humanities
Björn Ommer (Heidelberg University)
Recently, deep learning research has seen enormous progress that has tremendously propelled artificial intelligence and its applications. However, in light of the grand challenges of this field, the technology still shows significant limitations. The ultimate goal of artificial intelligence, and of computer vision in particular, is models that help to understand our (visual) world. Explainable AI extends this further, seeking models that are also interpretable by a human user. The talk will discuss some of the recent breakthroughs in machine vision and learning, highlight future challenges, and propose ways to improve the accessibility of content. Which methodological advances are needed to fully leverage the potential of intelligent data analysis, and what is the next frontier? The talk will then showcase novel applications in the digital humanities and, time permitting, in the life sciences.
Literature and artificial intelligence
Hans-Christian von Herrmann (Technical University Berlin)
Based on a story by E. A. Poe, Claude E. Shannon constructed his mind-reading (?) machine in 1953, on which the Parisian psychoanalyst Jacques Lacan was to focus the following year in his seminar on language and cybernetics. As early as 1949, Alan Turing had told the London Times in a telephone interview that computers would soon be able to prove themselves in all areas of human activity, even in writing sonnets. In 2019, the English author Ian McEwan, in his novel Machines Like Me, which contains a history of artificial intelligence and also leads the reader back to the beginning of the 1980s, once again made a virtuosically clear statement linking literary fiction and AI. The presentation turns to these very different intersections of literary history and artificial intelligence in order to explore how they reveal a profound change in the modern relationship between culture and technology.
13:30-15:30 Lunch & screen break
Late socialist AI? Transformations of state and computer research in the GDR
Martin Schmitt (Technical University of Darmstadt)
Analyzing the history of AI in Europe, some researchers tend to forget that the continent was divided until 1990. But computer technology played an important role in socialist states, and so did AI. As new literature has shown the potential of comparative approaches, this article investigates the development of early AI research in the GDR. Did GDR AI research differ from Western approaches? How was it integrated into the broader computer-development plans of a state reaching its financial limits? Answers to these questions might contribute to an analysis of a possible co-construction between AI and societal, cultural and political developments. Through an analysis of the main projects and of transnational cooperation among Eastern European states, this article contributes to the overall theme of the special issue on transformations of AI research. Based on interviews with main protagonists, media sources such as newspapers and a re-evaluation of the literature, the article will provide new insights into the formation and transformation of AI research in the GDR in the 1980s and 1990s.
Between System Theories and AI: The European Origin of the New Three Theories in the People's Republic of China (the 1980s-)
Bo An (Yale University/Max Planck Institute for the History of Science)
The paper examines the impact European systems theory had on cybernetics and, by consequence, on AI research in the People's Republic of China since the 1980s. As one of the core disciplines that formed the Chinese tradition of AI research, cybernetics as developed by Qian Xuesen, a prominent Chinese scientist and technocrat, witnessed a revival in the 1980s during the reform era, with the introduction of three theories by three European scientists: Belgian chemist Ilya Prigogine's theory of dissipative structures, French mathematician René Thom's catastrophe theory, and German physicist Hermann Haken's synergetics. Following the old triad of systems theory, information theory, and control theory, they became known and popularized in China as the New Three Theories, deemed crucial to developing alternative Chinese AI systems and visions. The paper explains this unlikely rendezvous by contextualizing it in the exchange between Europe and China in a recent past that has become barely recognizable in standard AI histories.
The Mind in a Technological System: Artificial Intelligence in Late Socialism
Ekaterina Babintseva (Harvey Mudd College)
In the 1960s, Soviet scientists and the government contended that the country’s prosperity depended on the computerization of production and its efficient management. Responding to this verdict, Soviet psychologists proposed that to advance computerization, the Soviets needed to master logic-based methods of problem-solving. This paper examines psychologist Lev Landa’s Algo-Heuristic Theory (AHT), which described human problem-solving with a set of logical prescriptions. At first, the AHT assisted Soviet teachers in training students to solve technoscientific tasks. In the US, where Landa emigrated in 1974, the AHT found its application in management training and the development of expert systems, a dominant approach to AI in the 1970s. I argue that Soviet and American visions of the role of computers and rule-based methods of thinking in their economies converged. While the mid-century American public associated pattern-based thinking with totalitarianism, American managers and computer scientists praised the AHT for its ability to optimize human, and later, computational thinking. Additionally, the AHT’s applications across pedagogical, managerial, and AI contexts are emblematic of the parallel developments in 20th-century computerization and the standardization of human thinking. While the AHT did not lead to thinking machines, its logic-based methods of problem-solving succeeded in making some humans think like machines.
Online Performance: Sophie Schmidt (http://sophieschmidt.info/)
18:00-19:00 Keynote: Title TBC
Thomas Sturm (Universitat Autònoma de Barcelona)