Digital cultures of technology and knowledge
IGGI - The Engineering Spirit and Engineers of Mind
Exaggerated expectations and fears dominate the debates on ‘artificial intelligence’ (AI), both internationally and here in Germany. But what were the original goals and intended uses of AI research in West Germany? Where, scientifically speaking, does it come from?
IGGI - The Engineering Spirit and Engineers of Mind: A History of AI in the Federal Republic of Germany
The project (funding code 01IS19029) is funded by the BMBF, the German Federal Ministry of Education and Research.
- Head, BMBF research project "A History of AI in the Federal Republic of Germany", subproject "Automated Theorem Proving"
- BMBF research project "A History of AI in the Federal Republic of Germany", subproject "Artificial Intelligence and Cognitive Science"
- BMBF research project "A History of AI in the Federal Republic of Germany", subproject "Natural Language Processing"
- BMBF research project "A History of AI in the Federal Republic of Germany", subproject "Image Processing"
- BMBF research project "A History of AI in the Federal Republic of Germany", subproject "Expert Systems"
This project asks how AI developed in West Germany and from which specific academic and other research contexts it eventually emerged as an internationally successful part of computer science. Next to analysing archival and other source material, we are conducting oral history interviews with eyewitnesses in order to contextualise West German AI research within the recent history of science and technology.
“Artificial intelligence” (AI) currently dominates the debates in science and technology, politics, economics, the arts and the media. But AI is more than self-driving cars, chess-playing computers or talking robots: AI is also a scientific discipline that hopes to imitate human intelligence with computers and even to construct machines with their own “intelligence”. The buzzword AI covers research on the design, processing and communication of information in and through machines. All of these skills are typically ascribed to humans.
IGGI: Ingenieur-Geist und Geistes-Ingenieure (the engineering spirit and engineers of mind)
The IGGI project researches the history of AI in the Federal Republic of Germany in order to further our understanding of the technologies known as “AI”. The name IGGI (the engineering spirit and engineers of mind) refers to a view held by early computer scientists, for whom programming did not result in a material product but rather in an abstract one aimed at problem solving. After establishing themselves as a scientific community in the mid-1970s, West-German AI researchers began to wonder if computers might be able to think. If so, then there would be no difference between the problem solving done by computer programmes and that done by the human mind. Such a functional equivalence between computer and brain is at the core of the computer metaphor, a central tenet of cognitive science.
In the 1980s, several research strands began to differentiate themselves within AI. Among these were automated theorem proving, the processing and understanding of natural languages and images, and expert systems. Together with cognitive science, these strands present the five lenses through which we examine AI in our project.
A central methodological element is securing and analysing material in archives as well as papers held by researchers. We are also conducting oral history interviews with pioneers of AI to preserve their memories, archiving them as audio and video files.
International Conference "AI in Flux"
- Recordings of the roundtable discussion, the performance, and several of the talks are available here: “AI in Flux”
Artificial intelligence (AI) has been with us for over half a century. From the Dartmouth College summer school that coined its name in 1956 it has moved across disciplinary and geographical borders, and today AI researchers are based in universities and institutes all over the world. AI is attracting attention – and increasingly, that attention is historical.
During our online conference “AI in Flux,” we discuss the transformations and circulations of the idea, science, and technology of artificial intelligence since it left its original US-American context. Hosted by the Deutsches Museum in Munich, Germany, the conference consists of two parts. The first day (29 Nov 2021) will be held in German and introduces perspectives on AI and related topics from the humanities. On the second and third days (30 Nov-1 Dec 2021), an English-language programme discusses the history of AI and cybernetics beyond the American context. The speakers will shed light on the transformations these ideas underwent once they began circulating globally. Talks on contemporary approaches to AI round off the conference.
Automated theorem proving
One of the oldest applications of AI was the automatic proving of mathematical theorems by a computer programme. This work resulted in logic programming, term rewriting systems and unification algorithms, but also in non-monotonic logics. In addition to Wolfgang Bibel’s group in Munich, this area was pursued in Kaiserslautern and Karlsruhe as well as in Hamburg, Kiel and Stuttgart. Representatives of this AI subfield in particular initially had to assert themselves against the “standard computer scientists”. Nevertheless, even critics of AI soon acknowledged the legitimacy of automated reasoning as a field of scientific activity.
To be sure that a mathematical theorem is correct, it must be proven. To do this, it is traced back, by means of logical deductions, to propositions already considered correct. Mathematicians are often helped here by experience and intuition, but they will often also have to try out their proof approaches and correct them successively before they succeed. Can computers perform mathematical proofs? An early AI programme for this purpose was the “Logic Theorist”.
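The basic idea behind such programmes can be illustrated in a few lines of code. The following sketch is a minimal resolution prover for propositional logic; it is an illustrative reconstruction of the refutation principle later provers used, not the Logic Theorist’s actual algorithm. To prove a goal, the programme adds the goal’s negation to the axioms and searches for a contradiction (the empty clause).

```python
# Minimal propositional resolution prover (illustrative sketch).
# Clauses are frozensets of literals; "~p" denotes the negation of "p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Return all resolvents of two clauses."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def proves(axioms, goal):
    """Refutation proof: add the negated goal, search for the empty clause."""
    clauses = set(axioms) | {frozenset({negate(goal)})}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:              # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:                 # no new clauses: goal not derivable
            return False
        clauses |= new

# Modus ponens encoded as clauses: the implication (p -> q) is {~p, q}.
axioms = {frozenset({"~p", "q"}), frozenset({"p"})}
```

With these axioms, `proves(axioms, "q")` succeeds, while a goal unrelated to the axioms, such as `"r"`, is reported as not derivable.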
This project will investigate the background of the institutional and scientific developments of automated theorem proving in the Federal Republic of Germany.
Natural language processing
Language is closely linked to thinking. So it’s not surprising that early AI researchers focused on how computers could recognise and output natural language. In the US, research into this question began in the 1950s. Among the most important applications were and still are machine translations, systems to interpret and summarise texts, and dialogue systems that additionally aimed at improving human-machine-communication. Early on, this research resulted in close collaboration between computer experts and linguists.
These trends continued in West Germany, though with a different starting point: in the US, AI researchers introduced language processing and were later joined by linguists. In West Germany, however, research in automated language processing began in computational linguistics, which was just developing in the 1960s. Only in the mid- to late 1970s, after the emergence of a West German AI community, did AI researchers start to turn towards the processing of natural language.
This project analyses how West German researchers, coming from different disciplines, collaborated, how they influenced each other, and which role AI played in this context.
Image processing
New media technologies have changed and are continuing to change our understanding of visual perception. Photography and cinematography, for instance, allowed new experiments and questions: how can humans perceive a sequence of static images as moving? How do we see, and which rules govern visual perception?
Information technology, and the digital technologies of the 20th century in particular, have moved the technological imitation of vision beyond theoretical formalisation and into the realm of the possible. The processing of visual information thus became interesting not only for disciplines like physics, physiology and psychology, but also for (bio)cybernetics (especially in West Germany) and artificial intelligence. With increased funding and the first chairs for computer science in the 1970s, research into computer vision gained additional momentum. New problems in and approaches to “image understanding” resulted in the discovery of common interests with AI. AI’s methods of knowledge processing in particular helped understand how complex information about a 3D “environment” with temporal sequences – movements – could be analysed and interpreted.
Yet image processing and interpretation aren’t limited to computer science. Both as research subjects and methods they can be found in other areas of science and application, such as medicine, robotics and self-driving cars. The development of such “image-understanding” systems rests on several different approaches and technologies. The success (or failure) of these systems wasn’t necessarily the result of their functioning but often the result of cultural, economic and political factors.
Expert systems
Expert systems (XPS) were at the centre of a research strand that aimed to create programmes performing tasks previously done only by human experts.
At the core of an XPS is the separation of the knowledge base from its processing. Experts like medical doctors or economists were asked to formulate their knowledge in the form of “if-then” rules in order to make it processable by machine. Then, software specialists wrote a “problem-solving programme” which used the facts and rules of the knowledge base to, among other things, answer questions or supervise processes. The programme not only output this knowledge, it was also supposed to create new knowledge. This separation of knowledge from its processing allowed applications beyond classical disciplinary boundaries. The problem-solving programme of MYCIN, originally developed for medical applications, could for instance be used to diagnose technical systems by utilising a different knowledge base.
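The architecture described above can be sketched in a few lines of code: a generic problem-solving programme that forward-chains over an exchangeable set of if-then rules. The rules and fact names below are hypothetical illustrations, not actual MYCIN content.

```python
# Sketch of the XPS architecture: the knowledge base (facts and
# if-then rules) is kept separate from the generic problem-solving
# programme that processes it.

def forward_chain(facts, rules):
    """Apply if-then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # A rule "fires" when all its conditions are known facts
            # and its conclusion is new knowledge.
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# A toy medical knowledge base (hypothetical rules):
rules = [
    (["fever", "cough"], "suspect_infection"),
    (["suspect_infection", "bacterial_culture"], "recommend_antibiotic"),
]
derived = forward_chain(["fever", "cough", "bacterial_culture"], rules)
```

Swapping in a different rule set, say for diagnosing technical systems, reuses the same problem-solving programme unchanged; that reuse is exactly the separation of knowledge from processing described above.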
The branch of AI called expert systems can be traced back to the US-American DENDRAL system, developed at Stanford from 1965 onwards. After roughly two decades of relative success, interest in expert systems rose rapidly when the Japanese Ministry of International Trade and Industry launched its “Fifth Generation Computer” project in 1982. Within a short time, publications on the topic multiplied, and in the late 1980s and early 1990s expert systems became synonymous with artificial intelligence. After peaking in the US in 1988, the hype around XPS lost momentum. In 1992, the “Fifth Generation Computer” project ended without having reached most of its goals. In today’s public debates, this 1980s “AI hype” is largely forgotten.
This project examines what influences expert systems had on the development of AI in West Germany.
Artificial intelligence and cognitive science
The term “cognition” covers the abilities that enable (human) intelligence: problem solving, learning, memory and so on. The question of what exactly intelligence is isn’t new. Neither is the attempt to answer it with mechanical analogies. In recent history, the computer has proven a particularly powerful analogy: according to this hypothesis, cognition (and thus also intelligence) is information processing, and this information processing resembles what happens in computers.
Based on this computer analogy, cognitive scientists try to understand intelligent systems and the nature of cognition. They do so by combining methods and results from a number of different disciplines, such as psychology, linguistics and artificial intelligence.
Cognitive science and AI research thus have a common subject: researching intelligence in humans and machines. They also share their interdisciplinary outlook. In contrast to the US, however, it took some time in West Germany before cognitive science – and with it, philosophical questions – was considered part of AI. How exactly the relationship between these two research areas developed in the West German context is what this subproject examines.