10:00-10:30 The mechanisation of thought processes – the view from 1958
Matthew Cobb (University of Manchester)
In 1958, the UK National Physical Laboratory held a meeting with 200 delegates, including many leading thinkers from around the world in the embryonic fields of AI, machine learning, pattern recognition, mechanised translation and literature searching. The proceedings of the meeting – including transcripts of discussions – were published in a 1000-page two-volume collection. These documents provide a snapshot of the attitudes of both academia and industry regarding the future. In particular, there was palpable excitement over recently developed techniques for pattern recognition (Perceptron and Pandemonium). This article will explore the significance of this meeting in the development of AI and its application in science and industry, and the forgotten hopes and fears of researchers at the very beginning of this field.
10:30-11:00 Models, Mechanisms and Organisms in Turing and Ashby
Hajo Greif (Warsaw University of Technology)
This paper will outline the differences in approaches to and resources of “producing models of the action of the brain” (Turing 1946) in Alan M. Turing and W. Ross Ashby, who were in conversation on these topics as members of the “Ratio Club”. Ashby (1960) explicitly committed himself to building analogue machine models of the adaptive behaviours of brains and other systems, their functions and their relationships to their environments, all understood in explicitly Darwinian terms. However, he restricted his focus to the origins of adaptive behaviour by learning, leaving aside “genic” adaptation, and therefore the organic basis of that behaviour. Conversely, Turing developed a notion of idealised theoretical machines, known as “logical computing machines”, which originally served metamathematical purposes but informed the concrete design of the digital computer. He used his theoretical machines for inquiries into a varied set of phenomena, from proto-connectionist models of the brain via simulation of conversational behaviour to pattern development in organisms. Notably, in the latter (1952) he relied on the non-Darwinian account of morphogenesis in Sir D’Arcy Thompson’s On Growth and Form (1942). We will broadly outline the state of biological theorising on which Turing and Ashby relied at the time of their writing, and ask how their specific biological commitments may have influenced their choice of modelling approach.
Work on this paper is supported by National Science Centre (NCN) grant “Turing, Ashby, and ‘the Action of the Brain’”, no. 2020/37/B/HS1/01809, Hajo Greif (PI).
11:00-11:30 A handful of beginnings: AI in West Germany
Florian Müller, Dinah Pfau, Jakob Tschandl (IGGI Project Team, Deutsches Museum)
Research into Artificial Intelligence (AI) has been conducted for over 60 years, often accompanied by the difficulty of defining its contents and borders. AI was and is an umbrella term for an interdisciplinary field using methods and theories of both the sciences and the humanities. Looking at five AI research areas, we explore these issues for West Germany, where the internationally emerging fields of Automated Deduction, Natural Language Processing, and Image Processing took off in the 1950s. But it took the initiative of several young researchers active in these fields during the 1970s to establish an AI community. This community then experienced a shift in focus, partly due to international political and economic influences, towards research into Expert Systems, which dominated AI during the 1980s. While this field emphasised technological applicability, a parallel strand of research, Cognitive Science, focused on understanding natural intelligent systems. Though seemingly disparate, we will show how all of these beginnings and transformations characterise West German AI.
12:00-12:30 “Autonomous technical systems” – a new paradigm in technological science?
Benjamin Rathgeber (Karlsruhe Institute of Technology, Munich School of Philosophy)
Autonomous technical systems (ATS) are now present in all areas of our modern society. They already play a central role not only in the areas of mobility, the military and production, but also in the financial sector, care and research, and will become even more important in the coming years. However, many different technological developments can be related to ATS, and it is not always clear what exactly is meant by “autonomy”. If the claim is to develop technical systems that are supposed to evolve independently of the developer and to behave completely autonomously of the developer's purposes, then a new paradigm shift in technological development would have to occur. This means, however, that the specific purposes ATS serve are no longer clear and that a disparity exists between the understanding of technology and the autonomy of the objects it produces. The presentation will explore this inherent problem of ATS from a methodological point of view. By reconstructing recent technological developments, a solution will be proposed for how we can meaningfully talk about ATS and autonomy.
12:30-13:00 Algorithms, Knowledge, and Data: On the Evolution of AI Systems Design
Eyke Hüllermeier (Chair of Artificial Intelligence and Machine Learning Institute of Informatics, LMU Munich)
During the past decades, the design of intelligent systems and development of applications in artificial intelligence (AI) has been subject to a steady evolution. Most notably, there has been a significant shift from the classical knowledge-based paradigm to a strongly data-driven approach. This shift has been fostered by the recent emergence of data science as a scientific discipline and the success of machine learning (ML) as one of its core methodologies.
Elaborating on the evolution of algorithm and intelligent systems design in general, this talk will therefore focus specifically on recent developments in machine learning. Proceeding from the standard algorithmic approach as commonly adopted in computer science, three paradigms will be motivated and briefly explained.
13:00-13:30 Blurred vision. Computer Vision between Computer and Vision
Birgit Schneider (University of Potsdam)
How much human vision is in computer vision, and how much of it is analogy? What kind of concept of human vision does it take to think computer vision? What does computer vision "see"? These questions are the focus of this paper, which tries to approach the 'seeing' of computer vision with the heuristic method of visual disorder by looking at European approaches in the field. After the perceptron model had been introduced at the end of the 1950s as a seeing machine demonstrating the functioning of an artificial neural network, including the inductive idea of a learning rule, this branch of research came to a standstill. The book that sparked renewed interest in neural networks for the emerging field of computer vision was a 1982 cognitive science book on human vision. It was entitled “Vision – A Computational Investigation into the Human Representation and Processing of Visual Information” and was written by British neuroscientist and psychologist David Marr. The chapter will contextualize this work and its impact and analyze the analogies of seeing in humans and machines in the early times of computer vision.