Article

AlphaGo, Locked Strategies, and Eco-Cognitive Openness

Department of Humanities, Philosophy Section, and Computational Philosophy Laboratory, University of Pavia, 27100 Pavia, Italy
Philosophies 2019, 4(1), 8; https://doi.org/10.3390/philosophies4010008
Submission received: 11 January 2019 / Revised: 3 February 2019 / Accepted: 11 February 2019 / Published: 16 February 2019
(This article belongs to the Special Issue Philosophy and Epistemology of Deep Learning)

Abstract

Locked and unlocked strategies are at the center of this article, as ways of shedding new light on the cognitive aspects of deep learning machines. The character and the role of these cognitive strategies, which occur both in humans and in computational machines, are strictly related to the generation of cognitive outputs, which range from weak to strong levels of knowledge creativity. I maintain that these differences lead to important consequences when we analyze computational AI programs, such as AlphaGo, which aim at performing various kinds of abductive hypothetical reasoning. In these cases, the programs are characterized by locked abductive strategies: they deal with weak (even if sometimes amazing) kinds of hypothetical creative reasoning, because they are limited in what I call eco-cognitive openness, which instead qualifies human cognizers performing higher kinds of abductive creative reasoning, in which cognitive strategies are unlocked.

1. Are AlphaGo Cognitive Strategies Locked? An Abductive Framework

In 2015, Google DeepMind’s program AlphaGo (designed to play the famous game of Go) beat Fan Hui, the European Go champion and a 2 dan (out of 9 dan) professional, five times out of five with no handicap on a full-size 19 × 19 board. Later, in March 2016, Google also challenged Lee Sedol, a 9 dan player considered among the strongest in the world, to a five-game match. The DeepMind program defeated Lee in four of the five games. It was said that the program “invented” a new and unconventional move—never adopted by human beings—which was able to originate a new strategic framework, phenomenologically seen as simulating a properly “human” skillful ability, better than that of the most experienced humans. It is well known that AlphaGo acquired the capacity to play the game by “seeing” data from thousands of games, perhaps also including those played by Lee Sedol, and by exploiting so-called “reinforcement learning”: the program repeatedly plays against itself to improve, enrich, and further adjust its own deep neural networks, grounded in trial-and-error procedures. I have to add that the machine also implicitly actualizes what I call “cognitive strategies” to simplify the search space for the next best move: it is in this way that an almost infinite range of chances can be reduced to a more manageable quantity.
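To make the self-play idea concrete, here is a minimal, self-contained sketch of "reinforcement learning through self-play" on a deliberately tiny Nim-like game (Go itself is far too large for a tabular example). Everything here (the toy game, the value table, the update rule) is an illustrative assumption of mine, not DeepMind's architecture:

```python
import random

def legal_moves(pile):
    """Toy Nim-like game: take one or two stones; the player taking the last stone wins."""
    return [m for m in (1, 2) if m <= pile]

def self_play_episode(values, pile=12, epsilon=0.1):
    """One game of the current policy against itself; returns the move history and the winner."""
    history, player = [], +1
    while pile > 0:
        moves = legal_moves(pile)
        if random.random() < epsilon:                       # occasional exploration
            move = random.choice(moves)
        else:                                               # greedy w.r.t. values learned so far
            move = max(moves, key=lambda m: values.get((pile, m), 0.0))
        history.append((player, pile, move))
        pile -= move
        player = -player
    return history, -player                                 # -player made the last (winning) move

def train(episodes=30000, lr=0.05):
    """Trial-and-error adjustment: nudge every played move toward the observed game outcome."""
    values = {}
    for _ in range(episodes):
        history, winner = self_play_episode(values)
        for player, pile, move in history:
            reward = 1.0 if player == winner else -1.0
            old = values.get((pile, move), 0.0)
            values[(pile, move)] = old + lr * (reward - old)
    return values

if __name__ == "__main__":
    v = train()
    for pile in range(1, 13):
        best = max(legal_moves(pile), key=lambda m: v.get((pile, m), 0.0))
        print(f"pile {pile:2d}: learned move = {best}")
```

After enough episodes, the table settles on the well-known winning pattern for this toy game (leave the opponent a multiple of three): a "locked" strategy in the sense developed below, optimal within the fixed scenario and learned from no source other than play itself.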
Let us adopt the meaning of the attribute “strategic”—this seems particularly appropriate in the case of AI studies—as referring to an expert mixture of various heuristic cognitive devices in reasoning: a strategy consists in a successive smart choice of the next state of a cognitive routine (for example, the nearest one, respecting an appropriate distance measure), and a heuristic is one of the instruments a strategy can exploit to rapidly reach a desired state. When we refer to the non-computational case of game theory, the meaning of the word strategy refers more broadly to the role of the agents in their relationships with other agents and to the various related contentious, interlaced, or collective cognitive acts. In the field of research on so-called ecological thinking (or ecological rationality) [1,2,3], strategies acquire an even more extended meaning: they also denote thinking routines that take advantage of great amounts of information and knowledge at elevated computational cost; heuristics, on the contrary, are used to execute simpler and more efficient, even if less precise, cognitive processes. Sometimes cognitive heuristics are more generically seen as coincident with cognitive strategies. In this article, it is useful to adopt the perspective more widespread in AI: strategies consist in a wise successive choice of suitable heuristics.
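A hedged sketch of this usage follows; the toy search problem and all names are invented for illustration. The strategy repeatedly makes the "smart choice of the next state" by picking whichever available heuristic proposes the state nearest the goal:

```python
from typing import Callable, List

Heuristic = Callable[[int], int]        # a heuristic maps a state to a proposed next state

def small_step(state: int) -> int:      # cheap local device (also guarantees termination here)
    return state + 1

def big_jump(state: int) -> int:        # coarser, faster device
    return state + 5

def strategy(state: int, goal: int, heuristics: List[Heuristic]) -> int:
    """Successively choose the heuristic whose proposal lands nearest the goal."""
    while state != goal:
        proposals = [(abs(goal - h(state)), h(state)) for h in heuristics]
        _, state = min(proposals)       # the strategy's smart choice of the next state
    return state

print(strategy(0, 17, [small_step, big_jump]))  # -> 17
```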
I contend that it is within the framework of abductive cognition [4] that we can appropriately and usefully analyze the concept of strategic reasoning, with the aim of drawing the distinction between locked and unlocked strategies. Indeed, I will contend that in AlphaGo only locked strategies are at play, and this fact constrains the type of creativity that deep learning machines can in general perform.
In my studies on abduction, I have extensively illustrated various kinds of human (but also computational) hypothetical cognition. I suggested the adoption of the dichotomous distinction between selective abduction [5]—for example, in diagnosis (in which abduction is basically described as an inferential process of “selecting” from a “repository” of pre-stored hypotheses)—and creative abduction (abduction that produces new hypotheses). Furthermore, I have shown that abduction is not only related to the propositional aspect, i.e., to what is processed using human language (oral and written), but can also be “model-based” and “manipulative”. In the first case, we deal with an abduction that is basically performed thanks to internal cognitive acts that take advantage of models such as simulations, visualizations, diagrams, etc.; in the second case, the external dimension is at play: an eco-cognitive perspective is fundamental because we have to refer to those cognitive actions (embodied, situated, and enacted) in which the role of external models (for example, artifacts), props, and devices is mandatory, and in which the characters of the actions themselves are hidden and hard to extract. Action can generate otherwise unavailable data that enable the agent to solve problems by initiating and processing an appropriate abductive process of production and/or selection of hypotheses. As I say, manipulative abduction occurs when we are thinking “through” doing and not only, in a pragmatic sense, about doing (cf. [4] chapter 1). It is clear that, when we are dealing with games such as Go, manipulative abduction is also at play, given that reasoning is considerably intertwined with the manipulation of the stones, and various embodied aspects are involved, together with the visualization of the whole scenario, the adversary, etc.

1.1. Abduction and AI

The concept of abduction has been involved in AI at least since the beginnings of this young discipline. Already in 1988, Paul Thagard [6] described four types of abduction implemented in PI, a computational program devoted to performing some of the main cognitive abilities illustrated by philosophy of science: scientific discovery, explanation, evaluation, etc. The program explicitly executes so-called simple, existential, rule-forming, and analogical abduction. In this case, the use of computer simulation provided a first sophisticated tool for increasing knowledge about abduction itself, showing how this kind of non-deductive reasoning can be automatically rendered in concrete computational artifacts.
Early works on machine scientific discovery, such as the well-known Logic Theorist (Newell et al. [7]), DENDRAL in chemistry (Lindsay et al. [8]), and AM in mathematics (Lenat [9]), demonstrated that heuristic search in combinatorial spaces represents an appropriate and general instrument for automating scientific discovery, and abduction was explicitly or implicitly categorized. In 1995, the AAAI Society organized a Spring Symposium on “Systematic Methods of Scientific Discovery”, and in 1997 the journal Artificial Intelligence promoted a special issue on “Machine Discovery” (91(2), 1997; cf. the classical Simon [10] and Okada and Simon [11]). In 1990, the AAAI Society had coordinated a Spring Symposium on “Automated Abduction”, in which computational programs able to execute various abductive tasks were analyzed. In these seminal programs, abduction was mainly rendered at a symbolic/propositional level, using rules and heuristics, and logic programming prevailed.
Already in 1992, Margaret Boden [26] illustrated the distinction between classical programs able to re-generate historical cases of scientific discovery in physical science (the five BACON systems and GLAUBER; Langley et al. [12]) and systems able to build new discoveries (DENDRAL and AM, cited above). Other authors (for example, Schunn and Klahr [27], working in the ACT-R framework) stressed the further distinction between computational systems that address the process of abductive hypothesis formation and those that address evaluation: PHINEAS [28]; AbE [29]; ECHO [30,31]; TETRAD [32]; and the already quoted MECHEM. Other programs addressed the abductive nature of experimental procedures (DEED [33]; DIDO [34]), and, finally, we have to remember those that addressed both the processes of hypothesis formation and evaluation and that of experiment (KEKADA [35]; SDDS [36]; LIVE [37]). Some of the classical computational programs quoted above modeled what I call sentential creative abduction (BACON and GLAUBER reproduced famous discoveries in physics in terms of heuristic search), while others performed model-based creative abduction (the already quoted AbE, PHINEAS, and TETRAD, together with CHARADE [38], GALATEA [39], and PROTEUS [40], regarding causal and analogical abductive reasoning).
Finally, as I have already said, there are also programs able to represent the activity of experimentation (which is close to the processes of what I call “manipulative abduction”). Already in 2008, Clark himself ([41] p. 202) was convinced that artificial intelligence and robotics would soon also involve various aspects related to embodiment, action, situatedness, and the manipulation of external entities appropriately reshaped in cognitive terms:
The increasingly popular image of functional, computational, and information-processing approaches to mind as flesh-eating demons is thus subtly misplaced. For rather than necessarily ignoring the body, such approaches may instead help target larger organizational wholes in ways that help reveal where, why, how, and even how much […] embodiment and environmental embedding really matter for the construction of mind and experience.
We also have to remember that other traditional AI programs have been built to reproduce abduction in mathematical and geometrical reasoning, such as ARCHIMEDES (Lindsay [45,46,47,48]) and HR (Colton and Pease [15,49]).
Traditionally, philosophy, epistemology, and logic have been the main disciplines interested in studying human reasoning. However, reasoning is by now also a major subject of investigation in AI and in the whole area of cognitive science. In this perspective, we can say that many fundamental cognitive processes (such as abduction, disregarded by the mainstream deductive logic tradition), when implemented in a computer, become AI programs: the abstract theories and the more concrete computational programs become two different ways of expressing the same thing. In this sense, theories of reasoning are about rules for reasoning, and these are rules that teach us to do certain things in certain situations. Writing an AI program permits us to delineate these rules accurately. Computational programs that execute various kinds of hypothetical reasoning can be seen as prosthetic “abducers”: just as microscopes are technologically created to widen the sensory endowments of humans, computational models of abductive cognition are created to extend human cognitive capacities.
What about the new perspectives on hypothetical abductive reasoning offered by deep learning programs such as AlphaGo? As I have anticipated, to clarify the cognitive character of this program, the examination of the kinds of strategies at play is, in my opinion, central.

1.2. Abduction and AlphaGo

It is now clear that studies on abduction are very useful when we have to analyze creative reasoning, and a new and unexpected move of a human being playing a Go game surely represents a kind of creative reasoning. In this article, it is the keystone concept of knowledge-enhancing abduction, and the related one of eco-cognitive openness, that will favor a deep understanding of the logical and cognitive status of the kinds of cognitive strategies I will describe and that I call locked and unlocked abductive strategies. These strategies shed new light on the cognitive aspects of deep learning machines. Their character and role, in humans and in computational machines alike, are strictly related to the generation of cognitive outputs, which range from weak to strong levels of knowledge creativity. I maintain that these differences lead to important consequences when we analyze computational AI programs, such as AlphaGo, which aim at performing various kinds of abductive hypothetical reasoning.
Such programs are characterized by locked abductive strategies: they deal with weak (even if sometimes amazing) kinds of hypothetical creative reasoning, because they are limited in what I call eco-cognitive openness, which instead qualifies human cognizers performing higher kinds of abductive creative reasoning, in which cognitive strategies are unlocked.
An objection to the adoption of the concept of abduction to shed more cognitive light on the strong impact of deep learning programs such as AlphaGo in contemporary AI regards the fact that these programs are based on hierarchical neural networks operating at a subconceptual level, whereas abduction has fundamentally been investigated through symbolic formal models related to the tradition of logic. I indicated in the previous subsection that the concept of abduction certainly entered traditional AI research in the last decades of the past century thanks to studies concerning automated scientific discovery (creative abduction) and medical diagnostic reasoning (selective abduction). The dominant representational tools were symbolic ones such as classical logic programming, rule-based systems, probabilistic networks, etc. Can good abductive processes be modeled using representational tools and algorithms that operate at a subconceptual level? The answer is yes.
Bruza et al. [51] insisted that it would be misguided to adopt a simple traditional, symbolic perspective on an abductive logical system—assuming a propositional knowledge representation and proof-theoretic approaches for driving it—because this perspective seems conceptually incomplete insofar as it ignores what is going on “down below” [52], which can be interpreted as the subconceptual level of cognition. They proposed semantic spaces as a computational approximation of Gärdenfors’ conceptual spaces. Abductive hypotheses generated from semantic spaces do not have a proof-theoretic basis; rather, they are computations of associations by various means within the space: the passage from the subconceptual to the conceptual level usually involves a reduction of the number of dimensions that are represented. From a cognitive perspective, interesting hypotheses and dimensional reduction appear inextricably related as information passes from the subconceptual to the conceptual level. More recently, Bruza et al. [53] interestingly analyzed concept combinations in human cognition, showing emergent associations still as the result of abductive reasoning within a conceptual space, below the symbolic level of cognition, but in terms of a tensor-based approach that conceptualizes combinations as interacting quantum systems.
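As a rough illustration of associations computed "down below" rather than proof-theoretically, the sketch below builds a toy semantic space from co-occurrence counts and reads associations off a dimensionally reduced space. The terms, the matrix, and the use of a truncated SVD are my illustrative assumptions, not Bruza et al.'s actual system:

```python
import numpy as np

terms = ["fever", "cough", "rash", "fatigue"]
# toy term-by-context co-occurrence counts (rows = terms)
M = np.array([[4, 1, 0, 2],
              [3, 0, 0, 1],
              [0, 3, 2, 0],
              [2, 1, 1, 3]], dtype=float)

U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2                                 # the passage toward the conceptual level:
reduced = U[:, :k] * S[:k]            # keep only k latent dimensions

def associate(i, j):
    """Cosine association between two terms inside the reduced space."""
    a, b = reduced[i], reduced[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for j, t in enumerate(terms[1:], start=1):
    print(f"fever ~ {t}: {associate(0, j):+.2f}")
```

The point of the reduction step is visible even in this toy: associations that are diffuse in the raw counts become measurable proximities once only a few latent dimensions are kept.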
In sum, the mainstream non-standard logical tradition that created models of abductive inference was certainly characterized by symbols, but, from a more extended cognitive and philosophical perspective, multimodality (that is, cognition in terms of non-propositional models, icons, thought experiments, simulations, etc.) and implicit reasoning also appear to be important. Moreover, it is worth remembering that, from a wide cognitive and philosophical perspective, as I have illustrated in my own research [4,55], the term abduction refers to all kinds of cognitive activities that lead to hypotheses, in human and non-human animals. For example, humans often generate abductive hypotheses thanks to manipulative, embodied, and unconscious endowments, and higher mammals surely do not take advantage of symbolic syntactic language but of other, multiple cognitive capacities. Analogously to humans, who can perform abduction in various ways, there is in AI no unique method for developing programs able to reproduce abductive cognition to hypotheses. Various knowledge representation formalisms and algorithms can be adopted to implement an appropriate computational program.
At this point, we can go ahead and try to analyze the specific kind of abductive performance (the generation of “moves”) that characterizes the deep learning AI program AlphaGo.

2. Natural, Artificial, and Computational Games

2.1. Locked and Unlocked Strategies in Natural and Artificial Frameworks

Go is a game played by human agents, and AlphaGo is a computational deep learning program that can play the role of an automatic agent/player, so that a competition with humans can become partially computationally determined. Go is already an “artificial” game, since it was invented by human beings and, consequently, takes advantage of abstract rules and artifacts, such as the board and other material objects. AlphaGo is artificial too, but a more complicated fruit of the technological creativity of a few human beings. However, we have to remember that “natural cognitive games”, so to speak, can also be contemplated. For example, as I have already illustrated in the previous section when delineating manipulative abduction, strategic human cognition not only refers to propositional aspects concerning acts performed through written and spoken language, but is also active in a distributed cognition environment, in a kind of “game” in which models, artifacts, internal and external representations, sensations, and manipulations play a central function: imagine the pre-linguistic cognitive “natural game” between humans and their surroundings, in which “unlocked” strategies (see below) are at play, as the phenomenological tradition has illustrated, precisely involving embodied and distributed systems and visual, kinesthetic, and motor sensations [57].
What counts here is that in natural games the strategies are unlocked because, even if local constraints are always at play in the interaction between humans and their environments, no predetermined background is established. On the contrary, what happens in the case of human-made “artificial games” such as Go, or in the case of their computational counterparts, such as AlphaGo? In these last two cases, the involved cognitive strategies are locked, as I will describe in the following paragraphs.
Let us abandon the problem (just sketched) of the prelinguistic cognitive abductive strategies at play in a natural interaction—a natural game—between humans and their prepredicative surroundings (splendidly studied, for example, by Husserl [58]), and let us concentrate on the cognitive abductive strategies at play in the artifactual case of the moves occurring in the adversarial game Go, with two players and their respective changing surroundings, which in this case are basically formed by the board, the stones, and possible assisting artifactual accessories. In this game, analogously to the case of natural processes, we still obviously find the role of visual, kinesthetic, and motor sensations and actions, but also the strong function of visual, iconic, and propositional artificial representations, anchored to the human-made “meanings” (both internal and external) that gave birth to the game and that characterize its features and rules.

2.2. Reading Ahead

A fundamental strategy we immediately detect in artificial games such as Go, necessary for proficient and smart tactical play, is the capacity to read ahead, as Go players usually say. Reading ahead is a practice of generating groups of anticipations that aim at being very robust and complex (either deliberate or intuitive) and that demand the consideration of the following (a minimal computational sketch follows the list):
  • Clusters of moves to be adopted and their potential outcomes. The available scenario at time t₁, exhibited by the board, represents an adumbration of a subsequent, potentially more profitable scenario at time t₂, which is abductively and credibly hypothesized: in turn, one more abduction is selected and actuated, which—consistently and believably—activates a particular move that can lead to the envisaged more fruitful scenario.
  • Possible countermoves to each move.
  • Further chances after each of those countermoves. It seems that some of the strongest players of the game can read up to 40 moves ahead even in hugely complex positions.
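The sketch below renders this list computationally as depth-limited adversarial search: my candidate moves, the countermoves to each, and the further chances after those, explored to a fixed horizon. The subtraction game used here is a deliberately tiny stand-in for Go, and all names are illustrative:

```python
from typing import List

def legal_moves(pile: int) -> List[int]:
    """Toy stand-in for Go moves: take 1-3 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2, 3) if m <= pile]

def read_ahead(pile: int, depth: int) -> float:
    """Value of the position for the player to move, reading `depth` plies ahead."""
    moves = legal_moves(pile)
    if not moves:
        return -1.0        # previous player took the last stone: the mover has lost
    if depth == 0:
        return 0.0         # horizon reached: outcome unknown, scored as neutral
    # my moves, the countermoves to each, and so on (negamax recursion)
    return max(-read_ahead(pile - m, depth - 1) for m in moves)

def best_move(pile: int, depth: int) -> int:
    return max(legal_moves(pile), key=lambda m: -read_ahead(pile - m, depth - 1))

print(best_move(10, 7))    # -> 2, leaving a pile of 8 (a lost position for the adversary)
```

Reading deeper (a larger `depth`) widens the cone of anticipations exactly as stronger players do, at exponentially growing cost, which is why pruning matters so much for humans and programs alike.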
In a book published in Japan describing various strategies that can be exploited in Go, Davies emphasizes the role of “reading ahead”:
The problems in this book are almost all reading problems. […] they are going to ask you to work out sequences of moves that capture, cut, link up, make good shape, or accomplish some other clear tactical objective. A good player tries to read out such tactical problems in his head before he puts the stones on the board. He looks before he leaps. Frequently he does not leap at all; many of the sequences his reading uncovers are stored away for future reference, and in the end never carried out. This is especially true in a professional game, where the two hundred or so moves played are only the visible part of an iceberg of implied threats and possibilities, most of which stays submerged. You may try to approach the game at that level, or you may, like most of us, think your way from one move to the next as you play along, but in either case it is your reading ability more than anything else that determines your rank.
([59] p. 6)
Further strategies usefully adopted by human players of Go are, for instance, related to “global influence, interaction between distant stones, keeping the whole board in mind during local fights, and other issues that involve the overall game. It is therefore possible to allow a tactical loss when it confers a strategic advantage”.
The material and external scenarios (composed of the sensible objects: stones and board) that characterize artificial games are the fruit of a cognition “sedimented” in their embodiment, from the starting point of their creation through their subsequent uses and modifications. The cognitive tools related to the application both of the rules the game allows and of the individual inferential talents of the two players (strategies, tactics, heuristics, etc.) are sedimented in those material objects (artifacts, in this case), which become cognitive mediators: for example, they orient players’ inferences, transfer information, and create reasoning chances. Once represented internally, the subsequent external scenarios become objects of mental manipulation, and new ones are further generated, with the aim of producing the next most successful move.
It is important to note again that these strategies, when actuated, are certainly characterized by an extended variety, but all are “locked”, because the elements of each scenario are always the same (what changes is merely the number of visible stones and their dispositions on the board), within a finite and stable framework (no new rules, no new objects, no new boards, etc.). These strategies lack a crucial capacity: they cannot resort to reservoirs of information different from the ones available in the fixed given scenario. It is important to add a central remark: of course, the “human” player can enrich and fecundate his strategies by referring to internal resources not necessarily directly related to previous experience with Go, but to other preexistent skills belonging to disparate areas of cognition. This is the reason why we can say that the strategies of a “human” player present a lesser degree of closure with respect to the automatic player AlphaGo. In humans, strategies are locked with respect to the external rigid scenario, but more open with respect to the mental field of reference to previous wide strategic experiences; in AlphaGo and in deep learning systems, the strategic reservoir cannot—at least currently—take advantage of the mental openness and flexibility typical of human beings: the repertoire is formed/learned merely by checking data from thousands of games, and from no other source.
I also have to say that the notion of a cognitively locked strategy I am referring to here is not present in, and is unrelated to, the usual technical categorizations of game theory. Fundamentally, in combinatorial game theory, Go can be technically described as a zero-sum (player choices do not, colloquially speaking, increase the resources available), perfect-information, partisan, deterministic strategy game, belonging to the same class as chess, checkers (draughts), and Reversi (Othello). Moreover, Go is bounded (every game has to end with a victor within a finite number of moves and a finite time), its strategies are obviously associative (that is, a function of board position), its format is of course non-cooperative (no teams are allowed), and its positions are extensible (that is, they can be represented by board position trees).

3. Locked Abductive Strategies Counteract the Maximization of Eco-Cognitive Openness

As I have already anticipated in Section 1, in my research I have recently emphasized ([55] chapter 7) the knowledge-enhancing character of abduction. This means that in this case the abductive reasoning strategies grant successful and highly creative outcomes. The knowledge-enhancing feature regards several kinds of newly generated knowledge of various degrees of novelty, from the new knowledge about a suffering patient abductively attained in medical diagnosis (a case of selective abduction, as no new biomedical knowledge is created, just new knowledge about a person) to the new knowledge developed in scientific discovery, which many epistemologists have celebrated, for example Paul Feyerabend in Against Method [62]. In the case of an artificial game such as Go, the knowledge activated thanks to an intelligent choice of already available strategies, or thanks to the invention of novel strategies and/or heuristics, must also be considered a result of knowledge-enhancing abduction.
I strongly contend that, to arrive at uberous selective or creative optimal abductive results, useful strategies must be applied, but a cognitive environment marked by what I have called the optimization of eco-cognitive situatedness, in which eco-cognitive openness is fundamental, is also needed [63]. This feature of the cognitive environment is especially needed in the case of strong creative abduction, that is, when the kind of novelty is not restricted to the case of the “simple” successful diagnosis. In Section 4, I will illustrate in more detail that, to favor good creative and selective abductive reasoning, strategies must not be “locked” within an externally restricted eco-cognitive environment, such as a scenario characterized by fixed defining rules and finite material aspects, which would function as cognitive mediators able to constrain the agents’ reasoning.
At this point, it is valuable to supply a brief presentation of the concept of eco-cognitive openness. The new viewpoint inaugurated by the so-called naturalization of logic [64] contends that the regulating authority claimed by formal models of ideal reasoners to control human reasoning practice on the ground is, to date, unwarranted. It is instead urgent to propose a “naturalization” of the logic of human inference. Woods holds a naturalized logic to an adequacy condition of “empirical sensitivity” [65]. A naturalized logic is open to considering the many ways of thinking typical of concrete human knowers, such as fallacies, which, even if not truth-preserving inferences, can nonetheless furnish truths and profitable results. Of course, one of the guiding cases is furnished by the logic of abduction, where the naturalization of the well-known fallacy of “affirming the consequent” is at play. In my recent research on abduction [55,63,66], I emphasized the importance, in good abductive cognition to hypotheses, of what has been called the optimization of situatedness.
Let us explain the meaning of the expression optimization of situatedness: abductive cognition is, for example, very important in scientific reasoning because it refers to that activity of creative hypothesis generation which characterizes one of the most valued aspects of rational knowledge. To attain abductive results in science, the “situatedness” of the involved cognitive activities is strongly connected with eco-cognitive aspects, related to the contexts in which knowledge is “traveling”: in the case of scientific abductive cognition (but also in other abductive cases, such as diagnosis), to favor the solution of an inferential process, the situatedness also has to be characterized by a richness of the flux of information, which has to be maximized. This maximization aims at that optimization of situatedness which, as noted above, can only be made possible by a maximization of the changeability of the basic data informing the abductive cognitive process: inputs have to be maximally enriched, rebuilt, or modified, and the same has to occur with respect to the knowledge applied during the hypothetical reasoning process. Obviously, the aim is to have at our disposal a favorable “cognitive environment” in which the available data can become optimally positioned.
In summary, abductive processes to hypotheses—in a considerable number of cases, for example in science—are highly information-sensitive and face a flow of information and data that is uninterrupted and appropriately promoted and enhanced when needed (of course, also thanks to artifacts of various kinds). This means that, also from the psychological perspective of individuals, an epistemological openness in which knowledge channeling is maximized is fundamental.
A note on the history of philosophy can be added: Aristotle already provided a first fundamental study of abduction, one which stresses the relevance, we can hazard, of non-locked, highly open cognition, in the passage of Chapter B25 of the Prior Analytics, celebrated by Charles Sanders Peirce, regarding ἀπαγωγή (that is, abduction, translated in the English edition as “leading away”). Indeed, it is exactly the idea of “leading away” that expresses how, in smart abductions, we have to integrate (or “unlock”) the given components of the cognitive environment with the help of other cognitive tools and data that lie away from them.

4. Locking Strategies Restricts Creativity

The optimization of situatedness is related to unlocked strategies. Locked strategies, such as the ones active in the game of Go, in AlphaGo, and in other computational AI systems and deep learning devices, do not favor the optimization of situatedness. Indeed, I have already contended above that, to obtain good creative and selective abductions, reasoning strategies must not be “locked” within bounded eco-cognitive surroundings (that is, scenarios designed by fixed defining rules and finite material objects playing the role of so-called cognitive mediators). In this perspective, a poor scenario is certainly responsible for the minimization of eco-cognitive openness, and it is the structural consequence of the constitutive organization of the game of Go (and also of Chess and other games), as I have already described in Section 2.2. I have said that in the game of Go the stones, board, and rules are rigid and thus totally predetermined; what is instead undetermined is the whole process of application of the strategies and connected heuristics adopted to defeat the adversary.
As I have already said, the available strategies, like the adversary’s, are always locked within the fixed scenario: you cannot, during a Go game, play Chess for a few minutes or adopt another rule or another unrelated cognitive process, claiming that this weird part of the game is still appropriate to the game you agreed to play. You cannot decide to change the environment at will, thus unlocking your strategic reasoning, for example because you think this would be an optimal way to defeat the adversary. Furthermore, your adversary cannot activate, at his discretion, a process of eco-cognitive transformation of that artificial game. On the contrary, in the example of scientific discovery, the scientist (or the community of scientists) frequently recurs to disparate external models and changes reasoning strategies to produce new analogies, or to favor other useful cognitive procedures (prediction, simplification, confirmation, etc.) that enhance the abductive creative process.
The case of the scenarios of human scientific discovery precisely represents the counterpart of the ones that are poor from the perspective of their eco-cognitive openness. Indeed, in these last cases, the reasoning strategies that can be endorsed (and also created for the first time), even if multiple and potentially infinite, are locked within a determined perspective whose components do not change (the stones can only diminish and be put aside, the board does not change, etc.). I would say that in scenarios in which strategies are locked, in the sense I have explained, an autoimmunization [68,69] is active, which constitutes the limitations precluding the application of strategies unrelated to the “pre-packaged” scenario, strategies that would be foreign to the ones strictly intertwined with the components of the given scenario. Remember, I have already said that these components play the role of cognitive mediators, which anchor and constrain the whole cognitive process of the game.
To summarize and further explain (by linking the problem of locked and unlocked strategies to the various cases of selective and creative abduction):
  • Contrary to the case of high-level “human” creative abductive inferences, such as the ones expressed by scientific discovery or other exceptional intellectual results, the status of artificial games (and of their computational counterparts) is very poor from the point of view of the non-strategic knowledge involved. We are dealing with stones, a modest number of rules, and one board. As the game progresses, the shape of the scenario is spectacularly modified, but no unexpected cognitive mediators (objects) appear: for example, no differently colored stones or a strange hexagonal board. On the contrary, to continue with the example of high-level creative abduction in scientific discovery (for example, in empirical science), first of all the evidence is extremely rich and endowed with often unexpected novel features (not only due to modifications of aspects of the “same things”, as in the case of artificial games). Secondly, the flux of knowledge at play is multifarious and related to new analogies, thought experiments, models, imageries, mathematical structures, etc., that are rooted in heterogeneous disciplines and fields of intellectual research. In sum, in this exemplary case, we are facing a real tendency toward a status of optimal eco-cognitive situatedness (further details on this kind of creative abduction are furnished in [55,63,66]).
  • What happens when we are dealing with selective abduction (for example, in medical diagnosis)? First of all, evidence freely and richly arrives from several empirical sources, in terms of bodily symptoms and of data mediated by sophisticated artifacts (which also change and improve thanks to new technological inventions). Second, the encyclopedia of biomedical hypotheses within which selective abduction can work is instead locked, but the reference to possible new knowledge (locally created or externally available) is not prohibited, so diagnostic inferences can be enhanced thanks to scientific advancements not considered at first sight. Third, novel inferential strategies and linked heuristics can be created, and old ones used in surprising new ways; what is important is that the strategies are not locked within a fixed scenario. In sum, the creativity occurring in the case of human selective abduction is poorer than the one active in scientific discovery, but richer than the one related to the activity of the locked reasoning strategies of the Go game and of AlphaGo considered above.
  • In Go (and similar games) and in deep learning systems such as AlphaGo, in which strategies and heuristics are “locked”, these are exactly the only part of the game that can be improved and rendered more fertile: strategies and related heuristics can be used in novel ways, and new ones can be invented. Anticipations as abductions (which incarnate the activities of “reading ahead”) affect only the modification and regrouping of the same elements. No other types of knowledge will increase; all the rest remains stable. Of course, this dominance of the strategies is the quintessence of Go, Chess, and other games, and also reflects the spectacular character of the more expert moves of the human champions. However, it has to be said that this dominance is also the reason why the creativity at stake is even more modest than the one involved in the higher cases of selective abduction (diagnosis). I will soon illustrate that this fact also explains why the smart strategies of Go or Chess can be more easily simulated by recent artificial intelligence programs, such as the ones based on deep learning, than, for example, the inferences at play in scientific discovery.
The reader should not misunderstand me: I do not mean to minimize the relevance of creative heuristics as they work in Go and other board games. John Holland clearly illustrated [70,71] that board games such as checkers, as well as Go, are wonderful cases of “emerging” cognitive processes, where potentially infinite strategies favor exceptional games: even though only a few rules regulate the moves of the pieces, games cannot be predicted from their initial configurations. While other cases of emerging cognitive processes (I have indicated the example of scientific discovery) characterize what can be called “vertical” creativity (that is, creativity related to unlocked strategies), board games are examples of “horizontal” creativity: even if they are circumscribed by locked strategies that constrain the game, “horizontal” creativity can reach astonishing levels of inventiveness and skillfulness. We have already said that these extraordinary human skills have been notably appropriated by artificial intelligence software (see the last paragraphs of this section below): the example given in this article is that of AI deep learning heuristics able to learn from human games. What are the most important remaining effects deriving from these computational AI programs, equipped to concretize abductive cognitive inferences characterized by “locked” strategies?
At the beginning of this article, I illustrated some amazing performances of Google DeepMind’s program AlphaGo against human players, and the fact that the system showed itself able to “create” unconventional moves—never played by humans—thus building new strategies, a fact that undoubtedly favors the attribution to the system of actual “human” capacities and, we have to add, of capacities better than those of the most skilled humans. AlphaGo taught itself to play by “attending” (so to speak) thousands of games played by human beings and by exploiting “reinforcement learning”, which refers to the machine’s activity of self-play (the machine plays against itself) to further feed and adapt its own neural networks.
In the passage reported below, Coelho and Thompsen Primo seem to corroborate my contention that AI programs such as AlphaGo are comparatively simple to produce at the computational level because they deal with what I have called, in this article, locked reasoning strategies. In summary, a background reason for this easiness would be that this type of human cognition is less creative than others, even if it is so striking when optimally realized by very smart human subjects.
Let us compare the key ideas behind Deep Blue (Chess) and AlphaGo (Go). The first program used values to assess potential moves, a function that incorporated lots of detailed chess knowledge to evaluate any given board position and immense computing power (brute force) to calculate lots of possible positions, selecting the move that would drive the best possible final possible position. Such ideas were not suitable for Go. A good program may capture elements of human intuition to evaluate board positions with good shape, an idea able to attain far-reaching consequences. After essays with Monte Carlo tree search algorithms, the bright idea was to find patterns in a high quantity of games (150,000) with deep learning based upon neural networks. The program kept making adjustments to the parameters in the model, trying to find a way to do tiny improvements in its play. And, this shift was a way out to create a policy network through billions of settings, i.e., a valuation system that captures intuition about the value of different board position. Such search-and-optimization idea was cleverer about how search is done, but the replication of intuitive pattern recognition was a big deal. The program learned to recognize good patterns of play leading to higher scores, and when that happened it reinforces the creative behavior (it acquired an ability to recognize images with similar style).
[72]
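The contrast described in this passage can be rendered schematically. In the sketch below everything (the toy "board", the handcrafted evaluation, the stand-in policy network) is an illustrative assumption of mine; the point is only the division of labor: exhaustive evaluation under a hand-tuned value function versus a learned policy that prunes the candidate moves before any search takes place:

```python
import math
import random

play = lambda pos, m: pos + [m]          # a toy "board" is a list; a move appends to it

def handcrafted_value(pos):
    """Stand-in for a Deep Blue style hand-tuned evaluation function."""
    return sum(pos)

def brute_force_choice(pos, moves):
    """Deep Blue style: evaluate every successor position, keep the best move."""
    return max(moves, key=lambda m: handcrafted_value(play(pos, m)))

def policy_network(pos, moves):
    """Stand-in for a trained policy network: a probability for each legal move
    (a real network would compute these from the board pattern, not at random)."""
    logits = [random.random() for _ in moves]
    z = sum(math.exp(x) for x in logits)
    return [math.exp(x) / z for x in logits]

def policy_guided_choice(pos, moves, top_k=3):
    """AlphaGo style: learned 'intuition' prunes the branching factor first;
    search then examines only the few moves the policy finds plausible."""
    ranked = sorted(zip(policy_network(pos, moves), moves), reverse=True)
    return brute_force_choice(pos, [m for _, m in ranked[:top_k]])

moves = list(range(10))
print(brute_force_choice([], moves))     # examines all ten successors -> 9
print(policy_guided_choice([], moves))   # examines only three policy-ranked candidates
```

Note that in both styles the strategy remains locked in the sense developed above: the repertoire of positions, moves, and evaluations never leaves the fixed scenario of the game.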
I think that humans, with their biological brains, do not have to feel mortified by these extraordinarily skillful capacities of AI programs. Unfortunately, given the present worldwide status of the mass media, other magnificent human performances in various fields of creativity, much more creative than the ones related to locked strategic reasoning, are unable to reach the global echo AlphaGo gained. Indeed, the more skillful human abductive creative capacities, related to unlocked strategies, as I have tried to demonstrate in this article—though cognitively beautiful—are not sponsored by Google, a herculean corporation that can easily obtain the attention not only of the monocultural media of our age but also of the social networks: many human beings are more easily impressed by the “miracles” of AI, robotics, and information technologies than by the prodigious knowledge results of human-beings-like-us, too often out of sight (after all—ça va sans dire—traditional AI programs and AI deep learning systems have also been created by humans…).
Google managers also think that AI deep learning programs similar to AlphaGo could be exploited to help science resolve important real-world problems, in healthcare but also in other fields. This would be a good research program. Google also seems to expect to build some business on the commercialization of the new deep learning AI powers to collect appropriate information and generate abductions in advantageous fields. Simply checking the Wikipedia entry for DeepMind (https://en.wikipedia.org/wiki/DeepMind)—DeepMind is a British artificial intelligence company founded in September 2010 and acquired by Google in 2014, and it created the AlphaGo program—we find the following uncontested passage reported, concerning the so-called “NHS data-sharing controversy”:
In April 2016, New Scientist obtained a copy of a data-sharing agreement between DeepMind and the Royal Free London NHS Foundation Trust. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion in order to conduct research to seek better outcomes in various health conditions. A complaint was filed to the Information Commissioner’s Office (ICO), arguing that the data should be pseudonymised and encrypted. In May 2016, New Scientist published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the Medicines and Healthcare Products Regulatory Agency. In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her ‘considered opinion’ the data-sharing agreement between DeepMind and the Royal Free took place on an ‘inappropriate legal basis’. The Information Commissioner’s Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.
Even if based on what I have called, in this article, locked strategies, and thus far from the highest levels of human creativity, AI deep learning systems and various other programs can also offer chances for business and a good integration in the market. I think epistemologists and logicians have to monitor the use of these AI devices (of course, when they are less transparent than the natural and limpid—and so stupefying—performance of AlphaGo in games against humans). Recent research in the fields of epistemology, cognitive science, and philosophy of technology illustrates that good AI software, which surely furnishes a big new chance for opportunity and data analytics, can be transmuted into a tool that does not respect epistemological and/or ethical rigor. For example, in the case regarding the computational exploitation of big data, outcomes can inadvertently lead to epistemologically unacceptable computer-discovered correlations (though possibly good from a commercial perspective), and these tools are sometimes—unfortunately—seriously presented as aiming at replacing tout court human-based scientific research as a guide to understanding, prediction, and action. Calude and Longo say: “Consequently, there will be no need to give scientific meaning to phenomena, by proposing, say, causal relations, since regularities in very large databases are enough: ‘with enough data, the numbers speak for themselves’ ” ([74] p. 595). Unfortunately, some “correlations appear only due to the size, not the nature, of data. In ‘randomly’ generated, large enough databases too much information tends to behave like very little information” (ibid.). I agree with these authors: we cannot treat spurious correlations as results of deep scientific creative abduction, but only as trivial generalizations, even if reached with the help of sophisticated artifacts. I cannot further explore the problems connected to the impact of computational programs on ethics and society. In this article, I limit myself to dealing with cognitive, logical, and epistemological aspects, with the aim of introducing the distinction between locked and unlocked strategies and its meaning with respect to intelligent computation.

5. Conclusions

In this article, with the help of the concepts of locked and unlocked strategies, abduction, and the optimization of eco-cognitive openness, I have described some central aspects of the cognitive character of reasoning strategies and related heuristics, with the aim of shedding new light on the cognitive aspects of deep learning machines. Taking advantage of my studies on abduction, I have contended that what I call eco-cognitive openness is undermined in the case of famous computational programs such as AlphaGo, because they are based on locked abductive strategies. Instead, unlocked abductive strategies, which are in tune with what eco-cognitive openness requires, qualify those high-level kinds of abductive creative reasoning that are typical of human-based cognition. Locked abductive reasoning strategies are much simpler to render at the computational level than unlocked ones: they indeed take advantage of a kind of autoimmunity that guarantees the limitations precluding the application of strategies unrelated to “pre-packaged” scenarios, strategies that would be foreign to the ones strictly intertwined with the components of a given scenario.

Funding

Research for this article was supported by the Blue Sky Research 2017—University of Pavia, Pavia, Italy.

Acknowledgments

Some themes of this article were already presented as a keynote lecture at the conference “Logical Foundations of Strategic Reasoning”, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 3 November 2016, and published as L. Magnani (2018), Playing with anticipations as abductions: Strategic reasoning in an eco-cognitive perspective, Journal of Applied Logic – IfCoLog Journal of Logics and their Applications 5(5), 1061–1092, special issue on “Logical Foundations of Strategic Reasoning” (guest editors W. Park and J. Woods). For the informative critiques and interesting exchanges that helped me enrich my analysis of the naturalization of logic and/or abductive cognition, I am indebted to John Woods, Atocha Aliseda, Woosuk Park, Giuseppe Longo, Gordana Dodig-Crnkovic, Luís Moniz Pereira, Paul Thagard, Joseph Brenner, to the two reviewers, and to my collaborators Tommaso Bertolotti and Selene Arfini.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gigerenzer, G.; Selten, R. Bounded Rationality. The Adaptive Toolbox; The MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  2. Raab, M.; Gigerenzer, G. Intelligence as smart heuristics. In Cognition and Intelligence. Identifying the Mechanisms of the Mind; Sternberg, R.J., Pretz, J.E., Eds.; Cambridge University Press: Cambridge, UK, 2005; pp. 188–207. [Google Scholar]
  3. Gigerenzer, G.; Brighton, H. Homo heuristicus: Why biased minds make better inferences. Top. Cognit. Sci. 2009, 1, 107–143. [Google Scholar] [CrossRef] [PubMed]
  4. Magnani, L. Abductive Cognition. The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  5. Magnani, L. Abduction, Reason, and Science. Processes of Discovery and Explanation; Kluwer Academic/Plenum Publishers: New York, NY, USA, 2001. [Google Scholar]
  6. Thagard, P. Computational Philosophy of Science; The MIT Press: Cambridge, MA, USA, 1988. [Google Scholar]
  7. Newell, A.; Shaw, J.C.; Simon, H.A. Empirical explorations of the Logic Theory Machine: A case study in heuristic. In Proceedings of the Western Joint Computer Conference [JCC 11], Los Angeles, CA, USA, 26–28 February 1957; pp. 218–239. [Google Scholar]
  8. Lindsay, R.K.; Buchanan, B.; Feigenbaum, E.; Lederberg, J. Applications of Artificial Intelligence for Organic Chemistry: The Dendral Project; McGraw Hill: New York, NY, USA, 1980. [Google Scholar]
  9. Lenat, D. Discovery in mathematics as heuristic search. In Knowledge-Based Systems in Artificial Intelligence; Davis, R., Lenat, D., Eds.; McGraw Hill: New York, NY, USA, 1982. [Google Scholar]
  10. Simon, H.A.; Valdés-Pérez, R.E.; Sleeman, D.H. Scientific discovery and simplicity of method. Artif. Intell. 1997, 91, 177–181. [Google Scholar] [CrossRef]
  11. Okada, T.; Simon, H.A. Collaborative discovery in a scientific domain. Cogn. Sci. 1997, 21, 109–146. [Google Scholar] [CrossRef]
  12. Langley, P.; Simon, H.A.; Bradshaw, G.; Zytkow, J. Scientific Discovery. Computational Explorations of the Creative Processes; The MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
  13. Shrager, J.; Langley, P. (Eds.) Computational Models of Scientific Discovery and Theory Formation; Morgan Kaufmann: San Mateo, CA, USA, 1990. [Google Scholar]
  14. Zytkow, J. (Ed.) Proceedings of the ML-92 Workshop on Machine Discovery (MD-92); National Institute for Aviation Research, The Wichita State University: Wichita, KS, USA, 1992. [Google Scholar]
  15. Colton, S. (Ed.) AI and Scientific Creativity. Proceedings of the AISB99 Symposium on Scientific Creativity, Society for the Study of Artificial Intelligence and Simulation of Behaviour; Edinburgh College of Art and Division of Informatics, University of Edinburgh: Edinburgh, UK, 1999. [Google Scholar]
  16. Paul, G. AI approaches to abduction. In Abductive Reasoning and Learning; Gabbay, D., Kruse, R., Eds.; Springer: Dordrecht, The Netherlands, 2000; pp. 35–98. [Google Scholar]
  17. Bylander, T.; Allemang, D.; Tanner, M.C.; Josephson, J.R. The computational complexity of abduction. Artif. Intell. 1991, 49, 25–60. [Google Scholar] [CrossRef]
  18. Reiter, R. A theory of diagnosis from first principles. Artif. Intell. 1987, 32, 57–95. [Google Scholar] [CrossRef]
  19. De Kleer, J.; Williams, B. Diagnosing multiple faults. Artif. Intell. 1987, 32, 97–130. [Google Scholar] [CrossRef]
  20. Reggia, J.A.; Nau, D.S.; Wang, P.Y. Diagnostic expert systems based on set covering model. J. Man-Mach. Stud. 1983, 19, 437–460. [Google Scholar] [CrossRef]
  21. Valdés-Pérez, R.E. Principles of human computer collaboration for knowledge discovery in science. Artif. Intell. 1999, 107, 335–346. [Google Scholar] [CrossRef]
  22. Zeigarnik, A.V.; Valdés-Pérez, R.E.; Temkin, O.N.; Bruk, L.G.; Shalgunov, S.I. Computer-aided mechanism elucidation of acetylene hydrocarboxylation to acrylic acid based on a novel union of empirical and formal methods. Organometallics 1997, 16, 3114–3127. [Google Scholar] [CrossRef]
  23. Swanson, D.R.; Smalheiser, N.R. An interactive system for finding complementary literatures: A stimulus to scientific discovery. Artif. Intell. 1997, 91, 183–203. [Google Scholar] [CrossRef]
  24. Fajtlowicz, S. On conjectures of Graffiti. Discrete Math. 1988, 72, 113–118. [Google Scholar] [CrossRef]
  25. Pericliev, V.; Valdés-Pérez, R.E. Automatic componential analysis of kinship semantics with a proposed structural solution to the problem of multiple models. Anthropol. Linguist. 1998, 40, 272–317. [Google Scholar]
  26. Boden, M. The Creative Mind: Myths and Mechanisms; Basic Books: New York, NY, USA, 1992. [Google Scholar]
  27. Schunn, C.; Klahr, D. A 4-space model of scientific discovery. In AAAI Spring Symposium on Systematic Methods of Scientific Discovery; Technical Report SS-95-03; AAAI Press: Menlo Park, CA, USA, 1995. [Google Scholar]
  28. Falkenhainer, B.C. A unified approach to explanation and theory formation. In Computational Models of Scientific Discovery and Theory Formation; Shrager, J., Langley, P., Eds.; Morgan Kaufmann: San Mateo, CA, USA, 1990; pp. 157–196. [Google Scholar]
  29. O’Rorke, P.; Morris, S.; Schulemburg, D. Theory formation by abduction: A case study based on the chemical revolution. In Computational Models of Scientific Discovery and Theory Formation; Shrager, J., Langley, P., Eds.; Morgan Kaufmann: San Mateo, CA, USA, 1990; pp. 197–224. [Google Scholar]
  30. Thagard, P. Explanatory coherence. Behav. Brain Sci. 1989, 12, 435–467. [Google Scholar] [CrossRef]
  31. Thagard, P. Conceptual Revolutions; Princeton University Press: Princeton, NJ, USA, 1992. [Google Scholar]
  32. Glymour, C.; Scheines, R.; Spirtes, P.; Kelly, K. Discovering Causal Structure; Academic Press: San Diego, CA, USA, 1987. [Google Scholar]
  33. Rajamoney, S.A. The design of discrimination experiments. Mach. Learn. 1993, 12, 185–203. [Google Scholar] [CrossRef]
  34. Scott, P.D.; Markovitch, S. Experience selection and problem choice in an exploratory learning system. Mach. Learn. 1993, 12, 49–67. [Google Scholar] [CrossRef]
  35. Kulkarni, D.; Simon, H.A. The process of scientific discovery: The strategy of experimentation. Cognit. Sci. 1988, 12, 139–176. [Google Scholar] [CrossRef]
  36. Klahr, D.; Dunbar, K. Dual space search during scientific reasoning. Cognit. Sci. 1988, 12, 1–48. [Google Scholar] [CrossRef]
  37. Shen, W.M. Discovery as autonomous learning from the environment. Mach. Learn. 1993, 12, 143–165. [Google Scholar] [CrossRef]
  38. Corruble, V.; Ganascia, J.G. Induction and the discovery of the causes of scurvy: A computational reconstruction. Artif. Intell. 1997, 91, 205–223. [Google Scholar] [CrossRef]
  39. Davies, J.; Goel, A.K. A Computational Theory of Visual Analogical Transfer; Technical Report; Georgia Institute of Technology: Atlanta, GA, USA, 2000. [Google Scholar]
  40. Davies, J.; Goel, A.K.; Yaner, P.W. Proteus: Visual analogy in problem solving. Knowl.-Based Syst. 2008, 21, 636–654. [Google Scholar] [CrossRef]
  41. Clark, A. Supersizing the Mind. Embodiment, Action, and Cognitive Extension; Oxford University Press: Oxford, UK; New York, NY, USA, 2008. [Google Scholar]
  42. Pennock, R.T. Tower of Babel. The Evidence Against the New Creationism; The MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  43. Pennock, R.T. Can Darwinian mechanisms make novel discoveries? Learning from discoveries made by evolving neural network. Found. Sci. 2000, 5, 225–238. [Google Scholar] [CrossRef]
  44. Boneh, D.; Dunworth, C.; Lipton, R.J.; Sgall, J. On the computational power of DNA, Discrete Applied Mathematics. Comput. Mol. Biol. 1996, 71, 79–94. [Google Scholar]
  45. Lindsay, R.K. Understanding diagrammatic demonstrations. In Proceedings of the 16th Annual Conference of the Cognitive Science Society, Atlanta, GA, USA, 13–16 August 1994; Ram, A., Eiselt, K., Eds.; Erlbaum, Hillsdale: Paris, France, 1994; pp. 572–576. [Google Scholar]
  46. Lindsay, R.K. Using diagrams to understand geometry. Comput. Intell. 1998, 9, 343–345. [Google Scholar] [CrossRef]
  47. Lindsay, R.K. Using spatial semantics to discover and verify diagrammatic demonstrations of geometric propositions. In Spatial Cognition, Proceedings of the Annual Conference of the Cognitive Science Society, Philadelphia, PA, USA, 13–15 August 2000; O’Nuallian, S., Ed.; John Benjamins: Amsterdam, The Netherlands, 2000; pp. 199–212. [Google Scholar]
  48. Lindsay, R.K. Playing with diagrams. In Diagrams 2000; Anderson, M., Cheng, P., Haarslev, V., Eds.; Springer: Berlin, Germany, 2000; pp. 300–313. [Google Scholar]
  49. Pease, A.; Colton, S.; Smaill, A.; Lee, J. A model of Lakatos’s philosophy of Mathematics. In Computing, Philosophy and Cognition; Magnani, L., Dossena, R., Eds.; College Publications: London, UK, 2005; pp. 57–85. [Google Scholar]
  50. Thagard, P.; Litt, A. Models of scientific explanation. In Cambridge Handbook of Computational Psychology; Sun, R., Ed.; Cambridge University Press: Cambridge, UK, 2008; pp. 549–564. [Google Scholar]
  51. Bruza, P.D.; Cole, R.J.; Song, D.; Bari, Z. Towards operational abduction from a cognitive perspective. Log. J. IGPL 2006, 14, 161–179. [Google Scholar] [CrossRef]
  52. Gärdenfors, P. Conceptual Spaces: The Geometry of Thought; The MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  53. Bruza, P.D.; Kitto, K.; Ramm, B.; Sitbon, L.; Blomberg, S.; Song, D. Quantum-like non-separability of concept combinations, emergent associates and abduction. Log. J. IGPL 2012, 20, 445–457. [Google Scholar] [CrossRef]
  54. Figueroa, A.R. Inferencia abductiva basada en modelos. Una relación entre lógica y cognitión. Crítica. Revista Hispanoamericana de Filosofía 2012, 43, 3–29. [Google Scholar]
  55. Magnani, L. The Abductive Structure of Scientific Creativity. An Essay on the Ecology of Cognition; Springer: Cham, Switzerland, 2017. [Google Scholar]
  56. Ramoni, M.; Stefanelli, M.; Magnani, L.; Barosi, G. An epistemological framework for medical knowledge-based systems. IEEE Trans. Syst. Man Cybern. 1992, 22, 1361–1375. [Google Scholar] [CrossRef]
  57. Magnani, L. Playing with anticipations as abductions. Strategic reasoning in an eco-cognitive perspective. J. Appl. Log. IfColog J. Log. Their Appl. 2018, 5, 1061–1092. [Google Scholar]
  58. Husserl, E. Ideas. General Introduction to Pure Phenomenology; [First book, 1913]; Boyce Gibson, W.R., Translator; Northwestern University Press: London, UK; New York, NY, USA, 1931. [Google Scholar]
  59. Davies, J. Tesuji. Elementary Go Series 3; Kiseido Publishing Company: Tokyo, Japan, 1995. [Google Scholar]
  60. Husserl, E. The Origin of Geometry (1939). In Edmund Husserl’s “The Origin of Geometry”; Derrida, J., Ed.; Nicolas Hays: Stony Brooks, NY, USA, 1978; pp. 157–180. [Google Scholar]
  61. Hutchins, E. Cognition in the Wild; The MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  62. Feyerabend, P. Against Method; Verso: London, UK; New York, NY, USA, 1975. [Google Scholar]
  63. Magnani, L. The eco-cognitive model of abduction. Irrelevance and implausibility exculpated. J. Appl. Log. 2016, 15, 94–129. [Google Scholar] [CrossRef]
  64. Magnani, L. Naturalizing Logic. Errors of reasoning vindicated: Logic reapproaches cognitive science. J. Appl. Log. 2015, 13, 13–36. [Google Scholar] [CrossRef]
  65. Woods, J. Errors of Reasoning. Naturalizing the Logic of Inference; College Publications: London, UK, 2013. [Google Scholar]
  66. Magnani, L. The eco-cognitive model of Abduction. ’Aπαγωγή now: Naturalizing the logic of abduction. J. Appl. Log. 2015, 13, 285–315. [Google Scholar] [CrossRef]
  67. Magnani, L.; Bertolotti, T. (Eds.) Handbook of Model-Based Science; Springer: Cham, Switzerland, 2017. [Google Scholar]
  68. Magnani, L.; Bertolotti, T. Cognitive bubbles and firewalls: Epistemic immunizations in human reasoning. In Proceedings of the CogSci 2011, XXXIII Annual Conference of the Cognitive Science Society, Boston, MA, USA, 20–23 July 2011; Carlson, L., Hölscher, T., Shipley, T., Eds.; Cognitive Science Society: Boston, MA, USA, 2011. [Google Scholar]
  69. Arfini, S.; Magnani, L. An eco-cognitive model of ignorance immunization. In Philosophy and Cognitive Science II. Western & Eastern Studies; Magnani, L., Li, P., Park, W., Eds.; Springer: Cham, Switzerland, 2015; Volume 20, pp. 59–75. [Google Scholar]
  70. Holland, J.H. Hidden Order; Addison-Wesley: Reading, MA, USA, 1995. [Google Scholar]
  71. Holland, J.H. Emergence: From Chaos to Order; Oxford University Press: Oxford, UK, 1997. [Google Scholar]
  72. Coelho, H.; Thompsen Primo, T. Exploratory apprenticeship in the digital age with AI tools. Prog. Artif. Intell. 2017, 1, 17–25. [Google Scholar] [CrossRef]
  73. Magnani, L. Morality in a Technological World. Knowledge as Duty; Cambridge University Press: Cambridge, UK, 2007. [Google Scholar]
  74. Calude, C.S.; Longo, G. The deluge of spurious correlations in big data. Found. Sci. 2017, 22, 595–612. [Google Scholar] [CrossRef]
1.
The AI research on these topics also favored the formation, in two philosophy departments, of the following facilities: the Computational Epistemology Laboratory (http://cogsci.uwaterloo.ca/), headed by P. Thagard at the University of Waterloo, Canada, and the Computational Philosophy Laboratory (http://www-3.unipv.it/webphilos_lab/wordpress/), headed by myself at the University of Pavia, Italy, both devoted to research into cognitive science and related areas of philosophy.
2.
Classical volumes in which the reader can find illustrations of the most important research and of some historical machine discovery programs are Langley [12] and Shrager and Langley [13]. Cf. also Zytkow [14] (Proceedings of the MD-92 Workshop on "Machine Discovery") and Colton [15] (Proceedings of AISB'99).
3.
4.
A review of the classical AI approaches to abduction (mainly based on logic programming) is given by Paul [16], and by Bylander et al., Reiter, de Kleer and Williams, and Reggia et al. [17,18,19,20] (set-covering and other diagnostic approaches). Other classical programs regarding discovery in science are illustrated by Valdés-Pérez [21]: MECHEM (reaction mechanisms in chemistry [22]), ARROWSMITH (connections between drugs or dietary factors and diseases in medicine [23]), GRAFFITI (generation of conjectures in graph theory and other mathematical areas [24]), and MDP/KINSHIP (determination of classes within a classification in linguistics [25]). To convey the flavor of the set-covering view, a minimal sketch is given below.
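The following Python fragment is only an illustrative toy, not a reconstruction of any of the cited systems: the disorder/finding knowledge base and the function name minimal_covers are invented for the example. It treats abductive diagnosis as the search for the smallest sets of causes whose known effects jointly cover the observed findings, in the parsimonious spirit of the set-covering approach [20].

```python
from itertools import combinations

def minimal_covers(causes, observed):
    """Return all smallest sets of causes whose effects jointly
    cover the observed findings (a toy set-covering diagnosis)."""
    observed = set(observed)
    candidates = list(causes)
    covers = []
    for size in range(1, len(candidates) + 1):
        for combo in combinations(candidates, size):
            explained = set().union(*(causes[c] for c in combo))
            if observed <= explained:
                covers.append(set(combo))
        if covers:  # stop at the smallest cover size: parsimony
            break
    return covers

# Hypothetical toy knowledge base: disorder -> findings it explains.
causes = {
    "flu":        {"fever", "cough"},
    "measles":    {"fever", "rash"},
    "dermatitis": {"rash"},
}
print(minimal_covers(causes, {"fever", "rash"}))
# [{'measles'}] -- the single most parsimonious explanation
```

Note the locked character of such a search: the program can only recombine the hypotheses already stored in its knowledge base.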
5.
Other computational programs that have demonstrated their effectiveness in executing machine-discovery abductive tasks derive from studies on genetic algorithms and evolving neural networks (cf. for example [42,43]), in which creative abductive reasoning is rendered by exploiting some of the Darwinian mechanisms invoked by evolutionary theories, and also from the so-called research on DNA computers [44]. A bare-bones sketch of such an evolutionary search follows.
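The sketch below is a deliberately minimal genetic algorithm, purely illustrative of the Darwinian loop (selection, crossover, mutation) that such systems elaborate on; all names, parameters, and the toy objective are invented, and nothing here is drawn from the cited programs.

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100,
           mutation_rate=0.02):
    """Bare-bones genetic algorithm: truncation selection,
    one-point crossover, point mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]           # occasional bit flips
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1s in the bit string.
best = evolve(fitness=sum)
print(sum(best), best)
```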
6.
A rich survey of the interplay between computation, scientific explanation, and abductive discovery is given by Thagard and Litt [50].
7.
A simple neural network has been used to build the computational program ECHO (Explanatory Coherence), which addresses the part of abduction concerned with hypothesis evaluation [30,31]; a toy version of such a coherence network is sketched below.
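As a hedged illustration of the connectionist idea behind ECHO (the propositions, weights, and parameter values below are invented, and the update rule is only in the spirit of the cited work), consider two rival hypotheses competing to explain one piece of evidence:

```python
# Toy constraint-satisfaction network in the style of ECHO [30,31]:
# units stand for hypotheses/evidence, excitatory links connect a
# hypothesis to what it explains, inhibitory links connect rivals;
# repeated updates let the more coherent hypothesis win out.

units = {"E1": 0.0, "H1": 0.01, "H2": 0.01, "SPECIAL": 1.0}
links = [  # (unit, unit, weight): >0 excitatory, <0 inhibitory
    ("SPECIAL", "E1", 0.05),   # evidence is pulled toward acceptance
    ("H1", "E1", 0.04),        # H1 explains E1
    ("H2", "E1", 0.02),        # H2 explains E1, but less strongly
    ("H1", "H2", -0.06),       # H1 and H2 are rival hypotheses
]

def step(units, links, decay=0.05):
    net = {u: 0.0 for u in units}
    for a, b, w in links:      # links are symmetric
        net[a] += w * units[b]
        net[b] += w * units[a]
    new = {}
    for u, act in units.items():
        if u == "SPECIAL":
            new[u] = 1.0       # the special unit stays clamped
            continue
        n = net[u]
        delta = n * (1 - act) if n > 0 else n * (act + 1)
        new[u] = max(-1.0, min(1.0, act * (1 - decay) + delta))
    return new

for _ in range(200):
    units = step(units, links)
print({u: round(a, 2) for u, a in units.items()})
# H1 settles well above H2: it is the more coherent explanation.
```

The design point is that evaluation is holistic: each hypothesis's final activation depends on the whole pattern of explanatory and competitive links, not on any single inference step.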
8.
A survey of the importance of models in abductive cognition is given in [54].
9.
The need for a plurality of representations was already clear at the time of classical AI formalisms, when I was collaborating with AI researchers to implement a Knowledge-Based System (KBS) able to perform medical abductive reasoning [56].
10.
The word belongs to the Husserlian philosophical lexicon [58], which I have analyzed in its relationship with abduction in ([4], chapter 4).
11.
Cf. the Wikipedia entry "Go (game)": https://en.wikipedia.org/wiki/Go_(game).
12.
An expressive adjective still used by Husserl [60]. The essay was translated by D. Carr and originally published in Husserl, E. The Crisis of European Sciences and Transcendental Phenomenology [1954]; George Allen & Unwin and Humanities Press: London, UK; New York, NY, USA, 1970.
13.
This expression, which I have used extensively in [5], is derived from Hutchins, who introduced the expression "mediating structure" to refer to external tools and props that can be built to cognitively enhance the activity of navigating. Written texts are trivial examples of a cognitive "mediating structure" with clear cognitive purposes, as are mathematical symbols, simulations, and diagrams, which often become "epistemic mediators" because they are related to the production of scientific results: "Language, cultural knowledge, mental models, arithmetic procedures, and rules of logic are all mediating structures too. So are traffic lights, supermarket layouts, and the contexts we arrange for one another's behavior. Mediating structures can be embodied in artifacts, in ideas, in systems of social interactions […]" ([61], pp. 290–291); they function as an enormous new source of information and knowledge.
14.
Cf. the Wikipedia entry "Go (game)": https://en.wikipedia.org/wiki/Go_(game).
15.
I have provided further cognitive and technical details explaining this result in [63].
16.
I think that some central aspects of current accounts of abductive cognition are already present in Aristotle, and that they are in tune with the EC-Model (Eco-Cognitive Model) of abduction I have introduced in [4,55,63,66].
17.
Of course, many of the strategies of a good player are already mentally present thanks to the experience of several previous games.
18.
Many interesting examples can be found in the recent handbook [67].
19.
It is necessary to select from pre-stored diagnostic hypotheses.
20.
Obviously, new rules and new boards can be proposed, for example, thus creating new types of game, but this possibility does not jeopardize my argument.
21.
For some notes on the area of so-called automated scientific discovery in AI, cf. ([4], chapter 2, section 2.7, "Automatic Abductive Scientists").
22.
Date of access: 10 January 2019.
23.
Relatively recent bibliographic references can be found in my book [73].
24.
On this problem and on other negative epistemological uses of computational programs, cf. the recent [74].
