Related papers
Almost Ideal: Computational Epistemology and the Limits of Rationality for Finite Reasoners
Danilo Fraga Dantas
University of California, Davis, 2016
The notion of an ideal reasoner has several uses in epistemology. Often, ideal reasoners are used as a parameter of (maximum) rationality for finite reasoners (e.g. humans). However, the notion of an ideal reasoner is normally construed with such a high degree of idealization (e.g. infinite/unbounded memory) that this use is ill-advised. In this dissertation, I investigate the conditions under which an ideal reasoner may be used as a parameter of rationality for finite reasoners. In addition, I present and justify the research program of computational epistemology, which investigates the parameter of maximum rationality for finite reasoners using computer simulations. In chapter 1, I investigate the use of ideal reasoners for stating the maximum (and minimum) bounds of rationality for finite reasoners. I propose the notion of a strictly ideal reasoner, which coincides with the notion of maximum rationality. The notion of a strictly ideal reasoner is relative to a logic and a notion of beliefs (explicit, implicit, etc.). I argue that, for some relevant logics, a finite reasoner may only approach maximum rationality at the limit of a reasoning sequence (stable beliefs). In chapter 2, I investigate the use of ideal reasoners in the zombie argument against physicalism (Chalmers, 2010). This notion is used in the principle that ideal negative conceivability entails possibility. The conclusion is that the zombie argument is neither an a priori nor a conclusive argument against physicalism. In chapter 3, I investigate the notion of maximum (and minimum) epistemic rationality for finite reasoners. Epistemic rationality is often related to maximizing true beliefs and minimizing false beliefs. I argue that most existing models of maximum epistemic rationality have problems dealing with blindspots and propose a model in terms of the maximization of a function g, which evaluates sets of beliefs regarding truth/falsehood. However, the function g may only be maximized at the limit of a reasoning sequence. In chapter 4, I argue that if maximum (epistemic) rationality for finite reasoners must be understood in terms of the limit of a reasoning sequence, then issues about the computational complexity of reasoning are relevant to epistemology. I then propose the research program of computational epistemology, which uses computer simulations to investigate maximum (epistemic) rationality for finite reasoners while taking the computational complexity of reasoning into account. In chapter 5, I provide an example of an investigation in computational epistemology. More specifically, I compare two models of maximum rationality for situations of uncertain reasoning: the theory of defeasible reasoning (Pollock, 1995) and Bayesian epistemology (Joyce, 2011).
Artificial Intelligence
Siva K
Artificial intelligence, cognition, machine learning, robotics, automation
FIRST-ORDER LOGIC
Amit Negi
In which we notice that the world is blessed with many objects, some of which are related to other objects, and in which we endeavor to reason about them.
Conditional Partial Plans for Rational Situated Agents Capable of Deductive Reasoning and Inductive Learning
Slawomir Nowaczyk
2008
Rational, autonomous agents that are able to achieve their goals in dynamic, partially observable environments have been the ultimate dream of Artificial Intelligence research since its beginning. The goal of this PhD thesis is to propose, develop, and evaluate a framework well suited for creating intelligent agents that would be able to learn from experience, thus becoming more efficient at solving their tasks.
Impossible States at Work: Logical Omniscience, Partial Beliefs and Rational Choice
Mikaël Cozic
Workshop on Logics for Resource-Bounded Agents
Developments in reasoning about knowledge have picked up pace since the fundamental work of Jaakko Hintikka [4] and David Lewis [5]. A great deal of purely technical work has since come out of IBM [3], CUNY [7-13], Indiana, Amsterdam, and other places.
Don’t Overthink It! On the rationality of following your gut
Danilo Fraga Dantas
Dual-process theories state that human reasoning comprises processes of Type 1 (‘intuitive’: fast and effortless, but error-prone) and Type 2 (‘deliberative’: slow and effortful, but normative). Fully probabilistic reasoning demands Type 2 processing, and much of the research on dual processing is concerned with systematic violations of probability, which is treated as a normative model for situations of uncertain reasoning. Bayesian epistemologists argue that the tenets of probability are, in fact, requirements of rationality. But is it always rational to perform Type 2 processing in situations of uncertain reasoning? I constructed AI agents based on the Bayesian model and on a nonmonotonic framework used in psychology to model Type 1 processes, and tested how they perform in an epistemic version of the Wumpus World, a class of problems used in AI for studying uncertain reasoning. The results of the simulations suggest that it is rational to perform a Type 1 process (instead of Type 2 Bayesian reasoning) in situations where extreme risk aversion is rewarded (practical rationality) and where the evidence is relatively informative or difficult to gather (epistemic rationality).
Bayesian models of nonstationary Markov decision processes
Peter Stone
2005
Standard reinforcement learning algorithms generate policies that optimize expected future rewards in a priori unknown domains, but they assume that the domain does not change over time. Prior work cast the reinforcement learning problem as a Bayesian estimation problem, using experience data to condition a probability distribution over domains.
Using LTL Assumptions to Generate Safe Plans for Partially Known Domains
Alexandre Albore
2005
Planning for partially known domains is an extremely demanding task. However, it is often possible to formulate assumptions about the expected dynamics of the domain; these can be used to effectively cut the search, dramatically improving plan generation. In turn, the execution of assumption-based plans must be monitored to prevent run-time failures that may occur if the assumptions turn out to be untrue, and to replan in that case.
Algorithms and Networking for Computer Games
Jouni Smed
2006
Advances In Modeling Adaptive and Cognitive Systems
Angelo Loula
books.google.com, 2010
Contents:
Building adaptive and cognitive systems (João Queiroz & Angelo Loula), pages 1-3
Artificial Life: Prospects of a Synthetic Biology (Jon Umerez), pages 4-16
Interdisciplinary Engineering of Intelligent Systems. Some Methodological Issues (Gerd Doeben-Henisch, Ute Bauer-Wersing, Louwrence Erasmus, Ulrich Schrader, and Matthias Wagner), pages 17-28
Is Life Computable? (Anthony Chemero and Michael T. Turvey), pages 29-37
First steps toward a cognitive architecture based on adaptive automata (João Eduardo Kogler Junior and Reginaldo Inojosa Filho), pages 38-47
An Emotional-Evolutionary Technique for Low-Level Goal Definition in a Multi-Purpose Artificial Creature (Patrícia de Toro, Ricardo Gudwin, and Mauro Miskulin), pages 48-59
A Memory Model for Cognitive Agents (Guilherme Bittencourt), pages 60-76
Intelligent agents capable of developing memory of their environment (Gul Muhammad Khan, Julian F. Miller, and David M. Halliday), pages 77-114