Contact

For general inquiries, please contact Yang Liu or use the contact form on the right.


DECISION-AI I

Decision Theory & the Future of Artificial Intelligence

28–31 July 2017
Cambridge, United Kingdom

There is increasing interest in the challenges of ensuring that the long-term development of artificial intelligence (AI) is safe and beneficial. Moreover, despite different perspectives, there is much common ground between mathematical and philosophical decision theory, on the one hand, and AI, on the other. The aim of this workshop – intended to be the first in a regular series organised jointly by the Munich Center for Mathematical Philosophy (MCMP) at LMU Munich and the Leverhulme Centre for the Future of Intelligence (CFI) and the Centre for the Study of Existential Risk (CSER) at Cambridge – is to bring the expertise of decision theory to bear on the challenges of the beneficial development of AI, by fostering links and joint research at the nexus of the two fields. The inaugural workshop will focus on community building, road-mapping, and identifying useful research programs at the intersection of decision theory and AI.

Speakers

R.A. Briggs (Stanford University)
David Danks (Carnegie Mellon University)
Frederick Eberhardt (Caltech)
Ulrike Hahn (Birkbeck, University of London)
Joe Halpern (Cornell University)
Thomas Icard (Stanford University)
Ramana Kumar (Data61, CSIRO/UNSW)
Pedro Ortega (Google DeepMind)
Reuben Stern (LMU Munich)
Jiji Zhang (Lingnan University)

Registration

Registration for this event is now closed, as we have reached the maximum capacity of the venue. Thank you for your interest.

Organisers

Yang Liu (Cambridge, local contact)
Huw Price (Cambridge)
Stephan Hartmann (LMU Munich)

Program

28 July Old Combination Room 

19:00    Reception

29 July Junior Parlour  

09:30     Coffee/Tea
09:55     Seán Ó hÉigeartaigh: Opening
10:00     David Danks: The Challenge of Extrinsic Specifications
11:30     R.A. Briggs: Real-Life Newcomb Problems?
12:45     Lunch
14:00     Reuben Stern: Why Think That Causes Must Precede Their Effects
15:30     Ramana Kumar: New Work for Decision Theorists

30 July Old Combination Room

09:30     Coffee/Tea
10:00     Joe Halpern: Decision Theory with Resource-Bounded Agents
11:30     Pedro Ortega: Information-Theoretic Bounded Rationality
12:45     Lunch
14:00     Frederick Eberhardt: Some Constraints on the Choice of Causal Variables
15:30     Jiji Zhang: Towards a Decision-Theoretic Foundation for Causal Bayes Nets

31 July Old Combination Room

09:30    Coffee/Tea
10:00    Ulrike Hahn: Is Decision Theory Relevant to the Future of AI?
11:30    Thomas Icard: Algorithmic Rationality and the Role of Randomization
12:45    Lunch

Abstracts

Real-Life Newcomb Problems?
R.A. Briggs

Decision theorists have spilled much ink over the difference between causal and evidential decision rules, whose recommendations differ in puzzle cases. Recently, a class of rivals to causal and evidential rules has emerged, aimed at solving more complicated puzzle cases. But how realistic are the puzzle cases where these decision rules disagree? I argue that such cases are rare, and can often be handled without addressing the difference between decision rules, by changing other parts of the problem description.
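
For readers who want the canonical puzzle case spelled out, here is a minimal sketch of the standard Newcomb problem in Python; the payoffs and the predictor reliability are illustrative numbers of my own, not drawn from the talk.

    # Newcomb's problem: a highly reliable predictor puts $1,000,000 in an
    # opaque box iff it predicts you will take only that box; a transparent
    # box always contains $1,000.
    ACCURACY = 0.99                  # assumed predictor reliability
    OPAQUE, TRANSPARENT = 1_000_000, 1_000

    # Evidential decision theory: treat the act as evidence about the prediction.
    edt_one_box = ACCURACY * OPAQUE
    edt_two_box = (1 - ACCURACY) * OPAQUE + TRANSPARENT

    # Causal decision theory: the contents are already fixed, so for any
    # credence p that the opaque box is full, two-boxing gains an extra $1,000.
    p = 0.5
    cdt_one_box = p * OPAQUE
    cdt_two_box = p * OPAQUE + TRANSPARENT

    print(edt_one_box > edt_two_box)   # True: EDT recommends one-boxing
    print(cdt_two_box > cdt_one_box)   # True: CDT recommends two-boxing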

The Challenge of Extrinsic Specifications
David Danks

Decision theory (DT) and modern artificial intelligence methods (AI) have many overlaps in formalisms, approaches, and communities, and where they diverge, they typically do so in relatively complementary ways. In this talk, I will focus on a particular class of features shared by most DT and AI systems: namely, both require significant extrinsic specification of key components. In particular, both DT and AI systems require the user or developer to extrinsically specify (i) values or goals; (ii) possible actions or responses; and (iii) appropriateness of context. Increasing autonomy requires minimizing the amount of extrinsic specification, but beneficial uses of DT and AI depend critically on proper (for us) specification of these three components. We thus face a challenge: how can we increase autonomy while ensuring that these components are properly specified (so that the resulting system will be beneficial for us)? In this talk, I will first examine this tension in greater detail, and then explore several ways of minimizing extrinsic specification of these components, with a particular focus on the user-centric impacts of those methods.
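
To make the three extrinsic components concrete, here is a minimal sketch of my own devising (not from the talk): a trivial decision routine in which the goal, the action set, and the context judgement are all supplied by the designer rather than generated by the system.

    def choose(utility, actions, in_context):
        # (iii) appropriateness of context: judged outside the system
        if not in_context:
            raise RuntimeError("deployed outside the specified context")
        # (i) values/goals and (ii) possible actions: also supplied from outside
        return max(actions, key=utility)

    # Example: every ingredient of this "autonomous" choice is extrinsic.
    setpoint = choose(utility=lambda a: -abs(a - 21),  # goal: 21 degrees C
                      actions=[18, 20, 25],            # allowed setpoints
                      in_context=True)                 # "room temperature control"
    print(setpoint)                                    # 20, the closest to the goal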

Some Constraints on the Choice of Causal Variables
Frederick Eberhardt

From a purely probabilistic point of view, there are few constraints on how random variables can be combined in a model. For example, random variables that are functions of each other can be combined in a model without leading to outright contradictions or formal difficulties. Causal variables are sometimes described as random variables, but the additional causal commitments introduce constraints on the choice of causal variables that have received very little attention. I will describe some constraints on how causal variables can be constructed and show what goes wrong when they are not respected.
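
A toy illustration of the point, of my own devising rather than from the talk: if one variable is definitionally a function of another, treating both as causal variables renders interventions incoherent.

    # Suppose Y = 2X by definition, yet both X and Y are put in the model.
    x = 3
    y = 2 * x      # Y is a redescription of X, not a downstream effect of it

    # do(X = 5): an intervention should change X while leaving every other
    # mechanism intact -- but there is no mechanism from X to Y to invoke,
    # so the model gives no coherent answer for y.
    x = 5
    print(y == 2 * x)   # False: the variable choice, not the data, is at fault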

Is Decision Theory Relevant to the Future of AI?
Ulrike Hahn

The talk will examine several arguments for why decision theory might have little relevance to the future of AI. These arguments will be drawn both from the human psychology of judgment and decision-making and from the history of computational modelling research within Cognitive Science and AI.

Decision Theory with Resource-Bounded Agents
Joe Halpern

There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein, charges an agent for doing costly computation; the second, initiated by Neyman, does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the “complexity” of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases. Perhaps more importantly, it seems to capture a number of features of human behavior, as observed in experiments.

This is joint work with Rafael Pass and Lior Seeman. No previous background is assumed.
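
A toy rendering of the first approach, with made-up numbers of my own: the objects of choice are machines, and the payoff is the utility of the answer minus a charge for the machine's running time.

    def utility(answer):
        return 100 if answer == "correct" else 0

    def charged_value(machine, problem, cost_per_step):
        # Utility of the machine's output, minus a complexity charge.
        answer, steps = machine(problem)
        return utility(answer) - cost_per_step * steps

    # Two hypothetical strategies for the same problem:
    fast = lambda p: (("correct" if p["easy"] else "wrong"), 10)  # cheap heuristic
    slow = lambda p: ("correct", 10_000)                          # exhaustive search

    problem = {"easy": True}
    best = max([fast, slow],
               key=lambda m: charged_value(m, problem, cost_per_step=0.02))
    print(best is fast)   # True: once computation is charged for, the heuristic wins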

Algorithmic Rationality and the Role of Randomization
Thomas Icard 

It is commonly acknowledged that bringing decision theory and AI closer together will require a good theory of how decision quality trades off against the costs involved in thinking through decisions. There are various proposals about what such a theory might look like, going back at least to I.J. Good. In this talk I shall consider one specific question in this area:  what role should randomization play in decision making? Within traditional Bayesian decision theory randomized acts play a very limited role. By contrast, randomization is ubiquitous in the design of artificial agents at almost all levels of organization. I plan to explore this apparent discrepancy, and to ask whether the uses to which randomization is put can be unified under one (or only a few) general principles. The broader aim is to make progress on understanding rational decision making for resource-bounded agents.
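
The limited classical role of randomization can be put in one line (a standard observation, added here for reference): for an expected-utility maximiser, a mixed act is a probability mixture of pure acts, so its expected utility is a convex combination of theirs and can never exceed that of the best pure option:

    EU(\lambda\, a \oplus (1-\lambda)\, b) \;=\; \lambda\, EU(a) + (1-\lambda)\, EU(b) \;\le\; \max\{EU(a),\, EU(b)\}

Hence any positive case for randomizing must come from outside the classical picture, for instance from computational costs or adversarial environments.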

New Work for Decision Theorists
Ramana Kumar

There are two different projects that decision theorists can engage in. Roughly, they are (a) trying to discover the norms that govern instrumental reasoning, and (b) trying to figure out which decision procedures to install in our AIs, our institutions, and (if possible) ourselves. Whatever the relationship between these two projects, the two most popular answers to (a), namely Causal Decision Theory and Evidential Decision Theory, are clearly not good answers to (b). In this talk, which presents joint ongoing work with Daniel Kokotajlo, I hope to argue that project (b) exists, that it is important, and that decision theorists can productively contribute to it. Indeed, perhaps some decision theorists were working on it all along.

Information-Theoretic Bounded Rationality
Pedro Ortega

In this talk I provide an overview of information-theoretic bounded rationality. I show how to ground the theory on a stochastic computation model for large-scale choice spaces and then derive the free energy functional as the associated variational principle for characterizing bounded-rational decisions. These decision processes have three important properties: they trade off utility and decision complexity; they give rise to an equivalence class of behaviorally indistinguishable decision problems; and they possess natural stochastic choice algorithms. I will discuss a general class of bounded-rational sequential planning problems that encompasses some well-known classical planning algorithms as limit cases (such as Expectimax and Minimax), as well as trust- and risk-sensitive planning. Finally, I will point out formal connections to Bayesian inference and to regret theory.
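
For reference, a standard statement of the variational principle at issue, as it appears in the information-theoretic bounded-rationality literature (my summary, not taken verbatim from the talk): the bounded-rational agent maximises a free energy that trades expected utility against the information cost of deviating from a prior policy p_0, with the inverse temperature \beta setting the price of deliberation:

    F[p] \;=\; \mathbb{E}_p[U] \;-\; \tfrac{1}{\beta}\,\mathrm{KL}(p \,\|\, p_0),
    \qquad
    p^*(x) \;=\; \frac{p_0(x)\, e^{\beta U(x)}}{\sum_{x'} p_0(x')\, e^{\beta U(x')}}

As \beta \to \infty this recovers the classical expected-utility maximiser; as \beta \to 0 the agent simply follows the prior.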

Why Think That Causes Must Precede Their Effects
Reuben Stern

Though common sense says that causes must precede their effects, the hugely influential interventionist account of causation makes no reference to temporal precedence. Does common sense lead us astray? In this paper, I evaluate the power of the commonsense assumption from within the interventionist approach to causal modeling. On the one hand, I argue that if causes precede their effects, then one need not consider the outcomes of hypothetical interventions in order to infer causal relevance, and that one can instead use temporal information to infer exactly when X is causally relevant to Y. On the other hand, I argue that one must consider the outcomes of hypothetical interventions in order to infer the extent to which X causally explains Y, and that the commonsense assumption is therefore less useful when answering quantitative questions about causal explanatory power than when answering qualitative questions about causal relevance. Finally, I conclude by considering the upshot of these findings for decision theory—i.e., how and whether it may be useful to assume that causes precede their effects while determining what to do.

Towards a Decision-Theoretic Foundation for (Subjective) Causal Bayes Nets
Jiji Zhang

Causal Bayes nets have proved to be a powerful framework for research on causality in artificial intelligence and related fields. In this talk I explore a subjective interpretation of the framework in which (interventional) probabilities are taken to represent credences (of some sort). I argue that these credences are in general indeterminate and should usually be modelled by non-convex sets of probabilities. I then outline a generalization of Seidenfeld et al.'s theory of coherent choice functions that can potentially provide a decision-theoretic foundation for the subjective interpretation. 
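
For reference, the core formal commitments of the framework (standard material, not specific to this talk): a causal Bayes net factorizes the joint distribution according to the causal graph, and an intervention do(X_j = x_j^*) deletes the intervened variable's factor while leaving the others intact:

    P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P(x_i \mid pa_i),
    \qquad
    P(x_1, \dots, x_n \mid do(X_j = x_j^*)) \;=\; \prod_{i \neq j} P(x_i \mid pa_i)

(the right-hand side evaluated at x_j = x_j^*, and zero for inconsistent assignments). The subjective interpretation explored in the talk reads these probabilities as credences rather than objective chances.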

Venue

Trinity College, University of Cambridge, Cambridge, CB2 1TQ, United Kingdom


Sponsors