Publication

When, What, and How Much to Reward in Reinforcement Learning-Based Models of Cognition

Janssen, C. P. & Gray, W. D., Mar-2012, In: Cognitive Science, 36(2), p. 333-358, 26 p.

Research output: Contribution to journal › Article › Academic › peer-review

  • Christian P. Janssen
  • Wayne D. Gray

Reinforcement learning approaches to cognitive modeling represent task acquisition as learning to choose the sequence of steps that accomplishes the task while maximizing a reward. However, an apparently unrecognized problem for modelers is choosing when, what, and how much to reward; that is, when (the moment: end of trial, subtask, or some other interval of task performance), what (the objective function: e.g., performance time or performance accuracy), and how much (the magnitude: with binary, categorical, or continuous values). In this article, we explore the problem space of these three parameters in the context of a task whose completion entails some combination of 36 state-action pairs, where all intermediate states (i.e., after the initial state and prior to the end state) represent progressive but partial completion of the task. Different choices produce profoundly different learning paths and outcomes, with the strongest effect for moment. Unfortunately, there is little discussion in the literature of the effect of such choices. This absence is disappointing, as the choice of when, what, and how much needs to be made by a modeler for every learning model.
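Purely as an illustration of the three reward-design choices the abstract names, and not the authors' model, the Python sketch below parameterizes a toy reward function by moment, objective, and magnitude and runs a simple delta-rule learner over an arbitrary 6 × 6 grid of state-action pairs standing in for the paper's 36-pair task space. All function names, task values, and learning parameters here are assumptions made for this sketch.

```python
# Hypothetical sketch: how "when" (moment), "what" (objective), and
# "how much" (magnitude) can be treated as independent reward parameters.
import random


def reward(moment, objective, magnitude, *, step, n_steps, step_time, correct):
    """Reward delivered after one state-action step (illustrative only).

    moment:    'step' rewards every step; 'trial' rewards only the final step.
    objective: 'time' rewards speed; 'accuracy' rewards correctness.
    magnitude: 'binary' gives 0/1; 'continuous' gives a graded value.
    """
    at_end = step == n_steps - 1
    if moment == "trial" and not at_end:
        return 0.0                         # withhold reward until trial completion
    if objective == "time":
        value = 1.0 / step_time            # faster steps earn more credit
    else:
        value = 1.0 if correct else 0.0    # accuracy-based credit
    if magnitude == "binary":
        return 1.0 if value >= 1.0 else 0.0
    return value


def run(moment="trial", objective="time", magnitude="continuous",
        n_steps=6, n_actions=6, episodes=200, alpha=0.1, epsilon=0.1):
    """Delta-rule update of per-step action values (no TD bootstrapping,
    to keep the sketch short); epsilon-greedy action selection."""
    q = [[0.0] * n_actions for _ in range(n_steps)]
    for _ in range(episodes):
        for step in range(n_steps):
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q[step][i])
            step_time = 0.5 + a * 0.2      # pretend higher-numbered actions are slower
            correct = a == step            # pretend one "correct" action per step
            r = reward(moment, objective, magnitude,
                       step=step, n_steps=n_steps,
                       step_time=step_time, correct=correct)
            q[step][a] += alpha * (r - q[step][a])
    return q


if __name__ == "__main__":
    # Compare step-level versus trial-level reward delivery.
    for moment in ("step", "trial"):
        q = run(moment=moment)
        best = [max(range(6), key=lambda i: q[s][i]) for s in range(6)]
        print(moment, best)
```

Because this toy learner does not bootstrap value across steps, delivering reward only at trial completion leaves the earlier steps untrained, which is one way the choice of moment can reshape the learning path even when objective and magnitude are held fixed.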

Original language: English
Pages (from-to): 333-358
Number of pages: 26
Journal: Cognitive Science
Volume: 36
Issue number: 2
Publication status: Published - Mar-2012

Keywords

  • Reinforcement learning, Choice, Strategy selection, Adaptive behavior, Expected utility, Expected value, Cognitive architecture, Skill acquisition and learning, MANIPULATING INFORMATION ACCESS, INTERACTIVE BEHAVIOR, SOFT CONSTRAINTS, RECURRENT CHOICE, TASK, STRATEGIES, ENVIRONMENT, ADAPTATION, MEMORY, ALLOCATION

ID: 5514960