Publication
Cognitive Flexibility in Cognitive Architecture: Simulating using Contextual Learning in PRIMs
Ji, Y., van Rij, J. & Taatgen, N., 2020, Poster session presented at 18th International Conference on Cognitive Modeling.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review
Standard
Cognitive Flexibility in Cognitive Architecture: Simulating using Contextual Learning in PRIMs. / Ji, Yang; van Rij, Jacolien; Taatgen, Niels.
Poster session presented at 18th International Conference on Cognitive Modeling. 2020.
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review
RIS
TY - GEN
T1 - Cognitive Flexibility in Cognitive Architecture
T2 - The 18th Annual Meeting of the International Conference on Cognitive Modelling
AU - Ji, Yang
AU - van Rij, Jacolien
AU - Taatgen, Niels
PY - 2020
Y1 - 2020
N2 - The universal flexibility of biological systems needs to be reflected in cognitive architecture. In PRIMs, we attempt to achieve flexibility through a bottom-up approach. Using contextual learning, the random firing of a set of instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial “skills”). Based on this implementation, preliminary results show that the model’s simulated average single-pattern processing latencies are consistent with infants’ differential focusing times in three theoretically controversial artificial language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In our ongoing work, we are analyzing (a) whether the model arrives at primordial “skills” that are adaptive to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
AB - The universal flexibility of biological systems needs to be reflected in cognitive architecture. In PRIMs, we attempt to achieve flexibility through a bottom-up approach. Using contextual learning, the random firing of a set of instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial “skills”). Based on this implementation, preliminary results show that the model’s simulated average single-pattern processing latencies are consistent with infants’ differential focusing times in three theoretically controversial artificial language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In our ongoing work, we are analyzing (a) whether the model arrives at primordial “skills” that are adaptive to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
M3 - Conference contribution
BT - Poster session presented at 18th International Conference on Cognitive Modeling
Y2 - 22 July 2020 through 31 July 2020
ER -
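The abstract describes contextual learning in PRIMs only at a high level. As an illustration only (this is not the authors' PRIMs code; the operator names, the softmax selection rule, and the reward scheme below are assumptions made here), a toy Python sketch of how initially random operator firing can settle into context-sensitive firing sequences:

import math
import random
from collections import defaultdict

# Hypothetical primitive operators; placeholders, not PRIMs internals.
OPERATORS = ["attend", "compare", "store", "retrieve"]

# Per-context operator utilities. All start equal, so early firing is random.
utility = defaultdict(lambda: {op: 0.0 for op in OPERATORS})

def fire(context, temperature=0.3):
    """Softmax choice over operators: flat utilities give random firing,
    learned utilities give context-sensitive firing."""
    weights = [math.exp(utility[context][op] / temperature) for op in OPERATORS]
    threshold = random.uniform(0.0, sum(weights))
    cumulative = 0.0
    for op, weight in zip(OPERATORS, weights):
        cumulative += weight
        if threshold <= cumulative:
            return op
    return OPERATORS[-1]

def reinforce(context, op, reward, lr=0.1):
    """Move the utility of the operator that fired toward the received reward."""
    utility[context][op] += lr * (reward - utility[context][op])

# Toy training loop: if "syllable-onset" consistently rewards "attend" and
# "syllable-offset" rewards "store", firing in these contexts stops being
# random and forms a repeatable operator sequence (a primordial "skill").
for _ in range(2000):
    for context, good_op in [("syllable-onset", "attend"), ("syllable-offset", "store")]:
        op = fire(context)
        reinforce(context, op, reward=1.0 if op == good_op else 0.0)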