[Illinois]: Perturbative Reinforcement Learning to Develop Distributed Representations

By AbderRahman N Sobh1; Jessica S Johnson1; NanoBio Node1

1. University of Illinois at Urbana-Champaign

This tool trains three-layered networks of sigmoidal units to associate patterns.

Version 1.0d - published on 06 Aug 2014

doi:10.4231/D3FF3M12N

Category

Tools

Abstract

From Tutorial on Neural Systems Modeling, Chapter 7: Simultaneous perturbative reinforcement learning is effective not only in two-layered but also in three-layered neural networks. Input-hidden, hidden-output, and bias weights can all be perturbed simultaneously. We will use perturbative reinforcement learning (the directed drift algorithm) in this section to show how two different input signals are distributed over the hidden units in a three-layered, feedforward neural network. Specifically, we will use perturbative reinforcement learning to reproduce the results on the formation of a non-uniform distributed representation that we obtained in Chapter 6 using back-propagation. This simulation was a simplified version of a neural network model of distributed parallel processing in the vestibulo-oculomotor system (see Chapter 6).
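
The sketch below (not the tool's actual source code) illustrates the directed drift idea on a toy three-layered network of sigmoidal units: all input-hidden, hidden-output, and bias weights are perturbed simultaneously by a small random amount, and the perturbation is kept only if it reduces the output error. The network sizes, perturbation size, step count, and two-pattern training set are illustrative assumptions, not values taken from the tool.

    # Minimal sketch of perturbative reinforcement learning ("directed drift")
    # in a three-layered feedforward network of sigmoidal units.
    # Sizes, constants, and the two-pattern training set are assumed toy values.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Two input patterns and their desired outputs (assumed toy data).
    X = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    D = np.array([[1.0],
                  [0.0]])

    n_in, n_hid, n_out = X.shape[1], 4, D.shape[1]

    # Weight matrices include an extra column for the bias input.
    V = rng.normal(scale=0.1, size=(n_hid, n_in + 1))   # input -> hidden
    U = rng.normal(scale=0.1, size=(n_out, n_hid + 1))  # hidden -> output

    def forward(V, U, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # append bias input
        H = sigmoid(Xb @ V.T)
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])    # append bias unit
        return sigmoid(Hb @ U.T), H

    def sse(V, U):
        Y, _ = forward(V, U, X)
        return np.sum((D - Y) ** 2)

    perturb_size = 0.05
    error = sse(V, U)
    for step in range(20000):
        # Perturb input-hidden, hidden-output, and bias weights simultaneously.
        dV = perturb_size * rng.choice([-1.0, 1.0], size=V.shape)
        dU = perturb_size * rng.choice([-1.0, 1.0], size=U.shape)
        new_error = sse(V + dV, U + dU)
        if new_error < error:        # "reward": keep the perturbation
            V, U = V + dV, U + dU
            error = new_error

    Y, H = forward(V, U, X)
    print("final error:", error)
    print("hidden-unit responses to each input pattern:")
    print(H)  # the distributed representation across hidden units

After training, the rows of H show how each input pattern is spread, generally non-uniformly, across the hidden units, which is the distributed representation the tool examines.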

Sponsored by

NanoBio Node, University of Illinois at Urbana-Champaign

Cite this work

Researchers should cite this work as follows:

  • Thomas J. Anastasio (2010), Tutorial on Neural Systems Modeling, Sinauer Associates, Inc.
  • AbderRahman N Sobh, Jessica S Johnson, NanoBio Node (2014), "[Illinois]: Perturbative Reinforcement Learning to Develop Distributed Representations," https://nanohub.org/resources/pertdistrep. (DOI: 10.4231/D3FF3M12N).
