
Learning #5

@sdesalas

Description
  1. Network decay should be spread evenly over all the synapses that did not fire during the learning period, keeping the overall network weight constant.
  2. This provides a very elegant solution to the problem of network overload through positive reinforcement.
  3. This way the network strength will increase over time (more synapses over the threshold), while the average weight remains constant. (Exponential weight decay might be a bit tricky, though.)
  4. Negative reinforcement is perhaps not handled well at the moment. At present it inhibits recently fired synapses, but also adds some randomness by increasing the weight of synapses that did not fire recently. If compensatory decay is used instead (as per point 1 above), it may take care of the need to allow alternate pathways... do random increases actually help?
  5. Negatively weighted (inhibitory) synapses are a bit of a conundrum. If reinforcement is positive, one would assume they need to provide further inhibition, but at the moment they will swing the other way.
  6. SignalFireThreshold is another tricky one. When synapses are decaying it makes sense for them to decay towards somewhere below the threshold, perhaps the average network weight, but what should that be? A value derived from the fire threshold (e.g. half of it)?
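Points 1-3 could be sketched roughly as follows. This is a hypothetical illustration, not the project's actual implementation: the function name, the flat weight list, and the equal-split redistribution are all assumptions. The idea is that whatever reward is added to the synapses that fired is taken back, evenly, from the synapses that did not, so the network's total (and hence average) weight stays constant.

```python
# Hypothetical sketch of compensatory decay (points 1-3).
# Assumption: synapse weights are a flat list and the reward added to
# fired synapses is redistributed as decay over the idle ones.

def reinforce_with_compensatory_decay(weights, fired, reward):
    """weights: list of synapse weights
    fired: set of indices that fired during the learning period
    reward: total weight to add across the fired synapses"""
    idle = [i for i in range(len(weights)) if i not in fired]
    if not fired or not idle:
        return list(weights)  # nothing to reinforce or nothing to decay
    boost = reward / len(fired)  # reward spread over fired synapses
    decay = reward / len(idle)   # equal total decay over idle synapses
    return [w + boost if i in fired else w - decay
            for i, w in enumerate(weights)]
```

For example, with four synapses of weight 1.0, firing indices {0, 1} and a reward of 1.0 yields `[1.5, 1.5, 0.5, 0.5]`: individual synapses drift above or below the threshold, but the total network weight is unchanged at 4.0.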
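For point 6, one possible answer is exponential decay towards a rest value derived from the fire threshold. The sketch below assumes a threshold of 1.0, a rest target of half the threshold, and a decay rate of 0.1 per learning period; all three values are placeholders, not anything the project specifies.

```python
# Hypothetical sketch for point 6: idle synapses decay exponentially
# towards a rest value below SignalFireThreshold, rather than by a
# fixed subtraction. All constants here are assumed for illustration.

SIGNAL_FIRE_THRESHOLD = 1.0                # assumed threshold value
REST_TARGET = SIGNAL_FIRE_THRESHOLD / 2    # decay target: half the threshold
DECAY_RATE = 0.1                           # fraction of the gap closed per period

def decay_towards_rest(weight):
    # Each period, the weight moves DECAY_RATE of the way to REST_TARGET,
    # so it approaches the rest value asymptotically and never crosses it.
    return weight + (REST_TARGET - weight) * DECAY_RATE
```

With these numbers, a synapse at 1.5 decays to 1.4 after one period, while a synapse already at the rest target (0.5) stays put, which is one way of making "somewhere below the threshold" a calculated value rather than a magic number.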
