Neuronal decision making model

==Basic Mechanism==
[[File:Neural_network_two_paths.jpg|500px|thumb|center]]
For every [[node]] in a path there are several [[implications]], and every implication is attached at its end to a reward in the [[reward system]]. The [[value]] of the reward is signaled by its pleasantness or painfulness, its magnitude, its probability, and its immediacy. The more pleasant the expected reward, the more we are attracted to it; the more pain we expect it to bring, the less inclined we are to choose it. When pain is at stake, it repels us more strongly than an equivalent amount of pleasure attracts us. The stronger the reward, the more we are attracted to it if it is pleasant, or the more we distance ourselves from it if it is painful. The sooner we expect to receive the reward, the more likely we are to choose its course.

[[File:Attraction.png|500px|thumb|center|The reward-attraction function model]]

The value represented by the [[reward system]] can vary from positive to negative. The strength of the connection between an implication and its reward is composed of the intensity of the signal produced by the reward-system cells and the closeness of the reward to the action: an immediate reward strengthens the connection between implication and reward, while a delayed reward weakens it. This strength is produced according to the learning rules of [[LTP]] and [[STP]].
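A minimal sketch in Python (not part of the original model description) of how such a reward-attraction function might look. The loss-aversion weight and the exponential delay discount are illustrative assumptions, not values specified by the model.

<pre>
from dataclasses import dataclass
import math

# Illustrative constants -- the model does not specify exact values.
LOSS_AVERSION = 2.0   # pain repels more than an equal amount of pleasure attracts
DISCOUNT_RATE = 0.5   # how quickly a delayed reward loses its pull

@dataclass
class Reward:
    valence: float      # positive = pleasant, negative = painful
    magnitude: float    # strength of the reward signal
    probability: float  # estimated chance of actually receiving it
    delay: float        # expected time until the reward arrives

def attraction(r: Reward) -> float:
    """Signed attraction toward (or repulsion from) an expected reward."""
    value = r.valence * r.magnitude * r.probability
    if value < 0:
        value *= LOSS_AVERSION          # pain is weighted more heavily than pleasure
    return value * math.exp(-DISCOUNT_RATE * r.delay)  # immediacy increases the pull
</pre>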
[[File:Neural network two paths implications.jpg|500px|thumb|center]]
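Continuing the same sketch, choosing between the two paths in the figure could then amount to summing the attraction of each path's implications and following the strongest total. The <code>choose_path</code> helper and the example numbers are hypothetical.

<pre>
def choose_path(paths: dict[str, list[Reward]]) -> str:
    """Pick the path whose implications carry the highest total attraction."""
    return max(paths, key=lambda name: sum(attraction(r) for r in paths[name]))

# Example: a quick, mildly pleasant outcome vs. a larger but delayed, uncertain one.
paths = {
    "path A": [Reward(valence=+1.0, magnitude=1.0, probability=0.9, delay=0.0)],
    "path B": [Reward(valence=+1.0, magnitude=3.0, probability=0.5, delay=4.0)],
}
print(choose_path(paths))  # -> "path A" with these illustrative numbers
</pre>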