===Bush and Mosteller Learning Curve===
This very general idea of conditional learning was first mathematically formalized when Bush and Mosteller (23, 24) proposed that the probability of Pavlov's (22) dog expressing the salivary response on sequential trials could be computed through an iterative equation: [[File:Bush and Mosteller eq1.jpg]] In this equation, Anext_trial is the probability that salivation will occur on the next trial (or, more formally, the associative strength of the connection between the bell and salivation). To compute Anext_trial, one begins with the value of A on the previous trial and adds to it a correction based on the animal's experience during the most recent trial. This correction, or error term, is the difference between what the animal actually experienced (in this case, the reward of the meat powder, expressed as Rcurrent_trial) and what it expected (simply, what A was on the previous trial). The difference between what was obtained and what was expected is multiplied by α, a number ranging from 0 to 1, known as the learning rate. When α = 1, A is always immediately updated so that it equals R from the last trial. When α = 0.5, only one-half of the error is corrected, and the value of A converges in half steps to R. When the value of α is small, around 0.1, A is only very slowly incremented toward the value of R. What the Bush and Mosteller (23, 24) equation does is compute an average of rewards across previous trials. In this average, the most recent rewards have the greatest impact, whereas rewards far in the past have only a weak impact. If, to take a concrete example, α = 0.5, then the equation takes the most recent reward, uses it to compute the error term, and multiplies that term by 0.5.
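The update described above can be sketched in a few lines of Python, assuming the pictured equation takes the standard Bush and Mosteller form Anext_trial = Acurrent_trial + α(Rcurrent_trial − Acurrent_trial); the function name and the constant reward of 1.0 are illustrative, not from the original sources:

```python
def bush_mosteller_update(a_prev, reward, alpha):
    """One Bush-Mosteller step: correct A by the fraction alpha of the
    prediction error (reward actually received minus reward expected)."""
    return a_prev + alpha * (reward - a_prev)

# With alpha = 0.5 and a constant reward R = 1.0, A converges to R in
# half steps, exactly as described in the text.
a = 0.0
for trial in range(4):
    a = bush_mosteller_update(a, reward=1.0, alpha=0.5)
    print(trial, a)
# prints:
# 0 0.5
# 1 0.75
# 2 0.875
# 3 0.9375
```

Setting alpha to 1.0 in the same loop would snap A to 1.0 on the first trial, while alpha = 0.1 would close only a tenth of the gap per trial.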
One-half of the new value of A is, thus, constructed from this most recent observation. That means that the sum of all previous error terms (those from all trials in the past) has to count for the other one-half of the estimate. Accordingly, if one looks at that older one-half of the estimate, one-half of that one-half comes from what was observed one trial ago (thus, 0.25 of the total estimate) and one-half (thus, 0.25 of the estimate) comes from the sum of all trials before that one. The iterative equation reflects a weighted sum of previous rewards. When the learning rate (α) is 0.5, the weighting rule effectively being carried out is [[File:Bush and Mosteller eq2.gif]] an exponential series, the rate at which the weight declines being controlled by α. When α is high, the exponential function declines rapidly and puts all of the weight on the most recent experiences of the animal. When α is low, it declines slowly and averages together many observations, as shown in Fig. 1. [[File:Bush and Mosteller graph1.gif|Weights determining the effects of previous rewards on current associative strength effectively decline as an exponential function of time (65). [Reproduced with permission from Oxford University Press from ref. 65 (Copyright 2010, Paul W. Glimcher).]]] The Bush and Mosteller (23, 24) equation was critically important because it was the first use of this kind of iterative error-based rule for reinforcement learning; additionally, it forms the basis of all modern approaches to this problem. This is a fact often obscured by what is known as the Rescorla–Wagner model of classical conditioning (25). The Rescorla–Wagner model was an important extension of the Bush and Mosteller approach (23, 24) to the study of what happens to associative strength when two cues predict the same event.
Their findings were so influential that the basic Bush and Mosteller rule is now often mistakenly attributed to Rescorla and Wagner by neurobiologists.
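The claim that the iterative rule computes an exponentially weighted average of past rewards can be checked numerically. In the sketch below (the function names and the sample reward sequence are hypothetical), the reward from k trials ago receives weight α(1 − α)^k, and the two computations agree exactly:

```python
def iterate(rewards, alpha, a0=0.0):
    """Apply the Bush-Mosteller update once per trial, in order."""
    a = a0
    for r in rewards:
        a = a + alpha * (r - a)
    return a

def weighted_sum(rewards, alpha, a0=0.0):
    """Equivalent closed form: the reward from k trials ago gets weight
    alpha * (1 - alpha)**k; the initial value keeps weight (1 - alpha)**n."""
    n = len(rewards)
    total = a0 * (1 - alpha) ** n
    for k, r in enumerate(reversed(rewards)):
        total += alpha * (1 - alpha) ** k * r
    return total

rewards = [1.0, 0.0, 1.0, 1.0, 0.0]  # arbitrary example sequence
print(iterate(rewards, 0.5))       # prints 0.40625
print(weighted_sum(rewards, 0.5))  # prints 0.40625
```

With α = 0.5 the weights are 0.5, 0.25, 0.125, ..., matching the half-and-quarter decomposition of the estimate described in the text; a larger α concentrates the weight on recent trials, a smaller α spreads it over many.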
==References==
<references/>