The tabular TD(0) method is one of the simplest TD methods. It is a special case of more general stochastic approximation methods. It estimates the state value function of a finite-state Markov decision process (MDP) under a policy $\pi$. Let $V^{\pi}$ denote the state value function of the MDP with states $(S_t)_{t\in\mathbb{N}}$, rewards $(R_t)_{t\in\mathbb{N}}$ and discount rate[9] $\gamma$ under the policy $\pi$:[10]

$$V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \,\middle|\, S_0 = s\right].$$
We drop the action from the notation for convenience. $V^{\pi}$ satisfies the Hamilton-Jacobi-Bellman Equation:

$$V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[R_1 + \gamma V^{\pi}(S_1) \mid S_0 = s\right].$$
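This identity can be verified, as a supplementary step not spelled out in the original text, by expanding the definition of $V^{\pi}$ and applying the law of total expectation:

$$\mathbb{E}_{\pi}\!\left[R_1 + \gamma V^{\pi}(S_1) \mid S_0 = s\right] = \mathbb{E}_{\pi}\!\left[R_1 + \gamma \sum_{t=0}^{\infty} \gamma^{t} R_{t+2} \,\middle|\, S_0 = s\right] = \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R_{t+1} \,\middle|\, S_0 = s\right] = V^{\pi}(s).$$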
Thus $R_1 + \gamma V^{\pi}(S_1)$ is an unbiased estimate for $V^{\pi}(s)$. This observation motivates the following algorithm for estimating $V^{\pi}$.
The algorithm starts by initializing a table $V(s)$ arbitrarily, with one value for each state of the MDP. A positive learning rate $\alpha$ is chosen.
We then repeatedly evaluate the policy $\pi$, obtain a reward $r$, and update the value function for the current state using the rule:[11]

$$V(S_t) \leftarrow V(S_t) + \alpha\left[R_{t+1} + \gamma V(S_{t+1}) - V(S_t)\right],$$
where $S_t$ and $S_{t+1}$ are the current and next states, respectively. The value $R_{t+1} + \gamma V(S_{t+1})$ is known as the TD target, and $R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$ is known as the TD error.
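For illustration, a minimal sketch of this update loop in Python is given below. The environment interface (`env.reset()` and `env.step()` returning a `(next_state, reward, done, info)` tuple), the `policy` callable, and the parameter values are assumptions made for the example rather than part of the algorithm's specification.

```python
import collections

def td0_evaluate(env, policy, gamma=0.9, alpha=0.1, num_episodes=1000):
    """Tabular TD(0) policy evaluation (illustrative sketch)."""
    V = collections.defaultdict(float)  # value table, arbitrary initialization (here: 0.0)
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)                          # follow the fixed policy pi
            next_state, reward, done, _ = env.step(action)  # assumed Gym-style 4-tuple
            # TD target R_{t+1} + gamma * V(S_{t+1}); a terminal state is given value 0.
            target = reward if done else reward + gamma * V[next_state]
            # TD error = target - current estimate; move V(S_t) a step of size alpha toward it.
            V[state] += alpha * (target - V[state])
            state = next_state
    return dict(V)
```

Here a terminal state is treated as having value zero, so the target there reduces to the final reward; under the usual stochastic-approximation conditions on $\alpha$, the table approaches $V^{\pi}$.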
TD-Lambda is a learning algorithm invented by Richard S. Sutton based on earlier work on temporal difference learning by Arthur Samuel.[12] This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon at the level of expert human players.[13]
The lambda ($\lambda$) parameter refers to the trace decay parameter, with $0 \leqslant \lambda \leqslant 1$. Higher settings lead to longer-lasting traces; that is, a larger proportion of credit from a reward can be assigned to more distant states and actions when $\lambda$ is higher, with $\lambda = 1$ producing learning that parallels Monte Carlo RL algorithms.[14]
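To make the role of the trace decay parameter concrete, the following adapts the TD(0) sketch above to tabular TD($\lambda$) with accumulating eligibility traces; as before, the environment and policy interfaces and the parameter values are assumptions made for the example, not part of the original text.

```python
import collections

def td_lambda_evaluate(env, policy, gamma=0.9, alpha=0.1, lam=0.8, num_episodes=1000):
    """Tabular TD(lambda) policy evaluation with accumulating eligibility traces (sketch)."""
    V = collections.defaultdict(float)
    for _ in range(num_episodes):
        traces = collections.defaultdict(float)             # eligibility trace per state
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done, _ = env.step(action)  # assumed Gym-style 4-tuple
            target = reward if done else reward + gamma * V[next_state]
            td_error = target - V[state]
            traces[state] += 1.0                            # current state becomes fully eligible
            for s in traces:                                # credit every previously visited state
                V[s] += alpha * td_error * traces[s]
                traces[s] *= gamma * lam                    # traces decay by gamma * lambda each step
            state = next_state
    return dict(V)
```

Setting `lam=0` zeroes every trace except the current state's, recovering TD(0), while `lam=1` lets traces decay only by $\gamma$, spreading credit across the whole episode in the manner of a Monte Carlo update.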
The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function in the algorithm.[15][16][17][18][19] The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.
Dopamine cells appear to behave in a similar manner. In one experiment, measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice.[20] Initially the dopamine cells increased their firing rates when the monkey received juice, indicating a difference between expected and actual rewards. Over time this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. Subsequently, the firing rate of the dopamine cells decreased below normal activation when the expected reward was not produced. This closely mimics how the error function in TD is used for reinforcement learning.
The relationship between the model and potential neurological function has produced research attempting to use TD to explain many aspects of behavioral research.[21][22] It has also been used to study conditions such as schizophrenia or the consequences of pharmacological manipulations of dopamine on learning.[23]
Sutton & Barto (2018), p. 133. - Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press. http://www.incompleteideas.net/book/the-book.html
Sutton, Richard S. (1 August 1988). "Learning to predict by the methods of temporal differences". Machine Learning. 3 (1): 9–44. doi:10.1007/BF00115009. ISSN 1573-0565. S2CID 207771194. https://doi.org/10.1007%2FBF00115009
Schultz, W.; Dayan, P.; Montague, P. R. (1997). "A neural substrate of prediction and reward". Science. 275 (5306): 1593–1599. CiteSeerX 10.1.1.133.6176. doi:10.1126/science.275.5306.1593. PMID 9054347. S2CID 220093382.
Montague, P. R.; Dayan, P.; Sejnowski, T. J. (1996-03-01). "A framework for mesencephalic dopamine systems based on predictive Hebbian learning" (PDF). The Journal of Neuroscience. 16 (5): 1936–1947. doi:10.1523/JNEUROSCI.16-05-01936.1996. ISSN 0270-6474. PMC 6578666. PMID 8774460. http://papers.cnl.salk.edu/PDFs/A%20Framework%20for%20Mesencephalic%20Dopamine%20Systems%20Based%20on%20Predictive%20Hebbian%20Learning%201996-2938.pdf
Montague, P. R.; Dayan, P.; Nowlan, S. J.; Pouget, A.; Sejnowski, T. J. (1993). "Using aperiodic reinforcement for directed self-organization" (PDF). Advances in Neural Information Processing Systems. 5: 969–976. http://www.gatsby.ucl.ac.uk/~dayan/papers/mdnps93.pdf
Montague, P. R.; Sejnowski, T. J. (1994). "The predictive brain: temporal coincidence and temporal order in synaptic learning mechanisms". Learning & Memory. 1 (1): 1–33. doi:10.1101/lm.1.1.1. ISSN 1072-0502. PMID 10467583. S2CID 44560099. https://doi.org/10.1101%2Flm.1.1.1
Sejnowski, T. J.; Dayan, P.; Montague, P. R. (1995). "Predictive Hebbian learning". Proceedings of the Eighth Annual Conference on Computational Learning Theory - COLT '95. pp. 15–18. doi:10.1145/225298.225300. ISBN 0897917235. S2CID 1709691.
The discount rate parameter allows for a time preference toward more immediate rewards, and away from distant future rewards.
Sutton & Barto (2018), p. 134. - Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press. http://www.incompleteideas.net/book/the-book.html
Sutton & Barto (2018), p. 135. - Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press. http://www.incompleteideas.net/book/the-book.html
Sutton & Barto (2018), p. 130?. - Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press. http://www.incompleteideas.net/book/the-book.html
Tesauro (1995). - Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. S2CID 6023746. http://www.research.ibm.com/massive/tdl.html
Sutton & Barto (2018), p. 175. - Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction (2nd ed.). Cambridge, MA: MIT Press. http://www.incompleteideas.net/book/the-book.html
Schultz, W. (1998). "Predictive reward signal of dopamine neurons". Journal of Neurophysiology. 80 (1): 1–27. CiteSeerX 10.1.1.408.5994. doi:10.1152/jn.1998.80.1.1. PMID 9658025. S2CID 52857162.
Dayan, P. (2001). "Motivated reinforcement learning" (PDF). Advances in Neural Information Processing Systems. 14. MIT Press: 11–18. http://books.nips.cc/papers/files/nips14/CS01.pdf
Tobia, M. J.; et al. (2016). "Altered behavioral and neural responsiveness to counterfactual gains in the elderly". Cognitive, Affective, & Behavioral Neuroscience. 16 (3): 457–472. doi:10.3758/s13415-016-0406-7. PMID 26864879. S2CID 11299945. https://doi.org/10.3758%2Fs13415-016-0406-7
Smith, A.; Li, M.; Becker, S.; Kapur, S. (2006). "Dopamine, prediction error, and associative learning: a model-based account". Network: Computation in Neural Systems. 17 (1): 61–84. doi:10.1080/09548980500361624. PMID 16613795. S2CID 991839.