Automatic basis function construction

In machine learning, automatic basis function construction, also known as basis discovery, is a technique that finds a set of general-purpose (task-independent) basis functions to simplify complex data (state space) into a smaller, manageable form (lower-dimensional embedding) while accurately capturing the value function. Unlike methods relying on expert-designed functions, this approach works without prior knowledge of the specific problem area (domain), making it effective in situations where creating tailored basis functions is challenging or impractical.


Motivation

In reinforcement learning (RL), many real-world problems modeled as Markov Decision Processes (MDPs) involve large or continuous state spaces—sets of all possible situations an agent might encounter—which are too complex to handle directly and often need approximation for efficient computation.1

Linear function approximators2 (LFAs), valued for their simplicity and low computational demands, are a common solution. Using LFAs effectively requires addressing two challenges: optimizing weights (adjusting importance of features) and building basis functions (creating simplified representations). While specially designed basis functions can excel in specific tasks, they are limited to those particular areas (domains). Automatic basis function construction offers a more flexible approach, enabling broader use across diverse problems without relying on task-specific expertise.

Problem definition

A Markov decision process with finite state space and fixed policy is defined by a 5-tuple $(S, A, P, \gamma, r)$: the finite state space $S = \{1, 2, \ldots, |S|\}$, the finite action space $A$, the transition model $P$, the discount factor $\gamma \in [0, 1)$, and the reward function $r$.

The Bellman equation is defined as:

$v = r + \gamma P v.$

When the number of elements in $S$ is small, $v$ is usually maintained in tabular form, but when $S$ grows too large for this kind of representation, $v$ is commonly approximated by a linear combination of basis functions $\Phi = \{\phi_1, \phi_2, \ldots, \phi_n\}$,3 so that:

$v \approx \hat{v} = \sum_{i=1}^{n} \theta_i \phi_i$

Here $\Phi$ is an $|S| \times n$ matrix in which each row is the feature vector of the corresponding state, and $\theta$ is a weight vector with $n$ parameters; usually $n \ll |S|$.
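To make this concrete, the following minimal sketch (Python with NumPy; the random MDP, sizes, and basis are illustrative assumptions rather than anything prescribed by the methods below) solves the Bellman equation exactly for a small fixed-policy MDP and then computes least-squares weights $\theta$ for a given basis $\Phi$:

    # A minimal sketch of linear value-function approximation; the MDP and
    # the basis here are random and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_features, gamma = 50, 5, 0.9

    # Random row-stochastic transition matrix P and reward vector r.
    P = rng.random((n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)
    r = rng.random(n_states)

    # Exact value function: solve the Bellman equation v = r + gamma * P v.
    v = np.linalg.solve(np.eye(n_states) - gamma * P, r)

    # A basis matrix Phi with one feature vector per state (random here;
    # the methods below construct it automatically).
    Phi = rng.random((n_states, n_features))

    # Weight optimization: theta = argmin || Phi theta - v ||.
    theta, *_ = np.linalg.lstsq(Phi, v, rcond=None)
    v_hat = Phi @ theta
    print("approximation error:", np.linalg.norm(v - v_hat))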

Basis construction looks for ways to automatically build a better basis $\Phi$ that can represent the value function well.

A good construction method should have the following characteristics:

  • It yields small error bounds between the estimated and the true value function
  • It forms an orthogonal basis of the value function space
  • It converges quickly to the stationary value function

Proto-value basis

In this approach, Mahadevan analyzes the connectivity graph between states to determine a set of basis functions.4

The normalized graph Laplacian is defined as:

$L = I - D^{-\frac{1}{2}} W D^{-\frac{1}{2}}$

Here $W$ is the adjacency matrix of the undirected graph $(N, E)$ formed by the states of the fixed-policy MDP, and $D$ is the diagonal matrix of node degrees.

In a discrete state space, the adjacency matrix $W$ can be constructed by simply checking whether two states are connected, and $D$ can be calculated by summing up each row of $W$. In a continuous state space, the random-walk Laplacian of $W$ can be used.
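As a minimal sketch, assuming a simple chain-shaped state graph (the topology and sizes are illustrative), the proto-value basis can be taken from the smoothest eigenvectors of the normalized Laplacian:

    # Proto-value basis construction on a chain graph of 20 states.
    import numpy as np

    n_states, n_basis = 20, 4

    # Adjacency matrix W of an undirected chain: state i <-> state i+1.
    W = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0

    # D is the diagonal degree matrix (row sums of W).
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))

    # Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(n_states) - D_inv_sqrt @ W @ D_inv_sqrt

    # Proto-value functions: eigenvectors of L with the smallest
    # eigenvalues (the smoothest functions on the graph) form the basis;
    # eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(L)
    Phi = eigvecs[:, :n_basis]   # |S| x n basis matrix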

This spectral framework can be used for value function approximation (VFA). Given the fixed policy, the edge weights are determined by the transition probabilities between the corresponding states. To obtain a smooth value approximation, diffusion wavelets are used.5

Krylov basis

Krylov basis construction uses the actual transition matrix instead of the random-walk Laplacian. The method assumes that the transition model $P$ and the reward $r$ are available.

The vectors in the Neumann series are denoted $y_i = P^i r$ for all $i \in [0, \infty)$.

It can be shown that the Krylov space spanned by $y_0, y_1, \ldots, y_{m-1}$ is sufficient to represent any value function,6 where $m$ is the degree of the minimal polynomial of $(I - \gamma P)$.

Suppose the minimal polynomial is $p(A) = \frac{1}{\alpha_0} \sum_{i=0}^{m-1} \alpha_{i+1} A^i$ and that $B$ satisfies $BA = I$; the value function can then be written as:

$v = Br = \frac{1}{\alpha_0} \sum_{i=0}^{m-1} \alpha_{i+1} (I - \gamma P)^i r = \sum_{i=0}^{m-1} \alpha_{i+1} \beta_i y_i.$7

Algorithm: Augmented Krylov Method8

    z_1, z_2, …, z_k := top real eigenvectors of P
    z_{k+1} := r
    for i := 1 : (l + k) do
        if i > k + 1 then
            z_i := P z_{i-1}
        end if
        for j := 1 : (i - 1) do
            z_i := z_i - <z_j, z_i> z_j
        end for
        if ||z_i|| ≈ 0 then
            break
        end if
    end for

  • k: number of eigenvectors in the basis
  • l: number of Krylov vectors, so the loop generates l + k vectors in total
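A minimal NumPy sketch of the augmented Krylov method above, assuming the transition matrix P and reward r are known; the random MDP is illustrative, and the normalization step is an added numerical nicety that the pseudocode leaves implicit:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, k, l = 30, 2, 5

    P = rng.random((n_states, n_states))
    P /= P.sum(axis=1, keepdims=True)
    r = rng.random(n_states)

    # Top real eigenvectors of P, by eigenvalue magnitude (real parts
    # are taken for simplicity).
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(eigvals))
    top = [np.real(eigvecs[:, j]) for j in order[:k]]

    basis = []
    for i in range(l + k):
        if i < k:
            z = top[i].copy()        # z_1 ... z_k: eigenvectors of P
        elif i == k:
            z = r.copy()             # z_{k+1} := r
        else:
            z = P @ basis[-1]        # z_i := P z_{i-1}
        for b in basis:              # Gram-Schmidt against earlier vectors
            z = z - (b @ z) * b
        nz = np.linalg.norm(z)
        if nz < 1e-10:               # vectors have become dependent
            break
        basis.append(z / nz)

    Phi = np.column_stack(basis)     # |S| x (number of kept vectors)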

Bellman error basis

The Bellman error, from which Bellman error basis functions (BEBFs) are built, is defined as $\varepsilon = r + \gamma P \hat{v} - \hat{v} = r + \gamma P \Phi \theta - \Phi \theta$.

Loosely speaking, the Bellman error points towards the optimal value function.9 The sequence of BEBFs forms an orthogonal basis of the value function space; thus, with a sufficient number of BEBFs, any value function can be represented exactly.

Algorithm: BEBF

    stage i = 1: φ_1 := r
    stages i ∈ [2, N]:
        compute the weight vector θ_i for the current basis Φ_i
        compute the new Bellman error: ε := r + γ P Φ_i θ_i - Φ_i θ_i
        append the Bellman error as a new basis function: Φ_{i+1} := [Φ_i : ε]

  • N represents the number of iterations until convergence.
  • ":" denotes juxtaposition of matrices or vectors (appending a column).

Bellman average reward bases

Bellman average reward bases (BARBs)10 are similar to Krylov bases, but the reward function is dilated by the average-adjusted transition matrix $P - P^{*}$. Here $P^{*}$ is the limiting (average) transition matrix, which can be calculated by many standard methods.11

BARBs converge faster than BEBFs and Krylov bases when $\gamma$ is close to 1.

Algorithm: BARBs

    stage i = 1: φ_1 := P* r
    stages i ∈ [2, N]:
        compute the weight vector θ_i for the current basis Φ_i
        compute the new basis function: φ_{i+1} := r - P* r + P Φ_i θ_i - Φ_i θ_i
        append it to form the new basis matrix: Φ_{i+1} := [Φ_i : φ_{i+1}]

  • N represents the number of iterations until convergence.
  • ":" denotes juxtaposition of matrices or vectors (appending a column).

Discussion and analysis

There are two principal types of basis construction methods.

The first type of method is reward-sensitive, like the Krylov and BEBF bases: the reward function is dilated geometrically through the transition matrix. However, when the discount factor $\gamma$ approaches 1, Krylov bases and BEBFs converge slowly, because the error of Krylov-based methods is restricted by a Chebyshev polynomial bound.12 To address this problem, methods such as BARBs have been proposed. BARBs are an incremental variant of Drazin bases and converge faster than Krylov bases and BEBFs when $\gamma$ becomes large.

The other type is reward-insensitive: proto-value basis functions derived from the graph Laplacian. This method uses graph information, but the construction of the adjacency matrix makes it hard to analyze.13

See also

  • UMASS ALL lab

References

  1. [No reference provided in original]

  2. Keller, Philipp; Mannor, Shie; Precup, Doina (2006). Automatic Basis Function Construction for Approximate Dynamic Programming and Reinforcement Learning. Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA.

  3. Richard S. Sutton and Andrew G. Barto (1998). Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, chapter 8.

  4. Mahadevan, Sridhar; Maggioni, Mauro (2005). Value function approximation with diffusion wavelets and Laplacian eigenfunctions. Proceedings of Advances in Neural Information Processing Systems.

  5. Mahadevan, Sridhar; Maggioni, Mauro (2005). Value function approximation with diffusion wavelets and Laplacian eigenfunctions. Proceedings of Advances in Neural Information Processing Systems.

  6. Ilse C. F. Ipsen and Carl D. Meyer. The idea behind Krylov methods. American Mathematical Monthly, 105(10):889–899, 1998.

  7. M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2574–2579, 2007

  8. M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2574–2579, 2007

  9. R. Parr, C. Painter-Wakefield, L.-H. Li, and M. Littman. Analyzing feature generation for value-function approximation. In ICML’07, 2007.

  10. S. Mahadevan and B. Liu. Basis construction from power series expansions of value functions. In NIPS’10, 2010

  11. William J. Stewart. Numerical methods for computing stationary distributions of finite irreducible Markov chains. In Advances in Computational Probability. Kluwer Academic Publishers, 1997.

  12. M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2574–2579, 2007

  13. M. Petrik. An analysis of Laplacian methods for value function approximation in MDPs. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2574–2579, 2007