Generally, progressive-iterative approximation (PIA) can be divided into interpolation and approximation schemes.18 In interpolation algorithms, the number of control points is equal to that of the data points; in approximation algorithms, the number of control points can be less than that of the data points. Specifically, there are some representative iteration methods—such as local-PIA,19 implicit-PIA,20 fairing-PIA,21 and isogeometric least-squares progressive-iterative approximation (IG-LSPIA)22—that are specialized for solving the isogeometric analysis problem.23
In interpolation algorithms of PIA,24252627 every data point is used as a control point. To facilitate the description of the PIA iteration format for different forms of curves and surfaces, the following formula is uniformly used: P ( t ) = ∑ i = 1 n P i B i ( t ) . {\displaystyle \mathbf {P} (\mathbf {t} )=\sum _{i=1}^{n}\mathbf {P} _{i}B_{i}(\mathbf {t} ).} This format covers, for example, Bézier and B-spline curves and surfaces.
Additionally, this can be applied to NURBS curves and surfaces, T-spline surfaces, and triangular Bernstein–Bézier surfaces.31
Given an ordered data set Q i {\displaystyle \mathbf {Q} _{i}} with parameters t i {\displaystyle t_{i}} satisfying t 1 < t 2 < ⋯ {\displaystyle t_{1}<t_{2}<\cdots } for i = 1 , 2 , ⋯ , n {\displaystyle i=1,2,\cdots ,n} , the initial fitting curve is:32 P ( 0 ) ( t ) = ∑ i = 1 n P i ( 0 ) B i ( t ) {\displaystyle \mathbf {P} ^{(0)}(t)=\sum _{i=1}^{n}\mathbf {P} _{i}^{(0)}B_{i}(t)} where the initial control points of the initial fitting curve P i ( 0 ) {\displaystyle \mathbf {P} _{i}^{(0)}} can be randomly selected. Suppose that after the k {\displaystyle k} th iteration, the k {\displaystyle k} th fitting curve P ( k ) ( t ) {\displaystyle \mathbf {P} ^{(k)}(t)} is generated by P ( k ) ( t ) = ∑ i = 1 n P i ( k ) B i ( t ) . {\displaystyle \mathbf {P} ^{(k)}(t)=\sum _{i=1}^{n}\mathbf {P} _{i}^{(k)}B_{i}(t).}
To construct the ( k + 1 ) {\displaystyle (k+1)} st curve, we first calculate the difference vectors, Δ i ( k ) = Q i − P ( k ) ( t i ) , i = 1 , 2 , ⋯ , n {\displaystyle \mathbf {\Delta } _{i}^{(k)}=\mathbf {Q} _{i}-\mathbf {P} ^{(k)}(t_{i}),\quad i=1,2,\cdots ,n} and use them to update the control points by P i ( k + 1 ) = P i ( k ) + Δ i ( k ) {\displaystyle \mathbf {P} _{i}^{(k+1)}=\mathbf {P} _{i}^{(k)}+\mathbf {\Delta } _{i}^{(k)}} which leads to the ( k + 1 ) {\displaystyle (k+1)} st fitting curve: P ( k + 1 ) ( t ) = ∑ i = 1 n P i ( k + 1 ) B i ( t ) . {\displaystyle \mathbf {P} ^{(k+1)}(t)=\sum _{i=1}^{n}\mathbf {P} _{i}^{(k+1)}B_{i}(t).} In this way, we obtain a sequence of curves P ( α ) ( t ) , α = 0 , 1 , 2 , ⋯ {\textstyle \mathbf {P} ^{(\alpha )}(t),\alpha =0,1,2,\cdots } , which converges to a limit curve that interpolates the given data points,3334 i.e., lim α → ∞ P ( α ) ( t i ) = Q i , i = 1 , 2 , ⋯ , n . {\displaystyle \lim \limits _{\alpha \rightarrow \infty }\mathbf {P} ^{(\alpha )}(t_{i})=\mathbf {Q} _{i},\quad i=1,2,\cdots ,n.}
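As a concrete sketch of this iteration (an illustration, not code from the cited references), the following Python snippet runs PIA with a cubic Bernstein basis and uniform parameters, a configuration for which the collocation matrix is totally positive and the iteration converges:

```python
import numpy as np
from math import comb

def bernstein_matrix(n, t):
    """Collocation matrix B with B[i, j] = B_j(t_i) for the degree-(n-1) Bernstein basis."""
    d = n - 1
    return np.array([[comb(d, j) * s**j * (1 - s)**(d - j) for j in range(n)] for s in t])

# Ordered data points Q_i with increasing parameters t_i (here n control points = n data points).
Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.5]])
t = np.linspace(0.0, 1.0, len(Q))
B = bernstein_matrix(len(Q), t)

P = Q.copy()                    # initial control points (here: the data points themselves)
for _ in range(1000):
    P += Q - B @ P              # PIA update: P_i <- P_i + (Q_i - P(t_i))

print(np.allclose(B @ P, Q))    # the limit curve interpolates the data points
```

Note that each update only requires evaluating the current curve at the data parameters, which is what gives the format its simple geometric meaning.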
For the B-spline curve and surface fitting problem, Deng and Lin proposed least-squares progressive-iterative approximation (LSPIA),3536 which allows the number of control points to be less than the number of data points and is therefore more suitable for large-scale data fitting problems.37
Assume there are m {\displaystyle m} data points and n {\displaystyle n} control points, where n ≤ m {\displaystyle n\leq m} . Start from the uniform format above, which gives the k {\displaystyle k} th fitting curve as P ( k ) ( t ) = ∑ j = 1 n P j ( k ) B j ( t ) . {\displaystyle \mathbf {P} ^{(k)}(t)=\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t).} To generate the ( k + 1 ) {\displaystyle (k+1)} th fitting curve, first compute the difference vectors for the data points3839 δ i ( k ) = Q i − P ( k ) ( t i ) , i = 1 , 2 , ⋯ , m {\displaystyle {\boldsymbol {\delta }}_{i}^{(k)}=\mathbf {Q} _{i}-\mathbf {P} ^{(k)}(t_{i}),\quad i=1,2,\cdots ,m} and then the difference vectors for the control points Δ j ( k ) = ∑ i ∈ I j c i B j ( t i ) δ i ( k ) ∑ i ∈ I j c i B j ( t i ) , j = 1 , 2 , ⋯ , n {\displaystyle \mathbf {\Delta } _{j}^{(k)}={\frac {\sum _{i\in I_{j}}{c_{i}B_{j}(t_{i}){\boldsymbol {\delta }}_{i}^{(k)}}}{\sum _{i\in I_{j}}c_{i}B_{j}(t_{i})}},\quad j=1,2,\cdots ,n} where I j {\displaystyle I_{j}} is the index set of the data points in the j {\displaystyle j} th group, whose parameters fall in the local support of the j {\displaystyle j} th basis function, i.e., B j ( t i ) ≠ 0 {\displaystyle B_{j}(t_{i})\neq 0} . The c i {\displaystyle c_{i}} are weights that guarantee the convergence of the algorithm, usually taken as c i = 1 , i ∈ I j {\displaystyle c_{i}=1,i\in I_{j}} .
Finally, the control points of the ( k + 1 ) {\displaystyle (k+1)} th curve are updated by P j ( k + 1 ) = P j ( k ) + Δ j ( k ) , {\displaystyle \mathbf {P} _{j}^{(k+1)}=\mathbf {P} _{j}^{(k)}+\mathbf {\Delta } _{j}^{(k)},} leading to the ( k + 1 ) {\displaystyle (k+1)} th fitting curve P ( k + 1 ) ( t ) {\displaystyle \mathbf {P} ^{(k+1)}(t)} . In this way, we obtain a sequence of curves whose limit is the least-squares fitting result of the given data points.4041
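A minimal LSPIA sketch (assuming a Bernstein basis in place of B-splines, and weights c_i = 1; data set, degree, and iteration count are illustrative choices) shows the iteration reaching the same control points as a direct least-squares solve:

```python
import numpy as np
from math import comb

def bernstein_matrix(n, t):
    """Collocation matrix B with B[i, j] = B_j(t_i) for the degree-(n-1) Bernstein basis."""
    d = n - 1
    return np.array([[comb(d, j) * s**j * (1 - s)**(d - j) for j in range(n)] for s in t])

# m data points fitted with n < m control points.
m, n = 20, 4
t = np.linspace(0.0, 1.0, m)
Q = np.column_stack([t, np.sin(np.pi * t)])
B = bernstein_matrix(n, t)                           # m x n collocation matrix B_j(t_i)

P = Q[np.linspace(0, m - 1, n).astype(int)].copy()   # initial control points sampled from data
for _ in range(2000):
    delta = Q - B @ P                                # difference vectors for the data points
    Delta = (B.T @ delta) / B.sum(axis=0)[:, None]   # averaged difference per control point
    P += Delta

P_ls = np.linalg.lstsq(B, Q, rcond=None)[0]          # direct least-squares control points
print(np.allclose(P, P_ls))
```

The per-control-point averaging is exactly the c_i = 1 form of the update above; each control point only gathers the residuals inside its basis function's support.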
In the local-PIA method,42 the control points are divided into active and fixed control points, whose subscripts are denoted as I = { i 1 , i 2 , ⋯ , i I } {\textstyle I=\left\{i_{1},i_{2},\cdots ,i_{I}\right\}} and J = { j 1 , j 2 , ⋯ , j J } {\textstyle J=\left\{j_{1},j_{2},\cdots ,j_{J}\right\}} , respectively. Assume that the k {\textstyle k} th fitting curve is P ( k ) ( t ) = ∑ j = 1 n P j ( k ) B j ( t ) {\textstyle \mathbf {P} ^{(k)}(t)=\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t)} , where the fixed control points satisfy P j ( k ) = P j ( 0 ) , j ∈ J , k = 0 , 1 , 2 , ⋯ . {\displaystyle \mathbf {P} _{j}^{(k)}=\mathbf {P} _{j}^{(0)},\quad j\in J,\quad k=0,1,2,\cdots .} Then, on the one hand, the iterative formula of the difference vector Δ h ( k + 1 ) {\textstyle \mathbf {\Delta } _{h}^{(k+1)}} corresponding to the fixed control points is {\displaystyle {\begin{aligned}\mathbf {\Delta } _{h}^{(k+1)}&=\mathbf {Q} _{h}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{h})\\&=\mathbf {Q} _{h}-\sum _{j\in J}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{h})-\sum _{i\in I}\left(\mathbf {P} _{i}^{(k)}+\mathbf {\Delta } _{i}^{(k)}\right)B_{i}(t_{h})\\&=\mathbf {Q} _{h}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t_{h})-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{h})\\&=\mathbf {\Delta } _{h}^{(k)}-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{h}),\quad h\in J.\end{aligned}}} On the other hand, the iterative formula of the difference vector Δ l ( k + 1 ) {\textstyle \mathbf {\Delta } _{l}^{(k+1)}} corresponding to the active control points is
{\displaystyle {\begin{aligned}\mathbf {\Delta } _{l}^{(k+1)}&=\mathbf {Q} _{l}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k+1)}B_{j}(t_{l})\\&=\mathbf {Q} _{l}-\sum _{j=1}^{n}\mathbf {P} _{j}^{(k)}B_{j}(t_{l})-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{l})\\&=\mathbf {\Delta } _{l}^{(k)}-\sum _{i\in I}\mathbf {\Delta } _{i}^{(k)}B_{i}(t_{l})\\&=-\mathbf {\Delta } _{i_{1}}^{(k)}B_{i_{1}}(t_{l})-\mathbf {\Delta } _{i_{2}}^{(k)}B_{i_{2}}(t_{l})-\cdots +\left(1-B_{l}(t_{l})\right)\mathbf {\Delta } _{l}^{(k)}-\cdots -\mathbf {\Delta } _{i_{I}}^{(k)}B_{i_{I}}(t_{l}),\quad l\in I.\end{aligned}}} Arranging the above difference vectors into a one-dimensional sequence, {\displaystyle \mathbf {D} ^{(k+1)}=\left[\mathbf {\Delta } _{j_{1}}^{(k+1)},\mathbf {\Delta } _{j_{2}}^{(k+1)},\cdots ,\mathbf {\Delta } _{j_{J}}^{(k+1)},\mathbf {\Delta } _{i_{1}}^{(k+1)},\mathbf {\Delta } _{i_{2}}^{(k+1)},\cdots ,\mathbf {\Delta } _{i_{I}}^{(k+1)}\right]^{T},\quad k=0,1,2,\cdots ,} the local iteration format in matrix form is {\displaystyle \mathbf {D} ^{(k+1)}=\mathbf {T} \mathbf {D} ^{(k)},\quad k=0,1,2,\cdots ,} where T {\textstyle \mathbf {T} } is the iteration matrix {\displaystyle \mathbf {T} ={\begin{bmatrix}\mathbf {E} _{J}&-\mathbf {B} _{1}\\0&\mathbf {E} _{I}-\mathbf {B} _{2}\end{bmatrix}},} in which E J {\textstyle \mathbf {E} _{J}} and E I {\textstyle \mathbf {E} _{I}} are identity matrices and {\displaystyle \mathbf {B} _{1}={\begin{bmatrix}B_{i_{1}}\left(t_{j_{1}}\right)&B_{i_{2}}\left(t_{j_{1}}\right)&\cdots &B_{i_{I}}\left(t_{j_{1}}\right)\\B_{i_{1}}\left(t_{j_{2}}\right)&B_{i_{2}}\left(t_{j_{2}}\right)&\cdots &B_{i_{I}}\left(t_{j_{2}}\right)\\\vdots &\vdots &\vdots &\vdots \\B_{i_{1}}\left(t_{j_{J}}\right)&B_{i_{2}}\left(t_{j_{J}}\right)&\cdots &B_{i_{I}}\left(t_{j_{J}}\right)\\\end{bmatrix}},\quad \mathbf {B} _{2}={\begin{bmatrix}B_{i_{1}}\left(t_{i_{1}}\right)&B_{i_{2}}\left(t_{i_{1}}\right)&\cdots &B_{i_{I}}\left(t_{i_{1}}\right)\\B_{i_{1}}\left(t_{i_{2}}\right)&B_{i_{2}}\left(t_{i_{2}}\right)&\cdots &B_{i_{I}}\left(t_{i_{2}}\right)\\\vdots &\vdots &\vdots &\vdots \\B_{i_{1}}\left(t_{i_{I}}\right)&B_{i_{2}}\left(t_{i_{I}}\right)&\cdots &B_{i_{I}}\left(t_{i_{I}}\right)\\\end{bmatrix}}.} The above local iteration format converges and can be extended to blending surfaces43 and subdivision surfaces.44
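The effect of holding part of the control points fixed can be sketched as follows (a hypothetical setup with a cubic Bernstein basis; the active/fixed split is chosen arbitrarily for illustration):

```python
import numpy as np
from math import comb

def bernstein_matrix(n, t):
    """Collocation matrix B with B[i, j] = B_j(t_i) for the degree-(n-1) Bernstein basis."""
    d = n - 1
    return np.array([[comb(d, j) * s**j * (1 - s)**(d - j) for j in range(n)] for s in t])

Q = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.5]])
t = np.linspace(0.0, 1.0, len(Q))
B = bernstein_matrix(len(Q), t)

active = [1, 2]                  # I: control points that move
fixed = [0, 3]                   # J: control points held at their initial positions
P = Q.copy()
for _ in range(500):
    delta = Q - B @ P            # difference vectors at all data parameters
    P[active] += delta[active]   # update only the active control points

print(np.allclose((B @ P)[active], Q[active]))   # interpolates the "active" data points
print(np.allclose(P[fixed], Q[fixed]))           # fixed control points never moved
```

The convergence of this loop is governed by the spectral radius of E_I − B_2 above, i.e. only by the basis values at the active parameters.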
The PIA format for implicit curve and surface reconstruction is presented in the following.45 Given an ordered point cloud { Q i } i = 1 n {\textstyle \left\{\mathbf {Q} _{i}\right\}_{i=1}^{n}} and unit normal vectors { n i } i = 1 n {\textstyle \left\{\mathbf {n} _{i}\right\}_{i=1}^{n}} at the data points, we want to reconstruct an implicit curve from the given point cloud. To avoid a trivial solution, some offset points { Q l } l = n + 1 2 n {\textstyle \left\{\mathbf {Q} _{l}\right\}_{l=n+1}^{2n}} are added to the point cloud.46 They are offset by a distance σ {\textstyle \sigma } along the unit normal vector of each point: Q l = Q i + σ n i , l = n + i , i = 1 , 2 , ⋯ , n . {\displaystyle \mathbf {Q} _{l}=\mathbf {Q} _{i}+\sigma \mathbf {n} _{i},\quad l=n+i,\quad i=1,2,\cdots ,n.} Assume that ϵ {\textstyle \epsilon } is the value of the implicit function at the offset points: f ( Q l ) = ϵ , l = n + 1 , n + 2 , ⋯ , 2 n . {\displaystyle f\left(\mathbf {Q} _{l}\right)=\epsilon ,\quad l=n+1,n+2,\cdots ,2n.} Let the implicit curve after the α {\textstyle \alpha } th iteration be f ( α ) ( x , y ) = ∑ i = 1 N u ∑ j = 1 N v C i j ( α ) B i ( x ) B j ( y ) , {\displaystyle f^{(\alpha )}(x,y)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}C_{ij}^{(\alpha )}B_{i}(x)B_{j}(y),} where the C i j ( α ) {\textstyle C_{ij}^{(\alpha )}} are the control coefficients.
Define the difference vectors at the data points as47 δ k ( α ) = 0 − f ( α ) ( x k , y k ) , k = 1 , 2 , ⋯ , n , δ l ( α ) = ϵ − f ( α ) ( x l , y l ) , l = n + 1 , n + 2 , ⋯ , 2 n . {\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{k}^{(\alpha )}&=0-f^{(\alpha )}(x_{k},y_{k}),\quad k=1,2,\cdots ,n,\\{\boldsymbol {\delta }}_{l}^{(\alpha )}&=\epsilon -f^{(\alpha )}(x_{l},y_{l}),\quad l=n+1,n+2,\cdots ,2n.\end{aligned}}} Next, calculate the difference vector of control coefficients Δ i j ( α ) = μ ∑ k = 1 2 n B i ( x k ) B j ( y k ) δ k ( α ) , i = 1 , 2 , ⋯ , N u , j = 1 , 2 , ⋯ , N v , {\displaystyle {\boldsymbol {\Delta }}_{ij}^{(\alpha )}=\mu \sum _{k=1}^{2n}B_{i}(x_{k})B_{j}(y_{k}){\boldsymbol {\delta }}_{k}^{(\alpha )},\quad i=1,2,\cdots ,N_{u},\quad j=1,2,\cdots ,N_{v},} where μ {\textstyle \mu } is the convergence coefficient. As a result, the new control coefficients are C i j ( α + 1 ) = C i j ( α ) + Δ i j ( α ) , {\displaystyle C_{ij}^{(\alpha +1)}=C_{ij}^{(\alpha )}+{\boldsymbol {\Delta }}_{ij}^{(\alpha )},} leading to the new algebraic B-spline curve f ( α + 1 ) ( x , y ) = ∑ i = 1 N u ∑ j = 1 N v C i j ( α + 1 ) B i ( x ) B j ( y ) . {\displaystyle f^{(\alpha +1)}(x,y)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}C_{ij}^{(\alpha +1)}B_{i}(x)B_{j}(y).} The above procedure is carried out iteratively to generate a sequence of algebraic B-spline functions { f ( α ) ( x , y ) , α = 0 , 1 , 2 , ⋯ } {\textstyle \left\{f^{(\alpha )}(x,y),\quad \alpha =0,1,2,\cdots \right\}} . The sequence converges to the solution of a constrained minimization problem when the initial control coefficients are taken as C i j ( 0 ) = 0 {\textstyle C_{ij}^{(0)}=0} .48
Assume that the implicit surface generated after the α {\textstyle \alpha } th iteration is f ( α ) ( x , y , z ) = ∑ i = 1 N u ∑ j = 1 N v ∑ k = 1 N w C i j k ( α ) B i ( x ) B j ( y ) B k ( z ) . {\displaystyle f^{(\alpha )}(x,y,z)=\sum _{i=1}^{N_{u}}\sum _{j=1}^{N_{v}}\sum _{k=1}^{N_{w}}C_{ijk}^{(\alpha )}B_{i}(x)B_{j}(y)B_{k}(z).} The iteration format is then similar to that of the curve case.4950
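For the curve case above, a small sketch (tensor-product Bernstein basis instead of B-splines; the domain mapping, offset distance, and convergence coefficient μ are illustrative assumptions) fits an implicit function to points and normals sampled on a circle:

```python
import numpy as np
from math import comb

def bernstein_basis(d, s):
    """All degree-d Bernstein basis values at parameters s (shape: len(s) x (d+1))."""
    return np.array([[comb(d, j) * x**j * (1 - x)**(d - j) for j in range(d + 1)] for x in s])

# Point cloud on the unit circle, with outward unit normals; offset points avoid f == 0.
n, sigma, eps = 40, 0.1, 0.05
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
samples = np.vstack([pts, pts + sigma * pts])          # normals of a circle are the points themselves
b = np.concatenate([np.zeros(n), np.full(n, eps)])     # f = 0 on the curve, f = eps at the offsets

d = 4                                                  # bi-degree of the tensor Bernstein basis
u = (samples + 1.5) / 3.0                              # map [-1.5, 1.5]^2 into the unit square
Bx, By = bernstein_basis(d, u[:, 0]), bernstein_basis(d, u[:, 1])
A = np.einsum('ki,kj->kij', Bx, By).reshape(2 * n, -1) # row k holds all products B_i(x_k) B_j(y_k)

c = np.zeros(A.shape[1])                               # control coefficients C_ij, flattened
mu = 1.0 / np.linalg.norm(A, 2) ** 2                   # small enough to guarantee convergence
r0 = np.linalg.norm(b - A @ c)
for _ in range(500):
    c += mu * A.T @ (b - A @ c)                        # implicit-PIA update of the coefficients

print(np.linalg.norm(b - A @ c) < r0)                  # the fitting residual has decreased
```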
To develop fairing-PIA, we first define the functionals as follows:51 F r , j ( f ) = ∫ t 1 t m B r , j ( t ) f d t , j = 1 , 2 , ⋯ , n , r = 1 , 2 , 3 , {\displaystyle {\mathcal {F}}_{r,j}(f)=\int _{t_{1}}^{t_{m}}B_{r,j}(t)fdt,\quad j=1,2,\cdots ,n,\quad r=1,2,3,} where B r , j ( t ) {\textstyle B_{r,j}(t)} represents the r {\textstyle r} th derivative of the basis function B j ( t ) {\textstyle B_{j}(t)} (e.g., a B-spline basis function).52
Let the curve after the k {\textstyle k} th iteration be P [ k ] ( t ) = ∑ j = 1 n B j ( t ) P j [ k ] , t ∈ [ t 1 , t m ] . {\displaystyle \mathbf {P} ^{[k]}(t)=\sum _{j=1}^{n}B_{j}(t)\mathbf {P} _{j}^{[k]},\quad t\in [t_{1},t_{m}].} To construct the new curve P [ k + 1 ] ( t ) {\textstyle \mathbf {P} ^{[k+1]}(t)} , we first calculate the ( k + 1 ) {\textstyle (k+1)} st difference vectors for data points,53 d i [ k ] = Q i − P [ k ] ( t i ) , i = 1 , 2 , ⋯ , m . {\displaystyle \mathbf {d} _{i}^{[k]}=\mathbf {Q} _{i}-\mathbf {P} ^{[k]}(t_{i}),\quad i=1,2,\cdots ,m.} Then, the fitting difference vectors and the fairing vectors for control points are calculated by54 {\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{j}^{[k]}&=\sum _{h\in I_{j}}B_{j}(t_{h})\mathbf {d} _{h}^{[k]},\quad j=1,2,\cdots ,n\\{\boldsymbol {\eta }}_{j}^{[k]}&=\sum _{l=1}^{n}{\mathcal {F}}_{r,l}\left(B_{r,j}(t)\right)\mathbf {P} _{l}^{[k]},\quad j=1,2,\cdots ,n\\\end{aligned}}} Finally, the control points of the ( k + 1 ) {\displaystyle (k+1)} st curve are produced by55 P j [ k + 1 ] = P j [ k ] + μ j [ ( 1 − ω j ) δ j [ k ] − ω j η j [ k ] ] , j = 1 , 2 , ⋯ , n , {\displaystyle \mathbf {P} _{j}^{[k+1]}=\mathbf {P} _{j}^{[k]}+\mu _{j}\left[\left(1-\omega _{j}\right){\boldsymbol {\delta }}_{j}^{[k]}-\omega _{j}{\boldsymbol {\eta }}_{j}^{[k]}\right],\quad j=1,2,\cdots ,n,} where μ j {\displaystyle \mu _{j}} is a normalization weight, and ω j {\displaystyle \omega _{j}} is a smoothing weight corresponding to the j {\displaystyle j} th control point. The smoothing weights can be employed to adjust the smoothness individually, thus providing great flexibility in controlling smoothness.56 The larger the smoothing weight is, the smoother the generated curve is.
The new curve is obtained as follows: P [ k + 1 ] ( t ) = ∑ j = 1 n B j ( t ) P j [ k + 1 ] , t ∈ [ t 1 , t m ] . {\displaystyle \mathbf {P} ^{[k+1]}(t)=\sum _{j=1}^{n}B_{j}(t)\mathbf {P} _{j}^{[k+1]},\quad t\in [t_{1},t_{m}].} In this way, we obtain a sequence of curves { P [ k ] ( t ) , k = 1 , 2 , 3 , ⋯ } {\textstyle \left\{\mathbf {P} ^{[k]}(t),\;k=1,2,3,\cdots \right\}} . The sequence converges to the solution of the conventional fairing method based on energy minimization when all smoothing weights are equal ( ω j = ω {\textstyle \omega _{j}=\omega } ).57 Similarly, fairing-PIA can be extended to the surface case.
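A rough sketch of fairing-PIA with r = 2 (a Bernstein basis instead of B-splines, the functional matrix approximated by finite differences and quadrature, and a single uniform smoothing weight ω; all of these are illustrative assumptions):

```python
import numpy as np
from math import comb

def bernstein_basis(d, s):
    """All degree-d Bernstein basis values at parameters s (shape: len(s) x (d+1))."""
    return np.array([[comb(d, j) * x**j * (1 - x)**(d - j) for j in range(d + 1)] for x in s])

m, d = 30, 5                                    # data points; Bernstein degree (n = d + 1)
t = np.linspace(0.0, 1.0, m)
rng = np.random.default_rng(0)
Q = np.column_stack([t, np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(m)])
B = bernstein_basis(d, t)

# Fairing matrix K_jl approximating the functionals F_{2,l}(B_{2,j}) = int B''_j B''_l dt,
# built by central differences on a fine grid plus a rectangle-rule quadrature.
s = np.linspace(0.0, 1.0, 2001)
h = s[1] - s[0]
Bs = bernstein_basis(d, s)
B2 = (Bs[2:] - 2 * Bs[1:-1] + Bs[:-2]) / h**2
K = B2.T @ B2 * h

omega = 1e-4                                    # smoothing weight, uniform for all control points
H = (1 - omega) * B.T @ B + omega * K
mu = 1.0 / np.linalg.eigvalsh(H).max()          # normalization weight guaranteeing convergence

P = Q[np.linspace(0, m - 1, d + 1).astype(int)].astype(float)
def energy(P):
    """Fitting-plus-fairness energy whose minimizer is the limit of the iteration."""
    return 0.5 * (1 - omega) * np.sum((Q - B @ P)**2) + 0.5 * omega * np.sum(P * (K @ P))
e0 = energy(P)
for _ in range(300):
    delta = B.T @ (Q - B @ P)                   # fitting difference vectors
    eta = K @ P                                 # fairing vectors
    P += mu * ((1 - omega) * delta - omega * eta)

print(energy(P) < e0)                           # the combined energy decreased
```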
Isogeometric least-squares progressive-iterative approximation (IG-LSPIA).58 Given a boundary value problem59 { L u = f , in Ω , G u = g , on ∂ Ω , {\displaystyle \left\{{\begin{aligned}{\mathcal {L}}u=f,&\quad {\text{in}}\;\Omega ,\\{\mathcal {G}}u=g,&\quad {\text{on}}\;\partial \Omega ,\end{aligned}}\right.} where u : Ω → R {\textstyle u:\Omega \to \mathbb {R} } is the unknown solution, L {\textstyle {\mathcal {L}}} is the differential operator, G {\textstyle {\mathcal {G}}} is the boundary operator, and f {\textstyle f} and g {\textstyle g} are given continuous functions. In the isogeometric analysis method, NURBS basis functions60 are used as shape functions to compute the numerical solution of this boundary value problem.61 The same basis functions are applied to represent the numerical solution u h {\textstyle u_{h}} and the geometric mapping G {\textstyle G} : u h ( τ ^ ) = ∑ j = 1 n R j ( τ ^ ) u j , G ( τ ^ ) = ∑ j = 1 n R j ( τ ^ ) P j , {\displaystyle {\begin{aligned}u_{h}\left({\hat {\tau }}\right)&=\sum _{j=1}^{n}R_{j}({\hat {\tau }})u_{j},\\G({\hat {\tau }})&=\sum _{j=1}^{n}R_{j}({\hat {\tau }})P_{j},\end{aligned}}} where R j ( τ ^ ) {\textstyle R_{j}({\hat {\tau }})} denotes the j {\textstyle j} th NURBS basis function and u j {\textstyle u_{j}} is the corresponding control coefficient. After substituting the collocation points62 τ ^ i , i = 1 , 2 , . . . , m {\textstyle {\hat {\tau }}_{i},i=1,2,...,{m}} into the strong form of the PDE, we obtain a discretized problem63 { L u h ( τ ^ i ) = f ( G ( τ ^ i ) ) , i ∈ I L , G u h ( τ ^ j ) = g ( G ( τ ^ j ) ) , j ∈ I G , {\displaystyle \left\{{\begin{aligned}{\mathcal {L}}u_{h}({\hat {\tau }}_{i})=f(G({\hat {\tau }}_{i})),&\quad i\in {\mathcal {I_{L}}},\\{\mathcal {G}}u_{h}({\hat {\tau }}_{j})=g(G({\hat {\tau }}_{j})),&\quad j\in {\mathcal {I_{G}}},\end{aligned}}\right.} where I L {\textstyle {\mathcal {I_{L}}}} and I G {\textstyle {\mathcal {I_{G}}}} denote the index sets of the internal and boundary collocation points, respectively.
Arranging the control coefficients u j {\textstyle u_{j}} of the numerical solution u h ( τ ^ ) {\textstyle u_{h}({\hat {\tau }})} into a one-dimensional column vector U = [ u 1 , u 2 , . . . , u n ] T {\textstyle \mathbf {U} =[u_{1},u_{2},...,u_{n}]^{T}} , the discretized problem can be reformulated in matrix form as A U = b , {\displaystyle \mathbf {AU} =\mathbf {b} ,} where A {\textstyle \mathbf {A} } is the collocation matrix and b {\textstyle \mathbf {b} } is the load vector.
Assume that the discretized load values are data points { b i } i = 1 m {\textstyle \left\{b_{i}\right\}_{i=1}^{m}} to be fitted. Given the initial guess of the control coefficients { u j ( 0 ) } j = 1 n , n < m {\textstyle \left\{u_{j}^{(0)}\right\}_{j=1}^{n},n<m} , we obtain an initial blending function64 U ( 0 ) ( τ ^ ) = ∑ j = 1 n A j ( τ ^ ) u j ( 0 ) , τ ^ ∈ [ τ ^ 1 , τ ^ m ] , {\displaystyle U^{(0)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(0)},\quad {\hat {\tau }}\in [{\hat {\tau }}_{1},{\hat {\tau }}_{m}],} where A j ( τ ^ ) {\textstyle A_{j}({\hat {\tau }})} , j = 1 , 2 , ⋯ , n {\textstyle j=1,2,\cdots ,n} , represents the combination of different order derivatives of the NURBS basis functions determined using the operators L {\textstyle {\mathcal {L}}} and G {\textstyle {\mathcal {G}}} A j ( τ ^ ) = { L R j ( τ ^ ) , τ ^ in Ω p i n , G R j ( τ ^ ) , τ ^ in Ω p b d , j = 1 , 2 , ⋯ , n , {\displaystyle A_{j}({\hat {\tau }})=\left\{{\begin{aligned}{\mathcal {L}}R_{j}({\hat {\tau }}),&\quad {\hat {\tau }}\ {\text{in}}\ \Omega _{p}^{in},\\{\mathcal {G}}R_{j}({\hat {\tau }}),&\quad {\hat {\tau }}\ {\text{in}}\ \Omega _{p}^{bd},\quad j=1,2,\cdots ,n,\end{aligned}}\right.} where Ω p i n {\textstyle \Omega _{p}^{in}} and Ω p b d {\textstyle \Omega _{p}^{bd}} indicate the interior and boundary of the parameter domain, respectively. Each A j ( τ ^ ) {\textstyle A_{j}({\hat {\tau }})} corresponds to the j {\textstyle j} th control coefficient. Assume that J i n {\textstyle J_{in}} and J b d {\textstyle J_{bd}} are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained using strong or weak imposition and are fixed, i.e., u j ( k ) = u j ∗ , j ∈ J b d , k = 0 , 1 , 2 , ⋯ . 
{\displaystyle u_{j}^{(k)}=u_{j}^{*},\quad j\in J_{bd},\quad k=0,1,2,\cdots .} The k {\textstyle k} th blending function, generated after the k {\textstyle k} th iteration of IG-LSPIA,65 is assumed to be as follows: U ( k ) ( τ ^ ) = ∑ j = 1 n A j ( τ ^ ) u j ( k ) , τ ^ ∈ [ τ ^ 1 , τ ^ m ] . {\displaystyle U^{(k)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(k)},\quad {\hat {\tau }}\in [{\hat {\tau }}_{1},{\hat {\tau }}_{m}].} Then, the difference vectors for collocation points (DCP) in the ( k + 1 ) {\textstyle (k+1)} st iteration are obtained using δ i ( k ) = b i − ∑ j = 1 n A j ( τ ^ i ) u j ( k ) = b i − ∑ j ∈ J b d A j ( τ ^ i ) u j ( k ) − ∑ j ∈ J i n A j ( τ ^ i ) u j ( k ) , i = 1 , 2 , . . . , m . {\displaystyle {\begin{aligned}{\boldsymbol {\delta }}_{i}^{(k)}&=b_{i}-\sum _{j=1}^{n}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)}\\&=b_{i}-\sum _{j\in J_{bd}}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)}-\sum _{j\in J_{in}}A_{j}({\hat {\tau }}_{i})u_{j}^{(k)},\quad i=1,2,...,m.\end{aligned}}} Moreover, group all load values whose parameters fall in the local support of the j {\textstyle j} th derivative function, i.e., A j ( τ ^ i ) ≠ 0 {\textstyle A_{j}({\hat {\tau }}_{i})\neq 0} , into the j {\textstyle j} th group corresponding to the j {\textstyle j} th control coefficient, and denote the index set of the j {\textstyle j} th group of load values as I j {\textstyle I_{j}} . Lastly, the differences for control coefficients (DCC) can be constructed as follows:66 d j ( k ) = μ ∑ h ∈ I j A j ( τ ^ h ) δ h ( k ) , j = 1 , 2 , . . . , n , {\displaystyle d_{j}^{(k)}=\mu \sum _{h\in I_{j}}A_{j}({\hat {\tau }}_{h}){\boldsymbol {\delta }}_{h}^{(k)},\quad j=1,2,...,n,} where μ {\textstyle \mu } is a normalization weight to guarantee the convergence of the algorithm.
Thus, the new control coefficients are updated via the following formula: u j ( k + 1 ) = u j ( k ) + d j ( k ) , j = 1 , 2 , . . . , n . {\displaystyle u_{j}^{(k+1)}=u_{j}^{(k)}+d_{j}^{(k)},\quad j=1,2,...,n.} Consequently, the ( k + 1 ) {\textstyle (k+1)} st blending function is generated as follows: U ( k + 1 ) ( τ ^ ) = ∑ j = 1 n A j ( τ ^ ) u j ( k + 1 ) . {\displaystyle U^{(k+1)}({\hat {\tau }})=\sum _{j=1}^{n}A_{j}({\hat {\tau }})u_{j}^{(k+1)}.} The above iteration process is performed until the desired fitting precision is reached, producing a sequence of blending functions { U ( k ) ( τ ^ ) , k = 0 , 1 , … } . {\displaystyle \left\{U^{(k)}({\hat {\tau }}),k=0,1,\dots \right\}.} The IG-LSPIA converges to the solution of a constrained least-squares collocation problem.67
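The IG-LSPIA update can be sketched on a one-dimensional Poisson problem (Bernstein polynomials stand in for NURBS, derivatives are taken by finite differences, and the boundary collocation rows are folded into the same system; all of these are simplifying assumptions):

```python
import numpy as np
from math import comb

def bernstein_basis(d, s):
    """All degree-d Bernstein basis values at parameters s (shape: len(s) x (d+1))."""
    return np.array([[comb(d, j) * x**j * (1 - x)**(d - j) for j in range(d + 1)] for x in s])

def bernstein_d2(d, s, h=1e-5):
    """Second derivatives of the basis by central differences (adequate for a sketch)."""
    return (bernstein_basis(d, s + h) - 2 * bernstein_basis(d, s) + bernstein_basis(d, s - h)) / h**2

# Collocation for u'' = -pi^2 sin(pi x), u(0) = u(1) = 0 (exact solution: sin(pi x)).
d, m = 6, 15
tau = np.linspace(0.0, 1.0, m)
interior = tau[1:-1]

A = np.vstack([bernstein_d2(d, interior),          # L R_j at the interior collocation points
               bernstein_basis(d, tau[[0, -1]])])  # G R_j at the two boundary points
b = np.concatenate([-np.pi**2 * np.sin(np.pi * interior), [0.0, 0.0]])

U = np.zeros(d + 1)                                # initial control coefficients
mu = 1.0 / np.linalg.norm(A, 2)**2                 # normalization weight for convergence
r0 = np.linalg.norm(b - A @ U)
for _ in range(5000):
    U += mu * A.T @ (b - A @ U)                    # IG-LSPIA-style update of the coefficients

print(np.linalg.norm(b - A @ U) < r0)              # the collocation residual has decreased
```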
Let n be the number of control points and m be the number of data points.
If n = m {\textstyle n=m} , the PIA iterative format in matrix form is 68 P ( α + 1 ) = P ( α ) + Δ ( α ) = P ( α ) + Q − B P ( α ) = ( I − B ) P ( α ) + Q {\displaystyle {\begin{aligned}\mathbf {P^{(\alpha +1)}} &=\mathbf {P^{(\alpha )}} +\mathbf {\Delta } ^{(\alpha )}\\&=\mathbf {P} ^{(\alpha )}+\mathbf {Q} -\mathbf {B} \mathbf {P} ^{(\alpha )}\\&=\left(\mathbf {I} -\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mathbf {Q} \end{aligned}}} where Q = [ Q 1 , Q 2 , ⋯ , Q m ] T P ( α ) = [ P 1 ( α ) , P 2 ( α ) , ⋯ , P n ( α ) ] T Δ ( α ) = [ Δ 1 ( α ) , Δ 2 ( α ) , ⋯ , Δ n ( α ) ] T B = [ B 1 ( t 1 ) B 2 ( t 1 ) ⋯ B n ( t 1 ) B 1 ( t 2 ) B 2 ( t 2 ) ⋯ B n ( t 2 ) ⋮ ⋮ ⋱ ⋮ B 1 ( t m ) B 2 ( t m ) ⋯ B n ( t m ) ] . {\displaystyle {\begin{aligned}\mathbf {Q} &=\left[\mathbf {Q} _{1},\mathbf {Q} _{2},\cdots ,\mathbf {Q} _{m}\right]^{T}\\\mathbf {P^{(\alpha )}} &=\left[\mathbf {P} _{1}^{(\alpha )},\mathbf {P} _{2}^{(\alpha )},\cdots ,\mathbf {P} _{n}^{(\alpha )}\right]^{T}\\\mathbf {\Delta } ^{(\alpha )}&=\left[\mathbf {\Delta } _{1}^{(\alpha )},\mathbf {\Delta } _{2}^{(\alpha )},\cdots ,\mathbf {\Delta } _{n}^{(\alpha )}\right]^{T}\\\mathbf {B} &={\begin{bmatrix}B_{1}(t_{1})&B_{2}(t_{1})&\cdots &B_{n}(t_{1})\\B_{1}(t_{2})&B_{2}(t_{2})&\cdots &B_{n}(t_{2})\\\vdots &\vdots &\ddots &\vdots \\B_{1}(t_{m})&B_{2}(t_{m})&\cdots &B_{n}(t_{m})\\\end{bmatrix}}.\end{aligned}}} The convergence of the PIA is related to the properties of the collocation matrix. If the spectral radius of the iteration matrix I − B {\displaystyle \mathbf {I} -\mathbf {B} } is less than 1 {\displaystyle 1} , then the PIA is convergent. It has been shown that the PIA methods are convergent for Bézier curves and surfaces, B-spline curves and surfaces, NURBS curves and surfaces, triangular Bernstein–Bézier surfaces, and subdivision surfaces (Loop, Catmull-Clark, Doo-Sabin).69
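This criterion is easy to check numerically; the sketch below (a degree-5 Bernstein collocation matrix at uniform parameters, an illustrative choice) confirms that the spectral radius of I − B is below 1:

```python
import numpy as np
from math import comb

def bernstein_matrix(n, t):
    """Square collocation matrix B[i, j] = B_j(t_i) for the degree-(n-1) Bernstein basis."""
    d = n - 1
    return np.array([[comb(d, j) * s**j * (1 - s)**(d - j) for j in range(n)] for s in t])

n = 6
B = bernstein_matrix(n, np.linspace(0.0, 1.0, n))   # n = m: one parameter per control point
rho = max(abs(np.linalg.eigvals(np.eye(n) - B)))    # spectral radius of the iteration matrix
print(rho < 1.0)                                    # PIA converges for this configuration
```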
If n < m {\textstyle n<m} , the LSPIA in matrix form is7071 P ( α + 1 ) = P ( α ) + μ B T Δ ( α ) = P ( α ) + μ B T ( Q − B P ( α ) ) = ( I − μ B T B ) P ( α ) + μ B T Q . {\displaystyle {\begin{aligned}\mathbf {P^{(\alpha +1)}} &=\mathbf {P^{(\alpha )}} +\mu \mathbf {B} ^{T}\mathbf {\Delta } ^{(\alpha )}\\&=\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\left(\mathbf {Q} -\mathbf {B} \mathbf {P} ^{(\alpha )}\right)\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\mathbf {Q} .\end{aligned}}} When the matrix B T B {\textstyle \mathbf {B} ^{T}\mathbf {B} } is nonsingular, the following results can be obtained:72
Lemma—If 0 < μ < 2 λ 0 {\textstyle 0<\mu <{\frac {2}{\lambda _{0}}}} , where λ 0 {\textstyle \lambda _{0}} is the largest eigenvalue of the matrix B T B {\textstyle \mathbf {B} ^{T}\mathbf {B} } , then the eigenvalues of μ B T B {\textstyle \mu \mathbf {B} ^{T}\mathbf {B} } are real numbers and satisfy 0 < λ ( μ B T B ) < 2 {\textstyle 0<\lambda (\mu \mathbf {B} ^{T}\mathbf {B} )<2} .
Proof: Since B T B {\textstyle \mathbf {B} ^{T}\mathbf {B} } is nonsingular and μ > 0 {\textstyle \mu >0} , we have λ ( μ B T B ) > 0 {\textstyle \lambda (\mu \mathbf {B} ^{T}\mathbf {B} )>0} . Moreover, λ ( μ B T B ) = μ λ ( B T B ) < 2 λ ( B T B ) λ 0 ≤ 2. {\displaystyle \lambda (\mu \mathbf {B} ^{T}\mathbf {B} )=\mu \lambda (\mathbf {B} ^{T}\mathbf {B} )<2{\frac {\lambda (\mathbf {B} ^{T}\mathbf {B} )}{\lambda _{0}}}\leq 2.} In summary, 0 < λ ( μ B T B ) < 2 {\textstyle 0<\lambda (\mu \mathbf {B} ^{T}\mathbf {B} )<2} .
Theorem—If 0 < μ < 2 λ 0 {\textstyle 0<\mu <{\frac {2}{\lambda _{0}}}} , then LSPIA is convergent, and it converges to the least-squares fitting result of the given data points.7374
Proof: From the matrix form of the iterative format, we obtain the following: {\displaystyle {\begin{aligned}\mathbf {P} ^{(\alpha +1)}&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha )}+\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\left[\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)\mathbf {P} ^{(\alpha -1)}+\mu \mathbf {B} ^{T}\mathbf {Q} \right]+\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{2}\mathbf {P} ^{(\alpha -1)}+\sum _{i=0}^{1}\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{i}\mu \mathbf {B} ^{T}\mathbf {Q} ,\\&=\cdots \\&=\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\alpha +1}\mathbf {P} ^{(0)}+\sum _{i=0}^{\alpha }\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{i}\mu \mathbf {B} ^{T}\mathbf {Q} .\end{aligned}}} According to the above Lemma, the spectral radius of the matrix μ B T B {\textstyle \mu \mathbf {B} ^{T}\mathbf {B} } satisfies 0 < ρ ( μ B T B ) < 2 {\displaystyle 0<\rho \left({\mu \mathbf {B} ^{T}\mathbf {B} }\right)<2} and thus the spectral radius of the iteration matrix satisfies 0 < ρ ( I − μ B T B ) < 1. {\displaystyle 0<\rho \left({\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} }\right)<1.} Letting α → ∞ {\textstyle \alpha \rightarrow \infty } yields {\displaystyle \lim _{\alpha \rightarrow \infty }\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{\alpha }=\mathbf {0} ,\ \sum _{i=0}^{\infty }\left(\mathbf {I} -\mu \mathbf {B} ^{T}\mathbf {B} \right)^{i}={\frac {1}{\mu }}\left(\mathbf {B} ^{T}\mathbf {B} \right)^{-1}.} As a result, P ( ∞ ) = ( B T B ) − 1 B T Q , {\displaystyle \mathbf {P} ^{(\infty )}=\left(\mathbf {B} ^{T}\mathbf {B} \right)^{-1}\mathbf {B} ^{T}\mathbf {Q} ,} i.e., B T B P ( ∞ ) = B T Q {\textstyle \mathbf {B} ^{T}\mathbf {B} \mathbf {P} ^{(\infty )}=\mathbf {B} ^{T}\mathbf {Q} } , which is exactly the normal equation of the fitting problem. Hence, the LSPIA algorithm converges to the least-squares fitting result of the given data points.
Lin et al. showed that LSPIA converges even when the iteration matrix is singular.75
Since PIA has a clear geometric meaning, constraints can be easily integrated into the iterations. PIA has been widely applied in many fields, such as data fitting, reverse engineering, geometric design, mesh generation, data compression, fairing curve and surface generation, and isogeometric analysis.
For implicit curve and surface reconstruction, PIA avoids the additional zero level set and regularization term, which greatly improves the speed of the reconstruction algorithm.87
Firstly, the data points are sampled on the original curve. Then, the initial polynomial approximation curve or rational approximation curve of the offset curve is generated from these sampled points. Finally, the offset curve is approximated iteratively using the PIA method.88
Given a triangular mesh model as input, the algorithm first constructs the initial hexahedral mesh, then extracts the quadrilateral mesh of the surface as the initial boundary mesh. During the iterations, the movement of each mesh vertex is constrained to ensure the validity of the mesh. Finally, the hexahedral model is fitted to the given input model. The algorithm can guarantee the validity of the generated hexahedral mesh, i.e., the Jacobian value at each mesh vertex is greater than zero.89
First, the image data are converted into a one-dimensional sequence by a Hilbert scan. Then, these data points are fitted by LSPIA to generate a Hilbert curve. Finally, the Hilbert curve is sampled, and the compressed image can be reconstructed. This method preserves the neighborhood information of the pixels well.90
Given a data point set, we first define the fairing functional and calculate the fitting difference vector and the fairing vector of each control point; the control points are then adjusted with the fairing weights. Following these steps, the fairing curve and surface can be generated iteratively. Thanks to the ample fairing parameters, the method can achieve either global or local fairing. It is also flexible to adjust knot vectors, fairing weights, or data parameterization after each round of iteration. The traditional energy-minimization method is a special case of this method, namely when the smoothing weights are all equal.91
The discretized load values are regarded as the set of data points, and the combination of the basis functions and their derivative functions is used as the blending function for fitting. The method automatically adjusts the degrees of freedom of the numerical solution of the partial differential equation according to the fitting result of the blending function to the load values. In addition, the average iteration time per step is only related to the number of data points (i.e., collocation points) and unrelated to the number of control coefficients.92
Lin, Hong-Wei; Bao, Hu-Jun; Wang, Guo-Jin (2005). "Totally positive bases and progressive iteration approximation". Computers & Mathematics with Applications. 50 (3–4): 575–586. doi:10.1016/j.camwa.2005.01.023. ISSN 0898-1221.
Lin, Hongwei; Maekawa, Takashi; Deng, Chongyang (2018). "Survey on geometric iterative methods and their applications". Computer-Aided Design. 95: 40–51. doi:10.1016/j.cad.2017.10.002. ISSN 0010-4485.
Lin, Hongwei; Wang, Guojin; Dong, Chenshi (2004). "Constructing iterative non-uniform B-spline curve and surface to fit data points". Science in China Series F. 47 (3): 315. doi:10.1360/02yf0529. ISSN 1009-2757. S2CID 966980.
Qi, Dongxu; Tian, Zixian; Zhang, Auxin; Feng, Jiabin (1975). "The method of numeric polish in curve fitting". Acta Math Sinica. 18 (3): 173–184.
de Boor, Carl (1979). "How does Agee's smoothing method work?". Proceedings of the 1979 Army Numerical Analysis and Computers Conference, ARO Report.
Maekawa, Takashi; Matsumoto, Yasunori; Namiki, Ken (2007). "Interpolation by geometric algorithm". Computer-Aided Design. 39 (4): 313–323. doi:10.1016/j.cad.2006.12.008.
Cheng, Fuhua; Fan, Fengtao; Lai, Shuhua; Huang, Conglin; Wang, Jiaxi; Yong, Junhai (2008). "Progressive Interpolation Using Loop Subdivision Surfaces". Advances in Geometric Modeling and Processing. Lecture Notes in Computer Science. Vol. 4975. pp. 526–533. doi:10.1007/978-3-540-79246-8_43. ISBN 978-3-540-79245-1.
Hoschek, Josef (February 1993). Fundamentals of Computer Aided Geometric Design. USA: A. K. Peters, Ltd. ISBN 978-1-56881-007-2.
Shi, Limin; Wang, Renhong (2006). "An iterative algorithm of NURBS interpolation and approximation". Journal of Mathematical Research with Applications. 26 (4): 735–743.
Lin, Hongwei; Zhang, Zhiyu (2013). "An Efficient Method for Fitting Large Data Sets Using T-Splines". SIAM Journal on Scientific Computing. 35 (6): A3052–A3068. Bibcode:2013SJSC...35A3052L. doi:10.1137/120888569. ISSN 1064-8275.
Hamza, Yusuf Fatihu; Lin, Hongwei; Li, Zhao (2020). "Implicit progressive-iterative approximation for curve and surface reconstruction". Computer Aided Geometric Design. 77: 101817. arXiv:1909.00551. doi:10.1016/j.cagd.2020.101817. S2CID 202540812.
Lin, Hongwei (2010). "Local progressive-iterative approximation format for blending curves and patches". Computer Aided Geometric Design. 27 (4): 322–339. doi:10.1016/j.cagd.2010.01.003. ISSN 0167-8396.
Jiang, Yini; Lin, Hongwei; Huang, Weixian (2023-05-16). "Fairing-PIA: progressive-iterative approximation for fairing curve and surface generation". The Visual Computer. 40 (3): 1467–1484. arXiv:2211.11416. doi:10.1007/s00371-023-02861-7. ISSN 0178-2789.
Jiang, Yini; Lin, Hongwei (2023-02-10). "IG-LSPIA: Least Squares Progressive Iterative Approximation for Isogeometric Collocation Method". Mathematics. 11 (4): 898. doi:10.3390/math11040898. ISSN 2227-7390.
Hughes, T. J. R.; Cottrell, J. A.; Bazilevs, Y. (2005-10-01). "Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement". Computer Methods in Applied Mechanics and Engineering. 194 (39): 4135–4195. Bibcode:2005CMAME.194.4135H. doi:10.1016/j.cma.2004.10.008. ISSN 0045-7825.
Chen, Jie; Wang, Guo-Jin (2011). "Progressive iterative approximation for triangular Bézier surfaces". Computer-Aided Design. 43 (8): 889–895. doi:10.1016/j.cad.2011.03.012. ISSN 0010-4485.
Lin, Hongwei; Jin, Sinan; Hu, Qianqian; Liu, Zhenbao (2015). "Constructing B-spline solids from tetrahedral meshes for isogeometric analysis". Computer Aided Geometric Design. 35–36: 109–120. doi:10.1016/j.cagd.2015.03.013. ISSN 0167-8396.
Lin, Hongwei; Cao, Qi; Zhang, Xiaoting (2018). "The Convergence of Least-Squares Progressive Iterative Approximation for Singular Least-Squares Fitting System". Journal of Systems Science and Complexity. 31 (6): 1618–1632. doi:10.1007/s11424-018-7443-y. ISSN 1009-6124. S2CID 255157830.
Deng, Chongyang; Lin, Hongwei (2014). "Progressive and iterative approximation for least squares B-spline curve and surface fitting". Computer-Aided Design. 47: 32–44. doi:10.1016/j.cad.2013.08.012. ISSN 0010-4485.
Zhao, Yu; Lin, Hongwei; Bao, Hujun (2012). "Local progressive interpolation for subdivision surface fitting". Journal of Computer Research and Development. 49 (8): 1699–1707.
Liu, Shengjun; Liu, Tao; Hu, Ling; Shang, Yuanyuan; Liu, Xinru (2021-09-01). "Variational progressive-iterative approximation for RBF-based surface reconstruction". The Visual Computer. 37 (9): 2485–2497. doi:10.1007/s00371-021-02213-3. ISSN 1432-2315.
Lin, Hongwei; Hu, Qianqian; Xiong, Yunyang (2013-12-01). "Consistency and convergence properties of the isogeometric collocation method". Computer Methods in Applied Mechanics and Engineering. 267: 471–486. Bibcode:2013CMAME.267..471L. doi:10.1016/j.cma.2013.09.025. ISSN 0045-7825.
Horn, Roger A.; Johnson, Charles R. (2012-10-22). Matrix Analysis. Cambridge University Press. doi:10.1017/cbo9781139020411. ISBN 978-0-521-83940-2.
Liu, Chengzhi; Han, Xuli; Li, Juncheng (2020). "Preconditioned progressive iterative approximation for triangular Bézier patches and its application". Journal of Computational and Applied Mathematics. 366: 112389. doi:10.1016/j.cam.2019.112389. ISSN 0377-0427. S2CID 202942809.
Sajavičius, Svajūnas (2023). "Hyperpower least squares progressive iterative approximation". Journal of Computational and Applied Mathematics. 422: 114888. doi:10.1016/j.cam.2022.114888. ISSN 0377-0427. S2CID 252965212.
Lu, Lizheng (2010). "Weighted progressive iteration approximation and convergence analysis". Computer Aided Geometric Design. 27 (2): 129–137. doi:10.1016/j.cagd.2009.11.001. ISSN 0167-8396.
Zhang, Li (2014-05-01). "Weighted Local Progressive Iterative Approximation for Tensor-product Bézier Surfaces". Journal of Information and Computational Science. 11 (7): 2117–2124. doi:10.12733/jics20103359. ISSN 1548-7741.
Li, Shasha; Xu, Huixia; Deng, Chongyang (2019). "Data-weighted least square progressive and iterative approximation and related B-Spline curve fitting". Journal of Computer-Aided Design & Computer Graphics. 31 (9): 1574–1580.
Huang, Zheng-Da; Wang, Hui-Di (2020). "On a progressive and iterative approximation method with memory for least square fitting". Computer Aided Geometric Design. 82: 101931. arXiv:1908.06417. doi:10.1016/j.cagd.2020.101931. ISSN 0167-8396. S2CID 201070122.
Rios, Dany; Jüttler, Bert (2022). "LSPIA, (stochastic) gradient descent, and parameter correction". Journal of Computational and Applied Mathematics. 406: 113921. doi:10.1016/j.cam.2021.113921. ISSN 0377-0427. S2CID 244018717.
Lin, Hongwei (2012). "Adaptive data fitting by the progressive-iterative approximation". Computer Aided Geometric Design. 29 (7): 463–473. doi:10.1016/j.cagd.2012.03.005. ISSN 0167-8396.
Zhao, Yu; Lin, Hongwei; Bao, Hujun (2012). "Local progressive interpolation for subdivision surface fitting". Computer Research and Development. 49 (8): 1699–1707.
Zhang, Li; Wang, Huan; Li, Yuanyuan; Tan, Jieqing (2014). "A progressive iterative approximation method in offset approximation". Journal of Computer Aided Design and Computer Graphics. 26 (10): 1646–1653.
Lin, Hongwei; Jin, Sinan; Liao, Hongwei; Jian, Qun (2015). "Quality guaranteed all-hex mesh generation by a constrained volume iterative fitting algorithm". Computer-Aided Design. 67–68: 107–117. doi:10.1016/j.cad.2015.05.004. ISSN 0010-4485.
Hu, Lijuan; Yi, Yeqing; Liu, Chengzhi; Li, Juncheng (2020). "Iterative method for image compression by using LSPIA". IAENG International Journal of Computer Science. 47 (4): 1–7.