# Smolyak algorithm

Boolean method, discrete blending method, hyperbolic cross points method, sparse grid method, fully symmetric quadrature formulas

Many high-dimensional problems are difficult to solve, cf. also Optimization of computational algorithms; Information-based complexity; Curse of dimension. High-dimensional linear tensor product problems can be solved efficiently by the Smolyak algorithm, which provides a general principle to construct good approximations for the $d$-dimensional case from approximations for the univariate case. The Smolyak algorithm is often optimal or almost optimal if the approximations for the univariate case are properly chosen.

The original paper of S.A. Smolyak was published in 1963 [a6]. There exist many variants of the basic algorithm for specific problems, see [a1], [a2], [a3], [a4], [a5], [a6], [a7], [a8], [a9], [a10]. For a thorough analysis with explicit error bounds see [a8]; for major modifications see [a9]. The basic idea is explained here using the examples of multivariate integration and approximation. Assume one wishes to approximate, for smooth functions $f:[0,1]^d\to\mathbb R$,

$$\operatorname{Int}_d(f)=\int_{[0,1]^d}f(x)\,dx \qquad\text{or}\qquad \operatorname{App}_d(f)=f,$$

using finitely many function values of $f$. Multivariate integration (the first problem) is needed in many areas even for very large $d$, and often a Monte-Carlo method or a number-theoretic method (based on low-discrepancy sequences, see Discrepancy) is used. Multivariate approximation (the second problem) is often part of the solution of operator equations using the Galerkin method, see [a2], [a4], [a10].
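
For orientation, the Monte-Carlo baseline mentioned above can be sketched in a few lines; its $n^{-1/2}$ error rate is independent of the dimension $d$ but slow, which is what the Smolyak algorithm improves on for smooth integrands. The function names below are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

def mc_integrate(f, d, n):
    """Estimate Int_d(f), the integral of f over [0,1]^d, from n uniform samples."""
    samples = rng.random((n, d))
    return float(np.mean([f(x) for x in samples]))

# Example: f(x) = x_1 + ... + x_d has exact integral d/2 over the unit cube.
est = mc_integrate(lambda x: x.sum(), d=10, n=4000)
```

With $n=4000$ samples the estimate lands near the exact value $d/2=5$, but halving the error requires four times as many function values, regardless of the smoothness of $f$.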

For $d=1$, much is known about good quadrature or interpolation formulas (cf. also Quadrature formula; Interpolation formula)

$$U^i(f)=\sum_{j=1}^{m_i}f(x^i_j)\,a^i_j.$$

Here, $a^i_j\in\mathbb R$ for quadrature formulas and $a^i_j\in C([0,1])$ for approximation formulas. It is assumed that a sequence of such formulas $U^i$ with increasing $m_i$ (e.g. $m_1=1$ and $m_i=2^{i-1}+1$ for $i>1$) is given. For $C^r$-functions the optimal order is

$$e(U^i)\asymp m_i^{-r},\tag{a1}$$

where $\|\cdot\|$ is a norm on $C^r([0,1])$, as in (a2) with $d=1$. The error is defined by

$$e(U^i)=\sup_{\|f\|\le1}\Big|\int_0^1 f(x)\,dx-U^i(f)\Big| \quad\text{and}\quad e(U^i)=\sup_{\|f\|\le1}\big\|f-U^i(f)\big\|_\infty,$$

respectively. In the multivariate case $d>1$, one first defines tensor product formulas

$$\big(U^{i_1}\otimes\cdots\otimes U^{i_d}\big)(f)=\sum_{j_1=1}^{m_{i_1}}\cdots\sum_{j_d=1}^{m_{i_d}}f\big(x^{i_1}_{j_1},\dots,x^{i_d}_{j_d}\big)\,\big(a^{i_1}_{j_1}\otimes\cdots\otimes a^{i_d}_{j_d}\big),$$

which serve as building blocks for the Smolyak algorithm. Then the Smolyak algorithm can be written as

$$A(q,d)=\sum_{q-d+1\le|\mathbf i|\le q}(-1)^{q-|\mathbf i|}\binom{d-1}{q-|\mathbf i|}\big(U^{i_1}\otimes\cdots\otimes U^{i_d}\big),$$

where $q\ge d$, $\mathbf i\in\mathbb N^d$ and $|\mathbf i|=i_1+\cdots+i_d$. To compute $A(q,d)(f)$, one only needs to know function values at the "sparse grid"

$$H(q,d)=\bigcup_{q-d+1\le|\mathbf i|\le q}\big(X^{i_1}\times\cdots\times X^{i_d}\big),$$

where $X^i=\{x^i_1,\dots,x^i_{m_i}\}\subset[0,1]$ denotes the set of points used by $U^i$. For the tensor product norm

$$\|f\|=\max\big\{\|D^{\alpha}f\|_\infty : \alpha\in\mathbb N_0^d,\ \alpha_j\le r\ \text{for all}\ j\big\},\tag{a2}$$

the error bound

$$e\big(A(q,d)\big)=O\big(n^{-r}(\log n)^{(d-1)(r+1)}\big)\tag{a3}$$

follows from (a1), where $n$ is the number of points in the sparse grid $H(q,d)$. This is the basic result of [a6]; see [a8], [a9] for much more explicit error bounds. The error bound (a3) should be compared with the optimal order $n^{-r/d}$ of tensor product formulas.
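
The construction can be sketched end-to-end. The snippet below fixes one common choice of univariate building blocks, namely nested trapezoidal rules with $m_1=1$ and $m_i=2^{i-1}+1$ points (the article does not prescribe a particular sequence; Clenshaw–Curtis nodes are another standard option), and implements the combination formula for $A(q,d)$ and the sparse grid $H(q,d)$. All function names are illustrative.

```python
import itertools
from math import comb

import numpy as np

def univariate_rule(i):
    """Nodes x^i_j and weights a^i_j of U^i on [0, 1]: the midpoint rule
    for i = 1, the composite trapezoidal rule on m_i = 2^(i-1) + 1
    equispaced (hence nested) points for i > 1."""
    if i == 1:
        return np.array([0.5]), np.array([1.0])
    m = 2 ** (i - 1) + 1
    x = np.linspace(0.0, 1.0, m)
    w = np.full(m, 1.0 / (m - 1))  # spacing h = 1 / (m - 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_quadrature(f, q, d):
    """A(q, d)(f): sum over multi-indices i with q-d+1 <= |i| <= q of
    (-1)^(q-|i|) * binom(d-1, q-|i|) times the tensor product rule
    U^{i_1} x ... x U^{i_d} applied to f."""
    total = 0.0
    for idx in itertools.product(range(1, q - d + 2), repeat=d):
        s = sum(idx)
        if not q - d + 1 <= s <= q:
            continue
        coeff = (-1) ** (q - s) * comb(d - 1, q - s)
        nodes, weights = zip(*(univariate_rule(i) for i in idx))
        for pt, w in zip(itertools.product(*nodes),
                         itertools.product(*weights)):
            total += coeff * np.prod(w) * f(np.array(pt))
    return total

def sparse_grid(q, d):
    """H(q, d): the union of the grids X^{i_1} x ... x X^{i_d}."""
    pts = set()
    for idx in itertools.product(range(1, q - d + 2), repeat=d):
        if q - d + 1 <= sum(idx) <= q:
            pts.update(itertools.product(
                *(tuple(map(float, univariate_rule(i)[0])) for i in idx)))
    return pts
```

Since each tensor product rule integrates affine functions exactly and the combination coefficients sum to one, $A(4,2)$ reproduces $\int_{[0,1]^2}(x_1+x_2)\,dx=1$ exactly, while $H(4,2)$ contains only 13 points versus $5^2=25$ for the full tensor grid at the same univariate resolution; this gap widens rapidly as $q$ and $d$ grow. Because the equispaced nodes are exact binary fractions, the set union here is safe under floating-point equality.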