We present a survey of the fundamentals and the applications of sparse grids, with a focus on the solution of partial differential equations (PDEs). The sparse grid approach, introduced by Zenger (1991), is based on a higher-dimensional multiscale basis, which is derived from a one-dimensional multiscale basis by a tensor product construction. Discretizations on sparse grids involve only $O(N \cdot (\log N)^{d-1})$ degrees of freedom, where $d$ denotes the underlying problem's dimensionality and $N$ is the number of grid points in one coordinate direction at the boundary. The accuracy obtained with piecewise linear basis functions, for example, is $O(N^{-2} \cdot (\log N)^{d-1})$ with respect to the $L_2$- and $L_{\infty}$-norms, provided the solution has bounded second mixed derivatives. In this way, the curse of dimensionality, i.e., the exponential dependence $O(N^d)$ of conventional approaches, is overcome to some extent. For the energy norm, only $O(N)$ degrees of freedom are needed to give an accuracy of $O(N^{-1})$. That is why sparse grids are especially well-suited for problems of very high dimensionality. The sparse grid approach can be extended to nonsmooth solutions by adaptive refinement methods. Furthermore, it can be generalized from piecewise linear to higher-order polynomial basis functions. More sophisticated basis functions, such as interpolets, prewavelets, or wavelets, can also be used in a straightforward way. We describe the basic features of sparse grids and report the results of various numerical experiments for the solution of elliptic PDEs as well as for other selected problems such as numerical quadrature and data mining.
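The degree-of-freedom counts quoted above can be checked directly. The sketch below assumes the standard regular sparse grid of interior points, built from hierarchical subspaces indexed by level vectors $l$ with $|l|_1 \le n + d - 1$, each contributing $\prod_i 2^{l_i - 1}$ points; the function names are illustrative, not from the survey.

```python
from itertools import product
from math import prod

def full_grid_size(n, d):
    """Interior points of a regular full grid of level n in d dimensions: (2^n - 1)^d."""
    return (2**n - 1)**d

def sparse_grid_size(n, d):
    """Interior points of the regular sparse grid of level n in d dimensions.

    Sums the sizes of the hierarchical subspaces with level vectors l,
    l_i >= 1, satisfying |l|_1 <= n + d - 1; subspace l holds
    prod_i 2^(l_i - 1) points.
    """
    total = 0
    for levels in product(range(1, n + 1), repeat=d):
        if sum(levels) <= n + d - 1:
            total += prod(2**(li - 1) for li in levels)
    return total

# e.g. in 2D at level 3: 17 sparse grid points versus 49 full grid points
```

For $d = 1$ the two grids coincide ($2^n - 1$ points); as $d$ grows, the sparse count grows like $N (\log N)^{d-1}$ while the full count grows like $N^d$, which is the gap the abstract describes.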

This survey covers the state of the art of techniques for solving general-purpose constrained global optimization problems and continuous constraint satisfaction problems, with emphasis on complete techniques that provably find all solutions (if there are finitely many). The core of the material is presented in sufficient detail that the survey may serve as a text for teaching constrained global optimization. After giving motivations for and important examples of applications of global optimization, a precise problem definition is given, and a general form of the traditional first-order necessary conditions for a solution. Then more than a dozen software packages for complete global search are described. A quick review of incomplete methods for bound-constrained problems and recipes for their use in the constrained case follows; an explicit example is discussed, introducing the main techniques used within branch-and-bound methods. Sections on interval arithmetic, constraint propagation and local optimization are followed by a discussion of how to avoid the cluster problem. Then a discussion of important problem transformations follows, in particular of linear, convex, and semilinear (= mixed integer linear) relaxations that are important for handling larger problems. Next, reliability issues - centring on rounding error handling and testing methodologies - are discussed, and the COCONUT framework for the integration of the different techniques is introduced. A list of challenges facing the field in the near future concludes the survey.
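Interval arithmetic, one of the core tools listed above, computes with pairs of bounds so that an expression evaluated over a box yields a guaranteed enclosure of its range; branch and bound uses such enclosures to discard boxes that cannot contain a solution. The minimal sketch below (an illustrative class, not from the survey) evaluates the natural interval extension of $f(x) = x^2 - 2x$ over $[0,2]$; note the overestimation caused by the dependency problem, and note that a rigorous implementation would additionally round the bounds outward.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Closed interval [lo, hi]; arithmetic encloses the exact real result.

    For full rigour one would use outward (directed) rounding on each bound;
    this sketch uses plain floating point.
    """
    lo: float
    hi: float

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))

def f(x):
    # natural interval extension of f(x) = x^2 - 2x
    return x*x - Interval(2.0, 2.0)*x

box = Interval(0.0, 2.0)
enc = f(box)  # encloses the true range [-1, 0], but overestimates it as [-4, 4]
```

The enclosure $[-4, 4]$ is valid but loose because the two occurrences of $x$ are treated as independent; tighter extensions (e.g. rewriting $x^2 - 2x$ as $(x-1)^2 - 1$) and constraint propagation are among the remedies the survey discusses.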

A computational framework is presented for integrating the electrical, mechanical and biochemical functions of the heart. Finite element techniques are used to solve the large-deformation soft tissue mechanics using orthotropic constitutive laws based on the measured fibre-sheet structure of myocardial (heart muscle) tissue. The reaction-diffusion equations governing electrical current flow in the heart are solved on a grid of deforming material points which access systems of ODEs representing the cellular processes underlying the cardiac action potential. Navier-Stokes equations are solved for coronary blood flow in a system of branching blood vessels embedded in the deforming myocardium and the delivery of oxygen and metabolites is coupled to the energy-dependent cellular processes. The framework presented here for modelling coupled physical conservation laws at the tissue and organ levels is also appropriate for other organ systems in the body and we briefly discuss applications to the lungs and the musculo-skeletal system. The computational framework is also designed to reach down to subcellular processes, including signal transduction cascades and metabolic pathways as well as ion channel electrophysiology, and we discuss the development of ontologies and markup language standards that will help link the tissue and organ level models to the vast array of gene and protein data that are now available in web-accessible databases.
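The reaction-diffusion structure mentioned above - diffusive spread of electrical activation coupled to local excitation kinetics - can be illustrated with a deliberately minimal stand-in. The sketch below uses the generic Fisher equation $u_t = D u_{xx} + u(1-u)$ on a fixed 1D grid, not the cardiac cell models or deforming-grid formulation of the paper, simply to show a travelling activation front emerging from an explicit finite-difference discretization.

```python
def rd_step(u, D, dt, dx):
    """One explicit Euler step of u_t = D*u_xx + u*(1 - u).

    A generic reaction-diffusion model, NOT the cardiac cell models of the
    paper; zero-flux (Neumann) boundaries are imposed via mirrored neighbours.
    """
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        left = u[i-1] if i > 0 else u[1]
        right = u[i+1] if i < n - 1 else u[n-2]
        lap = (left - 2.0*u[i] + right) / dx**2
        out[i] = u[i] + dt * (D*lap + u[i]*(1.0 - u[i]))
    return out

# a travelling front: the excited region on the left invades the resting region
dx, dt, D = 0.5, 0.1, 1.0                      # dt < dx^2/(2D) for stability
u = [1.0 if i*dx < 5.0 else 0.0 for i in range(41)]
for _ in range(100):                           # integrate to t = 10
    u = rd_step(u, D, dt, dx)
```

In the cardiac setting the scalar reaction term is replaced by a stiff system of cellular ODEs at each material point, and the grid itself deforms with the tissue mechanics, as the abstract describes.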

The qualitative and quantitative analysis of numerical methods for delay differential equations is now quite well understood, as reflected in the recent monograph by Bellen and Zennaro (2003). This is in remarkable contrast to the situation in the numerical analysis of functional equations, in which delays occur in connection with memory terms described by Volterra integral operators. The complexity of the convergence and asymptotic stability analyses has its roots in new 'dimensions' not present in DDEs: the problems have distributed delays; kernels in the Volterra operators may be weakly singular; a second discretization step (approximation of the memory term by feasible quadrature processes) will in general be necessary before solution approximations can be computed. The purpose of this review is to introduce the reader to functional integral and integro-differential equations of Volterra type and their discretization, focusing on collocation techniques; to describe the 'state of the art' in the numerical analysis of such problems; and to show that - especially for many 'classical' equations whose analysis dates back more than 100 years - we still have a long way to go before we reach a level of insight into their discretized versions comparable to that achieved for DDEs.
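The 'second discretization step' mentioned above - replacing the Volterra memory term by a quadrature rule - is easy to see on a model problem. The sketch below discretizes the linear Volterra equation of the second kind $y(t) = 1 + \int_0^t y(s)\,ds$ (exact solution $y(t) = e^t$) with the composite trapezoidal rule rather than the collocation methods the review focuses on; it is an illustration of the quadrature step only.

```python
import math

def volterra_trapezoid(h, steps):
    """Trapezoidal discretization of  y(t) = 1 + \\int_0^t y(s) ds.

    Exact solution: y(t) = e^t.  At each step the memory integral over
    [0, t_n] is approximated by the trapezoidal rule; the implicit weight
    h/2 on the new value y_n is moved to the left-hand side.
    """
    y = [1.0]
    for n in range(1, steps + 1):
        past = 0.5 * y[0] + sum(y[1:n])        # trapezoid weights on nodes 0..n-1
        y.append((1.0 + h * past) / (1.0 - 0.5 * h))
    return y

y = volterra_trapezoid(0.01, 100)              # approximate y on [0, 1]
```

Even this simple scheme shows the growing cost of the memory term: step $n$ touches all $n$ earlier values, so the work is quadratic in the number of steps, one of the new 'dimensions' absent from DDEs.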

We first survey componentwise and normwise perturbation bounds for the standard least squares (LS) and minimum norm problems. Then some recent estimates of the optimal backward error for an alleged solution to an LS problem are presented. These results are particularly interesting when the algorithm used is not backward stable. The QR factorization and the singular value decomposition (SVD), developed in the 1960s and early 1970s, remain the basic tools for solving both the LS and the total least squares (TLS) problems. Current algorithms based on Householder or Gram-Schmidt QR factorizations are reviewed. The use of the SVD to determine the numerical rank of a matrix, as well as for computing a sequence of regularized solutions, is then discussed. The solution of the TLS problem in terms of the SVD of the compound matrix $(b\ A)$ is described. Some recent algorithmic developments are motivated by the need for the efficient implementation of the QR factorization on modern computer architectures. This includes blocked algorithms as well as newer recursive implementations. Other developments come from needs in different application areas. For example, in signal processing, rank-revealing orthogonal decompositions need to be updated frequently. We review several classes of such decompositions, which can be updated more efficiently than the SVD. Two algorithms for the orthogonal bidiagonalization of an arbitrary matrix were given by Golub and Kahan in 1965, one using Householder transformations and the other a Lanczos process. If used to transform the matrix $(b\ A)$ to upper bidiagonal form, this becomes a powerful tool for solving various LS and TLS problems. This bidiagonal decomposition gives a core regular subproblem for the TLS problem. When implemented by the Lanczos process it forms the kernel in the iterative method LSQR. It is also the basis of the partial least squares (PLS) method, which has become a standard tool in statistics. 
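The QR route to the LS problem mentioned above reduces $\min_x \|Ax - b\|_2$ to a triangular solve: factor $A = QR$ and solve $Rx = Q^{\mathsf T} b$. The self-contained sketch below uses modified Gram-Schmidt (one of the two QR variants the survey reviews) on small dense matrices stored as lists of rows; it assumes $A$ has full column rank and is for illustration only.

```python
def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x n matrix (full column rank assumed).

    Returns Q (m x n, orthonormal columns) and R (n x n, upper triangular).
    """
    m, n = len(A), len(A[0])
    V = [row[:] for row in A]                   # working copy, updated in place
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = sum(V[i][j]**2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][j] = V[i][j] / R[j][j]
        for k in range(j + 1, n):               # orthogonalize remaining columns
            R[j][k] = sum(Q[i][j] * V[i][k] for i in range(m))
            for i in range(m):
                V[i][k] -= R[j][k] * Q[i][j]
    return Q, R

def lstsq(A, b):
    """Solve min ||Ax - b||_2 via QR: back-substitute R x = Q^T b."""
    Q, R = mgs_qr(A)
    m, n = len(A), len(R)
    qtb = [sum(Q[i][j] * b[i] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in reversed(range(n)):
        x[j] = (qtb[j] - sum(R[j][k] * x[k] for k in range(j + 1, n))) / R[j][j]
    return x

# fit a line through (0,1), (1,2), (2,3): exact fit, x = [1, 1]
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
x = lstsq(A, [1.0, 2.0, 3.0])
```

Householder QR is preferred for backward stability, and blocked or recursive variants for performance on modern architectures, as the survey explains; the point here is only the shape of the computation.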
We present some generalized QR factorizations which can be used to solve different generalized least squares problems. Many applications lead to LS problems where the solution is subject to constraints. This includes linear equality and inequality constraints. Quadratic constraints are used to regularize solutions to discrete ill-posed LS problems. We survey these classes of problems and discuss their solution. As in all scientific computing, there is a trend that the size and complexity of the problems being solved are steadily growing. Large problems are often sparse or structured. Algorithms for the efficient solution of banded and block-angular LS problems are given, followed by a brief discussion of the general sparse case. Iterative methods are attractive, in particular when matrix-vector multiplication is cheap.
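The quadratically constrained regularization mentioned above is, in its simplest Tikhonov form, $\min_x \|Ax - b\|_2^2 + \lambda^2 \|x\|_2^2$. The sketch below solves a two-unknown instance through the regularized normal equations $(A^{\mathsf T}A + \lambda^2 I)x = A^{\mathsf T}b$ with an explicit 2x2 solve; for genuinely ill-conditioned problems one would instead apply QR or the SVD to the augmented matrix $[A;\ \lambda I]$, as the survey discusses, so this direct form is illustration only.

```python
def tikhonov_2x2(A, b, lam):
    """Tikhonov-regularized LS for n = 2 unknowns.

    Solves (A^T A + lam^2 I) x = A^T b by Cramer's rule on the 2x2
    regularized Gram matrix.  Illustrative only: the normal-equations form
    squares the condition number.
    """
    m = len(A)
    G = [[sum(A[i][p] * A[i][q] for i in range(m)) + (lam**2 if p == q else 0.0)
          for q in range(2)] for p in range(2)]
    rhs = [sum(A[i][p] * b[i] for i in range(m)) for p in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(G[1][1] * rhs[0] - G[0][1] * rhs[1]) / det,
            (G[0][0] * rhs[1] - G[1][0] * rhs[0]) / det]

A = [[1.0, 0.0], [0.0, 1.0]]
b = [2.0, 2.0]
x_plain = tikhonov_2x2(A, b, 0.0)   # lam = 0: ordinary LS solution [2, 2]
x_reg = tikhonov_2x2(A, b, 1.0)     # lam = 1: shrunk solution [1, 1]
```

Increasing $\lambda$ trades residual for solution norm, which is exactly the mechanism by which the quadratic constraint stabilizes discrete ill-posed problems.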