In Monte Carlo methods, quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which, in certain cases, reduces the overall work to that of discretizing one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be log-linear in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
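The telescoping idea behind multilevel Monte Carlo can be illustrated with a small sketch. The toy below (a coupled Euler discretization of geometric Brownian motion, not the paper's elliptic model problem; all parameters and sample-size choices are illustrative assumptions) estimates E[S_T] by summing level corrections, with fewer samples on the more expensive fine levels:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(l, n, S0=1.0, r=0.05, sig=0.2, T=1.0):
    """Simulate n coupled (fine, coarse) Euler paths of geometric
    Brownian motion at level l. The fine grid has 2**l steps; the
    coarse path (level l-1) reuses the same Brownian increments, so
    the difference P_l - P_{l-1} has small variance."""
    nf = 2 ** l
    hf = T / nf
    dW = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
    Sf = np.full(n, S0)
    for k in range(nf):
        Sf = Sf + r * Sf * hf + sig * Sf * dW[:, k]
    if l == 0:
        return Sf                      # P_0 itself on the coarsest level
    hc = T / (nf // 2)
    Sc = np.full(n, S0)
    for k in range(nf // 2):
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]   # sum of two fine increments
        Sc = Sc + r * Sc * hc + sig * Sc * dWc
    return Sf - Sc

def mlmc(L, N0):
    """Telescoping MLMC estimator E[P_L] = sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for l in range(L + 1):
        Nl = max(N0 // 4 ** l, 100)    # fewer samples on finer levels
        est += euler_pair(l, Nl).mean()
    return est

print(mlmc(L=5, N0=200_000))  # close to exp(0.05), the exact E[S_T]
```

Because the variance of the level differences decays with the mesh width, the bulk of the samples is spent on the cheap coarse level, which is the source of the complexity gain.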

We consider matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. One of the most fundamental differences from the linear case is that distinct eigenvalues may have linearly dependent eigenvectors or even share the same eigenvector. This has been a severe hindrance in the development of general numerical schemes for computing several eigenvalues of a nonlinear eigenvalue problem, either simultaneously or subsequently. The purpose of this work is to show that the concept of invariant pairs offers a way of representing eigenvalues and eigenvectors that is insensitive to this phenomenon. To demonstrate the use of this concept in the development of numerical methods, we develop a novel block Newton method for computing such invariant pairs. Algorithmic aspects of this method are considered and a few academic examples demonstrate its viability.
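The defining relation of an invariant pair can be checked numerically. The sketch below (an illustrative toy, not the paper's block Newton method) extracts an invariant pair $(X, S)$ of a quadratic eigenvalue problem $\lambda^2 M + \lambda C + K$ from a companion linearization and verifies the residual $M X S^2 + C X S + K X \approx 0$; $S$ is taken diagonal for simplicity, although the concept also allows non-diagonal $S$:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n, k = 5, 3
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

# First companion linearization of Q(lam) = lam^2 M + lam C + K:
# a vector z = [v; lam*v] satisfies A z = lam B z iff Q(lam) v = 0.
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])
lam, Z = eig(A, B)

# Pick k eigenpairs and form the invariant pair (X, S), S = diag(lams)
X = Z[:n, :k]
S = np.diag(lam[:k])

res = M @ X @ S @ S + C @ X @ S + K @ X
print(np.linalg.norm(res))  # tiny: (X, S) satisfies the invariance relation
```

The point of the representation is that the relation above remains well posed even when the columns of $X$ are linearly dependent, which is exactly the situation that breaks eigenvector-based schemes.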

In this paper we review and extend the reduced basis approximation and a posteriori error estimation for steady Stokes flows in affinely parametrized geometries, focusing on the role played by Brezzi's and Babuška's stability constants. The crucial ingredients of the methodology are a Galerkin projection onto a low-dimensional space of properly selected basis functions, an affine parametric dependence enabling a competitive Offline-Online splitting of the computational procedure, and a rigorous a posteriori error estimation on field variables. The combination of these three factors yields substantial computational savings, which are at the basis of an efficient model order reduction, ideally suited for real-time simulation and many-query contexts (e.g. optimization, control or parameter identification). In particular, in this work we focus on (i) the stability of the reduced basis approximation based on Brezzi's saddle point theory and the introduction of a supremizer operator on the pressure terms, (ii) a rigorous a posteriori error estimation procedure for velocity and pressure fields based on Babuška's inf-sup constant (including residual calculations), (iii) the computation of a lower bound of the stability constant, and (iv) different options for the construction of reduced basis spaces. We present some illustrative results for both interior and external steady Stokes flows in parametrized geometries: two classical parametrized Poiseuille and Couette flows, a channel contraction and a simple flow control problem around a curved obstacle.

This note proposes a novel approach to derive a worst-case $O(1/k)$ convergence rate, measured by the iteration complexity in a non-ergodic sense, for the Douglas–Rachford alternating direction method of multipliers proposed by Glowinski and Marrocco.
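For orientation, the iteration whose non-ergodic rate is analyzed can be written down for a toy scalar problem. The closed-form updates below are the standard ADMM steps for $\min f(x) + g(z)$ subject to $x = z$ with quadratic $f, g$; the choice of $a$, $c$ and $\rho$ is illustrative. The quantity bounded by the $O(1/k)$ result is (a weighted norm of) the difference between successive iterates:

```python
# Scalar toy problem: minimize f(x) + g(z) s.t. x = z, with
# f(x) = (x - a)^2 / 2 and g(z) = (z - c)^2 / 2, so x* = z* = (a + c)/2.
a, c, rho = 0.0, 4.0, 1.0
x = z = u = 0.0          # u is the scaled dual variable
diffs = []
for k in range(50):
    x_new = (a + rho * (z - u)) / (1.0 + rho)       # x-update
    z_new = (c + rho * (x_new + u)) / (1.0 + rho)   # z-update
    u_new = u + x_new - z_new                       # dual update
    diffs.append(abs(x_new - x) + abs(z_new - z) + abs(u_new - u))
    x, z, u = x_new, z_new, u_new

print(x, z)  # both converge to 2.0 = (a + c) / 2
```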

For computations of planetary motions with special linear multistep methods an excellent long-time behaviour is reported in the literature, without a theoretical explanation. Neither the total energy nor the angular momentum exhibits secular error terms. In this paper we completely explain this behaviour by studying the modified equation of these methods and by analyzing the remarkably stable propagation of parasitic solution components.

In the present paper we construct virtual element spaces that are H(div)-conforming and H(curl)-conforming on general polygonal and polyhedral elements; these spaces can be interpreted as a generalization of well known finite elements. We moreover present the basic tools needed to make use of these spaces in the approximation of partial differential equations. Finally, we discuss the construction of exact sequences of VEM spaces.

A posteriori error analysis for the virtual element method (VEM) applied to general elliptic problems is presented. The resulting error estimator is of residual type and applies to very general polygonal/polyhedral meshes. The estimator is fully computable as it relies only on quantities available from the VEM solution, namely its degrees of freedom and element-wise polynomial projection. Upper and lower bounds of the error estimator with respect to the VEM approximation error are proven. The error estimator is used to drive adaptive mesh refinement in a number of test problems. Mesh adaptation is particularly simple to implement since elements with consecutive co-planar edges/faces are allowed and, therefore, locally adapted meshes do not require any local mesh post-processing.

In this paper, we define a new class of finite elements for the discretization of problems with Dirichlet boundary conditions. In contrast to standard finite elements, the minimal dimension of the approximation space is independent of the domain geometry, which is especially advantageous for problems on domains with complicated micro-structures. For the proposed finite element method we prove optimal-order approximation and convergence estimates (up to logarithmic terms), valid also when the exact solution has reduced regularity due to re-entrant corners of the domain boundary. Numerical experiments confirm the theoretical results and show the potential of the proposed method.

We consider the application of multilevel Monte Carlo methods to elliptic PDEs with random coefficients. We focus on models of the random coefficient that lack uniform ellipticity and boundedness with respect to the random parameter, and that only have limited spatial regularity. We extend the finite element error analysis for this type of equation, carried out in Charrier et al. (SIAM J Numer Anal, 2013), to more difficult problems, posed on non-smooth domains and with discontinuities in the coefficient. For this wider class of model problem, we prove convergence of the multilevel Monte Carlo algorithm for estimating any bounded, linear functional and any continuously Fréchet differentiable non-linear functional of the solution. We further improve the performance of the multilevel estimator by introducing level dependent truncations of the Karhunen–Loève expansion of the random coefficient. Numerical results complete the paper.

Coarse spaces are instrumental in obtaining scalability for domain decomposition methods for partial differential equations (PDEs). However, it is known that most popular choices of coarse spaces perform rather weakly in the presence of heterogeneities in the PDE coefficients, especially for systems of PDEs. Here, we introduce in a variational setting a new coarse space that is robust even when there are such heterogeneities. We achieve this by solving local generalized eigenvalue problems in the overlaps of subdomains that isolate the terms responsible for slow convergence. We prove a general theoretical result that rigorously establishes the robustness of the new coarse space and give some numerical examples on two and three dimensional heterogeneous PDEs and systems of PDEs that confirm this property.

Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets n be the number of constraints and d be the number of variables, with $n \gg d$. Then, existing exact methods find a solution vector in $O(nd^2)$ time. We present two randomized algorithms that provide accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the randomized Hadamard transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if n is sufficiently larger than d, the approximate solution can be computed in $O(nd \ln d)$ time.
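A minimal sketch of the first algorithm (randomized Hadamard preprocessing followed by uniform row sampling) is given below. Note that, for clarity, it materializes the Hadamard matrix explicitly in $O(n^2)$ time, whereas the stated $O(nd \ln d)$ bound requires the fast transform; problem sizes and the number of sampled rows are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(3)
n, d, r = 1024, 10, 200          # n constraints, d unknowns, r sampled rows
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Preprocess with the randomized Hadamard transform H D (random signs D
# followed by a Hadamard mix, which flattens the leverage scores), then
# uniformly sample r rows of the mixed system.
D = rng.choice([-1.0, 1.0], size=n)
HDA = hadamard(n) @ (D[:, None] * A) / np.sqrt(n)
HDb = hadamard(n) @ (D * b) / np.sqrt(n)
rows = rng.choice(n, size=r, replace=False)
x_apx = np.linalg.lstsq(HDA[rows], HDb[rows], rcond=None)[0]

# Compare residuals against the exact least squares solution
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]
rel = (np.linalg.norm(A @ x_apx - b) - np.linalg.norm(A @ x_opt - b)) \
      / np.linalg.norm(A @ x_opt - b)
print(rel)  # small relative increase over the optimal residual norm
```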

In this paper we propose a stabilized conforming finite volume element method for the Stokes equations. After establishing the convergence of the method, optimal a priori error estimates in different norms are obtained by establishing the adequate connection between the finite volume and stabilized finite element formulations. A superconvergence result is also derived by using a postprocessing projection method. In particular, the stabilization of the continuous lowest equal-order pair finite volume element discretization is achieved by enriching the velocity space with local functions that do not necessarily vanish on the element boundaries. Finally, some numerical experiments that confirm the predicted behavior of the method are provided.

In this paper, we propose a theoretical study of the approximation properties of NURBS spaces, which are used in Isogeometric Analysis. We obtain error estimates that are explicit in terms of the mesh size $h$, the degree $p$ and the global regularity, measured by the parameter $k$. Our approach covers approximation with global regularity from $C^0$ up to $C^{k-1}$, with $2k - 1 \le p$. Notice that the interesting case of higher regularity, up to $k = p$, is still open. However, our results give an indication of the role of the smoothness $k$ in the approximation properties, and offer a first mathematical justification of the potential of Isogeometric Analysis based on globally smooth NURBS.

The aim of this paper is to develop an hp-version a posteriori error analysis for the time discretization of parabolic problems by the continuous Galerkin (cG) and the discontinuous Galerkin (dG) time-stepping methods, respectively. The resulting error estimators are fully explicit with respect to the local time-steps and approximation orders. Their performance within an hp-adaptive refinement procedure is illustrated with a series of numerical experiments.

We introduce a new family of strong linearizations of matrix polynomials—which we call “block Kronecker pencils”—and perform a backward stability analysis of complete polynomial eigenproblems. These problems are solved by applying any backward stable algorithm to a block Kronecker pencil, such as the staircase algorithm for singular pencils or the QZ algorithm for regular pencils. This stability analysis allows us to identify those block Kronecker pencils that yield a computed complete eigenstructure which is exactly that of a slightly perturbed matrix polynomial. The global backward error analysis in this work presents for the first time the following key properties: it is a rigorous analysis valid for finite perturbations (i.e., it is not a first order analysis), it provides precise bounds, it is valid simultaneously for a large class of linearizations, and it establishes a framework that may be generalized to other classes of linearizations. These features are related to the fact that block Kronecker pencils are a particular case of the new family of “strong block minimal bases pencils”, which are robust under certain perturbations and therefore include certain perturbations of block Kronecker pencils.
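The classical Frobenius companion form, which block Kronecker pencils contain as a special case, illustrates the overall computational scheme (build a pencil, apply QZ, read off the eigenvalues). The example below is an illustrative toy for a quadratic matrix polynomial, not the paper's backward error analysis:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(6)
n = 4
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Companion-type pencil L(lam) = lam*B + C for the matrix polynomial
# P(lam) = lam^2 A2 + lam A1 + A0; a vector z = [lam*v; v] satisfies
# L(lam) z = 0 exactly when P(lam) v = 0.
B = np.block([[A2, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
C = np.block([[A1, A0], [-np.eye(n), np.zeros((n, n))]])

lam, _ = eig(-C, B)              # QZ recovers all 2n eigenvalues of P
lam = lam[np.isfinite(lam)]      # discard infinite eigenvalues, if any

def P(l):
    return l**2 * A2 + l * A1 + A0

# Each computed eigenvalue makes P(lam) numerically singular
sigmin = [np.linalg.svd(P(l), compute_uv=False)[-1] /
          np.linalg.norm(P(l)) for l in lam]
print(max(sigmin))  # tiny: the pencil's eigenvalues are those of P
```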

In this paper we analyze the numerical approximation of diffusion problems over polyhedral domains in $\mathbb{R}^d$ ($d = 1, 2, 3$), with diffusion coefficient $a(\mathbf{x},\omega)$ given as a lognormal random field, i.e., $a(\mathbf{x},\omega) = \exp(Z(\mathbf{x},\omega))$, where $\mathbf{x}$ is the spatial variable and $Z(\mathbf{x}, \cdot)$ is a Gaussian random field. The analysis presents particular challenges since the corresponding bilinear form is not uniformly bounded away from $0$ or $\infty$ over all possible realizations of $a$. Focusing on the problem of computing the expected value of linear functionals of the solution of the diffusion problem, we give a rigorous error analysis for methods constructed from (1) standard continuous and piecewise linear finite element approximation in physical space; (2) truncated Karhunen–Loève expansion for computing realizations of $a$ (leading to a possibly high-dimensional parametrized deterministic diffusion problem); and (3) lattice-based quasi-Monte Carlo (QMC) quadrature rules for computing integrals over parameter space which define the expected values. The paper contains novel error analysis which accounts for the effect of all three types of approximation. The QMC analysis is based on a recent result on randomly shifted lattice rules for high-dimensional integrals over the unbounded domain of Euclidean space, which shows that (under suitable conditions) the quadrature error decays as $\mathcal{O}(n^{-1+\delta})$ with respect to the number of quadrature points $n$, where $\delta > 0$ is arbitrarily small and the implied constant in the asymptotic error bound is independent of the dimension of the domain of integration.
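A randomly shifted rank-1 lattice rule of the kind referred to above can be sketched in a few lines. The example is a two-dimensional toy with a Fibonacci generating vector and a smooth integrand, not the high-dimensional lognormal setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def shifted_lattice(f, z, n, shifts=8):
    """Randomly shifted rank-1 lattice rule on [0,1]^d.

    Quadrature points are frac(i*z/n + Delta) for i = 0..n-1; averaging
    over a few independent uniform shifts Delta gives an unbiased
    estimate, and the sample std over shifts is a practical error
    indicator."""
    z = np.asarray(z, dtype=float)
    i = np.arange(n)[:, None]
    base = (i * z) / n % 1.0
    ests = []
    for _ in range(shifts):
        pts = (base + rng.uniform(size=z.size)) % 1.0
        ests.append(f(pts).mean())
    return np.mean(ests), np.std(ests) / np.sqrt(shifts)

# Fibonacci generating vector in d = 2: n = 610, z = (1, 377)
f = lambda x: np.exp(x.sum(axis=1))
est, err = shifted_lattice(f, z=(1, 377), n=610)
print(est, (np.e - 1) ** 2)  # estimate vs exact integral of exp(x+y)
```

In the paper's setting the generating vector is constructed for a high-dimensional weighted space; the Fibonacci choice here is simply a classical two-dimensional lattice used for illustration.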

We present a general and simple procedure to construct quasi-interpolants in hierarchical spaces. Such spaces are composed of a hierarchy of nested spaces and provide a flexible framework for local refinement. The proposed hierarchical quasi-interpolants are described in terms of the so-called truncated hierarchical basis. Assuming a quasi-interpolant is selected for each space associated with a particular level in the hierarchy, the hierarchical quasi-interpolants are obtained without any additional manipulation. The main properties (like polynomial reproduction) of the quasi-interpolants selected at each level are locally preserved in the hierarchical construction. We show how to construct hierarchical local projectors, and the local approximation order of the underlying hierarchical space is also investigated. The presentation is detailed for the truncated hierarchical B-spline basis, and we discuss its extension to a more general framework.

The inverse problem of electrical impedance tomography is severely ill-posed: in practice, only limited information about the conductivity can be recovered from boundary measurements of electric current and voltage. Recently it was shown that a simple monotonicity property of the related Neumann-to-Dirichlet map can be used to characterize shapes of inhomogeneities in a known background conductivity. In this paper we formulate a monotonicity-based shape reconstruction scheme that applies to approximative measurement models, and regularizes against noise and modelling error. We demonstrate that for admissible choices of regularization parameters the inhomogeneities are detected, and under reasonable assumptions, asymptotically exactly characterized. Moreover, we rigorously associate this result with the complete electrode model, and describe how a computationally cheap monotonicity-based reconstruction algorithm can be implemented. Numerical reconstructions from both simulated and real-life measurement data are presented.

For the augmented system of linear equations, Golub, Wu and Yuan recently studied an SOR-like method (BIT 41 (2001) 71–85). By further accelerating it with another parameter, in this paper we present a generalized SOR (GSOR) method for the augmented linear system. We prove its convergence under suitable restrictions on the iteration parameters, and determine its optimal iteration parameters and the corresponding optimal convergence factor. Theoretical analysis shows that the GSOR method has a faster asymptotic convergence rate than the SOR-like method, and numerical results show that it is more effective when applied to solve the augmented linear system. The GSOR method is further generalized to a framework of relaxed splitting iterative methods for solving both symmetric and nonsymmetric augmented linear systems, using the techniques of vector extrapolation, matrix relaxation and inexact iteration. In addition, we present a complete convergence theory for the SOR-like method.
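The flavor of a two-parameter splitting iteration for the augmented system can be sketched as follows. This is a simplified GSOR-style iteration with preconditioner $Q = I$; the parameter choices below are ad hoc numerical guesses, whereas the paper derives the optimal parameters and convergence factor:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 50, 10
A = np.eye(n) * 2.0                      # SPD (1,1) block
B = rng.standard_normal((n, m))          # full column rank
x_true, y_true = rng.standard_normal(n), rng.standard_normal(m)
b, q = A @ x_true + B @ y_true, B.T @ x_true

# GSOR-style two-parameter splitting iteration for the augmented system
# [[A, B], [B^T, 0]] [x; y] = [b; q], here with preconditioner Q = I.
Ainv = np.linalg.inv(A)
S = B.T @ Ainv @ B                       # Schur complement
omega, tau = 1.0, 1.0 / np.linalg.eigvalsh(S).max()

x, y = np.zeros(n), np.zeros(m)
for _ in range(500):
    x = (1 - omega) * x + omega * Ainv @ (b - B @ y)   # relaxed x-sweep
    y = y + tau * (B.T @ x - q)                        # relaxed y-sweep

res = np.linalg.norm(A @ x + B @ y - b) + np.linalg.norm(B.T @ x - q)
print(res)  # small: the splitting iteration has converged
```

With $\omega = 1$ and $Q = I$ the scheme reduces to a classical Uzawa-type iteration; the added relaxation parameters are what the paper exploits to accelerate convergence.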

We propose and analyze a novel multi-index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, in MIMC we use high-order mixed differences instead of the first-order differences of MLMC, which dramatically reduces the variance of the hierarchical differences. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis and which enlarge the domain of problem parameters for which we achieve the optimal convergence, $\mathcal{O}(\mathrm{TOL}^{-2})$. Moreover, in MIMC, the rate of increase of required memory with respect to $\mathrm{TOL}$ is independent of the number of directions up to a logarithmic term, which allows far more accurate solutions to be computed in higher dimensions than is possible with MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits, which in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of total degree type. In some cases, using optimal index sets, MIMC achieves a better rate of computational complexity than the corresponding rate using full tensor index sets.
We also show the asymptotic normality of the statistical error in the resulting MIMC estimator, thereby justifying our error estimate and allowing both the required accuracy and the confidence level of the computational results to be prescribed. Finally, we include numerical experiments involving a partial differential equation posed in three spatial dimensions with random coefficients, to substantiate the analysis and illustrate the corresponding computational savings of MIMC.
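The deterministic combination identity underlying MIMC's mixed differences can be checked directly: summing first-order mixed differences of level approximations over a full tensor index set telescopes to the finest approximation. The sketch below uses a toy two-dimensional quadrature hierarchy; in MIMC each difference would additionally be sampled by Monte Carlo:

```python
import numpy as np

def approx(i, j):
    """Toy level-(i, j) approximation: midpoint rule for the integral of
    exp(x + y) over [0,1]^2 with 2**i x 2**j points."""
    nx, ny = 2 ** i, 2 ** j
    x = (np.arange(nx) + 0.5) / nx
    y = (np.arange(ny) + 0.5) / ny
    return np.exp(x[:, None] + y[None, :]).mean()

def mixed_diff(i, j):
    """First-order mixed difference used by MIMC (levels < 0 give 0)."""
    g = lambda a, b: approx(a, b) if a >= 0 and b >= 0 else 0.0
    return g(i, j) - g(i - 1, j) - g(i, j - 1) + g(i - 1, j - 1)

# Summing the mixed differences over the full tensor index set
# {0..L} x {0..L} telescopes exactly to the finest approximation --
# the deterministic combination identity behind MIMC.
L = 5
total = sum(mixed_diff(i, j) for i in range(L + 1) for j in range(L + 1))
print(total, approx(L, L))  # the two values agree up to roundoff
```

The complexity gain comes from the fact that most mixed differences are tiny (their size decays in both directions at once), so that in the stochastic version they need very few samples.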