V. L. Rvachev called R-functions ‘logically charged functions’ because they encode complete logical information within the standard setting of real analysis. He invented them in the 1960s as a means for unifying logic, geometry, and analysis within a common computational framework – in an effort to develop a new computationally effective language for modelling and solving boundary value problems. Over the last forty years, R-functions have been accepted as a valuable tool in computer graphics, geometric modelling, computational physics, and in many areas of engineering design, analysis, and optimization. Yet, many elements of the theory of R-functions continue to be rediscovered in different application areas and special situations. The purpose of this survey is to expose the key ideas and concepts behind the theory of R-functions, to explain the utility of R-functions in a broad range of applications, and to discuss selected algorithmic issues arising in connection with their use.
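To make the idea of a ‘logically charged function’ concrete, here is a minimal sketch (our own illustration, not taken from the survey) of the simplest R-function system, usually denoted R_0: the functions below are real-valued, yet the sign of their output equals the logical AND/OR of the signs of their inputs, which is exactly what makes them useful for building implicit descriptions of geometry.

```python
import math

# R_0 system: r_and/r_or are real-valued, but the SIGN of the result
# equals the logical AND/OR of the signs of the arguments.
def r_and(f, g):
    return f + g - math.sqrt(f * f + g * g)   # > 0 iff f > 0 and g > 0

def r_or(f, g):
    return f + g + math.sqrt(f * f + g * g)   # > 0 iff f > 0 or g > 0

# Implicit geometry: the open square (-1,1) x (-1,1) as the R-conjunction
# of two slabs, each encoded by a sign-carrying inequality.
def inside_square(x, y):
    return r_and(1.0 - x * x, 1.0 - y * y)

print(inside_square(0.0, 0.0) > 0)   # True: the origin lies inside
print(inside_square(2.0, 0.0) > 0)   # False: outside the square
```

Note that the composite function is not merely a predicate: it is a differentiable (away from its zero set) real function whose zero level set is the boundary of the region, which is what allows such constructions to feed into analysis and boundary value problems.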

We are concerned here with processing discontinuous functions from their spectral information. We focus on two main aspects of processing such piecewise smooth data: detecting the edges of a piecewise smooth f, namely the location and amplitudes of its discontinuities; and recovering with high accuracy the underlying function in between those edges. If f is a smooth function, say analytic, then classical Fourier projections recover f with exponential accuracy. However, if f contains one or more discontinuities, its global Fourier projections produce spurious Gibbs oscillations which spread throughout the smooth regions, causing a local loss of resolution and a global loss of accuracy. Our aim in the computation of the Gibbs phenomenon is to detect edges and to reconstruct piecewise smooth functions, while regaining the high accuracy encoded in the spectral data.

To detect edges, we utilize a general family of edge detectors based on concentration kernels. Each kernel forms an approximate derivative of the delta function, which detects edges by separation of scales. We show how such kernels can be adapted to detect edges in one- and two-dimensional discrete data, in noisy data, and with incomplete spectral information. The main feature is the use of concentration kernels, which enable us to convert global spectral moments into local information in physical space.

To reconstruct f with high accuracy we discuss novel families of mollifiers and filters. The main feature here is adapting these mollifiers and filters to the local region of smoothness, while increasing their accuracy together with the dimension of the data. These mollifiers and filters form approximate delta functions which are properly parametrized to recover f with (root-) exponential accuracy.
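A hedged sketch of the simplest such edge detector (our own normalization and parameter choices, not the survey's): the conjugate partial sum with the linear concentration factor sigma(xi) = xi, applied to Fourier coefficients of a square wave computed from discrete samples. Away from the jumps the detector decays to zero; at each jump it concentrates on the jump value.

```python
import numpy as np

M = 512                           # number of samples of the data
N = 64                            # number of Fourier modes used by the detector
x = 2 * np.pi * np.arange(M) / M

# Square wave on [0, 2*pi): jumps of +2 at x = 0 and -2 at x = pi.
f = np.where(x < np.pi, 1.0, -1.0)
fhat = np.fft.fft(f) / M          # fhat[k] ~ coefficient of e^{ikx}; index
                                  # M - k holds the frequency -k coefficient

# Conjugate partial sum with concentration factor sigma(xi) = xi,
# scaled by pi so its value at an edge approximates the jump [f](c).
K = np.zeros(M, dtype=complex)
for k in range(1, N + 1):
    sigma = k / N
    K += 1j * sigma * (fhat[k] * np.exp(1j * k * x)
                       - fhat[M - k] * np.exp(-1j * k * x))
jump = np.pi * K.real

print(jump[M // 2])        # near -2: the jump at x = pi
print(jump[3 * M // 4])    # near 0: no edge at x = 3*pi/2
```

The separation of scales is visible directly: the detector is O(1) at the edges and small elsewhere, so thresholding `jump` localizes the discontinuities and estimates their amplitudes from purely global spectral data.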

Molecular dynamics is discussed from a mathematical perspective. The recent history of method development is briefly surveyed with an emphasis on the use of geometric integration as a guiding principle. The recovery of statistical mechanical averages from molecular dynamics is then introduced, and the use of backward error analysis as a technique for analysing the accuracy of numerical averages is described. This article gives the first rigorous estimates for the error in statistical averages computed from molecular dynamics simulation based on backward error analysis. It is shown that molecular dynamics introduces an appreciable bias at stepsizes which are below the stability threshold. Simulations performed in such a regime can be corrected by use of a stepsize-dependent reweighting factor. Numerical experiments illustrate the efficacy of this approach. In the final section, several open problems in dynamics-based molecular sampling are considered.
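As a minimal illustration of the geometric-integration viewpoint (a generic sketch on a toy problem, not code from the article), the velocity Verlet method is symplectic, and backward error analysis explains its behaviour: it approximately conserves a nearby ‘shadow’ Hamiltonian, so its energy error oscillates but exhibits no secular drift even over very long runs.

```python
import numpy as np

def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Stormer-Verlet / velocity Verlet: the workhorse symplectic
    integrator of molecular dynamics."""
    traj = []
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f            # half kick
        q = q + dt * p / mass           # drift
        f = force(q)
        p = p + 0.5 * dt * f            # half kick
        traj.append((q, p))
    return np.array(traj)

# Harmonic oscillator test problem. Backward error analysis says the
# numerical trajectory nearly conserves a modified (shadow) Hamiltonian,
# so the true energy error stays bounded instead of drifting.
traj = velocity_verlet(1.0, 0.0, lambda q: -q, dt=0.1, n_steps=10000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
print(energy.max() - energy.min())      # small bounded oscillation
```

The stepsize-dependent bias in statistical averages discussed above arises precisely because such a simulation samples the shadow system rather than the target one; the reweighting factor corrects for that discrepancy.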

This article demonstrates how numerical methods for atmospheric models can be validated by showing that they give the theoretically predicted rate of convergence to relevant asymptotic limit solutions. This procedure is necessary because the exact solution of the Navier–Stokes equations cannot be resolved by production models. The limit solutions chosen are those most important for weather and climate prediction. While the best numerical algorithms for this purpose largely reflect current practice, some important limit solutions cannot be captured by existing methods. The use of Lagrangian rather than Eulerian averaging may be required in these cases.

We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By ‘accurate’ we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results will depend strongly on the model of arithmetic: most of our results will use the so-called traditional model (TM), where the computed result of op(a, b), a binary operation like a+b, is given by op(a, b) * (1+δ) where all we know is that |δ| ≤ ε ≪ 1. Here ε is a constant also known as machine epsilon.

We will see a common reason for the following disparate problems to permit accurate and efficient algorithms using only the four basic arithmetic operations: finding the eigenvalues of a suitably discretized scalar elliptic PDE, finding eigenvalues of arbitrary products, inverses, or Schur complements of totally non-negative matrices (such as Cauchy and Vandermonde), and evaluating the Motzkin polynomial. Furthermore, in all these cases the high accuracy is ‘deserved’, i.e., the answer is determined much more accurately by the data than the conventional condition number would suggest.

In contrast, we will see that evaluating even the simple polynomial x + y + z accurately is impossible in the TM, using only the basic arithmetic operations.
We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high-accuracy algorithm or a proof that none exists.

When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x + y + z, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and decision procedure for finding them extend to this case.

Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
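One standard way such an accurate x + y + z primitive can be supplied in ordinary floating point, sketched here as an illustration rather than as the article's own construction, is Knuth's error-free transformation: the rounding error of a single addition is itself exactly representable and can be recovered with basic operations, then folded back into the result.

```python
def two_sum(a, b):
    # Knuth's branch-free error-free transformation: s + e equals a + b
    # exactly (s is the rounded sum, e the rounding error), barring overflow.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def sum3(x, y, z):
    # Accurate x + y + z: carry the exact rounding errors along and
    # add them back at the end.
    s, e1 = two_sum(x, y)
    s, e2 = two_sum(s, z)
    return s + (e1 + e2)

print((1e16 + 1.0) - 1e16)       # 0.0: the 1.0 is lost in double precision
print(sum3(1e16, 1.0, -1e16))    # 1.0: recovered via the rounding errors
```

Note this does not contradict the impossibility result above: `two_sum` exploits the fact that in IEEE arithmetic the error of an addition is exactly representable, a property stronger than what the traditional model guarantees.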

Finite volume methods apply directly to the conservation law form of a differential equation system, and they commonly yield cell-average approximations to the unknowns rather than point values. The discrete equations that they generate on a regular mesh look rather like finite difference equations, but they are really much closer to finite element methods, sharing with them a natural formulation on unstructured meshes. The typical projection onto a piecewise constant trial space leads naturally into the theory of optimal recovery to achieve higher than first-order accuracy. They have dominated aerodynamics computation for over forty years, but they have never before been the subject of an Acta Numerica article. We shall therefore survey their early formulations before describing powerful developments in both their theory and practice that have taken place in the last few years.
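A minimal sketch of the basic pattern (our own illustration, with assumed parameters): a first-order upwind finite volume scheme for linear advection on a periodic mesh. The unknowns are cell averages, and each update subtracts a difference of interface fluxes, so the scheme is conservative by construction.

```python
import numpy as np

# First-order upwind finite volumes for u_t + a*u_x = 0 (a > 0)
# on the periodic unit interval.
a, n, cfl = 1.0, 200, 0.9
dx = 1.0 / n
dt = cfl * dx / a
x = (np.arange(n) + 0.5) * dx            # cell centres
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial data (pointwise proxy
u0 = u.copy()                            # for the cell averages)

for _ in range(int(round(0.5 / dt))):
    # conservative update: u_i -= (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
    # with upwind interface flux F_{i+1/2} = a * u_i for a > 0;
    # np.roll(u, 1) supplies the left-neighbour flux F_{i-1/2}.
    u = u - (dt / dx) * (a * u - a * np.roll(u, 1))

# Total mass is conserved exactly (up to roundoff); the profile is
# advected to x ~ 0.8, smeared by the first-order numerical diffusion.
print(u.sum() * dx, u0.sum() * dx)
```

Replacing the piecewise constant cell values at the interfaces by recovered higher-order reconstructions is exactly where the theory of optimal recovery mentioned above enters.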

This paper describes methods that are important for the numerical evaluation of certain functions that frequently occur in applied mathematics, physics and mathematical statistics. This includes what we consider to be the basic methods, such as recurrence relations, series expansions (both convergent and asymptotic), and numerical quadrature. Several other methods are available and some of these will be discussed in less detail. Examples will be given of the use of special functions in certain problems from mathematical physics and mathematical statistics (integrals and series with special functions).
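As an illustration of the recurrence-relation approach (a generic textbook sketch, not code from the paper), consider the modified Bessel functions I_n(x): their three-term recurrence is unstable when run upwards, so one recurs downwards from an arbitrary trial order and normalizes afterwards (Miller's algorithm).

```python
import math

def bessel_i(n, x, m=40):
    """Modified Bessel function I_n(x) by Miller's backward recurrence.

    Forward recurrence is unstable for I_n, so we recur downwards from
    a trial order m >> n with arbitrary starting values, then fix the
    common scale with the identity e^x = I_0(x) + 2*sum_{k>=1} I_k(x).
    """
    i_hi, i_lo = 0.0, 1e-30        # scaled trial values for I_{m+1}, I_m
    norm, result = 0.0, 0.0
    for k in range(m, -1, -1):     # invariant: i_lo holds the scaled I_k
        if k == n:
            result = i_lo
        norm += i_lo if k == 0 else 2.0 * i_lo
        # three-term recurrence run downwards:
        # I_{k-1} = I_{k+1} + (2k/x) * I_k
        i_hi, i_lo = i_lo, i_hi + (2.0 * k / x) * i_lo
    return result * math.exp(x) / norm

print(bessel_i(0, 1.0))   # ~1.2660658778
print(bessel_i(1, 1.0))   # ~0.5651591040
```

The downward direction is stable because the wanted solution I_n dominates the unwanted one in that direction; the same principle governs many other recurrences for special functions.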