Compressed sensing is a new area of signal processing. Its goal is to minimize the number of samples that need to be taken from a signal for faithful reconstruction. The performance of compressed sensing on signal classes is directly related to Gelfand widths. Similar to the deeper constructions of optimal subspaces in Gelfand widths, most sampling algorithms are based on randomization. However, for possible circuit implementation, it is important to understand what can be done with purely deterministic sampling. In this note, we show how to construct sampling matrices using finite fields. One such construction gives cyclic matrices which are interesting for circuit implementation. While the guaranteed performance of these deterministic constructions is not comparable to the random constructions, these matrices have the best known performance for purely deterministic constructions.
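To make the cyclic idea concrete, here is a small hedged sketch in pure Python. The seed row, matrix sizes, and function names are illustrative choices of ours, not the paper's finite-field construction: a circulant 0/1 sensing matrix whose rows are cyclic shifts of a single seed row, used to measure a 1-sparse signal and recover it by a single matching-pursuit step.

```python
# Hedged sketch: a deterministic circulant 0/1 sensing matrix (rows are
# cyclic shifts of one seed row) and one-step matching-pursuit recovery
# of a 1-sparse signal. Seed and sizes are arbitrary illustrative choices.

def circulant_rows(seed, m):
    """First m cyclic shifts of the seed row."""
    n = len(seed)
    return [[seed[(j - i) % n] for j in range(n)] for i in range(m)]

def measure(A, x):
    """Compressed measurements y = A x."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def recover_1sparse(A, y):
    """Pick the column best correlated with y; solve for its coefficient."""
    n = len(A[0])
    cols = [[row[j] for row in A] for j in range(n)]
    scores = [abs(sum(c * v for c, v in zip(col, y))) for col in cols]
    j = scores.index(max(scores))
    norm2 = sum(c * c for c in cols[j])
    coef = sum(c * v for c, v in zip(cols[j], y)) / norm2
    return j, coef

seed = [1, 0, 1, 1, 0, 0, 0, 1]     # arbitrary seed row, signal length n = 8
A = circulant_rows(seed, 6)          # m = 6 measurements
x = [0.0] * 8
x[5] = 2.5                           # 1-sparse signal
j, coef = recover_1sparse(A, measure(A, x))
```

A circuit only needs to store the seed row; every other row is a shift of it, which is what makes cyclic matrices attractive for hardware.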
Let T be an iteration function in a complete metric space X. In this paper we present some new general complete convergence theorems for the Picard iteration x(n+1) = T x(n) with order of convergence at least r ≥ 1. Each of these theorems contains a priori and a posteriori error estimates as well as some other estimates. A central role in the new theory is played by the notions of a function of initial conditions of T and a convergence function of T. We study the convergence of the Picard iteration associated to T with respect to a function of initial conditions E. The initial conditions in our convergence results utilize only information at the starting point x(0). More precisely, the initial conditions are given in the form E(x(0)) ∈ J, where J is an interval on R+ containing 0. The new convergence theory is applied to the Newton iteration in Banach spaces. We establish three complete ω-versions of the famous semilocal Newton–Kantorovich theorem as well as a complete version of the famous semilocal α-theorem of Smale for analytic functions.
The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This review aims to summarize not just quantum computing, but the whole subject of quantum information theory. Information can be identified as the most general thing which must propagate from a cause to an effect. It therefore has a fundamentally important role in the science of physics. However, the mathematical treatment of information, especially information processing, is quite recent, dating from the mid-20th century. This has meant that the full significance of information as a basic concept in physics is only now being discovered. This is especially true in quantum mechanics. The theory of quantum information and computing puts this significance on a firm footing, and has led to some profound and exciting new insights into the natural world. Among these are the use of quantum states to permit the secure transmission of classical information (quantum cryptography), the use of quantum entanglement to permit reliable transmission of quantum states (teleportation), the possibility of preserving quantum coherence in the presence of irreversible noise processes (quantum error correction), and the use of controlled quantum evolution for efficient computation (quantum computation). The common theme of all these insights is the use of quantum entanglement as a computational resource. It turns out that information theory and quantum mechanics fit together very well. In order to explain their relationship, this review begins with an introduction to classical information theory and computer science, including Shannon's theorem, error correcting codes, Turing machines and computational complexity. The principles of quantum mechanics are then outlined, and the Einstein, Podolsky and Rosen (EPR) experiment described. 
The EPR-Bell correlations, and quantum entanglement in general, form the essential new ingredient which distinguishes quantum from classical information theory and, arguably, quantum from classical physics. Basic quantum information ideas are next outlined, including qubits and data compression, quantum gates, the 'no cloning' property and teleportation. Quantum cryptography is briefly sketched. The universal quantum computer (QC) is described, based on the Church-Turing principle and a network model of computation. Algorithms for such a computer are discussed, especially those for finding the period of a function, and searching a random list. Such algorithms prove that a QC of sufficiently precise construction is not only fundamentally different from any computer which can only manipulate classical information, but can compute a small class of functions with greater efficiency. This implies that some important computational tasks are impossible for any device apart from a QC. To build a universal QC is well beyond the abilities of current technology. However, the principles of quantum information physics can be tested on smaller devices. The current experimental situation is reviewed, with emphasis on the linear ion trap, high-Q optical cavities, and nuclear magnetic resonance methods. These allow coherent control in a Hilbert space of eight dimensions (three qubits) and should be extendable up to a thousand or more dimensions (10 qubits). Among other things, these systems will allow the feasibility of quantum computing to be assessed. In fact such experiments are so difficult that it seemed likely until recently that a practically useful QC (requiring, say, 1000 qubits) was actually ruled out by considerations of experimental imprecision and the unavoidable coupling between any system and its environment. However, a further fundamental part of quantum information physics provides a solution to this impasse. This is quantum error correction (QEC). 
An introduction to QEC is provided. The evolution of the QC is restricted to a carefully chosen subspace of its Hilbert space. Errors are almost certain to cause a departure from this subspace. QEC provides a means to detect and undo such departures without upsetting the quantum computation. This achieves the apparently impossible, since the computation preserves quantum coherence even though during its course all the qubits in the computer will have relaxed spontaneously many times. The review concludes with an outline of the main features of quantum information physics and avenues for future research.
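As a toy illustration of the QEC idea just outlined, restricting the state to a code subspace, detecting departures via syndrome measurement, and undoing them, here is a hedged pure-Python sketch of the three-qubit bit-flip code. It is a classical state-vector simulation with names of our choosing, not a general QEC implementation.

```python
# Hedged sketch: three-qubit bit-flip repetition code, simulated as an
# 8-dimensional complex state vector. Handles a single X (bit-flip) error.

def apply_x(state, q):
    """Pauli X (bit flip) on qubit q of a 3-qubit state vector."""
    out = [0j] * 8
    for b in range(8):
        out[b ^ (1 << q)] = state[b]
    return out

def encode(alpha, beta):
    """Code subspace: a|0> + b|1>  ->  a|000> + b|111>."""
    state = [0j] * 8
    state[0b000] = alpha
    state[0b111] = beta
    return state

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). After a single X error
    both branches of the superposition share the same parities, so this
    measurement is deterministic and does not disturb the state."""
    for b in range(8):
        if abs(state[b]) > 1e-12:
            s01 = (b & 1) ^ ((b >> 1) & 1)
            s12 = ((b >> 1) & 1) ^ ((b >> 2) & 1)
            return s01, s12

def correct(state):
    """Look up which qubit (if any) the syndrome implicates and undo it."""
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    q = table[syndrome(state)]
    return state if q is None else apply_x(state, q)

psi = encode(0.6, 0.8)             # logical qubit 0.6|0> + 0.8|1>
fixed = correct(apply_x(psi, 1))   # flip qubit 1, then detect and undo
```

The key point mirrors the text: the correction is chosen from the syndrome alone, so the superposition (and hence the quantum information) is never measured directly.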
We give a new algorithm for the multiplication of n-bit integers in the bit complexity model, which is asymptotically faster than all previously known algorithms. More precisely, we prove that two n-bit integers can be multiplied in time O(n log n · K^(log* n)), where K = 8 and log* n denotes the iterated logarithm. Assuming standard conjectures about the distribution of Mersenne primes, we give yet another algorithm that achieves K = 4. The fastest previously known algorithm was due to Fürer, who proved the existence of a complexity bound of the above form for some finite K. We show that an optimised variant of Fürer's algorithm achieves only K = 16, suggesting that our new algorithm is faster than Fürer's by a factor of 2^(log* n).
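To give a feel for how slowly the factors in these bounds grow, here is a short hedged Python sketch (the helper name is ours) computing the iterated logarithm and the resulting speedup factor 2^(log* n) between a K = 16 bound and a K = 8 bound:

```python
import math

def log_star(n):
    """Iterated logarithm: how many times log2 must be applied to n
    before the value drops to 1 or below."""
    count = 0
    x = float(n)
    while x > 1.0:
        x = math.log2(x)
        count += 1
    return count

# The complexity bounds differ only in the constant K raised to the
# power log* n, so comparing K = 16 with K = 8 gives a speedup factor
# of (16 / 8) ** log_star(n) = 2 ** log_star(n).
speedup = 2 ** log_star(2**64)
```

Even for astronomically large n the iterated logarithm stays tiny (log* of 2^64 is 5), which is why the constant K, rather than the shape of the bound, is the focus of the comparison.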
Membrane systems, also called P systems, are biologically inspired theoretical models of distributed and parallel computing. This paper presents a new class of tissue-like P systems with cell separation, a feature that allows the generation of new workspace. We study the efficiency of this class of P systems and show that only tractable problems can be solved efficiently using cell separation and communication rules of length at most 1. We then present an efficient (uniform) solution to SAT using cell separation and communication rules of length at most 6. We conclude that a borderline between efficiency and non-efficiency exists in terms of the length of communication rules (assuming P ≠ NP). We discuss future research topics and open problems.
We study the approximation of expectations for solutions of SDEs and functionals by means of restricted Monte Carlo algorithms that may only use random bits instead of random numbers. We consider the worst case setting for functionals from the Lipschitz class w.r.t. the supremum norm. We construct a random bit multilevel Euler algorithm and establish upper bounds for its error and cost. Furthermore, we derive matching lower bounds, up to a logarithmic factor, that are valid for all random bit Monte Carlo algorithms, and we show that, for the given quadrature problem, random bit Monte Carlo algorithms are at least almost as powerful as general randomized algorithms.
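As a hedged, single-level illustration of the random-bit idea (the paper's algorithm is a multilevel Euler method for general SDEs; the function names and the toy target below are our choices): each Gaussian increment of a Euler scheme for Brownian motion is replaced by a scaled coin flip generated from a single random bit.

```python
import math
import random

def random_bit_euler_paths(rng, n_steps, n_samples):
    """Weak Euler scheme for X = W (Brownian motion) on [0, 1], where each
    Gaussian increment is replaced by +-sqrt(h) driven by one random bit."""
    h = 1.0 / n_steps
    step = math.sqrt(h)
    endpoints = []
    for _ in range(n_samples):
        x = 0.0
        for _ in range(n_steps):
            # one fair random bit per increment, instead of a random number
            x += step if rng.getrandbits(1) else -step
        endpoints.append(x)
    return endpoints

rng = random.Random(42)
samples = random_bit_euler_paths(rng, n_steps=8, n_samples=20000)
estimate = sum(x * x for x in samples) / len(samples)  # approximates E[W_1^2] = 1
```

The coin-flip increments match the first two moments of the Gaussian ones, which is exactly what a weak (expectation-oriented) approximation needs; the multilevel construction in the paper then balances bias and variance across step sizes.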
For an isotropic convex body we consider the isotropic constant of the symmetric random polytope generated by independent random points which are distributed according to the cone probability measure on the boundary of . We show that with overwhelming probability , where is an absolute constant. If is unconditional we argue that even with overwhelming probability and thereby verify the hyperplane conjecture for this model. The proofs are based on concentration inequalities for sums of sub-exponential or sub-Gaussian random variables, respectively, and, in the unconditional case, on a new -estimate for linear functionals with respect to the cone measure in the spirit of Bobkov and Nazarov, which might be of independent interest.
M. B. Levin used Sobol–Faure low discrepancy sequences with Pascal triangle matrices modulo 2 to construct a real number x such that the first N terms of the sequence (2^n x mod 1) have discrepancy O((log N)^2 / N). This is the lowest discrepancy known for this kind of sequence. In this note we characterize Levin's construction in terms of nested perfect necklaces, which are a variant of the classical de Bruijn sequences. Moreover, we show that every real number whose binary expansion is the concatenation of nested perfect necklaces of exponentially increasing order satisfies that the first N terms of the corresponding sequence have discrepancy O((log N)^2 / N). For orders that are a power of 2, we give the exact number of nested perfect necklaces and an explicit method based on matrices to construct each of them. The computation of the nth digit of the binary expansion of a real number built from nested perfect necklaces requires O(log n) elementary mathematical operations.
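For context on the combinatorial object being varied here, the following hedged sketch generates a classical binary de Bruijn sequence with the standard Lyndon-word (FKM) construction and checks its defining property: every length-n word appears exactly once as a cyclic window. (This is the classical object only; nested perfect necklaces themselves are the paper's refinement.)

```python
def de_bruijn(k, n):
    """k-ary de Bruijn sequence of order n via the standard
    Lyndon-word (FKM) construction."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

s = de_bruijn(2, 3)  # order-3 binary de Bruijn sequence, length 2^3 = 8
# every 3-bit word occurs exactly once among the cyclic windows
windows = {tuple(s[(i + j) % len(s)] for j in range(3)) for i in range(len(s))}
```

The "once per window" property is what makes such sequences maximally balanced locally, and it is this kind of local balance that the necklace-based characterization exploits for discrepancy bounds.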
Many authors have studied exponentially-convergent tractability in the worst case setting. However, almost all articles related to tractability focus on classes of multivariate functions defined over unit cubes. Note that the simplex is also important in geometry and has many practical applications. In this paper we consider tractability of approximation problems for function spaces with exponential weights defined over products of simplices, and we obtain necessary and sufficient conditions for several kinds of tractability, including EC-tractability.
We study EC- -weak tractability of multivariate linear problems in the average case setting. This paper extends earlier work in the worst case setting. The parameters and allow us to study the information complexity of a -variate problem with respect to different powers of , corresponding to the bits of accuracy, and . We consider the absolute and normalized error criteria. In particular, a multivariate problem is EC- -weakly tractable iff . We deal with general linear problems and linear tensor product problems. We show necessary and sufficient conditions for EC- -weak tractability. In the case of general linear problems these conditions are matching. For linear tensor product problems, we also show matching conditions, with the exception of some cases where , in general.
This paper is devoted to discussing multivariate approximation problems with analytic Korobov kernels in the worst and average case settings. We consider algorithms that use finitely many evaluations of arbitrary continuous linear functionals. We investigate exponential convergence- -weak tractability (EC- -WT) under the absolute or normalized error criterion. We completely solve the problem by filling the remaining gaps left open on EC- -WT. That is, we obtain necessary and sufficient conditions for EC- -WT for and in the worst case setting and for except in the average case setting.
Nowadays, asymptotically fast algorithms are widely used in computer algebra for computations in towers of algebraic field extensions of small height. Yet it is still unknown how to reach softly linear time for products and inversions in towers of arbitrary height. In this paper we design the first algorithm for general ground fields with a complexity exponent that can be made arbitrarily close to one from the asymptotic point of view. We deduce new faster algorithms for changes of tower representations, including the computation of primitive element representations in subquadratic time.
In this paper we study the worst-case error of numerical integration on the unit sphere , , for certain spaces of continuous functions on . For the classical Sobolev spaces ( ) upper and lower bounds for the worst case integration error have been obtained in Brauchart and Hesse (2007), Hesse (2006) and Hesse and Sloan (2005,2006). We investigate the behaviour for by introducing spaces , , with an extra logarithmic weight. For these spaces we obtain similar upper and lower bounds for the worst case integration error.
We study approximation properties of additive random fields , , which are sums of uncorrelated zero-mean random processes with continuous covariance functions. The average case approximation complexity is defined as the minimal number of evaluations of arbitrary linear functionals needed to approximate , with relative 2-average error not exceeding a given threshold . We investigate the growth of for arbitrary fixed and . The results are applied to the sums of the Wiener processes with different variance parameters.
In this work, we show that uniform integrability is not a necessary condition for central limit theorems (CLT) to hold for normalized multilevel Monte Carlo (MLMC) estimators, and we provide nearly optimal weaker conditions under which the CLT is achieved. In particular, if the variance decay rate dominates the computational cost rate, we prove that the CLT applies to the standard (variance-minimizing) MLMC estimator. For other settings where the CLT may not apply to the standard MLMC estimator, we propose an alternative estimator, called the mass-shifted MLMC estimator, to which the CLT always applies. This comes at a small efficiency loss: the computational cost of achieving mean square approximation error is at worst a factor higher with the mass-shifted estimator than with the standard one.
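For readers less familiar with the estimator the CLT is about, here is a hedged sketch of a standard MLMC estimator (toy problem, names, and parameters are our illustrative choices, not the paper's setting): a telescoping sum over discretization levels, with the coarse path at each level driven by pairwise sums of the fine Brownian increments.

```python
import math
import random

def euler_ou(incs, h):
    """Euler scheme for the OU process dX = -X dt + dW, X_0 = 0."""
    x = 0.0
    for dw in incs:
        x += -x * h + dw
    return x

def mlmc_estimate(rng, max_level, samples_per_level):
    """Standard MLMC estimator of E[X_1^2] for the OU process above.
    The level-l correction couples a fine path (2^l steps) with a coarse
    path (2^(l-1) steps) built from the same Brownian increments."""
    f = lambda x: x * x
    total = 0.0
    for level in range(max_level + 1):
        n = 2 ** level
        h = 1.0 / n
        acc = 0.0
        for _ in range(samples_per_level):
            incs = [rng.gauss(0.0, math.sqrt(h)) for _ in range(n)]
            fine = f(euler_ou(incs, h))
            if level == 0:
                acc += fine
            else:
                coarse_incs = [incs[2 * i] + incs[2 * i + 1] for i in range(n // 2)]
                acc += fine - f(euler_ou(coarse_incs, 2 * h))
        total += acc / samples_per_level
    return total

rng = random.Random(7)
est = mlmc_estimate(rng, max_level=5, samples_per_level=10000)
# target: E[X_1^2] = (1 - e^{-2}) / 2, approximately 0.432
```

The normalized version of exactly this kind of telescoping sum is the quantity whose asymptotic normality the paper's CLT conditions concern.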
The irregularities of a distribution of points in the unit interval are often measured with various notions of discrepancy. The discrepancy function can be defined with respect to anchored intervals of the form [0, t) or with respect to arbitrary subintervals of the unit interval. In the former case, it is a well-known fact in discrepancy theory that the N-element point set in the unit interval with the lowest norm of the discrepancy function is the centered regular grid. We show a stronger result on the distribution of discrepancy functions of N-element point sets in the unit interval, which basically says that the distribution of the discrepancy function of the centered regular grid is in some sense minimal among all N-element point sets. As a consequence, we can extend the above result to rearrangement-invariant norms, including Orlicz and Lorentz norms. We study the same problem for the discrepancy notions with respect to arbitrary subintervals. In this case, we observe that we have to deal with integrals of convolutions of functions. To this end, we prove a general upper bound on such expressions, which might be of independent interest as well.
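As a concrete, hedged illustration of the anchored-interval case in one dimension (an elementary computation of our own, not the paper's rearrangement-invariant machinery): the star discrepancy of the centered regular grid can be computed exactly and compared with, say, the left-endpoint grid.

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy sup_t |#{x_i < t} - N*t| / N of a point set
    in [0, 1]. Between consecutive points the discrepancy function is
    linear in t, so the sup is attained at the points (from either side)."""
    xs = sorted(points)
    n = len(xs)
    best = 0.0
    for i, x in enumerate(xs):
        # left limit at x (count = i) and right limit (count = i + 1)
        best = max(best, abs(i - n * x) / n, abs(i + 1 - n * x) / n)
    return best

n = 8
centered = [(2 * i + 1) / (2 * n) for i in range(n)]  # centered regular grid
left = [i / n for i in range(n)]                      # left-endpoint grid
d_centered = star_discrepancy_1d(centered)            # equals 1 / (2n)
d_left = star_discrepancy_1d(left)                    # equals 1 / n
```

The factor-of-two gap between the two grids is the simplest instance of the optimality of the centered grid that the abstract generalizes to whole families of norms.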