This article is about a curious phenomenon. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis, since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities from images of faces.
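As a concrete illustration, the Principal Component Pursuit program can be solved with an augmented Lagrangian (ADMM-style) scheme that alternates singular value thresholding with entrywise soft-thresholding. The sketch below assumes a dense NumPy matrix and the weight λ = 1/√max(m, n) suggested by the analysis; the step-size heuristic for μ is one common choice, not the paper's prescribed algorithm.

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def pcp(M, max_iter=1000, tol=1e-7):
    """Principal Component Pursuit: split M into low-rank L and sparse S
    by minimizing ||L||_* + lam * ||S||_1 subject to L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))       # weight suggested by the theory
    mu = 0.25 * m * n / np.abs(M).sum()  # common step-size heuristic (an assumption)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                     # primal residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

In the regime the theory covers (incoherent low-rank part, sparse corruptions at random locations), this scheme recovers both components to high accuracy.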

Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for the analysis of the simple actions of a single person are presented first. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed. In addition, we discuss papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of recognition methodologies are also presented, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.

In this paper, the invariance properties of the time fractional (2+1)-dimensional Zakharov–Kuznetsov modified equal width (ZK-MEW) equation have been investigated using the Lie group analysis method. Lie point symmetries of the equation have been derived using the Lie group analysis of fractional differential equations. Using the Lie symmetry analysis, the vector fields and the symmetry reduction of this equation are obtained. It is shown that the time fractional (2+1)-dimensional ZK-MEW equation can be transformed into an equation with the Erdélyi–Kober fractional derivative. Finally, using the new conservation theorem with a formal Lagrangian, new conserved vectors are constructed with a detailed derivation, which constitutes the conservation analysis of the time fractional (2+1)-dimensional ZK-MEW equation.
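For reference, Lie group analysis of time fractional equations is usually carried out with respect to the Riemann–Liouville time-fractional derivative, which is the standard choice in this literature (the paper's exact conventions may differ):

```latex
\partial_t^{\alpha} u(t,x,y)
  = \frac{1}{\Gamma(n-\alpha)}\,
    \frac{\partial^{n}}{\partial t^{n}}
    \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, u(s,x,y)\,\mathrm{d}s,
  \qquad n-1 < \alpha < n .
```

Under the scaling symmetries found by the analysis, this operator reduces to the Erdélyi–Kober fractional derivative mentioned above.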

In many computer vision systems, the same object can be observed from varying viewpoints or even by different sensors, which creates the challenging demand of recognizing objects from distinct, even heterogeneous, views. In this work we propose a Multi-view Discriminant Analysis (MvDA) approach, which seeks a single discriminant common space for multiple views in a non-pairwise manner by jointly learning multiple view-specific linear transforms. Specifically, MvDA is formulated to jointly solve for the multiple linear transforms by optimizing a generalized Rayleigh quotient, i.e., maximizing the between-class variations and minimizing the within-class variations, both intra-view and inter-view, in the common space. By reformulating this problem as a ratio trace problem, the multiple linear transforms are obtained analytically and simultaneously through generalized eigenvalue decomposition. Furthermore, inspired by the observation that different views share similar data structures, a constraint is introduced to enforce the view-consistency of the multiple linear transforms. The proposed method is evaluated on three tasks: face recognition across pose, photo versus sketch face recognition, and visible light versus near infrared face recognition, on the Multi-PIE, CUFSF, and HFB databases, respectively. Extensive experiments show that MvDA achieves significant improvements over the best known results.
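The ratio trace step at the heart of this formulation reduces to a generalized eigenvalue problem. The sketch below shows that core computation for a single view (an LDA-style special case); the full MvDA builds the scatter matrices jointly across views and stacks view-specific transforms, which is omitted here. The function name and regularizer are ours for illustration.

```python
import numpy as np

def ratio_trace_directions(X, y, d, reg=1e-6):
    """Maximize tr((W^T Sw W)^{-1} W^T Sb W): build between-class (Sb)
    and within-class (Sw) scatter, then take the top-d generalized
    eigenvectors of the pencil (Sb, Sw)."""
    dim = X.shape[1]
    mu = X.mean(axis=0)
    Sb = np.zeros((dim, dim))
    Sw = np.zeros((dim, dim))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)  # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)               # within-class scatter
    Sw += reg * np.eye(dim)                         # regularize for stability
    # Generalized eigenproblem Sb w = lambda Sw w, solved as Sw^{-1} Sb.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:d]].real                 # top-d discriminant directions
```

MvDA solves the same kind of pencil, but with scatter matrices that pool samples of all views mapped into the common space, so the eigenvectors stack all view-specific transforms at once.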

This paper provides a review of stochastic Data Envelopment Analysis (DEA). We discuss extensions of deterministic DEA in three directions: (i) deviations from the deterministic frontier are modeled as stochastic variables, (ii) random noise in terms of measurement errors, sample noise, and specification errors is made an integral part of the model, and (iii) the frontier is stochastic, as is the underlying Production Possibility Set (PPS). Stochastic DEA utilizes non-parametric convex or conical hull reference technologies based upon axioms from production theory, accompanied by a statistical foundation in terms of axioms from statistics or distributional assumptions. The approaches allow for an estimation of stochastic inefficiency compared to a deterministic or a stochastic PPS and for statistical inference while maintaining an axiomatic foundation. Focus is on bridges and differences between approaches within the field of stochastic DEA, including semi-parametric Stochastic Frontier Analysis (SFA) and Chance Constrained DEA (CCDEA). We argue that statistical inference based upon homogeneous bootstrapping, in contrast to a management science approach, imposes a restrictive structure on inefficiency, which may not facilitate the communication of results to decision makers. Semi-parametric SFA and CCDEA differ with respect to the modeling of noise and stochastic inefficiency. In spite of their inherent differences, the two approaches are shown to be complements in the sense that the stochastic PPSs they produce share basic similarities in the case of one output and multiple inputs. Recent contributions related to (i) disentangling of random noise and random inefficiency and (ii) obtaining smooth shape-constrained estimators of the frontier are discussed.
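For readers unfamiliar with the deterministic starting point that these stochastic extensions build on, the classic input-oriented CCR envelopment model is a small linear program. A minimal sketch with SciPy; the function name and data layout are ours for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k.
    X: (m, n) input matrix, Y: (s, n) output matrix, columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, k],
                             Y @ lam >= Y[:, k],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [k]], X])           # X @ lam - theta * x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y_k
    A = np.vstack([A_in, A_out])
    b = np.concatenate([np.zeros(m), -Y[:, k]])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (1 + n),
                  method="highs")
    return res.x[0]                             # theta = 1 means efficient
```

Stochastic DEA replaces the hard inequalities above with, for example, chance constraints (CCDEA) or a composed-error structure on the deviations (semi-parametric SFA).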

Massive Online Analysis (MOA) is a software environment for implementing algorithms and running experiments for online learning from evolving data streams. MOA includes a collection of offline and online methods as well as tools for evaluation. In particular, it implements boosting, bagging, and Hoeffding Trees, all with and without Naive Bayes classifiers at the leaves. MOA supports bi-directional interaction with WEKA, the Waikato Environment for Knowledge Analysis, and is released under the GNU GPL license.
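Hoeffding Trees owe their name to the Hoeffding bound, which tells the learner how many stream examples suffice before a split decision is statistically safe. A minimal sketch of that bound follows; MOA's actual split logic (tie-breaking, grace periods) is more involved.

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound: with probability at least 1 - delta, the true mean
    of a random variable with the given range lies within epsilon of the
    empirical mean of n i.i.d. observations. A Hoeffding Tree splits a
    leaf once the observed gap between the two best split attributes
    exceeds this epsilon."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))
```

Because epsilon shrinks as 1/√n, the tree defers each split decision only until enough stream examples have been seen, which is what makes single-pass learning possible.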

This paper provides a sketch of some of the major research thrusts in data envelopment analysis (DEA) over the three decades since the appearance of the seminal work of [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The focus herein is primarily on methodological developments, and in no manner does the paper address the many excellent applications that have appeared during that period. Specifically, attention is primarily paid to (1) the various models for measuring efficiency, (2) approaches to incorporating restrictions on multipliers, (3) considerations regarding the status of variables, and (4) modeling of data variation.

Pointillism-based super-resolution techniques, such as photoactivated localization microscopy (PALM), involve multiple cycles of sequential activation, imaging, and precise localization of single fluorescent molecules. A super-resolution image, carrying nanoscopic structural information, is then constructed by compiling all the image sequences. Because the final image resolution is determined by the localization precision of the detected single molecules and by their density, accurate image reconstruction requires imaging of biological structures labeled with fluorescent molecules at high density. In such image datasets, stochastic variations in photon emission and intervening dark states lead to uncertainties in the identification of single molecules. This, in turn, prevents the proper utilization of the wealth of information on molecular distribution and quantity. A recent strategy for overcoming this problem is pair-correlation analysis applied to PALM. Using rigorous statistical algorithms to estimate the number of detected proteins, this approach allows the spatial organization of molecules to be described quantitatively. In photoactivated localization microscopy (PALM), photoactivatable fluorescent proteins (PA-FPs) are stochastically activated, imaged, localized, and then bleached. By repeating this cycle with different subsets of PA-FPs and combining the data, a super-resolution image is obtained. An overview of the latest developments in pair-correlation analysis is given here.
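The quantity at the center of pair-correlation analysis can be illustrated with a naive radial estimator g(r) on 2D localization coordinates. This sketch omits edge corrections and, crucially, the statistical model of correlated fluorophore re-detections that the actual PALM pair-correlation method relies on; the function name and conventions are ours.

```python
import numpy as np

def pair_correlation(points, box_size, r_edges):
    """Naive radial pair-correlation estimate g(r) for 2D points in a
    square box of side box_size. g(r) ~ 1 for an uncorrelated (Poisson)
    point pattern; g(r) > 1 at short r signals clustering, e.g. repeated
    detections of the same molecule."""
    n = len(points)
    rho = n / box_size ** 2                       # average point density
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                # unique pair distances
    counts, _ = np.histogram(d, bins=r_edges)
    r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
    dr = np.diff(r_edges)
    # Expected pair count per annulus for an ideal uncorrelated process.
    expected = 0.5 * n * rho * 2 * np.pi * r_mid * dr
    return r_mid, counts / expected
```

The published PALM analysis fits the short-range peak of g(r) with a model of single-molecule re-detections, which is what lets it separate true molecular clustering from blinking artifacts.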

Evolutionary network analysis has attracted increasing interest in the literature because of the importance of different kinds of dynamic social networks, email networks, biological networks, and social streams. When a network evolves, the results of data mining algorithms such as community detection need to be correspondingly updated. Furthermore, the specific kinds of changes to the structure of the network, such as the impact on community structure or on structural parameters such as node degrees, also need to be analyzed. Some dynamic networks have a much faster rate of edge arrival and are referred to as network streams or graph streams. The analysis of such networks is especially challenging, because it must be performed online, under the one-pass constraint of data streams. The incorporation of content can add further complexity to the evolution analysis process. This survey provides an overview of the vast literature on graph evolution analysis and the numerous applications that arise in different contexts.
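Two standard building blocks for one-pass analysis of an edge stream are exact maintenance of node degrees and uniform reservoir sampling of edges. The sketch below is a generic illustration of working under the one-pass constraint, not any specific published method; all names are ours.

```python
import random
from collections import defaultdict

def process_edge_stream(edges, reservoir_size, seed=0):
    """Single pass over a stream of (u, v) edges: maintain exact node
    degrees and a uniform random sample of edges via classic reservoir
    sampling. Both structures use memory independent of stream length
    (degrees aside), as the one-pass model requires."""
    rng = random.Random(seed)
    degree = defaultdict(int)
    reservoir = []
    for t, (u, v) in enumerate(edges):
        degree[u] += 1
        degree[v] += 1
        if len(reservoir) < reservoir_size:
            reservoir.append((u, v))
        else:
            j = rng.randrange(t + 1)        # keep edge t with prob k/(t+1)
            if j < reservoir_size:
                reservoir[j] = (u, v)
    return degree, reservoir
```

Stream algorithms for community detection or structural-change tracking typically combine such constant-memory sketches and samples, since the full edge history cannot be stored or revisited.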