Multi-attribute analysis is a useful tool in many economic, managerial, construction, and other problems. Performance measures in the COPRAS (multi-attribute COmplex PRoportional ASsessment of alternatives) method are usually assumed to be accurate. This method assumes a direct and proportional dependence of the weight and utility degree of the investigated versions on a system of attributes adequately describing the alternatives and on the values and weights of the attributes. However, there is usually some uncertainty involved in all multi-attribute model inputs. The objective of this research is to demonstrate how simulation can be used to reflect fuzzy inputs, which allows a more complete interpretation of model results. A case study of general contractor selection on the basis of multiple attributes of efficiency with fuzzy inputs illustrates the concept, applying the COPRAS-G method. The research concludes that the COPRAS-G method is appropriate for this purpose.
The paper presents a novel method for the extraction of facial features based on the Gabor-wavelet representation of face images and the kernel partial-least-squares discrimination (KPLSD) algorithm. The proposed feature-extraction method, called the Gabor-based kernel partial-least-squares discrimination (GKPLSD), is performed in two consecutive steps. In the first step a set of forty Gabor wavelets is used to extract discriminative and robust facial features, while in the second step the kernel partial-least-squares discrimination technique is used to reduce the dimensionality of the Gabor feature vector and to further enhance its discriminatory power. For optimal performance, the KPLSD-based transformation is implemented using the recently proposed fractional-power-polynomial models. The experimental results based on the XM2VTS and ORL databases show that the GKPLSD approach outperforms feature-extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA) or generalized discriminant analysis (GDA) as well as combinations of these methods with Gabor representations of the face images. Furthermore, as the KPLSD algorithm is derived from the kernel partial-least-squares regression (KPLSR) model it does not suffer from the small-sample-size problem, which is regularly encountered in the field of face recognition.
The Advanced Encryption Standard (AES) block cipher system is widely used in cryptographic applications. A nonlinear substitution operation is the main factor of the AES cipher system's strength. The purpose of the proposed approach is to generate random S-boxes that change with every change of the secret key. The fact that the S-boxes are randomly key-dependent and unknown is the main strength of the new approach, since both linear and differential cryptanalysis require known S-boxes. In the paper, we briefly analyze the AES algorithm, substitution S-boxes, and linear and differential cryptanalysis, and describe a randomly key-dependent S-box and inverse S-box generation algorithm. After that, we introduce an independency measure of the S-box elements and experimentally investigate the quality of the generated S-boxes.
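The key-dependent generation step can be illustrated with a minimal sketch (not the paper's algorithm): hashing the secret key seeds a pseudo-random shuffle of the identity permutation, which guarantees a bijective S-box and makes the inverse S-box easy to derive.

```python
import hashlib
import random

def generate_sboxes(key: bytes):
    """Derive a bijective 256-entry S-box and its inverse from a secret key.

    Illustrative sketch only: the key is hashed to seed a PRNG, which
    performs a Fisher-Yates shuffle of the identity permutation, so the
    result is always a bijection on {0, ..., 255}.
    """
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    rng = random.Random(seed)
    sbox = list(range(256))
    rng.shuffle(sbox)                      # key-dependent permutation
    inv_sbox = [0] * 256
    for i, v in enumerate(sbox):
        inv_sbox[v] = i                    # invert the permutation
    return sbox, inv_sbox

sbox, inv = generate_sboxes(b"secret key")
assert all(inv[sbox[x]] == x for x in range(256))   # substitution is invertible
```

Any change of the key changes the seed and hence the permutation, which is the property the abstract relies on against linear and differential cryptanalysis.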
The main scientific problems investigated in this paper deal with the multiple-criteria evaluation of the quality of the main components of e-learning systems, i.e., learning objects (LOs) and virtual learning environments (VLEs). The aim of the paper is to analyse the existing LO and VLE quality evaluation methods and to create more comprehensive methods based on a learning individualisation approach. LO and VLE quality evaluation criteria are further investigated as optimisation parameters, and several optimisation methods are explored for application. The application of the experts' additive utility function using evaluation criteria ratings and their weights is explored in more detail. These new elements distinguish the given work from all the earlier works in the area.
This paper has two aims. The first is the optimization of a lossy compression coder realized as a companding quantizer with the optimal compression law. This optimization is achieved by optimizing the maximal amplitude for the optimal companding quantizer for a Laplacian source. An approximate closed-form expression for the optimal maximal amplitude is found. Although this expression is very simple and suitable for practical implementation, it satisfies the optimality criterion for the Lloyd-Max quantizer (for R >= 6 bits/sample). In the second part of this paper, a novel simple lossless compression method is presented. This method is much simpler than the Huffman method, but it gives better results. Finally, at the end of the paper, we join the optimal companding quantizer and the lossless coding method together in one generalized compression method. This method is applied to a concrete still image, and good results are obtained. Besides still images, this method could also be used for compressing speech and biomedical signals.
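The companding idea the first part builds on can be sketched as follows, in our own notation (b, R, and xmax are illustrative parameters, not values from the paper): the optimal compressor for a Laplacian density p(x) = exp(-|x|/b)/(2b) is proportional to the integral of p(x)**(1/3), i.e. to 1 - exp(-|x|/(3b)); a sample is companded, uniformly quantized, then expanded back.

```python
import math

def compand_quantize(x, xmax, b=1.0, R=8):
    """Companding quantizer sketch for a zero-mean Laplacian source:
    compress with the optimal Laplacian compressor, apply a uniform
    mid-rise quantizer with N = 2**R levels on [-xmax, xmax], expand back.
    """
    N = 2 ** R
    k = 1.0 - math.exp(-xmax / (3.0 * b))               # normalization
    # compress (optimal compressor, up to scaling)
    c = math.copysign(xmax * (1.0 - math.exp(-abs(x) / (3.0 * b))) / k, x)
    # uniform mid-rise quantization of the companded value
    step = 2.0 * xmax / N
    q = (math.floor(c / step) + 0.5) * step
    q = max(min(q, xmax - step / 2), -xmax + step / 2)  # clip to outer cells
    # expand (inverse compressor)
    y = abs(q) * k / xmax
    return math.copysign(-3.0 * b * math.log(1.0 - y), q)

x = 0.7
y = compand_quantize(x, xmax=10.0, R=8)
assert abs(y - x) < 0.05   # fine quantization keeps the error small
```

The role of the maximal amplitude xmax, which the paper optimizes, is visible here: too small and large samples are clipped, too large and the effective step size near the origin grows.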
Most real-life data are not truly high-dimensional: the data points lie on a low-dimensional manifold embedded in a high-dimensional space. Nonlinear manifold learning methods automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, preserving the underlying structure in the data. In this paper, we have used the locally linear embedding method to unfold a manifold. In order to quantitatively estimate the topology preservation of a manifold after unfolding it in a low-dimensional space, some quantitative numerical measure must be used. There are many different measures of topology preservation. We have investigated three measures: Spearman's rho, Konig's measure (KM), and the mean relative rank errors (MRRE). After investigating different manifolds, it turned out that only KM and MRRE gave proper results on manifold topology preservation in all the cases. The main reason is that Spearman's rho considers distances between all the pairs of points from the analysed data set, while KM and MRRE evaluate a limited number of neighbours of each point from the analysed data set.
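The first of these measures can be sketched concretely: Spearman's rho rank-correlates the pairwise distances before and after embedding (illustrative implementation, assuming all pairwise distances are distinct).

```python
import numpy as np

def spearman_rho(X_high, X_low):
    """Spearman's rho between pairwise distances in the original space and
    in the embedding -- a global measure of topology preservation.
    rho = 1 means the embedding preserves the full distance ordering.
    """
    def pair_dists(X):
        d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        i, j = np.triu_indices(len(X), k=1)
        return d[i, j]                          # all point-pair distances

    def ranks(v):
        return np.argsort(np.argsort(v)).astype(float)   # ok if distinct

    r1, r2 = ranks(pair_dists(X_high)), ranks(pair_dists(X_low))
    n = len(r1)
    return 1.0 - 6.0 * np.sum((r1 - r2) ** 2) / (n * (n ** 2 - 1))

# a perfect (similarity) embedding yields rho = 1
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 3.0]])
assert abs(spearman_rho(X, 2.0 * X) - 1.0) < 1e-12
```

The use of all point pairs is exactly the property the abstract identifies as the weakness of Spearman's rho relative to the neighbourhood-based KM and MRRE measures.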
In this paper, a method for the study of cluster stability is proposed. We draw pairs of samples from the data according to two sampling distributions. The first distribution corresponds to the high-density zones of the data-element distribution; thus it is associated with the cluster cores. The second one, associated with the cluster margins, is related to the low-density zones. The samples are clustered, and the two obtained partitions are compared. The partitions are considered consistent if the obtained clusters are similar. The resemblance is measured by the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples. We use the Friedman and Rafsky two-sample test statistic. Under the homogeneity hypothesis, this statistic is normally distributed. Thus, it can be expected that the true number of clusters corresponds to the statistic's empirical distribution which is closest to normal. Numerical experiments demonstrate the ability of the approach to detect the true number of clusters.
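The core edge-count statistic can be sketched as follows (an illustrative simplification; the actual Friedman-Rafsky test additionally normalizes this count): build the minimal spanning tree of the pooled samples and count the edges that join points from different samples.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def cross_edge_count(A, B):
    """Number of MST edges joining the two samples.  Few cross edges mean
    the samples occupy separate regions; many cross edges support the
    homogeneity hypothesis."""
    X = np.vstack([A, B])
    labels = np.array([0] * len(A) + [1] * len(B))
    mst = minimum_spanning_tree(cdist(X, X)).tocoo()   # MST of pooled data
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(0)
near = rng.normal(0.0, 1.0, (30, 2))      # sample from one cluster
far = rng.normal(100.0, 1.0, (30, 2))     # a well-separated cluster
assert cross_edge_count(near, far) == 1   # separated samples: one bridge edge
```

Two well-separated samples are connected by exactly one MST edge, whereas samples drawn from the same distribution interleave and produce many cross edges.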
Secure communication between set-top boxes (STBs) and smart cards is directly related to the benefit of the service providers and the legal rights of users, while key exchange is the essential part of a secure communication. In 2004, Jiang et al. proposed a key exchange protocol for STBs and smart cards based upon Schnorr's digital signature protocol and a one-way hash function. This paper, however, demonstrates that Jiang et al.'s protocol is vulnerable to an impersonation attack and does not provide perfect forward secrecy. In addition, in order to resolve such problems, we present a new secure key exchange protocol based on a one-way hash function and the Diffie-Hellman key exchange algorithm.
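The two building blocks named here, Diffie-Hellman exchange and a one-way hash, compose as in this toy sketch; the group parameters are illustrative and this is not the proposed protocol itself (real deployments use standardized groups of 2048 bits or more).

```python
import hashlib
import secrets

def dh_session_key(p: int, g: int):
    """Toy Diffie-Hellman exchange followed by a one-way hash to derive
    the session key.  Fresh ephemeral secrets per session are what provide
    forward secrecy: compromising a long-term key later reveals nothing
    about past g^(ab) values."""
    a = secrets.randbelow(p - 2) + 1        # STB's ephemeral secret
    b = secrets.randbelow(p - 2) + 1        # smart card's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)       # exchanged public values
    shared_stb = pow(B, a, p)               # both sides compute g^(ab) mod p
    shared_card = pow(A, b, p)
    assert shared_stb == shared_card
    # hash compresses the shared group element into a fixed-size key
    return hashlib.sha256(str(shared_stb).encode()).digest()

key = dh_session_key(p=0xFFFFFFFFFFFFFFC5, g=5)   # toy 64-bit prime modulus
assert len(key) == 32
```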
Inter-Organizational Workflow (IOW) aims at supporting the collaboration between several autonomous and heterogeneous business processes distributed over different enterprises or organizations. Coordination of these processes is a fundamental issue that has been mainly addressed in a static context, but it still remains open in a dynamic one such as the Internet, in which IOW applications are more and more enacted nowadays. In such a context, Multi-Agent Systems (MAS) are known to be a natural solution for modeling IOW, since they provide adequate abstractions and specific mediators to cope with IOW coordination. Consequently, this paper provides an agent-based model for coordinating business processes involved in a dynamic IOW. This model is a triplet (E, M, R). E is the set of coordinated entities; it corresponds to the different business processes that may be published, discovered, or deployed by IOW partners. M is the medium supporting coordination; it is a multi-agent architecture compliant with the Workflow Management Coalition architecture and integrating specific components devoted to coordination issues. Finally, R is the set of rules governing the coordination. In our context, R is described through an organizational model aiming at structuring the interaction among the coordinated entities and the different components of the architecture.
Recent changes at the intersection of the fields of intelligent systems, optimization, and statistical learning are surveyed. These changes bring new theoretical and computational challenges to existing research areas ranging from web page mining to computer vision, pattern recognition, financial mathematics, bioinformatics, and many others.
The paper studies stochastic optimization problems in Reproducing Kernel Hilbert Spaces (RKHS). The objective function of such problems is a mathematical expectation functional depending on decision rules (or strategies), i.e., on functions of observed random parameters. Feasible rules are restricted to belong to an RKHS. This kind of problem arises in on-line decision making and in statistical learning theory. We solve the problem by sample average approximation combined with Tikhonov's regularization, and establish sufficient conditions for uniform convergence of approximate solutions with probability one, jointly with a rule for downward adjustment of the regularization factor with increasing sample size.
Interoperability is becoming an area of high focus at both the national and cross-national level. This paper presents an assessment of the maturity levels of cross-national interoperability activities within the governmental domain in 13 nations. The analysis includes an assessment of national enterprise architecture programs and national interoperability collaborations, in order to find out whether these serve as important precursors for engaging in cross-national interoperability collaborations. The paper documents the importance of national activities as a precursor for engaging in cross-national interoperability collaboration by demonstrating the relation between the maturity of national and cross-national activities.
It is known that the minimum affine separating committee (MASC) combinatorial optimization problem, which is related to some machine learning techniques, is NP-hard and does not belong to the Apx class unless P = NP. In this paper, it is shown that the MASC problem formulated in a space of fixed dimension n > 1 is intractable even if the sets defining an instance of the problem are in general position. A new polynomial-time approximation algorithm for this modification of the MASC problem is presented. An approximation ratio and complexity bounds of the algorithm are obtained.
Business rules are a relatively new addition to the development of Enterprise Resource Planning (ERP) systems, which are a kind of business information system. Recently, some relevant enhancements of existing business information systems engineering methods were introduced, although there are still open issues of how business rules may be used to improve the qualitative and quantitative attributes of such information systems. The paper discusses existing business information systems engineering issues arising from the use of the business rules approach. The paper also introduces several ways of involving business rules, aiming at ensuring ERP systems development agility, based on ongoing research in the field carried out in part by the authors.
This paper presents a novel data modulation scheme, PCCD-OFDM-ASK: phase-continuous context-dependent orthogonal frequency division multiplexing amplitude shift keying. The proposed modulation is successfully applied in a mobile payment system. It is used to transmit digital data over the speech channel of the GSM mobile communication system, as well as CDMA. The main key points of the proposed modulation scheme are: precise signal synchronization between the modulator and demodulator, signal-energy-independent operation, on-line adaptation to the frequency characteristics of the transmission channel, and a controlled frequency bandwidth, thus enabling non-overlapped full-duplex communication over the GSM voice channel.
In this paper, we propose a new ID-based threshold signature scheme from bilinear pairings, which is provably secure in the random oracle model under the bilinear Diffie-Hellman assumption. Our scheme adopts the approach that the private key associated with an identity, rather than the master key of the PKG, is shared. Compared to the state-of-the-art work by Baek and Zheng, our scheme has the following advantages. (1) The round complexity of the threshold signing protocol is optimal: during the signing procedure, each party broadcasts only one message. (2) The communication channel is optimal: during the threshold signing procedure, the broadcast channel among signers is enough, and no private channel between any two signing parties is needed. (3) Our scheme is much more efficient than the Baek and Zheng scheme in terms of computation, since we try our best to avoid using bilinear pairings. Indeed, the private key of an identity is indirectly distributed by sharing a number x(ID) ∈ Z_q*, which is much more efficient than directly sharing an element in the bilinear group. Moreover, the major computationally expensive operation, the distributed key generation protocol based on the bilinear map, is avoided. (4) Finally, proactive security can easily be added to our scheme.
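The sharing step mentioned in point (3) — distributing a field element such as x(ID) among signers — is classically done with Shamir's (t, n) secret sharing; a minimal sketch over an illustrative prime modulus standing in for q (the paper's protocol builds further machinery on top of this step):

```python
import secrets

Q = 2**127 - 1          # illustrative prime modulus standing in for q

def share(x, t, n, q=Q):
    """Shamir (t, n) sharing: x is the constant term of a random degree
    t-1 polynomial over Z_q; share i is the polynomial evaluated at i."""
    coeffs = [x] + [secrets.randbelow(q) for _ in range(t - 1)]
    def f(i):
        return sum(c * pow(i, k, q) for k, c in enumerate(coeffs)) % q
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares, q=Q):
    """Lagrange interpolation at 0 recovers x from any t shares."""
    x = 0
    for i, yi in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        x = (x + yi * num * pow(den, -1, q)) % q
    return x

secret = 123456789
shares = share(secret, t=3, n=5)
assert reconstruct(shares[:3]) == secret    # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == secret
```

Sharing an integer in Z_q* this way is cheap; the efficiency claim in the abstract rests on avoiding the same distribution over the bilinear group.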
The Semantic Web is envisioned as a semantic description of data and services enabling unambiguous computerized interpretation. Thanks to semantic description, computers can perform demanding tasks such as automation of discovery and access to heterogeneous data sources. Although this is possible with existing technologies, the combination of web services technology, ontologies, and generative programming methods makes it simpler and more efficient. This paper presents a model for the dynamic generation of web services for data retrieval from heterogeneous data sources using ontologies. Emphasis is on the dynamic generation of web services customized to a particular user based on a request defined by an ontology. The paper also describes a prototype implementation of the model. Some advantages of our approach over other approaches are also provided.
In this paper, an efficient hybrid genetic algorithm (HGA) and its variants for the well-known combinatorial optimization problem, the quadratic assignment problem (QAP), are discussed. In particular, we tested our algorithms on a special type of QAPs, the structured quadratic assignment problems. The results from the computational experiments on this class of problems demonstrate that HGAs achieve near-optimal and (pseudo-)optimal solutions in very reasonable computation times. The obtained results also confirm that hybrid genetic algorithms are among the most suitable heuristic approaches for this type of QAPs.
In this study, the performance of the modified subgradient algorithm (MSG) in solving the 0-1 quadratic knapsack problem (QKP) was examined. The MSG was proposed by Gasimov for solving dual problems constructed with respect to the sharp augmented Lagrangian function. The MSG has some important proven properties: for example, it is convergent, and it guarantees a zero duality gap for problems whose objective and constraint functions are all Lipschitz. Additionally, the MSG has been successfully used for solving non-convex continuous and some combinatorial problems with equality constraints since it was first proposed. In this study, the MSG was used to solve the QKP, which has an inequality constraint. The first step in solving the problem was converting the zero-one nonlinear QKP into a continuous nonlinear problem by adding only one constraint and not adding any new variables. Second, in order to solve the continuous QKP, a dual problem with zero duality gap was constructed by using the sharp augmented Lagrangian function. Finally, the MSG was used to solve the dual problem, considering the equality constraint in the computation of the norm. To compare the performance of the MSG with some other methods, test instances from the relevant literature were solved both by using the MSG and by using three different MINLP solvers of the GAMS software. The results obtained are presented and discussed.
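The conversion described in the first step can be sketched (in our notation, which may differ from the paper's) as follows: on the box [0,1]^n each term x_j(1 - x_j) is nonnegative, so a single equality constraint forces integrality without introducing any new variables.

```latex
% One added equality constraint, no new variables:
x_j \in \{0,1\},\; j = 1,\dots,n
\quad\Longleftrightarrow\quad
x_j \in [0,1],\; j = 1,\dots,n,
\qquad \sum_{j=1}^{n} x_j\,(1 - x_j) = 0 .
```

The resulting equality constraint is exactly the kind the MSG handles, which is why it enters the norm computation in the dual step.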
We propose a distributed key generation protocol for pairing-based cryptosystems which is adaptively secure in the erasure-free and secure channel model, and which at the same time completely avoids the use of interactive zero-knowledge proofs. Utilizing it as the threshold key generation protocol, we present a secure (t, n) threshold signature scheme based on Waters' signature scheme. We prove that our scheme is unforgeable and robust against any adaptive adversary who can choose players for corruption at any time during the run of the protocols and can make adaptive chosen-message attacks. Moreover, our security proof is in the standard model (without random oracles). In addition, our scheme achieves optimal resilience; that is, the adversary can corrupt any t < n/2 players.