The JPEG image is the most popular file format for digital images. However, to date, there seem to have been very few data hiding techniques that take the JPEG format into account. In this paper, we propose a novel high-capacity data hiding method based on JPEG. The proposed method employs a capacity table to estimate the number of bits that can be hidden in each DCT component so that significant distortions in the stego-image can be avoided. The capacity table is derived from the JPEG default quantization table and the Human Visual System (HVS). Then, the adaptive least-significant-bit (LSB) substitution technique is employed to process each quantized DCT coefficient. The proposed data hiding method enables us to control the level of embedding capacity by using a capacity factor. According to our experimental results, our new scheme can achieve an impressively high embedding capacity of around 20% of the compressed image size with little noticeable degradation of image quality.
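The adaptive LSB substitution step can be sketched as follows; the function names and the way the per-coefficient capacity k is supplied are illustrative assumptions, not the paper's exact procedure (in the described scheme, k would come from the capacity table):

```python
def embed_bits(coeff, bits):
    # Replace the k least-significant bits of a quantized DCT
    # coefficient's magnitude with k secret bits, preserving the sign.
    k = len(bits)
    magnitude = abs(coeff)
    cleared = magnitude & ~((1 << k) - 1)   # zero the k LSBs
    stego = cleared | int(bits, 2)          # write the secret bits
    return -stego if coeff < 0 else stego

def extract_bits(coeff, k):
    # Recover the k hidden bits from a stego coefficient.
    return format(abs(coeff) & ((1 << k) - 1), '0{}b'.format(k))
```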
Extensive amounts of knowledge and data stored in medical databases require the development of specialized tools for storing, accessing, and analyzing data and for effectively using stored knowledge. Intelligent methods such as neural networks, fuzzy sets, decision trees, and expert systems are, slowly but steadily, being applied in the medical field. Recently, rough set theory, a new intelligent technique, has been used for the discovery of data dependencies, data reduction, approximate set classification, and rule induction from databases. In this paper, we present a rough set method for generating classification rules from a set of 360 observed samples of breast cancer data. The attributes are selected and normalized, and then the rough set dependency rules are generated directly from the real-valued attribute vector. The rough set reduction technique is then applied to find all reducts of the data, which contain the minimal subsets of attributes associated with a class label for classification. Experimental results from applying the rough set analysis to the set of data samples are given and evaluated. In addition, the generated rules are compared to the well-known ID3 classifier algorithm. The study showed that the theory of rough sets seems to be a useful tool for inductive learning and a valuable aid for building expert systems.
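The reduct computation described above can be illustrated with a brute-force sketch (an exhaustive search over attribute subsets, practical only for small attribute sets; the paper's actual reduction technique is not reproduced here):

```python
from itertools import combinations

def indiscernibility(rows, attrs):
    # Partition object indices into equivalence classes: objects are
    # indiscernible if they agree on every attribute in attrs.
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(classes.values())

def positive_region(rows, labels, attrs):
    # Objects whose equivalence class under attrs is label-pure.
    pos = set()
    for cls in indiscernibility(rows, attrs):
        if len({labels[i] for i in cls}) == 1:
            pos |= cls
    return pos

def reducts(rows, labels):
    # Minimal attribute subsets preserving the full positive region.
    all_attrs = tuple(range(len(rows[0])))
    target = positive_region(rows, labels, all_attrs)
    found = []
    for r in range(1, len(all_attrs) + 1):
        for subset in combinations(all_attrs, r):
            if any(set(f) <= set(subset) for f in found):
                continue  # a smaller reduct already covers this subset
            if positive_region(rows, labels, subset) == target:
                found.append(subset)
    return found
```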
The paper describes a new method to segment the ischemic stroke region in computed tomography (CT) images by utilizing joint features from the mean, standard deviation, histogram, and gray level co-occurrence matrix methods. The presented unsupervised segmentation technique shows the ability to segment the ischemic stroke region.
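Of the joint features listed, the gray level co-occurrence matrix is the most involved; a minimal sketch for a single pixel offset follows (the offset and the number of gray levels are illustrative parameters):

```python
def glcm(image, dx=1, dy=0, levels=8):
    # Gray level co-occurrence matrix: counts how often gray level i
    # occurs at offset (dx, dy) from gray level j in the image.
    m = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
    return m
```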
A proxy signature allows a designated person, called a proxy signer, to sign a message on behalf of the original signer. Proxy signatures are very useful tools when one needs to delegate his/her signing capability to another party. A number of proxy signature schemes have been proposed and have succeeded in proxy delegation, but these schemes are defective in proxy revocation. In this paper, we propose two proxy signature schemes based on RSA cryptosystems. The first proposed scheme does not consider a proxy revocation mechanism; however, it helps us to compare our protocol with the existing RSA-based schemes. The second proposed scheme provides an effective proxy revocation mechanism. The proposed schemes do not require any secure channel for proxy key delivery and support the necessary security requirements of proxy signatures.
A novel approach to outlier detection, based on the properties of the distribution of distances between multidimensional points, is presented. The basic idea is to evaluate an outlier factor for each data point. The factor is used to rank the dataset objects with regard to their degree of being an outlier. Selecting the points with the minimal factor values can then identify outliers. The main advantages of the approach are: (1) no parameter choice is necessary for outlier detection; (2) detection does not depend on clustering algorithms. To demonstrate the quality of the outlier detection, experiments were performed on widely used datasets. A comparison with some popular detection methods shows the superiority of our approach.
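As an illustration of a distance-based outlier factor, the following sketch ranks points by their mean Euclidean distance to all other points. This is a generic stand-in, not the paper's factor; note that with this particular choice, the largest (rather than smallest) values mark the outliers:

```python
import math

def outlier_factors(points):
    # Mean Euclidean distance from each point to all others --
    # a simple distance-based outlier factor (illustrative only).
    n = len(points)
    return [
        sum(math.dist(p, q) for j, q in enumerate(points) if j != i) / (n - 1)
        for i, p in enumerate(points)
    ]
```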
The paper deals with an intelligent functional model for optimizing product design and its manufacturing process in hybrid manufacturing systems consisting of people, machines and computers. The knowledge-based framework of an intelligent functional model has been developed. It gives a product designer and manufacturer the possibility of finding an optimal production plan in the early stage of product design. The mathematical formalization of the model is provided. A consecutive optimization scheme has been applied for selecting an optimal alternative of a product design and its production plan. The proposed model is being implemented both in industry and in the university education process.
In this article we propose a novel Wavelet Packet Decomposition (WPD)-based modification of the classical Principal Component Analysis (PCA)-based face recognition method. The proposed modification allows PCA-based face recognition to be used with a large number of training images and performs training much faster than the traditional PCA-based method. The proposed method was tested on a database containing photographs of 423 persons and achieved a rank-one recognition rate of 82-89%. These results are close to those achieved by the classical PCA-based method (83-90%).
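The PCA stage of such a pipeline can be sketched with NumPy; this is a generic eigenface-style projection, without the WPD preprocessing the paper adds:

```python
import numpy as np

def pca_basis(X, k):
    # X: one flattened (and, in the paper's setting, WPD-preprocessed)
    # face image per row. SVD of the centered data yields the top-k
    # principal axes without forming the covariance matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]

def project(X, basis):
    # Low-dimensional features used for nearest-neighbour face matching.
    return (X - X.mean(axis=0)) @ basis.T
```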
The present paper describes the development and performance of parallel FEM software for solving various CFD problems. A domain decomposition strategy and a parallel iterative GMRES solver have been adapted to the universal space-time FEM code FEMTOOL, which allows any partial differential equation to be implemented with minor effort. The developed data structures, static load balancing and inter-processor communication algorithms have been particularly suited to homogeneous distributed-memory PC clusters. The universality of the considered parallel algorithms has been validated by solving applications described by the Poisson equation, the convective transport equation and the Navier-Stokes equations. Three typical benchmark problems have been solved in order to perform the efficiency study. The performance of the parallel computations, the speed-up and the efficiency have been measured on three BEOWULF PC clusters as well as on a cluster of IBM RISC workstations and on the IBM SP2 supercomputer.
We consider a problem of nonlinear stochastic optimization with linear constraints. A method of epsilon-feasible solutions using series of Monte-Carlo estimators has been developed for solving this problem while avoiding "jamming" or "zigzagging". Our approach is distinguished by two peculiarities: the optimality of a solution is tested in a statistical manner, and the Monte-Carlo sample size is adjusted so as to decrease the total number of Monte-Carlo trials and, at the same time, to guarantee estimation of the objective function with an admissible accuracy. Under some general conditions we prove by the martingale approach that the proposed method converges a.s. to a stationary point of the problem solved. As an example, the maximization of the probability of a portfolio's desired return is presented, too.
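The sample-size adjustment idea can be sketched as follows; the growth rule, the accuracy target, and the sampling distribution are illustrative assumptions, not the paper's exact rule:

```python
import random
import statistics

def mc_estimate(f, n, rng):
    # Monte-Carlo estimator of E[f(xi)], xi ~ N(0, 1), together with
    # its standard error.
    vals = [f(rng.gauss(0, 1)) for _ in range(n)]
    return statistics.fmean(vals), statistics.stdev(vals) / n ** 0.5

def next_sample_size(se, target, n, max_n=100_000):
    # Grow the sample only as much as needed to reach the admissible
    # accuracy, keeping the total number of Monte-Carlo trials low.
    if se <= target:
        return n
    return min(max_n, int(n * (se / target) ** 2))
```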
In this paper, a new digital watermarking method based on vector quantization (VQ) is proposed. In contrast with conventional VQ-based watermarking schemes, the means of sub-blocks are used to train the VQ codebook. In addition, the Anti-Gray Coding (AGC) technique is employed to enhance the robustness of the proposed watermarking scheme. In this scheme, secret keys are used to hide the association between the original image and the watermark. The set of secret keys is then registered with a trusted third party for future verification. Thus, the original image remains unchanged after the watermark is merged into the set of secret keys. Experimental results show that the watermark can survive various possible attacks. In addition, the size of the secret keys can be reduced.
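The codebook lookup at the heart of VQ can be sketched in a few lines; training the codebook from sub-block means (e.g., by the LBG algorithm) and the AGC step are omitted:

```python
def nearest_codeword(block_mean, codebook):
    # Index of the codeword closest to a sub-block mean (1-D VQ).
    # In a scheme like the one described, such indices combined with
    # the watermark would form part of the secret keys.
    return min(range(len(codebook)),
               key=lambda i: abs(codebook[i] - block_mean))
```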
This paper describes our research on statistical language modeling of Lithuanian. The idea of improving sparse n-gram models of the highly inflected Lithuanian language by interpolating them with complex n-gram models based on word clustering and morphological word decomposition was investigated. Words, word base forms and part-of-speech tags were clustered into 50 to 5000 automatically generated classes. Multiple 3-gram and 4-gram class-based language models were built and evaluated on a Lithuanian text corpus containing 85 million words. Class-based models linearly interpolated with the 3-gram model yielded up to a 13% reduction in perplexity compared with the baseline 3-gram model. Morphological models decreased the out-of-vocabulary word rate from 1.5% to 1.02%.
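The two quantities central to the evaluation, linear interpolation and perplexity, can be sketched as follows (the interpolation weight is an illustrative value that would be tuned on held-out data):

```python
import math

def interpolate(p_word, p_class, lam=0.7):
    # Linear interpolation of a word n-gram probability with a
    # class-based model probability for the same token.
    return lam * p_word + (1 - lam) * p_class

def perplexity(probs):
    # Perplexity of a test sequence given per-token probabilities.
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))
```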
Text categorization - the assignment of natural language documents to one or more predefined categories based on their semantic content - is an important component in many information organization and management tasks. The performance of neural network learning is known to be sensitive to the initial weights and architecture. This paper discusses the use of decision tree classifiers to initialize multilayer neural networks for improving text categorization accuracy. Each decision tree path from the root node to a final leaf is used to initialize a single unit. Growing decision trees with increasingly larger amounts of training data results in larger decision tree sizes. As a result, the neural networks constructed from these decision trees are often larger and more complex than necessary. An appropriate choice of certainty factor is able to produce trees that are essentially constant in size in the face of increasingly larger training sets. Experimental results support the conclusion that error-based pruning can be used to produce appropriately sized trees, which are directly mapped to an optimal neural network architecture with good accuracy. The experimental evaluation demonstrates that this approach provides better classification accuracy on the Reuters-21578 corpus, one of the standard benchmarks for text categorization tasks. We present results comparing the accuracy of this approach with that of multilayer neural networks initialized with the traditional random method and with decision tree classifiers.
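A single decision-tree split can seed one hidden unit roughly as follows; the weight scale (steepness) is an illustrative free parameter, and this sketch covers only one split rather than the full path-to-network mapping:

```python
def split_to_unit(feature_index, threshold, n_features, steepness=4.0):
    # Map a tree split "x[i] > t" to initial perceptron weights: a large
    # weight on the tested feature and a bias placing the sigmoid's
    # transition at the threshold, so the unit mimics the split.
    weights = [0.0] * n_features
    weights[feature_index] = steepness
    bias = -steepness * threshold
    return weights, bias
```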
The tree is one of the most studied and practically useful classes of graphs and has been the focus of a great number of studies. There is an absence of generalized results for trees as a class, and even for one kind of labeling as a whole; only specialized results exist, limited to specific types of trees. A number of conjectures remain unsolved. Graham and Sloane (1980) conjectured trees to be Harmonious, and Ringel and Kotzig conjectured trees to be Graceful about three decades ago. Kotzig and Rosa (1970) asked whether all trees are Magic or not. No generalized result for Antimagic labeling has been given for trees so far. This paper presents methodologies to obtain the major labeling schemes for trees, viz. Harmonious, Sequential, Felicitous, Graceful and Antimagic, and finds trees not to be Magic except T(2, 1), thus solving the said conjectures. These findings could also be useful for those working in fields where graphs serve as models.
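As an example of one of the labeling schemes mentioned: a graceful labeling of an n-edge tree assigns distinct vertex labels from {0, ..., n} so that the absolute edge differences are exactly {1, ..., n}. A checker for a candidate labeling can be sketched as:

```python
def is_graceful(edges, labels):
    # edges: list of (u, v) pairs; labels: dict vertex -> label.
    # Checks the graceful labeling conditions for an n-edge tree.
    n = len(edges)
    diffs = sorted(abs(labels[u] - labels[v]) for u, v in edges)
    return (diffs == list(range(1, n + 1))
            and len(set(labels.values())) == n + 1
            and all(0 <= l <= n for l in labels.values()))
```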
In this paper, on the basis of the results of (Dyomin et al., 2003a), the structure of the Shannon information amount in the joint filtering and extrapolation problem for stochastic processes under continuous-discrete time observations with memory is investigated. For a particular class of processes, the general results are applied to the problem of optimal transmission over channels with lag, and the efficiency of filtering and extrapolation reception under transmission over channels with memory or lag is investigated.
In this paper a useful educational tool for minimizing low-order Boolean expressions is presented. The algorithm follows the Karnaugh map looping approach and provides optimal results. For the implementation, C++ was used in the CodeWarrior for Palm Operating System environment. In order to make the overall implementation efficient, an object-oriented approach was used. Two step-by-step examples are presented to illustrate the efficiency of the proposed algorithm. The proposed application can be used by students and professors in the fields of electrical and computer engineering and computer science.
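The elementary looping step, combining two adjacent cells (minterms or implicants differing in exactly one variable), can be sketched as follows, with implicants written as strings over '0', '1' and '-':

```python
def combine(m1, m2):
    # Combine two implicants that differ in exactly one specified bit,
    # replacing that bit with a don't-care '-'. Returns None when the
    # implicants are not adjacent -- the basic Karnaugh-map looping step.
    diff = [i for i, (a, b) in enumerate(zip(m1, m2)) if a != b]
    if len(diff) == 1 and '-' not in (m1[diff[0]], m2[diff[0]]):
        i = diff[0]
        return m1[:i] + '-' + m1[i + 1:]
    return None
```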
This paper develops a representation of multi-model based controllers using typical artificial intelligence structures, namely neural networks, genetic algorithms and fuzzy logic. The interpretation of multimodel controllers in an artificial intelligence frame allows the application of each specific technique to the design of improved multimodel based controllers. The obtained artificial intelligence based multimodel controllers are compared with classical single model based ones. It is shown through simulation examples that a transient response improvement can be achieved by using multiestimation based techniques. Furthermore, a method for synthesizing multimodel based neural network controllers from already designed single model based ones is presented. The proposed methodology makes it possible to extend existing single model based neural controllers to multimodel based ones, extending the applicability of this kind of technique to a more general type of controller. Also, some applications of genetic algorithms and fuzzy logic to multimodel controller design are proposed. Thus, the mutation operation from genetic algorithms inspires a robustness test consisting of a random modification of the estimates, which is used to select the estimates leading to the best identification performance for parameterizing the adaptive controller online. Such a test is useful for plants operating in a noisy environment. The proposed robustness test improves the selection of the plant model used to parameterize the adaptive controller in comparison with classical multimodel schemes, where the choice of controller parameterization is basically based on the identification accuracy of each model. Moreover, the fuzzy logic approach suggests new ideas for the design of multiestimation structures which can be applied to a broad variety of adaptive controllers, such as robotic manipulator controller design.
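The mutation-inspired robustness test can be sketched as follows; the perturbation scale, the number of trials, and the loss interface are illustrative assumptions rather than the paper's exact procedure:

```python
import random

def robustness_test(estimates, loss, trials=20, scale=0.05, seed=0):
    # Mutation-inspired robustness test: start from the best current
    # parameter estimate, randomly perturb it, and keep any candidate
    # with a better identification loss.
    rng = random.Random(seed)
    best = min(estimates, key=loss)
    for _ in range(trials):
        candidate = [p * (1 + rng.uniform(-scale, scale)) for p in best]
        if loss(candidate) < loss(best):
            best = candidate
    return best
```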
We propose a layered Soft IP Customisation (SIPC) model for specifying and implementing system-level soft IP design processes such as wrapping and customisation. The SIPC model has three layers: (1) a Specification Layer for specifying a customisation process using UML class diagrams, (2) a Generalisation Layer for representing a customisation process using metaprogramming techniques, and (3) a Generation Layer for generating the customised soft IP instances from metaspecifications. UML allows us to specify the customisation of soft IPs at a high level of abstraction. Metaprogramming allows us to manage variability in a domain, develop generic domain components, and describe the generation of customised component instances. The usage of the SIPC model eases and accelerates the reuse, adaptation and integration of pre-designed soft IPs into new hardware designs.
The problem of post-processing a classified image is addressed from the point of view of the Dempster-Shafer theory of evidence. Each neighbour of the pixel being analyzed is considered as an item of evidence supporting particular hypotheses regarding the class label of that pixel. The strength of support is defined as a function of the degree of uncertainty in the class label of the neighbour and the distance between the neighbour and the pixel being considered. A post-processing window defines the neighbours. Basic belief masses are obtained for each of the neighbours and aggregated according to the rule of orthogonal sum. The final label of the pixel is chosen according to the maximum of the belief function.
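The orthogonal sum (Dempster's rule of combination) used to aggregate the neighbours' basic belief masses can be sketched as follows, with focal elements represented as frozensets of class labels:

```python
def dempster_combine(m1, m2):
    # Orthogonal sum of two basic belief assignments: multiply masses
    # of intersecting focal elements and renormalize by the total
    # non-conflicting mass.
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```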
This paper presents a new in-place pseudo-linear radix sorting algorithm. The proposed algorithm, called MSL (Map Shuffle Loop), is an improvement over ARL (Maus, 2002). The ARL algorithm uses an in-place permutation loop of linear complexity in terms of input size. MSL uses a faster permutation loop that searches for the next element to permute group by group, instead of element by element. The algorithm and its runtime behavior are discussed in detail. The performance of MSL is compared with quicksort and with the fastest variant of radix sorting algorithms, the Least Significant Digit (LSD) radix sorting algorithm (Sedgewick, 2003).
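For reference, the LSD baseline that MSL is compared against can be sketched as a stable counting-sort pass per digit (note this textbook version uses auxiliary buckets and is therefore not in-place, unlike MSL and ARL):

```python
def lsd_radix_sort(keys, digits=4, base=10):
    # Least Significant Digit radix sort: one stable pass per digit,
    # from the least to the most significant.
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for k in keys:
            buckets[(k // base ** d) % base].append(k)
        keys = [k for b in buckets for k in b]
    return keys
```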
Neural networks built of Hodgkin-Huxley neurons were examined. These structures behaved like Liquid State Machines (LSM). They could effectively process different input signals (e.g., the Morse alphabet) into precisely defined output. It is also shown that logical gates can be created using Hodgkin-Huxley neurons and simple LSMs.