In this article, perspectives from Cloud computing practitioners are presented in order to address clients' concerns and raise awareness of the measures that are put in place to ensure the software security of client services running in the Cloud. In addition, the authors have investigated the impacts of a number of existing approaches and techniques to present a systematic survey of current software security issues in the Cloud environment. Based on these perspectives and this survey, a generic framework is conceptually designed to outline the possible current solutions to software security issues in the Cloud and to present a preferred software security approach for the Cloud research community to investigate. As a potential enhancement of the proposed Cloud software security framework, the concepts of fuzzy systems might be used to solve a large number of Cloud security issues at different framework levels.
Pure air is vital for sustaining human life. Air pollution causes long-term effects on people, so there is an urgent need to protect people from its profound effects. In general, people are unaware of the levels of air pollutants to which they are exposed. Vehicles, the burning of various kinds of waste, and industrial gases are the top three sources of air pollution. Of these three, human beings are most frequently exposed to the pollutants produced by motor vehicles. To aid in protecting people from vehicular air pollutants, this article proposes a framework that utilizes deep learning models. The framework utilizes a deep belief network to predict the levels of air pollutants along the paths people travel, and compares its predictions with those of a feed-forward neural network and an extreme learning machine. In the case study undertaken, the deep belief network achieved a higher index of agreement and lower RMSE values than the other two models.
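The two evaluation metrics named here can be computed directly. A minimal sketch, with illustrative (not the study's) pollutant readings, of RMSE and Willmott's index of agreement, which is the standard formulation of that metric:

```python
# Evaluation metrics used to compare pollutant-level predictors: RMSE and
# Willmott's index of agreement (d). The data values are illustrative only.
import math

def rmse(obs, pred):
    """Root mean squared error between observed and predicted series."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def index_of_agreement(obs, pred):
    """Willmott's d: 1 - SSE / potential error; 1.0 means perfect agreement."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    pot = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2
              for o, p in zip(obs, pred))
    return 1.0 - sse / pot

observed  = [12.0, 18.5, 25.1, 30.2, 22.4]   # e.g. PM2.5 along a route
predicted = [13.1, 17.9, 24.0, 31.5, 21.8]
print(round(rmse(observed, predicted), 3))
print(round(index_of_agreement(observed, predicted), 3))
```

A better model is one with a lower RMSE and a d value closer to 1.0, which is the comparison the abstract reports.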
The real challenge in human-computer interaction is machines understanding human emotions and responding to them accordingly. Emotion varies with the gender and age of the speaker, the location, and the cause. This article focuses on improving emotion recognition (ER) from speech by using gender-based influences in emotional expression. The problem is addressed by testing emotional speech with an ER system specific to the appropriate gender. As acoustical characteristics vary between the genders, there may not be a common optimal feature set across both. For gender-based speech emotion recognition, a two-level hierarchical ER system is proposed, where the first level is gender identification and the second level is a gender-specific ER system trained with an optimal feature set of expressions of that particular gender. The proposed system increases the accuracy of traditional Speech Emotion Recognition (SER) systems by 10.36% over an SER system trained with mixed-gender data when tested on the EMO-DB corpus.
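The two-level routing structure can be sketched independently of any particular classifier. The following is a toy illustration only: the threshold rules and feature names are invented stand-ins for the trained gender and emotion models described in the article.

```python
# Minimal sketch of the two-level hierarchy: a gender identifier routes each
# utterance to a gender-specific emotion recognizer. The classifiers below
# are toy threshold rules on mock acoustic features, not trained models.

def identify_gender(features):
    # Assumption: mean pitch above 165 Hz is treated as female (toy rule).
    return "female" if features["mean_pitch_hz"] > 165 else "male"

def male_er(features):
    return "angry" if features["energy"] > 0.7 else "neutral"

def female_er(features):
    # A different decision boundary: each gender model is trained separately.
    return "angry" if features["energy"] > 0.8 else "neutral"

GENDER_SPECIFIC_ER = {"male": male_er, "female": female_er}

def recognize_emotion(features):
    """Level 1: gender identification; level 2: gender-specific ER model."""
    gender = identify_gender(features)
    return gender, GENDER_SPECIFIC_ER[gender](features)

print(recognize_emotion({"mean_pitch_hz": 210.0, "energy": 0.75}))
```

The point of the structure is that the same utterance can be judged differently depending on which gender-specific model it is routed to, which is why a common feature set across genders may be suboptimal.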
Water management has been a topic of serious discussion ever since infrastructure, rural, and industrial development flourished, and depleting water resources now make it an even bigger challenge. To address this, an IoT-based water management system is developed in which ultrasonic sensors measure the depth of water in the tank and the water is pumped accordingly to the sub-tank of the apartment. In addition, the Auto-Regressive Integrated Moving Average (ARIMA) and Least Squares Linear Regression (LSLR) time-series algorithms were employed and compared for predicting the water demand for the next six months based on the historical water consumption record of the main reservoir/tank. The information on the amount of water consumed from the main reservoir is pushed to the cloud and to the mobile application developed for utilities. The purpose is to access the water consumption pattern and predict the water demand for the next six months from the cloud.
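Of the two forecasting methods compared, the LSLR step is simple enough to sketch in closed form. A minimal illustration with made-up monthly consumption figures (not the study's data), fitting a trend line and extrapolating six months ahead:

```python
# Least-squares linear regression (LSLR) sketch: fit y ~ a + b*t to monthly
# consumption and forecast the next six months. Figures are illustrative.

def lslr_fit(y):
    """Closed-form least squares for y ~ a + b*t with t = 0..n-1."""
    n = len(y)
    mean_t = (n - 1) / 2
    mean_y = sum(y) / n
    b = sum((t - mean_t) * (yt - mean_y) for t, yt in enumerate(y)) / \
        sum((t - mean_t) ** 2 for t in range(n))
    a = mean_y - b * mean_t
    return a, b

consumption = [980, 1010, 1045, 1080, 1120, 1150, 1185, 1220]  # kL/month
a, b = lslr_fit(consumption)
forecast = [round(a + b * t, 1)
            for t in range(len(consumption), len(consumption) + 6)]
print(forecast)
```

ARIMA, the other method compared, additionally models autocorrelation and differencing, which is why the study evaluates both against the historical record.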
Currently, for content-based recommendations, the semantic analysis of text from webpages is a major problem. In this research, we present a semantic web content mining approach for recommender systems in online shopping. The methodology is based on two major phases. The first phase is the semantic preprocessing of textual data using a combination of a newly developed ontology and an existing ontology. The second phase uses the Naïve Bayes algorithm to make the recommendations. The output of the system is evaluated using precision, recall, and F-measure. The results show that the semantic preprocessing improved the recommendation accuracy of the recommender system by 5.2% over the existing approach. The developed system is also able to provide a platform for content-based recommendation in online shopping. This system has an edge over existing recommender approaches because it is able to analyze the textual content of users' feedback on a product in order to provide the necessary product recommendation.
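The Naïve Bayes phase can be sketched from scratch. A minimal sketch, assuming toy feedback texts and labels (the real system would feed in the ontology-preprocessed tokens described above):

```python
# Multinomial Naive Bayes sketch with Laplace smoothing: classify a user's
# feedback text so the recommender can decide whether to suggest the product.
# Training examples are toy data, not from the study.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns priors and word counts."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def classify_nb(tokens, class_counts, word_counts, vocab):
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, c in class_counts.items():
        lp = math.log(c / total_docs)                 # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:                              # Laplace smoothing
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [
    (["great", "quality", "recommend"], "positive"),
    (["loved", "great", "battery"], "positive"),
    (["poor", "quality", "broke"], "negative"),
    (["terrible", "broke", "refund"], "negative"),
]
model = train_nb(train)
print(classify_nb(["great", "battery"], *model))
```

Precision, recall, and F-measure are then computed over held-out feedback, as the abstract describes.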
There is an abundance of existing biomedical ontologies, such as the National Cancer Institute Thesaurus and the Systematized Nomenclature of Medicine-Clinical Terms. Implementing these ontologies in a particular system, however, may cause unnecessarily high memory usage and slow down the system's performance. On the other hand, building a new ontology from scratch requires additional time and effort. Therefore, this research explores the ontology reuse approach in order to develop an Abdominal Ultrasound Ontology by extracting concepts from existing biomedical ontologies. This article presents the reader with a step-by-step method for reusing ontologies, together with suggestions of off-the-shelf tools that can ease the process. The results show that ontology reuse is beneficial, especially in the biomedical field, as it allows developers from non-technical backgrounds to build and use domain-specific ontologies with ease. It also allows developers with a technical background to develop ontologies with minimal involvement from domain experts.
Nowadays, farmers can search for treatments for their plants using search engines and applications. Most existing works are developed in the form of rule-based question-answering platforms. However, a single observation could be incorrectly given by the farmer, so this work recommends that diseases and treatments be considered from a set of related observations. Thus, we develop a theoretical framework for systems that manage a farmer's observation data, and we investigate and formalize the desirable characteristics of such systems. Each observation is attached to a geolocation in which related contextual data is found. The framework is formalized using algebra, in which the required types and functions are identified. Its key characteristics are: (1) a defined type, called warncons, for representing observation data; (2) a similarity function for warncons; and (3) a warncons composition function for composing similar warncons. Finally, we show that the framework enriches observation data and improves advice-finding.
The semantic web is a global initiative which employs ontologies to offer rich, semantic-based knowledge representation. Concepts in these ontologies are explored to find (dis)similarities between them using (dis)similarity measures. Despite the existence of numerous (dis)similarity measures, none have dynamically determined the quantum of information required to discover (dis)similarities between concepts. In this article, a new, efficient, feature-based semantic dissimilarity measure is proposed, whose prime novelty lies in the dynamic selection of the semantic neighbourhood (features) of the concepts. The neighbourhood is dynamically selected in accordance with the local density of the concept and the density of the ontology, as determined by the proposed Density Coefficient. Further, the proposed measure also scales down the dissimilarity value in accordance with the depth of the concept pair, using the novel Depth Coefficient.
The need to manage electronic documents is an open issue in the digital era. It becomes a challenging problem on the internet, where a large amount of data requires ever more efficient and effective methods and techniques for mining and representing information. In this context, document summarization, browsing processes, and visualization techniques have had a great impact on several dimensions of user information perception. Moreover, the use of ontologies for knowledge representation has grown rapidly in recent years across several application domains, together with social-based techniques such as tag clouds. This form of visualization tool is becoming particularly useful in the interaction between users and social applications, where a huge amount of data calls for effective and efficient interfaces. In this article, the authors propose a novel methodology, based on a combination of ontologies and tag clouds, for browsing and summarizing web document collections; they call this tool the Semantic Tag Cloud.
The detection and realization of new trends from a corpus are achieved through Emergent Trend Detection (ETD) methods, a principal application of text mining. This article discusses the influence of Particle Swarm Optimization (PSO) on Dynamic Adaptive Self-Organizing Maps (DASOM) in the design of an efficient ETD scheme that optimizes the neural parameters of the network. This hybrid machine learning scheme is designed to accomplish maximum accuracy with minimum computational time. The efficiency and scalability of the proposed scheme are analyzed and compared with standard algorithms such as SOM, DASOM, and linear regression analysis. The system is trained and tested on the DBLP database of the University of Trier, Germany. The superiority of the hybrid DASOM algorithm over the well-known algorithms in handling high-dimensional, large-scale data to detect emergent trends from the corpus is established in this article.
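The PSO component of the hybrid can be sketched generically. A minimal PSO, assuming a toy 2-D objective standing in for the DASOM parameter-tuning error surface (the actual objective and parameters are the paper's):

```python
# Minimal particle swarm optimization (PSO) sketch: a swarm of candidate
# parameter vectors moves under inertia plus attraction to personal and
# global bests. The objective here is a toy quadratic, not the DASOM error.
import random

def pso(objective, dim=2, n_particles=20, iters=100, seed=42):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: squared distance from an "ideal" parameter pair (0.3, 2.0).
best, err = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 2.0) ** 2)
print([round(x, 2) for x in best])
```

In the hybrid scheme, the objective would instead measure DASOM quality for a candidate parameter setting, and the swarm would search that space.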
Entity synonyms play an important role in natural language processing applications such as query expansion and question answering. There are three main distribution characteristics of synonyms in web texts: (1) appearing in parallel structures; (2) occurring with specific patterns in sentences; and (3) being distributed in similar contexts. The first and second characteristics rely on reliable prior knowledge and are susceptible to data sparseness, bringing high accuracy but low recall to synonym extraction. The third may lead to high recall but low accuracy, since it identifies a somewhat loose semantic similarity. Existing methods, such as context-based and pattern-based methods, consider only one characteristic for synonym extraction and rarely take their complementarity into account. To increase recall, this article proposes a novel extraction framework that combines the three characteristics for extracting synonyms from the web, where an Entity Synonym Network (ESN) is built to incorporate synonymous knowledge. To improve accuracy, the article treats synonym detection as a ranking problem and uses the Spreading Activation model as a ranking method to detect hard noise in the ESN. Experimental results show that the proposed method achieves better accuracy and recall than state-of-the-art methods.
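The ranking step can be illustrated with a generic spreading activation pass. A minimal sketch, assuming a tiny hand-made synonym graph with hypothetical edge weights (not the paper's ESN or its exact activation schedule):

```python
# Sketch of spreading activation over a small entity synonym network:
# activation starts at a seed entity and decays along weighted edges, so
# weakly activated nodes become the "hard noise" candidates to prune.

def spread_activation(graph, seed, decay=0.5, steps=2):
    """graph: {node: {neighbour: weight}}; returns activation per node."""
    activation = {seed: 1.0}
    frontier = {seed: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for nb, w in graph.get(node, {}).items():
                nxt[nb] = nxt.get(nb, 0.0) + act * w * decay
        for nb, act in nxt.items():
            activation[nb] = max(activation.get(nb, 0.0), act)
        frontier = nxt
    return activation

esn = {
    "NYC": {"New York City": 0.9, "Big Apple": 0.8, "Albany": 0.1},
    "New York City": {"Big Apple": 0.7},
}
scores = spread_activation(esn, "NYC")
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Candidates whose activation falls below a threshold (here, "Albany") would be ranked last and filtered out as noise.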
This article proposes a system equipped with enhanced Bayesian classification techniques to automatically assign folders for storing electronic text documents. Despite computer technology advancements in the information age, where electronic text files are pervasive in information exchange, almost every document created or downloaded from the Internet requires manual classification by the user before being deposited into a folder. Not only does such a tedious task inconvenience users, but the time taken to repeatedly classify and allocate a folder for each text document also impedes productivity, especially when dealing with a huge number of files and deep layers of folders. To overcome this, a prototype system is built to evaluate the performance of the enhanced Bayesian text classifier for automatic folder allocation, categorizing text documents based on the existing types of text documents and folders present on the user's hard drive. The authors apply a High Relevance Keyword Extraction (HRKE) technique and an Automatic Computed Document Dependent (ACDD) weighting-factor technique to a Bayesian classifier in order to obtain better classification accuracy, while retaining the low training cost and simple classification process of the conventional Bayesian approach.
In a supply chain network, information sharing between enterprises can produce synergistic effects and improve benefits. In this article, evolutionary game theory is used to analyse how information-sharing behaviour between supply chain network enterprises evolves under different penalties and information-sharing risk costs. Analysis and agent-based simulation results show that when the amount of information shared between enterprises in the supply chain network is very large, sharing cooperation is difficult to form; increasing penalties and controlling the risk costs of sharing can increase the probability of information sharing in the supply chain network and shorten the time needed to establish it.
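The qualitative finding can be illustrated with standard replicator dynamics. A minimal sketch, where the payoff parameters (synergy benefit, sharing risk cost, penalty for hoarding) are illustrative assumptions rather than the paper's calibrated values:

```python
# Replicator-dynamics sketch of the information-sharing game: x is the
# fraction of enterprises that share information. Payoff parameters are
# illustrative assumptions, not the paper's values.

def evolve(x, benefit=4.0, risk_cost=1.0, penalty=2.0, dt=0.01, steps=5000):
    for _ in range(steps):
        share_payoff = benefit * x - risk_cost   # synergy needs partners
        defect_payoff = -penalty                 # penalised for hoarding
        avg = x * share_payoff + (1 - x) * defect_payoff
        x += dt * x * (share_payoff - avg)       # replicator equation
        x = min(max(x, 0.0), 1.0)
    return x

print(round(evolve(0.3), 3))                               # sharing spreads
print(round(evolve(0.3, penalty=0.0, risk_cost=3.0), 3))   # sharing collapses
```

With a penalty in place, the population converges to full sharing; removing the penalty while raising the risk cost drives sharing to zero, matching the abstract's conclusion that penalties and risk-cost control raise the probability of sharing.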
Research on investor investment models in the traditional P2P environment has been lopsided; to improve on it, this study proposes an agent-based investor trust model over a complex network. The model is based on interest trust, combines it with a Bayesian method to effectively evaluate trust, and builds a multi-steady-state agent system on this basis. At the same time, it analyzes the evolutionary mechanism of the system and validates the model's application through comparative experiments. The research shows that the model can effectively improve the success rate of executed tasks and shorten the distance between cooperating agents, thus ensuring the reliability of the selection of cooperation partners and providing a theoretical reference for subsequent related research.
Knowledge discovery with geo-spatial information processing is of prime importance in geomorphology. The temporal characteristics of evolving geographic features result in geo-spatial events that occur at specific geographic locations. When such events occur consecutively, they result in a geo-spatial process that causes phenomenal change over a period of time. Events and processes are essential constituents of geo-spatial dynamism. The geo-spatial data acquired by remote sensing technology is the source of input for knowledge discovery about geographic features. This article performs qualitative inference of geographic processes by identifying the events causing geo-spatial deformation over time. Evolving geographic features and their types are associated with spatial and temporal factors. Event-calculus-based spatial knowledge formalism allows reasoning over intervals of time; hence, a representation of Event Attributed Spatial Entity (EASE) knowledge is proposed. Logical event-based queries are evaluated on the formal representation of the EASE knowledge base and, when experimented with on real data sets, yielded comprehensive results. Further, the significance of EASE-based spatio-temporal reasoning is demonstrated by evaluating query processing time and accuracy. The article closes by discussing the enhancement of EASE and a direction for further development to explore its significance for prediction.
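The core event-calculus idea, that events initiate and terminate fluents whose truth can then be queried at a time point, can be sketched briefly. The event names and dates below are hypothetical stand-ins for EASE knowledge-base entries, not the article's data:

```python
# Tiny event-calculus-style sketch: events initiate/terminate fluents, and
# holds_at checks whether a fluent holds at time t given the event history.
# Events and dates are hypothetical, not from the EASE knowledge base.

# (time, event, fluent, effect) where effect is "initiates" or "terminates"
HISTORY = [
    (2001, "accretion", "sandbar_present", "initiates"),
    (2009, "erosion",   "sandbar_present", "terminates"),
    (2015, "accretion", "sandbar_present", "initiates"),
]

def holds_at(fluent, t, history=HISTORY):
    """A fluent holds at t if it was initiated at or before t and has not
    been terminated by a later event up to t."""
    state = False
    for when, _event, f, effect in sorted(history):
        if f == fluent and when <= t:
            state = (effect == "initiates")
    return state

print(holds_at("sandbar_present", 2005))
print(holds_at("sandbar_present", 2012))
```

Queries of this form ("did the feature exist during this interval?") are the kind of logical event-based queries the article evaluates against the EASE knowledge base.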
The purpose of data integration is to integrate multi-source heterogeneous data, and ontologies address the semantic description of such data. The authors propose a practical approach based on ontology modeling and an information toolkit named Karma for fast data integration, and demonstrate an application example in detail. The Armed Conflict Location & Event Data Project (ACLED) provides a publicly available conflict event dataset designed for disaggregated conflict analysis and crisis mapping. The authors analyzed the ACLED dataset and domain knowledge to build an Armed Conflict Event ontology, then constructed Karma models to integrate ACLED datasets and publish RDF data, using SPARQL queries to check the correctness of the published RDF data. The authors also designed and developed an ACLED Query System based on the Jena API, CanvasJS, and the Baidu API, which provides convenience for governments and researchers in analyzing regional conflict events and providing crisis early warning, and which verifies the validity of the constructed ontology and the correctness of the Karma modeling.
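A correctness check over the published RDF data might look like the following SPARQL query. The prefix, class, and property names here are hypothetical illustrations; the actual terms of the constructed Armed Conflict Event ontology may differ.

```sparql
# Hypothetical check: count conflict events per country for a given year.
PREFIX acled: <http://example.org/acled#>

SELECT ?country (COUNT(?event) AS ?n)
WHERE {
  ?event a acled:ConflictEvent ;
         acled:country ?country ;
         acled:year 2017 .
}
GROUP BY ?country
ORDER BY DESC(?n)
```

Comparing such aggregate counts against the source ACLED tables is one straightforward way to verify that the Karma models published the RDF data correctly.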
The article covers process models for HR IT projects and in particular for HR transformation projects. Based on the authors' experience, an applied process model for HR transformation projects in a cloud-based environment is derived. The article identifies findings applicable to the fields of organisation, business, and IT as well as decisions and critical success factors in the specific context of cloud-based HR solutions.
Business intelligence (BI) institutionalization has become a growing research area within the information systems (IS) discipline because of the decision-making iteration in businesses. Studies on BI applications for improving decision support are not new; however, research on BI institutionalization seems sparse. BI institutionalization may positively contribute to a managerial role in using BI applications repetitively for the decision-making iteration in businesses. This article carries out an integrative literature review and reports consolidated views of the body of knowledge. The study adopted qualitative content analysis to generate themes about BI routinization in the decision-making iteration. Eighty-eight research articles were selected for the study, of which 57 were finally included for review. The findings suggest information management capability as the key prerequisite for BI application, along with its alignment with organizational standards for BI institutionalization.
Current traffic control systems are microcontroller-based and semi-automatic in nature, with time as the only parameter considered. With the introduction of IoT into traffic signaling systems, research is being done on using density as a parameter for automating the traffic signaling system and regulating traffic dynamically. Security is a concern when sensitive data of great volume is transmitted wirelessly. The security protocols that have been implemented for IoT networks can protect the system against attacks, but they are based purely on standard cryptosystems and cannot handle heterogeneous data types. To address these issues, the authors have implemented an SVM machine learning algorithm to analyze traffic data patterns and detect anomalies. The SVM implementation was carried out on the UK traffic data set for three cities between 2011 and 2016. The implementation was run on a Raspberry Pi 3 functioning as an edge router, with the SVM algorithm implemented using the Python scikit-learn library.
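The anomaly-detection idea can be sketched with scikit-learn's one-class SVM, which the library provides for exactly this kind of novelty detection. The synthetic readings below (vehicle count, average speed) are illustrative assumptions, not the UK data set, and the specific model configuration is a sketch rather than the authors' setup:

```python
# One-class SVM sketch: train on "normal" traffic readings, then flag
# readings that deviate from the learned pattern. Data is synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Normal traffic: ~200 vehicles/interval at ~50 km/h, with mild variation.
normal = np.column_stack([rng.normal(200, 20, 500), rng.normal(50, 5, 500)])

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)

readings = np.array([
    [205.0, 48.0],   # typical interval
    [600.0, 5.0],    # gridlock-like spike
])
print(model.predict(readings))   # +1 = normal, -1 = anomaly
```

On an edge device such as the Raspberry Pi, the model would be trained offline and only the lightweight `predict` call would run on incoming sensor readings.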
Cloud computing has seen tremendous growth in recent years. As a result, there has been a great increase in data centers all over the world. These data centers consume a lot of energy, resulting in high operating costs, and an imbalance in load distribution among the servers in a data center further increases energy consumption. Server consolidation can be handled by migrating all virtual machines off underutilized servers, but migration degrades job performance, depending on the migration time and the number of migrations. Considering these aspects, the proposed clustering agent-based model improves energy savings through efficient allocation of VMs to hosting servers, which reduces the response time for initial allocation. A Middle VM Migration (MVM) strategy for server consolidation minimizes the number of VM migrations. Further, extra resource requirements are randomized to cater to real-time scenarios that need more resources than initially requested. Simulation results show that the proposed approach reduces the number of migrations and the response time for user requests and improves energy savings in the cloud environment.
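The consolidation idea, emptying underutilized hosts by migrating their VMs elsewhere so those hosts can be powered down, can be sketched with a simple greedy placement. This is an illustrative sketch of the general technique, with made-up thresholds and VM sizes; it is not the paper's clustering or MVM algorithm itself:

```python
# Greedy consolidation sketch: VMs on underutilized hosts are migrated to
# the most-loaded host that can still accommodate them, so lightly loaded
# servers can be powered down. Capacities and loads are illustrative.

def consolidate(hosts, capacity=100, underutil=30):
    """hosts: {host: [vm_load, ...]}. Returns (migration plan, hosts freed)."""
    migrations = []
    donors = [h for h, vms in hosts.items() if sum(vms) < underutil]
    for donor in donors:
        for vm in sorted(hosts[donor], reverse=True):   # place big VMs first
            targets = [h for h, vms in hosts.items()
                       if h not in donors and sum(vms) + vm <= capacity]
            if not targets:
                continue        # nowhere to place this VM; donor stays on
            best = max(targets, key=lambda h: sum(hosts[h]))
            hosts[best].append(vm)
            hosts[donor].remove(vm)
            migrations.append((vm, donor, best))
    freed = [h for h in donors if not hosts[h]]
    return migrations, freed

hosts = {"h1": [40, 30], "h2": [10, 5], "h3": [20, 5]}
plan, freed = consolidate(hosts)
print(freed)   # underutilized hosts emptied by the migrations
```

The trade-off the abstract highlights is visible even here: each entry in the migration plan carries a performance cost, which is why the proposed MVM strategy aims to minimize the number of such migrations.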