This paper contributes to the AC small-signal modeling and analysis of the Z-source converter (ZSC) in continuous conduction mode. The small-signal model captures the dynamics introduced by the Z network that is unique to the ZSC. The model is derived and validated against computer simulation results, and several applications of the model to ZSC design, together with experimental verification, are presented.
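For orientation, small-signal models of this kind typically come from state-space averaging. A generic sketch follows (the state vector, shoot-through duty ratio D, and matrices below are the standard textbook symbols, not the paper's specific ZSC model):

```latex
% Average the two switched state-space descriptions over one switching period:
%   A = D A_1 + (1 - D) A_2, \qquad B = D B_1 + (1 - D) B_2
% Perturb x = X + \hat{x},\; d = D + \hat{d},\; v_{in} = V_{in} + \hat{v}_{in},
% and drop second-order terms to obtain the AC small-signal model:
\dot{\hat{x}} = A\hat{x} + B\hat{v}_{in}
              + \left[(A_1 - A_2)X + (B_1 - B_2)V_{in}\right]\hat{d}
```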
ComDim analysis was designed to assess the relationships between individuals and variables within a multiblock setting, where several variables, organized in blocks, are measured on the same individuals. An overview of this method is presented together with some of its properties. Furthermore, we discuss a new extension of the method to the case of (K+1) datasets. More precisely, the aim is to explore the relationships between a response dataset and K other datasets. An illustration of this latter strategy of analysis on the basis of a case study involving Time Domain Nuclear Magnetic Resonance (TD-NMR) data is outlined, and the outcomes are compared with those of Multiblock Partial Least Squares (PLS) regression.
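For reference, the core of the ComDim (CCSWA) computation is an alternating update of common scores and block-specific saliences. A rough numpy sketch under the usual assumptions (column-centered, block-normalized data); details such as normalization and convergence criteria vary across implementations:

```python
import numpy as np

def comdim(blocks, n_components=2, tol=1e-10, max_iter=500):
    """Rough sketch of the ComDim / CCSWA core iteration.

    blocks : list of (n, p_b) arrays measured on the same n individuals,
             assumed column-centered (and typically Frobenius-normalized).
    Returns common scores Q (n, n_components) and saliences L (K, n_components).
    """
    X = [B.astype(float).copy() for B in blocks]
    Q, L = [], []
    for _ in range(n_components):
        W = [B @ B.T for B in X]                 # per-block association matrices
        lam = np.ones(len(X))                    # saliences, initialized to 1
        q = np.linalg.eigh(sum(W))[1][:, -1]
        for _ in range(max_iter):
            Wsum = sum(l * Wb for l, Wb in zip(lam, W))
            q_new = np.linalg.eigh(Wsum)[1][:, -1]        # common scores
            lam = np.array([q_new @ Wb @ q_new for Wb in W])  # block saliences
            converged = abs(abs(q_new @ q) - 1) < tol
            q = q_new
            if converged:
                break
        Q.append(q)
        L.append(lam)
        X = [B - np.outer(q, q) @ B for B in X]  # deflate each block
    return np.column_stack(Q), np.column_stack(L)
```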
This paper proposes a combined canonical variate analysis (CVA) and Fisher discriminant analysis (FDA) scheme (denoted CVA–FDA) for fault diagnosis, which employs CVA for pretreating the data and subsequently utilizes FDA for fault classification. In addition to the improved handling of serial correlation in the data, the CVA step yields pretreated datasets with similar or lower dimensionality than the original data and with a reduced degree of class overlap. The effectiveness of the proposed approach is demonstrated on the Tennessee Eastman process. The simulation results demonstrate that (i) CVA–FDA provides better and more consistent fault diagnosis than FDA, especially for data rich in dynamic behavior; and (ii) CVA–FDA outperforms dynamic FDA in both discriminatory power and computational time.
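In outline, the scheme reduces to two steps: project lagged measurement vectors onto their leading canonical variates (past vs. future), then classify in that reduced space. A minimal numpy/sklearn sketch of this pipeline; the lag and state orders, the synthetic data, and the use of sklearn's LDA for the FDA step are illustrative choices, not the paper's settings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def _past_future(Y, lags):
    """Stack past/future lagged vectors of a series Y (n_samples, n_vars)."""
    T = Y.shape[0] - 2 * lags
    P = np.hstack([Y[lags - k - 1 : lags - k - 1 + T] for k in range(lags)])
    F = np.hstack([Y[lags + k : lags + k + T] for k in range(lags)])
    return P, F

def cva_fit(Y, lags=5, q=10):
    """Learn the CVA projection: canonical variates between past and future."""
    P, F = _past_future(Y, lags)
    mu = P.mean(0)
    Pc, Fc = P - mu, F - F.mean(0)
    T = P.shape[0]
    Spp, Sff, Spf = Pc.T @ Pc / T, Fc.T @ Fc / T, Pc.T @ Fc / T
    def inv_sqrt(S):                      # inverse symmetric square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1 / np.sqrt(np.maximum(w, 1e-12))) @ V.T
    U, s, Vt = np.linalg.svd(inv_sqrt(Spp) @ Spf @ inv_sqrt(Sff))
    return U[:, :q].T @ inv_sqrt(Spp), mu   # projection J and training mean

def cva_states(Y, J, mu, lags=5):
    """Project new data onto the canonical variates learned by cva_fit."""
    P, _ = _past_future(Y, lags)
    return (P - mu) @ J.T

rng = np.random.default_rng(0)
Y0 = rng.standard_normal((400, 3))                    # "normal" operation
Y1 = rng.standard_normal((400, 3)) + [0.0, 1.5, 0.0]  # hypothetical sensor fault
J, mu = cva_fit(Y0)                                   # CVA pretreatment on normal data
Z = np.vstack([cva_states(Y0, J, mu), cva_states(Y1, J, mu)])
labels = np.repeat([0, 1], Z.shape[0] // 2)
clf = LinearDiscriminantAnalysis().fit(Z, labels)     # FDA step (LDA classifier)
print("training accuracy:", clf.score(Z, labels))
```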
This paper provides a review of stochastic Data Envelopment Analysis (DEA). We discuss extensions of deterministic DEA in three directions: (i) deviations from the deterministic frontier are modeled as stochastic variables, (ii) random noise in terms of measurement errors, sample noise, and specification errors is made an integral part of the model, and (iii) the frontier is stochastic, as is the underlying Production Possibility Set (PPS). Stochastic DEA utilizes non-parametric convex or conical hull reference technologies based upon axioms from production theory, accompanied by a statistical foundation in terms of axioms from statistics or distributional assumptions. These approaches allow for an estimation of stochastic inefficiency relative to a deterministic or a stochastic PPS, and for statistical inference, while maintaining an axiomatic foundation. The focus is on bridges and differences between approaches within the field of stochastic DEA, including semi-parametric Stochastic Frontier Analysis (SFA) and Chance Constrained DEA (CCDEA). We argue that statistical inference based upon homogeneous bootstrapping, in contrast to a management science approach, imposes a restrictive structure on inefficiency, which may not facilitate the communication of the results of the analysis to decision makers. Semi-parametric SFA and CCDEA differ with respect to the modeling of noise and stochastic inefficiency. In spite of these inherent differences, the two approaches are shown to be complements, in the sense that the stochastic PPSs they produce share basic similarities in the case of one output and multiple inputs. Recent contributions related to (i) disentangling random noise from random inefficiency and (ii) obtaining smooth shape-constrained estimators of the frontier are discussed.
Sensitivity analysis plays an important role in building energy analysis. It can be used to identify the key variables affecting building thermal performance in both energy simulation models and observational studies. This paper focuses on the application of sensitivity analysis in the field of building performance analysis. First, the typical steps in implementing sensitivity analysis for building analysis are described. A number of practical issues in applying sensitivity analysis are also discussed, such as the determination of input variations, the choice of building energy programs, and how to reduce the computational time of energy models. Second, the sensitivity analysis methods used in building performance analysis are reviewed. These methods can be categorized into local and global sensitivity analysis. The global methods can be further divided into four approaches: regression, screening-based, variance-based, and meta-model sensitivity analysis. Recent research has concentrated on global methods because they can explore the whole input space and most of them allow self-verification, i.e., quantifying how much of the variance of the model output (building energy consumption) has been explained by the method used in the analysis. Third, we discuss several important topics that are often overlooked in the domain of building performance analysis, including the application of sensitivity analysis in observational studies, how to deal with correlated inputs, the computation of the variations of sensitivity indices, and software issues. Lastly, practical guidance is given based on the advantages and disadvantages of the different sensitivity analysis methods in assessing building thermal performance, and recommendations for further research are made to enable more robust analysis of building energy performance.
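As an illustration of the variance-based branch, first-order and total Sobol indices can be estimated with the standard Saltelli pick-freeze scheme. A minimal numpy sketch; the toy function standing in for a building energy model is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    """Hypothetical stand-in for a building energy model:
    energy use as a function of 3 normalized design inputs."""
    return 5 * X[:, 0] + 3 * X[:, 1] ** 2 + X[:, 0] * X[:, 2]

d, N = 3, 100_000
A = rng.uniform(0, 1, (N, d))            # two independent sample matrices
B = rng.uniform(0, 1, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": column i taken from B
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index (Jansen 1999)
    print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```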
This paper provides a sketch of some of the major research thrusts in data envelopment analysis (DEA) over the three decades since the appearance of the seminal work of [Charnes, A., Cooper, W.W., Rhodes, E.L., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The focus herein is primarily on methodological developments, and in no manner does the paper address the many excellent applications that have appeared during that period. Specifically, attention is primarily paid to (1) the various models for measuring efficiency, (2) approaches to incorporating restrictions on multipliers, (3) considerations regarding the status of variables, and (4) modeling of data variation.
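To make the first of these research thrusts concrete, the basic input-oriented CCR envelopment model from the cited 1978 paper solves, for each DMU o: minimize θ subject to Σ_j λ_j x_j ≤ θ x_o, Σ_j λ_j y_j ≥ y_o, λ ≥ 0. A minimal sketch with scipy on toy data (the data are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # Inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j y_rj <= -y_ro   (i.e. outputs >= y_ro)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 4.0]])   # toy inputs (2 per DMU)
Y = np.array([[1.0], [1.0], [1.0]])                  # toy single output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```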
Effective management and sustainable development of the groundwater resources of arid and semi-arid environments require monitoring of groundwater quality and quantity. The aim of this paper is to develop a sound methodological framework for producing a drinking-water suitability map, using geographic information systems, remote sensing, and field surveys of the Andimeshk-Dezful region, Khuzestan province, Iran, a semi-arid area. This study investigated the delineation of groundwater potential zones based on the Dempster–Shafer (DS) theory of evidence and evaluated its applicability for groundwater potentiality mapping. The study also analyzed the spatial distribution of groundwater nitrate concentration and produced the suitability map for drinking water. The study was carried out in the following steps: i) creation of maps of groundwater conditioning factors; ii) assessment of groundwater occurrence characteristics; iii) creation of the groundwater potentiality map (GPM) and model validation; iv) collection and chemical analysis of water samples; v) assessment of groundwater nitrate pollution; and vi) creation of the combined groundwater potentiality and quality map. The performance of the DS model was evaluated using the receiver operating characteristic (ROC) curve method and pumping test data to ensure its generalization ability; the GPM achieved 87.76% accuracy. The detailed analysis of groundwater potentiality and quality revealed that the ‘non-acceptable’ areas cover about 1479 km² (60% of the study area). The study provides significant information for groundwater management and exploitation in areas where groundwater is a major source of water and its exploration is critical to supporting drinking water needs.
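The core of the DS evidence-fusion step is Dempster's rule of combination. A minimal sketch over a two-hypothesis frame of discernment; the mass values and layer names are illustrative, not the study's:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments.
    Masses are dicts mapping frozensets (subsets of the frame) to mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                     # mass on empty intersections
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Frame of discernment: GW = groundwater present, NG = not present
GW, NG = frozenset({"GW"}), frozenset({"NG"})
BOTH = GW | NG                                      # uncertainty (whole frame)
# Illustrative masses from two conditioning-factor layers (e.g. lithology, slope)
m_litho = {GW: 0.6, NG: 0.1, BOTH: 0.3}
m_slope = {GW: 0.5, NG: 0.2, BOTH: 0.3}
print(dempster_combine(m_litho, m_slope))
```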
Flux balance analysis is a mathematical approach for analyzing the flow of metabolites through a metabolic network. This primer covers the theoretical basis of the approach, several practical examples and a software toolbox for performing the calculations.
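The calculation itself is a linear program: maximize an objective flux (e.g. biomass production) subject to the steady-state mass balance S·v = 0 and flux bounds. A toy sketch with scipy; the three-reaction network is hypothetical, and genome-scale models would use a dedicated toolbox instead:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> biomass
# Reactions: v0 uptake (-> A), v1 conversion (A -> B), v2 biomass drain (B ->)
S = np.array([[ 1, -1,  0],    # metabolite A: produced by v0, consumed by v1
              [ 0,  1, -1]])   # metabolite B: produced by v1, consumed by v2
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 units
c = np.array([0, 0, -1.0])                 # maximize v2 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "flux distribution:", res.x)
```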
BACKGROUND: A phase 3 trial demonstrated superiority at interim analysis for everolimus over placebo in patients with metastatic renal cell carcinoma (mRCC) progressing on vascular endothelial growth factor receptor-tyrosine kinase inhibitors. Final results and an analysis of prognostic factors are reported. METHODS: Patients with mRCC (N = 416) were randomized (2:1) to everolimus 10 mg/d (n = 277) or placebo (n = 139), plus best supportive care in both arms. Progression-free survival (PFS) and safety were assessed to the end of double-blind treatment. Mature overall survival (OS) data were analyzed, and prognostic factors for survival were investigated by multivariate analyses. A rank-preserving structural failure time model estimated the effect on OS, correcting for crossover from placebo to everolimus. RESULTS: The median PFS was 4.9 months (everolimus) versus 1.9 months (placebo) (hazard ratio [HR], 0.33; P < .001). Adverse events occurring in ≥ 5% of patients included infections (all types, 10%), dyspnea (7%), and fatigue (5%). The median OS was 14.8 months (everolimus) versus 14.4 months (placebo) (HR, 0.87; P = .162), with 80% of patients in the placebo arm having crossed over to everolimus. By the rank-preserving structural failure time model, survival corrected for crossover was 1.9-fold longer (95% confidence interval, 0.5-8.5) with everolimus than with placebo only. Independent prognostic factors for shorter OS in the study included low performance status, high corrected calcium, low hemoglobin, and prior sunitinib (P < .01). CONCLUSIONS: These results established the efficacy and safety of everolimus in patients with mRCC after progression on sunitinib and/or sorafenib.
Understanding emotions is an important aspect of personal development and growth, and as such it is a key tile for the emulation of human intelligence. Besides being important for the advancement of AI, emotion processing is also important for the closely related task of polarity detection. The opportunity to automatically capture the general public's sentiments about social events, political movements, marketing campaigns, and product preferences has raised interest in both the scientific community, for the exciting open challenges, and the business world, for the remarkable fallouts in marketing and financial market prediction. This has led to the emerging fields of affective computing and sentiment analysis, which leverage human-computer interaction, information retrieval, and multimodal signal processing for distilling people's sentiments from the ever-growing amount of online social data.