Distributionally robust optimization is a paradigm for decision making under uncertainty where the uncertain problem data are governed by a probability distribution that is itself subject to uncertainty. The distribution is then assumed to belong to an ambiguity set comprising all distributions that are compatible with the decision maker’s prior information. In this paper, we propose a unifying framework for modeling and solving distributionally robust optimization problems. We introduce standardized ambiguity sets that contain all distributions with prescribed conic representable confidence sets and with mean values residing on an affine manifold. These ambiguity sets are highly expressive and encompass many ambiguity sets from the recent literature as special cases. They also allow us to characterize distributional families in terms of several classical and/or robust statistical indicators that have not yet been studied in the context of robust optimization. We determine conditions under which distributionally robust optimization problems based on our standardized ambiguity sets are computationally tractable. We also provide tractable conservative approximations for problems that violate these conditions.
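As a hedged illustration in our own notation (not necessarily the paper's), one concrete instance of such a standardized ambiguity set pins the mean to an affine manifold and places probability mass requirements on conic representable confidence sets:

```latex
\mathcal{P} = \Bigl\{ \mathbb{P} :\;
  \mathbb{E}_{\mathbb{P}}[\tilde{\xi}] \in \{ A q + b : q \in \mathbb{R}^m \},\;
  \mathbb{P}\bigl(\tilde{\xi} \in \mathcal{C}_i\bigr) \ge p_i,\; i = 1, \dots, I \Bigr\},
```

where each confidence set \(\mathcal{C}_i\) is conic representable; the tractability conditions the paper determines govern when worst-case expectation problems over such sets admit polynomial-size reformulations.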
The purpose of this paper is to introduce the reader to the field of closed-loop supply chains from a strong business perspective, i.e., with a focus on profitable value recovery from returned products. We recount the evolution of research in this growing area over the past 15 years, during which it developed from a narrow, technically focused niche into a fully recognized subfield of supply chain management. We use five phases to paint an encompassing picture of this evolution, helping the reader understand past achievements and potential future operations research opportunities.
Stochastic programming can effectively describe many decision-making problems in uncertain environments. Unfortunately, such programs are often computationally demanding to solve. In addition, their solution can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model that describes uncertainty in both the distribution form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance matrix). We demonstrate that for a wide range of cost functions the associated distributionally robust (or min-max) stochastic program can be solved efficiently. Furthermore, by deriving a new confidence region for the mean and the covariance matrix of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. These arguments are confirmed in a practical example of portfolio selection, where our framework leads to better-performing policies on the "true" distribution underlying the daily returns of financial assets.
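A classical single-constraint relative of this min-max model is Scarf's bound: over all distributions with a given mean and standard deviation, the worst-case expected shortfall E[(ξ − q)⁺] has a closed form. The sketch below (illustrative numbers, not the paper's algorithm) evaluates the bound and checks that it dominates the value under a normal distribution:

```python
import numpy as np
from scipy.stats import norm

def scarf_bound(q, mu, sigma):
    """Worst-case E[(xi - q)^+] over all distributions with mean mu, std sigma."""
    return 0.5 * (np.sqrt((q - mu) ** 2 + sigma ** 2) - (q - mu))

def normal_expected_shortfall(q, mu, sigma):
    """E[(xi - q)^+] when xi ~ Normal(mu, sigma^2)."""
    z = (mu - q) / sigma
    return (mu - q) * norm.cdf(z) + sigma * norm.pdf(z)

mu, sigma = 100.0, 20.0
for q in (80.0, 100.0, 120.0):
    wc = scarf_bound(q, mu, sigma)
    nrm = normal_expected_shortfall(q, mu, sigma)
    print(f"q={q:6.1f}  worst-case={wc:7.3f}  normal={nrm:7.3f}")
    assert wc >= nrm  # the distributionally robust bound dominates
```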
We propose an algorithmic framework that successfully addresses three vehicle routing problems: the multidepot VRP, the periodic VRP, and the multidepot periodic VRP with capacitated vehicles and constrained route duration. The metaheuristic combines the exploration breadth of population-based evolutionary search, the aggressive-improvement capabilities of neighborhood-based metaheuristics, and advanced population-diversity management schemes. Extensive computational experiments show that the method performs impressively in terms of computational efficiency and solution quality, identifying either the best known solutions, including the optimal ones, or new best solutions for all currently available benchmark instances for the three problem classes. The proposed method also proves extremely competitive for the capacitated VRP.
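The actual metaheuristic is far richer than an abstract allows, but its population-plus-local-search skeleton can be sketched on a plain TSP (instance data random and made up; OX crossover and first-improvement 2-opt stand in for the paper's advanced operators and diversity management):

```python
import math
import random

random.seed(0)
n = 30
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(i, j):
    return math.dist(pts[i], pts[j])

def tour_len(t):
    return sum(dist(t[k], t[(k + 1) % n]) for k in range(n))

def ox_crossover(p1, p2):
    """Order crossover: copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(n), 2))
    mid = set(p1[a:b])
    fill = iter(c for c in p2 if c not in mid)
    return [p1[i] if a <= i < b else next(fill) for i in range(n)]

def two_opt(t):
    """Aggressive improvement: first-improvement 2-opt until no gain."""
    t = t[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 2, n + 1):
                old = dist(t[i-1], t[i]) + dist(t[j-1], t[j % n])
                new = dist(t[i-1], t[j-1]) + dist(t[i], t[j % n])
                if new < old - 1e-12:
                    t[i:j] = t[i:j][::-1]
                    improved = True
    return t

pop = [two_opt(random.sample(range(n), n)) for _ in range(16)]
for _ in range(30):
    pop.sort(key=tour_len)
    elites = pop[:8]                      # survivor selection
    kids = [two_opt(ox_crossover(*random.sample(elites, 2))) for _ in range(8)]
    pop = elites + kids                   # crude stand-in for diversity management
print("best tour length:", round(tour_len(min(pop, key=tour_len)), 4))
```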
We show that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations. In the simplest case of a Lipschitz payoff and an Euler discretisation, the computational cost to achieve an accuracy of $O(\epsilon)$ is reduced from $O(\epsilon^{-3})$ to $O(\epsilon^{-2}(\log \epsilon)^2)$. The analysis is supported by numerical results showing significant computational savings.
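A minimal sketch of the idea (our toy setup, not the paper's full algorithm: geometric Brownian motion, a European call, level ℓ using 2^ℓ Euler steps, and a fixed number of paths per level rather than the optimally chosen allocation). Each level estimates E[P_ℓ − P_{ℓ−1}] with fine and coarse paths coupled through the same Brownian increments, and the level estimates telescope to E[P_L]:

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sig, T, K = 100.0, 0.05, 0.2, 1.0, 100.0   # made-up option data

def mlmc_level(l, n_paths):
    """Monte Carlo estimate of E[P_l - P_{l-1}] with coupled Euler paths."""
    nf = 2 ** l                       # fine time steps on level l
    hf = T / nf
    Sf = np.full(n_paths, S0)         # fine path
    Sc = np.full(n_paths, S0)         # coarse path (half as many steps)
    dWc = np.zeros(n_paths)
    for step in range(nf):
        dW = rng.normal(0.0, np.sqrt(hf), n_paths)
        Sf += r * Sf * hf + sig * Sf * dW          # fine Euler step
        dWc += dW                                   # accumulate for coarse step
        if l > 0 and step % 2 == 1:
            Sc += r * Sc * (2 * hf) + sig * Sc * dWc
            dWc[:] = 0.0
    Pf = np.exp(-r * T) * np.maximum(Sf - K, 0.0)
    if l == 0:
        return Pf.mean()
    Pc = np.exp(-r * T) * np.maximum(Sc - K, 0.0)
    return (Pf - Pc).mean()           # level correction; variance shrinks with l

L, N = 6, 200_000
estimate = sum(mlmc_level(l, N) for l in range(L + 1))
print(f"MLMC estimate of the call price: {estimate:.3f}")
```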
In many applications involving make-to-order or time-sensitive (e.g., perishable, seasonal) products, finished orders are often delivered to customers immediately or shortly after production. Consequently, there is little or no finished-product inventory in the supply chain, so production and outbound distribution are intimately linked and must be scheduled jointly to achieve the desired on-time delivery performance at minimum total cost. Research on integrated scheduling models of production and outbound distribution is relatively recent but growing rapidly. In this paper, we provide a survey of such existing models. We present a unified model representation scheme, classify existing models into several classes, and, for each class, give an overview of the optimality properties, computational tractability, and solution algorithms for the various problems studied in the literature. We clarify the tractability of some open problems left in the literature and of some new problems by providing intractability proofs or polynomial-time exact algorithms. We also identify several problem areas and issues for future research.
Reliable facility location models consider unexpected failures with site-dependent probabilities, as well as possible customer reassignment. This paper proposes a compact mixed integer program (MIP) formulation and a continuum approximation (CA) model to study the reliable uncapacitated fixed charge location problem (RUFL), which seeks to minimize initial setup costs and expected transportation costs in normal and failure scenarios. The MIP determines the optimal facility locations as well as the optimal customer assignments and is solved using a custom-designed Lagrangian relaxation (LR) algorithm. The CA model predicts the total system cost without details about facility locations and customer assignments, and it provides a fast heuristic to find near-optimum solutions. Our computational results show that the LR algorithm is efficient for mid-sized RUFL problems and that the CA solutions are close to optimal in most of the test instances. For large-scale problems, the CA method is a good alternative to the LR algorithm that avoids prohibitively long running times.
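A hedged sketch of the expected-cost building block in such models (our notation, made-up numbers): with independent site failures, a customer is served by the closest surviving facility in its assignment list, and a penalty applies if all assigned facilities fail:

```python
import numpy as np

def expected_customer_cost(dists, fail_probs, penalty):
    """Expected transport cost for one customer under independent site failures.

    dists:      distances to the customer's assigned facilities
    fail_probs: failure probability of each of those facilities
    penalty:    cost incurred if every assigned facility fails
    """
    order = np.argsort(dists)                     # serve from the closest survivor
    d = np.asarray(dists)[order]
    q = np.asarray(fail_probs)[order]
    surv, cost = 1.0, 0.0
    for dk, qk in zip(d, q):
        cost += surv * (1.0 - qk) * dk            # served at this backup level
        surv *= qk                                # all closer facilities failed
    return cost + surv * penalty                  # nobody left to serve the customer

print(expected_customer_cost([3.0, 5.0, 9.0], [0.1, 0.2, 0.1], penalty=20.0))
```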
We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables, or uncontrollable environmental variables. To accomplish this, we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.
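A minimal sketch under simplifying assumptions (zero-mean simple kriging, a known Gaussian correlation kernel with made-up parameters tau2 and theta, intrinsic variance estimated from replications at each design point): the extrinsic uncertainty enters through the spatial covariance matrix, the intrinsic uncertainty as a diagonal "nugget" of sample-mean variances:

```python
import numpy as np

rng = np.random.default_rng(0)

def kern(a, b, tau2=4.0, theta=1.0):
    """Extrinsic (spatial) covariance: Gaussian kernel, parameters assumed known."""
    return tau2 * np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

# Design points; n simulation replications at each, with heteroscedastic noise.
x = np.linspace(0.0, 4.0, 8)
n = 50
sims = np.sin(x)[None, :] + rng.normal(0.0, 0.5 + 0.3 * x, size=(n, len(x)))
ybar = sims.mean(axis=0)                     # sample means of the outputs
v_int = sims.var(axis=0, ddof=1) / n         # intrinsic variance of each mean

K = kern(x, x) + np.diag(v_int)              # extrinsic + intrinsic ("nugget")
xg = np.linspace(0.0, 4.0, 101)
Kxg = kern(x, xg)
w = np.linalg.solve(K, Kxg)                  # kriging weights
pred = w.T @ ybar                            # predictor (zero-mean prior for brevity)
mse = kern(xg, xg).diagonal() - np.einsum('ij,ij->j', Kxg, w)
print(np.round(pred[:3], 3), np.round(mse[:3], 3))
```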
In this paper we present a unit commitment model for studying the impact of large-scale wind integration in power systems with transmission constraints and system component failures. The model is formulated as a two-stage stochastic program with uncertain wind production in various locations of the network as well as generator and transmission line failures. We present a scenario selection algorithm for selecting and weighting wind power production scenarios and composite element failures, and we provide a parallel dual decomposition algorithm for solving the resulting mixed-integer program. We validate the proposed scenario selection algorithm by demonstrating that it outperforms alternative reserve commitment approaches in a 225-bus model of California with 130 generators and 375 transmission lines. We use our model to quantify day-ahead generator capacity commitment, operating cost impacts, and renewable energy utilization levels for various degrees of wind power integration. We then demonstrate that failing to account for transmission constraints and contingencies can result in significant errors in assessing the economic impacts of renewable energy integration.
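To make the two-stage structure concrete, here is a deliberately tiny illustration (three hypothetical units, four wind scenarios, brute force over commitments with a merit-order dispatch recourse; nothing like the paper's parallel dual decomposition or network constraints):

```python
import itertools
import numpy as np

# Unit data: (capacity MW, commitment cost, marginal cost $/MWh) -- made up.
units = [(100, 800, 20.0), (80, 500, 35.0), (60, 200, 60.0)]
demand = 180.0
wind_scenarios = np.array([0.0, 30.0, 60.0, 90.0])
probs = np.array([0.2, 0.3, 0.3, 0.2])
voll = 1000.0                                   # value of lost load

def dispatch_cost(on, wind):
    """Recourse: cheapest feasible dispatch of the committed units."""
    residual = max(demand - wind, 0.0)
    cost = 0.0
    for (cap, _, mc), u in sorted(zip(units, on), key=lambda t: t[0][2]):
        g = min(cap * u, residual)              # merit-order loading
        cost += mc * g
        residual -= g
    return cost + voll * residual               # shed any unserved load

best = min(
    (sum(fix * u for (_, fix, _), u in zip(units, on))
     + probs @ np.array([dispatch_cost(on, w) for w in wind_scenarios]), on)
    for on in itertools.product([0, 1], repeat=len(units))
)
print(f"expected cost {best[0]:.1f} with commitment {best[1]}")
```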
This paper proposes three strong second order cone programming (SOCP) relaxations for the AC optimal power flow (OPF) problem. These three relaxations are incomparable to each other, and two of them are incomparable to the standard SDP relaxation of OPF. Extensive computational experiments show that these relaxations have numerous advantages over existing convex relaxations in the literature: (i) their solution quality is extremely close to that of the standard SDP relaxation (on average, the best one attains 99.96% of the SDP relaxation bound across the IEEE test cases) and consistently outperforms previously proposed convex quadratic relaxations of the OPF problem, (ii) the solutions from the strong SOCP relaxations can be directly used as a warm start in a local solver such as IPOPT to obtain a high quality feasible OPF solution, and (iii) in terms of computation times, the strong SOCP relaxations can be solved an order of magnitude faster than the standard SDP relaxation. For example, one of the proposed SOCP relaxations together with IPOPT produces a feasible solution for the largest instance in the IEEE test cases (the 3375-bus system) and also certifies that this solution is within 0.13% of global optimality, all computed within 157.20 seconds on a modest personal computer. Overall, the proposed strong SOCP relaxations provide a practical approach to obtain feasible OPF solutions of extremely good quality within a time frame compatible with real-time operation in current industry practice.
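The paper's three strong relaxations are beyond an abstract-level sketch, but the flavor of an SOCP relaxation of OPF can be seen on the textbook single-line DistFlow model, where the nonconvex coupling P² + Q² = ℓv is relaxed to a second-order cone inequality (this assumes cvxpy is available; all network data below are made up):

```python
import cvxpy as cp

# One line from a slack bus (squared voltage v1 fixed) to a load bus.
r, x = 0.02, 0.06          # line resistance and reactance (p.u.)
pd, qd = 0.8, 0.3          # load at bus 2 (p.u.)
v1 = 1.0

P = cp.Variable()                     # sending-end real power flow
Q = cp.Variable()                     # sending-end reactive power flow
l = cp.Variable(nonneg=True)          # squared current magnitude
v2 = cp.Variable(nonneg=True)         # squared voltage at bus 2

constraints = [
    P - r * l == pd,                                  # real power balance at bus 2
    Q - x * l == qd,                                  # reactive power balance
    v2 == v1 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.square(P) + cp.square(Q) <= v1 * l,            # SOC relaxation of P^2+Q^2 = l*v1
    v2 >= 0.9**2, v2 <= 1.1**2,                       # voltage magnitude limits
]
prob = cp.Problem(cp.Minimize(P), constraints)        # minimize imported power (losses)
prob.solve()
print(f"P={P.value:.4f}, losses={r * l.value:.5f}, v2={v2.value:.4f}")
```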
Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous "information pieceworkers," have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g., majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal through comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms that can dynamically assign tasks. By adaptively deciding which questions to ask to the next set of arriving workers, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and nonadaptive scenarios. Hence, our nonadaptive approach is order optimal under both scenarios. This strongly relies on the fact that workers are fleeting and cannot be exploited. Therefore, architecturally, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs.
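A quick simulation of the majority-voting baseline the paper improves upon (i.i.d. workers, each correct with an assumed common reliability p; parameters made up): error decays with redundancy, and the paper's question is how to reach a target reliability at minimum redundancy, i.e., minimum price:

```python
import numpy as np

rng = np.random.default_rng(7)
n_tasks, p = 2000, 0.7            # binary tasks; workers correct w.p. p

truth = rng.choice([-1, 1], size=n_tasks)
for redundancy in (1, 3, 5, 9, 15):               # odd values avoid ties
    correct = rng.random((redundancy, n_tasks)) < p
    answers = np.where(correct, truth, -truth)    # a wrong answer flips the sign
    mv = np.sign(answers.sum(axis=0))             # majority vote per task
    err = np.mean(mv != truth)
    print(f"redundancy {redundancy:2d}: majority-vote error {err:.3f}")
```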
This paper presents a summary of the discrete mathematical part of my work, the Analytic Hierarchy Process (AHP) and its generalization to dependence and feedback, the Analytic Network Process (ANP), for measuring tangible and intangible factors, particularly as applied to decision making. The factors of the decision are arranged in hierarchical or network structures and judgments are then made by the decision maker, or by an expert, about the dominant element for each pair with respect to a common property. From simple judgments on two elements at a time with respect to a common property, priority vectors are obtained that are combined throughout the structure to give the best outcome for a decision. The judgments may be inconsistent, and there is a mathematical way to measure inconsistency so that the outlying judgments may be revised by the decision maker in an acceptable way or a decision may be delayed until more consistent information is obtained. In practical applications using either hierarchical or network structures, decisions are often analyzed in separate parts for their benefits, opportunities, costs, and risks, and the results are then combined in an appropriate way into an overall synthesis of those priorities. The mathematics has been generalized in the literature to the Neural Network Process (NNP), the continuous case for modeling how the brain synthesizes signals. There has been a diversity of applications over the past 30 to 40 years, and some of these are reported here. A brief mention is made of other methods of decision making and how AHP/ANP may compare with them.
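A compact sketch of the core computation on a made-up 3×3 judgment matrix: the priority vector is the normalized principal right eigenvector, and inconsistency is measured by Saaty's consistency ratio (the random index values are the commonly tabulated ones):

```python
import numpy as np

# Pairwise comparison matrix: A[i, j] is the judged dominance of element i
# over element j; reciprocal by construction (A[j, i] = 1 / A[i, j]).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector

n = A.shape[0]
lam_max = eigvals[k].real
ci = (lam_max - n) / (n - 1)                   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index (tabulated)
print("priorities:", np.round(w, 3), " CR =", round(ci / ri, 3))
```

A consistency ratio below roughly 0.1 is conventionally taken as acceptable; larger values signal outlying judgments worth revisiting.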
In this paper, we describe an effective exact method for solving both the capacitated vehicle routing problem (CVRP) and the vehicle routing problem with time windows (VRPTW) that improves on the method proposed by Baldacci et al. [Baldacci, R., N. Christofides, A. Mingozzi. 2008. An exact algorithm for the vehicle routing problem based on the set partitioning formulation with additional cuts. Math. Programming 115(2) 351-385] for the CVRP. The proposed algorithm is based on the set partitioning (SP) formulation of the problem. We introduce a new route relaxation, called ng-route, used by different dual ascent heuristics to find near-optimal dual solutions of the LP relaxation of the SP model. We describe a column-and-cut generation algorithm strengthened by valid inequalities that uses a new strategy for solving the pricing problem. The new ng-route relaxation and the different dual solutions achieved allow us to generate a reduced SP problem containing all routes of any optimal solution, which is finally solved by an integer programming solver. The proposed method solves four of the five open Solomon VRPTW instances and significantly improves the running times of state-of-the-art algorithms for both the VRPTW and the CVRP.
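For orientation, a generic sketch of the set partitioning model that underlies the method (our notation, not the authors': R is the set of feasible routes, c_r the cost of route r, and a_{ir} = 1 if route r visits customer i):

```latex
\min \sum_{r \in R} c_r x_r
\quad \text{s.t.} \quad
\sum_{r \in R} a_{ir} x_r = 1 \;\; \forall i,
\qquad x_r \in \{0, 1\} \;\; \forall r \in R.
```

Column-and-cut generation works with the LP relaxation of this model, pricing out routes with negative reduced cost; relaxing route elementarity to ng-routes is what keeps that pricing problem manageable.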
We consider a supply chain with a retailer and a supplier: A newsvendor-like retailer has a single opportunity to order a product from a supplier to satisfy future uncertain demand. Both the retailer and supplier are capital constrained and in need of short-term financing. In the presence of bankruptcy risks for both the retailer and supplier, we model their strategic interaction as a Stackelberg game with the supplier as the leader. We use the supplier early payment discount scheme as a decision framework to analyze all decisions involved in optimally structuring the trade credit contract (discounted wholesale price if paying early, financing rate if delaying payment) from the supplier's perspective. Under mild assumptions we conclude that a risk-neutral supplier should always finance the retailer at rates less than or equal to the risk-free rate. The retailer, if offered an optimally structured trade credit contract, will always prefer supplier financing to bank financing. Furthermore, under optimal trade credit contracts, both the supplier's profit and supply chain efficiency improve, and the retailer might improve his profits relative to under bank financing (or equivalently, a rich retailer under wholesale price contracts), depending on his current "wealth" (working capital and collateral).
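For readers unfamiliar with the newsvendor building block behind the retailer's problem, here is the classical critical-fractile computation (illustrative numbers, deliberately ignoring the capital constraints and financing terms that are the subject of the paper):

```python
from scipy.stats import norm

# Classical newsvendor: order q to balance underage vs. overage cost.
price, wholesale, salvage = 10.0, 6.0, 3.0
cu = price - wholesale          # underage cost: margin lost per unit short
co = wholesale - salvage        # overage cost: loss per unsold unit
frac = cu / (cu + co)           # critical fractile
q_star = norm(500.0, 120.0).ppf(frac)   # demand ~ Normal(500, 120^2), made up
print(f"critical fractile {frac:.3f} -> order {q_star:.0f} units")
```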
Effective route planning for battery electric commercial vehicle (ECV) fleets has to take into account their limited autonomy and the possibility of visiting recharging stations during the course of a route. In this paper, we consider four variants of the electric vehicle-routing problem with time windows: (i) at most a single recharge per route is allowed, and batteries are fully recharged on visiting a recharging station; (ii) multiple recharges per route, full recharges only; (iii) at most a single recharge per route, and partial battery recharges are possible; and (iv) multiple, partial recharges. For each variant, we present exact branch-price-and-cut algorithms that rely on customized monodirectional and bidirectional labeling algorithms for generating feasible vehicle routes. In computational studies, we find that all four variants are solvable for instances with up to 100 customers and 21 recharging stations. This success can be attributed to the tailored resource extension functions (REFs) that enable efficient labeling with constant-time feasibility checking and strong dominance rules, even though these REFs are intricate and rather elaborate to derive. The studies also highlight the superiority of the bidirectional labeling algorithms over the monodirectional ones. Finally, we find that allowing multiple recharges as well as partial recharges helps to reduce both routing costs and the number of vehicles employed in comparison to the variants with single and with full recharges.
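A hedged sketch of what a resource extension function looks like for the full-recharge variants (our simplified label carries only time and battery, and recharging duration is omitted; the paper's REFs, especially for partial recharges, are considerably richer):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    node: int
    time: float       # earliest service start time
    battery: float    # remaining charge

B = 60.0              # battery capacity (made-up units)

def extend(lab, node, travel_time, energy, tw, is_station):
    """REF: extend a label along one arc in O(1); return None if infeasible."""
    t = max(lab.time + travel_time, tw[0])     # wait for the time window to open
    b = lab.battery - energy
    if t > tw[1] or b < 0.0:
        return None                            # window or battery violated
    if is_station:
        b = B                                  # full recharge (duration omitted)
    return Label(node, t, b)

start = Label(node=0, time=0.0, battery=B)
print(extend(start, 3, travel_time=2.5, energy=15.0, tw=(1.0, 10.0),
             is_station=False))
```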
In this paper we study resource allocation problems that involve multiple self-interested parties or players and a central decision maker. We introduce and study the price of fairness, which is the relative system efficiency loss under a "fair" allocation assuming that a fully efficient allocation is one that maximizes the sum of player utilities. We focus on two well-accepted, axiomatically justified notions of fairness, viz., proportional fairness and max-min fairness. For these notions we provide a tight characterization of the price of fairness for a broad family of problems.
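A worked toy instance (made-up linear utilities u_i = a_i x_i over one divisible unit of resource): the utilitarian optimum concentrates the resource on the most efficient player, while proportional fairness, which maximizes Σ log u_i, yields the equal split, so the efficiency loss is explicit:

```python
import numpy as np

a = np.array([5.0, 2.0, 1.0])     # player efficiencies: u_i = a_i * x_i
n = len(a)

util_opt = a.max()                # utilitarian: give everything to the best player
# Proportional fairness: max sum log(a_i x_i) s.t. sum x_i = 1  =>  x_i = 1/n.
pf_welfare = a.sum() / n
pof = 1.0 - pf_welfare / util_opt # relative efficiency loss under fairness
print(f"utilitarian welfare {util_opt:.2f}, PF welfare {pf_welfare:.2f}, "
      f"price of fairness {pof:.3f}")
```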
Emissions trading is a market-based mechanism for curbing emissions, and it has been implemented in Europe, North America, and several other parts of the world. To study its impact on production planning, we develop a dynamic production model, where a manufacturer produces a single product to satisfy random market demands. The manufacturer has access to both a green and a regular production technology, of which the former is more costly but yields fewer emissions. To comply with the emissions regulations, the manufacturer can buy or sell the allowances in each period via forward contracts in an outside market with stochastic trading prices while needing to keep a nonnegative allowance account balance at the end of the planning horizon. We first derive several important structural properties of the model, and based upon them, we characterize the optimal emissions trading and production policies that minimize the manufacturer's expected total discounted cost. In particular, the optimal emissions trading policy is a target interval policy with two thresholds that decrease with the starting inventory level. The optimal production policy is established by first determining the optimal technology choice and then showing the optimality of a base-stock type of production policy. We show that the optimal base-stock level is independent of the starting inventory level and the allowance level when the manufacturer trades the allowance or uses both technologies simultaneously. A numerical study using representative data from the cement industry is conducted to illustrate the analytical results and to examine the value of green technology for the manufacturer.
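A hedged sketch of the shape of the optimal trading policy described above (a two-threshold target interval rule; the linear dependence of the thresholds on inventory is purely illustrative, not the paper's characterization):

```python
def trade_allowances(allowance, inventory, L0=100.0, U0=300.0, slope=0.5):
    """Target interval policy sketch: thresholds decrease with starting inventory."""
    lo = max(L0 - slope * inventory, 0.0)
    hi = max(U0 - slope * inventory, lo)
    if allowance < lo:
        return lo - allowance      # buy up to the lower threshold
    if allowance > hi:
        return hi - allowance      # sell down to the upper threshold
    return 0.0                     # inside the interval: do not trade

for inv in (0.0, 200.0):
    print(inv, [trade_allowances(a, inv) for a in (50.0, 150.0, 400.0)])
```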
A DEA-based stochastic frontier estimation framework is presented to evaluate contextual variables affecting productivity that allows for both one-sided inefficiency deviations as well as two-sided random noise. Conditions are identified under which a two-stage procedure consisting of DEA followed by ordinary least squares (OLS) regression analysis yields consistent estimators of the impact of contextual variables. Conditions are also identified under which DEA in the first stage followed by maximum likelihood estimation (MLE) in the second stage yields consistent estimators of the impact of contextual variables. This requires the contextual variables to be independent of the input variables, but the contextual variables may be correlated with each other. Monte Carlo simulations are carried out to compare the performance of our two-stage approach with one-stage and two-stage parametric approaches. Simulation results indicate that DEA-based procedures with OLS, maximum likelihood, or even Tobit estimation in the second stage perform as well as the best of the parametric methods in the estimation of the impact of contextual variables on productivity. Simulation results also indicate that DEA-based procedures perform better than parametric methods in the estimation of individual decision-making unit (DMU) productivity. Overall, the results establish DEA as a nonparametric stochastic frontier estimation (SFE) methodology.
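A minimal first-stage sketch (the standard input-oriented CCR envelopment LP on made-up data, solved per DMU with scipy; the paper's contribution concerns the statistical properties of the second-stage regression, not this LP):

```python
import numpy as np
from scipy.optimize import linprog

# Columns are DMUs: X holds inputs, Y holds outputs (made-up data).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 4.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.5, 2.0]])

def ccr_efficiency(o):
    """Input-oriented CCR score of DMU o:
    min theta  s.t.  X lam <= theta * x_o,  Y lam >= y_o,  lam >= 0,
    with decision vector z = [theta, lam_1, ..., lam_J]."""
    m, J = X.shape
    c = np.r_[1.0, np.zeros(J)]
    A_in = np.c_[-X[:, o], X]                    # X lam - theta x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[0]), -Y]      # -Y lam <= -y_o
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (J + 1))
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency {ccr_efficiency(o):.3f}")
```

In the two-stage procedure analyzed in the paper, the resulting efficiency estimates would then be regressed on the contextual variables (e.g., by OLS or MLE).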
In this paper we focus on a linear optimization problem with uncertainties, with expectations in the objective and in the set of constraints. We present a modular framework to obtain an approximate solution to the problem that is distributionally robust and more flexible than the standard technique of using linear decision rules. Our framework first affinely extends the set of primitive uncertainties to generate linear decision rules of larger dimension, and hence greater flexibility. Next, we develop new piecewise-linear decision rules that allow a more flexible reformulation of the original problem. The reformulated problem will generally contain terms with expectations of the positive parts of the recourse variables. Finally, we convert the uncertain linear program into a deterministic convex program by constructing distributionally robust bounds on these expectations. These bounds are constructed by first using different pieces of information on the distribution of the underlying uncertainties to develop separate bounds and then integrating them into a combined bound that is better than each of the individual bounds.
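To ground the starting point that the framework generalizes, here is a toy two-stage problem solved with a plain linear decision rule y(ξ) = y₀ + y₁ξ under box uncertainty (all data made up); constraints affine in ξ over a box need only be enforced at its vertices, giving a small LP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage problem: choose capacity x now, recourse y(xi) later,
#   min  c*x + E[q*y(xi)]   s.t.  y(xi) >= d + xi - x,  y(xi) >= 0,
# with xi uniform on [-1, 1] and the linear rule y(xi) = y0 + y1*xi.
c, q, d = 1.5, 2.0, 5.0

# Decision vector z = [x, y0, y1]; E[xi] = 0, so E[y(xi)] = y0.
obj = [c, q, 0.0]
A_ub, b_ub = [], []
for xi in (-1.0, 1.0):                                   # box vertices
    A_ub.append([-1.0, -1.0, -xi]); b_ub.append(-d - xi) # demand coverage
    A_ub.append([0.0, -1.0, -xi]);  b_ub.append(0.0)     # y(xi) >= 0
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (None, None), (None, None)])
x, y0, y1 = res.x
print(f"x={x:.2f}, decision rule y(xi) = {y0:.2f} + {y1:.2f}*xi")
```

The piecewise-linear rules of the paper enlarge exactly this class of policies; the expectation bounds then handle the positive-part terms the reformulation creates.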
A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the NetLib library.
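A small sketch of the mechanics behind the budget-of-uncertainty idea (our toy data): for a constraint Σⱼ aⱼxⱼ ≤ b in which each coefficient may deviate by up to dⱼ and at most Γ coefficients deviate (one of them partially), the worst case adds the "protection" term computed below; the paper shows the resulting robust counterpart remains a linear program:

```python
import numpy as np

def protection(x, d, gamma):
    """Worst-case extra LHS when at most gamma coefficients deviate by up to d_j."""
    s = np.sort(d * np.abs(x))[::-1]     # largest potential deviations first
    k = int(np.floor(gamma))
    extra = s[:k].sum()
    if k < len(s):
        extra += (gamma - k) * s[k]      # fractional part of the budget
    return extra

a = np.array([4.0, 6.0, 3.0])    # nominal coefficients (made up)
d = np.array([1.0, 2.0, 0.5])    # maximum deviations
x = np.array([1.0, 1.0, 1.0])    # a candidate solution
b = 16.0
for gamma in (0.0, 1.5, 3.0):
    lhs = a @ x + protection(x, d, gamma)
    print(f"Gamma={gamma}: worst-case LHS {lhs:.2f} "
          f"({'feasible' if lhs <= b else 'violated'})")
```

Sweeping Γ from 0 (nominal problem) to the number of uncertain coefficients (full Soyster-style conservatism) traces out exactly the price-of-robustness trade-off the abstract describes.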