Handbook of Simulation Optimization.

Material type: Text
Series: International Series in Operations Research and Management Science
Publisher: New York, NY : Springer, 2014
Copyright date: ©2015
Edition: 1st ed.
Description: 1 online resource (400 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781493913848
Additional physical formats: Print version: Handbook of Simulation Optimization
DDC classification:
  • 003.76
LOC classification:
  • T57.6-.97
Contents:
Intro -- Preface -- Contents -- Contributors -- Selected Abbreviations and Notation
1 Overview of the Handbook -- References
2 Discrete Optimization via Simulation -- 2.1 Introduction -- 2.1.1 Designing a Highly Reliable System -- 2.1.2 Flow-Line Throughput -- 2.1.3 Inventory Management with Dynamic Customer Substitution -- 2.1.4 Themes -- 2.2 Optimality Conditions -- 2.3 Ranking and Selection -- 2.4 Ordinal Optimization -- 2.5 Globally Convergent Random Search Algorithms -- 2.6 Locally Convergent Random Search Algorithms -- 2.7 Algorithm Enhancements -- 2.8 Using Commercial Solvers -- 2.8.1 Preliminary Experiment to Control Sampling Variability -- 2.8.2 Restarting the Optimization -- 2.8.3 Statistical Clean Up After Search -- References
3 Ranking and Selection: Efficient Simulation Budget Allocation -- 3.1 Introduction -- 3.1.1 Intuitive Explanations of Simulation Budget Allocation -- 3.1.2 Overview of Ranking and Selection (R&S) -- 3.1.3 Organization -- 3.2 Problem Formulation and Selection Procedures -- 3.2.1 Problem Formulation of Selecting the Best -- 3.2.2 A Generic Algorithm for Selection Procedures -- 3.2.3 General Concepts for OCBA and EVI -- Basic Idea of OCBA -- Basic Idea of EVI -- 3.3 Optimal Computing Budget Allocation (OCBA) -- 3.3.1 Maximization of PCS -- 3.3.2 Asymptotic Allocation Rule -- 3.3.3 Sequential Heuristic Algorithm for Allocation -- OCBA Algorithm -- Alternative Simpler OCBA Procedure -- 3.3.4 Numerical Results -- 3.3.5 Minimization of EOC -- 3.3.6 Other Variants -- Subset Selection Problem -- Handling Optimization with Multiple Performance Measures -- Other Recent Developments -- Generalized OCBA Notions -- 3.4 Expected Value of Information (EVI) -- 3.4.1 Linear Loss (LL) -- Minimization of EOC -- Allocation Rules -- 3.4.2 Small-Sample EVI Allocation Rule (LL1) -- Small-Sample EVI Sequential Algorithm -- 3.4.3 Stopping Rules -- 3.4.4 Numerical Results for LL and LL1 -- 3.4.5 Economics of Selection Procedures (ESP) -- Maximizing Expected Reward -- Optimal Stopping Problem for the Special Case of k = 1 Alternative -- ESP Allocation Rule and Stopping Rule -- 3.4.6 Other Variants of EVI -- 3.5 Conclusion -- References
4 Response Surface Methodology -- 4.1 Introduction -- 4.2 RSM Basics -- 4.3 RSM in Simulation -- 4.4 Adapted Steepest Descent (ASD) -- 4.5 Multiple Responses: Generalized RSM -- 4.6 Testing an Estimated Optimum in GRSM: KKT Conditions -- 4.7 Robust Optimization -- Taguchi's Robust Optimization -- Ben-Tal et al.'s Robust Optimization -- 4.8 Conclusions -- References
5 Stochastic Gradient Estimation -- 5.1 Introduction -- 5.2 Indirect Gradient Estimators -- 5.2.1 Finite Differences -- 5.2.2 Simultaneous Perturbation -- 5.3 Direct Gradient Estimators -- 5.3.1 Derivatives of Random Variables -- 5.3.2 Derivatives of Measures -- 5.3.3 Input Distribution Examples -- 5.3.4 Output Examples -- Stochastic Activity Network -- Single-Server Queue -- Variance Reduction -- Higher Derivatives -- 5.3.5 Rudimentary Theory -- 5.3.6 Guidelines for the Practitioner -- 5.4 Quantile Sensitivity Estimation -- 5.5 New Approaches for Using Direct Stochastic Gradients in Simulation Optimization -- 5.5.1 Direct Gradient Augmented Regression (DiGAR) -- 5.5.2 Stochastic Kriging with Gradients (SKG) -- 5.5.3 Gradient Extrapolated Stochastic Kriging (GESK) -- Numerical Example -- 5.5.4 Secant-Tangents AveRaged Stochastic Approximation (STAR-SA) -- 5.6 Concluding Remarks -- References
6 An Overview of Stochastic Approximation -- 6.1 Introduction -- 6.2 Classical Methods -- 6.2.1 Robbins-Monro (RM) Algorithm -- 6.2.2 Kiefer-Wolfowitz (KW) Algorithm -- 6.3 Well-Known Variants -- 6.3.1 Kesten's Rule -- 6.3.2 Averaging Iterates -- 6.3.3 Varying Bounds -- 6.3.4 Simultaneous Perturbation Stochastic Approximation (SPSA) -- 6.4 Recent Modifications -- 6.4.1 Scaled-and-Shifted Kiefer-Wolfowitz (SSKW) -- 6.4.2 Robust Stochastic Approximation (RSA) -- 6.4.3 Accelerated Stochastic Approximation (AC-SA) for Strongly Convex Functions -- 6.4.4 Accelerated Stochastic Approximation (AC-SA) for Convex Functions -- 6.4.5 Secant-Tangents AveRaged Stochastic Approximation (STAR-SA) -- 6.5 Numerical Experiments -- Sensitivity Analysis of KW and Its Variants -- RM, RM with Iterate Averaging, Robust SA and Accelerated SA -- STAR-SA, RM, and KW -- 6.6 Concluding Remarks -- References
7 Stochastic Approximation Methods and Their Finite-Time Convergence Properties -- 7.1 Introduction -- 7.2 Convex Stochastic Optimization -- 7.2.1 Robust SA Method for Nonsmooth Stochastic Convex Optimization -- 7.2.2 Accelerated SA Methods for Nonsmooth and Smooth Stochastic Optimization -- 7.2.3 Accelerated SA Methods for Strongly Convex Optimization -- 7.3 Nonconvex Stochastic Optimization -- 7.4 Randomized Stochastic Zeroth-Order (SZO) Methods -- 7.5 Summary -- References
8 A Guide to Sample Average Approximation -- 8.1 Introduction -- 8.2 When Is SAA Appropriate? -- 8.3 Detecting When SAA Is Appropriate -- 8.4 Known Properties -- 8.4.1 Almost Sure Convergence -- 8.4.2 Convergence Rates for the SAA Method -- 8.4.3 The SAA Method in the Nonconvex Case -- 8.5 SAA Implementation -- 8.5.1 Sample Size Choice -- 8.5.2 Refined SAA Methods -- 8.6 Asymptotic Efficiency Calculation -- 8.6.1 Asymptotic Rates for Stochastic Approximation -- 8.6.2 Asymptotic Rates for the SAA Method -- 8.6.3 Asymptotic Rates for the RA Method -- 8.7 Conclusions -- References
9 Stochastic Constraints and Variance Reduction Techniques -- 9.1 Introduction -- 9.2 Problems with Stochastic Constraints -- 9.2.1 Problems with General Expected-Value Constraints -- The SAA Approach -- Solution Methods and Choice of Sample Sizes -- Assessing Solution Quality -- 9.2.2 Problems with Probabilistic Constraints -- The SAA Approach -- Solution Methods and Choice of Sample Sizes -- Assessing Solution Quality -- 9.2.3 Problems with Stochastic Dominance Constraints -- Review of Stochastic Dominance -- Basic Properties and Reformulations -- The SAA Approach and Solution Methods -- Assessing Solution Quality -- 9.3 Variance Reduction Techniques -- 9.3.1 Antithetic Variates -- 9.3.2 Latin Hypercube Sampling -- 9.3.3 Quasi-Monte Carlo -- 9.3.4 Importance Sampling -- 9.3.5 Control Variates -- 9.4 Conclusions -- References
10 A Review of Random Search Methods -- 10.1 Introduction -- 10.2 Structure of Random Search Methods for Simulation Optimization -- 10.3 Discrete Simulation Optimization -- 10.3.1 Simulated Annealing -- 10.3.2 Other Developments -- 10.4 Extensions -- 10.4.1 Continuous Simulation Optimization -- 10.4.2 Simulation Optimization with Multiple Objectives -- References
11 Stochastic Adaptive Search Methods: Theory and Implementation -- 11.1 Introduction -- 11.2 Optimization Models for Complex Systems with Noise -- 11.3 Performance Analyses of Stochastic Adaptive Search Methods -- 11.3.1 Pure Random Search and Pure Adaptive Search -- 11.3.2 Hesitant Adaptive Search and Backtracking Adaptive Search -- 11.3.3 Annealing Adaptive Search -- 11.4 Optimization Algorithms using Hit-and-Run -- 11.4.1 Improving Hit-and-Run (IHR) -- 11.4.2 Simulated Annealing with Hit-and-Run -- 11.4.3 Population-Based Simulated Annealing (Interacting Particle Algorithm) with Hit-and-Run -- 11.4.4 Pattern Hit-and-Run (PHR) -- 11.5 Estimation and Optimization -- 11.6 Conclusions -- References
12 Model-Based Stochastic Search Methods -- 12.1 Introduction -- 12.2 A Brief Review of Model-Based Methods -- 12.3 A Model Reference Optimization Framework -- 12.3.1 Convergence Result -- 12.3.2 Simulation Optimization -- 12.4 A Stochastic Approximation Framework -- 12.4.1 Convergence Results -- Convergence of the CE Method -- Model-Based Annealing Random Search (MARS) -- 12.4.2 Simulation Optimization -- Application of MARS to Finite-Horizon Markov Decision Processes -- 12.5 A Stochastic Averaging Approach -- 12.6 Conclusions and Open Research Questions -- References
13 Solving Markov Decision Processes via Simulation -- 13.1 Introduction -- 13.2 Background -- 13.2.1 Notation and Assumptions -- 13.2.2 Performance Metrics -- 13.2.3 Bellman Equations -- 13.3 Discounted Reward MDPs -- 13.3.1 Q-Learning -- 13.3.2 Q-Learning with Function Approximation -- Basis Functions -- Bellman Error -- 13.3.3 SARSA(λ) -- 13.3.4 Approximate Policy Iteration -- 13.3.5 Actor-Critic Algorithm -- 13.3.6 Evolutionary Policy Iteration -- 13.4 Average Reward MDPs -- 13.4.1 Relative Q-Learning -- 13.4.2 R-SMART -- SSP-Version -- Contraction-Factor Version -- 13.5 Finite Horizon MDPs -- 13.5.1 Special Case of Stochastic Shortest Path (SSP) -- 13.5.2 Pursuit Learning Automata (PLA) Sampling -- 13.6 Numerical Results -- 13.6.1 Infinite Horizon -- 13.6.2 Finite Horizon -- 13.7 Extensions -- 13.8 Convergence Theory -- 13.9 Concluding Remarks -- References
Index.

Description based on publisher-supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
