Machine Learning for Risk Calculations: A Practitioner's View.
Material type:
- text
- computer
- online resource
ISBN: 9781119791393
DDC classification: 332.10285631
LC classification: Q325.5 .R859 2022
Contents:

Front matter: Cover; Title Page; Copyright Page; Contents; Acknowledgements; Foreword; Motivation and aim of this book

Part One Fundamental Approximation Methods

Chapter 1 Machine Learning
- 1.1 Introduction to Machine Learning
  - 1.1.1 A brief history of Machine Learning Methods
  - 1.1.2 Main sub‐categories in Machine Learning
  - 1.1.3 Applications of interest
- 1.2 The Linear Model
  - 1.2.1 General concepts
  - 1.2.2 The standard linear model
- 1.3 Training and predicting
  - 1.3.1 The frequentist approach
  - 1.3.2 The Bayesian approach
  - 1.3.3 Testing - in search of consistent accurate predictions
  - 1.3.4 Underfitting and overfitting
  - 1.3.5 K‐fold cross‐validation
- 1.4 Model complexity
  - 1.4.1 Regularisation
  - 1.4.2 Cross‐validation for regularisation
  - 1.4.3 Hyper‐parameter optimisation

Chapter 2 Deep Neural Nets
- 2.1 A brief history of Deep Neural Nets
- 2.2 The basic Deep Neural Net model
  - 2.2.1 Single neuron
  - 2.2.2 Artificial Neural Net
  - 2.2.3 Deep Neural Net
- 2.3 Universal Approximation Theorems
- 2.4 Training of Deep Neural Nets
  - 2.4.1 Backpropagation
  - 2.4.2 Backpropagation example
  - 2.4.3 Optimisation of cost function
  - 2.4.4 Stochastic gradient descent
  - 2.4.5 Extensions of stochastic gradient descent
- 2.5 More sophisticated DNNs
  - 2.5.1 Convolution Neural Nets
  - 2.5.2 Other famous architectures
- 2.6 Summary of chapter

Chapter 3 Chebyshev Tensors
- 3.1 Approximating functions with polynomials
- 3.2 Chebyshev Series
  - 3.2.1 Lipschitz continuity and Chebyshev projections
  - 3.2.2 Smooth functions and Chebyshev projections
  - 3.2.3 Analytic functions and Chebyshev projections
- 3.3 Chebyshev Tensors and interpolants
  - 3.3.1 Tensors and polynomial interpolants
  - 3.3.2 Misconception over polynomial interpolation
  - 3.3.3 Chebyshev points
  - 3.3.4 Chebyshev interpolants
  - 3.3.5 Aliasing phenomenon
  - 3.3.6 Convergence rates of Chebyshev interpolants
  - 3.3.7 High‐dimensional Chebyshev interpolants
- 3.4 Ex ante error estimation
- 3.5 What makes Chebyshev points unique
- 3.6 Evaluation of Chebyshev interpolants
  - 3.6.1 Clenshaw algorithm
  - 3.6.2 Barycentric interpolation formula
  - 3.6.3 Evaluating high‐dimensional tensors
  - 3.6.4 Example of numerical stability
- 3.7 Derivative approximation
  - 3.7.1 Convergence of Chebyshev derivatives
  - 3.7.2 Computation of Chebyshev derivatives
  - 3.7.3 Derivatives in high dimensions
- 3.8 Chebyshev splines
  - 3.8.1 Gibbs phenomenon
  - 3.8.2 Splines
  - 3.8.3 Splines of Chebyshev
  - 3.8.4 Chebyshev Splines in high dimensions
- 3.9 Algebraic operations with Chebyshev Tensors
- 3.10 Chebyshev Tensors and Machine Learning
- 3.11 Summary of chapter

Part Two The toolkit - plugging in approximation methods

Chapter 4 Introduction: why is a toolkit needed
- 4.1 The pricing problem
  - Risk calculation flow
  - Pricing problem example
- 4.2 Risk calculation with proxy pricing
- 4.3 The curse of dimensionality
- 4.4 The techniques in the toolkit

Chapter 5 Composition techniques
- 5.1 Leveraging from existing parametrisations
  - 5.1.1 Risk factor generating models
  - 5.1.2 Pricing functions and model risk factors
  - 5.1.3 The tool obtained
- 5.2 Creating a parametrisation
  - 5.2.1 Principal Component Analysis
  - 5.2.2 Autoencoders
- 5.3 Summary of chapter

Chapter 6 Tensors in TT format and Tensor Extension Algorithms
- 6.1 Tensors in TT format
  - 6.1.1 Motivating example
  - 6.1.2 General case
  - 6.1.3 Basic operations
  - 6.1.4 Evaluation of Chebyshev Tensors in TT format
- 6.2 Tensor Extension Algorithms
- 6.3 Step 1 - Optimising over tensors of fixed rank
  - 6.3.1 The Fundamental Completion Algorithm
- 6.4 Step 2 - Optimising over tensors of varying rank
  - 6.4.1 The Rank Adaptive Algorithm
- 6.5 Step 3 - Adapting the sampling set
  - 6.5.1 The Sample Adaptive Algorithm
- 6.6 Summary of chapter

Chapter 7 Sliding Technique
- 7.1 Slide
- 7.2 Slider
- 7.3 Evaluating a slider
  - 7.3.1 Relation to Taylor approximation
- 7.4 Summary of chapter

Chapter 8 The Jacobian projection technique
- 8.1 Setting the background
- 8.2 What we can recover
  - 8.2.1 Intuition behind g and its derivative dg
  - 8.2.2 Using the derivative of f
  - 8.2.3 When kn becomes a problem
- 8.3 Partial derivatives via projections onto the Jacobian

Part Three Hybrid solutions - approximation methods and the toolkit

Chapter 9 Introduction
- 9.1 The dimensionality problem revisited
- 9.2 Exploiting the Composition Technique

Chapter 10 The Toolkit and Deep Neural Nets
- 10.1 Building on P using the image of g
- 10.2 Building on f

Chapter 11 The Toolkit and Chebyshev Tensors
- 11.1 Full Chebyshev Tensor
- 11.2 TT‐format Chebyshev Tensor
- 11.3 Chebyshev Slider
- 11.4 A final note

Chapter 12 Hybrid Deep Neural Nets and Chebyshev Tensors Frameworks
- 12.1 The fundamental idea
  - 12.1.1 Factorable Functions
- 12.2 DNN+CT with Static Training Set
  - Calibration of f2
  - Training the hNN
- 12.3 DNN+CT with Dynamic Training Set
- 12.4 Numerical Tests
  - 12.4.1 Cost Function Minimisation
  - 12.4.2 Maximum Error
- 12.5 Enhanced DNN+CT architectures and further research

Part Four Applications

Chapter 13 The aim
- 13.1 Suitability of the approximation methods
- 13.2 Understanding the variables at play
  - Model parameters
  - Market risk factors
  - Model risk factors
  - Trade parameters
  - Choosing carefully

Chapter 14 When to use Chebyshev Tensors and when to use Deep Neural Nets
- 14.1 Speed and convergence
  - 14.1.1 Speed of evaluation
  - 14.1.2 Convergence
  - 14.1.3 Convergence Rate in Real‐Life Contexts
- 14.2 The question of dimension
  - Full Chebyshev Tensors
  - Chebyshev Tensors in TT format
  - Deep Neural Nets
  - 14.2.1 Taking into account the application
- 14.3 Partial derivatives and ex ante error estimation
  - Partial Derivatives
  - Error Control
- 14.4 Summary of chapter

Chapter 15 Counterparty credit risk
- 15.1 Monte Carlo simulations for CCR
  - 15.1.1 Scenario diffusion
  - 15.1.2 Pricing step - computational bottleneck
- 15.2 Solution
  - 15.2.1 Popular solutions
  - 15.2.2 The hybrid solution
  - 15.2.3 Variables at play
  - 15.2.4 Optimal setup
  - 15.2.5 Possible proxies
  - 15.2.6 Portfolio calculations
  - 15.2.7 If the model space is not available
- 15.3 Tests
  - 15.3.1 Trade types, risk factors and proxies
  - 15.3.2 Proxy at each time point
  - 15.3.3 Proxy for all time points
  - 15.3.4 Adding non‐risk‐driving variables
  - 15.3.5 High‐dimensional problems
- 15.4 Results Analysis and Conclusions
  - Computational cost
  - Memory impact
  - Pricing accuracy
  - Balance between computational cost and accuracy
  - Our aim
- 15.5 Summary of chapter

Chapter 16 Market Risk
- 16.1 VaR‐like calculations
  - 16.1.1 Common techniques in the computation of VaR
- 16.2 Enhanced Revaluation Grids
- 16.3 Fundamental Review of the Trading Book
  - 16.3.1 Challenges
  - 16.3.2 Solution
  - 16.3.3 The intuition behind Chebyshev Sliders
- 16.4 Proof of concept
  - 16.4.1 Proof of concept specifics
  - 16.4.2 Test specifics
  - 16.4.3 Results for swap
  - 16.4.4 Results for swaptions 10‐day liquidity horizon
  - 16.4.5 Results for swaptions 60‐day liquidity horizon
  - 16.4.6 Daily computation and reusability
  - 16.4.7 Beyond regulatory minimum calculations
- 16.5 Stability of technique
- 16.6 Results beyond vanilla portfolios - further research
- 16.7 Summary of chapter

Chapter 17 Dynamic sensitivities
- 17.1 Simulating sensitivities
  - 17.1.1 Scenario diffusion
  - 17.1.2 Computing sensitivities
  - 17.1.3 Computational cost
  - 17.1.4 Methods available
- 17.2 The Solution
  - Hybrid method
  - Example
  - Variables at play
  - Possible proxies
- 17.3 An important use of dynamic sensitivities
- 17.4 Numerical tests
  - 17.4.1 FX Swap
  - 17.4.2 European Spread Option
- 17.5 Discussion of results
- 17.6 Alternative methods
- 17.7 Summary of chapter

Chapter 18 Pricing model calibration
- 18.1 Introduction
  - 18.1.1 Examples of pricing models
- 18.2 Solution
  - 18.2.1 Variables at play
  - 18.2.2 Possible proxies
  - 18.2.3 Domain of approximation
- 18.3 Test description
  - 18.3.1 Test setup
- 18.4 Results with Chebyshev Tensors
  - 18.4.1 Rough Bergomi model with constant forward variance
  - 18.4.2 Rough Bergomi model with piece‐wise constant forward variance
- 18.5 Results with Deep Neural Nets
- 18.6 Comparison of results via CT and DNN
- 18.7 Summary of chapter

Chapter 19 Approximation of the implied volatility function
- 19.1 The computation of implied volatility
  - 19.1.1 Available methods
- 19.2 Solution
  - 19.2.1 Reducing the dimension of the problem
  - 19.2.2 Two‐dimensional CTs
  - 19.2.3 Domain of approximation
  - 19.2.4 Splitting the domain
  - 19.2.5 Scaling the time‐scaled implied volatility
  - 19.2.6 Implementation
- 19.3 Results
  - 19.3.1 Parameters used for CTs
  - 19.3.2 Comparisons to other methods
- 19.4 Summary of chapter

Chapter 20 Optimisation Problems
- 20.1 Balance sheet optimisation
  - A bank's balance sheet
  - The Optimisation
- 20.2 Minimisation of margin funding cost
  - Computational Strategies for MVA
  - The Use Case
Description based on publisher-supplied metadata and other sources.
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.