Robust Statistics : Theory and Methods (with R).

By: Maronna, Ricardo A.
Contributor(s): Martin, R. Douglas | Yohai, Victor J. | Salibián-Barrera, Matías
Material type: Text
Series: Wiley Series in Probability and Statistics
Publisher: Newark : John Wiley & Sons, Incorporated, 2019
Copyright date: ©2019
Edition: 2nd ed.
Description: 1 online resource (463 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781119214670
Additional physical formats: Print version: Robust Statistics
LOC classification:
  • QA276 .M376 2019
Contents:
Cover -- Title Page -- Copyright -- Contents -- Preface -- Preface to the First Edition -- About the Companion Website -- Chapter 1 Introduction -- 1.1 Classical and robust approaches to statistics -- 1.2 Mean and standard deviation -- 1.3 The "three sigma edit" rule -- 1.4 Linear regression -- 1.4.1 Straight‐line regression -- 1.4.2 Multiple linear regression -- 1.5 Correlation coefficients -- 1.6 Other parametric models -- 1.7 Problems -- Chapter 2 Location and Scale -- 2.1 The location model -- 2.2 Formalizing departures from normality -- 2.3 M‐estimators of location -- 2.3.1 Generalizing maximum likelihood -- 2.3.2 The distribution of M‐estimators -- 2.3.3 An intuitive view of M‐estimators -- 2.3.4 Redescending M‐estimators -- 2.4 Trimmed and Winsorized means -- 2.5 M‐estimators of scale -- 2.6 Dispersion estimators -- 2.7 M‐estimators of location with unknown dispersion -- 2.7.1 Previous estimation of dispersion -- 2.7.2 Simultaneous M‐estimators of location and dispersion -- 2.8 Numerical computing of M‐estimators -- 2.8.1 Location with previously‐computed dispersion estimation -- 2.8.2 Scale estimators -- 2.8.3 Simultaneous estimation of location and dispersion -- 2.9 Robust confidence intervals and tests -- 2.9.1 Confidence intervals -- 2.9.2 Tests -- 2.10 Appendix: proofs and complements -- 2.10.1 Mixtures -- 2.10.2 Asymptotic normality of M‐estimators -- 2.10.3 Slutsky's lemma -- 2.10.4 Quantiles -- 2.10.5 Alternative algorithms for M‐estimators -- 2.11 Recommendations and software -- 2.12 Problems -- Chapter 3 Measuring Robustness -- 3.1 The influence function -- 3.1.1 *The convergence of the SC to the IF -- 3.2 The breakdown point -- 3.2.1 Location M‐estimators -- 3.2.2 Scale and dispersion estimators -- 3.2.3 Location with previously‐computed dispersion estimator -- 3.2.4 Simultaneous estimation -- 3.2.5 Finite‐sample breakdown point.
3.3 Maximum asymptotic bias -- 3.4 Balancing robustness and efficiency -- 3.5 *"Optimal" robustness -- 3.5.1 Bias‐ and variance‐optimality of location estimators -- 3.5.2 Bias optimality of scale and dispersion estimators -- 3.5.3 The infinitesimal approach -- 3.5.4 The Hampel approach -- 3.5.5 Balancing bias and variance: the general problem -- 3.6 Multidimensional parameters -- 3.7 *Estimators as functionals -- 3.8 Appendix: Proofs of results -- 3.8.1 IF of general M‐estimators -- 3.8.2 Maximum BP of location estimators -- 3.8.3 BP of location M‐estimators -- 3.8.4 Maximum bias of location M‐estimators -- 3.8.5 The minimax bias property of the median -- 3.8.6 Minimizing the GES -- 3.8.7 Hampel optimality -- 3.9 Problems -- Chapter 4 Linear Regression 1 -- 4.1 Introduction -- 4.2 Review of the least squares method -- 4.3 Classical methods for outlier detection -- 4.4 Regression M‐estimators -- 4.4.1 M‐estimators with known scale -- 4.4.2 M‐estimators with preliminary scale -- 4.4.3 Simultaneous estimation of regression and scale -- 4.5 Numerical computing of monotone M‐estimators -- 4.5.1 The L1 estimator -- 4.5.2 M‐estimators with smooth ψ‐function -- 4.6 BP of monotone regression estimators -- 4.7 Robust tests for linear hypothesis -- 4.7.1 Review of the classical theory -- 4.7.2 Robust tests using M‐estimators -- 4.8 *Regression quantiles -- 4.9 Appendix: Proofs and complements -- 4.9.1 Why equivariance? -- 4.9.2 Consistency of estimated slopes under asymmetric errors -- 4.9.3 Maximum FBP of equivariant estimators -- 4.9.4 The FBP of monotone M‐estimators -- 4.10 Recommendations and software -- 4.11 Problems -- Chapter 5 Linear Regression 2 -- 5.1 Introduction -- 5.2 The linear model with random predictors -- 5.3 M‐estimators with a bounded ρ‐function -- 5.3.1 Properties of M‐estimators with a bounded ρ‐function.
5.4 Estimators based on a robust residual scale -- 5.4.1 S‐estimators -- 5.4.2 L‐estimators of scale and the LTS estimator -- 5.4.3 τ‐estimators -- 5.5 MM‐estimators -- 5.6 Robust inference and variable selection for M‐estimators -- 5.6.1 Bootstrap robust confidence intervals and tests -- 5.6.2 Variable selection -- 5.7 Algorithms -- 5.7.1 Finding local minima -- 5.7.2 Starting values: the subsampling algorithm -- 5.7.3 A strategy for faster subsampling‐based algorithms -- 5.7.4 Starting values: the Peña‐Yohai estimator -- 5.7.5 Starting values with numeric and categorical predictors -- 5.7.6 Comparing initial estimators -- 5.8 Balancing asymptotic bias and efficiency -- 5.8.1 "Optimal" redescending M‐estimators -- 5.9 Improving the efficiency of robust regression estimators -- 5.9.1 Improving efficiency with one‐step reweighting -- 5.9.2 A fully asymptotically efficient one‐step procedure -- 5.9.3 Improving finite‐sample efficiency and robustness -- 5.9.4 Choosing a regression estimator -- 5.10 Robust regularized regression -- 5.10.1 Ridge regression -- 5.10.2 Lasso regression -- 5.10.3 Other regularized estimators -- 5.11 *Other estimators -- 5.11.1 Generalized M‐estimators -- 5.11.2 Projection estimators -- 5.11.3 Constrained M‐estimators -- 5.11.4 Maximum depth estimators -- 5.12 Other topics -- 5.12.1 The exact fit property -- 5.12.2 Heteroskedastic errors -- 5.12.3 A robust multiple correlation coefficient -- 5.13 *Appendix: proofs and complements -- 5.13.1 The BP of monotone M‐estimators with random X -- 5.13.2 Heavy‐tailed x -- 5.13.3 Proof of the exact fit property -- 5.13.4 The BP of S‐estimators -- 5.13.5 Asymptotic bias of M‐estimators -- 5.13.6 Hampel optimality for GM‐estimators -- 5.13.7 Justification of RFPE* -- 5.14 Recommendations and software -- 5.15 Problems -- Chapter 6 Multivariate Analysis -- 6.1 Introduction.
6.2 Breakdown and efficiency of multivariate estimators -- 6.2.1 Breakdown point -- 6.2.2 The multivariate exact fit property -- 6.2.3 Efficiency -- 6.3 M‐estimators -- 6.3.1 Collinearity -- 6.3.2 Size and shape -- 6.3.3 Breakdown point -- 6.4 Estimators based on a robust scale -- 6.4.1 The minimum volume ellipsoid estimator -- 6.4.2 S‐estimators -- 6.4.3 The MCD estimator -- 6.4.4 S‐estimators for high dimension -- 6.4.5 τ‐estimators -- 6.4.6 One‐step reweighting -- 6.5 MM‐estimators -- 6.6 The Stahel-Donoho estimator -- 6.7 Asymptotic bias -- 6.8 Numerical computing of multivariate estimators -- 6.8.1 Monotone M‐estimators -- 6.8.2 Local solutions for S‐estimators -- 6.8.3 Subsampling for estimators based on a robust scale -- 6.8.4 The MVE -- 6.8.5 Computation of S‐estimators -- 6.8.6 The MCD -- 6.8.7 The Stahel-Donoho estimator -- 6.9 Faster robust scatter matrix estimators -- 6.9.1 Using pairwise robust covariances -- 6.9.2 The Peña-Prieto procedure -- 6.10 Choosing a location/scatter estimator -- 6.10.1 Efficiency -- 6.10.2 Behavior under contamination -- 6.10.3 Computing times -- 6.10.4 Tuning constants -- 6.10.5 Conclusions -- 6.11 Robust principal components -- 6.11.1 Spherical principal components -- 6.11.2 Robust PCA based on a robust scale -- 6.12 Estimation of multivariate scatter and location with missing data -- 6.12.1 Notation -- 6.12.2 GS estimators for missing data -- 6.13 Robust estimators under the cellwise contamination model -- 6.14 Regularized robust estimators of the inverse of the covariance matrix -- 6.15 Mixed linear models -- 6.15.1 Robust estimation for MLM -- 6.15.2 Breakdown point of MLM estimators -- 6.15.3 S‐estimators for MLMs -- 6.15.4 Composite τ‐estimators -- 6.16 *Other estimators of location and scatter -- 6.16.1 Projection estimators -- 6.16.2 Constrained M‐estimators -- 6.16.3 Multivariate depth.
6.17 Appendix: proofs and complements -- 6.17.1 Why affine equivariance? -- 6.17.2 Consistency of equivariant estimators -- 6.17.3 The estimating equations of the MLE -- 6.17.4 Asymptotic BP of monotone M‐estimators -- 6.17.5 The estimating equations for S‐estimators -- 6.17.6 Behavior of S‐estimators for high p -- 6.17.7 Calculating the asymptotic covariance matrix of location M‐estimators -- 6.17.8 The exact fit property -- 6.17.9 Elliptical distributions -- 6.17.10 Consistency of Gnanadesikan-Kettenring correlations -- 6.17.11 Spherical principal components -- 6.17.12 Fixed point estimating equations and computing algorithm for the GS estimator -- 6.18 Recommendations and software -- 6.19 Problems -- Chapter 7 Generalized Linear Models -- 7.1 Binary response regression -- 7.2 Robust estimators for the logistic model -- 7.2.1 Weighted MLEs -- 7.2.2 Redescending M‐estimators -- 7.3 Generalized linear models -- 7.3.1 Conditionally unbiased bounded influence estimators -- 7.4 Transformed M‐estimators -- 7.4.1 Definition of transformed M‐estimators -- 7.4.2 Some examples of variance‐stabilizing transformations -- 7.4.3 Other estimators for GLMs -- 7.5 Recommendations and software -- 7.6 Problems -- Chapter 8 Time Series -- 8.1 Time series outliers and their impact -- 8.1.1 Simple examples of outliers influence -- 8.1.2 Probability models for time series outliers -- 8.1.3 Bias impact of AOs -- 8.2 Classical estimators for AR models -- 8.2.1 The Durbin-Levinson algorithm -- 8.2.2 Asymptotic distribution of classical estimators -- 8.3 Classical estimators for ARMA models -- 8.4 M‐estimators of ARMA models -- 8.4.1 M‐estimators and their asymptotic distribution -- 8.4.2 The behavior of M‐estimators in AR processes with additive outliers -- 8.4.3 The behavior of LS and M‐estimators for ARMA processes with infinite innovation variance.
8.5 Generalized M‐estimators.

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
