
Fundamental Statistical Inference : A Computational Approach.

By:
Material type: Text
Series: Wiley Series in Probability and Statistics
Publisher: Newark : John Wiley & Sons, Incorporated, 2018
Copyright date: ©2018
Edition: 1st ed.
Description: 1 online resource (585 pages)
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9781119417880
Subject(s):
Genre/Form:
Additional physical formats: Print version: Fundamental Statistical Inference
LOC classification:
  • QA276 .P365 2018
Online resources:
Contents:
Cover -- Title Page -- Copyright -- Contents -- Preface -- Part I Essential Concepts in Statistics -- Chapter 1 Introducing Point and Interval Estimation -- 1.1 Point Estimation -- 1.1.1 Bernoulli Model -- 1.1.2 Geometric Model -- 1.1.3 Some Remarks on Bias and Consistency -- 1.2 Interval Estimation via Simulation -- 1.3 Interval Estimation via the Bootstrap -- 1.3.1 Computation and Comparison with Parametric Bootstrap -- 1.3.2 Application to Bernoulli Model and Modification -- 1.3.3 Double Bootstrap -- 1.3.4 Double Bootstrap with Analytic Inner Loop -- 1.4 Bootstrap Confidence Intervals in the Geometric Model -- 1.5 Problems -- Chapter 2 Goodness of Fit and Hypothesis Testing -- 2.1 Empirical Cumulative Distribution Function -- 2.1.1 The Glivenko-Cantelli Theorem -- 2.1.2 Proofs of the Glivenko-Cantelli Theorem -- 2.1.3 Example with Continuous Data and Approximate Confidence Intervals -- 2.1.4 Example with Discrete Data and Approximate Confidence Intervals -- 2.2 Comparing Parametric and Nonparametric Methods -- 2.3 Kolmogorov-Smirnov Distance and Hypothesis Testing -- 2.3.1 The Kolmogorov-Smirnov and Anderson-Darling Statistics -- 2.3.2 Significance and Hypothesis Testing -- 2.3.3 Small-Sample Correction -- 2.4 Testing Normality with KD and AD -- 2.5 Testing Normality with W² and U² -- 2.6 Testing the Stable Paretian Distributional Assumption: First Attempt -- 2.7 Two-Sample Kolmogorov Test -- 2.8 More on (Moron?) Hypothesis Testing -- 2.8.1 Explanation -- 2.8.2 Misuse of Hypothesis Testing -- 2.8.3 Use and Misuse of p-Values -- 2.9 Problems -- Chapter 3 Likelihood -- 3.1 Introduction -- 3.1.1 Scalar Parameter Case -- 3.1.2 Vector Parameter Case -- 3.1.3 Robustness and the MCD Estimator -- 3.1.4 Asymptotic Properties of the Maximum Likelihood Estimator -- 3.2 Cramér-Rao Lower Bound -- 3.2.1 Univariate Case.
3.2.2 Multivariate Case -- 3.3 Model Selection -- 3.3.1 Model Misspecification -- 3.3.2 The Likelihood Ratio Statistic -- 3.3.3 Use of Information Criteria -- 3.4 Problems -- Chapter 4 Numerical Optimization -- 4.1 Root Finding -- 4.1.1 One Parameter -- 4.1.2 Several Parameters -- 4.2 Approximating the Distribution of the Maximum Likelihood Estimator -- 4.3 General Numerical Likelihood Maximization -- 4.3.1 Newton-Raphson and Quasi-Newton Methods -- 4.3.2 Imposing Parameter Restrictions -- 4.4 Evolutionary Algorithms -- 4.4.1 Differential Evolution -- 4.4.2 Covariance Matrix Adaptation Evolutionary Strategy -- 4.5 Problems -- Chapter 5 Methods of Point Estimation -- 5.1 Univariate Mixed Normal Distribution -- 5.1.1 Introduction -- 5.1.2 Simulation of Univariate Mixtures -- 5.1.3 Direct Likelihood Maximization -- 5.1.4 Use of the EM Algorithm -- 5.1.5 Shrinkage-Type Estimation -- 5.1.6 Quasi-Bayesian Estimation -- 5.1.7 Confidence Intervals -- 5.2 Alternative Point Estimation Methodologies -- 5.2.1 Method of Moments Estimator -- 5.2.2 Use of Goodness-of-Fit Measures -- 5.2.3 Quantile Least Squares -- 5.2.4 Pearson Minimum Chi-Square -- 5.2.5 Empirical Moment Generating Function Estimator -- 5.2.6 Empirical Characteristic Function Estimator -- 5.3 Comparison of Methods -- 5.4 A Primer on Shrinkage Estimation -- 5.5 Problems -- Part II Further Fundamental Concepts in Statistics -- Chapter 6 Q-Q Plots and Distribution Testing -- 6.1 P-P Plots and Q-Q Plots -- 6.2 Null Bands -- 6.2.1 Definition and Motivation -- 6.2.2 Pointwise Null Bands via Simulation -- 6.2.3 Asymptotic Approximation of Pointwise Null Bands -- 6.2.4 Mapping Pointwise and Simultaneous Significance Levels -- 6.3 Q-Q Test -- 6.4 Further P-P and Q-Q Type Plots -- 6.4.1 (Horizontal) Stabilized P-P Plots -- 6.4.2 Modified S-P Plots -- 6.4.3 MSP Test for Normality.
6.4.4 Modified Percentile (Fowlkes-MP) Plots -- 6.5 Further Tests for Composite Normality -- 6.5.1 Motivation -- 6.5.2 Jarque-Bera Test -- 6.5.3 Three Powerful (and More Recent) Normality Tests -- 6.5.4 Testing Goodness of Fit via Binning: Pearson's X_P² Test -- 6.6 Combining Tests and Power Envelopes -- 6.6.1 Combining Tests -- 6.6.2 Power Comparisons for Testing Composite Normality -- 6.6.3 Most Powerful Tests and Power Envelopes -- 6.7 Details of a Failed Attempt -- 6.8 Problems -- Chapter 7 Unbiased Point Estimation and Bias Reduction -- 7.1 Sufficiency -- 7.1.1 Introduction -- 7.1.2 Factorization -- 7.1.3 Minimal Sufficiency -- 7.1.4 The Rao-Blackwell Theorem -- 7.2 Completeness and the Uniformly Minimum Variance Unbiased Estimator -- 7.3 An Example with i.i.d. Geometric Data -- 7.4 Methods of Bias Reduction -- 7.4.1 The Bias-Function Approach -- 7.4.2 Median-Unbiased Estimation -- 7.4.3 Mode-Adjusted Estimator -- 7.4.4 The Jackknife -- 7.5 Problems -- Chapter 8 Analytic Interval Estimation -- 8.1 Definitions -- 8.2 Pivotal Method -- 8.2.1 Exact Pivots -- 8.2.2 Asymptotic Pivots -- 8.3 Intervals Associated with Normal Samples -- 8.3.1 Single Sample -- 8.3.2 Paired Sample -- 8.3.3 Two Independent Samples -- 8.3.4 Welch's Method for μ₁ − μ₂ when σ₁² ≠ σ₂² -- 8.3.5 Satterthwaite's Approximation -- 8.4 Cumulative Distribution Function Inversion -- 8.4.1 Continuous Case -- 8.4.2 Discrete Case -- 8.5 Application of the Nonparametric Bootstrap -- 8.6 Problems -- Part III Additional Topics -- Chapter 9 Inference in a Heavy-Tailed Context -- 9.1 Estimating the Maximally Existing Moment -- 9.2 A Primer on Tail Estimation -- 9.2.1 Introduction -- 9.2.2 The Hill Estimator -- 9.2.3 Use with Stable Paretian Data -- 9.3 Noncentral Student's t Estimation -- 9.3.1 Introduction.
9.3.2 Direct Density Approximation -- 9.3.3 Quantile-Based Table Lookup Estimation -- 9.3.4 Comparison of NCT Estimators -- 9.4 Asymmetric Stable Paretian Estimation -- 9.4.1 Introduction -- 9.4.2 The Hint Estimator -- 9.4.3 Maximum Likelihood Estimation -- 9.4.4 The McCulloch Estimator -- 9.4.5 The Empirical Characteristic Function Estimator -- 9.4.6 Testing for Symmetry in the Stable Model -- 9.5 Testing the Stable Paretian Distribution -- 9.5.1 Test Based on the Empirical Characteristic Function -- 9.5.2 Summability Test and Modification -- 9.5.3 ALHADI: The α-Hat Discrepancy Test -- 9.5.4 Joint Test Procedure -- 9.5.5 Likelihood Ratio Tests -- 9.5.6 Size and Power of the Symmetric Stable Tests -- 9.5.7 Extension to Testing the Asymmetric Stable Paretian Case -- Chapter 10 The Method of Indirect Inference -- 10.1 Introduction -- 10.2 Application to the Laplace Distribution -- 10.3 Application to Randomized Response -- 10.3.1 Introduction -- 10.3.2 Estimation via Indirect Inference -- 10.4 Application to the Stable Paretian Distribution -- 10.5 Problems -- Appendix A Review of Fundamental Concepts in Probability Theory -- A.1 Combinatorics and Special Functions -- A.2 Basic Probability and Conditioning -- A.3 Univariate Random Variables -- A.4 Multivariate Random Variables -- A.5 Continuous Univariate Random Variables -- A.6 Conditional Random Variables -- A.7 Generating Functions and Inversion Formulas -- A.8 Value at Risk and Expected Shortfall -- A.9 Jacobian Transformations -- A.10 Sums and Other Functions -- A.11 Saddlepoint Approximations -- A.12 Order Statistics -- A.13 The Multivariate Normal Distribution -- A.14 Noncentral Distributions -- A.15 Inequalities and Convergence -- A.15.1 Inequalities for Random Variables -- A.15.2 Convergence of Sequences of Sets -- A.15.3 Convergence of Sequences of Random Variables.
A.16 The Stable Paretian Distribution -- A.17 Problems -- A.18 Solutions -- References -- Index -- EULA.
No physical items for this record

Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
