
Parallel Scientific Computing.

Material type: Text
Publisher: Newark : John Wiley & Sons, Incorporated, 2016
Copyright date: ©2016
Edition: 1st ed.
Description: 1 online resource (287 pages)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781118761717
Additional physical formats: Print version: Parallel Scientific Computing
DDC classification: 004.35
LOC classification: QA76.58 .M346 2016eb
Contents:
Intro -- Table of Contents -- Title -- Copyright -- Preface -- Introduction
1 Computer Architectures -- 1.1. Different types of parallelism -- 1.2. Memory architecture -- 1.3. Hybrid architecture
2 Parallelization and Programming Models -- 2.1. Parallelization -- 2.2. Performance criteria -- 2.3. Data parallelism -- 2.4. Vectorization: a case study -- 2.5. Message-passing -- 2.6. Performance analysis
3 Parallel Algorithm Concepts -- 3.1. Parallel algorithms for recurrences -- 3.2. Data locality and distribution: product of matrices
4 Basics of Numerical Matrix Analysis -- 4.1. Review of basic notions of linear algebra -- 4.2. Properties of matrices
5 Sparse Matrices -- 5.1. Origins of sparse matrices -- 5.2. Parallel formation of sparse matrices: shared memory -- 5.3. Parallel formation by block of sparse matrices: distributed memory
6 Solving Linear Systems -- 6.1. Direct methods -- 6.2. Iterative methods
7 LU Methods for Solving Linear Systems -- 7.1. Principle of LU decomposition -- 7.2. Gauss factorization -- 7.3. Gauss-Jordan factorization -- 7.4. Crout and Cholesky factorizations for symmetric matrices
8 Parallelization of LU Methods for Dense Matrices -- 8.1. Block factorization -- 8.2. Implementation of block factorization in a message-passing environment -- 8.3. Parallelization of forward and backward substitutions
9 LU Methods for Sparse Matrices -- 9.1. Structure of factorized matrices -- 9.2. Symbolic factorization and renumbering -- 9.3. Elimination trees -- 9.4. Elimination trees and dependencies -- 9.5. Nested dissections -- 9.6. Forward and backward substitutions
10 Basics of Krylov Subspaces -- 10.1. Krylov subspaces -- 10.2. Construction of the Arnoldi basis
11 Methods with Complete Orthogonalization for Symmetric Positive Definite Matrices -- 11.1. Construction of the Lanczos basis for symmetric matrices -- 11.2. The Lanczos method -- 11.3. The conjugate gradient method -- 11.4. Comparison with the gradient method -- 11.5. Principle of preconditioning for symmetric positive definite matrices
12 Exact Orthogonalization Methods for Arbitrary Matrices -- 12.1. The GMRES method -- 12.2. The case of symmetric matrices: the MINRES method -- 12.3. The ORTHODIR method -- 12.4. Principle of preconditioning for non-symmetric matrices
13 Biorthogonalization Methods for Non-symmetric Matrices -- 13.1. Lanczos biorthogonal basis for non-symmetric matrices -- 13.2. The non-symmetric Lanczos method -- 13.3. The biconjugate gradient method: BiCG -- 13.4. The quasi-minimal residual method: QMR -- 13.5. The BiCGSTAB
14 Parallelization of Krylov Methods -- 14.1. Parallelization of dense matrix-vector product -- 14.2. Parallelization of sparse matrix-vector product based on node sets -- 14.3. Parallelization of sparse matrix-vector product based on element sets -- 14.4. Parallelization of the scalar product -- 14.5. Summary of the parallelization of Krylov methods
15 Parallel Preconditioning Methods -- 15.1. Diagonal -- 15.2. Incomplete factorization methods -- 15.3. Schur complement method -- 15.4. Algebraic multigrid -- 15.5. The Schwarz additive method of preconditioning -- 15.6. Preconditioners based on the physics
Appendices -- Appendix 1: Exercises -- A1.1. Parallelization techniques -- A1.2. Matrix analysis -- A1.3. Direct methods -- A1.4. Iterative methods -- A1.5. Domain decomposition methods
Appendix 2: Solutions -- A2.1. Parallelization techniques -- A2.2. Matrix analysis -- A2.3. Direct methods -- A2.4. Iterative methods -- A2.5. Domain decomposition methods
Appendix 3: Bibliography and Comments -- A3.1. Parallel algorithms -- A3.2. OpenMP -- A3.3. MPI -- A3.4. Performance tools -- A3.5. Numerical analysis and methods -- A3.6. Finite volume method -- A3.7. Finite element method -- A3.8. Matrix analysis -- A3.9. Direct methods -- A3.10. Iterative methods -- A3.11. Mesh and graph partitioning -- A3.12. Domain decomposition methods
Bibliography -- Index -- End User License Agreement.
No physical items for this record.


Description based on publisher supplied metadata and other sources.

Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2024. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
