Lecture 10 of 'Scientific Computing' (wi4201)
- The following subjects are discussed:
- Definition of the Krylov subspace (recalled in a formula below the list)
- Short introduction to the Chebyshev method
- Conjugate Gradient (CG) algorithm (a minimal code sketch is included below)
- first step, use of the A-norm; the condition that A is SPD is crucial
- optimality property, general form
- discussion of the algorithm: flops per iteration and memory requirements
- for SPD matrices, CG is the best method
- properties of the CG iterates and residual vectors
- proof by induction
- after a finite number of iterations CG has converged
- monotone convergence of the error in the A-norm
- "lucky" breakdown
- CG can also be used for SPSD (symmetric positive semi-definite) matrices
- example: the pressure matrix in the Navier-Stokes equations
- CG converges for every SPD matrix A
- speed of convergence depends on the 2-norm condition number of A (see the error bound below)
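For reference, the Krylov subspace and the classical CG error bound are recalled here in standard textbook notation, which may differ slightly from the notation used in the lecture notes. The Krylov subspace generated by A and the initial residual r^0 is

  \mathcal{K}^{k}(A, r^{0}) = \operatorname{span}\{\, r^{0},\ A r^{0},\ A^{2} r^{0},\ \ldots,\ A^{k-1} r^{0} \,\},

and for an SPD matrix A the error of the k-th CG iterate satisfies, in the A-norm,

  \| x - x^{k} \|_{A} \le 2 \left( \frac{\sqrt{\kappa_{2}(A)} - 1}{\sqrt{\kappa_{2}(A)} + 1} \right)^{k} \| x - x^{0} \|_{A},
  \qquad \kappa_{2}(A) = \frac{\lambda_{\max}(A)}{\lambda_{\min}(A)},

which makes precise how the speed of convergence depends on the 2-norm condition number of A.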
- Material is described in pages 96-101 of the lecture notes.
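As an illustration of the algorithm discussed above, here is a minimal, unpreconditioned CG sketch in Python/NumPy. The function name, argument names, and stopping criterion are choices made for this sketch and are not taken from the lecture notes; the comments point out the cost per iteration (one matrix-vector product plus a few vector operations) and the modest memory requirements (a handful of vectors).

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, maxiter=None):
    # Unpreconditioned CG for an SPD matrix A; solves A x = b approximately.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                       # initial residual r^0
    p = r.copy()                        # first search direction p^0 = r^0
    rs_old = r @ r
    maxiter = n if maxiter is None else maxiter   # finite termination in exact arithmetic
    for _ in range(maxiter):
        Ap = A @ p                      # the one matrix-vector product per iteration
        alpha = rs_old / (p @ Ap)       # step length minimising the error in the A-norm
        x = x + alpha * p               # update iterate
        r = r - alpha * Ap              # update residual
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        beta = rs_new / rs_old          # makes the new direction A-conjugate to the previous one
        p = r + beta * p
        rs_old = rs_new
    return x

# Small usage example on a 2x2 SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)            # should be close to np.linalg.solve(A, b)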