Id: 008440
Credits Min: 3
Credits Max: 3
Description
Computations that involve matrix algorithms are happening everywhere in the world at every moment in time, whether these be embedded in the training of neural networks in data science, in computer vision programs in a computer game, or in price calculations for financial options. Tasks such as solving linear systems, computing eigenvalues and eigenvectors of large matrices, and solving linear regression problems often form the core of large-scale computations. This course will describe efficient techniques for solving problems such as these. Both the theoretical foundations of these methods and practical considerations for implementing them efficiently will be emphasized. Practical applications such as image compression, least squares data fitting, and numerical solution of differential equations will also be presented. Topics for this course include: vector and matrix norms, orthogonal matrices, projectors, singular value decomposition (SVD); least squares problems, QR factorization, Gram-Schmidt orthogonalization, Householder triangularization; stability and error analysis, condition number; direct methods for Ax = b, Krylov methods, Arnoldi iteration, GMRES, steepest descent, conjugate gradient method, preconditioning; applications: image compression by SVD, finite-difference schemes for ordinary and partial differential equations, least squares data fitting.
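To give a flavor of the applications listed above, here is a minimal illustrative sketch (not part of the official description) of image compression by SVD: the best rank-k approximation of a matrix, in the Frobenius norm, is obtained by keeping only the k largest singular values. The function name and the random stand-in "image" are assumptions for the example.

```python
# Illustrative sketch: rank-k compression via the truncated SVD.
# Assumes a grayscale image stored as a 2-D NumPy array.
import numpy as np

def compress_svd(image: np.ndarray, k: int) -> np.ndarray:
    """Return the best rank-k approximation of `image` in the Frobenius norm."""
    # Thin SVD: image = U @ diag(s) @ Vt, with singular values s in decreasing order.
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    # Keep the k largest singular values and their vectors (Eckart-Young theorem).
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((256, 256))   # stand-in for a grayscale image
    A_k = compress_svd(A, k=20)
    # Relative reconstruction error in the Frobenius norm.
    print(np.linalg.norm(A - A_k) / np.linalg.norm(A))
```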