MTH 201 -- Linear Algebra

Instructor: Krishna Kaipa
E-mail: kaipa@iiserpune.ac.in
Office: 446 Academic Building
Office Hours: To be announced. It may also be convenient to ask questions by email (virtual office hours).
The TAs' office hours will be uploaded soon. You can also ask the TAs questions by email.
Phone: x8162
Timings: Lectures in LHC 103 on Mondays and Tuesdays, 3:10 - 4:10 pm.
Tutorials are on Thursdays; venue and time below:

Tutorial Batches: Roll numbers, venue, time and TA

B1a batch: (Roll numbers 01-24)       Room 101    Thu 4:20 - 5:20 pm     TA: Sucheta Dawn, suchetamtma@gmail.com
B1b batch: (Roll numbers 25-48)       Room 103    Thu 4:20 - 5:20 pm     TA: Tathagat Mandal, tathagat.mandal@students.iiserpune.ac.in
B2a batch: (Roll numbers 49-73)       Room 105    Thu 4:20 - 5:20 pm     TA: Arun Kumar, arunkumar@students.iiserpune.ac.in
B2b batch: (Roll numbers 74-96)       Room 106    Thu 4:20 - 5:20 pm     TA: Aman Jhinga, aman@iiserpune.ac.in
B3a batch: (Roll numbers 98-120)      Room 105    Thu 10:40 - 11:40 am   TA: Uday Jagdale, jagadale.uday@iiserpune.ac.in
B3b batch: (Roll numbers 121-145)     Room 106    Thu 10:40 - 11:40 am   TA: Sneha Jondhale, snehajondhale12@gmail.com
B4a batch: (Roll numbers 146-165)     Room 107    Thu 4:20 - 5:20 pm     TA: Munaif Iqbal, munaifsci@students.iiserpune.ac.in
B4b batch: (Roll numbers 166 onwards) Room 108    Thu 4:20 - 5:20 pm     TA: Mahendra Prasad, msprasad@students.iiserpune.ac.in


Course Policy and Information

  1. There will be an assignment every week, which will be discussed in the tutorial that week.
    Assignments do not have to be submitted and carry no marks.
  2. There will be a 15-minute quiz at the beginning of every tutorial, based on the previous week's assignment.
  3. Quizzes carry 36% weightage; the mid-term and final examinations each carry 32% weightage.
    There will be no make-up quizzes for missed quizzes; however, the worst 2 quiz scores will be dropped.
  4. Textbook: Linear Algebra With Applications, by Otto Bretscher, Pearson. Edition 3, 4, or 5.
    The library has 10 copies of the 5th edition. Pearson India has published the 3rd edition.
  5. Syllabus: Chapters 1-8 of the textbook.
  6. Miscellaneous: No calculators allowed in quizzes and exams.


Assignments

Assignment 1     Assignment 2     Assignment 3     Assignment 4     Assignment 5     Assignment 6     Assignment 7


Lectures

Lecture 1 - Aug 3
In this lecture we talked about the following:
  1. Basic matrix operations: adding two matrices of the same size, multiplying a matrix by a scalar,
    and multiplying two matrices of compatible sizes.
  2. Elementary row operations to bring a matrix into RREF (Reduced Row Echelon Form).
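
These operations can be checked numerically; here is a minimal Python/numpy sketch (the matrices are arbitrary examples, not from the lecture):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])   # a 2 x 2 matrix
    B = np.array([[0, 1], [1, 0]])   # another 2 x 2 matrix

    print(A + B)    # sum of two matrices of the same size (entrywise)
    print(3 * A)    # multiplying a matrix by the scalar 3
    print(A @ B)    # product of two matrices of compatible sizes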

Lecture 2 - Aug 4
In this lecture we talked about the following points. Instructor's notes for the first week are here
  1. Let \(A\) be an \(m \times n\) matrix. What is the procedure to bring it to RREF? (A code sketch of the whole procedure appears after this list.)
    If \(A\) is the zero matrix, it is already in RREF and there is nothing to do. So assume that \(A\) is not the zero matrix.
    1. Since \(A \neq 0\), not every column of \(A\) is the zero vector. Scan the columns of \(A\) from the first to the last.
      The first time you encounter a column which is not the zero vector, stop: the index of the column you are at is defined to be \(p_1\).
      Usually the first column itself will be nonzero, so that \(p_1 = 1\).
    2. You are now on the \(p_1\)th column, and it is not a column of zeros.
      By interchanging rows of \(A\), ensure that the topmost entry of this column is nonzero.
      Having done that, multiply Row 1 by a nonzero number to ensure that the topmost entry of this column is actually 1.
      Now subtract suitable multiples of the first row from Row 2, Row 3, \(\dots\), Row \(m\), so that the \(p_1\)th column looks like a 1 followed by \(m-1\) zeros.
      We say \(p_1\) is the first pivot.
    3. Now, focus on the matrix \(A_1\) sitting to the right of the first pivot and below it. More precisely, \(A_1\) is obtained by removing the first row of \(A\) and the first \(p_1\) columns of \(A\). It is of size \( (m-1) \times (n- p_1)\). Repeat the previous two steps for the matrix \(A_1\). We define \(p_2\) for \(A\) to be \(p_1\) plus the value of \(p_1\) for \(A_1\). Either we are able to repeat this process \(m\) times (recall \(A\) has \(m\) rows), or we encounter a matrix \(A_r\) of size \( (m-r) \times (n - p_r) \) with all entries zero. In either case, we can go no further. We stop here.
    4. We need one more step to reach the RREF of \(A\). The matrix obtained above is almost in RREF, except that in the pivot columns the entries below the pivot entries are zero, but the entries above the pivots need not be zero: consider the \(i\)-th row, with pivot in the \(p_i\)th column. From each of the first \(i-1\) rows we subtract a suitable multiple of the \(i\)-th row, to ensure that the \(p_i\)th column looks like all zeros except a 1 in the \(i\)-th row. Having done this, we have the RREF of \(A\).

  2. We defined the rank of a matrix \(A\) as the number of pivot columns in the RREF of \(A\).
    We applied RREF to solving a linear system of \(m\) equations in \(n\) unknowns, \(A \vec x = \vec b\). This method is called Gaussian elimination:
    Obtain the RREF \( [A' |b'] \) of the augmented matrix \( [ A | b ] \).
    If \(p_r = n+1\) for the RREF matrix \( [ A' | b' ] \), i.e. if the last column is a pivot column, then the system is inconsistent and has no solutions.
    Otherwise the solution space has \(n-r\) degrees of freedom: the non-pivot variables are completely free, and the pivot variables are completely determined in terms of the non-pivot variables.
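
The following Python/numpy sketch implements the procedure from point 1 (a sketch only: it merges step 4 into step 2 by clearing the whole pivot column in one pass, and the function name rref and the tolerance tol are my own choices):

    import numpy as np

    def rref(M, tol=1e-12):
        """Return the reduced row echelon form of the matrix M."""
        A = M.astype(float).copy()
        m, n = A.shape
        row = 0                               # row in which the next pivot goes
        for col in range(n):                  # scan the columns left to right
            # find a nonzero entry in this column, at or below `row`
            candidates = np.nonzero(np.abs(A[row:, col]) > tol)[0]
            if candidates.size == 0:
                continue                      # no pivot in this column
            i = row + candidates[0]
            A[[row, i]] = A[[i, row]]         # interchange two rows
            A[row] /= A[row, col]             # scale so the pivot entry is 1
            for r in range(m):                # clear the pivot column above and below
                if r != row:
                    A[r] -= A[r, col] * A[row]
            row += 1
            if row == m:                      # no rows left: stop
                break
        return A

For example, rref(np.array([[0., 2., 4.], [1., 1., 1.]])) returns [[1, 0, -1], [0, 1, 2]], with pivots \(p_1 = 1\) and \(p_2 = 2\).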

Lecture 3 - Aug 10
In this lecture we talked about the following points.
  1. Recap of first week's material.
  2. Definition and examples of linear transformations.
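
One standard example is rotation of the plane by an angle \(\theta\); here is a minimal numpy sketch of its linearity (whether this particular example appeared in the lecture is my assumption):

    import numpy as np

    theta = np.pi / 2                                   # rotation by 90 degrees
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # linearity: R(a*u + b*v) equals a*R(u) + b*R(v)
    u, v = np.array([1., 0.]), np.array([0., 1.])
    a, b = 2.0, -3.0
    print(R @ (a*u + b*v))           # (3, 2), up to rounding
    print(a*(R @ u) + b*(R @ v))     # the same vector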

Lecture 4 - Aug 11
In this lecture we talked about the following points. Instructor's notes for Week 2 are here
  1. Interpreted \(T_A(e_j)\) as the \(j\)-th column of \(A\).
    Interpreted \(A\vec x\) as \( \sum_{j=1}^n x_j \, (j\text{-th column of } A) \).
    Interpreted the \(j\)-th column of \(AB\) as \(Av\), where \(v\) is the \(j\)-th column of \(B\).
  2. A transformation \(T: \mathbb R^n \to \mathbb R^m\) is linear if and only if $$ T(a \vec u + b \vec v) = a T(\vec u) + b T(\vec v) $$ for all vectors \(\vec u, \vec v\) and all scalars \(a, b\).
  3. Composition of two linear transformations, and its relation to matrix multiplication.
  4. An example of the fact that \(AB\) need not equal \(BA\) even if \(A\) and \(B\) are square matrices of the same size.
  5. Since there was time left, we discussed when \(T_A\) is invertible in terms of Gaussian elimination. This will be continued in Lecture 5.
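
A quick numpy check of the interpretations in point 1 and the non-commutativity in point 4 (a sketch; the matrices are arbitrary examples):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])

    e1 = np.array([1, 0])
    print(A @ e1)            # = the 1st column of A
    print(A @ B[:, 1])       # A times the 2nd column of B ...
    print((A @ B)[:, 1])     # ... equals the 2nd column of AB
    print(A @ B)             # [[2, 1], [4, 3]]
    print(B @ A)             # [[3, 4], [1, 2]]: AB and BA differ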

Lecture 5 - Aug 17
In this lecture we talked about the following points.
  1. We showed that if \(A\) is an \(m \times n\) matrix with \(m > n\), then there exists a \(b \in \mathbb R^m\) such that the system of equations \(Ax = b\) is inconsistent.
    We also showed that if \(n > m\), then the system \(Ax = 0\) has a solution other than the trivial one \(x = \vec 0\).
    Actually we showed that \(Ax = 0\) has a unique solution (namely \(x = \vec 0\)) if and only if \(\text{rank}(A) = n\).
  2. If \(A\) is an \(m \times n\) matrix, then the linear transformation \(T_A:\mathbb R^n \to \mathbb R^m\) is invertible if and only if $$ m = n = \text{rank}(A) \qquad \Leftrightarrow \qquad m = n \;\text{ and } \; Ax = 0 \text{ has no solution other than } x = \vec 0.$$
  3. When \(T_A\) is invertible, the inverse is linear and hence of the form \(T_B\) for some \(n \times n\) matrix \(B\) satisfying $$ AB = I_n = BA$$
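
A small numpy illustration of the first point (a sketch with an arbitrary example; inconsistency of \(Ax = b\) shows up as \(\text{rank}[A|b] > \text{rank}(A)\), i.e. a pivot in the last column of the augmented matrix):

    import numpy as np

    A = np.array([[1, 2], [3, 4], [5, 6]])   # 3 x 2, so m > n
    print(np.linalg.matrix_rank(A))          # 2 = n, so Ax = 0 has only x = 0

    b = np.array([0, 0, 1])                  # a right-hand side making Ax = b inconsistent
    Ab = np.column_stack([A, b])             # the augmented matrix [A | b]
    print(np.linalg.matrix_rank(Ab))         # 3 > 2: the system has no solution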

Lecture 6 - Aug 18
In this lecture we talked about the following points. Instructor's notes for Week 3 are here
  1. Inverse and determinant of a \(2 \times 2\) matrix.
  2. Associativity of matrix multiplication, and the distributive property.
  3. We showed that for each elementary row operation on \(m \times n\) matrices there is a corresponding invertible \(m \times m\) matrix denoted \(E\)
    (and known as an elementary matrix) such that \(A \mapsto E A\) realizes the row operation.
  4. We showed that every invertible \(m \times m\) matrix is a product of \(m \times m\) elementary matrices. We used this to compute the inverse of an \(m \times m\) matrix \(A\)
    by row reducing \( [A | I_m ]\) to the form \( [ I_m | A^{-1} ] \).
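
A sketch of this inversion procedure, reusing the rref function from the Lecture 2 sketch (the example matrix is arbitrary; the \(2 \times 2\) formula from point 1, \( \begin{pmatrix} a & b\\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b\\ -c & a \end{pmatrix} \), gives the same answer here):

    import numpy as np

    A = np.array([[2., 1.], [1., 1.]])      # det = 1, so A is invertible
    m = A.shape[0]
    aug = np.column_stack([A, np.eye(m)])   # the augmented matrix [A | I_m]

    R = rref(aug)                           # row reduce (rref from the Lecture 2 sketch)
    A_inv = R[:, m:]                        # the right half is now the inverse of A
    print(A_inv)                            # [[ 1, -1], [-1,  2]]
    print(A @ A_inv)                        # the identity matrix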

Lecture 7 - Aug 24
In this lecture we talked about the following points.
  1. Recap of orthogonal projection and reflection in a plane.
  2. We showed that given a square (say \(n \times n\)) matrix \(A\), if we can find another \(n \times n\) matrix \(B\) such that one of the two conditions
    \(AB = I_n \) and \(BA = I_n\) is satisfied, then the other condition is automatically satisfied, and \(B = A^{-1}, \, A = B^{-1} \).
  3. We showed that if \(E\) is an elementary \(m \times m\) matrix realizing a certain row operation on \(m \times n\) matrices, then we can perform
    the exact same operation on the columns (instead of the rows) of an \(n \times m\) matrix \(B\) by doing \(B \mapsto B E^t\).
  4. We defined the notion of a subspace \(W\) of \(\mathbb R^n\) and the kernel of a linear transformation \(T: \mathbb R^n \to \mathbb R^m\), and showed that
    the image and kernel of \(T\) are subspaces of \(\mathbb R^m\) and \(\mathbb R^n\) respectively.
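
A numpy check of point 3 (a sketch; here \(E\) adds 2 times Row 1 to Row 2 of matrices with 3 rows, so \(B \mapsto BE^t\) adds 2 times Column 1 to Column 2):

    import numpy as np

    E = np.eye(3)
    E[1, 0] = 2                  # E realizes: add 2 * Row 1 to Row 2

    A = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3 x 2
    print(E @ A)                 # the row operation: Row 2 becomes (5, 8)

    B = np.array([[1., 2., 3.], [4., 5., 6.]])     # 2 x 3
    print(B @ E.T)               # the column operation: Column 2 becomes (4, 13)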

Lecture 8 - Aug 25
In this lecture we talked about the following points. Instructor's notes for Week 4 + Week 5 together are here
  1. We found all subspaces of \(\mathbb R^1\) and \(\mathbb R^2\).
  2. Given a subspace \(W \subset \mathbb R^n\), we discussed an algorithm which produces a one-to-one linear transformation \(T:\mathbb R^m \to \mathbb R^n\)
    with image \(W\), where \(m \leq n\). We will go over this again next week; a code sketch follows below.
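
If \(W\) is given as the span of a list of vectors, one version of such an algorithm keeps exactly the vectors that land in pivot columns after row reduction; here is a Python sketch (basis_of_span is my own name, and it reuses the rref function from the Lecture 2 sketch):

    import numpy as np

    def basis_of_span(vectors, tol=1e-12):
        """Given vectors spanning W inside R^n, return a sublist that is a basis of W."""
        A = np.column_stack(vectors).astype(float)
        R = rref(A)                          # rref from the Lecture 2 sketch
        basis, row = [], 0
        for col in range(R.shape[1]):        # pivot columns mark the vectors to keep
            if row < R.shape[0] and abs(R[row, col]) > tol:
                basis.append(vectors[col])
                row += 1
        return basis

    # the matrix of T has these basis vectors as its m columns
    spanning = [np.array([1., 0., 1.]), np.array([2., 0., 2.]), np.array([0., 1., 0.])]
    print(basis_of_span(spanning))           # keeps the 1st and 3rd vectors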

Lecture 9 - Aug 31
In this lecture we talked about the following points.
  1. We again discussed the algorithm that produces a basis for a subspace \(W \subset \mathbb R^n\). We showed that any two bases have the same size,
    and defined the number of elements in a basis of \(W\) to be the dimension of \(W\), denoted \(\text{dim}(W)\).
  2. We also recapped the column operations \( A \mapsto A E^t\), where \(A\) is an \(n \times m\) matrix and \(E\) is an \(m \times m\) elementary matrix.

Lecture 10 - Sept 1
In this lecture we talked about the following points. Instructor's notes for Week 4 + Week 5 together are here
  1. We showed that given an \(m \times n\) matrix \(A\), we can find invertible matrices \(P \text{ and } Q\) of sizes \(n \times n \text{ and } m \times m\)
    respectively, such that $$ Q A P = \begin{pmatrix} I_r & 0\\ 0 & 0 \end{pmatrix} \qquad \text{where } \; r=\text{rank}(A) $$ Clearly \(\text{ker}(QAP)\) has basis \( (e_{r+1}, \dots, e_n) \subset \mathbb R^n\). Therefore the kernel of \(QAP\) has dimension \(n-r\).
    Also \(\text{im}(QAP)\) has basis \( (e_1, \dots, e_r) \subset \mathbb R^m \). Therefore the dimension of the image of \(QAP\) is \(r = \text{rank}(A)\).
  2. We defined nullity of a matrix \(A\) to be the dimension of its kernel.
  3. If \(Q \text{ and } P\) are as above we showed that $$ \text{ker}(QA) = \text{ker}(A), \quad \text{im}(AP)= \text{im}(A); \qquad \text{dim(ker}(AP))= \text{dim(ker}(A)), \quad \text{dim(im}(QA)) = \text{dim(im}(A)) $$
  4. In particular the dimensions of the images of \(A\) and \(QAP\) are the same, and hence equal to \(\text{rank}(A)\).
    Thus we interpreted the rank of a matrix \(A\) (defined as the number of pivots in RREF(\(A\))) as the dimension of the image of \(A\) (i.e. the image of the linear transformation \(x \mapsto Ax\)).
  5. In particular the nullities, i.e. the dimensions of the kernels of \(A\) and \(QAP\), are the same, and hence equal to
    \(n - \text{rank}(A)\). Thus we proved the $$ \text{RANK NULLITY THEOREM: } \; \text{ If } A \text{ is } m \times n, \text{ then } \; \text{rank}(A) + \text{nullity}(A) = n. $$
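
A numerical sanity check of the theorem (a sketch; the example matrix is arbitrary, and numpy's matrix_rank computes the rank):

    import numpy as np

    A = np.array([[1, 2, 3], [2, 4, 6]])    # a 2 x 3 matrix of rank 1
    n = A.shape[1]
    rank = np.linalg.matrix_rank(A)
    nullity = n - rank                      # dimension of ker(A), by the theorem
    print(rank, nullity, rank + nullity)    # 1, 2, and 1 + 2 = 3 = n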

Lecture 11 - Sept 7
In this lecture we talked about the following points.
  1. We talked about change of basis in \(\mathbb R^n\), and related the matrix \(QAP^{-1}\) from last week to change of basis.
  2. Introduced the concept of a vector space.

Lecture 12 - Sept 8
In this lecture we talked about the following points. Instructor's notes for Week 6 are here
  1. We went over the definition of a vector space in more detail and gave examples.
  2. We defined basis, linear independence, span, dimension.

Lecture 13 - Sept 14
In this lecture we talked about the following points.
  1. Recap of abstract vector spaces, bases, dimension, linear independence, and span.
  2. The Rank-Nullity Theorem.

Lecture 14 - Sept 15
In this lecture we talked about the following points. Instructor's notes for Week 7 are here
  1. Finished the Rank-Nullity Theorem, and solved a couple of problems based on it.
  2. Mid term review. Instructor's notes for review are here

Weeks 8 and 9
  1. Mid-term exam on September 22.
  2. Return of the mid-term exam paper on October 1. Answers and the question paper are here


Supplementary Material    (for deeper understanding and challenging problems)

Supplementary Problems

extra problems week 1 answers         extra problems weeks 2 and 3

Supplementary Notes

Week 1
  1. Row equivalence: We say a matrix \(B\) is row equivalent to a matrix \(A\) if \(B\) can be obtained from \(A\) by a sequence of row operations. Since row operations are reversible, it follows that if \(B\) is row equivalent to \(A\), then \(A\) is also row equivalent to \(B\). Recall this is one of the 3 conditions of an equivalence relation (symmetry). The other two are reflexivity (\(A\) is row equivalent to itself) and transitivity: if \(B\) can be obtained from \(A\) by row operations, and further row operations on \(B\) yield \(C\), then clearly \(C\) is obtained from \(A\) by row operations.

  2. Claim: If \(A\) and \(B\) are row equivalent, then the systems \(Ax = 0\) and \(Bx = 0\) have the same solution set.
    Proof: If \(A\) and \(B\) are row equivalent, then the sequence of row operations that turns \(A\) into \(B\) also turns \([A | 0 ]\) into \( [B| 0]\). We proved in class that row operations on the augmented matrix \([A | b]\) do not change the solutions of \(Ax = b\). Therefore the solutions of \(Ax = 0\) and \(Bx = 0\) coincide. \( \qquad \Box \)

  3. Theorem: The RREF of a matrix is unique.
    Proof: Suppose \(A\) and \(B\) are RREF forms of an \(m \times n\) matrix \(M\). Since all three elementary row operations are reversible, it follows that \(B\) can be obtained from \(A\) by applying row operations, and vice versa. In other words, \(A\) and \(B\) are row equivalent. So it suffices to prove that if two RREF matrices \(A\) and \(B\) are row equivalent, then \(A = B\). We do this by induction on the number of columns, \(n\).

    The base case is \(n=1\). As we have seen in Assignment 1, there are only 2 RREF \(m \times 1\) matrices, namely the zero column vector and the column vector which is a 1 followed by \(m-1\) zeros. Surely, one cannot turn the zero column vector into a nonzero column vector by row operations. This finishes the proof of the base case.
    Inductively, assume that the theorem holds when the number of columns is \(n-1\). Writing \( A =[ A' | v ] \) and \(B = [B'|w ]\), observe that \(A'\) and \(B'\) are themselves in RREF, as well as row equivalent. By the inductive hypothesis, we conclude \(A' = B'\). Thus it suffices to show \(v = w\).

    We may also assume neither of \(A\) and \(B\) is the zero matrix, because the only matrix row equivalent to a zero matrix is itself. Let \( p_1 < p_2 < \dots < p_r\) be the pivot sequence of \(A\). If \( p_r < n \), then \(p_1 < p_2 < \dots < p_r\) is also the pivot sequence of \(A' = B'\); moreover the systems \(A'x = v\) and \(B'x = w\) are consistent and, by the Claim above applied to \(A\) and \(B\), have the same solution set. Take a solution \(x = \vec s\) of these systems. Since \(A' = B'\), we have \(v - w = A'\vec s - B'\vec s = 0\).
    If on the other hand \(p_r = n\), then the system \(A' x = v\) is inconsistent. Since \(B\) is row equivalent to \(A\), the Claim above shows that the system \(B'x = w\) is also inconsistent. Recall inconsistency is possible only if the last column of the augmented matrix is a pivot column. Also note that the pivot sequence of \(A' = B'\) must be \( p_1 < p_2 < \dots < p_{r-1} \). Thus the \(r\)-th row of each of \(A\) and \(B\) has its pivot in the last column. This means both \(v\) and \(w\) are equal to the column vector with a \(1\) in the \(r\)-th row and zeros elsewhere. \( \qquad \blacksquare \)
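
A quick numerical illustration of the theorem, reusing the rref sketch from Lecture 2 (the matrices are arbitrary; multiplying on the left by an invertible matrix produces a row equivalent matrix, since every invertible matrix is a product of elementary matrices):

    import numpy as np

    M = np.array([[1., 2., 3.], [2., 4., 8.]])
    E = np.array([[3., 1.], [1., 0.]])   # invertible, so E @ M is row equivalent to M

    print(rref(M))                       # [[1, 2, 0], [0, 0, 1]]
    print(rref(E @ M))                   # the same RREF, as the theorem asserts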