Before multiplying the matrices, it is instructive to check whether they meet the dimensionality criterion. Here, and , and the column dimension of is equal to the row dimension of . Therefore, exists, and the product will be of dimension . We have
2.
Before multiplying the matrices, it is instructive to check whether they meet the dimensionality criterion. Here, and , and the column dimension of is equal to the row dimension of . Therefore, exists, and the product will be of dimension , and moreover, exists and is of dimension . We have
For , we can either multiply this expression out directly, or use that and exploit that we already know . Using the latter approach,
Finally,
3.
We can simplify multiplication by noting that , i.e. we are looking for the scalar product of with itself. We have
Hence,
This is also the answer you should obtain when multiplying the expression out directly, i.e. without the scalar product simplification. Note that for this approach, the order of multiplication is irrelevant, and you should always obtain the same result.
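The scalar-product simplification can also be checked numerically. The following is a minimal sketch using a hypothetical vector (the exercise's entries are not reproduced here):

```python
import numpy as np

# Hypothetical vector; the exercise's vector is not reproduced here.
x = np.array([1.0, 2.0, 3.0])

# x'x as a scalar product: the sum of squared entries.
scalar_product = x @ x                 # 1 + 4 + 9 = 14.0

# The same result from multiplying out entry by entry.
direct = sum(xi * xi for xi in x)

print(scalar_product, direct)          # 14.0 14.0
```

Both routes give the same number, as the text notes.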
(ii) The course has introduced matrices for left-multiplication that achieve elementary matrix operations. Use the following examples to verify that this approach works:
Left-multiply
to exchange the rows of .
Left-multiply
to add 5 times row 2 to row 1 of .
Left-multiply
to subtract row 1 from row 2 and subsequently multiply row 1 by 3.
1.
Indeed, left-multiplying to has exchanged the rows of .
2.
Indeed, left-multiplying to has left row 2 unchanged; further, it is easily verified that 5 times row 2 has been added to row 1.
3.
Indeed, left-multiplying to has performed the desired operation.
This example shows that several operations can be performed at once, i.e. using only one matrix , or, put differently, that sequential application of operations can be summarized by a single matrix. However, be careful when doing so: all entries in must refer to the initial matrix that you start from, and not to some intermediate matrix obtained after one or multiple operations. As a concrete example, if we wanted to first multiply row 1 by 3 and then subtract this from the second row, we would use , as we subtract 3 times the initial first row from the second.
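The caveat about composing elementary operations can be illustrated numerically. A minimal sketch with a hypothetical 2×2 matrix (not the exercise's entries), following the concrete example of multiplying row 1 by 3 and then subtracting it from row 2:

```python
import numpy as np

# Hypothetical 2x2 matrix; the exercise's entries are not reproduced here.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Step 1: multiply row 1 by 3.
E1 = np.array([[3.0, 0.0],
               [0.0, 1.0]])
# Step 2: subtract the (new) row 1 from row 2. Relative to the INITIAL
# matrix, this means subtracting 3 times the initial row 1.
E2 = np.array([[1.0, 0.0],
               [-1.0, 1.0]])

# Sequential application...
sequential = E2 @ (E1 @ A)
# ...equals left-multiplication by the single matrix C = E2 @ E1,
# whose entries refer to the initial matrix A.
C = E2 @ E1                  # [[3, 0], [-3, 1]]
combined = C @ A

print(np.array_equal(sequential, combined))  # True
```

Note that the combined matrix has a -3 (not -1) in its second row: its entries refer to the initial matrix, exactly as the text warns.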
b. Determinant
(i) For the following matrices, compute the determinant using an appropriate rule.
1.
The rule we are using is that for a matrix , the determinant can be computed as
Accordingly,
2.
The rule we are using is that for a matrix , the determinant can be computed as (recall the intuition of left- and right-diagonals)
Accordingly,
3.
The rule we are using is again that for a matrix , the determinant can be computed as (recall the intuition of left- and right-diagonals)
Because one row is equal to zeros except for one entry, we know that only one right-diagonal and one left-diagonal do not reduce to zero, which simplifies the computation. The same would be true if one column had zeros everywhere but one entry. We obtain
Alternatively, we could have obtained this result from a Laplace-expansion for the first row, which is the more direct/formal way to exploit the zeros:
where the last row uses the 2×2-rule we saw in Ex. 1.b.(i).1. Typically, you would not need to write out the last two summands in the first line when performing the Laplace-expansion; it is done here only for clarity.
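Both determinant rules can be checked numerically. A minimal sketch with hypothetical matrices (the exercise's entries are not reproduced here):

```python
import numpy as np

# Hypothetical 2x2 matrix.
A2 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
# 2x2 rule: det = ad - bc.
det2 = A2[0, 0] * A2[1, 1] - A2[0, 1] * A2[1, 0]   # 1*4 - 2*3 = -2.0

# Hypothetical 3x3 matrix.
A3 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0],
               [7.0, 8.0, 10.0]])
# 3x3 rule: sum over right-diagonals minus sum over left-diagonals.
det3 = (A3[0, 0] * A3[1, 1] * A3[2, 2]
        + A3[0, 1] * A3[1, 2] * A3[2, 0]
        + A3[0, 2] * A3[1, 0] * A3[2, 1]
        - A3[0, 2] * A3[1, 1] * A3[2, 0]
        - A3[0, 0] * A3[1, 2] * A3[2, 1]
        - A3[0, 1] * A3[1, 0] * A3[2, 2])

print(det2, det3)   # -2.0 -3.0
```

Comparing against `np.linalg.det` confirms both hand computations up to rounding.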
(ii) In the course, we said that we typically deal with matrices of manageable size when computing the determinant, or that the matrix has a convenient structure (triangular, diagonal), where computing the determinant is as simple as multiplying all diagonal elements. In some applications, however, you may not be that lucky and must resort to the general Laplace rule. Especially if there are zeros in the matrix, this method is still applied quite easily and quickly; this exercise is supposed to convince you of this fact.
Compute when
When applying the Laplace rule, you typically look for the row or column with the most zeros, as this gives you the shortest sum of sub-determinants in the next step. Here, accordingly, we first expand by the third column. Be careful to correctly carry over the non-zero entry!
Now, the first row has the most zeros, and we will use this row for expansion.
Now, we can apply the 3×3 rule twice:
and
Accordingly, we obtain
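The zero-exploiting Laplace expansion described above translates directly into a short recursive routine. A sketch with a hypothetical sparse 4×4 matrix (not the exercise's entries):

```python
import numpy as np

def det_laplace(M):
    """Determinant via Laplace expansion along the first row, skipping zeros."""
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        if M[0, j] == 0:                 # zero entries contribute nothing
            continue
        # Minor: delete row 0 and column j, then recurse.
        minor = np.delete(np.delete(M, 0, axis=0), j, axis=1)
        total += (-1) ** j * M[0, j] * det_laplace(minor)
    return total

# Hypothetical 4x4 matrix whose third column is zero except for one entry,
# mimicking the sparsity exploited in the exercise.
A = np.array([[1.0, 2.0, 0.0, 3.0],
              [0.0, 1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0, 1.0],
              [1.0, 1.0, 0.0, 2.0]])

print(det_laplace(A), round(np.linalg.det(A), 6))   # 4.0 4.0
```

In a production setting you would use `np.linalg.det` (LU-based) instead; the recursion is only meant to make the expansion rule concrete.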
c. Inversion
(i) For the following matrices, perform a test for invertibility (using e.g. the determinant) and, if possible, compute the inverse matrix, using either a shortcut theorem or the Gauss-Jordan method:
1.
Using the rule for the determinant,
Hence, is invertible. Using the rule for the inverse,
2.
Using the rule for the determinant,
Hence, is not invertible.
Indeed, you could also have used the rank criterion here, as it is relatively straightforward to see that the columns/rows of are linearly dependent; for instance, subtracting 3 times row 3 from row 1 yields row 2. Hence, cannot have full row rank and thus cannot be invertible.
3.
Using the rule for the determinant,
Hence, is invertible.
The solution for the inverse is obtained from Gauss-Jordan. There are infinitely many valid sequences of operations that lead to the inverse; your steps may be different and/or in a different order. The final result you obtain should, however, coincide with the one in this solution.
For Gauss-Jordan, we bring to identity form using elementary matrix operations, and perform the same operations in the same order to an identity matrix of appropriate dimension. As is done commonly, we write the intermediate matrices next to each other at every stage. Initially, we have
The first row can be used to eliminate all but one non-zero entry in the first column. Here, we add two times row 1 to row 3 to obtain
Next, row 2 can be used to eliminate all but one non-zero entry in the second column. Here, we subtract row 2 from row 3 to obtain
Next, we want a 1 in row 3 to be able to eliminate the non-zero entries in the third column. For this, we multiply row 3 by 1/9:
Finally, we subtract 2 times row 3 from row 2 and 4 times row 3 from row 1 to obtain the identity matrix:
Accordingly,
To verify that you have computed the correct inverse, you can multiply it with the original matrix (it does not matter whether you left- or right-multiply, as both products yield the identity for a true inverse) and check whether you indeed arrive at the identity matrix. This is a helpful tool for problem sets or exams that you can use to rule out mistakes, especially if you need the inverse matrix in subsequent problems.
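This verification step is easy to automate. A minimal sketch with a hypothetical invertible matrix (not the exercise's entries):

```python
import numpy as np

# Hypothetical invertible 3x3 matrix.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
A_inv = np.linalg.inv(A)

# For a true inverse, both orders of multiplication give the identity.
left = A_inv @ A
right = A @ A_inv

print(np.allclose(left, np.eye(3)), np.allclose(right, np.eye(3)))  # True True
```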
(ii) In rare cases, you may come across applications where you need to invert “bigger” matrices than we usually deal with, i.e. matrices of higher dimension. Generally, this is quite computation-intensive; however, some matrices make your life easier than others. This exercise gives you some examples where, despite matrix size, invertibility checks and inversion are still manageable.
For the following matrices, perform a test for invertibility (using e.g. the determinant) and, if possible, compute the inverse matrix, using either a shortcut theorem or the Gauss-Jordan method:
1.
is an upper-triangular matrix, for which we know that the determinant is equal to the product of the diagonal entries:
As for any triangular matrix, the determinant is unequal to zero if and only if there are no zeros on the diagonal. Here, has two zeros on the diagonal, and hence, is not invertible.
Alternatively, you could argue that column 2 is the zero vector, which is linearly dependent on any other (set of) vector(s); hence, the matrix cannot have full rank and is therefore not invertible.
2.
is an upper-triangular matrix, and has no zeros on the diagonal. Hence, it is invertible. Due to its particular structure, Gauss-Jordan can be applied with relative ease: first, we multiply every -th row with , , which gives the second stage of Gauss-Jordan:
Now, use the 8th row to eliminate all ones in the 8th column, the 7th row for those in the 7th column, and so forth. This already yields the identity matrix on the LHS. For clarity, this step is divided into several intermediate ones below. For columns 5 through 8, this gives:
Proceeding with rows 3 and 4, where we subtract rows 5 and 7 and rows 6 and 8, respectively,
Note that the and and and , respectively, cancel each other out when computing rows 3 and 4. Proceeding with rows 1 and 2 where we subtract rows 3, 5 and 7 and 4, 6 and 8, respectively, we arrive at the identity on the LHS and obtain on the RHS:
Indeed, with a bit more formality, you can show that any matrix of the form of , i.e. upper-triangular with alternating row index and zero, has an inverse with diagonal equal to one over the row index, the second upper off-diagonal equal to -1 over the row index plus 2, and zeros everywhere else.
3.
For any diagonal matrix, to arrive at the identity, we simply multiply each row by the reciprocal of its diagonal entry. Accordingly, by applying the same operations to an identity matrix, we obtain
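The reciprocal-diagonal rule is quick to confirm numerically. A sketch with a hypothetical diagonal matrix (not the exercise's entries):

```python
import numpy as np

# Hypothetical diagonal entries; all must be non-zero for invertibility.
d = np.array([2.0, -4.0, 0.5])
D = np.diag(d)

# Inverse of a diagonal matrix: take reciprocals on the diagonal.
D_inv = np.diag(1.0 / d)

print(np.allclose(D @ D_inv, np.eye(3)))  # True
```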
Exercise 2: Equation Systems
As we have seen in the course, matrices are a helpful tool in solving systems of linear equations, especially when the number of equations and variables coincide. In the following, you are given two systems of equations. For either system, argue that it has a unique solution and determine it, or otherwise, decide whether the system has no or infinitely many solutions. In the latter case, give at least two different vectors that solve the equation system. Hint: You may need to determine the column space of a matrix at some point. You can do this by using that the column space is the set of linear combinations that you can obtain from any largest set of linearly independent columns of ; e.g. when and the columns , and are linearly independent but are not, then
(i) Solve
To solve the equation system using the matrix approach, we first write it in matrix notation ():
To check for a unique solution, we need to investigate whether the matrix is invertible. Using the determinant test,
Thus, we already know that the system will have a unique solution, given by
To determine the solution, it remains to invert the system’s matrix. As usual, we do this using Gauss-Jordan. The following briefly summarizes a potential sequence of elementary operations that yield the inverse; for more extensive solutions to Gauss-Jordan, please consult Ex. 1.c.
We get the one in the top left corner by multiplying row 1 by ; subsequently, we subtract it from rows 2 and 3 to obtain zeros in column 1 for these rows. We arrive at
We then exchange rows 2 and 3, multiply the new row 2 by , and use it to eliminate the non-zero entry in column 2 in the first and third row to arrive at
Finally, we multiply row 3 by and subtract it from row to obtain
Therefore, the solution is
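The whole procedure — determinant test, then solving the system — can be sketched numerically with a hypothetical system (the exercise's coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical system Ax = b with invertible coefficient matrix A.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])

# A non-zero determinant guarantees a unique solution x = A^{-1} b.
assert np.linalg.det(A) != 0
x = np.linalg.solve(A, b)   # numerically preferable to forming A^{-1} explicitly

print(np.allclose(A @ x, b))  # True
```

Plugging the solution back into the system, as done at the end of the analytic derivation, confirms it.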
(ii) Solve
To solve the equation system using the matrix approach, we first write it in matrix notation ():
To check for a unique solution, we need to investigate whether the matrix is invertible. Using the determinant test,
Hence, the matrix is not invertible, and there is no unique solution to the system of equations.
To determine whether there are no or infinitely many solutions, we compute the column space of the system’s matrix and check whether the vector of solutions is an element of this space, i.e. whether we can use the columns of the matrix to “reach” the solution vector through linear combination.
To compute the column space, we must first know how many linearly independent columns the system’s matrix has. It is relatively easy to see that the answer is 2: if it were 1, then for any two columns of the matrix, there would exist such that . However, this is not the case. If, on the other hand, the matrix had 3 linearly independent columns, it would have full column rank and hence be invertible; this, we have already ruled out using the determinant test. Thus, the matrix has two linearly independent columns.
According to the hint, the column space contains any linear combination that can be obtained from combining two columns of the system’s matrix. Let us pick the first two. Then,
To see if the solution vector is an element of this space, we must check whether there are such that
From the first row, . Plugging this into the third, we obtain and, accordingly, . Plugging this into the second row, we obtain
and thus no contradiction. Hence, it is true that
and therefore , where is the system’s matrix. Therefore, we know that the system has infinitely many solutions.
Comment. For a system with no solutions, we would have proceeded as above but arrived at a contradiction when checking whether the combination found for the first two lines investigated also satisfies the third line. No matter which lines you start out with, you only get a combination consistent with all three lines if the solution vector is an element of the column space of the system’s matrix.
To compute two solutions, we need to find two independent ways of “arriving at” the solution vector by linearly combining the columns of (recall the graphic from chapter 2). A first combination has already been obtained above: we know that using only columns 1 and 2, i.e. setting , we can arrive at the solution vector with and . A second vector can be found by again fixing one of the variables, , or , and computing the combination of the free variables that yields the solution vector. There are infinitely many ways of doing so, one of which is presented below.
Fix . Bringing to the other side, we obtain
From the first line, . Plugging this into the third, , and therefore, . Again, no contradiction is obtained from line two.
In conclusion, two possible solutions are
Of course, because the system has infinitely many solutions, your two examples may well be different from the ones found here. To verify whether your solution is correct, simply plug the solutions you found into the equation system and verify that they solve it. Alternatively, you can check whether it satisfies the general structure given below.
Comment. The general formula for all solutions, i.e. the space of solutions to this equation system, can be obtained by the same logic as above: if we don’t set equal to but bring it to the other side with variable argument, we can solve for and , i.e. for the other two variables as functions of (alternatively, you could also solve for and or and ; it does not matter which variable you “fix”). Proceeding accordingly, we get
Then, we get from line 1 that which, with line 3, yields
Plugging this into , we obtain
Hence, the space of solutions is
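The structure of the solution set — one particular solution, shifted by any vector the matrix maps to zero — can be checked numerically. A sketch with a hypothetical singular system (not the exercise's coefficients):

```python
import numpy as np

# Hypothetical singular system: column 3 = column 1 + column 2, so det = 0.
A = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([3.0, 1.0, 2.0])

assert abs(np.linalg.det(A)) < 1e-12   # not invertible: no unique solution

# Two particular solutions, differing by the null-space vector (1, 1, -1)'.
x1 = np.array([2.0, 1.0, 0.0])
x2 = x1 + np.array([1.0, 1.0, -1.0])

print(np.allclose(A @ x1, b), np.allclose(A @ x2, b))  # True True
```

Because A maps (1, 1, -1)' to zero, adding any multiple of it to a solution yields another solution — this is the infinite family the text describes.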
Exercise 3: Rank, Eigenvalues and Definiteness
a. The Rank of a Square Matrix
Consider the matrix
What is the rank of ? Hint: The rank is invariant to (i.e., unchanged by) elementary matrix operations; the rank of triangular matrices is relatively straightforward to determine using the linear independence test for vectors.
The rank of is easiest to determine from a triangular matrix obtained by applying elementary operations to . Adding times row 1 to row 2 and times row 1 to row 3, one obtains
Next, adding times row 2 to row 3 and times row 2 to row 4, we arrive at
Finally, adding 4 times the third to the fourth row, we obtain the triangular matrix:
It is more or less obvious that the rank of this matrix is : considering the rows of (recall: rank = row rank = column rank), the last row, being the zero vector, is linearly dependent on any other vector in . On the other hand, the remaining three rows are linearly independent. This can be seen from the linear independence test for vectors: if
then from line 1, . From line 2, we obtain , which, with line 3, gives , so that . Hence, for the first three rows of , implies . Therefore, by the linear independence test proposition, are linearly independent. This allows us to conclude that
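The rank computation can be cross-checked numerically; a sketch with a hypothetical 4×4 matrix whose fourth row is the sum of the first two (not the exercise's entries):

```python
import numpy as np

# Hypothetical 4x4 matrix: row 4 = row 1 + row 2, so only 3 rows are
# linearly independent and the rank is 3.
B = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [1.0, 3.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(B))  # 3
```

`np.linalg.matrix_rank` uses a singular-value decomposition internally, but the result agrees with the elimination-based reasoning in the text: one row is redundant, three are independent.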
b. Definiteness and Eigenvalues
Consider the matrix
(i) Is positive or negative (semi-)definite? What can you say about the invertibility of from this fact?
For a vector ,
Typically, when you see that the expression cannot be reduced to squares (either directly or using binomial formulas), or that some squares have opposing signs, you directly know that the matrix is indefinite. To show this, you find two examples where for one and for the other; this rules out all forms of (semi-)definiteness.
For , whereas for , .
Hence, is indefinite.
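The two-vector test for indefiniteness can be sketched as follows, using a hypothetical indefinite matrix (not the exercise's entries):

```python
import numpy as np

# Hypothetical indefinite symmetric matrix: one positive, one negative
# diagonal entry.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

def quad_form(A, x):
    """Evaluate the quadratic form x'Ax."""
    return x @ A @ x

x_pos = np.array([1.0, 0.0])   # x'Ax = 1 > 0
x_neg = np.array([0.0, 1.0])   # x'Ax = -1 < 0

# One positive and one negative value rules out all (semi-)definiteness.
print(quad_form(A, x_pos) > 0, quad_form(A, x_neg) < 0)  # True True
```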
Because is indefinite, it meets none of the definiteness conditions for invertibility, and we cannot use definiteness to judge its invertibility. Note, however, that definiteness provides only a sufficient condition for invertibility, so finding that the matrix is indefinite does not justify concluding that it is non-invertible.
(ii) What are the eigenvalues of ? What can you say about the invertibility of from this fact?
To compute the eigenvalues, we compute the characteristic polynomial of and then set it to zero:
Therefore,
if and only if
This is not an obvious equation to solve; with a bit of analytical reasoning and some experimentation, however, there is still a way to find the solutions. The typical approach to cubic equations is to guess one solution (typically something “easy”) and then compute the other ones from polynomial division.
First of all, we know that is not a solution, as the equation would become , a contradiction.
Hence, we can divide by to obtain
With some experimentation (or trying the usual suspects iteratively), you may find that yields , and therefore solves the equation. Dividing by , one obtains
For the former summand, if and only if
Hence, the eigenvalues of are given by
Because zero is not an element of the eigenvalues of , we know that is invertible.
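The link between eigenvalues and invertibility can be checked numerically; a sketch with a hypothetical symmetric matrix (not the exercise's entries):

```python
import numpy as np

# Hypothetical symmetric 2x2 matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues: the roots of the characteristic polynomial det(A - lambda*I).
eigenvalues = np.sort(np.linalg.eigvals(A).real)

# A matrix is invertible if and only if zero is not an eigenvalue.
invertible = not np.any(np.isclose(eigenvalues, 0.0))

print(eigenvalues, invertible)  # [1. 3.] True
```

Here neither eigenvalue is zero, so the matrix is invertible — the same argument the text applies to the exercise's matrix.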
(Bonus question) If is invertible, compute its inverse.
As (ii) has revealed, is invertible. We compute the inverse using Gauss-Jordan. Eliminating all but the first non-zero entry in column 1 gives
Clearing column 2 gives
Next, let us set up to clear column 3:
Clearing column 3 gives
c. Definiteness of a Matrix Product with its Transpose
Can you say something about the definiteness of where is an arbitrary matrix? Give a formal argument for your answer! Hint 1: Recall that definiteness of a matrix can be thought of as a generalization of a real number’s sign. For real numbers , we know that . Hint 2: If you have already solved the exercises for chapter 1, a sub-question of Ex. 1 should ring a bell here.
Note first that no matter the dimensions of , will always be symmetric, and it is therefore meaningful to talk about the definiteness of : if , then , such that the column dimension of coincides with the row dimension of , and the product exists with dimension .
Second, for any , with , it holds that
so that is always positive semi-definite. Hence, like the product of a (transposed) real number with itself, the respective matrix product is “non-negative”. Further, it is strictly positive if for any , , which is the case whenever has full column rank (cf. the elaborations on OLS in chapter 2). For a real number , has “full column rank” if and only if . Hence, the column rank concept is useful to think about generalizing an object’s property of “not being equal to zero” to the matrix context.
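The positive semi-definiteness of this product holds for any matrix; a numerical sketch with a randomly drawn hypothetical matrix:

```python
import numpy as np

# Any matrix A works here; A'A is always symmetric positive semi-definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
G = A.T @ A                         # 3x3, symmetric

# x'(A'A)x = (Ax)'(Ax) = ||Ax||^2 >= 0 for every x.
x = rng.standard_normal(3)
assert x @ G @ x >= 0

# Equivalently, all eigenvalues of A'A are non-negative.
eigenvalues = np.linalg.eigvalsh(G)
print(np.all(eigenvalues >= -1e-12))  # True (up to floating-point error)
```

With full column rank (as here, almost surely), the eigenvalues are strictly positive and the product is positive definite, matching the remark about OLS.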