Tuesday, October 30, 2007

Section 5.1

I remember working with problems in CISC280 that would stop iterating once a solution was good enough, but even then the problems were relatively small and the early stopping did not save much time; it would be nice to learn more about this process in practice. The problem in my CISC280 class dealt with finding square roots that were close enough to the exact answer. The example was very useful. The find function was new to me; I never knew about it before, and it looks very useful. What I thought was interesting, though, was the way MATLAB could tell that S was a sparse matrix. The definition says that a sparse matrix has mostly zero elements, but at what point can you decide that a matrix is sparse? When more than half of the elements are zero?
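
Out of curiosity, here is a small MATLAB sketch of how I might check whether a matrix is worth storing as sparse (the cutoff and the matrix are my own made-up example, not anything from the book):

n = 1000;
e = ones(n,1);
S = spdiags([e -2*e e], -1:1, n, n);   % tridiagonal matrix, stored in sparse form
density = nnz(S) / numel(S)            % fraction of nonzero entries (about 0.3% here)
[i,j,v] = find(S);                     % row indices, column indices, and values of the nonzeros

With only three diagonals the density is well under one percent, so I would guess the useful cutoff is much stricter than "more than half zeros."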

Monday, October 22, 2007

Section 4.6

I would have liked to see a proof of the statement in Theorem 4.6 (Singular Value Decomposition) that if A is real, then so are U and V (which are then orthogonal matrices). I'm having a hard time believing that statement without some sort of proof. I also don't completely understand what a singular vector is or what its application is. I found Example 4.6 very interesting. This example really helped me see how SVD compression would work, and I am surprised by the output and MATLAB's ability to perform this.
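
To see the compression idea for myself, here is a rough MATLAB sketch of a rank-k approximation (a toy example of my own, not the book's Example 4.6):

A = peaks(100);                        % a smooth surface, so its singular values decay quickly
[U,S,V] = svd(A);
k = 8;                                 % keep only the k largest singular values
Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';
relative_error = norm(A - Ak) / norm(A)

Storing U(:,1:k), S(1:k,1:k), and V(:,1:k) takes far fewer numbers than storing A, which is where the compression comes from.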

Sunday, October 21, 2007

Section 4.5

The most difficult part of the reading was understanding what a Hessenberg reduction does and why it is done. The process was easy to understand after looking at the first iteration, and it seems easy to follow, but I don't know if I completely understand why certain steps are done. It's an interesting algorithm for reducing computation time.
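
Here is a quick MATLAB check of what the reduction produces, using the built-in hess (which I assume performs the same kind of reduction the section describes):

A = rand(6);
[P,H] = hess(A);        % H is upper Hessenberg, P is orthogonal, and A = P*H*P'
norm(A - P*H*P')        % near machine precision, so H has the same eigenvalues as A
norm(tril(H,-2))        % everything below the first subdiagonal of H is zero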

Tuesday, October 16, 2007

Section 4.4

For equation (4.12), I assumed that s is a scalar, but if we are shifting the matrix, why would a scalar times an identity matrix be considered shifting the matrix, since it only changes the entries along the diagonal? When you say that the Wilkinson shift has the additional benefit of naturally introducing complex numbers for initially real matrices, does that mean the shift itself is complex? The most difficult part was completely understanding Function 4.1 (QR iteration with Wilkinson shift). I understood quadratic convergence a little better.
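
As far as I can tell, subtracting s*I moves every eigenvalue by s (since (A - s*I)v = (lambda - s)v), which may be why changing only the diagonal still counts as shifting. Here is my rough sketch of shifted QR steps with a Wilkinson-style shift (my own picture of it, not the book's Function 4.1):

A = hess(rand(5));                   % start from upper Hessenberg form
n = size(A,1);
for iter = 1:20
    ev = eig(A(n-1:n, n-1:n));       % eigenvalues of the trailing 2x2 block
    [~,idx] = min(abs(ev - A(n,n)));
    s = ev(idx);                     % Wilkinson shift: the one closest to A(n,n); it can be complex
    [Q,R] = qr(A - s*eye(n));
    A = R*Q + s*eye(n);              % a similarity transform, so the eigenvalues are unchanged
end
abs(A(n,n-1))                        % this subdiagonal entry should be tiny by now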

Tuesday, October 9, 2007

Section 4.3

I understand why Rk Qk = Qk* (Qk Rk) Qk = Qk* Ak Qk, but I wish the labeling were more consistent. When I first looked at this equation I was extremely confused as to why it would be true if Q was orthogonal, which I assumed it was, since we were introduced to orthogonal matrices and QR factorization with Q denoting an orthogonal matrix, and to unitary matrices with U. I think it's interesting how a matrix Ak can converge to an upper triangular matrix, but it's a little harder to see how that is possible. I think the notation in (4.10) is kind of strange; is that supposed to imply a limit? Other than that, the material seems interesting, and I wish there were more explanation of what the function err does in Example 4.3.
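
To convince myself that Ak really does head toward a triangular matrix, I could try something like this in MATLAB (a bare-bones unshifted iteration, simpler than the book's examples):

A = [4 1 0; 1 3 1; 0 1 2];           % symmetric, so the eigenvalues are real
Ak = A;
for k = 1:50
    [Q,R] = qr(Ak);
    Ak = R*Q;                        % equals Q'*Ak*Q, an orthogonal similarity transform
end
Ak                                   % nearly diagonal here; the diagonal entries are the eigenvalues
eig(A)                               % compare with MATLAB's answer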

Monday, October 8, 2007

Section 4.2

There was a sort function used in the example, and I was wondering what kind of sort that function produces, and also why you would have to sort at all for the Bauer-Fike theorem. The Schur decomposition seemed a little easier to comprehend than the Bauer-Fike theorem. I don't really understand the proof of that theorem either.
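
My guess about the sort: eig does not return eigenvalues in any guaranteed order, so to compare the eigenvalues of A with those of a perturbed A + E you sort both lists first so that corresponding values line up. A quick sketch of the kind of comparison I mean (my own example, not the book's):

A = rand(6);  A = A + A';            % symmetric, so the eigenvalues are real
E = 1e-6 * randn(6);  E = (E + E')/2;
lamA  = sort(eig(A));
lamAE = sort(eig(A + E));
max(abs(lamA - lamAE))               % how far the eigenvalues moved under the perturbation E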

Sunday, October 7, 2007

Section 4.1

The most difficult part of this section was remembering everything about eigenvalues. I've never heard of the Hermitian transpose of a matrix, but the process seems doable; I just can't remember what the syntax for Abar is. I remember spans, bases, and independence from Math349, and I really remember doing diagonalizable matrices and similarity transformations in Math302. But I'm having trouble seeing the difference between an eigenvector and an eigenpair if an eigenvector is composed of eigenvalues.
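
My current understanding, for my own question above: an eigenpair is just the pair (lambda, v) with A*v = lambda*v, so the eigenvector is not made of eigenvalues; it is bundled together with one. A tiny MATLAB check (in MATLAB, A' is the conjugate, i.e. Hermitian, transpose):

A = [2 1i; -1i 2];                   % a Hermitian matrix: A equals its conjugate transpose A'
[V,D] = eig(A);
lambda = D(1,1);  v = V(:,1);        % one eigenpair: an eigenvalue together with its eigenvector
norm(A*v - lambda*v)                 % essentially zero, confirming A*v = lambda*v
norm(A - A')                         % zero, since A is Hermitian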

Thursday, October 4, 2007

Section 3.4

I think I need a little more understanding of why the normal equations are an unstable method when the matrix is ill conditioned while the least squares fit is a good one. The algorithm example helped me understand it a little more, though. The condition number for the rectangular case is completely understandable, but I wish the condition number for a square matrix had been introduced earlier, because I don't remember seeing that equation until now in the text, though I could have just missed it or forgotten.
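
Here is a little experiment I could run to see the instability (the matrix is just a made-up ill-conditioned example, not the one from the text): solve the same least squares problem with the normal equations and with MATLAB's backslash, which uses a QR-based method for rectangular systems.

t = linspace(0,1,50)';
A = t.^(0:9);                        % monomial basis on [0,1], badly conditioned
xtrue = ones(10,1);
b = A*xtrue;
x_normal = (A'*A) \ (A'*b);          % normal equations: conditioning behaves like cond(A)^2
x_qr = A \ b;                        % QR-based least squares solve
[norm(x_normal - xtrue), norm(x_qr - xtrue)]

The normal-equations answer should lose roughly twice as many digits as the QR-based one, which is the instability the section is describing.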

Tuesday, October 2, 2007

Section 3.3

Householder reflections are pretty simple to understand. QR factorization was an easy concept, but the algorithm took me some time to completely comprehend. I don't completely understand the normal equations. I think it's an interesting approach for handling rectangular matrices.
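
A small sketch of a single Householder reflection, the way I understand it (my own example): pick v so that the reflection sends x onto a multiple of the first coordinate vector.

x = [3; 1; 4; 1];
v = x;
v(1) = v(1) + sign(x(1))*norm(x);    % choose the sign that avoids cancellation
P = eye(4) - 2*(v*v')/(v'*v);        % P = I - 2*v*v'/(v'*v), orthogonal and symmetric
P*x                                  % all entries below the first are zeroed out

Applying reflections like this column by column is what builds up the R in a QR factorization.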