Wednesday, November 14, 2007

Section 5.5

When calculating the maximum number of iterations that we want to compute for GMRES, why is 10 the choice? Why is 10 a better number of iterations than 15 or 20? I can see that restarted GMRES and MINRES are good methods for saving time. I'm not sure I completely understand the real-valued function (5.18); I don't know where the terms come from. In Example 5.16, I don't exactly know what the graphs are supposed to be telling me. What confuses me about preconditioning, though, is how we can work with a matrix of the form M^-1*A for better conditioning without ever computing M^-1.
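After writing that, I tried a little sketch of my own in MATLAB (the gallery test matrix and the diagonal preconditioner are my own choices, not the book's). As far as I can tell, the trick is that gmres is handed M itself and solves M*z = v whenever it needs to, so M^-1 never actually gets formed:

% My own sketch (not from the text): preconditioned, restarted GMRES.
% The preconditioner M is passed in whole; gmres solves M*z = v at each
% step instead of ever forming inv(M).
A = gallery('poisson', 10);          % a 100x100 sparse test matrix (my choice)
b = ones(size(A,1), 1);
M = diag(diag(A));                   % simple diagonal (Jacobi) preconditioner
x = gmres(A, b, 10, 1e-8, 20, M);    % restart length 10, at most 20 outer cycles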

Tuesday, November 13, 2007

Section 6.3

I'm not too sure I see the whole point behind fixed-point iteration. Where would this algorithm be applicable? I thought the check for contraction was really easy to understand. I understand the concept behind fixed-point iteration, and I find it interesting that we are using the concepts of root-finding to do the fixed-point iterations. I'm not sure I believe all the math yet, but I'm sure that won't be a problem once I look at it more closely.
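To convince myself there was a point to it, I tried a tiny sketch of my own (the function g here is just something I picked, not from the book):

% A bare-bones fixed-point iteration: solve x = g(x) with g(x) = cos(x).
% It converges because |g'(x)| < 1 near the fixed point (a contraction).
g = @(x) cos(x);
x = 1;                          % starting guess
for k = 1:30
    x = g(x);                   % x_{k+1} = g(x_k)
end
disp(x)                         % about 0.7391, where cos(x) = x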

Saturday, November 10, 2007

Section 6.2

In inverse quadratic interpolation, it says to turn the parabola that resulted from the three most recent points on its side. I was wondering whether it would be more accurate, though more complicated, to use more than three points. I'm also a little unsure about turning the parabola on its side. I know it means the parabola then only crosses the x axis once, but it seems a little strange that this would reliably be a good approximation for the root; I find it hard to believe that the turned parabola would always cross near the root.
I understand the principles behind inverse quadratic interpolation and quasi-Newton methods pretty well, but I'm not exactly sure about some of the math.
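To make the "turned on its side" idea concrete for myself, here is a sketch I wrote (the function f and the three points are my own choices): fit x as a quadratic in y through the three points, then evaluate that quadratic at y = 0.

% One inverse quadratic interpolation step for f(x) = x^2 - 2.
f = @(x) x.^2 - 2;
x = [1 1.5 2];                 % three most recent points
y = f(x);
p = polyfit(y, x, 2);          % x as a quadratic function of y ("sideways" parabola)
xnew = polyval(p, 0);          % evaluate at y = 0: the next root estimate
disp(xnew)                     % roughly 1.41, close to sqrt(2)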

Wednesday, November 7, 2007

Section 6.1

I think I remember learning something like this in one of my calculus classes, but I don't remember it being called Newton's method. I figured it probably would not be used in large cases, but there was a statement in a paragraph that said, "Thus when Newton's method works at all, it should converge very quickly." I'm wondering when Newton's method would not work. I was surprised by how the function and its derivative were put into MATLAB in Example 6.2; I don't think I've ever seen that done before. That was interesting.
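Here is a quick sketch I wrote to remind myself how it looks with anonymous functions (my own example, not the book's Example 6.2):

% Newton's method for f(x) = x^2 - 2, with f and f' as anonymous functions.
f  = @(x) x.^2 - 2;
df = @(x) 2*x;
x = 1;                               % starting guess
for k = 1:6
    x = x - f(x)/df(x);              % Newton step
end
disp(x)                              % converges to sqrt(2)
% It fails when f'(x) is zero (or tiny) at an iterate, or when the starting
% guess is too far from a root.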

Tuesday, November 6, 2007

Section 5.4

The GMRES method seems relatively easy to understand after understanding the Krylov subspace method. In Equation 5.17, I don't see how the right-hand side would produce a Hessenberg matrix, but I'm not too sure what a Hessenberg matrix is yet. If a Hessenberg matrix is a matrix that is almost triangular, then at what point can you say it is Hessenberg or not?
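After writing that down I realized the definition is actually exact, not a judgment call: upper Hessenberg means every entry below the first subdiagonal is zero. Here is my own little check (the matrix is just random):

% Build an upper Hessenberg matrix and test the definition directly.
H = triu(rand(5), -1);              % keep the first subdiagonal and everything above
disp(H)
disp(all(all(tril(H, -2) == 0)))    % 1: nothing below the first subdiagonal, so Hessenberg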

Sunday, November 4, 2007

Section 5.3

I understood the Krylov subspace method with the reference to the QR factorization. However, I still wonder how these algorithms compare to each other in running time. I was a little confused in Example 5.6. The comments said that the vector was not yet orthogonal to the first direction, and I was confused about what it means to be orthogonal to the first direction, and also about what was meant when the program continued working in different directions.
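Here is my own little sketch of what I think "orthogonal to the first direction" means (the matrices are just random, not from Example 5.6): after subtracting off the component along q1, the remainder has zero dot product with q1.

% Orthogonalize a new Krylov vector against the first direction q1.
A = rand(5);
u = rand(5,1);
q1 = u / norm(u);                   % first direction
v = A*q1;                           % next Krylov vector
v = v - (q1'*v)*q1;                 % remove the component along q1
disp(q1'*v)                         % essentially zero: now orthogonal to q1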

Thursday, November 1, 2007

Section 5.2

I feel like I need more explanation of how (5.3) was derived. It looks correct, but I need to see more of the steps to really understand it. The max function was something new that I learned, and I find it very useful to know about. The power iteration function also made the concept a lot more understandable to me. I thought it was very useful to have a way of finding the largest and smallest eigenvalues.
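Here is a bare-bones power iteration I wrote myself to watch it work (the 2x2 matrix is mine, not from the text):

% Power iteration for the largest eigenvalue (in magnitude).  Replacing the
% A*x step with a solve, A\x, gives the smallest one instead.
A = [2 1; 1 3];
x = rand(2,1);
for k = 1:50
    x = A*x;
    x = x / norm(x);                % keep the iterate from blowing up
end
lambda = x'*A*x;                    % Rayleigh quotient estimate
disp(lambda)                        % about 3.618, the dominant eigenvalue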

Tuesday, October 30, 2007

Section 5.1

I remember working with problems in CISC280 that would stop iterating when a solution was good enough, but even then the problems were relatively small and did not save much time, so it would be nice to learn more about this process in practice. The problem in my CISC280 class dealt with finding square roots that were close enough to the exact answer. The example was very useful. find was a new MATLAB function that I never knew about before; it looks very useful. What I thought was interesting, though, was the way that MATLAB could tell that S was a sparse matrix. The definition says that a sparse matrix is a matrix that has mostly zero elements, but at what point can you decide that a matrix is sparse? When more than half of the elements are zero?
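I think the honest answer is that MATLAB treats "sparse" as a storage format rather than a percentage, which is how it can tell that S is sparse. Here is my own little check (my example, not the book's):

% Fraction of nonzeros, and the storage-format test.
S = speye(1000);                    % 1000x1000 identity stored sparsely
density = nnz(S) / numel(S);        % fraction of nonzero entries
disp(density)                       % 0.001 here
disp(issparse(S))                   % 1: S is stored in sparse format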

Monday, October 22, 2007

Section 4.6

I would have liked to see a proof of the statement in Theorem 4.6 (Singular Value Decomposition) that if A is real, then so are U and V (which are then orthogonal matrices). I'm having a hard time believing that statement without some sort of proof. I also don't completely understand what a singular vector is and what its application is. I really found Example 4.6 to be very interesting. This example really helped me see how SVD compression would work, and I am really surprised by the output and MATLAB's ability to perform this.
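To see the compression idea outside the book's example, I tried my own sketch (the "image" here is just the smooth peaks surface, my own stand-in; a real photo would need a larger k):

% SVD compression: keep only the k largest singular values and their vectors.
A = peaks(100);                     % stand-in for an image
[U, S, V] = svd(A);
k = 10;
Ak = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % rank-k approximation
disp(norm(A - Ak) / norm(A))        % relative error of the compressed copy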

Sunday, October 21, 2007

Section 4.5

The most difficult part of the reading was understanding what a Hessenberg reduction does and why. The process was easy to understand after looking at the first iteration, and it seems easy to follow, but I don't know if I completely understand why you do certain steps. It's an interesting algorithm for reducing time.
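Here is a quick look I did on my own (a random matrix, not from the text) at what the reduction actually produces: a matrix similar to A, so the eigenvalues are unchanged, but with zeros below the first subdiagonal, which is what makes the later QR steps cheaper.

% Hessenberg reduction with MATLAB's hess.
A = rand(5);
H = hess(A);
disp(H)                                        % note the zeros below the first subdiagonal
disp(sort(abs(eig(A))) - sort(abs(eig(H))))    % essentially zero: same spectrum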

Tuesday, October 16, 2007

Section 4.4

For equation (4.12), I assumed that s is a scalar, but if we are shifting the matrix, why is a scalar times an identity matrix considered shifting the matrix, since it only changes the entries along the diagonal? When you say that the Wilkinson shift has the additional benefit of naturally introducing complex numbers for initially real matrices, does that mean that you are introducing a shift which is complex? The most difficult part was completely understanding Function 4.1 (QR iteration with Wilkinson shift). I understood quadratic convergence a little better.
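Here is my own sketch of one shifted QR step, using my understanding of the Wilkinson shift (the eigenvalue of the trailing 2x2 block closest to the last diagonal entry); the matrix is just a random one reduced to Hessenberg form first. Subtracting s*I only touches the diagonal, but that is exactly what moves every eigenvalue by s, which is the sense of "shifting"; and if the 2x2 block has complex eigenvalues, s itself comes out complex.

% One QR step with a Wilkinson shift (my own sketch, not Function 4.1).
A = rand(5);  A = hess(A);
n = size(A,1);
B = A(n-1:n, n-1:n);                 % trailing 2x2 block
mu = eig(B);
[dmin, i] = min(abs(mu - A(n,n)));
s = mu(i);                           % Wilkinson shift: eigenvalue of B nearest A(n,n)
[Q, R] = qr(A - s*eye(n));
A = R*Q + s*eye(n);                  % next iterate, still similar to the original A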

Tuesday, October 9, 2007

Section 4.3

I understand why Rk Qk = Qk^* Qk Rk Qk, but I wish the labeling were more consistent. When I first looked at this equation I was extremely confused as to why it would be true if Q was orthogonal, which I assumed it would be, since we were introduced to QR factorization with Q being an orthogonal matrix and to unitary matrices with U. I think it's interesting how a matrix Ak can converge to an upper triangular matrix, but that is a little harder to see. I also think the notation in (4.10) is kind of strange; is that supposed to imply a limit? Other than that, the material seems interesting, and I wish there were more explanation of what the function err does in Example 4.3.
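Since Qk^* Qk = I, that equation just says Rk Qk = Qk^* Ak Qk, so every iterate is similar to the previous one and the eigenvalues never change. To see the convergence toward triangular form for myself, I ran this little sketch (my own small symmetric matrix, not from the text):

% Plain (unshifted) QR iteration.
A = [4 1 0; 1 3 1; 0 1 2];
for k = 1:50
    [Q, R] = qr(A);
    A = R*Q;                         % equals Q'*A*Q: similar to the previous A
end
disp(A)                              % nearly triangular (here nearly diagonal, since A
                                     % was symmetric); diagonal entries ~ eigenvalues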

Monday, October 8, 2007

Section 4.2

There was a sort function used in the example, and I was wondering what kind of sort that function produces, and also why you would have to sort at all for the Bauer-Fike Theorem. The Schur decomposition seemed a little easier to comprehend than the Bauer-Fike theorem. I don't really understand the proof of that theorem either.
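Here is my own guess at why the sorting is there, written as a sketch (the matrices are ones I made up): you sort both sets of eigenvalues so they line up before comparing them, and the Bauer-Fike bound caps how far any eigenvalue of the perturbed matrix can sit from an eigenvalue of A.

% Comparing eigenvalues of A and of a slightly perturbed A.
A = [3 1 0; 0 2 1; 0 0 1];               % eigenvalues 3, 2, 1
E = 1e-6 * rand(3);
[V, D] = eig(A);
shift = abs(sort(eig(A + E)) - sort(eig(A)));
disp(max(shift))
disp(cond(V) * norm(E))                  % the Bauer-Fike bound, which is larger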

Sunday, October 7, 2007

Section 4.1

The most difficult part of this section was remembering everything about eigenvalues. I've never heard of a hermitian transpose of a matrix, but the process seems doable; I just can't remember what the syntax for Abar is. I remember spans, bases, and independence from Math349, and I really remember doing diagonalizable matrices and similarity transformations in Math302. But I'm having trouble seeing the difference between an eigenvector and an eigenpair, if an eigenvector is composed of eigenvalues.
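Here is the MATLAB note I made for myself (my own example): A' is the hermitian (conjugate) transpose, A.' is the plain transpose, and an eigenpair is a pair (lambda, v) with A*v = lambda*v, rather than a vector built out of eigenvalues.

% Hermitian transpose vs. plain transpose, and one eigenpair.
A = [2 1i; 0 3];
disp(A')                             % conjugate transpose: the 1i becomes -1i
disp(A.')                            % plain transpose: the 1i stays 1i
[V, D] = eig(A);
lambda = D(1,1);  v = V(:,1);        % one eigenpair
disp(norm(A*v - lambda*v))           % essentially zero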

Thursday, October 4, 2007

Section 3.4

I think I need a little more understanding of how the normal equations are an unstable method when the matrix is ill conditioned while the least squares fit is a good one. I think the algorithm example helped me understand it a little more, though. The condition number for the rectangular case is completely understandable, but I wish the condition number for a square matrix had been introduced earlier, because I don't remember seeing that equation until now in the text, although I could have just missed it or forgotten.
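Here is an experiment I ran on my own (the monomial matrix and exact solution are my choices) that made the instability clearer to me: forming A'*A squares the condition number, while backslash solves the least squares problem from A itself.

% Normal equations vs. backslash on an ill-conditioned least squares problem.
t = linspace(0, 1, 100)';
A = zeros(100, 10);
for j = 1:10
    A(:, j) = t.^(j - 1);            % columns 1, t, t^2, ..., t^9
end
xexact = ones(10, 1);
b = A * xexact;
x1 = (A'*A) \ (A'*b);                % normal equations
x2 = A \ b;                          % least squares via backslash
disp([cond(A)  cond(A'*A)])          % the second is the square of the first
disp([norm(x1 - xexact)  norm(x2 - xexact)])   % x1 loses far more accuracy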

Tuesday, October 2, 2007

Section 3.3

Householder reflections are pretty simple to understand. QR factorization was an easy concept to understand, but the algorithm took me some time to completely comprehend. I don't completely understand the normal equations. I think it's an interesting concept for handling rectangular matrices.
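Here is a tiny Householder reflection I built by hand (the vector x is my own) to watch it zero out everything below the first entry, which is the whole action of one QR step:

% Build the reflector that sends x onto a multiple of e1.
x = [3; 4; 0];
v = x;
v(1) = v(1) + sign(x(1))*norm(x);    % v = x + sign(x1)*||x||*e1
P = eye(3) - 2*(v*v')/(v'*v);        % the Householder reflector
disp(P*x)                            % [-5; 0; 0]: zeros below the first entry
disp(norm(P'*P - eye(3)))            % essentially zero: P is orthogonal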

Sunday, September 30, 2007

Section 3.2

I've never heard of a pseudoinverse before, but it seems pretty simple to understand and has useful properties. Orthogonality and orthonormal vectors are also pretty easy to understand and sound familiar, but I can't remember exactly where I've heard of them before. I thought the 2-norm property that applies to orthogonal vectors was interesting. I thought Theorem 3.2 was interesting and would have been interested in seeing its proof. The most difficult part for me to understand was the proof of Theorem 3.1.
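Here is a quick check I did of the 2-norm property and of the pseudoinverse (the matrices are just random, my own example):

% Multiplying by an orthogonal Q leaves the 2-norm alone, and pinv(A) acts
% as a left inverse when A has full column rank.
[Q, R] = qr(rand(4));                % Q is orthogonal
x = rand(4, 1);
disp([norm(x)  norm(Q*x)])           % the same number twice
A = rand(5, 3);
disp(norm(pinv(A)*A - eye(3)))       % essentially zero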

Thursday, September 27, 2007

Section 3.1

I've heard of the least squares problem before, but I don't think I ever really learned it. I never knew how to solve a least squares problem in MATLAB, or that it would be so simple to understand and use.

Sunday, September 23, 2007

Section 2.8

Diagonally dominant matrices and banded matrices were new topics that I had never learned about before, but they are relatively easy to understand. Symmetric positive definite matrices seem a little more confusing, but somewhat understandable. I think the condition x^T A x > 0 is an interesting concept, since it produces a scalar, which shouldn't be a surprise, but it's just interesting. I don't know if I completely understand the purpose of the Cholesky factorization, though.
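Here is my own little sketch of what I think Cholesky buys you: for a symmetric positive definite A you get A = R'*R with a single triangular factor, so A*x = b takes just two triangular solves (the matrix below is one I made up and checked is positive definite):

% Cholesky factor, then forward and backward substitution.
A = [4 1 1; 1 3 0; 1 0 2];           % symmetric positive definite
b = [1; 2; 3];
R = chol(A);                         % upper triangular, A = R'*R
x = R \ (R' \ b);                    % forward then backward substitution
disp(norm(A*x - b))                  % essentially zero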

Wednesday, September 19, 2007

Section 2.6/2.7

I don't think I really understand what a matrix norm is. I understand the properties, because those are just normal properties that can be seen in other settings, but I just don't think I understand this completely. I learned that the conjugate transpose of a vector with complex entries is its transpose with each entry conjugated. Property 3 says:
||alpha*x|| = |alpha| ||x|| for any x in R^n and alpha in R. I assumed the second factor was supposed to be the absolute value of alpha, but I wasn't completely sure of that. I also did not understand equation 2.17, though I understood everything that followed it; I just don't understand what that definition means. I think I really understand residuals pretty well, though.
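Here is the one-line check I did in MATLAB (my own numbers); the absolute value is exactly what makes the property work for a negative alpha:

x = [1; -2; 3];
alpha = -2.5;
disp([norm(alpha*x)  abs(alpha)*norm(x)])   % the same number twice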

Tuesday, September 18, 2007

Section 2.4/2.5

I understood the Big-Oh concept pretty well; I've had a lot of exposure to that in CISC220 and CISC280. I had never heard the term flop before, though, so that was interesting to learn. I think I'm still a little confused about row pivoting. I think I understand permutation matrices a lot better; I think I remember doing some in Math349.
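Here is the little picture that helped me with permutation matrices (my own example): a permutation matrix is just the identity with its rows shuffled, and multiplying by it shuffles the rows of A the same way, which is all row pivoting does.

% A permutation matrix reorders rows.
P = eye(4);
P = P([3 1 4 2], :);                 % a permutation matrix
A = magic(4);
disp(P*A - A([3 1 4 2], :))          % all zeros: P*A reorders the rows of A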

Sunday, September 16, 2007

Section 2.3

The material that took me a little longer to understand was the elementary matrices; it just took me a while to remember it all. I don't remember ever learning about LU factorization before, but it seems pretty straightforward and useful. The implementation was hard to understand and picture at first, but after a while it made a lot more sense.
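Here is my own sketch of how the factorization gets used (the matrix and right-hand side are made up): factor once, then solve with one forward and one backward substitution.

% Solve A*x = b via LU factorization.
A = [2 1 1; 4 3 3; 8 7 9];
b = [1; 2; 3];
[L, U, P] = lu(A);                   % P*A = L*U
y = L \ (P*b);                       % forward substitution
x = U \ y;                           % backward substitution
disp(norm(A*x - b))                  % essentially zero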

Thursday, September 13, 2007

Section 2.2

I thought it was very interesting to find out that inverse matrices are hardly ever used in algorithms. I thought that would be the simplest thing to do if it were necessary. Is it inefficient because of time or space? What alternatives are used when the inverse is needed? What algorithm does MATLAB's inv function use? I thought forward and backward substitution was also very interesting; I liked how that worked.
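Here is a rough experiment I tried myself (the well-conditioned matrix is just made up): compare x = A\b with inv(A)*b. The timings will vary from machine to machine, but backslash does less work, one factorization and two triangular solves, and its residual is typically at least as good.

% Backslash vs. explicit inverse.
n = 500;
A = rand(n) + n*eye(n);              % a well-conditioned test matrix
b = rand(n, 1);
tic; x1 = A \ b;      t1 = toc;
tic; x2 = inv(A)*b;   t2 = toc;
disp([t1 t2])                        % backslash is usually faster
disp([norm(A*x1 - b)  norm(A*x2 - b)])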

Tuesday, September 11, 2007

Section 1.6

I think the example on finding the roots of the quadratic polynomial was very useful in understanding the concept. I completely understood the purpose behind rewriting the code to take subtractive cancellation into consideration, so that the numbers truly appear to be a double root like they should. I didn't find any of the information difficult to understand, and I completely understand why this is so important to learn.
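Here is the check I did on my own (the coefficients are chosen so the exact roots are 1e7 and 1e-7; my own numbers, not the book's): get the safe root from the usual formula and the other one from the product of the roots, c/a, instead of subtracting nearly equal numbers.

% Cancellation in the quadratic formula for a*x^2 + b*x + c.
a = 1;  b = -(1e7 + 1e-7);  c = 1;         % exact roots are 1e7 and 1e-7
r1 = (-b + sqrt(b^2 - 4*a*c)) / (2*a);     % safe: no cancellation for this sign
r2bad  = (-b - sqrt(b^2 - 4*a*c)) / (2*a); % cancellation here
r2good = c / (a*r1);                       % product of the roots is c/a
disp([r2bad r2good])                       % r2good is far closer to 1e-7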

Sunday, September 9, 2007

Section 1.5

I'm not too sure about the examples. I'm not too sure how to compute the relative change; I just don't feel very comfortable with it. I understand subtractive cancellation relatively well, but I'm not too sure I really understand what an ill-posed problem is. I think I understand what a condition number is, though.

Thursday, September 6, 2007

Section 1.3/1.4

In Section 1.4, I was curious about the reason behind calculating the accurate digits. Does this equation give the number of accurate digits in the approximation relative to the actual solution? I don't completely understand how that equation, which is so similar to the relative error (which I do understand), relates to counting accurate digits. I liked the discussion of truncation strategies found in chess; I thought that was very interesting. I'm not sure I completely understood the floating-point numbers.
I liked Section 1.3. I found the section very understandable and useful. A lot of the ideas I have seen before in previous computer science courses.
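Here is how I finally read the accurate-digits idea, as a tiny sketch of my own (pi and 3.14159 are just my example): it is the negative log of the relative error, so it counts roughly how many leading digits match.

exact  = pi;
approx = 3.14159;
relerr = abs(approx - exact) / abs(exact);
acc_digits = -log10(relerr);
disp([relerr acc_digits])            % about 8.4e-7 and about 6 accurate digits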

Tuesday, September 4, 2007

Section 1.1/1.2

The most difficult part of the material for me was the Taylor series. I really haven't done much with them since Math242, and I'm not entirely sure I learned them that well either.
The most interesting part was the example given about symbolic and numerical methods of programming. I think that is really useful to understand when considering the design for a problem. The whole concept of looking at different ways to program reminds me of previous computer science classes, and I look forward to understanding more about it because I found the different methods both interesting and useful.
My absolute favorite part was the quote "It's all a lot of simple tricks and nonsense." Classic.

Thursday, August 30, 2007

First Post

Name: Lucero Carmona
Year: Junior
Major: Mathematical Sciences B.S.
Minors: Computer Science and Art History

Previous Math Courses:
Math242 - Calculus II
Math243 - Calculus III
Math210 - Discrete Math
Math245 - Proof
Math268 - Perspectives on Mathematics
Math302 - Differential Equations
Math349 - Linear Algebra

Weakest Part of Background:
I think the weakest part of my math background is probability.

Strongest Part of Background:
My strongest part is probably ordinary differential equations.

Purpose of Taking Course:
I'm taking this course because it's part of the major requirement. I'm also really interested in programming in MATLAB and learning more about algorithms.

Special Thoughts:
I really look forward to learning more about algorithms and some general programming in MATLAB.

Other Interests:
I'm also interested in programming and art.

Worst Math Teacher's Action:
I just really couldn't understand what the professor was talking about. He had a heavy accent and I just couldn't follow what was going on.

Best Math Teacher's Action:
The professor just explained concepts really well and had amazing notes that were very useful.

Additional Comments:
Nothing yet!