Wednesday, November 14, 2007
Section 5.5
When choosing the maximum number of iterations to compute for GMRES, why is 10 an option? What makes 10 such a good number of iterations compared to 15 or 20? I can see that restarted GMRES and MINRES are good methods for saving time. I'm not sure I completely understand the real-valued function (5.18); I don't know where its terms come from. In Example 5.16, I don't exactly know what the graphs are supposed to be telling me. What I'm confused about with preconditioning is how we can work with a matrix of the form M^-1*A for better conditioning without ever computing M^-1 itself.
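One thing that helped me with that last question is that M^-1 never has to be formed: whenever the method needs M^-1 times a vector, it just solves a system with M instead. Here is a minimal sketch of that idea in MATLAB. The matrix and the choice of M (the diagonal of A, a Jacobi-style preconditioner) are my own illustrative assumptions, not the book's example.

    % Apply M^-1*(A*v) without ever computing inv(M):
    A = gallery('poisson', 10);   % a sample sparse matrix from MATLAB's gallery
    M = diag(diag(A));            % an illustrative preconditioner (Jacobi)
    v = rand(size(A,1), 1);
    w = M \ (A*v);                % solve M*w = A*v instead of forming inv(M)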
Tuesday, November 13, 2007
Section 6.3
I'm not too sure I see the whole point behind fixed-point iteration. Where would this algorithm be applicable? I thought the check for contraction was really easy to understand. I understand the concept behind fixed-point iteration, and I find it interesting that we are connecting it to the ideas of root-finding. I'm not sure I believe all the math yet, but I'm sure that after looking at it more closely that won't be a problem.
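To convince myself, I tried a minimal sketch of the iteration x = g(x) in MATLAB. The function g and the starting guess here are my own illustrative choices, not an example from the book.

    g = @(x) cos(x);              % g is a contraction near its fixed point
    x = 1;                        % initial guess
    for k = 1:50
        xnew = g(x);
        if abs(xnew - x) < 1e-10, break, end
        x = xnew;
    end
    x                             % the fixed point, i.e. the root of x - cos(x) = 0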
Saturday, November 10, 2007
Section 6.2
In inverse quadratic interpolation, the book says to turn the parabola fit through the three most recent points on its side. I was wondering whether it would be more accurate, though more complicated, to use more than three points. I'm also a little unsure about turning the parabola on its side. I know it means the sideways parabola crosses the x-axis only once, but it seems a little strange that the crossing would reliably be a good approximation for the root. I find it hard to believe that the turned parabola would always land there.
I understand the principles behind inverse quadratic interpolation and the quasi-Newton methods pretty well, but I'm not exactly sure about some of the math.
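The way I finally read the "sideways parabola" is that we interpolate x as a quadratic function of y and evaluate it at y = 0. Here is a minimal sketch of one such step in MATLAB; the function f and the three starting points are my own illustrative choices.

    f = @(x) x.^3 - 2;            % root at 2^(1/3)
    x = [1 1.5 1.2];              % three most recent estimates
    y = f(x);
    % Lagrange form of the interpolant x(y), evaluated at y = 0
    xnew = x(1)*y(2)*y(3)/((y(1)-y(2))*(y(1)-y(3))) + ...
           x(2)*y(1)*y(3)/((y(2)-y(1))*(y(2)-y(3))) + ...
           x(3)*y(1)*y(2)/((y(3)-y(1))*(y(3)-y(2)))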
Wednesday, November 7, 2007
Section 6.1
I think I remember learning something like this in one of my calculus classes, but I don't remember it being called Newton's method. I figured this probably would not be used in large cases, but one paragraph said, "Thus when Newton's method works at all, it should converge very quickly." I'm wondering when Newton's method would not work. I was surprised by how the function and its derivative were put into MATLAB in Example 6.2; I don't think I've ever seen that done before. That was interesting.
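What I mean is passing f and f' as anonymous functions. Here is a minimal sketch of Newton's method written that way; the particular function and starting guess are my own choices, not the book's Example 6.2.

    f  = @(x) x.^2 - 2;
    df = @(x) 2*x;
    x = 1;                        % initial guess
    for k = 1:10
        dx = -f(x)/df(x);         % Newton step
        x = x + dx;
        if abs(dx) < 1e-12, break, end
    end
    x                             % approximates sqrt(2)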
Tuesday, November 6, 2007
Section 5.4
The GMRES method seems relatively easy to understand once the Krylov subspace method makes sense. In Equation 5.17, I don't see how the right-hand side would produce a Hessenberg matrix, but I'm not too sure what a Hessenberg matrix is yet. If a Hessenberg matrix is a matrix that is almost triangular, then at what point can you say it is Hessenberg or not?
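As far as I can tell, "upper Hessenberg" is a precise condition, not a matter of degree: every entry below the first subdiagonal must be zero. A quick MATLAB check of that, using a random test matrix of my own choosing and MATLAB's built-in hess function:

    A = rand(5);
    H = hess(A);                                 % upper Hessenberg form of A
    isHess = all(all(abs(tril(H, -2)) < 1e-12))  % true: nothing below the subdiagonal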
Sunday, November 4, 2007
Section 5.3
I understood the Krylov subspace method through the reference to the QR factorization. However, I still wonder how these algorithms compare to each other in running time. I was a little confused by Example 5.6. The comments said that the vector was not yet orthogonal to the first direction, and I was a little confused about what it means to be orthogonal to the first direction, and also what the comments meant when the program continued working through the other directions.
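My current reading of that comment is that each new vector A*q starts out with components along the earlier directions, and those components get subtracted off one direction at a time before the vector is normalized. A minimal sketch of that orthogonalization loop in MATLAB (the matrix and starting vector are my own illustrative choices, not Example 5.6):

    A = rand(6);
    Q = zeros(6, 3);
    Q(:,1) = ones(6,1)/sqrt(6);       % first direction, normalized
    for k = 1:2
        v = A*Q(:,k);                 % next Krylov vector, not yet orthogonal
        for j = 1:k
            v = v - (Q(:,j)'*v)*Q(:,j);   % remove the component along direction j
        end
        Q(:,k+1) = v/norm(v);         % normalize
    end
    Q'*Q                              % close to the identity: columns are orthonormal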
Thursday, November 1, 2007
Section 5.2
I feel like I need more explanation of how 5.3 was derived. It looks correct, but I need to see more of the steps to understand it better. The max function was something new to me, and I find it very useful to know about. The power iteration function also made the concept a lot more understandable. I thought it was very useful to have a way of finding the largest and smallest eigenvalues.
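For my own notes, here is a minimal sketch of power iteration for the largest-magnitude eigenvalue; the matrix is my own illustrative choice rather than the book's example.

    A = [2 1; 1 3];
    x = rand(2,1);
    for k = 1:100
        x = A*x;
        x = x/norm(x);            % keep the iterate at unit length
    end
    lambda = x'*A*x               % Rayleigh quotient approximates the largest eigenvalue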