Sunday, September 30, 2007

Section 3.2

I'd never heard of a pseudoinverse before, but it seems pretty simple to understand and has useful properties. Orthogonality and orthonormal vectors are also pretty easy to understand and sound familiar, though I can't remember exactly where I've seen them before. I thought the 2-norm property for orthogonal vectors was interesting, and I would have liked to see the proof of Theorem 3.2. The most difficult part for me was the proof of Theorem 3.1.
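To make that 2-norm property concrete for myself, here's a little MATLAB sketch I could run (my own made-up example; I'm assuming the property in question is that multiplying by an orthogonal matrix leaves the 2-norm unchanged):

    % An orthogonal Q (here from a QR factorization) preserves the 2-norm.
    A = rand(4);
    [Q, R] = qr(A);          % Q is orthogonal: Q'*Q = I
    x = rand(4, 1);
    norm(Q*x) - norm(x)      % ~0 up to roundoff

    % For a full-column-rank tall matrix B, the pseudoinverse is (B'*B)\B';
    % MATLAB's pinv(B) computes it via the SVD, so rank-deficient B works too.
    B = rand(5, 3);
    norm(pinv(B) - (B'*B)\B')   % ~0 when B has full column rank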

Thursday, September 27, 2007

Section 3.1

I've heard of the least squares problem before, but I don't remember ever really learning it. I never knew how to solve a least squares problem in MATLAB, or that it would be so simple to understand and use.
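Now that I've seen it, here's roughly how simple it is (a sketch with made-up data, fitting a line in the least squares sense):

    % Fit y = c1 + c2*t to noisy data; backslash solves min ||A*c - y||_2.
    t = (0:5)';
    y = 2 + 3*t + 0.1*randn(6, 1);   % made-up noisy line
    A = [ones(6, 1), t];             % design matrix
    c = A \ y                        % should come out close to [2; 3]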

Sunday, September 23, 2007

Section 2.8

Diagonally dominant matrices and banded matrices were new topics that I had never learned about before, but they are relatively easy to understand. Symmetric positive definite matrices seem a little more confusing, but still somewhat understandable. I think the condition x^T * A * x > 0 is an interesting concept, since it produces a scalar; that shouldn't be a surprise, but it's still interesting. I don't know if I completely understand the purpose of the Cholesky factorization, though.
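Writing out a use for it helps me a little. My understanding (mine, not a quote from the section) is that Cholesky is basically a cheaper LU for symmetric positive definite matrices, so solving A*x = b reduces to two triangular solves:

    % M'*M + I is a standard made-up way to get an SPD matrix.
    M = rand(4);
    A = M'*M + eye(4);
    v = rand(4, 1);
    v'*A*v                  % a scalar, and positive since A is SPD
    b = rand(4, 1);
    R = chol(A);            % A = R'*R; errors out if A is not positive definite
    x = R \ (R' \ b);       % forward solve with R', back solve with R
    norm(A*x - b)           % ~0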

Wednesday, September 19, 2007

Section 2.6/2.7

I don't think I really understand what a matrix norm is. I understand the properties, since those are the same sort of properties that show up in other settings, but I just don't think I understand the concept completely. I learned that a conjugate transpose is the transpose of a vector with complex entries, with each entry replaced by its complex conjugate. Property 3 says:

    ||alpha*x|| = |alpha| ||x||   for any x in R^n and any alpha in R.

I assumed the |alpha| was supposed to be the absolute value of alpha, but I wasn't completely sure of that. I also did not understand equation 2.17, though I understood everything that followed it; I just don't understand what that definition means. I think I understand residuals pretty well, though.
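To test my reading of property 3 numerically (my own toy numbers):

    x = [3; -4];
    alpha = -2;
    norm(alpha*x) - abs(alpha)*norm(x)   % 0: the |alpha| is an absolute value

    % And a residual, which I do understand: r = b - A*xhat.
    A = [2 1; 1 3];
    b = [1; 2];
    xhat = A \ b;
    norm(b - A*xhat)         % tiny residual => xhat nearly solves A*x = b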

Tuesday, September 18, 2007

Section 2.4/2.5

I understood the Big-Oh concept pretty well; I've had a lot of exposure to it in CISC220 and CISC280. I had never heard the term "flop" (a floating-point operation) before, though, so that was interesting to learn. I think I'm still a little confused about row pivoting, but I understand permutation matrices a lot better; I think I remember doing some in MATH349.
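One thing that helped me with permutation matrices: they're just the identity with its rows reordered, and multiplying by one on the left reorders rows the same way, which is all row pivoting does. A quick sketch (made-up matrix):

    P = eye(3);
    P = P([2 1 3], :);       % identity with rows 1 and 2 swapped
    A = magic(3);
    P*A                      % rows 1 and 2 of A are swapped

    % Row (partial) pivoting in LU shows up as exactly such a P:
    [L, U, Pp] = lu(A);      % Pp*A = L*U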

Sunday, September 16, 2007

Section 2.3

The material that took me a little longer to understand was the elementary matrices; it just took me a while to remember how they all work. I don't remember ever learning about LU factorization before, but it seems pretty straightforward and useful. The implementation was hard to understand and picture at first, but after a while it made a lot more sense.
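Here's a small sketch of the factorization in MATLAB (my own example; the three-output form includes row pivoting, which the book gets to in the next section, since that's what MATLAB's lu returns cleanly):

    A = [2 1 1; 4 3 3; 8 7 9];   % made-up nonsingular matrix
    [L, U, P] = lu(A);           % P*A = L*U, L unit lower tri., U upper tri.
    norm(P*A - L*U)              % ~0

    % Solving A*x = b then reduces to two triangular substitutions:
    b = [1; 2; 3];
    x = U \ (L \ (P*b));
    norm(A*x - b)                % ~0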

Thursday, September 13, 2007

Section 2.2

I thought it was very interesting to find out that inverse matrices are hardly ever used in algorithms; I would have thought that was the simplest thing to do when one is needed. Is it inefficient because of time or space? What alternatives are used when the inverse is needed? What algorithm does MATLAB's inverse function use? I also thought forward and backward substitution was very interesting; I liked how that worked.
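Since I liked the substitution part, here's my own toy version of back substitution (saved as backsub.m). My understanding, not something stated in the section, is that backslash does a factorization followed by substitutions like this rather than forming inv(A):

    function x = backsub(U, b)
    % Solve U*x = b for upper-triangular U by back substitution.
    n = length(b);
    x = zeros(n, 1);
    for i = n:-1:1
        % subtract the already-computed components, then divide by the pivot
        x(i) = (b(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);
    end

A quick check: for U = triu(rand(3) + 3*eye(3)) and b = rand(3, 1), norm(U*backsub(U, b) - b) should come out around machine precision.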

Tuesday, September 11, 2007

Section 1.6

I think the example on finding the roots of the quadratic polynomial was very useful for understanding the concept. I completely understood the purpose behind rewriting the code to account for subtractive cancellation, so that the computed numbers truly appear to be the double root they should be. I didn't find any of the information difficult to understand, and I completely understand why this is so important to learn.
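For my own notes, here's a sketch of the idea (a made-up case with one tiny root rather than the book's double-root example):

    % Naive quadratic formula loses the small root to cancellation when
    % b^2 >> 4*a*c, since sqrt(b^2 - 4*a*c) is then nearly equal to b.
    a = 1; b = 1e8; c = 1;
    d = sqrt(b^2 - 4*a*c);
    x_naive = (-b + d) / (2*a)     % inaccurate: -b + d cancels

    % Rewritten: compute the larger-magnitude root first, then use the
    % fact that the roots multiply to c/a. (Assumes b ~= 0.)
    q = -(b + sign(b)*d) / 2;
    x1 = q / a;                    % large root, no cancellation
    x2 = c / q                     % accurate small root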

Sunday, September 9, 2007

Section 1.5

I'm not too sure about the examples, particularly how to compute the relative change; I just don't feel very comfortable with it yet. I understand subtractive cancellation relatively well, but I'm not sure I really understand what an ill-posed problem is. I do think I understand what a condition number is, though.
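To at least nail down the formula: my understanding is that the relative change is just the change divided by the original value (made-up numbers):

    x    = 4.0;                            % original value
    xhat = 4.1;                            % perturbed value
    rel_change = abs(xhat - x) / abs(x)    % 0.025, i.e. 2.5 percent

    % A condition number then compares relative change in the output to
    % relative change in the input; a big ratio means ill conditioned.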

Thursday, September 6, 2007

Section 1.3/1.4

In Section 1.4, I was curious about the reason behind calculating the accurate digits. Does this equation give the number of accurate digits in the approximation compared to the actual solution? I don't completely understand how that equation, which is so similar to the relative error equation (which I do understand), relates to accurate digits. I liked the discussion of truncated strategies in chess; I thought that was very interesting. I'm not sure I completely understood floating-point numbers.
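On the accurate-digits question: if I'm reading the equation right, it's just the negative base-10 log of the relative error, which would answer my own question (my reading, not a quote from the book):

    x    = pi;                         % "true" value
    xhat = 3.14159;                    % approximation (made up)
    relerr = abs(xhat - x) / abs(x);
    digits = -log10(relerr)            % about 6, so ~6 accurate digits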
I liked Section 1.3; I found it very understandable and useful. A lot of the ideas I had seen before in previous computer science courses.

Tuesday, September 4, 2007

Section 1.1/1.2

The most difficult part of the material for me was the Taylor series. I really haven't done much with them since MATH242, and I'm not entirely sure I learned them that well then, either.
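To refresh myself, I tried writing down the simplest example I could think of (mine, not the book's): the truncated Taylor series for e^x about 0 is 1 + x + x^2/2! + ... + x^n/n!, and the error shrinks as more terms are kept:

    x = 0.5;
    n = 6;                                     % terms kept (made up)
    approx = sum(x.^(0:n) ./ factorial(0:n));
    err = abs(approx - exp(x))                 % small, shrinks as n grows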
The most interesting part was the example about symbolic and numerical methods of programming. I think that is really useful to understand when considering the design of a solution to a problem. The whole concept of looking at different ways to program reminds me of previous computer science classes, and I look forward to understanding more about it, because I found the different methods both interesting and useful.
My absolute favorite part was the quote "It's all a lot of simple tricks and nonsense." ... classic.