Cramer's rule:
Use on small systems only; finding the determinants gets more and more time consuming as the system grows. Each unknown is x_i = det(A_i)/det(A), where [A_i] is [A] with column i replaced by {B}.
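A tiny sketch of Cramer's rule on a 2x2 in Python (made-up example numbers); fine at this size, but the determinant work is exactly what blows up for big n:

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21        # det[A]; singular system if this is 0
    x1 = (b1 * a22 - a12 * b2) / det   # numerator: column 1 replaced by {B}
    x2 = (a11 * b2 - b1 * a21) / det   # numerator: column 2 replaced by {B}
    return x1, x2

print(cramer_2x2(3.0, 2.0, -1.0, 2.0, 18.0, 2.0))   # -> (4.0, 3.0)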
Naive GE.
Forward elimination, then backward substitution.
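Here's a minimal Python sketch of naive GE, assuming no zero pivots turn up; the 3x3 is just a standard well-behaved test system, not anything specific from these notes:

def gauss_eliminate(a, b):
    n = len(b)
    # forward elimination: reduce [A] to upper triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
            b[i] -= factor * b[k]
    # backward substitution: solve from the bottom row up
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x

a = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
print(gauss_eliminate(a, [7.85, -19.3, 71.4]))   # ~ [3.0, -2.5, 7.0]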
Operation counting
most of the effort is incurred in the elimination portion, as opposed to the back sub: elimination grows like 2n^3/3 flops while back substitution is only about n^2, so for n = 100 that's roughly 667,000 vs. 10,000.
pitfalls
dividing by 0 is bad; so is dividing by numbers close to 0. Pivoting tries to work around this.
round-off errors: every result is based on a previous result, so errors tend to propagate.
Rule of thumb: round-off is very important when dealing with 100+ eqs.
ill-conditioned systems
- well-conditioned systems are ones where a small change in one or more of the coefficients results in a small change in the solution
- Ill-conditioned systems are those where small changes in coefficients result in large changes in the solution.
- an ill conditioned system is one in which the determinant is close to 0
For two equations, equal slopes mean the determinant a11*a22 - a12*a21 = 0: the lines are either parallel (no solution) or the same line (infinitely many solutions). But the determinant is a relative number affected by the scale of its coefficients, so its raw magnitude only says something about conditioning after the equations are scaled/normalized.
determinant of a diagonal (or triangular) matrix is just the product of its diagonal elements
the determinant of a singular system IS 0
GE will recognize this at the end of the elimination step, but you can also check the determinant once you have set up the triangular matrix: det[A] is the product of the pivots, with a sign flip for each row interchange.
Techniques for improving solns.
use more sig. figs.
pivot: issues occur when the pivot element is 0; pivoting is also a small remedy for ill-conditioning.
partial pivoting: switch the pivot row with the row below it whose entry in the pivot column has the largest absolute value. Basically, just switch to the row with the biggest pivot-column element.
complete pivoting: switch to the biggest absolute value in the whole remaining matrix, so both column and row interchanges. Rarely used in practice, since swapping columns changes the order of the unknowns.
pivoting algorithm: for large matrices, physically exchanging rows becomes time consuming; just keep track of the appropriate subscripts instead (see the sketch below).
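A sketch of partial pivoting done with an index vector (names are mine, not the book's): the rows stay put and only the list of subscripts gets swapped:

def eliminate_with_row_index(a, b):
    n = len(b)
    order = list(range(n))               # row subscripts; swap these, not rows
    for k in range(n - 1):
        # find the row with the largest |entry| in pivot column k
        p = max(range(k, n), key=lambda r: abs(a[order[r]][k]))
        order[k], order[p] = order[p], order[k]
        for i in range(k + 1, n):
            factor = a[order[i]][k] / a[order[k]][k]
            for j in range(k, n):
                a[order[i]][j] -= factor * a[order[k]][j]
            b[order[i]] -= factor * b[order[k]]
    return order                          # back sub then walks rows in this order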
not understanding the multidimensional version of the Newton-Raphson method; need to re-look at the Taylor series expansion. That [Z] = ... stuff: [Z] is the Jacobian, the matrix of partial derivatives of each equation with respect to each unknown, and each iteration solves a linear system built from it. Non-linear equations look hard...
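A sketch of the two-equation Newton-Raphson to make the [Z]/Jacobian business concrete; the test pair u = x^2 + xy - 10 and v = y + 3xy^2 - 57 is a standard example with root (2, 3):

def newton_raphson_2d(x, y, iters=20):
    for _ in range(iters):
        u = x*x + x*y - 10.0              # u(x, y) = 0
        v = y + 3.0*x*y*y - 57.0          # v(x, y) = 0
        ux, uy = 2.0*x + y, x             # partial derivatives: the Jacobian [Z]
        vx, vy = 3.0*y*y, 1.0 + 6.0*x*y
        det = ux*vy - uy*vx               # determinant of [Z]
        x -= (u*vy - v*uy) / det          # Cramer's rule on the linearized system
        y -= (v*ux - u*vx) / det
    return x, y

print(newton_raphson_2d(1.5, 3.5))        # converges to ~(2.0, 3.0)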
Also, complex systems have a few workarounds; 3 approaches are recommended. Remember Fortran? Your ex-best friend??? Declaring variables as COMPLEX is something special to it apparently; languages without a complex type have to split the system into separate real and imaginary parts.
Gauss-Jordan method is nothing more than Gauss elimination without the back substitution stage: you forward eliminate, but instead of aiming for a triangular matrix, you normalize each pivot row and eliminate every element in the pivot column except the pivot itself, leaving the identity matrix. Just like the method by hand, typically.
There is also the concept of flops (floating-point operations). Know the definition; with GJ, more flops are needed than with GE (roughly n^3 vs. 2n^3/3, about 50% more).
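A minimal GJ sketch in Python (no pivoting, so a zero pivot would break it):

def gauss_jordan(a, b):
    n = len(b)
    for k in range(n):
        piv = a[k][k]
        a[k] = [v / piv for v in a[k]]    # normalize pivot row: 1 on the diagonal
        b[k] /= piv
        for i in range(n):                # zero out column k in every other row
            if i != k:
                factor = a[i][k]
                a[i] = [a[i][j] - factor * a[k][j] for j in range(n)]
                b[i] -= factor * b[k]
    return b                              # [A] is now the identity, so b holds {X}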
===
chapter 10
LU Decomposition
only operates on the matrix coefficients; also, if you have many right-side vectors {B}, you only need to decompose the coefficient matrix [A] once.
U is pretty straightforward: just forward eliminate until you get a triangular matrix.
to get L, you put in the f-factors and use [L] as their holder. Diagonals are all 1's, and
f21 = a21/a11,
f31 = a31/a11,
f32 = a'32/a'22
[A] --> [L][U]
[U] = [a11, a12,  a13;
       0,   a'22, a'23;
       0,   0,    a''33]
[L] = [1,   0,   0;
       f21, 1,   0;
       f31, f32, 1]
SUB Decompose(a, n)
  DOFOR k = 1, n - 1
    DOFOR i = k + 1, n
      factor = a_i,k / a_k,k
      a_i,k = factor            // stash the f-factor in [A]'s lower part
      DOFOR j = k + 1, n
        a_i,j = a_i,j - factor * a_k,j
      END DO
    END DO
  END DO
END Decompose
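The same idea as the pseudocode above in runnable Python, mainly to show the many-{B} payoff: decompose once, substitute as often as you like (the second {B} below is made up):

def lu_decompose(a):
    n = len(a)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            a[i][k] = factor                      # store f_ik in [A]'s lower part
            for j in range(k + 1, n):
                a[i][j] -= factor * a[k][j]
    return a                                      # [L] below the diagonal, [U] on/above

def lu_solve(a, b):
    n = len(b)
    d = b[:]                                      # forward substitution: [L]{D} = {B}
    for i in range(1, n):
        d[i] -= sum(a[i][j] * d[j] for j in range(i))
    x = [0.0] * n                                 # back substitution: [U]{X} = {D}
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / a[i][i]
    return x

a = lu_decompose([[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]])
for b in ([7.85, -19.3, 71.4], [20.0, 50.0, 15.0]):   # many {B}, one decomposition
    print(lu_solve(a, b))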
Crout Decomposition
related to, but not the same as, Doolittle decomposition/factorization: Doolittle (the GE-based version above) puts the 1's on [L]'s diagonal, while Crout puts them on [U]'s diagonal.
---
Calculating the inverse
Can do this with the LU method, solving for the inverse one column at a time (sketch below).
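Sketch of the column-at-a-time inverse, building on the lu_decompose/lu_solve sketch above (paste them together to run):

def lu_inverse(a):
    n = len(a)
    lu = lu_decompose([row[:] for row in a])  # factor [A] once
    cols = []
    for i in range(n):
        e = [0.0] * n
        e[i] = 1.0                            # i-th column of the identity
        cols.append(lu_solve(lu, e))          # i-th column of [A]^-1
    # collected column-wise, so flip into row-major form
    return [[cols[j][i] for j in range(n)] for i in range(n)]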
chapter 11
special matrices
Banded matrices have nonzero values only in a band centered on the diagonal. There is the band width BW and the half-band width HBW, with BW = 2*HBW + 1.
Everything outside the band is a 0, and thus general algorithms will try to pivot the hell out of those zeros; however, this is wasted effort.
The Thomas algorithm is an efficient method for the tridiagonal case (HBW = 1).
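Sketch of the Thomas algorithm with the three diagonals stored as plain vectors e (sub), f (main), g (super), so no zeros are stored or pivoted; numbers in the test call are illustrative only:

def thomas(e, f, g, r):
    n = len(f)
    f, r = f[:], r[:]
    for k in range(1, n):                 # forward elimination, band entries only
        factor = e[k] / f[k - 1]
        f[k] -= factor * g[k - 1]
        r[k] -= factor * r[k - 1]
    x = [0.0] * n                         # back substitution
    x[-1] = r[-1] / f[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k + 1]) / f[k]
    return x

# e[0] and g[-1] are never touched
print(thomas([0, -1, -1, -1], [2.04, 2.04, 2.04, 2.04],
             [-1, -1, -1, 0], [40.8, 0.8, 0.8, 200.8]))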
Cholesky decomposition: [A] = [L][L]^T
This method requires symmetric, positive definite matrices; positive definiteness keeps the square-root arguments positive and helps avoid round-off trouble.
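A Cholesky sketch; the math.sqrt is exactly where a non-positive-definite [A] blows up (the symmetric test matrix is a standard positive definite example):

import math

def cholesky(a):
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # fails if not positive definite
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

print(cholesky([[6.0, 15.0, 55.0], [15.0, 55.0, 225.0], [55.0, 225.0, 979.0]]))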
Gauss-Seidel method
iterative method used to approximate the soln; the method converges on the true soln with more iterations (when it converges at all; see the criterion below).
ex 11.3, try 0,0,0 as a
starting guess
|ea,i| = |(x_i^new - x_i^old) / x_i^new| * 100% is the means to estimate the error in each unknown.
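Sketch of G-S with the all-zero starting guess and the |ea,i| stopping test; lam = 1 is plain Gauss-Seidel, and 1 < lam < 2 gives the overrelaxation mentioned further down. The system is the same well-conditioned 3x3 as earlier, and the tolerance is arbitrary:

def gauss_seidel(a, b, lam=1.0, es=0.01, max_it=100):
    n = len(b)
    x = [0.0] * n                             # 0,0,...,0 starting guess
    for _ in range(max_it):
        converged = True
        for i in range(n):
            old = x[i]
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / a[i][i]        # uses the latest x's immediately
            x[i] = lam * new + (1.0 - lam) * old   # relaxation step
            if x[i] != 0.0:
                ea = abs((x[i] - old) / x[i]) * 100.0   # percent error |ea,i|
                if ea > es:
                    converged = False
        if converged:
            break
    return x

a = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
print(gauss_seidel(a, [7.85, -19.3, 71.4]))   # ~ [3.0, -2.5, 7.0]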
Jacobi iteration
same as the G-S method, but rather than using the latest available x's, this technique computes the whole set of new x's on the basis of the set of old x's. So as new values are generated they are not immediately used but rather are retained for the next iteration. The 'new equation' confusion: it's the same equations solved for each x_i, just evaluated entirely with the previous iteration's values. Start with 0's again as the guess (see the variant sketch below).
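The Jacobi version of the same loop; the only change is that the whole new set is computed from the old set before anything is swapped in:

def jacobi(a, b, iters=50):
    n = len(b)
    x = [0.0] * n                             # same all-zero starting guess
    for _ in range(iters):
        x_new = [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
                 for i in range(n)]
        x = x_new                             # old values retained until here
    return x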
Convergence criterion for GS
method: similar in spirit to simple fixed-point iteration
- 2 fundamental issues: it is sometimes non-convergent, and even when it does converge, it may do so slowly
- the partial-derivative condition: the absolute values of the 'slopes' must be less than unity to ensure convergence, so for a 2x2, |a12/a11| < 1 and |a21/a22| < 1. In general the diagonal element must outweigh the rest of its row (diagonal dominance). This is a sufficient, not a necessary, condition for convergence; a quick check is sketched below.
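Quick check of that sufficient condition (diagonal dominance), generalized to n x n:

def diagonally_dominant(a):
    return all(abs(a[i][i]) > sum(abs(v) for j, v in enumerate(a[i]) if j != i)
               for i in range(len(a)))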
overrelaxation is designed to accelerate the convergence of an already convergent system: x_i^new = lambda*x_i^new + (1 - lambda)*x_i^old with 1 < lambda < 2. Aka successive or simultaneous overrelaxation, or SOR.
//interesting fact, they're calling the error-check variable a "sentinel"
--sleep...
(-_-)...