week 1: 8/27, 8/29
week 2: 9/1 No Class, 9/3, 9/5
week 3: 9/8, 9/10, 9/12
week 4: 9/15, 9/17, 9/19
week 5: 9/22, 9/24, 9/26
week 6: 9/29, 10/1, 10/3
week 7: 10/6, 10/8, 10/10
week 8: 10/13, 10/15, 10/17
week 9 [exam!]: 10/20, 10/22, 10/24
week 10: 10/27, 10/29, 10/31
week 11: 11/3, 11/5, 11/7
week 12: 11/10, 11/12, 11/14
week 13: 11/17, 11/19, 11/21
week 14: No Classes
week 15: 12/1, 12/3, 12/5
week 16: 12/8, 12/10, 12/12
`A^n =A` if `n` is odd. 


The geometry of complex arithmetic:
If z = a+bi = |z|(cos(t) + i sin(t)) and w = c+di = |w|(cos(s) + i sin(s)), then
z+w = (a+c) + (b+d)i, which corresponds geometrically to the "vector" sum of z and w in the plane, and
zw = |z|(cos(t) + i sin(t)) |w|(cos(s) + i sin(s)) = |z||w| (cos(t) + i sin(t))(cos(s) + i sin(s))
= |z||w| (cos(t)cos(s) - sin(t)sin(s) + (sin(t)cos(s) + sin(s)cos(t)) i)
= |z||w| (cos(t+s) + sin(t+s) i)
So the magnitude of the product is the product of the magnitudes of z and w, and the angle of the product is the sum of the angles of z and w.
Notation: cos(t) + i sin(t) is sometimes written as cis(t).
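A quick numerical illustration of the magnitude/angle rule, using Python's cmath (the sample values z and w are arbitrary):

```python
import cmath

# Two arbitrary sample values (any nonzero complex numbers work here).
z = 3 + 4j          # |z| = 5
w = 1 + 1j          # |w| = sqrt(2)
zw = z * w

# Magnitude of the product = product of the magnitudes.
assert abs(abs(zw) - abs(z) * abs(w)) < 1e-12

# Angle of the product = sum of the angles (mod 2*pi).
diff = (cmath.phase(zw) - cmath.phase(z) - cmath.phase(w)) % (2 * cmath.pi)
assert min(diff, 2 * cmath.pi - diff) < 1e-12
```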
Note: If we consider the series for e^{x} = 1 + x + x^{2}/2! + x^{3}/3! + ...
then e^{ix} = 1 + ix + (ix)^{2}/2! + (ix)^{3}/3! + ... = 1 + ix - x^{2}/2! - ix^{3}/3! + ...
= (1 - x^{2}/2! + ...) + i(x - x^{3}/3! + ...) = cos(x) + i sin(x).
Thus e^{i*pi} = cos(pi) + i sin(pi) = -1. So ln(-1) = i*pi.
Furthermore: `e^{a+bi} = e^a*e^{bi} = e^a ( cos(b) + sin(b) i)`.
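Both identities can be checked with the standard library (the sample values of a and b are arbitrary):

```python
import cmath
import math

# Euler's formula at x = pi: e^{i*pi} = cos(pi) + i sin(pi) = -1.
assert abs(cmath.exp(1j * math.pi) - (-1)) < 1e-12

# e^{a+bi} = e^a (cos(b) + i sin(b)), checked at an arbitrary sample a, b.
a, b = 0.5, 2.0
lhs = cmath.exp(complex(a, b))
rhs = math.exp(a) * complex(math.cos(b), math.sin(b))
assert abs(lhs - rhs) < 1e-12
```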
Matrices with complex number entries.
If r and s are complex numbers in the matrix A, then as n gets large:
if |r| < 1 and |s| < 1, the powers of A will get close to the zero matrix; if r = s = 1, the powers of A will always be A; and otherwise the powers of A will diverge.
Polynomials with complex coefficients.
Because multiplication and addition make sense for complex numbers,
we can consider polynomials with coefficients that are complex numbers
and use a complex number for the variable, making a complex polynomial
a function from the complex numbers to the complex numbers.
This can be visualized using one plane for the domain of the polynomial
and a second plane for the codomain, target, or range of the polynomial.
The Fundamental Theorem of Algebra: If f is a nonconstant polynomial with complex number coefficients, then there is at least one complex number z* where f(z*) = 0.
For more on complex numbers see: Dave's Short Course on Complex Numbers.


How are these questions related to Motivation Question I?
Do Examples: F[X] = {f in F^{∞} where f(n) = 0 for all but a finite number of n} < F^{∞}
(Internal) Sums , Intersections, and Direct Sums of Subspaces
Suppose U1, U2, ..., Un are all subspaces of V.
Definition: U1 + U2 + ... + Un = {v in V where v = u1 + u2 + ... + un for uk in Uk, k = 1,2,...,n}, called the (internal) sum of the subspaces.
Facts: (i) U1+ U2+ ... + Un < V.
(ii) Uk < U1+ U2+ ... + Un for each k, k= 1,2,...,n.
(iii) If W<V and Uk < W for each k, k= 1,2,...,n, then U1+ U2+ ... + Un <W.
So ...
U1 + U2 + ... + Un is the smallest subspace of V that contains Uk for each k, k = 1,2,...,n.
Examples:
U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 + U2 = R^{3}.
Let Uk = {f in P(F): f(x) = a_{k}x^{k} where a_{k} is in F}. Then U0 + U1 + U2 + ... + Un = {f : f(x) = a_{0} + a_{1}x + a_{2}x^{2} + ... + a_{n}x^{n} where a_{0}, a_{1}, a_{2}, ..., a_{n} are in F}.
Definition: U1 `cap` U2`cap` ... `cap` Un = {v in V where v is in Uk , for all k = 1,2,...,n} called the intersection of the subspaces.
Facts:(i) U1`cap` U2`cap` ... `cap` Un < V.
(ii) U1`cap`U2`cap` ... `cap` Un < Uk for each k, k= 1,2,...,n.
(iii) If W<V and W < Uk for each k, k= 1,2,...,n, then W<U1`cap` U2`cap` ... `cap` Un .
So ...
U1`cap` U2`cap` ... `cap` Un is the largest subspace of V that is contained in Uk for each k, k= 1,2,...,n.
Examples: U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 `cap` U2 = {(x,y,z): x+y+2z=0 and 3x+y-z=0} = ...
Let Uk = {f in P(F): f(x) = a_{k}x^{k} where a_{k} is in F}. Then Uj `cap` Uk = {0} for j not equal to k.
9/26
Suppose V is a v.s. over F and `U_1` and `U_2` are subspaces of V. We say that V is the direct sum of `U_1` and `U_2`, and we write
V = `U_1` `oplus` `U_2`, if (1) V = `U_1` + `U_2` and (2) `U_1` `cap` `U_2` = {0}.
Prop: Suppose V = `U_1` `oplus` `U_2` and `v in V`, with v = `u_1 + u_2 = w_1 + w_2` where `u_i` and `w_i` are in `U_i` for i = 1 and 2.
Then `u_i = w_i` for i = 1, 2.
Conversely, if V = `U_1` + `U_2`, and if v = `u_1 + u_2 = w_1 + w_2` with `u_i` and `w_i` in `U_i` for i = 1 and 2
implies `u_i = w_i` for i = 1, 2, then V = `U_1` `oplus` `U_2`.
Proof: From the hypothesis, `u_1 - w_1 = w_2 - u_2` is in both `U_1` and `U_2`, so it is in `U_1 cap U_2` = {0}. Thus `u_i = w_i` for i = 1, 2.
Conversely: if `v in U_1 nn U_2` then v = v + 0 = 0 + v, so v = 0. Thus V = `U_1` `oplus` `U_2`.
To generalize the direct sum to U1, U2, ..., Un, we would start by assuming V = U1 + U2 + ... + Un.
We might try to generalize the intersection property by assuming that `U_i` `cap` `U_j` = {0} for all i and j that are not equal. This won't work.
9/29
Discuss Exercise: If U and W are subspaces of V and U `uu` W is also a subspace of V, then either U < W or W < U.
Direct Sums: Suppose U1, U2, ..., Un are all subspaces of V and U1 + U2 + ... + Un = V. We say V is the direct sum of U1, U2, ..., Un if for any v in V, the expression of v as v = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk is unique, i.e., if v = u_{1}' + u_{2}' + ... + u_{n}' for u_{k}' in Uk, then u_{1} = u_{1}', u_{2} = u_{2}', ..., u_{n} = u_{n}'. In these notes we will write V = U1 `oplus` U2 `oplus` ... `oplus` Un.
Examples: Uk = {v in F^{n}: v = (0, ..., 0, a, 0, ..., 0) where a is in F and sits in the kth place on the list}. Then U1 `oplus` U2 `oplus` ... `oplus` Un = V.
Theorem: V = U1 `oplus` U2 `oplus` ... `oplus` Un if and only if (i) U1 + U2 + ... + Un = V AND (ii) 0 = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk implies u_{1} = u_{2} = ... = u_{n} = 0.
Theorem: V = U`oplus`W if and only if V = U+W and U`cap`W={0}.
Examples using subspaces and direct sums in applications:
Suppose A is a square matrix (n by n) with entries in the field F.
For c in F, let W_{c }= { v in F^{n} where vA = cv}.
Fact: For any A and any c, W_{c}< F^{n }. [Comment: for most c, W_{c}= {0}. ]
Definition: If W_{c} is not the trivial subspace, then c is called an eigenvalue or characteristic value for the matrix A, and nonzero elements of W_{c} are called eigenvectors or characteristic vectors for A.
Application 1 : Consider the coke and pepsi matrices:
Questions: For which c is W_{c} nontrivial?
Example A. vA = cv? where
A = ( 5/6  1/6 )
    ( 1/4  3/4 )
Example B. vB = cv where
B = ( 2/3  1/3 )
    ( 1/4  3/4 )
To answer this question we need to find (x,y) [not (0,0)] so that
Is R^{2} = W_{c1} + W_{c2} for these subspaces? Is this sum direct?
Example A
(x,y) ( 5/6  1/6 )
      ( 1/4  3/4 ) = c(x,y)
Example B
(x,y) ( 2/3  1/3 )
      ( 1/4  3/4 ) = c(x,y)
Focusing on Example B we consider for which c will the matrix equation have a nontrivial solution (x,y)?
We consider the equations: 2/3 x +1/4 y = cx and 1/3 x+3/4 y = cy.
Multiplying by 12 to get rid of the fractions and bringing the cx and cy to the left side we find that
(8-12c)x + 3y = 0 and 4x + (9-12c)y = 0.
Multiplying by 4 and (8-12c) respectively and then subtracting the first equation from the second, we have
((8-12c)(9-12c) - 12)y = 0. For this system to have a nontrivial solution, it must be that
((8-12c)(9-12c) - 12) = 0, or `72 - (108+96)c + 144c^2 - 12 = 0`, or
`60 - 204c + 144c^2 = 0`.
Clearly one root of this equation is 1, so factoring we have (1-c)(60-144c) = 0, and c = 1 and c = 5/12 are the two solutions... so there are exactly two distinct eigenvalues for example B,
c= 1 and c = 5/12 and two non trivial eigenspaces W_{1} and W_{5/12} .
General Claim: If c is different from k, then W_{c} `cap` W_{k} = {0}
Proof:?
Generalize?
What does this mean for v_{n} when n is large?
Does the distribution of v_{n} when n is large depend on v_{0}?
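One way to explore these two questions numerically is to iterate v_{n+1} = v_n B directly. A minimal sketch, assuming the Example B transition matrix; the two starting distributions v_0 are arbitrary:

```python
# Iterating v_{n+1} = v_n B for the Example B transition matrix,
# starting from two different distributions v_0.
B = [[2/3, 1/3],
     [1/4, 3/4]]

def step(v, M):
    """Row vector times matrix: returns vM."""
    return (v[0] * M[0][0] + v[1] * M[1][0],
            v[0] * M[0][1] + v[1] * M[1][1])

def iterate(v0, n=200):
    v = v0
    for _ in range(n):
        v = step(v, B)
    return v

va = iterate((1.0, 0.0))
vb = iterate((0.0, 1.0))

# Both starting vectors converge to the same distribution (3/7, 4/7):
# the eigenvector for eigenvalue 1, scaled to have coordinate sum 1.
assert abs(va[0] - 3/7) < 1e-9 and abs(va[1] - 4/7) < 1e-9
assert abs(vb[0] - 3/7) < 1e-9 and abs(vb[1] - 4/7) < 1e-9
```

So for this matrix the distribution of v_n for large n does not depend on v_0.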
Application 2: For c a real number let
W_{c} = {f in C^{∞}(R) where f '(x)=c f(x)} < C^{∞}(R).
What is this subspace explicitly?
Let V = {f in C^{∞}(R) where f''(x) - f(x) = 0} < C^{∞}(R).
Claim: V = W_{1} `oplus` W_{-1}.
Begin? We'll come back to this later in the course!
If c is different from k, then W_{c} `cap` W_{k} = {0}.
Proof: ...
Back to looking at things from the point of view of individual vectors:
Linear combinations:
Def'n. Suppose S is a set of vectors in a vector space V over the field F. We say that a vector v in V is a linear combination of vectors in S if there are vectors u_{1}, u_{2}, ... , u_{n} in S and scalars a_{1}, a_{2}, ..., a_{n} in F where v = a_{1}u_{1}+ a_{2}u_{2}+ ... + a_{n}u_{n} .
Comment: For many introductory textbooks: S is a finite set.
Recall. Span (S) = {v in V where v is a linear combination of vectors in S}
If S is finite and Span (S) = V we say that S spans V and V is a "finite dimensional" v.s.
Linear Independence.
Def'n. A set of vectors S is linearly dependent if there are vectors u_{1}, u_{2}, ... , u_{n} in S and scalars `alpha_1, alpha_2, ..., alpha_n in F` NOT ALL 0 where `0 = alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n` .
A set of vectors S is linearly independent if it is not linearly dependent.
Other ways to characterize linearly independent.
A set of vectors S is linearly independent if whenever there are vectors u_{1}, u_{2}, ..., u_{n} in S and scalars `alpha_1, alpha_2, ..., alpha_n` in F where `0 = alpha_1u_1 + alpha_2u_2 + ... + alpha_n u_n`, the scalars are all 0, i.e. `alpha_1 = alpha_2 = ... = alpha_n = 0`.
Examples: Suppose A is an n by m matrix: the row space of A= span ( row vectors of A) , the column space of A = Span(column vectors of A).
Relate to R(A)
Recall R(A) = "the range space of A" = { w in F^{k} where for some v in F^{n}, vA= w } < F^{k}.
w is in R(A) if and only if w is a linear combination of the row vectors, i.e., R(A) = the row space of A.
If you consider Av instead of vA, then R*(A) = the column space of A.
"Infinite dimensional" v.s. examples: P(F), F^{∞}, C^{∞} (R)
F[X] was shown to be infinite dimensional. [If p is in SPAN(p1,...,pn) then the degree of p is no larger than the maximum of the degrees of {p1,...,pn}. So F[X] cannot equal SPAN(p1,...,pn) for any finite set of polynomials, i.e., F[X] is NOT finite dimensional.]
Some Standard examples.
Bases def'n.
Definition: A set B is called a basis for the vector space V over F if (i) B is linearly independent and (ii) SPAN( B) = V.
Bases and representation of vectors in a f.d.v.s.
10/8
Suppose B is a finite basis for V with its elements in a list, (u_{1}, u_{2}, ... , u_{n}) .
If v is in V, then there are unique scalars `alpha_1, alpha_2, ..., alpha_n` in F where v = `alpha_1u_1 + alpha_2u_2 + ... + alpha_n u_n`.
The scalars are called the coordinates of v w.r.t. B, and we will write
v = [`alpha_1, alpha_2, ..., alpha_n`]_{B}.
Linear Independence Theorems
Theorem 1: Suppose S is a linearly independent set and v1 is not an element of Span(S); then S `cup` {v1} is also linearly independent.
Proof Outline: Suppose there are vectors u_{1}, u_{2}, ..., u_{n} in S and scalars `alpha_1, alpha_2, ..., alpha_n, alpha in F` where `0 = alpha_1u_1 + alpha_2u_2 + ... + alpha_n u_n + alpha` v1. If `alpha` is not 0 then
`v1 = -alpha^{-1}(alpha_1u_1 + alpha_2u_2 + ... + alpha_n u_n) in` Span(S), contradicting the hypothesis. So `alpha = 0`. But then `0 = alpha_1u_1 + alpha_2u_2 + ... + alpha_n u_n`, and since S is linearly independent,
`alpha_1 = alpha_2 = ... = alpha_n = 0`. Thus S `cup` {v1} is linearly independent. EOP.
Theorem 2: Suppose S is a finite set of vectors with V = Span (S) and T is a subset of vectors in V. If n( T) > n(S) then T is linearly dependent.
Proof Outline: Suppose n(S) = N. Then by the assumption ... [Proof works by finding N homogeneous linear equations with N+1 unknowns.]
10/10
Theorem 3: Every finite dimensional vector space has a basis.
Proof outline: How to construct a basis, B, for a nontrivial finite dimensional v.s., V. Since V is finite dimensional it has a subset S that is finite with Span(S) = V.
Start with the empty set. This is linearly independent. Call this B0. If Span(B0) = V then you are done: B0 is a basis.
If Span(B0) is not V, then there is a vector v1 in V where v1 is not in Span(B0). Apply Theorem 1 to obtain `B1 = B0 cup {v1}`, which is linearly independent. If Span(B1) = V then B1 is a basis for V. Otherwise continue using Theorem 1 repeatedly. The process cannot continue until the resulting set has more elements than the finite spanning set S, since by Theorem 2 such a set would be linearly dependent, a contradiction. So at some stage of the process, Span(Bk) = V, and Bk is a basis for V.
Comment: The proof of the Theorem also shows that given T, a linearly independent subset of V, and V a finite dimensional vector space, one can step by step add elements to T, so that eventually you have a new set S where S is linearly independent with Span(S) = V and T contained in S. In other words, we can construct a set B that is a basis for V with T contained in B. This proves:
Corollary: Every linearly independent subset of a finite dimensional vector space can be extended to a basis of the vector space.
Theorem 4. If V is a finite dimensional v.s. and B and B' are bases for V, then n(B) = n(B').
Proof: fill in ... based on the Theorem 2. n(B) <= n(B') and n(B') <= n(B) so...
Definition: The dimension of a finite dimensional v.s. over F is the number of elements in a(ny) basis for V.
Discuss dim({0}).
The empty set is linearly independent!... so The empty set is a basis for {0} and the dimension of {0} is 0!
What is Span of the empty set? Characterize SPAN(S) = the intersection of all subspaces that contain S. Then Span (empty set) = Intersection of all subspaces= {0}.
Prop: A Subspace of a finite dimensional vs is finite dimensional.
Suppose Dim(V) = n and S is a set of vectors with n(S) = n. Then
(1) If S is linearly independent, then S is a basis.
(2) If Span(S) = V, then S is a basis.
Proof: (1) S is contained in a basis, B. If B is larger than S, then B has more than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
(2) Outline: V has a basis of n elements, B. Suppose S is linearly dependent and show that there is a set with fewer than n elements that spans V. Hence B cannot be a basis, a contradiction. Thus, S is a basis.
IRMC
Theorem: Sums, intersections and dimension: If U, W <V are finite dimensional, then so is U+W and
dim(U+W) = Dim(U) + Dim(W) - Dim(U`cap`W).
Proof: (idea) build up bases of U and W from U`cap`W.... then check that the union of these bases is a basis for U+W
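A numeric sanity check of the dimension formula on the earlier example, assuming the second plane is 3x + y - z = 0 (the minus sign appears to have been lost in extraction); the basis vectors below were read off the equations by hand:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of vectors over the rationals and count pivots."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Bases for U1 = {x+y+2z=0} and U2 = {3x+y-z=0} in R^3.
U1 = [(1, -1, 0), (2, 0, -1)]
U2 = [(1, -3, 0), (0, 1, 1)]

dim_U1, dim_U2 = rank(U1), rank(U2)
dim_sum = rank(U1 + U2)                # the union spans U1 + U2
dim_int = dim_U1 + dim_U2 - dim_sum    # by the dimension formula

assert (dim_U1, dim_U2, dim_sum, dim_int) == (2, 2, 3, 1)
```

The two planes sum to all of R^3 and meet in a line, and 2 + 2 - 3 = 1 as the formula predicts.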
Problem 2.12: Suppose p_{0},...,p_{m} are in P_{m}(F) and p_{i}(2) = 0 for all i.
Prove {p_{0},...,p_{m}} is linearly dependent.
Proof: Suppose {p_{0},...,p_{m}} is linearly independent.
Notice that by the assumption for any coefficients
(a_{0}p_{0}+..+a_{m}p_{m} )(2) = a_{0}p_{0}(2)+..+a_{m}p_{m}(2) = 0and since u(x)= 1 has u(2) = 1, u (= 1) is not in the SPAN(p_{0},...,p_{m}).
Thus SPAN(p_{0},...,p_{m}) is not P_{m}(F).
But SPAN ( 1,x, ..., x^{m}) = P_{m}(F) .
By repeatedly applying the Lemma to these two sets of m+1 polynomials as in Theorem 2.6, we have SPAN (p_{0},...,p_{m})=P_{m}(F), a contradiction. So {p_{0},...,p_{m}} is not linearly independent.
End of proof.
Examples: In R^{2}, P_{4}(R).
Connect to Coke and Pepsi example: find a basis of eigen vectors using the B example for R^{2}. [Use the online technology]
Example B
(x,y) ( 2/3  1/3 )
      ( 1/4  3/4 ) = c(x,y)
We considered the equations: 2/3 x +1/4 y = cx and 1/3 x+3/4 y = cy and showed that
there are exactly two distinct eigenvalues for example B,
c= 1 and c = 5/12 and two non trivial eigenspaces W_{1} and W_{5/12} .
Now we can use technology to find eigenvectors in each of these subspaces.
Matrix calculator gave as a result that the eigenvalue 1 has an eigenvector (1, 4/3) while 5/12 has an eigenvector (1, -1). These two vectors are a basis for R^{2}.
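These claims can also be checked without the online calculator. A sketch, using the row-vector convention vB = cv (note the second eigenvector is (1, -1); the minus sign was lost in extraction):

```python
# Checking the claimed eigenpairs of the Example B matrix under vB = cv.
B = [[2/3, 1/3],
     [1/4, 3/4]]

def times(v, M):
    """Row vector times matrix: returns vM."""
    return (v[0] * M[0][0] + v[1] * M[1][0],
            v[0] * M[0][1] + v[1] * M[1][1])

for v, c in [((1, 4/3), 1), ((1, -1), 5/12)]:
    w = times(v, B)
    assert abs(w[0] - c * v[0]) < 1e-12 and abs(w[1] - c * v[1]) < 1e-12

# The two eigenvectors are linearly independent (2x2 determinant nonzero),
# so they form a basis for R^2.
det = 1 * (-1) - (4/3) * 1
assert det != 0
```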
Linear Transformations: V and W vector spaces over F.
Definition: A function T: V `->` W is a linear transformation if for any x, y in V and a in F, T(x+y) = T(x) + T(y) and T(ax) = aT(x).
Examples: T(x,y) = (3x+2y, x-3y) is a linear transformation T: R^2 -> R^2.
G(x,y) = (3x+2y, x^2 - 2y) is not a linear transformation.
G(1,1) = (5, -1), G(2,2) = (10, 0)... 2*(1,1) = (2,2) but 2*(5,-1) is not (10, 0)!
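The failed scaling check can be coded directly. A sketch, reading G's second component as x^2 - 2y (consistent with G(2,2) = (10,0)):

```python
# G(x,y) = (3x + 2y, x^2 - 2y): the quadratic term breaks linearity.
def G(x, y):
    return (3*x + 2*y, x**2 - 2*y)

assert G(1, 1) == (5, -1)
assert G(2, 2) == (10, 0)
# Linearity would force G(2*(1,1)) == 2*G(1,1), but 2*(5,-1) = (10,-2):
assert G(2, 2) != (10, -2)
```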
Notice that T(x,y) can be thought of as the result of a matrix multiplication:
(x,y) ( 3  1 )
      ( 2 -3 )
So the two key properties are a direct consequence of the properties of matrix multiplication... (v+w)A = vA + wA and (cv)A = c(vA).
For A a k by n matrix : T_{A} (left argument) and _{A}T (right) are linear transformations on F^{k} and F^{n}.
T_{A} (x) = x A for x in F^{k} and _{A}T(y) = A[y]^{tr} for y in F^{n} and [y]^{tr} indicates the entries of the vector treated as a one column matrix.
The set of all linear transformations from V to W is denoted L(V,W).
V = U `oplus` W if and only if V = U+W and U`cap`W={0}.
Proof: => Suppose v is in U`cap`W. Then v = v + 0 with v in U and 0 in W, and v = 0 + v with 0 in U and v in W. Since V = U`oplus`W, this expression is unique, so v = 0. Thus U`cap`W = {0}.
Note: This argument extends to V as the direct sum of any family of subspaces.
<= Suppose u is in U and w is in W and u + w = 0. Then u = -w, so u is also in W, and thus u is in U`cap`W = {0}. So u = 0 and then w = 0. Since V = U+W, we have by 1.8, V = U`oplus`W. EOP
2.19 If V is a f.d.v.s. and U1, ..., Un are subspaces with V = U1 + ... + Un and dim(V) = dim(U1) + ... + dim(Un), then V = U1 `oplus` ... `oplus` Un.
Proof outline: Choose bases for U1, ..., Un and let B be the union of these sets. Since V = U1 + ... + Un, every vector in V is a linear combination of elements from B. But B has exactly dim(U1) + ... + dim(Un) = dim(V) elements in it, so B is a basis for V. Now suppose 0 = u_{1} + u_{2} + ... + u_{n} for u_{k} in Uk. Then each u_{i} can be expressed as a linear combination of the basis vectors for Ui, and since the entire linear combination is 0 and B is a basis, each coefficient is 0. So u_{1} = ... = u_{n} = 0 and V = U1 `oplus` ... `oplus` Un. EOP
How do you find a basis for the SPAN(S) in R^{n}?
Outline of use of row operations...
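The row-operation method can be sketched in code: row-reduce the vectors of S, and the nonzero rows of the echelon form are a basis for Span(S). The sample vectors are an arbitrary illustration:

```python
from fractions import Fraction

def row_reduce_basis(S):
    """Row-reduce the vectors in S over the rationals; the nonzero rows
    that remain span Span(S) and are independent (echelon form)."""
    M = [[Fraction(x) for x in v] for v in S]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return M[:r]

S = [(1, 2, 3), (2, 4, 6), (1, 0, 1)]   # the second vector is redundant
basis = row_reduce_basis(S)
assert len(basis) == 2                   # dim Span(S) = 2
```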
10/17
Back to linear transformations:
Consequences of the definition: If T: V -> W is a linear transformation, then for any x and y in V and a in F,
(i) T(0) = 0.
(ii) T(-x) = -T(x).
(iii) T(x+ay) = T(x) + aT(y).
Quick test: If T: V -> W is a function and (iii) holds for any x and y in V and a in F, then the function is a linear transformation.
Differentiation is a linear transformation: on polynomials, on ... Example: (D(f))(x) = f'(x), or D(f) = f'.
(D(f + `alpha` g))(x) = (f+`alpha`g)' (x) = f'(x) + `alpha`g'(x) = (f'+`alpha`g') (x) or
D(f+`alpha`g) = f'+ `alpha`g'= D(f) +`alpha` D(g).
Theorem: If T: V -> W is linear and B is a basis for V, restriction to B gives a function S(T): B -> W.
Conversely, suppose S: B -> W is any function; then there is a unique linear transformation T(S): V -> W such that S(T(S)) = S.
Proof: Let T(S)(v) be defined as follows: Suppose v is expressed (uniquely) as a linear combination of elements of B, i.e. v = a_{1}u_{1} + a_{2}u_{2} + ... + a_{n}u_{n}; then let T(S)(v) = a_{1}S(u_{1}) + a_{2}S(u_{2}) + ... + a_{n}S(u_{n}).
This is well defined since the representation of v is unique. It is left to show T(S) is linear. Clearly, if u is in B then S(T(S))(u) = S(u).
Example: T: P(F) `->` P(F)... S(x^{n}) = nx^{n-1}.
Or another example: S(x^{n}) = 1/(n+1) x ^{n+1}.
Key Spaces related to T: V -> W
Null Space of T = kernel of T = {v in V where T(v) = 0 [in W]} = N(T) < V
Range of T = Image of T = T(V) = {w in W where w = T(v) for some v in V} <W.
10/20
Major result of the day: Suppose T: V -> W is linear and V is a finite dimensional v.s. over F. Then N(T) and R(T) are also finite dimensional and Dim(V) = Dim(N(T)) + Dim(R(T)).
Proof: Done in class; see text. Outline: start with a basis C for N(T) and extend this to a basis B for V. Show that T(B - C) is a basis for R(T). Visualize with Winplot?
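A minimal concrete instance of the theorem; the matrix A below is an arbitrary illustrative choice with a hand-checkable null space:

```python
# T(x) = xA gives T : F^3 -> F^2; check Dim(V) = Dim(N(T)) + Dim(R(T)).
A = [(1, 2), (2, 4), (0, 1)]

def T(x):
    return (sum(x[i] * A[i][0] for i in range(3)),
            sum(x[i] * A[i][1] for i in range(3)))

# (-2, 1, 0) spans N(T): the first two rows of A are proportional.
assert T((-2, 1, 0)) == (0, 0)
# R(T) is all of F^2, since the rows (1,2) and (0,1) are independent.
assert T((1, 0, 0)) == (1, 2) and T((0, 0, 1)) == (0, 1)
# So dim V = 3, dim N(T) = 1, dim R(T) = 2, and 3 = 1 + 2.
```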
10/22
Algebraic structure on L(V,W)
Definition of the sum and scalar multiplication:
T, U in L(V,W), a in F, (T+U)(v) = T(v) + U(v).
Fact:T+U is also linear.
(aT)(v) = a T(v) .
Fact:aT is also Linear.
Check: L(V,W) is a vector space over F.
Composition: T: V -> W and U: W -> Z both linear; then UT: V -> Z, where UT(v) = U(T(v)), is linear.
Note: If T': V -> W and U': W -> Z are also linear, then U(T+T') = UT + UT' and (U+U')T = UT + U'T. If S: Z -> Y is also linear then S(UT) = (SU)T.
Key focus: L(V,V) , the set of linear "operators" on V.... also called L(V).
If T and U are in L(V) then UT is also in L(V). This is the key example of what is called a "linear algebra"... a vector space with an extra internal operation, usually described as the product, that satisfies the distributive and associative properties and has an "identity", namely Id(v) = v for all v `in V`. [Id T = T Id = T for all T `in L(V)`.]
If T `in` L(V), then `T^n in` L(V).
Example: V = `C^{oo}`(R). D: V `->` V is defined by D(f) = f'. Then `D^2 + 4D + 3Id` = (D + 3Id)(D + Id) = T `in` L(V). Finding N(T) is solving the "homogeneous linear differential equation" f''(x) + 4f'(x) + 3f(x) = 0.
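The factorization (D + 3Id)(D + Id) suggests that e^{-x} and e^{-3x} lie in N(T). A quick numeric check (a sketch; the derivatives are coded analytically rather than symbolically, and the sample points are arbitrary):

```python
import math

# f(x) = e^{rx} solves f'' + 4f' + 3f = 0 exactly when r^2 + 4r + 3 = 0,
# i.e. r = -1 or r = -3 (the roots of (r+3)(r+1)).
for r in (-1.0, -3.0):
    for x in (0.0, 0.7, 2.5):
        f = math.exp(r * x)      # f(x)   = e^{rx}
        fp = r * f               # f'(x)  = r e^{rx}
        fpp = r * r * f          # f''(x) = r^2 e^{rx}
        assert abs(fpp + 4 * fp + 3 * f) < 1e-12
```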
10/24
Linear Transformations and Bases
We proved that if V and W are finite dimensional then so is L(V,W), and dim(L(V,W)) = dim(V) dim(W).
We did this using bases for V and W to find a basis for L(V,W). That basis for L(V,W) also established a function from L(V,W) to the matrices that is a linear transformation! More details will be supplied for this lecture later.
Matrices and Linear transformations.
Footnote on notation for matrices: If the basis for V is B and for W is C and T: V -> W, the matrix of T with respect to those bases can be denoted M_{B}^{C}(T). Note: this follows a convention on the representation of a transformation.
The matrix for a vector v is denoted M_{B}(v). If we treat this as a row vector we have M_{C}(T(v)) = M_{B}(v) M_{B}^{C}(T).
This can be transposed, using column vectors for the matrices of the vectors, and with this transposed view we have:
M_{C}(T(v)) = M_{B}^{C}(T) M_{B}(v)
The function M: L(V,W) -> Mat(m,n; F) is a linear transformation.
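As an illustration of M(T), here is a sketch of the matrix of the differentiation operator D on P_3(R) in the column-vector convention M(D(p)) = M(D) M(p); the basis ordering (1, x, x^2, x^3) is an assumption:

```python
# Column j holds the B-coordinates of D(x^j) for B = (1, x, x^2, x^3):
# D(1) = 0, D(x) = 1, D(x^2) = 2x, D(x^3) = 3x^2.
M = [[0, 1, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]

def apply(M, v):
    """Matrix times column vector of B-coordinates."""
    return tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))

# p(x) = 5 + 2x^2 + x^3 has p'(x) = 4x + 3x^2.
p = (5, 0, 2, 1)
assert apply(M, p) == (0, 4, 3, 0)
```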
10/27
Recall definition of "injective" or "1:1" function.
Recall definition of "surjective" or "onto" function.
Theorem: T is 1:1 (injective) if and only if N(T) = {0}
Proof: => Suppose T is 1:1. We know that T(0) = 0, so if T(v) = 0, then v = 0. Thus 0 is the only element of N(T), i.e., N(T) = {0}.
<= Suppose N(T) = {0}. If T(v) = T(w) then T(v-w) = T(v) - T(w) = 0, so v-w is in N(T)... that must mean that v-w = 0, so v = w and T is 1:1.
Theorem: T is onto if and only if the Range of T = W.
Theorem: T is onto if and only if for any (some) basis, B, of V, Span(T(B)) = W.
Theorem: If V and W are finite dimensional v.s. / F, dimV = dim W, T : V `>` W is linear, then T is 1:1 if and only if T is onto.
Proof: We know that dim V = dim N(T) + dim R(T).
=> If T is 1:1, then dim N(T) = 0, so dim V = dim R(T) . Thus dim R(T) = dim W and T is onto.
<= If T is onto, then dimR(T) = dim W. So dim N(T) = 0 and thus N(T) = {0} and T is 1:1.
The importance of the Null Space of T, N(T), lies in understanding what T does in general.
Example 1. D: P(R) -> P(R)... D(f) = f'. Then N(D) = {f: f(x) = C for some constant C} [from Calculus 109!].
Notice: If f'(x) = g'(x), then f(x) = g(x) + C for some C.
Proof: consider D(f(x) - g(x)) = Df(x) - Dg(x) = 0, so f(x) - g(x) is in N(D).
Example 2: Solving a system of homogeneous linear equations. This was connected to finding the null space of a linear transformation associated to a matrix. Then what about a non-homogeneous system with the same matrix? Result: If z is a solution of the non-homogeneous system of linear equations and z' is another solution, then z' = z + n where n is a solution to the homogeneous system.
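A tiny sketch of this result; the map T and the right-hand side 3 below are arbitrary illustrative choices:

```python
# Two solutions of a non-homogeneous equation differ by a null-space element.
def T(x, y):
    return x + 2*y          # a linear map R^2 -> R (row picture of A = [1, 2])

z = (3, 0)                  # one solution of T(v) = 3
z2 = (1, 1)                 # another solution of T(v) = 3
assert T(*z) == 3 and T(*z2) == 3

n = (z2[0] - z[0], z2[1] - z[1])   # n = z' - z
assert T(*n) == 0                  # n solves the homogeneous equation
```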
General Proposition: T: V -> W linear. If b is a vector in W and a is in V with T(a) = b, then T^{-1}({b}) = {v in V: v = a + n where n is in N(T)} = a + N(T).
Comment: a + N(T) is called the coset of a mod N(T)...these are analogous to lines in R^{2}. More on this later in the course.
Suppose T is a linear transformation and L = {(x,y): (x,y) = (a,b) + t(u,v), t in R} is a line through (a,b) in the direction of (u,v):
Let T(L) = L' = {(x',y'): (x',y') = T(x,y) for some (x,y) in L}.
T(x,y) = T(a,b) + t T(u,v).
If T(u,v) = (0,0) then L' = T(L) = {T(a,b)}.
If not, then L' is also a line, through T(a,b) in the direction of T(u,v).
[View this in winplot?]
The Division Algorithm, [proof?]