
Introduction to Linear Algebra

These are notes from MNE8108 Engineering Methods in the Department of Mechanical Engineering, CityU (Semester A, 2025).

Contents mostly from Linear Algebra by Arnold J. Insel et al.

Appendix A: Sets

Definitions

Definition: sets

A set is a collection of objects, called the elements of the set.

  • membership: x ∈ A; non-membership: x ∉ A
  • list the elements using braces: {…}
  • no order: {1, 2} = {2, 1}

subsets:

  • B ⊆ A
  • proper subset: B ⊆ A, B ≠ A
  • empty set: ∅

calculation

  • union: A ∪ B

  • intersection: A ∩ B

    • disjoint sets: A ∩ B = ∅
  • finite sets: {1, 2}

  • infinite sets: R
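The set operations above map directly onto Python's built-in `set` type; a minimal sketch:

```python
A = {1, 2, 3}
B = {3, 4}

union = A | B          # A ∪ B
intersection = A & B   # A ∩ B

print(union)             # {1, 2, 3, 4}
print(intersection)      # {3}
print({1, 2} == {2, 1})  # True: sets are unordered
```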

Number systems:

  • Natural numbers: N (N0: includes 0; N1: excludes 0)
  • Integers: Z
  • Rational numbers: Q
  • Real numbers: R
  • Complex numbers: C

Cardinal number: the number of elements of a set

For infinite sets:

  • Countable sets (cardinality ℵ₀, aleph-null): N, Z, Q
  • Uncountable sets (cardinality ℶ₁, beth-one): R, C

operations

operation: maps elements of a set back into the same set

Unary:

  • negation: −
  • factorial: !
  • trigonometric: sin, cos, …

Binary:

  • +, −, ×, /

ternary: …

Arithmetic

  • +, ×

    | Set          | N0                 | N1                 |
    | ------------ | ------------------ | ------------------ |
    | Elements     | 0, 1, 2, 3, … ∈ N0 | 1, 2, 3, 4, … ∈ N1 |
    | Operation    | +                  | ×                  |
    | Closure      | 1 + 2 = 3 ∈ N0     | 2 × 3 = 6 ∈ N1     |
    | Associative  | (1+2)+3 = 1+(2+3)  | (2×3)×4 = 2×(3×4)  |
    | Commutative  | 1+2 = 2+1          | 2×3 = 3×2          |
    | Distributive | -                  | -                  |
  • −, /

    | Set       | Z           | Q           |
    | --------- | ----------- | ----------- |
    | Elements  | x, y, … ∈ Z | x, y, … ∈ Q |
    | Operation | +           | ×           |
    | Identity  | 0           | 1           |
    | Inverses  | −x          | 1/x or x⁻¹  |

More generally:

| Set          | S                              | Z                 | Q                 |
| ------------ | ------------------------------ | ----------------- | ----------------- |
| Elements     | x, y, z, … ∈ S                 | x, y, z, … ∈ Z    | x, y, z, … ∈ Q    |
| Operation    | ∘                              | +                 | ×                 |
| Closure      | x ∘ y ∈ S                      | x + y ∈ Z         | x × y ∈ Q         |
| Associative  | (x∘y)∘z = x∘(y∘z)              | (x+y)+z = x+(y+z) | (x×y)×z = x×(y×z) |
| Commutative  | x∘y = y∘x                      | x+y = y+x         | x×y = y×x         |
| Distributive | -                              | -                 | -                 |
| Identity     | ∃e ∈ S: x∘e = e∘x = x          | 0                 | 1                 |
| Inverses     | ∀x ∃x⁻¹ ∈ S: x∘x⁻¹ = x⁻¹∘x = e | −x                | 1/x or x⁻¹        |

group

Given:

  • Set of elements G
  • Operation: ∘

| Set          | G                              |
| ------------ | ------------------------------ |
| Elements     | x, y, z, … ∈ G                 |
| Operation    | ∘                              |
| Closure      | x ∘ y ∈ G                      |
| Associative  | (x∘y)∘z = x∘(y∘z)              |
| Commutative  | x∘y = y∘x? (not required)      |
| Distributive | -                              |
| Identity     | ∃e ∈ G: x∘e = e∘x = x          |
| Inverses     | ∀x ∃x⁻¹ ∈ G: x∘x⁻¹ = x⁻¹∘x = e |
  • If commutative: commutative group / abelian group
  • If not commutative: non-commutative group / non-abelian group

Example of group:

  • commutative:

    • Z with +: Z commutes under +
  • non-commutative:

    • Dihedral group (symmetries of a regular polygon in the plane):
      • r: rotation
      • f: reflection
      • rf ≠ fr
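The group axioms above can be checked by brute force on a small finite example. A sketch (not from the notes): Z_n under addition mod n, with n = 5 as an arbitrary illustrative choice.

```python
from itertools import product

# Brute-force check of the group axioms for Z_5 under addition mod 5.
n = 5
G = range(n)
op = lambda x, y: (x + y) % n

closure = all(op(x, y) in G for x, y in product(G, G))
assoc = all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(G, G, G))
identity = all(op(x, 0) == op(0, x) == x for x in G)          # e = 0
inverses = all(any(op(x, y) == 0 for y in G) for x in G)      # each x has x⁻¹
commutative = all(op(x, y) == op(y, x) for x, y in product(G, G))

print(closure, assoc, identity, inverses, commutative)  # True True True True True
```

Since commutativity also holds, Z_5 under + is an abelian group.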

Ring

Given:

  • Set of elements R
  • Operation: +,×
| Set          | R                                        |                           |
| ------------ | ---------------------------------------- | ------------------------- |
| Elements     | a, b, c, … ∈ R                           |                           |
| Operation    | Addition +                               | Multiplication ×          |
| Closure      | a + b ∈ R                                | a × b ∈ R                 |
| Associative  | (a+b)+c = a+(b+c)                        | (a×b)×c = a×(b×c)         |
| Commutative  | a+b = b+a                                | a×b = b×a? (not required) |
| Distributive | a×(b+c) = a×b + a×c; (b+c)×a = b×a + c×a |                           |
| Identity     | ∃0 ∈ R: a+0 = 0+a = a                    | ∃1 ∈ R: 1×a = a×1 = a     |
| Inverses     | ∃(−a) ∈ R: a+(−a) = 0                    | a⁻¹ ∈ R? (not required)   |
  • distributivity: left distributivity vs. right distributivity
  • If multiplication is commutative, it suffices to check one distributive law

Example of ring:

  • commutative:
    • Z with +,×
  • non-commutative:
    • M2(R) with matrix addition and matrix multiplication
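The non-commutativity of M₂(R) is easy to see concretely; a small sketch with hand-rolled 2×2 matrix multiplication:

```python
# Matrix multiplication in M_2(R) is not commutative: AB != BA in general.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]
```

A single such pair already proves the ring M₂(R) is non-commutative.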

Appendix C: Fields

Given:

  • Set of elements F
  • Operation: +,×
| Set          | F                     |                             |
| ------------ | --------------------- | --------------------------- |
| Elements     | a, b, c, … ∈ F        |                             |
| Operation    | Addition +            | Multiplication ×            |
| Closure      | a + b ∈ F             | a × b ∈ F                   |
| Associative  | (a+b)+c = a+(b+c)     | (a×b)×c = a×(b×c)           |
| Commutative  | a+b = b+a             | a×b = b×a                   |
| Distributive | a×(b+c) = a×b + a×c   |                             |
| Identity     | ∃0 ∈ F: a+0 = 0+a = a | ∃1 ∈ F: 1×a = a×1 = a       |
| Inverses     | ∃(−a) ∈ F: a+(−a) = 0 | ∀a ≠ 0 ∃a⁻¹ ∈ F: a×a⁻¹ = 1 |

Example of field:

  • Q with +,×
  • R with +,×
  • C with +,×
  • GF(2) / Z2, a finite field with two elements (0, 1); operations: XOR (+) and AND (×)

Modulo Arithmetic:

  • 17 mod 5 = 2
  • 17 ≡ 2 (mod 5)
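A minimal Python sketch of both examples above: modular reduction, and GF(2) with XOR as addition and AND as multiplication.

```python
# Modulo arithmetic: 17 reduced mod 5 leaves remainder 2.
print(17 % 5)  # 2

# GF(2): two elements {0, 1}; addition is XOR, multiplication is AND.
gf2_add = lambda a, b: a ^ b
gf2_mul = lambda a, b: a & b

elems = (0, 1)
print([[gf2_add(a, b) for b in elems] for a in elems])  # [[0, 1], [1, 0]]
print([[gf2_mul(a, b) for b in elems] for a in elems])  # [[0, 0], [0, 1]]
```

Note 1 + 1 = 0 in GF(2): every element is its own additive inverse.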

Theorem C.1 (Cancellation Laws)

For a, b, c in a field, the following statements are true:

  • If a + b = c + b, then a = c
  • If a·b = c·b and b ≠ 0, then a = c

Corollary

The identity elements and the inverse elements of a field are unique.

Theorem C.2

For a, b in a field, the following statements are true:

  • a·0 = 0
  • (−a)·b = a·(−b) = −(a·b)
  • (−a)·(−b) = a·b
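Theorem C.2 can be spot-checked by brute force in a small finite field; a sketch over Z_5 (an assumption for illustration, any prime modulus works):

```python
# Brute-force check of Theorem C.2 in the finite field Z_5.
p = 5
neg = lambda a: (-a) % p   # additive inverse in Z_p

for a in range(p):
    assert (a * 0) % p == 0                 # a·0 = 0
    for b in range(p):
        ab = (a * b) % p
        assert (neg(a) * b) % p == neg(ab)  # (−a)·b = −(a·b)
        assert (a * neg(b)) % p == neg(ab)  # a·(−b) = −(a·b)
        assert (neg(a) * neg(b)) % p == ab  # (−a)·(−b) = a·b
print("Theorem C.2 holds in Z_5")
```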

1.2 vector space

Given:

  • Set of elements V,F
  • Operation: +,×
  • Commutative group V under +, with a Field F
| Set          | V                     | V and F                                  | F                     | F                     |
| ------------ | --------------------- | ---------------------------------------- | --------------------- | --------------------- |
| Elements     | u, v, w ∈ V           |                                          | a, b, c ∈ F           |                       |
| Operation    | Addition +            | Scalar multiplication ×                  | Addition +            | Multiplication ×      |
| Closure      | u + v ∈ V             | a × u ∈ V                                | a + b ∈ F             | a × b ∈ F             |
| Associative  | (u+v)+w = u+(v+w)     | (a×b)×u = a×(b×u)                        | (a+b)+c = a+(b+c)     | (a×b)×c = a×(b×c)     |
| Commutative  | u+v = v+u             | a×u = u×a                                | a+b = b+a             | a×b = b×a             |
| Distributive | -                     | a×(u+v) = a×u + a×v; (a+b)×u = a×u + b×u | a×(b+c) = a×b + a×c   |                       |
| Identity     | ∃0 ∈ V: u+0 = 0+u = u | 1×u = u                                  | ∃0 ∈ F: a+0 = 0+a = a | ∃1 ∈ F: 1×a = a×1 = a |
| Inverses     | ∃(−u) ∈ V: u+(−u) = 0 | 0×u = 0; (−1)×u = −u                     | ∃(−a) ∈ F: a+(−a) = 0 | ∃a⁻¹ ∈ F: a×a⁻¹ = 1   |

Definition: vector space V [core]

A vector space V over a field F consists of a set on which two closed operations (addition and scalar multiplication) are defined, s.t. the following conditions hold:

  1. ∀x, y ∈ V, x + y = y + x (commutativity of addition);
  2. ∀x, y, z ∈ V, (x + y) + z = x + (y + z) (associativity of addition);
  3. there exists an element in V denoted by 0 s.t. x + 0 = 0 + x = x, ∀x ∈ V;
  4. ∀x ∈ V, there exists an element y ∈ V s.t. x + y = 0;
  5. ∀x ∈ V, 1x = x;
  6. ∀a, b ∈ F, ∀x ∈ V, (ab)x = a(bx);
  7. ∀a ∈ F, ∀x, y ∈ V, a(x + y) = ax + ay;
  8. ∀a, b ∈ F, ∀x ∈ V, (a + b)x = ax + bx

Module definition: similar to a vector space, but:

  • commutativity of scalar multiplication is not required
  • commutativity of multiplication of the scalars is not required (the scalars form a ring rather than a field)

Definition

  • sum: x+y
  • product: ax
  • scalars: elements of F
  • vectors: elements of vector space V
  • n-tuple: n elements of a field in this form: (a1,,an)
    • entries / components: a1,,an
    • two n-tuples are equal if ai = bi for i = 1, 2, …, n
    • Fn: set of all n-tuples with entries from a field F
    • vectors in Fn: column vectors

Definition about some vector spaces

Definitions: Matrix

  • diagonal entries: aij with i=j
  • i-th row: ai1,ai2,,ain
  • j-th column: a1j,a2j,,amj
  • zero matrix: all zero
  • square: the number of rows and columns are equal
  • equal: Aij=Bij,1im,1jn
  • set of all m×n matrices with entries from a field F is a vector space: Mm×n(F)

matrix addition: (A + B)ij = Aij + Bij

scalar multiplication: (cA)ij = c·Aij
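These entry-wise definitions translate directly to code; a minimal sketch with matrices as nested lists:

```python
# Component-wise matrix addition and scalar multiplication, as defined above.
def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scale(c, A):
    return [[c * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(add(A, B))    # [[6, 8], [10, 12]]
print(scale(2, A))  # [[2, 4], [6, 8]]
```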

Definitions: function

Let S be any nonempty set and F be any field, and let F(S,F) denote the set of all functions from S to F. Two functions f and g in F(S,F) are called equal if f(s)=g(s) for each sS. The set F(S,F) is a vector space with the operations of addition and scalar multiplication defined for f,gF(S,F) and cF by

(f+g)(s)=f(s)+g(s) and (cf)(s)=c[f(s)]

for each sS. Note that these are the familiar operations of addition and scalar multiplication for functions used in algebra and calculus.
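The pointwise operations on F(S, F) can be sketched directly, here with S = R and F = R and functions represented as Python callables (an illustrative choice, not from the notes):

```python
# Pointwise addition and scalar multiplication in F(S, F).
def add(f, g):
    return lambda s: f(s) + g(s)      # (f + g)(s) = f(s) + g(s)

def scale(c, f):
    return lambda s: c * f(s)         # (cf)(s) = c[f(s)]

f = lambda s: s ** 2
g = lambda s: 3 * s

h = add(f, g)          # h(s) = s^2 + 3s
print(h(2))            # 10
print(scale(2, f)(3))  # 18
```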

Definitions: polynomial

f(x) = a_n x^n + a_{n-1} x^{n-1} + … + a_1 x + a_0
  • coefficients: ai, i = 0, 1, …, n
  • zero polynomial: ai = 0 for i = 0, 1, …, n
  • degree:
    • −1 for the zero polynomial
    • otherwise, the largest exponent of x with a nonzero coefficient
  • two polynomials are equal if they have equal degree and ai = bi for i = 0, 1, …, n

When F is a field containing infinitely many scalars, we usually regard a polynomial with coefficients from F as a function from F into F

f(x) = a_n x^n + a_{n-1} x^{n-1} + … + a_1 x + a_0
g(x) = b_n x^n + b_{n-1} x^{n-1} + … + b_1 x + b_0

addition and scalar multiplication:

f(x) + g(x) = (a_n + b_n) x^n + (a_{n-1} + b_{n-1}) x^{n-1} + … + (a_0 + b_0)
c·f(x) = c a_n x^n + c a_{n-1} x^{n-1} + … + c a_1 x + c a_0

set of all polynomials with coefficients from F: P(F)
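Representing a polynomial by its coefficient list [a0, a1, …, an], the two operations above act coefficient-wise; a sketch:

```python
# Polynomial addition and scalar multiplication via coefficient lists.
def poly_add(f, g):
    return [a + b for a, b in zip(f, g)]   # assumes equal-length lists

def poly_scale(c, f):
    return [c * a for a in f]

f = [1, 2, 3]   # 1 + 2x + 3x^2
g = [4, 0, 1]   # 4 + x^2

print(poly_add(f, g))    # [5, 2, 4]  i.e. 5 + 2x + 4x^2
print(poly_scale(2, f))  # [2, 4, 6]  i.e. 2 + 4x + 6x^2
```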

Theorem 1.1: Cancellation Law for Vector Addition

If x,y,z are vectors in a vector space, s.t. x+z=y+z, then x=y

Corollary 1

The vector 0 is unique (zero vector)

Corollary 2

The inverse element of vector is unique (additive inverse)

Theorem 1.2

In any vector space V:

  • 0x = 0, ∀x ∈ V
  • (−a)x = −(ax) = a(−x), ∀a ∈ F, ∀x ∈ V
  • a0 = 0, ∀a ∈ F

1.3 subspace

Definition: subspace

A subset W of a vector space V over a field F is called a subspace of V if W is a vector space over F with the operations of addition and scalar multiplication defined on V.

In any vector space V, note that V and {0} are subspaces. The latter is called the zero subspace of V.

Theorem 1.3(subspace)[core]

Let V be a vector space and W a subset of V. Then W is a subspace of V iff the following three conditions hold for the operations defined in V.

  • 0W.
  • x+yW whenever xW and yW.
  • cxW whenever cF and xW.
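The three conditions of Theorem 1.3 can be spot-checked numerically; a sketch for W = symmetric 2×2 real matrices (numpy assumed, sample matrices chosen for illustration):

```python
import numpy as np

# Spot-check of the subspace test for W = {A in M_2(R) : A^t = A}.
def is_symmetric(A):
    return np.array_equal(A, A.T)

A = np.array([[1, 2], [2, 5]])
B = np.array([[0, -1], [-1, 3]])

assert is_symmetric(np.zeros((2, 2)))  # 0 ∈ W
assert is_symmetric(A + B)             # closed under addition
assert is_symmetric(4 * A)             # closed under scalar multiplication
print("W passes the subspace test on these samples")
```

A sample check is not a proof, but the general argument is identical: (A + B)^t = A^t + B^t and (cA)^t = cA^t.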

Examples

matrix

The transpose At of an m×n matrix A is the n×m matrix obtained from A by interchanging the rows with the columns; that is, (At)ij=Aji.

A symmetric matrix is a matrix A such that At=A.

  • Clearly, a symmetric matrix must be square. The set W of all symmetric matrices in Mn×n(F) is a subspace of Mn×n(F) since the conditions of Theorem 1.3 hold

A diagonal matrix is an n×n matrix M with Mij = 0 whenever i ≠ j.

  • the set of diagonal matrices is a subspace of Mn×n(F)

polynomial

Let n be a nonnegative integer; Pn(F) consists of all polynomials in P(F) having degree less than or equal to n.

  • Pn(F) is a subspace of P(F)

theorem and definition

Theorem 1.4

Any intersection of subspaces of a vector space V is a subspace of V.

Definitions

sum of nonempty subsets S1 and S2 of a vector space V :

  • S1 + S2 := {x + y : x ∈ S1, y ∈ S2}

direct sum: W1 ⊕ W2

  • W1, W2 are subspaces of V
  • W1 ∩ W2 = {0} and W1 + W2 = V
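A concrete instance: R² is the direct sum of the x-axis and the y-axis, so every vector splits uniquely into a component from each. A numpy sketch (sample vector chosen for illustration):

```python
import numpy as np

# R^2 = W1 ⊕ W2 with W1 = x-axis, W2 = y-axis:
# W1 ∩ W2 = {0} and W1 + W2 = R^2, so the decomposition is unique.
v = np.array([3.0, -2.0])
w1 = np.array([v[0], 0.0])   # component in W1
w2 = np.array([0.0, v[1]])   # component in W2

assert np.array_equal(w1 + w2, v)   # the decomposition recovers v
print(w1, w2)
```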

1.4 linear combination and systems of linear equations

Definition: linear combination

Let V be a vector space and S a nonempty subset of V. A vector v ∈ V is called a linear combination of vectors of S if there exist a finite number of vectors u1, u2, …, un in S and scalars a1, a2, …, an in F such that v = a1u1 + a2u2 + … + anun. In this case we also say that v is a linear combination of u1, u2, …, un and call a1, a2, …, an the coefficients of the linear combination.

Observe that in any vector space V, 0v = 0 for each v ∈ V. Thus the zero vector is a linear combination of any nonempty subset of V.

Definition: span

Let S be a nonempty subset of a vector space V. The span of S, denoted span(S), is the set consisting of all linear combinations of the vectors in S. For convenience, we define span(∅) = {0}.

In R3, for instance, the span of the set {(1,0,0), (0,1,0)} consists of all vectors in R3 that have the form a(1,0,0) + b(0,1,0) = (a, b, 0) for some scalars a and b. Thus the span of {(1,0,0), (0,1,0)} contains all the points in the xy-plane. In this case, the span of the set is a subspace of R3. This fact is true in general.
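One way to test membership in a span numerically (a numpy sketch, not from the notes): solve for the coefficients by least squares and check that the combination reproduces the vector exactly.

```python
import numpy as np

# Is v in span{(1,0,0), (0,1,0)}?  Solve S @ coeffs ≈ v by least squares;
# v lies in the span iff the best combination reproduces v exactly.
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]]).T   # spanning vectors as columns

def in_span(v):
    coeffs, *_ = np.linalg.lstsq(S, v, rcond=None)
    return np.allclose(S @ coeffs, v)

print(in_span(np.array([2.0, 5.0, 0.0])))  # True: lies in the xy-plane
print(in_span(np.array([0.0, 0.0, 1.0])))  # False: has a z-component
```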

Theorem 1.5(span and subspace)

  • The span of any subset S of a vector space V is a subspace of V.
  • Any subspace of V that contains S must also contain the span of S

Definition: generate

A subset S of a vector space V generates (or spans)V if span(S)=V. In this case, we also say that the vectors of S generate (or span)V.

1.5 linear dependence and linear independence

Definition: linearly dependent

A subset S of a vector space V is called linearly dependent if there exists a finite number of distinct vectors u1,u2,,un in S and scalars a1,a2,,an, not all zero, such that

a1u1+a2u2++anun=0

In this case we also say that the vectors of S are linearly dependent

For any vectors u1, u2, …, un, we have a1u1 + a2u2 + … + anun = 0 if a1 = a2 = … = an = 0. We call this the trivial representation of 0 as a linear combination of u1, u2, …, un.

Thus, for a set to be linearly dependent, there must exist a nontrivial representation of 0 as a linear combination of vectors in the set.

Consequently, any subset of a vector space that contains the zero vector is linearly dependent, because 0 = 1·0 is a nontrivial representation of 0 as a linear combination of vectors in the set.

Definition: linearly independent

A subset S of a vector space that is not linearly dependent is called linearly independent. As before, we also say that the vectors of S are linearly independent.

The following facts about linearly independent sets are true in any vector space:

  1. The empty set is linearly independent, for linearly dependent sets must be nonempty.
  2. A set consisting of a single nonzero vector is linearly independent. For if {u} is linearly dependent, then au = 0 for some nonzero scalar a. Thus
     u = a⁻¹(au) = a⁻¹·0 = 0
  3. A set is linearly independent iff the only representations of 0 as linear combinations of its vectors are trivial representations.

Examples

polynominal

For k = 0, 1, …, n, let p_k(x) = x^k + x^{k+1} + … + x^n. The set

{p0(x),p1(x),,pn(x)}

is linearly independent in Pn(F).
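A way to see this computationally (numpy sketch, with n = 3 as an illustrative choice): write each p_k as a coefficient vector; stacked as rows they form a triangular matrix of full rank, so the set is linearly independent.

```python
import numpy as np

# Coefficient vectors of p_k(x) = x^k + x^{k+1} + ... + x^n, stacked as rows.
# Row k has 1s in positions k..n, giving a triangular matrix of full rank.
n = 3
P = np.array([[1 if j >= k else 0 for j in range(n + 1)]
              for k in range(n + 1)])

print(np.linalg.matrix_rank(P))  # 4, i.e. n + 1: full rank, so independent
```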

theorem

Theorem 1.6

Let V be a vector space, and let S1 ⊆ S2 ⊆ V. If S1 is linearly dependent, then S2 is linearly dependent.

Corollary

Let V be a vector space, S1 ⊆ S2 ⊆ V. If S2 is linearly independent, then S1 is linearly independent.

Theorem 1.7

Let S be a linearly independent subset of a vector space V, and let v be a vector in V that is not in S. Then S ∪ {v} is linearly dependent if and only if v ∈ span(S).

1.6 bases and dimension

basis

Definition: basis [core]

A basis β for a vector space V is a linearly independent subset of V that generates V. If β is a basis for V, we also say that the vectors of β form a basis for V.

Examples

  • Recalling that span(∅) = {0} and ∅ is linearly independent, we see that ∅ is a basis for the zero vector space.

  • In Fn, let

    e1 = (1, 0, 0, …, 0), e2 = (0, 1, 0, …, 0), …, en = (0, 0, …, 0, 1).

    The set {e1, e2, …, en} is readily seen to be a basis for Fn and is called the standard basis for Fn.

  • In Mm×n(F), let Eij denote the matrix whose only nonzero entry is a 1 in the ith row and jth column. Then {Eij : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for Mm×n(F).

  • In Pn(F) the set {1,x,x2,,xn} is a basis. We call this basis the standard basis for Pn(F).

Theorem

Theorem 1.8:

Let V be a vector space and β={u1,u2,,un} be a subset of V. Then β is a basis for V if and only if each vV can be uniquely expressed as a linear combination of vectors of β, that is, can be expressed in the form

v=a1u1+a2u2++anun

for unique scalars a1,a2,,an.
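Theorem 1.8 in practice: with a basis in hand, the unique coordinates of a vector can be recovered by solving a linear system. A numpy sketch with an illustrative basis of R²: u1 = (1, 0), u2 = (1, 1).

```python
import numpy as np

# Coordinates of v relative to the basis {u1, u2} of R^2:
# solve B @ a = v, where B holds the basis vectors as columns.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: u1 = (1, 0), u2 = (1, 1)
v = np.array([3.0, 2.0])

a = np.linalg.solve(B, v)    # the unique scalars with a1*u1 + a2*u2 = v
assert np.allclose(B @ a, v)
print(a)  # [1. 2.]
```

Uniqueness corresponds to B being invertible, which is exactly the basis property.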

Theorem 1.9

If a vector space V is generated by a finite set S, then some subset of S is a basis for V. Hence V has a finite basis

Theorem 1.10(Replacement Theorem)

Let V be a vector space that is generated by a set G containing exactly n vectors, and let L be a linearly independent subset of V containing exactly m vectors. Then mn and there exists a subset H of G containing exactly nm vectors such that LH generates V.

Corollary 1

Let V be a vector space having a finite basis. Then every basis for V contains the same number of vectors

dimension

Definitions: dimension [core]

A vector space is called finite-dimensional if it has a basis consisting of a finite number of vectors. The unique number of vectors in each basis for V is called the dimension of V and is denoted by dim(V). A vector space that is not finite-dimensional is called infinite-dimensional.

Examples

  • The vector space {0} has dimension 0
  • The vector space Fn has dimension n
  • The vector space Mm×n(F) has dimension mn
  • The vector space Pn(F) has dimension n+1

The following examples show that the dimension of a vector space depends on its field of scalars.

  • Over the field of complex numbers, the vector space of complex numbers has dimension 1. (A basis is {1}.)
  • Over the field of real numbers, the vector space of complex numbers has dimension 2. (A basis is {1,i})

Corollary and theorem

Corollary 2

Let V be a vector space with dimension n.

  1. Any finite generating set for V contains at least n vectors, and a generating set for V that contains exactly n vectors is a basis for V.
  2. Any linearly independent subset of V that contains exactly n vectors is a basis for V.
  3. Every linearly independent subset of V can be extended to a basis for V.

Theorem 1.11:

Let W be a subspace of a finite-dimensional vector space V. Then W is finite-dimensional and dim(W) ≤ dim(V). Moreover, if dim(W) = dim(V), then V = W.

Examples

The set of diagonal n×n matrices is a subspace W of Mn×n(F). A basis for W is

E11,E22,,Enn,

where Eij is the matrix in which the only nonzero entry is a 1 in the ith row and jth column. Thus dim(W)=n.

Corollary

If W is a subspace of a finite-dimensional vector space V, then any basis for W can be extended to a basis for V.

Released under the MIT License.