Midterm 2 Notes:Math 361, Spring 2014


Math 361 Midterm 2 Notes

Linear Algebra

Vector Space

Let \(F\) be a field. A vector space over \(F\) is a set \(V\) with two functions, \(+\) and \(\cdot\). \(+\) is vector addition, \(\cdot\) is scalar multiplication.
\(V\) is an abelian group with respect to vector addition. \(+\) is a function from \(V\times V \rightarrow V\), and it's associative, commutative, \(V\) contains an additive identity (the zero vector, or \(\vec{0}\)), and \(V\) contains an additive inverse for every element. The additive inverse of \(\vec{v}\) is \(-\vec{v}\).
Scalar multiplication is more complicated. Scalar multiplication is a function from \(F \times V \rightarrow V\), so it combines a field element and a vector to get a vector. Scalar multiplication has some properties that kind of look like associativity and distributivity, but they're not quite the same. Anyway, for all \(c, d \in F\) and \(\vec{v}, \vec{w} \in V\): $$ (c + d) \cdot \vec{v} = c \cdot \vec{v} + d \cdot \vec{v}\\ c \cdot (\vec{v} + \vec{w}) = c \cdot \vec{v} + c \cdot \vec{w}\\ (cd) \cdot \vec{v} = c \cdot (d \cdot \vec{v}) \\ 1 \cdot \vec{v} = \vec{v} $$ Okay, some explanation. For the purposes of the midterm, there are only three types of vector space that we care about.
First - the \(n\)-tuples of \(F\), i.e. \(F \times F \times \ldots \times F\). This is a vector space. Vector addition and scalar multiplication are just done component-wise. \(\mathbb{R}^3\) is an example of such a vector space. \((1,2,3)\) and \((2,3,4)\) are both elements of \(\mathbb{R}^3\). We add them component-wise: $$ (1,2,3) + (2,3,4) = (1+2, 2+3, 3+4) = (3,5,7) $$ \(3\) is a scalar (it's an element of the base field, \(\mathbb{R}\)), and we do scalar multiplication component-wise: $$ 3\cdot (1,2,3) = (3\cdot 1, 3\cdot 2, 3\cdot 3) = (3,6,9) $$
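If it helps to see this computationally, here's a small Python sketch of the component-wise operations (the helper names vec_add and scalar_mul are just made up for illustration):

```python
# Component-wise operations on R^3, with vectors modeled as tuples.
def vec_add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def scalar_mul(c, v):
    return tuple(c * a for a in v)

print(vec_add((1, 2, 3), (2, 3, 4)))   # (3, 5, 7)
print(scalar_mul(3, (1, 2, 3)))        # (3, 6, 9)
```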
The next example is the polynomials over a field. Here a polynomial is a vector, and vector addition is just polynomial addition. Scalar multiplication is not polynomial multiplication - the scalars are elements of the base field, i.e. they're constant polynomials.
Consider \(\mathbb{R}[x]\) - the real polynomials. Take the scalar \(5\). To do scalar multiplication, you treat it as a constant polynomial (which is, surprise, still \(5\)), and then you multiply this constant polynomial by your actual polynomial. So: $$ 5 \cdot (5 + 3x + 4x^2) = 25 + 15x + 20x^2 $$ You can also consider polynomials of limited degree - say, all the polynomials of degree at most 5. This is actually the same as the 6-tuples of the base field: \(5 + 4x + 3x^2 + 2x^3 + x^4 + x^5\) can just be represented as \((5,4,3,2,1,1)\), which is an element of \(\mathbb{R}^6\).
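Here's a quick sympy sketch of the same idea, treating a polynomial as its list of coefficients (the variable names are just illustrative):

```python
from sympy import symbols, Poly

x = symbols('x')
p = Poly(5 + 3*x + 4*x**2, x)

# Scalar multiplication is multiplication by a constant polynomial.
print(5 * p)            # Poly(20*x**2 + 15*x + 25, x, domain='ZZ')

# A polynomial of degree <= 5 is determined by its 6 coefficients,
# i.e. it corresponds to a vector in R^6.
q = Poly(5 + 4*x + 3*x**2 + 2*x**3 + x**4 + x**5, x)
print(q.all_coeffs())   # [1, 1, 2, 3, 4, 5]  (highest degree first)
```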
The third example is the rational functions over a field. Vector addition is just rational function addition, and scalar multiplication is (like for polynomials) just multiplication by a constant rational function.

Linear Combinations and Other Things

Let \(F\) be a field and \(V\) a vector space over \(F\). Let \(\vec{v}_1, \ldots, \vec{v}_n\) be vectors in \(V\). A linear combination of these vectors is: $$ c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n $$ where \(c_i \in F\).
So let \(F\) be \(\mathbb{R}\) and \(V\) be \(\mathbb{R}^3\). Let our set of vectors be: $$ (1,2,3)\\ (2,3,4)\\ (3,4,5) $$ One possible linear combination of these vectors is: $$ 3(1,2,3) + 25(2,3,4) - \frac{1}{4}(3,4,5) $$ We can replace 3, 25, and \(-\frac{1}{4}\) with any elements of \(\mathbb{R}\) we want, and still get a linear combination of these three vectors.
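Just to make that concrete, here's a small Python sketch that evaluates this particular linear combination exactly (lin_comb is an illustrative helper, not anything standard):

```python
from fractions import Fraction

def lin_comb(coeffs, vectors):
    # Sum of c_i * v_i, computed component-wise.
    n = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(n))

coeffs = [Fraction(3), Fraction(25), Fraction(-1, 4)]
vectors = [(1, 2, 3), (2, 3, 4), (3, 4, 5)]
print(lin_comb(coeffs, vectors))   # (Fraction(209, 4), Fraction(80, 1), Fraction(431, 4))
```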

The Span of a Set

Let's say we have a set of vectors \(S = \{\vec{v}_1, \ldots, \vec{v}_n\}\). The span of this set is the set of all vectors that are linear combinations of these vectors. So the span of \(\{(1,0,0),(0,1,0)\}\) is \(\{(x,y,0) \mid x,y \in \mathbb{R}\}\), because: $$ x(1,0,0) + y(0,1,0) = (x,0,0) + (0,y,0) = (x,y,0) $$
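One way to check whether a given vector lies in a span is to solve for the coefficients. Here's a sketch using sympy's linsolve (which returns the empty set when no combination works):

```python
from sympy import Matrix, linsolve, symbols

x, y = symbols('x y')
v1, v2 = Matrix([1, 0, 0]), Matrix([0, 1, 0])

# Is (2, 5, 0) in the span?  Solve x*v1 + y*v2 = (2, 5, 0).
system = Matrix.hstack(v1, v2), Matrix([2, 5, 0])
print(linsolve(system, x, y))      # {(2, 5)} -> yes, with x = 2 and y = 5

# (1, 1, 1) is not in the span: the third component can never be 1.
system = Matrix.hstack(v1, v2), Matrix([1, 1, 1])
print(linsolve(system, x, y))      # EmptySet
```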

A Spanning Set

Let \(T\) be a subset of a vector space \(V\). Another set, \(S\), is a spanning set of \(T\) if every vector in \(T\) is a linear combination of vectors in \(S\). So we can multiply vectors in \(S\) by scalars, add them together, and get anything in \(T\).
We also say that \(S\) spans \(T\).
So let \(T = \{(x,0,0)\in \mathbb{R}^3| x \in \mathbb{R}\}\). This is the set of points on the \(x\)-axis. One spanning set of \(T\) is the set \(S = \{(1,0,0)\}\). We can scale this vector by \(x\) to get \((x,0,0)\), so we can get all of \(T\). Another spanning set is all of \(\mathbb{R}^3\). Obviously everything in \(T\) is a linear combination of vectors in \(\mathbb{R}^3\), because everything in \(T\) is already in \(\mathbb{R}^3\). This illustrates an important point: \(S\) doesn't have to span only \(T\). The span of \(S\) can be much larger than \(T\), so long as \(T\) is in the span of \(S\).

Linear Independence

Let \(V\) be a vector space and \(S\) be a set of vectors. \(S\) is linearly independent (well, really, the vectors in \(S\) are linearly independent) if none of them is a linear combination of any of the others.
Let \(S = \{\vec{v}_1,\ldots,\vec{v}_n\}\). These vectors are linearly independent if and only if: $$ c_1\vec{v}_1 + \cdots + c_n\vec{v}_n = \vec{0} $$ implies that \(c_i = 0\) for all \(i\). So the only way to linearly combine the vectors and get \(\vec{0}\) is for the scalars to all be 0.
Consider \(S = \{(1,0,0),(0,1,0)\}\). These vectors are linearly independent. If you take a linear combination, you get: $$ c(1,0,0) + d(0,1,0) = (c,d,0) $$ This only equals \(\vec{0}\) if \(c\) and \(d\) are both 0, so the vectors are linearly independent.
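Computationally, one standard check is to put the vectors in the rows of a matrix and compare its rank to the number of vectors. A sympy sketch:

```python
from sympy import Matrix

# Rank equal to the number of rows means the rows are independent.
vecs = Matrix([[1, 0, 0],
               [0, 1, 0]])
print(vecs.rank() == vecs.rows)    # True -> linearly independent

dep = Matrix([[1, 2, 3],
              [2, 4, 6]])          # second row is 2 * first row
print(dep.rank() == dep.rows)      # False -> linearly dependent
```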

Basis for a Vector Space

If \(V\) is a vector space, then a basis for \(V\) is a set \(B\) of linearly independent vectors whose span is \(V\). So no vector in \(B\) is a linear combination of the other vectors in \(B\), but if we use all the vectors in \(B\) we can get every vector in the vector space.
\(\{(1,0,0),(0,1,0),(0,0,1)\}\) is a basis for \(\mathbb{R}^3\). Obviously two of them can't be combined to get the third (the proof is basically the same as for the last section), but if we combine all of them we get: $$ c(1,0,0) + d(0,1,0) + e(0,0,1) = (c,d,e) $$ which is everything in \(\mathbb{R}^3\).
Another example. \(\{1,x,x^2\}\) is a basis for the real polynomials of degree at most 2. Any such polynomial can be written: $$ p = a\cdot 1 + b\cdot x + c\cdot x^2 $$

Dimension of a Vector Space

There is a very important theorem that says that all bases of a vector space contain the same number of elements. We call this number of elements the dimension of the vector space.
The dimension of \(\mathbb{R}^3\) is 3, because we created a basis with three elements earlier.
When you consider vector spaces of polynomials, the monomials form a basis: you need one basis element for every power of every variable, one for every way to multiply variables together, and one (the constant polynomial \(1\)) for the constant polynomials.
So \(\mathbb{R}[x]\) is infinite dimensional. There's only one variable \(x\), but we can have arbitrarily high powers of \(x\). If we limit ourselves to, say, polynomials of degree 6 or less, we get a 7-dimensional vector space.
For \(\mathbb{R}[x,y]\), limited to polynomials of total degree two or less, we get: $$ \{1,x,x^2,xy,y,y^2\} $$ which is 6-dimensional.
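A tiny Python sketch that enumerates that basis by brute force, just to confirm the count:

```python
from itertools import product
from sympy import symbols

x, y = symbols('x y')
max_total_degree = 2

# All monomials x**i * y**j with i + j <= 2.
basis = [x**i * y**j
         for i, j in product(range(max_total_degree + 1), repeat=2)
         if i + j <= max_total_degree]
print(basis)        # [1, y, y**2, x, x*y, x**2]
print(len(basis))   # 6
```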

More Field Theory

So, time to apply some of that stuff to field theory.

Extension Fields are Vector Spaces

If we have a field extension \(F,E,\iota\), then \(E\) is a vector space over \(F\).
This is true of any field extension: \(E\) is already an abelian group under its own addition, and scalar multiplication is just multiplication inside \(E\) of an element of \(E\) by an element of (the image of) \(F\); the field axioms for \(E\) then give exactly the vector space axioms. For the simple extensions we classified, you can also see it concretely: \(E\) looks like either polynomials over \(F\) or rational expressions over \(F\), both of which are vector spaces over \(F\). (Scalar multiplication is just multiplication by a constant.)

Extensions of Extensions

If \(E\) is an extension of \(K\), and \(K\) is an extension of \(F\), then \(E\) is an extension of \(F\).

Dimension of an Extension Field

If \(E\) is a (let's just say finitely-generated) extension field over \(F\), then the dimension of \(E\) over \(F\), written \([E:F]\), is the dimension of \(E\) as a vector space over \(F\).

Dimension Formula

If \(E\) is an extension of \(K\) and \(K\) is an extension of \(F\), then \([E:F] = [E:K][K:F]\).
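For example, with \(\mathbb{Q} \subseteq \mathbb{Q}(\sqrt{2}) \subseteq \mathbb{Q}(\sqrt{2},\sqrt{3})\) the formula gives \([\mathbb{Q}(\sqrt{2},\sqrt{3}):\mathbb{Q}] = 2 \cdot 2 = 4\). Here's a sympy sketch of that count - it leans on the standard fact that \(\mathbb{Q}(\sqrt{2},\sqrt{3}) = \mathbb{Q}(\sqrt{2}+\sqrt{3})\), so the total degree is the degree of the minimal polynomial of \(\sqrt{2}+\sqrt{3}\):

```python
from sympy import sqrt, symbols, minimal_polynomial, degree

x = symbols('x')

# [Q(sqrt(2)) : Q] = degree of the minimal polynomial of sqrt(2).
p = minimal_polynomial(sqrt(2), x)
print(p, degree(p, x))                 # x**2 - 2   2

# Q(sqrt(2), sqrt(3)) = Q(sqrt(2) + sqrt(3)), so its degree over Q
# is the degree of the minimal polynomial of sqrt(2) + sqrt(3).
q = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(q, degree(q, x))                 # x**4 - 10*x**2 + 1   4
```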

Algebraic Extension

Let \(F,E,\iota\) be a field extension. This extension is algebraic if every element of \(E\) is algebraic over \(F\).
For example, \(E\) is algebraic over \(F\) if \(E\) was created by adjoining only algebraic elements.
For example, \(\mathbb{Q}[x]/\langle \operatorname{irr}(\sqrt{2},\mathbb{Q})\rangle = \mathbb{Q}[x]/\langle x^2 - 2\rangle\), which is isomorphic to \(\mathbb{Q}(\sqrt{2})\), is algebraic over \(\mathbb{Q}\).

Finite Extensions

An extension \(E\) of \(F\) is finite dimensional if and only if it is algebraic and finitely-generated.
This makes sense. If \(E\) is not algebraic, then some element \(\alpha \in E\) is transcendental over \(F\), and the subfield \(F(\alpha)\) is isomorphic to the rational expressions over \(F\). The rational expressions over a field are infinite dimensional, so \(E\) couldn't be finite dimensional.
What if it's algebraic and finitely generated? Then we can build \(E\) by adjoining its generators one at a time: \(F \subseteq F(\alpha_1) \subseteq F(\alpha_1,\alpha_2) \subseteq \cdots \subseteq E\). Each step is finite dimensional (its dimension is the degree of the minimal polynomial of the element being adjoined), so by the dimension formula the whole tower, and hence \(E\), is finite dimensional. Conversely, if \(E\) is finite dimensional, then any vector-space basis of \(E\) over \(F\) is a finite generating set, and for any \(\alpha \in E\) the powers \(1, \alpha, \alpha^2, \ldots\) must eventually be linearly dependent, which gives a polynomial that \(\alpha\) satisfies - so \(E\) is algebraic.

Algebraically Closed Fields

\(F\) is a field. \(F\) is algebraically closed if it satisfies the following (equivalent) conditions:
Every nonconstant polynomial in \(F[x]\) has a root in \(F\).
Every nonconstant polynomial in \(F[x]\) splits into linear factors.
The only algebraic extension of \(F\) is the trivial extension.

Algebraic Closure of a Field

Let \(F\) be a field. An algebraic closure of \(F\) is an extension \(\bar{F}\) of \(F\) that is algebraic over \(F\) and is itself algebraically closed.

Constructibility

Constructibility deals with constructing shapes with a compass and straightedge.
Geometric constructions work like this:
You start out with two points (take them to be a unit distance apart). You are allowed to draw lines between any two points you've found. Also, if there is a segment of length \(r\) between two points you've found, you can construct a circle of radius \(r\) centered at any point you've found.
So really you're trying to create a sequence of points.
We say a number \(x\) is constructible if we can find a line segment of length \(x\).
I'm not going to explain how to do all the constructions, mainly because it's a graphical problem and this is a text only interface. But we can construct any integer, and any rational number, and some other things to be explained later.

The Constructible Numbers are a Field

So if we have two constructible numbers, we can add, multiply, subtract, and divide them. Geometrically this means if we have two lines of length \(x\) and \(y\), we can find a third line of length \(x+y\), and another of length \(x-y\), and ones of length \(xy\) and \(x/y\).
Proving this is, unfortunately, a graphical proof. Try google.

Constructible Numbers are Closed under Square Roots

So we know the constructible numbers are a field. This means they must at least contain the rational numbers. But given any constructible number, we can also construct its square root, meaning the constructible numbers are more than just the rational numbers.
So \(\sqrt{2}\) is constructible, and \(\sqrt{2} + \sqrt{3}\) is too, and so is \(\sqrt{\sqrt{2} + \sqrt{3}}\).
Also, square roots are the only new thing we gain. This makes sense - getting a new constructible number means we need a new point. We can only create new points by intersecting lines, intersecting a line and a circle, or intersecting circles. All of these end up being linear or quadratic operations, so there can't be any way to extract, say, a cube root. The upshot is that every constructible number \(\alpha\) sits at the top of a tower of extensions of \(\mathbb{Q}\), each step of degree 1 or 2, so \([\mathbb{Q}(\alpha):\mathbb{Q}]\) is a power of 2.

Doubling the Cube is Impossible

We have a cube of side length \(s\). We want to find a cube with twice the volume. Let the volume of the first cube be \(V = s^3\), and the second cube be \(W = t^3\). We also want \(W = 2V\). So \(t^3 = 2s^3\), so \(t = \sqrt[3]{2}\,s\). Constructing the new side would mean constructing \(\sqrt[3]{2}\). But \(x^3 - 2\) is irreducible over \(\mathbb{Q}\), so \([\mathbb{Q}(\sqrt[3]{2}):\mathbb{Q}] = 3\), which is not a power of 2 - so \(\sqrt[3]{2}\) is not constructible, and the cube can't be doubled.
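A quick sympy sanity check of that degree computation:

```python
from sympy import Rational, symbols, minimal_polynomial, degree

x = symbols('x')
cbrt2 = 2 ** Rational(1, 3)

p = minimal_polynomial(cbrt2, x)
print(p)              # x**3 - 2
print(degree(p, x))   # 3 -- not a power of 2, so cbrt(2) is not constructible
```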

Trisecting the Angle is Impossible

The proof is to show that one specific angle - 60 degrees - can't be trisected. A 60-degree angle is constructible, and trisecting it would mean constructing a 20-degree angle, which amounts to constructing the number \(\cos 20^\circ\). The identity \(\cos 3\theta = 4\cos^3\theta - 3\cos\theta\) with \(\theta = 20^\circ\) gives \(4c^3 - 3c = \frac{1}{2}\), where \(c = \cos 20^\circ\), i.e. \(8c^3 - 6c - 1 = 0\). This cubic has no rational roots, so it's irreducible over \(\mathbb{Q}\), and \([\mathbb{Q}(c):\mathbb{Q}] = 3\), which is not a power of 2. So \(\cos 20^\circ\) is not constructible, and the 60-degree angle can't be trisected.
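A sympy sketch that checks the triple-angle identity, the equation for \(\cos 20^\circ\), and the irreducibility claim:

```python
from sympy import symbols, cos, pi, expand_trig, Poly

x = symbols('x')

# The triple-angle identity used above.
print(expand_trig(cos(3*x)))             # 4*cos(x)**3 - 3*cos(x)

# cos(20 degrees) = cos(pi/9) satisfies 8c^3 - 6c - 1 = 0 (check numerically).
c = cos(pi / 9)
print((8*c**3 - 6*c - 1).evalf())        # ~0, up to rounding

# The cubic 8x^3 - 6x - 1 has no rational roots, so it's irreducible over Q.
print(Poly(8*x**3 - 6*x - 1, x).is_irreducible)   # True
```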

Advanced Group Theory

New Group Operations

Meet

Let \(H\) and \(K\) be subgroups of \(G\). The meet of \(H\) and \(K\), written \(H \wedge K\), is the intersection of \(H\) and \(K\). This is also the largest subgroup of \(G\) that is contained in both \(H\) and \(K\).

Join

Let \(H\) and \(K\) be subgroups of \(G\). The join of \(H\) and \(K\), written \(H \vee K\), is the smallest subgroup that contains both \(H\) and \(K\). This is not necessarily the union of \(H\) and \(K\).

Product

Let \(H\) and \(K\) be subgroups of \(G\). The product of \(H\) and \(K\), written \(HK\), is the set: $$ HK = \{hk \mid h \in H, k\in K\} $$ This is not always a subgroup of \(G\), but it is guaranteed to be one if \(H\) or \(K\) is normal in \(G\).
If \(H\) or \(K\) is normal, then \(HK = H \vee K\).
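Here's a small sketch using sympy's permutation groups, taking \(G = S_3\), \(H = \{e, (0\,1)\}\), and the normal subgroup \(N = A_3\) (the setup is just an illustrative example):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# In S_3: H = {e, (0 1)},  N = A_3 = {e, (0 1 2), (0 2 1)}  (N is normal in S_3).
H = PermutationGroup(Permutation(0, 1, size=3))
N = PermutationGroup(Permutation(0, 1, 2))

# The product set HN = {h*n : h in H, n in N}.
HN = {h * n for h in H.elements for n in N.elements}
print(len(HN))                      # 6 -- all of S_3, and it is a subgroup here

# It matches the join: the smallest subgroup containing both H and N.
join = PermutationGroup(list(H.generators) + list(N.generators))
print(join.order() == len(HN))      # True
```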

Isomorphism Theorems

The three isomorphism theorems establish isomorphisms between groups formed from the new group operations.

First Isomorphism Theorem

This is exactly the same as the fundamental theorem of group homomorphisms, which we covered in the fall.
If we have two groups \(G\) and \(K\), and a homomorphism \(\phi: G \rightarrow K\), then there is a monomorphism \(\psi: G/\ker \phi \rightarrow K\), and \(\operatorname{Im}(\psi) = \operatorname{Im}(\phi)\). So \(\psi\) could also be thought of as an isomorphism from \(G/\ker\phi\) to \(\operatorname{Im}(\phi)\).
Also, if \(\pi\) is the projection of \(G\) onto \(G/\ker\phi\), then \(\phi(x) = \psi(\pi(x))\).

Second Isomorphism Theorem

If \(G\) is a group, \(H\) is a subgroup of \(G\), and \(N\) is a normal subgroup of \(G\), then: $$ HN/N \cong H/(H\cap N) $$ There are a couple of lemmas for this. One, \(H \cap N\) is a normal subgroup of \(H\) - otherwise the right side of the equation would be meaningless. This is relatively easy to prove, so I'm not going to explain why.
Another lemma is that \(N\) is normal in \(HN\). This is similarly easy to prove.
Anyway, this is almost a cancellation law. When we mod \(HN\) by \(N\), we send all the elements of \(N\) to 1. This means that everything in \(H \cap N\) also gets sent to 1. Remember that the elements of \(HN\) are products \(hn\). But \(n\) gets sent to 1, so really this is \(h\), unless \(h\) also gets sent to 1, which only happens when \(h \in H \cap N\).
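Continuing the same kind of example in sympy, here's a sketch that just compares the orders of the two sides of \(HN/N \cong H/(H\cap N)\) for \(H = \{e,(0\,1)\}\) and \(N = A_3\) inside \(S_3\):

```python
from sympy.combinatorics import Permutation, PermutationGroup

H = PermutationGroup(Permutation(0, 1, size=3))          # order 2
N = PermutationGroup(Permutation(0, 1, 2))               # A_3, order 3, normal in S_3

HN = {h * n for h in H.elements for n in N.elements}     # all of S_3
H_cap_N = set(H.elements) & set(N.elements)              # just the identity

# |HN / N| should equal |H / (H intersect N)|.
print(len(HN) // N.order(), H.order() // len(H_cap_N))   # 2 2
```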

Third Isomorphism Theorem

Let \(G\) be a group, and let \(H\) and \(K\) be normal subgroups of \(G\). Also, let \(K\) be a subgroup of \(H\). Then: $$ \frac{G/K}{H/K} \cong \frac{G}{H} $$ So here we kind of do get a cancellation property.
When we mod \(G\) by \(K\), we send all the elements of \(K\) to 1. When we mod \(H\) by \(K\), we also send all the elements of \(K\) to 1. So the elements of \(H/K\) come from the elements of \(H\) that aren't in \(K\). When we mod \(G/K\) by \(H/K\), we send the elements of \(H\) that aren't in \(K\) to 1 as well. But we already sent the elements of \(K\) to 1 when we formed \(G/K\). So now we've sent all of \(H\) to 1, which is exactly what \(G/H\) does.
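A concrete example: take \(G = \mathbb{Z}\), \(H = 2\mathbb{Z}\), and \(K = 6\mathbb{Z}\) (both normal, since \(\mathbb{Z}\) is abelian, and \(K \subseteq H\)). Then: $$ \frac{\mathbb{Z}/6\mathbb{Z}}{2\mathbb{Z}/6\mathbb{Z}} \cong \frac{\mathbb{Z}}{2\mathbb{Z}} $$ since \(\mathbb{Z}/6\mathbb{Z}\) has 6 elements, the image of \(2\mathbb{Z}\) in it is \(\{0,2,4\}\) with 3 elements, and \(6/3 = 2\).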

Producing Normal Subgroups

There are quite a few little theorems about producing normal subgroups.

Subnormal Series

Factors

Saturated Subnormal Series

Some More Stuff

Review of \(G\)-Sets