
The Inclusion Exclusion Principle

The Inclusion-Exclusion Principle is one of the most fundamental results in combinatorics. It counts through a sieve technique: we first overcount the quantity to be enumerated, then balance out the overcounting by subtracting the extra. In doing so we may subtract more than what is needed, and so we add the excess back in. Continuing in this way we “converge” to the correct count.

An easy example is |A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C| where A,B,C are three finite sets.

Here we start off by summing the numbers of elements in A,B,C separately. The overcount is compensated by subtracting |A\cap B|, |A\cap C| and |B\cap C|. We actually compensate for a bit more than needed, and so to arrive at the correct count we must add back |A\cap B\cap C|.
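For readers who like to check such identities mechanically, here is a small Python sketch (purely illustrative, with A,B,C taken to be random subsets of \{0,\cdots,9\}) that verifies the three-set formula by brute force:

    import random

    # Check |A u B u C| = |A|+|B|+|C| - |A n B| - |A n C| - |B n C| + |A n B n C|
    # on randomly chosen subsets of {0, ..., 9}.
    universe = range(10)
    for trial in range(1000):
        A = {x for x in universe if random.random() < 0.5}
        B = {x for x in universe if random.random() < 0.5}
        C = {x for x in universe if random.random() < 0.5}
        lhs = len(A | B | C)
        rhs = (len(A) + len(B) + len(C)
               - len(A & B) - len(A & C) - len(B & C)
               + len(A & B & C))
        assert lhs == rhs
    print("three-set inclusion-exclusion verified on 1000 random triples")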

Our goal here is to prove the inclusion-exclusion principle and then to look at a few corollaries. We first establish two lemmas.

Lemma 1: If X is any set containing n elements and k is any field, then the set of all functions f:X\to k is an n-dimensional vector space V over k with the naturally induced operations.

Proof: Let X=\{x_1,\cdots,x_n\}. It is easy to see that V, together with the operations defined by (f+g)(x_i)=f(x_i)+g(x_i) and (\alpha f)(x_i)=\alpha (f(x_i)) for all x_i\in X, is a vector space over k.

We now exhibit a basis for V consisting of n elements. For each x_i\in X let f_{x_i} be the function defined by f_{x_i}(y)=\begin{cases}1 & \text{ if }y=x_i\\ 0 & \text{ otherwise}\end{cases}. Now, we claim that B=\{f_{x_i}:x_i\in X\} is a basis. Clearly it is spanning, as for any f\in V we have f=\sum_{x_i\in X}f(x_i)f_{x_i}. It is linearly independent, since \sum_{x_i\in X} \alpha_i f_{x_i}=0 means that for any j with 1\le j\le n we have \sum_{x_i\in X} \alpha_i f_{x_i}(x_j)=0, i.e. \alpha_j=0. This completes the proof.\Box

Lemma 2: If m\ge 0 then \sum_{i=0}^m (-1)^i \binom{m}{i} = \delta_{0m}.

Proof: If m>0, put x=1, y=-1 in the binomial theorem (x+y)^m=\sum_{i=0}^m\binom{m}{i}x^{m-i}y^i. If m=0, then the sum on the left reduces to the single term (-1)^0\binom 0 0, which is clearly 1.\Box
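A quick numerical check of Lemma 2 (an illustrative sketch, requiring Python 3.8+ for math.comb, and of course not a substitute for the proof):

    from math import comb

    # sum_{i=0}^m (-1)^i * C(m, i) should be 1 when m = 0 and 0 otherwise.
    for m in range(11):
        total = sum((-1) ** i * comb(m, i) for i in range(m + 1))
        assert total == (1 if m == 0 else 0)
    print("Lemma 2 verified for m = 0, ..., 10")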

We now prove the Inclusion-Exclusion principle. This theorem, in its purest form, is simply a formula for the inverse of a certain linear operator. The theorem is as follows:

Theorem 3: Let S be a set with n elements. Let V be the 2^n-dimensional vector space of all functions f:\mathcal{P}(S)\to k, where k is some field and \mathcal{P}(S) is the power set of S. Let \phi:V\to V be the linear operator defined by

\displaystyle\phi f(T)=\sum_{Y\supseteq T}f(Y)\text{ for all } T\subseteq S.

Then \phi^{-1} exists and is given by:

\displaystyle\phi^{-1} f(T)=\sum_{Y\supseteq T}(-1)^{|Y-T|}f(Y)\text{ for all } T\subseteq S.

Proof: Since V is finite dimensional, a one-sided inverse of a linear operator on V is automatically a two-sided inverse; so to show that \phi^{-1} as given above is indeed the inverse it suffices to show that \phi^{-1}\phi(f)=f for all f\in V.

Let f\in V. Then,

\displaystyle\phi^{-1}\phi f(T)=\sum_{Y\supseteq T}(-1)^{|Y-T|}\phi f(Y)=\sum_{Y\supseteq T}(-1)^{|Y-T|}\sum_{Z\supseteq Y}f(Z)=\sum_{Z\supseteq T}\Big(\sum_{Z\supseteq Y\supseteq T}(-1)^{|Y-T|}\Big)f(Z)

Now fix Z,T and let m=|Z-T|. Consider \displaystyle\sum_{Z\supseteq Y\supseteq T}(-1)^{|Y-T|}. Any such Y is obtained by choosing some elements of Z-T, which has m elements, and taking their union with T. So for every i with 0\le i\le m, there are exactly \binom m i ways of choosing a Y with i+|T| elements, and any such Y has i elements in Y-T. So \displaystyle\sum_{Z\supseteq Y\supseteq T}(-1)^{|Y-T|}=\sum_{i=0}^m(-1)^i\binom m i=\delta_{0m}. Substituting this in the expression for \phi^{-1}\phi f(T) shows that \phi^{-1}\phi f(T)=f(T), which proves the theorem.\Box
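The operator \phi and its claimed inverse are easy to implement directly. The sketch below (my own illustration; it takes S to be a 4-element set and f to be a random integer-valued function on \mathcal{P}(S)) checks that the formula of Theorem 3 really does invert \phi:

    import random
    from itertools import chain, combinations

    def subsets(s):
        """All subsets of s, as frozensets."""
        s = list(s)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1))]

    S = frozenset({1, 2, 3, 4})
    P = subsets(S)

    def phi(f):
        """phi f(T) = sum over Y containing T of f(Y)."""
        return {T: sum(f[Y] for Y in P if Y >= T) for T in P}

    def phi_inv(f):
        """phi^{-1} f(T) = sum over Y containing T of (-1)^{|Y-T|} f(Y)."""
        return {T: sum((-1) ** len(Y - T) * f[Y] for Y in P if Y >= T) for T in P}

    # A random integer-valued f : P(S) -> k (viewed inside the field of rationals).
    f = {T: random.randint(-10, 10) for T in P}
    assert phi_inv(phi(f)) == f
    assert phi(phi_inv(f)) == f
    print("phi_inv is a two-sided inverse of phi on a random f")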

We now discuss some corollaries of the Inclusion-Exclusion principle. Let S be a set of properties that elements of a given set A may or may not have. For any T\subseteq S, let A'_T\subseteq A be the set of those elements which have exactly the properties in T and no others, and define a function f_=:\mathcal{P}(S)\to \mathbb{R} by f_=(T)=|A'_T|. Similarly, for any T\subseteq S, let A''_T\subseteq A be the set of those elements which have at least the properties in T, and define f_\ge:\mathcal{P}(S)\to \mathbb{R} by f_\ge(T)=|A''_T|. It is clear that \displaystyle f_\ge(T)=\sum_{Y\supseteq T}f_=(Y) for any T\subseteq S, and so by the Inclusion-Exclusion principle we conclude that

Corollary 4: For any T\subseteq S, we have \displaystyle f_=(T)=\sum_{Y\supseteq T}(-1)^{|Y-T|}f_\ge (Y).

In particular we have \displaystyle f_=(\emptyset)=\sum_{Y\subseteq S}(-1)^{|Y|}f_\ge (Y), which gives us a formula for the number of elements having none of the properties.\Box

In the above corollary we think of f_\ge(T) as the first approximation to f_=(T). So we “include” that much “quantity” in our initial count. From this we subtract all terms of the type f_\ge(Y) where Y has just one extra element beyond T. Thus we “exclude” that much “quantity” from our count. This gives a better approximation. Next we add all terms of the type f_\ge(Y) where Y has two extra elements beyond T, and so on. This is the reason behind the terminology inclusion-exclusion.
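To make Corollary 4 concrete, here is a toy Python sketch of my own (A=\{0,\cdots,19\} with three divisibility properties chosen arbitrarily): it computes f_\ge directly, recovers f_= from the corollary, and compares with a direct count.

    from itertools import chain, combinations

    # Toy example: A = {0, ..., 19}; properties P2, P3, P5 = "divisible by 2 / 3 / 5".
    A = set(range(20))
    props = {'P2': {x for x in A if x % 2 == 0},
             'P3': {x for x in A if x % 3 == 0},
             'P5': {x for x in A if x % 5 == 0}}
    S = frozenset(props)

    def subsets(s):
        s = list(s)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1))]

    def f_ge(T):
        """Number of elements of A having at least the properties in T."""
        result = set(A)
        for p in T:
            result &= props[p]
        return len(result)

    def f_eq_direct(T):
        """Number of elements of A having exactly the properties in T."""
        return sum(frozenset(p for p in S if x in props[p]) == T for x in A)

    # Corollary 4: f_=(T) = sum over Y containing T of (-1)^{|Y-T|} f_>=(Y).
    for T in subsets(S):
        via_corollary = sum((-1) ** len(Y - T) * f_ge(Y)
                            for Y in subsets(S) if Y >= T)
        assert via_corollary == f_eq_direct(T)
    print("Corollary 4 verified on the toy example")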

We now discuss another corollary. Let A be a finite set and let A_1,\cdots,A_n be some of its subsets. We define a set of properties S=\{P_1,\cdots,P_n\}, which elements of A may or may not enjoy, as follows: for any i with 1\le i\le n, x\in A satisfies the property P_i if and only if x\in A_i. Also, for any \emptyset\ne I\subseteq [n], let A_I=\cap_{i\in I}A_i be the set of elements which have at least the properties P_i for i\in I, and define A_\emptyset to be A. By Corollary 4, \displaystyle f_=(\emptyset)=\sum_{I'\subseteq S}(-1)^{|I'|}f_\ge (I')=\sum_{I\subseteq [n]}(-1)^{|I|}|A_I|, where in the second equality we identify each subset of properties with the corresponding subset of [n]. We summarize this as

Corollary 5: Let A be a finite set and let A_1,\cdots,A_n be some of its subsets. For any \emptyset\ne I\subseteq [n], let A_I=\cap_{i\in I}A_i and let A_\emptyset=A. The number of elements in A-\cup_{i=1}^n A_i is given by \displaystyle\sum_{I\subseteq [n]}(-1)^{|I|}|A_I|=|A|+\sum_{\emptyset\ne I\subseteq [n]}(-1)^{|I|}|A_I|.\Box

A further special case is obtained by considering any finite sets A_1,A_2,\cdots,A_n and letting A=\cup_{i=1}^nA_i. Then A-\cup_{i=1}^n A_i is empty, so the above corollary translates to \sum_{I\subseteq [n]}(-1)^{|I|}|A_I|=0. Considering the case I=\emptyset separately, we see that |A|+\displaystyle\sum_{\emptyset\ne I\subseteq [n]}(-1)^{|I|}|\cap_{i\in I}A_i|=0. This easily yields the following corollary.

Corollary 6: Let A_1,A_2,\cdots,A_n be any finite sets. For any \emptyset\ne I\subseteq [n], let A_I=\cap_{i\in I}A_i. Then \displaystyle|A_1\cup A_2\cup\cdots\cup A_n|=\displaystyle\sum_{\emptyset\ne I\subseteq [n]}(-1)^{|I|-1}|\cap_{i\in I}A_i|.\Box

Now, by grouping terms involving the same size of I, we can restate both Corollaries 5 and 6 as follows.

Corollary 7: Let A be a finite set and let A_1,\cdots,A_n be some of its subsets. The number of elements in A-\cup_{i=1}^n A_i is given by
\displaystyle |A|+\sum_{k=1}^n(-1)^{k}\sum_{1\le i_1 < i_2 < \cdots <i_k\le n}|A_{i_1}\cap A_{i_2}\cap\cdots\cap A_{i_k}|.\Box

Corollary 8: If A_1,A_2,\cdots,A_n are any finite sets then

\displaystyle|A_1\cup A_2\cup\cdots\cup A_n|=\sum_{k=1}^n(-1)^{k-1}\sum_{1\le i_1< i_2<\cdots<i_k\le n}|A_{i_1}\cap A_{i_2}\cap\cdots\cap A_{i_k}|.\Box
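Corollary 8 is the version most often quoted, and it too can be checked mechanically; the following sketch (illustrative only, on random families of four subsets of \{0,\cdots,14\}) does exactly that:

    import random
    from itertools import combinations

    # Check |A_1 u ... u A_n| = sum_{k=1}^n (-1)^{k-1} * (sum over k-subsets of |intersection|).
    n = 4
    for trial in range(200):
        sets = [{x for x in range(15) if random.random() < 0.4} for _ in range(n)]
        lhs = len(set().union(*sets))
        rhs = 0
        for k in range(1, n + 1):
            for idx in combinations(range(n), k):
                inter = set(sets[idx[0]])
                for i in idx[1:]:
                    inter &= sets[i]
                rhs += (-1) ** (k - 1) * len(inter)
        assert lhs == rhs
    print("Corollary 8 verified on 200 random families of 4 sets")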

Corollaries 5 to 8 are often referred to as the principle of inclusion-exclusion themselves, as in combinatorial settings they are the forms most often used. A further simplified version can be derived from them when the intersection of any k distinct sets A_i always has the same cardinality N_k: in that case we only need to multiply N_k by the number of ways of selecting the k sets to obtain the value of the inner sums in Corollaries 7 and 8.

Corollary 9: Let A be a finite set and let A_1,\cdots,A_n be some of its subsets. Suppose that for any k with 1\le k\le n there exists a natural number N_k so that for any \{i_1,\cdots, i_k\}\subseteq [n] we have |A_{i_1}\cap\cdots\cap A_{i_k}|=N_k. Then the number of elements in A-\cup_{i=1}^n A_i is given by |A|+\sum_{k=1}^n(-1)^k\binom{n}{k}N_k and |A_1\cup A_2\cup\cdots\cup A_n|=\sum_{k=1}^n(-1)^{k-1}\binom{n}{k}N_k.\Box
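A classical application of Corollary 9, not worked out in this post but added here as a hedged illustration, is the count of derangements: take A to be the set of all permutations of [n] and A_i the set of permutations fixing i; any k of the A_i intersect in exactly N_k=(n-k)! permutations, so the number of fixed-point-free permutations is n!+\sum_{k=1}^n(-1)^k\binom{n}{k}(n-k)!. The sketch below checks this against a brute-force count.

    from math import comb, factorial
    from itertools import permutations

    def derangements_formula(n):
        """Corollary 9 with |A| = n! and N_k = (n - k)!."""
        return factorial(n) + sum((-1) ** k * comb(n, k) * factorial(n - k)
                                  for k in range(1, n + 1))

    def derangements_brute(n):
        """Direct count of permutations of [n] with no fixed point."""
        return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

    for n in range(1, 8):
        assert derangements_formula(n) == derangements_brute(n)
    print([derangements_formula(n) for n in range(1, 8)])  # 0, 1, 2, 9, 44, 265, 1854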


Filed under Combinatorics, Set theory

Graph Automorphisms

This post concerns automorphisms of graphs, which quantify the symmetry present in the graph structure. Given two graphs G and H, a bijection f:V(G)\to V(H) which preserves adjacency, i.e. xy\in E(G)\Leftrightarrow f(x)f(y)\in E(H), is called an isomorphism, and the graphs G and H are called isomorphic. Clearly isomorphic graphs are essentially the same, the superficial difference between them being the notation used in defining the vertex set. An isomorphism from the graph G to itself is called an automorphism. It is easy to see that the set of all automorphisms of a graph G, together with the operation of composition of functions, forms a group. This group is called the automorphism group of the graph, and is denoted by A(G).

In the remainder of this post we investigate some well known graphs and find out their automorphism groups.

The first graph we take up is the complete graph K_n. Any permutation of its n vertices is in fact an automorphism, for adjacency is never lost. Its automorphism group is therefore S_n.

The next graph is the complete bipartite graph K_{m,n}. First consider the case m\not= n. The m vertices in the first partite set can be permuted in m! ways, and similarly the second partite set gives n! ways. Each of these m!n! restricted permutations is an automorphism because adjacency is never disturbed. On the other hand, no automorphism can swap a vertex of the first partite set with a vertex of the second, because unless such a swap is done in its entirety (i.e. all the vertices of the first partite set swap places with the vertices of the second), adjacency will be lost; and a swap can be done in its entirety only if m=n, which is not the case we are considering. Hence there are no further automorphisms. Moreover, by the multiplication rule it is simple to observe that the automorphism group is isomorphic to S_m\times S_n.

In the case m=n, we first pair off the vertices of the two partite sets against each other; the map swapping each vertex with its partner is an automorphism, say f. Now, for each of the m!m! ways of permuting vertices within the partite sets, an additional automorphism arises: first permute the vertices within the partite sets in the given way, and then swap each vertex with its partner in the other partite set. This yields 2m!m! automorphisms, and no more are possible. Every element of A(K_{m,m}) can be written uniquely as a product of an automorphism of the type counted in the first m!^2 ways (these form a subgroup which, being of index 2, is normal) and an element of the subgroup \{Id,f\}, so the automorphism group is (S_m\times S_m)\rtimes \mathbb{Z}_2.
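These counts are easy to confirm by brute force for small m and n; the sketch below (my own check, not part of the argument) enumerates all adjacency-preserving bijections of K_{2,3} and K_{3,3}:

    from itertools import permutations
    from math import factorial

    def automorphism_count(vertices, edges):
        """Count bijections of the vertex set that preserve the edge set (brute force)."""
        edge_set = {frozenset(e) for e in edges}
        count = 0
        for perm in permutations(vertices):
            relabel = dict(zip(vertices, perm))
            if {frozenset({relabel[u], relabel[v]}) for u, v in edge_set} == edge_set:
                count += 1
        return count

    def complete_bipartite(m, n):
        left = [('L', i) for i in range(m)]
        right = [('R', j) for j in range(n)]
        return left + right, [(u, v) for u in left for v in right]

    # K_{2,3}: expect 2! * 3! = 12;  K_{3,3}: expect 2 * (3!)^2 = 72.
    assert automorphism_count(*complete_bipartite(2, 3)) == factorial(2) * factorial(3)
    assert automorphism_count(*complete_bipartite(3, 3)) == 2 * factorial(3) ** 2
    print("automorphism counts of K_{2,3} and K_{3,3} match the formulas")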

The next graph we take up is the cycle graph C_n. Firstly, note that any automorphism is obtained as follows: a given vertex v may be mapped to any of the n vertices available (including itself); once that is done, a vertex adjacent to v has only two choices left, as it must go to one of the two neighbours (clockwise or counterclockwise) of the image of v; and once that choice is made, no further choices remain. Hence we get 2n automorphisms this way and there can be no others. Also, it is clear that two kinds of automorphisms suffice to generate this group: rotation, and swapping the notions of clockwise and counterclockwise (assuming we draw the cycle graph as equally spaced points on the unit circle; there is no loss of generality in doing that). But these same two generators also generate the dihedral group D_n, which also has 2n elements. It follows that A(C_n)=D_n.
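Again this is easy to confirm computationally for small n (an illustrative sketch):

    from itertools import permutations

    def cycle_automorphisms(n):
        """Brute-force count of adjacency-preserving bijections of the cycle C_n."""
        vertices = list(range(n))
        edges = {frozenset({i, (i + 1) % n}) for i in vertices}
        return sum(
            {frozenset({perm[u], perm[v]}) for u, v in edges} == edges
            for perm in permutations(vertices))

    # |A(C_n)| = |D_n| = 2n, checked here for n = 3, ..., 7.
    for n in range(3, 8):
        assert cycle_automorphisms(n) == 2 * n
    print("cycle automorphism counts equal 2n for n = 3, ..., 7")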

The final graph we take up is the well known Petersen graph. Instead of directly considering what possible functions are there in its automorphism group (although such an approach is possible) we approach the problem through the concept of line graphs.

Definition: A line graph L(G) of a graph G is the graph whose vertices are in one to one correspondence with the edges of G, two vertices of L(G) being adjacent if and only if the corresponding edges of G are adjacent.

Lemma 1: L(K_5) is the complement of the Petersen graph.
Proof: It is clear that if the vertices of K_5 are labelled 1,2,\ldots,5 then its 10 edges are the {5 \choose 2} 2-subsets of \{1,\cdots,5\}. The line graph L(K_5) thus has 10 vertices, labeled by these 10 2-subsets \{i,j\}. Two vertices \{i,j\}, \{k,\ell\} are adjacent in L(K_5) iff the two 2-subsets have a nontrivial overlap. The complement of L(K_5) is the graph with the same 10 vertices, and with two vertices being adjacent iff the corresponding two 2-subsets are disjoint. But this is the very definition of the Petersen graph.\Box
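The identification in Lemma 1 can also be verified mechanically; the sketch below (illustrative only) builds L(K_5) on the 2-subsets of \{1,\cdots,5\}, takes its complement, and confirms that the adjacency rule in the complement is exactly “the two 2-subsets are disjoint”, which is the description of the Petersen graph used above:

    from itertools import combinations

    # Vertices of L(K_5): the 2-subsets of {1, ..., 5}.
    V = [frozenset(e) for e in combinations(range(1, 6), 2)]

    # Adjacency in L(K_5): the corresponding edges of K_5 share an endpoint.
    line_edges = {frozenset({a, b}) for a, b in combinations(V, 2) if a & b}

    # Complement of L(K_5).
    complement_edges = {frozenset({a, b}) for a, b in combinations(V, 2)
                        if frozenset({a, b}) not in line_edges}

    # Petersen graph: 2-subsets adjacent iff disjoint.
    petersen_edges = {frozenset({a, b}) for a, b in combinations(V, 2) if not (a & b)}

    assert complement_edges == petersen_edges
    assert len(V) == 10 and len(petersen_edges) == 15
    print("the complement of L(K_5) is the Petersen graph")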

Lemma 2: A(G) is equal to A(\bar{G}).
Proof: If \sigma\in A(G) then for any two vertices x,y we have xy\in E(G)\Leftrightarrow \sigma(x)\sigma(y)\in E(G), i.e. xy\not\in E(G)\Leftrightarrow \sigma(x)\sigma(y)\not\in E(G), i.e. xy\in E(\bar{G})\Leftrightarrow \sigma(x)\sigma(y)\in E(\bar{G}) so that \sigma\in A(\bar{G}). The reverse implication follows by replacing G by \bar{G}. \Box

Theorem 3: The automorphism group of the Petersen graph is S_5.
Proof: In view of Lemmas 1 and 2 it suffices to determine A(L(K_5)), since by those lemmas the automorphism group of the Petersen graph is the same. We let K_5 have the vertex set \{1,\cdots,5\} in the sequel.

Take any automorphism f of K_5. If we have two edges ab,cd\in E(K_5) with f(a)f(b)=f(c)f(d), then one of two cases arises: either f(a)=f(c) or not. If f(a)=f(c) then obviously f(b)=f(d), and so by injectivity of f we have ab=cd. If f(a)\not=f(c) then it must be that f(a)=f(d), which means f(b)=f(c), and again by injectivity ab=cd. In other words, the function induced by f on E(K_5) in the natural way is injective. It is also surjective, since any xy\in E(K_5) is the image of the edge f^{-1}(x)f^{-1}(y). Finally, this function is an automorphism of L(K_5), since \{xy,xz\}\in E(L(K_5)) holds if and only if \{f(x)f(y),f(x)f(z)\}\in E(L(K_5)), there being a common vertex in both cases. As the induced function is obtained from f in a definite way, we have shown that every automorphism of K_5 induces a unique automorphism of L(K_5). Moreover, it is easy to see that if f_1,f_2 are two automorphisms of K_5 then the automorphism induced by f_1\circ f_2 is the composition of the automorphisms induced by f_1 and f_2.

We now show that, given an automorphism of L(K_5), we can obtain an automorphism of K_5 which induces it in the natural way. Let \pi\in A(L(K_5)). It is easy to see that the 4-cliques of L(K_5) originate precisely from the stars K_{1,4} of K_5. So L(K_5) has exactly five 4-cliques, say C_1,\cdots ,C_5, where C_i contains the 4 vertices corresponding to the 4 edges of K_5 that are incident to the vertex i. Since \pi is an automorphism it sends 4-cliques to 4-cliques. Also, \pi must send two different 4-cliques C_i,C_j with i\ne j to different 4-cliques, because if it sent them to the same 4-clique then a collection of at least 5 vertices would be mapped to a collection of 4 vertices, contradicting the injectivity of \pi. So \pi induces a permutation of the C_i's.
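The claim that L(K_5) has exactly five 4-cliques, one star for each vertex of K_5, can also be checked directly (a small sketch, not part of the proof):

    from itertools import combinations

    V = [frozenset(e) for e in combinations(range(1, 6), 2)]  # vertices of L(K_5)

    def adjacent(a, b):
        """Adjacency in L(K_5): the corresponding edges of K_5 share an endpoint."""
        return bool(a & b)

    # All 4-cliques of L(K_5): 4-sets of vertices that are pairwise adjacent.
    four_cliques = [set(c) for c in combinations(V, 4)
                    if all(adjacent(a, b) for a, b in combinations(c, 2))]

    # The stars: for each vertex i of K_5, the 4 edges incident to i.
    stars = [{e for e in V if i in e} for i in range(1, 6)]

    assert len(four_cliques) == 5
    assert all(star in four_cliques for star in stars)
    print("the 4-cliques of L(K_5) are exactly the five stars")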

Now suppose \pi and \pi' are two different automorphisms in A(L(K_5)). Then they differ on at least one vertex of L(K_5), say on the vertex ij\in E(K_5). Now, given any vertex xy of L(K_5), consider the intersection of the 4-cliques C_x and C_y. If pq is some vertex in C_x\cap C_y then pq, as an edge of K_5, is part of the stars with centers x and y, i.e. pq=xy. Hence the intersection contains only the vertex xy, and every vertex of L(K_5) arises in this way. So if \pi(ij)\ne \pi'(ij), then either \pi (C_i)\ne \pi'(C_i) or \pi(C_j)\ne \pi'(C_j), for otherwise \pi (C_i\cap C_j)=\pi'(C_i\cap C_j).

Hence every automorphism of L(K_5) induces a unique permutation of the C_i's, and distinct automorphisms induce distinct permutations, so the automorphisms and the permutations can be put in one-one correspondence. Consider the automorphism f of K_5 defined on vertices by f(i)=j if C_i\to C_j in the permutation corresponding to \pi (recall that any permutation of the vertices of K_5 is an automorphism of K_5). Now any vertex of L(K_5) is of the form \{x,y\}\in E(K_5), and it is also the intersection of the 4-cliques C_x and C_y, so \pi(\{x,y\})=\pi(C_x\cap C_y)=\pi(C_x)\cap\pi (C_y)=C_{f(x)}\cap C_{f(y)}=f(x)f(y). This shows that f induces \pi.

Hence we have shown that A(K_5)\cong A(L(K_5)). Since A(K_5)=S_5, the Petersen graph has automorphism group S_5. \Box
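As a final sanity check (again only an illustration, and only one direction of the theorem), the sketch below lets each permutation of \{1,\cdots,5\} act on the 2-subsets, confirms that the induced map preserves disjointness and hence is an automorphism of the Petersen graph, and counts the distinct automorphisms so obtained; this exhibits 120 automorphisms, which by Theorem 3 is all of them.

    from itertools import combinations, permutations

    # Petersen graph: vertices are the 2-subsets of {1,...,5}, adjacency is disjointness.
    V = [frozenset(e) for e in combinations(range(1, 6), 2)]
    E = {frozenset({a, b}) for a, b in combinations(V, 2) if not (a & b)}

    induced = set()
    for sigma in permutations(range(1, 6)):
        relabel = dict(zip(range(1, 6), sigma))
        mapping = {v: frozenset(relabel[x] for x in v) for v in V}
        # The induced map is an automorphism: it sends the edge set onto itself.
        assert {frozenset({mapping[a], mapping[b]}) for a, b in E} == E
        induced.add(tuple(mapping[v] for v in V))

    # Distinct permutations induce distinct automorphisms, so we get 120 of them.
    assert len(induced) == 120
    print("S_5 acts faithfully by automorphisms on the Petersen graph")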


Filed under Combinatorics, Graph theory, Group Theory