Category Archives: Miscellaneous

Automorphic Numbers

In this post we discuss automorphic numbers. Note that all numbers involved in this post are nonnegative integers.

Definition: Let x have n digits in its base 10 representation. Then x is said to be automorphic if x^2\equiv x\pmod {10^n}.

In base 10 this means that x^2 ends in the digits of x: the last n digits of x^2 are precisely x. For example, 76 is automorphic since 76^2=5776 ends in 76. Note also that x^2\equiv x\pmod {10^n} gives x^m\equiv x\pmod {10^n} for every m\ge 1 (multiply both sides by x repeatedly), so every power x^m also ends in the digits of x.

A list of some automorphic numbers can be found here. Our aim here is to characterize all such numbers in base 10.

Theorem: The automorphic numbers in base 10 are given by \{0,1\}\cup\{5^{2^m}\pmod {10^m}:m\ge 1\}\cup\{16^{5^m}\pmod {10^m}:m\ge 1\}.

Proof: Suppose x is an automorphic number of n digits. Then,

x(x-1)\equiv 0\pmod {10^n}

which, since \gcd(x,x-1)=1 forces each of the prime powers 2^n and 5^n to divide one of the factors x and x-1, is equivalent to

\Big(x\equiv 0\text{ or }1\pmod{2^n}\Big) \text{ and } \Big(x\equiv 0\text{ or }1\pmod{5^n}\Big)

which is equivalent to the occurrence of exactly one of the following:

Case 1: x\equiv 0\pmod {2^n} \text{ and } x\equiv 0\pmod {5^n}

Case 2: x\equiv 1\pmod {2^n} \text{ and } x\equiv 1\pmod {5^n}

Case 3: x\equiv 1\pmod {2^n} \text{ and } x\equiv 0\pmod {5^n}

Case 4: x\equiv 0\pmod {2^n} \text{ and } x\equiv 1\pmod {5^n}

Now we consider all these cases one by one. Case 1 is equivalent to x\equiv 0\pmod {10^n}. Since x has n digits, x<10^n and so x=0. Similarly x=1 in Case 2.

Now suppose Case 3 holds. By the Chinese Remainder Theorem there exists a unique solution to these two congruences modulo 10^n. We show that 5^{2^n} is a solution to them; by uniqueness it will then follow that Case 3 is equivalent to x\equiv 5^{2^n}\pmod {10^n}. Since x has n digits and 5^{2^n}\pmod {10^n} has at most n digits, we then have x equal to 5^{2^n}\pmod {10^n}. Hence x\in\{5^{2^m}\pmod {10^m}:m\ge 1\}.

We first show that 5^{2^n}\equiv 1\pmod {2^n}. Proceed by induction on n. For n=1 note that 5^{2}\equiv 1\pmod 2.

Now assume the result for k. Then 5^{2^k}=q2^k+1 for some integer q, and squaring yields 5^{2^{k+1}}=q^2 2^{2k}+q2^{k+1}+1=(q^2 2^{k-1}+q)2^{k+1}+1, so 5^{2^{k+1}}\equiv 1\pmod {2^{k+1}}, which completes the induction step. Next observe that since n<2^n, the power 5^n divides 5^{2^n}, so 5^{2^n}\equiv 0\pmod {5^n} follows immediately, thereby completing Case 3.

Note that our argument also implies that any number of the form 5^{2^m}\pmod{10^m} is automorphic. Indeed, if a=5^{2^m}\pmod{10^m} has t\le m digits then we have shown that a^2\equiv a\pmod {10^m}. Since 10^t divides 10^m, it follows that a^2\equiv a\pmod {10^t}, i.e. a is automorphic.

To finish the proof we now consider Case 4. We show that 16^{5^n} is a solution to the congruences of Case 4. Clearly 16^{5^n}=2^{4\cdot 5^n}\equiv 0\pmod {2^n}, since n<4\cdot 5^n. To show 16^{5^n}\equiv 1\pmod {5^n} we proceed by induction as before. The base case may be worked out explicitly or deduced from Fermat's little theorem, m^p\equiv m\pmod p for any prime p and integer m, which gives 16^5\equiv 16\equiv 1\pmod 5. Next assume the result for k, so that 16^{5^k}=q5^k+1 for some integer q. Raising both sides to the fifth power and expanding by the binomial theorem, every term except 1 is divisible by 5^{k+1}, which yields the result for k+1. The rest of the argument is similar to Case 3.\Box
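
As a quick sanity check (not part of the proof), the characterization can be verified by machine for numbers with few digits. The following Python sketch, written purely for illustration, compares a brute-force search with the two formulas of the theorem:

    def is_automorphic(x):
        # x is automorphic if x^2 ends in the decimal digits of x
        n = len(str(x))
        return (x * x - x) % 10 ** n == 0

    # Brute-force search over all numbers with at most 5 digits,
    # versus the formulas {0, 1}, 5^(2^m) mod 10^m and 16^(5^m) mod 10^m
    brute = {x for x in range(10 ** 5) if is_automorphic(x)}
    formula = {0, 1}
    for m in range(1, 6):
        formula.add(pow(5, 2 ** m, 10 ** m))
        formula.add(pow(16, 5 ** m, 10 ** m))

    print(sorted(brute))     # [0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625]
    print(brute == formula)  # True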


Filed under Miscellaneous, Number theory, Recreational Mathematics

The Rank Nullity Theorem

One of the important results in linear algebra is the rank-nullity theorem. Here I am going to present a proof of it which is slightly less well known. The reason I like this proof is that it ties together many concepts and results quite nicely, and also because I thought of it independently.

The theorem (as is well known) says that if V,W are vector spaces with n=\dim V<\infty and T:V\to W a linear map then \text{rank} (T)+\text{nullity}(T)=n.

In this proof I will further assume that W is finite dimensional, with dimension m. A more general proof, which does not need this assumption, can be found on Wikipedia.

We start by fixing two bases of V and W and obtain an m\times n matrix A=\begin{pmatrix} r_1\\ r_2\\ \vdots\\ r_m \end{pmatrix} of T relative to these bases (each r_i is a 1\times n row matrix). The theorem then translates to \text{rank}(A)+\text{nullity}(A)=n. We let R=\text{Row Space}(A) and N=\text{Null Space}(A), and claim that R^\perp=N.

Clearly if x\in N then Ax=0, and so \begin{pmatrix} r_1\\ r_2\\ \vdots\\ r_m \end{pmatrix}x=\begin{pmatrix} r_1x\\ r_2x\\ \vdots\\ r_mx \end{pmatrix}=\begin{pmatrix} 0\\ 0\\ \vdots\\ 0 \end{pmatrix}, so that each r_i is orthogonal to x. Hence x\in R^\perp. Conversely, if x\in R^\perp then x^Tr_i^T=0 for each i, so that x^TA^T=0; transposing gives Ax=0, and hence x\in N.

Now it only remains to invoke the result \dim U+\dim U^\perp=\dim V for any subspace U of a finite-dimensional inner product space V, applied to the subspace R of \mathbb{R}^n, to conclude that \dim R+\dim N=\dim \mathbb{R}^n=n. In other words \text{rank}(A)+\text{nullity}(A)=n.\Box
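
To see the statement in action, here is a small numerical sketch using NumPy (an illustration only, not part of the proof). It reads off bases of the row space and the null space from the singular value decomposition and checks that rank plus nullity equals n and that the two spaces are orthogonal:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 6
    A = rng.integers(-3, 4, size=(m, n)).astype(float)

    # Right singular vectors with nonzero singular value span Row(A);
    # the remaining right singular vectors span Null(A).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    row_basis, null_basis = Vt[:rank], Vt[rank:]

    print(rank + null_basis.shape[0] == n)           # rank + nullity = n
    print(np.allclose(A @ null_basis.T, 0))          # Ax = 0 on the null space
    print(np.allclose(row_basis @ null_basis.T, 0))  # Row(A) is orthogonal to Null(A)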


Filed under Algebra, Linear Algebra, Miscellaneous

A geometric proof for the irrationality of √2

Tom Apostol in 2000 gave a geometric proof of the irrationality of \sqrt{2}, which for a bright student can even be considered a "proof without words":

[Figure: Apostol's geometric construction; the figure has been taken from Wikipedia.]

The explanation is as follows: Suppose \sqrt{2} is rational and equals m/n where m,n\in\mathbb{N} with (m,n)=1. Consider an isosceles right triangle \vartriangle ABC with legs of length n and hypotenuse of length m. Draw the blue arcs with center A. Observe that \vartriangle ABC is congruent to \vartriangle ADE by SAS, and the various lengths given in the figure follow from this fact. Hence we obtain an even smaller isosceles right triangle \vartriangle CDF, with hypotenuse of length 2n-m and legs of length m-n. These lengths are positive integers smaller than m and n, and they are in the same ratio, as is clear from the figure; this contradicts the hypothesis that (m,n)=1, since the reduced representation of a rational has the smallest possible numerator and denominator.

The above proof may be aptly summarized in one line: if \sqrt{2}=m/n is in lowest terms then \sqrt{2}=(2n-m)/(m-n) is in lower terms. In this summary, however, the definition of the square root is implicitly used to show that 0<2n-m<m and 0<m-n<n, both of which follow from 1<\sqrt{2}<2.
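
For completeness, the algebra behind the one-line summary can also be verified directly, without the figure. If \sqrt{2}=m/n then m^2=2n^2, and therefore

(2n-m)^2=4n^2-4mn+m^2=2m^2-4mn+m^2=2m^2-4mn+2n^2=2(m-n)^2,

using m^2=2n^2 twice. Hence (2n-m)/(m-n) is again a positive ratio whose square is 2.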


Filed under Miscellaneous

The Gauss triangle trick

This is a part of a series of proofs I plan to write here which seem essential for any student of mathematics. This series is inspired by the stackexchange post here, although I am not sure yet as to how many proofs from the list I will pick up.

Theorem: \sum_{k=1}^nk=\frac{n(n+1)}{2}.

Proof: Let S=1+2+\cdots+n and write 2S as follows:

1+2+\cdots+n\,+\\ n+(n-1)+\cdots+1.

Since addition is commutative and associative, adding each number in the top line to the number directly below it gives 2S=(1+n)+(2+(n-1))+\cdots+(n+1). There are n parentheses, each summing to n+1, so 2S=n(n+1). The result follows.\Box
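
Though hardly necessary, the formula is also easy to check by machine; here is a short Python sanity check:

    # Compare the closed form n(n+1)/2 with a direct sum
    for n in range(1, 101):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
    print("verified for n = 1, ..., 100")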

So why is this called the Gauss triangle trick? Well, the story goes that when Gauss was in preparatory school, his teacher told the children to add up all the numbers from 1 to 100. The teacher probably thought that this would give him some time to rest, and was surprised when a few minutes later Gauss came up with the answer using this trick.

The numbers obtained as the various sums, e.g. 1, 1+2=3, 1+2+3=6, etc., are called triangular numbers. A triangular number is so called because it counts the number of balls that can be arranged to form an equilateral triangle.


Filed under Miscellaneous

The axiom of choice

This post is about the axiom of choice, arguably the most famous and most controversial of the axioms underpinning the set theoretic foundations of mathematics today. The axiom of choice is about making choices: choosing one element from each set in an arbitrary collection of nonempty sets. The axiom basically says that this can always be done, although it provides no recipe for doing so.

Before formally presenting the axiom we state some definitions. The treatment is essentially taken from here.

Definition: A function I from a set \Lambda onto a set X is said to index the set X by \Lambda. The set \Lambda is called the index and X is the indexed set. If I(\lambda)=x, we write x_\lambda for I(\lambda).

Definition: Let X be a nonempty set indexed by a set \Lambda. The cartesian product of X is defined to be the set \Pi _{\lambda\in\Lambda}x_\lambda of all functions f with domain \Lambda and codomain \cup X=\cup_{x\in X}x, satisfying the condition f(\lambda)\in x_\lambda.

Let us consider an example before we go further. We let X=\{\{a,b\},\{c,d\}\} be indexed by \Lambda=\{1,2\} where 1\to \{a,b\}, 2\to \{c,d\}. Now there are four functions possible from \Lambda to \{a,b,c,d\} which satisfy the definition:

The function f_1 for which f_1(1)=a, f_1(2)=c.
The function f_2 for which f_2(1)=b, f_2(2)=c.
The function f_3 for which f_3(1)=a, f_3(2)=d.
The function f_4 for which f_4(1)=b, f_4(2)=d.

The collection \{f_1,f_2,f_3,f_4\} is the cartesian product. Now intuitively the behavior of the function f_1 can be captured by simply writing (a,c), since the fact that the first coordinate is a and the second is c encodes the function 1\to a,2\to c. Similarly the behaviors of the functions f_2,f_3,f_4 are captured by (b,c),(a,d),(b,d) respectively. Note that a particular ordered pair always corresponds to a unique function f_i, so the cartesian product may simply be represented as \{(a,c),(b,c),(a,d),(b,d)\}.
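
The example above can also be phrased computationally. The following Python sketch (purely illustrative; the representation of the index set and of the functions as dictionaries is an ad hoc choice) enumerates the four choice functions:

    from itertools import product

    # X = {{a, b}, {c, d}} indexed by the set {1, 2}
    indexed = {1: ("a", "b"), 2: ("c", "d")}

    # A member of the cartesian product is a function f with f(lam) in x_lam;
    # here each such function is represented as a dictionary.
    choice_functions = [
        dict(zip(indexed.keys(), values))
        for values in product(*indexed.values())
    ]
    print(choice_functions)
    # [{1: 'a', 2: 'c'}, {1: 'a', 2: 'd'}, {1: 'b', 2: 'c'}, {1: 'b', 2: 'd'}]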

In fact if \Lambda has at most countably many elements, we can always regard it as a set of the form \{1,2,3,\cdots\} (which may or may not terminate) and index accordingly. The cartesian product obtained can then be thought of equivalently as a collection of n-tuples or of infinite sequences, where the ith coordinate is the image of i under the indexing. For example if X=\{\{0,1\}\} and \Lambda=\mathbb{N} then the requisite functions are exactly the zero-one sequences, and their collection forms the cartesian product.

Definition: Any function f which satisfies the conditions required to make it a member of a cartesian product is called a choice function.

We now give three formulations of the axiom of choice:

Axiom of choice:

1. For every nonempty set whose elements are nonempty sets and are indexed by a set I there exists a choice function.

2. If \{a_i\} is a family of nonempty sets, indexed by a nonempty set I, then there exists a family \{x_i\} with i\in I such that x_i\in a_i for each i\in I. (This corresponds roughly to our informal comment about choices in the first paragraph.)

3. The cartesian product of a nonempty collection of nonempty sets is nonempty.

Theorem: The three formulations are equivalent.

Proof: 1\Rightarrow 2: Let \{a_i\}_{i\in I} be a family of sets, indexed by the nonempty set I. Let S=\{a_i\mid i\in I\}. (Note that the elements of S are themselves sets.) We index S by itself, and consider the cartesian product of S. By the assumption and by I\ne \emptyset, the set S is a nonempty set of nonempty sets. Hence owing to (1) there exists a choice function, i.e. a function f\colon S\to \bigcup S such that f(x)\in x for all x \in S. For i\in I let x_i=f(a_i). Then \{x_i\}_{i\in I} is a family with the required properties.

2\Rightarrow 1: Let S=\{a_i:i\in I\} be a nonempty set whose elements are themselves nonempty where S is indexed by I. By (2) we have a family x_i with x_i\in a_i. We define our choice function to be f:I\to\cup S where f(i)=x_i.

1\Leftrightarrow 3: This is obvious by the definitions.\Box

Due to this result any of these three statements may be referred to as the axiom of choice. In fact, there are many other equivalent statements. The term ZFC usually denotes the Zermelo-Fraenkel axioms together with the axiom of choice, and these axioms form the basis of most of the mathematics done today.

Why is this axiom so relevant? Firstly, if one refuses to accept this axiom, one loses a lot of interesting results. Secondly, many times assuming the axiom of choice makes things much easier. Many results, such as the Cantor-Bernstein-Schroeder theorem, were originally proved using the axiom of choice, although a "choice-free" proof was discovered later. Similarly the axiom of choice makes undergraduate analysis easier by enabling one to say that f is continuous at x=a if f(a_n) tends to f(a) for each sequence (a_n) that tends to a (this sequential criterion relies on a countable form of choice).

Are there any negative issues with the axiom of choice? Well, firstly it is not constructive and doesn't actually tell us the choices to make. It just guarantees that some choice exists. In one formulation it tells us that the real numbers can be well ordered, but it gives us no recipe for doing so. In other words, it doesn't actually construct a well order of the reals for us. Secondly, some counterintuitive results such as the Banach-Tarski paradox are true in ZFC, and many mathematicians find this troubling and "fishy".

Thirdly, and probably most importantly, there is history. After it was introduced (in another form) in 1904 by Zermelo, many mathematicians wondered whether or not the axiom was consistent with ZF. In other words, people asked whether accepting it would lead to contradictions. Many decades passed before Gödel showed in 1938 that ZFC is consistent provided ZF is. Another doubt was whether the axiom of choice was independent of ZF. It was only in 1963 that Cohen proved that the axiom of choice cannot be derived from ZF, and together with Gödel's result this implied that the axiom of choice is independent of ZF. During the long period from 1904 to 1963, many bizarre results such as the Banach-Tarski paradox, the existence of non-Lebesgue-measurable sets, etc. were found. All this sowed a suspicion in people's minds as to whether this axiom was "false" at some fundamental level. As a result mathematicians began to pay close attention to results in which the axiom was used, and to try and devise proofs which avoided the axiom as much as possible. This habit lessened somewhat after the independence results, but it has not been dropped by everyone even today.

The fact of the matter is that while rejecting the axiom of choice leads to many weird results, so does accepting it. See the chapters entitled "Disasters without Choice" and "Disasters with Choice" in this book for details. In the long run, however, the advantages of accepting the axiom outweigh the advantages of rejecting it, primarily because the sheer quantity of beautiful and aesthetic results flowing from the axiom of choice provides substance to mathematics as a whole.


Filed under Miscellaneous, Set theory

Arrows, Ramsey numbers and the Party Problem

Here come the arrows! The notation \alpha\rightarrow (\beta)^2_r means that every r-coloring of the edges of the complete graph on \alpha vertices contains a monochromatic complete subgraph on \beta vertices. Using the notation \omega for the cardinality of the natural numbers, the abridged Ramsey theorems state that \omega\rightarrow (\omega)^2_r and that \forall n\,\exists m,\ m\rightarrow (n)^2_r.

Now let us take a particular case: n=3, r=2. Our objective here is to prove that 6\rightarrow (3)^2_2. In other words, if the edges of K_6 are 2-colored then there exists a monochromatic K_3. Fix a vertex v in K_6. Of the 5 edges emanating from v, at least three must share a color: call it c, and consider the three vertices at the other ends of these edges. If none of the edges of the triangle formed by those three vertices is colored c, we are done, for that triangle is a monochromatic K_3 in the other color. If even one edge, say xy, of that triangle is colored c, then we have a monochromatic K_3 colored c with vertices x, y and v.

The fact that 6\rightarrow (3)^2_2 is also referred to as the solution of the party problem: prove that in any gathering of six people there are three mutual friends or three mutual strangers. By representing the people as vertices and their relationships via the colors on the edges connecting them, we see that a monochromatic triangle is guaranteed and therefore the result holds. It is also clear that the gathering may contain more than six people and a monochromatic triangle is still guaranteed (simply restrict attention to any six of them). In other words: m\rightarrow (3)^2_2 for all m\ge 6.

Is six the least such number? Indeed it is as the following graph demonstrates:

We note that there is no monochromatic triangle in the above 2-colored K_5: one color forms the outer pentagon and the other forms the inner pentagram, and neither contains a triangle.
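
Both facts are small enough to confirm by brute force. The following Python sketch (an illustration, not needed for the argument) checks every 2-coloring of the 15 edges of K_6 for a monochromatic triangle, and then checks the pentagon/pentagram coloring of K_5:

    from itertools import combinations, product

    def has_mono_triangle(vertices, color):
        # color maps each edge, stored as a frozenset {a, b}, to 0 or 1
        return any(
            color[frozenset((a, b))] == color[frozenset((b, c))] == color[frozenset((a, c))]
            for a, b, c in combinations(vertices, 3)
        )

    # 6 -> (3)^2_2: every 2-coloring of K_6 contains a monochromatic triangle
    edges6 = [frozenset(e) for e in combinations(range(6), 2)]
    print(all(
        has_mono_triangle(range(6), dict(zip(edges6, coloring)))
        for coloring in product((0, 1), repeat=len(edges6))
    ))  # True

    # The 2-coloring of K_5 with no monochromatic triangle:
    # pentagon edges in one color, pentagram edges in the other
    color5 = {
        frozenset((i, j)): 0 if (j - i) % 5 in (1, 4) else 1
        for i, j in combinations(range(5), 2)
    }
    print(has_mono_triangle(range(5), color5))  # False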

For a given n the least m such that m\rightarrow (n)^2_2 is called the diagonal Ramsey number R(n,n). While such numbers are guaranteed to exist by Ramsey's theorem, actually finding them is hard indeed. It is trivial to see that R(1,1)=1 and R(2,2)=2. Our argument above shows that R(3,3)=6. It has also been proven that R(4,4)=18. However, all efforts to find the exact value of any larger diagonal Ramsey number have failed. It is one of those problems which are easily stated but extremely difficult to attack.

Regarding R(5,5), Joel Spencer comments in his book, Ten Lectures on the Probabilistic Method:

Erdős asks us to imagine an alien force, vastly more powerful than us, landing on Earth and demanding the value of R(5,5) or they will destroy our planet. In that case, he claims, we should marshal all our computers and all our mathematicians and attempt to find the value. But suppose, instead, that they ask for R(6,6). In that case, he believes, we should attempt to destroy the aliens.

In closing it may be remarked that the Ramsey number R(m,n) is defined to be the least positive integer such that any coloring of the edges of K_{R(m,n)} in two colors, say red and blue, contains either a red K_m or a blue K_n. This definition also generalizes naturally to more than two colors. Although the existence of all Ramsey numbers is guaranteed by Ramsey's theorem, only a few have been determined exactly so far.


Filed under Combinatorics, Miscellaneous, Ramsey theory

A ball game and Konig’s lemma

In this post we will examine an application of Konig’s lemma to investigate a game proposed by Raymond Smullyan.

Imagine that we have an unlimited supply of balls, each with a positive integer painted on it. A particular positive integer may well appear on many different balls. We do not know anything else about the distribution of numbers on the balls. Further imagine that there is a box which initially contains a single ball.

To play the game we may do either of two things:

(1) We may remove a ball from the box and replace it with any finite number of balls, arbitrarily chosen, but constrained to have numbers painted on them which are less than the number on the ball removed. For example, if the box contains a ball numbered 1000 we may remove it and replace it with 6000 balls: 3000 balls on which 657 is painted and 3000 balls on which 999 is painted.

(2) We may remove a ball from the box and not do any replacement.

After playing the game once we can continue playing provided there are balls left in the box.

The question is: Is it possible to play the ball game indefinitely? Can we ensure that the box will never be empty?

Smullyan himself answered this problem in the negative by using Konig’s lemma.

Theorem: The Smullyan game terminates after a finite number of steps.

Proof: For any play of the Smullyan game define a (rooted) tree as follows:
1. The vertices are the painted balls used during the game.
2. The root is the painted ball initially within the box.
3. The edges are given by the following criterion: there is an edge between the vertices x and y iff during some move of the game, the ball y is among the balls which replace the ball x (or vice versa).

By the definition of the game this is a rooted tree in which every vertex has only finitely many children (each ball is replaced by only finitely many balls), so every level of the tree is finite. Can there be an infinite path in the tree? No, there cannot: an infinite path would correspond to an infinite strictly decreasing sequence of positive integers, which is absurd. By the contrapositive form of Konig's lemma we can therefore conclude that the tree is finite. In other words only finitely many balls were used, and so the game terminated after finitely many steps.\Box
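
A tiny simulation illustrates the theorem, although of course it cannot prove it, since termination under every possible strategy is exactly what is being asserted. The Python sketch below plays the game with a made-up random replacement rule (the starting number and the replacement sizes are arbitrary choices for illustration):

    import random

    random.seed(0)
    box = [10]   # the box initially contains a single ball, painted 10
    moves = 0

    while box:
        ball = box.pop(random.randrange(len(box)))  # remove some ball from the box
        if ball > 1:
            # replace it with finitely many balls bearing strictly smaller numbers
            box.extend(random.randint(1, ball - 1) for _ in range(random.randint(0, 3)))
        moves += 1

    print(f"the box emptied after {moves} moves")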

An alternate proof without using Konig’s lemma is available here.


Filed under Game Theory, Graph theory, Miscellaneous