
An elementary statistics problem from Hearthstone

In Hearthstone, a popular online card game, there is a minion called C’Thun which, when played, deals damage equal to its attack value, one point at a time, to randomly chosen enemy targets. This prompted a popular streamer named Trump (not the insufferable presidential candidate) to ask the following question on his channel: when playing a C’Thun with 12 attack (hence, it will deal 12 damage, one point at a time), what is the probability that it kills a 6-health minion, with no other minions on the board?

For those who don’t know how Hearthstone works, one can think of the question as follows: suppose you flip a coin 12 times in a row, recording each outcome. As long as you have fewer than 6 heads, the coin is fair and you have equal probability of getting heads or tails on your next flip; here heads means the shot hits the minion and tails means it hits the enemy hero. However, as soon as you hit 6 heads, the coin becomes unfair and you can only get tails afterwards: the minion is dead, so every remaining shot must go to the hero. The question that Trump (real name Jeffrey Shih) asks is then equivalent to: what is the probability that you get 6 heads?

To calculate this probability, we can ask the reverse question: what is the probability that we never reach 6 heads? On the event that we never reach 6 heads, the coin never changes, so the process is indistinguishable from flipping a fair coin throughout. Hence the desired probability equals the probability of getting at most 5 heads when flipping a fair coin 12 times, which is given by

\displaystyle 2^{-12} \sum_{k=0}^5 \binom{12}{k}.

The well-known binomial theorem tells us that

\displaystyle 2^{-12} \sum_{k=0}^{12} \binom{12}{k} = 1.

Since the binomial coefficients are symmetric, \binom{12}{k} = \binom{12}{12-k}, we have \sum_{k=0}^{5} \binom{12}{k} = \frac{1}{2}\left(2^{12} - \binom{12}{6}\right). As \binom{12}{6} = 924, the probability of getting at most 5 heads is equal to

\displaystyle \frac{1}{2} - 2^{-13} \binom{12}{6} = \frac{793}{2048}.

Therefore, the probability of getting 6 heads in the original game is equal to

\displaystyle 1 - \frac{793}{2048} = \frac{1255}{2048},

or roughly 61.28%. Thus it is more likely than not that a 12-attack C’Thun kills the 6-health minion; the chance that too many of its shots go to the opponent’s face instead is only about 38.72%.
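For the skeptical reader, here is a quick Python sanity check, a minimal sketch of my own: it recomputes the exact probability and also simulates the stop-at-six-heads process directly.

    import random
    from fractions import Fraction
    from math import comb

    # Exact: P(at most 5 heads in 12 fair flips) = 2^(-12) * sum of C(12, k).
    p_fail = Fraction(sum(comb(12, k) for k in range(6)), 2**12)
    print(p_fail)      # 793/2048
    print(1 - p_fail)  # 1255/2048

    def minion_dies(shots=12, health=6):
        """Each shot hits the minion with probability 1/2 while it lives;
        after `health` hits the minion is dead and the outcome is decided."""
        hits = 0
        for _ in range(shots):
            if random.random() < 0.5:
                hits += 1
                if hits == health:
                    return True
        return False

    trials = 10**6
    print(sum(minion_dies() for _ in range(trials)) / trials)  # about 0.6128

With a million trials, the simulated frequency lands within a fraction of a percent of 1255/2048.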


On binary forms II

The following joint paper with my advisor, Professor C.L. Stewart, has been released on the arXiv. In this follow-up post, I would like to describe in some detail how to establish an asymptotic formula for the number of integers in an interval which are representable by a fixed binary form F with integer coefficients, degree d \geq 3, and non-zero discriminant.

There are essentially three ingredients which go into the proof, each established decades apart. The first piece of the puzzle was established by Kurt Mahler in the 1930s. He showed that the number of integer points in the region \{(x,y) \in \mathbb{R}^2 : |F(x,y)| \leq Z\} is closely approximated by the area of the region. Since the region expands homogeneously, its area is exactly Z^{2/d} times the area of the `fundamental region' \{(x,y) \in \mathbb{R}^2 : |F(x,y)| \leq 1\}. Indeed, let A_F denote the area of this fundamental region and let N_F(Z) denote the number of integer pairs (x,y) \in \mathbb{Z}^2 such that |F(x,y)| \leq Z. Then Mahler proved that

\displaystyle N_F(Z) \sim A_F Z^{\frac{2}{d}}.

More precisely, he proved this with a very good error term: when d \geq 3, we have

\displaystyle N_F(Z) = A_F Z^{\frac{2}{d}} + O_F\left(Z^{\frac{1}{d-1}} \right).
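One can watch Mahler’s asymptotic emerge numerically. Here is a minimal Python sketch of my own, using the quartic F(x,y) = x^4 + y^4 purely for convenience: its region |F(x,y)| \leq Z is bounded, so naive enumeration suffices (forms with real linear factors have unbounded regions and would need more care).

    def lattice_count(Z):
        """N_F(Z): number of integer pairs (x, y) with x^4 + y^4 <= Z."""
        B = int(Z ** 0.25) + 1
        return sum(1 for x in range(-B, B + 1) for y in range(-B, B + 1)
                   if x**4 + y**4 <= Z)

    # Area A_F of the fundamental region {x^4 + y^4 <= 1}, by a Riemann sum:
    # it is four copies of the quadrant piece {0 <= x, 0 <= y <= (1 - x^4)^(1/4)}.
    n = 10**5
    A_F = 4 * sum((1 - (k / n) ** 4) ** 0.25 for k in range(n)) / n

    for Z in (10**2, 10**4, 10**6):
        print(Z, lattice_count(Z), A_F * Z ** 0.5)  # N_F(Z) vs A_F * Z^(2/d), d = 4

The two columns agree ever more closely as Z grows, just as the error term above predicts.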

The question then becomes: is there some way to remove the redundancies in Mahler’s theorem? For example, if F has even degree, then F(x,y) = F(-x,-y) for all (x,y) \in \mathbb{R}^2, so the pairs (x,y), (-x,-y) \in \mathbb{Z}^2 represent the same integer. Is this the only way such a redundancy can arise? Unfortunately, the answer is no. For example, consider the binary form F_n = x^n + (x - y)(2x - y) \cdots (nx-y). The points (1,1), \cdots, (1,n) all represent 1, since at (1,k) with 1 \leq k \leq n the factor kx - y vanishes, and this construction works for any positive integer n. Therefore, there does not appear to be a simple way to account for the multiplicities of points representing the same integer in Mahler’s theorem.
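This family is easy to check by machine; here is a one-off Python verification of mine that F_n(1,k) = 1 for k = 1, \cdots, n.

    from math import prod

    def F(n, x, y):
        # F_n(x, y) = x^n + (x - y)(2x - y) ... (nx - y)
        return x**n + prod(j * x - y for j in range(1, n + 1))

    n = 6
    print([F(n, 1, k) for k in range(1, n + 1)])  # [1, 1, 1, 1, 1, 1]

Each evaluation picks up a vanishing factor kx - y at (1, k), leaving exactly x^n = 1.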

While examples like the above exist, perhaps they occur rarely enough to be negligible. For instance, suppose only O\left(Z^{2/d - \delta}\right) of the points counted by N_F(Z) admit `essentially different' (precise definition to come) points representing the same integer, and that even in the worst case a point admits at most, say, O\left(Z^{\frac{\delta}{2}}\right) essentially different partners. Then the total contribution from these bad points to N_F(Z) is only O\left(Z^{\frac{2}{d} - \frac{\delta}{2}}\right), which is negligible.

We shall now make some definitions. We say that an integer h is essentially represented by F if, whenever two integer pairs (x_1, y_1), (x_2, y_2) satisfy F(x_1, y_1) = F(x_2, y_2) = h, there exists an element

\displaystyle T = \begin{pmatrix} t_1 & t_2 \\ t_3 & t_4 \end{pmatrix} \in \text{GL}_2(\mathbb{Q}) such that

\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = T \begin{pmatrix} x_2 \\ y_2 \end{pmatrix}

and such that

F_T(x,y) = F(t_1 x + t_2 y, t_3 x + t_4 y) = F(x,y)

for all (x,y) \in \mathbb{C}^2. Otherwise, we say that h is not essentially represented.
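To illustrate the definition with a standard example (my own, not from the paper): take F(x,y) = x^d + y^d. The pairs (a,b) and (b,a) both represent h = a^d + b^d, but they are related by

\displaystyle T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},

for which F_T(x,y) = F(y,x) = F(x,y). Representations related by such a T therefore never obstruct essential representation; only representations not linked by any automorphism of F can do so.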

Now define R_F(Z) to be the number of integers up to Z which are representable by F, and let R_F^{(1)}(Z) be the number of those which are essentially represented and R_F^{(2)}(Z) the number which are not. If we can show that R_F(Z) \sim R_F^{(1)}(Z), then we are basically done; this amounts to showing that R_F^{(2)}(Z) is small compared to Z^{\frac{2}{d}}.

Christopher Hooley proved this for both the ‘easy cubic case’ and the ‘hard cubic case’. However, it was D.R. Heath-Brown who showed that R_F^{(2)}(Z) is always small compared to R_F^{(1)}(Z). This paved the way to our eventual success on this problem.

It remains to account for the interaction between R_F^{(1)}(Z) and those T \in \text{GL}_2(\mathbb{Q}) which fix F. These elements are called the rational automorphisms of F, and we denote them by \text{Aut} F = \text{Aut}_\mathbb{Q} F. The most novel contribution we made to this topic is to account for the exact interaction between \text{Aut} F and R_F^{(1)}(Z) via the so-called ‘redundancy lemmas’. This will be discussed at a future time.

Some problems for the new year

Part new year's resolution and part birthday present to myself (and to those audience members who are interested), I've decided to write up some problems I've been thinking about but either don't have the time or the techniques/knowledge to tackle at the present time. Hopefully they will keep me, and anyone else who's interested in them, motivated into 2016. In no particular order:

1) Stewart’s Conjecture: I have already discussed this problem in two earlier posts (here and here). The conjecture is due to my advisor, Professor Cameron Stewart, in a paper from 1991. It asserts that there exists a positive number c such that for all binary forms F(x,y) with degree d \geq 3, integer coefficients, and non-zero discriminant, there exists a positive number r_F, depending on F, such that for all integers h with |h| \geq r_F, the equation F(x,y) = h has at most c solutions in integers x, y. In particular, the value of c depends on neither F nor d. A weaker version of the conjecture asserts, for each degree d \geq 3, the existence of a positive number c_d for which the above holds.

I suspect that Chabauty’s method, applied to the estimation of integer points on hyperelliptic curves, is close to being able to solve this problem; see this paper by Balakrishnan, Besser, and Muller. However, there may be other tools that can be used without involving a corresponding curve. That said, since a positive answer to Stewart’s conjecture would have significant impact on the theory of rational points on hyperelliptic curves, the two problems seem intrinsically intertwined.

2) Asymptotic Chen’s Theorem: This is related to a problem I’ve been thinking about lately. Chen’s theorem asserts that every sufficiently large even integer N can be written as the sum of a prime and a number which is the product of at most two primes. This simple statement hides the nature of the proof, which essentially depends on two parts and (as far as I know) has not been improved on a fundamental level since Chen. The first part is the very general Jurkat-Richert theorem, which can handle quite general sequences. Its input is some type of Bombieri-Vinogradov theorem, i.e., some positive level of distribution, and it churns out semi-primes of some order given a particular level of distribution. We will phrase the result in terms of the twin prime conjecture; Goldbach’s conjecture is closely related, and Chen in fact proved the analogous statement for both problems. Bombieri-Vinogradov provides the level 1/2, and with this level the Jurkat-Richert theorem immediately yields that there exist infinitely many primes p such that p+2 is the product of at most three primes. Using this basic sieve mechanism and the Bombieri-Vinogradov theorem alone, it is impossible to breach the ‘three prime’ barrier. A higher level of distribution would do the trick, but so far Bombieri-Vinogradov has not been improved in general (although Yitang Zhang’s seminal work on bounded gaps between primes does provide an improvement in a special case).

Thus, we require the second piece of the proof of Chen’s theorem, its most novel part. Chen was able to show that there aren’t too many primes p such that p+2 has exactly three prime factors: so few, in fact, that the difference between the count of primes p where p+2 has at most three prime factors and the count where p+2 has exactly three prime factors can be detected. However, the sieve estimates for these two quantities (Chen’s proof does not introduce any technology that is not directly related to sieves) produce terms of the same order of magnitude, so Chen’s approach destroys any hope of establishing an asymptotic formula for the number of primes p for which p+2 is the product of at most two primes. It would be a significant achievement to prove such an asymptotic formula, because it would mean a significant improvement to the underlying sieve mechanism, or a successful application of some non-sieve technology to the problem. In either case, it would be quite the thing to behold.
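As a small numerical illustration of the quantities involved (an illustration only, and my own; Chen’s argument is of course not computational), one can tally the number of prime factors of p + 2 over the primes p up to some bound:

    from sympy import primerange, factorint

    # For each prime p <= N, compute Omega(p + 2), the number of prime
    # factors of p + 2 counted with multiplicity.
    N = 10**5
    counts = {}
    for p in primerange(3, N + 1):
        omega = sum(factorint(p + 2).values())
        counts[omega] = counts.get(omega, 0) + 1

    # counts[1] tallies twin primes; counts[1] + counts[2] tallies the primes p
    # for which p + 2 is a product of at most two primes, as in Chen's theorem.
    print(sorted(counts.items()))

The asymptotic formula asked for here would pin down the growth rate of the ‘at most two primes’ tally.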

3) An interpolation between geometrically irreducible forms and decomposable forms: A celebrated theorem of Axel Thue states that for any binary form F(x,y) with integer coefficients, degree d \geq 3, and non-zero discriminant, and for any non-zero integer h, the equation F(x,y) = h has only finitely many solutions in integers x,y. Thue’s theorem is ineffective, meaning that the proof gives no bound on the size of the solutions (though the number of solutions can be bounded). Thue’s theorem has been refined by many authors over the past century, with some of the sharpest results known today due to my advisor Cam Stewart and to Shabnam Akhtari.

If one wishes to generalize Thue’s theorem to higher dimensions, there are two obvious candidates. The more obvious one is to consider general homogeneous polynomials F(x_1, \cdots, x_n) in many variables. However, in this case Thue’s techniques do not generalize in an obvious way. Thue’s original argument reduces the problem to one of diophantine approximation, namely showing that there are only finitely many rational numbers which are `very close' to a given root of F. This exploits the fact that every binary form factors into linear forms, a feature which is absent for general homogeneous polynomials in n \geq 3 variables. Thus, one needs to narrow the scope and instead consider decomposable forms, meaning homogeneous polynomials F(x_1, \cdots, x_n) which factor into linear forms over \mathbb{C}, say. Here significant progress has been made; most notably, Schmidt’s subspace theorem was motivated by this precise question. Schmidt, Evertse, and several others have worked over the years to establish results which are quite close to the case of Thue equations. Significant gaps remain, but that is a separate issue and we omit further discussion.

The question I have is whether there is a way to close the gap between what can be proved for decomposable forms and for general forms. The forms most different from decomposable forms, which are geometrically as degenerate as possible, are the least degenerate ones: the geometrically irreducible forms. These are the forms that cannot be factored at all; their lack of factorization is not some arithmetic or algebraic obstruction hiding a factorization, but a geometric fact. Precisely, geometrically irreducible forms are those forms F(x_1, \cdots, x_n) which have no factors of positive degree even over an algebraically closed field, say \mathbb{C}. For decomposable forms, a necessary condition for finiteness is that the degree d exceed the number of variables n, much like the condition d \geq 3 in the case of Thue’s theorem. However, absent from the case n = 2 is the possibility of forms of degree exceeding one which behave `almost' like linear forms, in a concrete sense: as long as basic local conditions are satisfied, the form represents all integers. This has been shown to be the case for forms whose degree is very small compared to the number of variables; the first such result is due to Birch, and it has been improved steadily since then.

Thus the interpolation I am wondering about is the following: let F(x_1, \cdots, x_n) be a homogeneous polynomial with integer coefficients and degree d \geq n+1, with no repeated factors of positive degree. Suppose that F factors, over \mathbb{C}, into forms of very small degree, say d' \ll \log n. Can we hope to establish finiteness results like those for decomposable forms? This seems like a very interesting question.

If you are interested in any of these problems or if you have an idea as to how to approach any of them, please let me know!

Notes on the Oxford IUT workshop by Brian Conrad

mathbabe

Brian Conrad is a math professor at Stanford and was one of the participants at the Oxford workshop on Mochizuki’s work on the ABC Conjecture. He is an expert in arithmetic geometry, a subfield of number theory which provides geometric formulations of the ABC Conjecture (the viewpoint studied in Mochizuki’s work).

Since he was asked by a variety of people for his thoughts about the workshop, Brian wrote the following summary. He hopes that a non-specialist may also learn something from these notes concerning the present situation. Forthcoming articles in Nature and Quanta on the workshop will be aimed at the general public. This writeup has the following structure:

  1. Background
  2. What has delayed wider understanding of the ideas?
  3. What is Inter-universal Teichmuller Theory (IUTT = IUT)?
  4. What happened at the conference?
  5. Audience frustration
  6. Concluding thoughts
  7. Technical appendix

1.  Background

The ABC Conjecture is one of the outstanding conjectures in number…


Solution to why that nonic is solvable

Previously, we claimed that for any quadruple a,b,c,d of rational integers, not all zero, the nonic polynomial

F(x) = x^9 + a x^8 + b x^7 + c x^6 + d x^5 - (126 - 56 a + 21 b - 6 c + d)x^4

- (84 - 28 a + 7 b - c)x^3 - (36 - 8a + c)x^2 - (9 - a)x - 1

is solvable, meaning that it is possible to determine the roots of F explicitly by radicals. By Galois theory, this is equivalent to the assertion that the Galois group of the splitting field of F is a solvable group.

To do this, we need the following fact, proved by Bhargava and Yang as Theorem 4 in this paper. The statement of their theorem is correct, and the proof is mostly correct, but there is a minor issue: the stabilizer of F under \text{GL}_2(\mathbb{C}), which we will denote by \text{Aut}_\mathbb{C} F, need not be realizable as a subgroup of the Galois group of F, which we will denote by \text{Gal}(F). However, the argument they gave for the commuting action between elements of \text{Aut}_\mathbb{C} F and \text{Gal}(F) is correct.

We now consider a binary form F of the shape given above (that is, the homogenization of the nonic polynomial). Both \text{Aut}_\mathbb{C} F and \text{Gal}(F) act on the roots of F, and can therefore be embedded via this action into S_9, the symmetric group on nine letters. If we restrict to the action of \text{GL}_2(\mathbb{Q}) and write \text{Aut} F = \text{Aut}_\mathbb{Q} F, then it follows from Galois theory that \text{Gal}(F) must be a subgroup of the centralizer in S_9 of every element of \text{Aut} F.

We then check that, miraculously, \text{Aut} F always contains the following element of order 3:

U = \begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix}.

Viewing U as a Möbius transformation acting on the roots (z \mapsto -1/(z+1), say), a fixed point must satisfy z(z+1) = -1, that is, z^2 + z + 1 = 0; so the only complex numbers fixed by this element are the roots of x^2 + x + 1. Therefore, if F is irreducible, then no root of F can be fixed by U, and since U has order 3 it must permute the nine roots in three 3-cycles. Relabelling the roots if necessary, we can assume that U is realized in S_9 as

U = (123)(456)(789).

The centralizer C(U) of U is the stabilizer of U under the action of S_9 on itself by conjugation. The orbit of U is precisely the set of elements of S_9 with the same cycle type, so the orbit has size

\displaystyle \binom{9}{3}(2!) \binom{6}{3}(2!) (2!) \frac{1}{3!} = 2240.

By the orbit-stabilizer theorem, it follows that C(U) contains 9!/2240 = 162 = 2 \times 3^4 elements. By Burnside’s theorem, which asserts that any finite group whose order is divisible by at most two distinct primes is solvable, C(U) is solvable; since \text{Gal}(F) is contained in C(U), and subgroups of solvable groups are solvable, \text{Gal}(F) is solvable as well.
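These group-theoretic facts are easily confirmed by machine. Here is a quick sympy sanity check of my own:

    from sympy.combinatorics import Permutation, SymmetricGroup

    # U = (123)(456)(789), written on {0, ..., 8}.
    U = Permutation([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
    S9 = SymmetricGroup(9)

    C = S9.centralizer(U)
    print(S9.order() // C.order())  # 2240, the size of the conjugacy class of U
    print(C.order())                # 162 = 2 * 3^4
    print(C.is_solvable)            # True, as Burnside's theorem guarantees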


Mathematicians have always been willing to accept new ideas

In a recent publication (see here) of a popular internet comic strip that I like, the author poked fun at the supposed notion that mathematicians are intransigent and stubborn, failing to accept new ideas in a timely fashion (this is not merely an outside opinion; some insiders feel the same way, quite strongly in fact: see Doron Zeilberger’s opinion page, for instance). However, as someone who is about to receive a PhD in mathematics and an amateur historian of mathematics, I would like to voice my polite disagreement with Mr. Weinersmith’s premise.

This is the message I posted on the comic’s Facebook page:

“As someone about to get a PhD in mathematics, I can attest that the basic premise of this comic is wrong. Mathematicians have always been much quicker to accept new advances and paradigm shifts than their contemporaries in other fields. The only times when new results, even paradigm-shifting ones, were slow to be accepted by the mathematical community are those where the result was poorly written or poorly presented (for example, Cantor’s work on cardinality, Brun’s sieve theory, Ramanujan’s work before Hardy, Heegner’s solution of Gauss’s class number one problem, and most recently and still unresolved: Shinichi Mochizuki’s purported proof of the abc conjecture).

Edit: to give a positive example, consider the proofs of Fermat’s Last Theorem and more recently, the Poincare conjecture. These two are considered two of the most difficult mathematical problems in history, and when their solutions were presented, it took only a few years for the mathematical community to verify and accept their correctness. Even more recently, Yitang Zhang’s manuscript containing the proof of the existence of infinitely many primes which are within a bounded distance from each other was accepted in JUST THREE WEEKS by one of mathematics’ top journals, even though Zhang was at the time completely unknown and in particular was not known to have done any work in number theory.”

I would like to elaborate further on my comments. Not only are mathematicians not intransigent, as suggested in the comic; they are likely the group in academia most willing to share their ideas and to accept the ideas of others (this is a broad stroke, of course: there are certainly people who arbitrarily dismiss others’ work, as anyone who has faced a grouchy referee when submitting a paper can attest). The lightning-fast acceptance of the two big advances on the bounded gaps problem should serve as a testament to this. Both of the main players, Yitang Zhang and James Maynard, were at the time more or less completely unknown. Their ‘lowly’ status did not prevent their work from being recognized, almost instantly in fact, by some of the biggest experts in the field (including Andrew Granville and Terence Tao). This seems unlikely in many other areas, especially as one gets further away from pure science.

This is not to say that mathematicians are just more progressive and forward-thinking in general. Social attitudes among mathematicians, while probably better than those of the general population, are certainly not stellar, as a recent paper by Greg Martin points out.

A solvable nonic polynomial

Continuing from our demonstration that a certain sextic polynomial, of a type that is not in general solvable, has an explicit factorization, we go on to describe a class of degree 9 polynomials which are solvable. Let a,b,c,d be rational integers, not all zero, and consider the nonic polynomial

F(x) = x^9 + a x^8 + b x^7 + c x^6 + d x^5 - (126 - 56 a + 21 b - 6 c + d)x^4

- (84 - 28 a + 7 b - c)x^3 - (36 - 8a + c)x^2 - (9 - a)x - 1.

The claim is that all such polynomials are in fact solvable!

I will reveal the argument a little later, but it’ll be interesting to see what kind of arguments readers can come up with.