Hi all,

I have included the solutions to the 2016 analysis exam here. Enjoy!

More than that, it appears that my observations from watching the episode where Liz ‘died’ were basically right on the money. Feels good to be right!

The following joint paper by my advisor Professor C.L. Stewart and myself has been released on the arXiv. In this follow-up post, I would like to describe in some detail how to establish an asymptotic formula for the number of integers in an interval which are representable by a fixed binary form with integer coefficients and non-zero discriminant.

There are essentially three ingredients which go into the proof, each established decades apart. The first essential piece of the puzzle was established by Kurt Mahler in the 1930s. He showed that if we examine the number of integer points in the region |F(x, y)| ≤ Z, then the number of such points is closely approximated by the area of the region. Since the region is homogeneously expanding, the area itself is well-approximated by scaling the `fundamental region’ given by |F(x, y)| ≤ 1. Indeed, let A_F denote the area of this fundamental region and let N_F(Z) denote the number of integer pairs (x, y) such that |F(x, y)| ≤ Z. Then Mahler proved that N_F(Z) ~ A_F Z^(2/d), where d is the degree of F.

More precisely, he proved a very good error term. He showed that when d ≥ 3, we have

N_F(Z) = A_F Z^(2/d) + O(Z^(1/(d-1))).
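Mahler's approximation is easy to check numerically. A minimal sketch, using a positive definite quartic of my own choosing (x⁴ + y⁴, not a form from the post) so that the region |F| ≤ Z is bounded and a brute-force count is exact; the fundamental area is estimated with a midpoint grid:

```python
from itertools import product

def F(x, y):
    # Positive definite quartic with non-zero discriminant (my stand-in
    # example, not a form from the post), so {|F| <= Z} is bounded.
    return x**4 + y**4

def lattice_count(Z):
    """Exact number of integer pairs (x, y) with F(x, y) <= Z."""
    B = int(Z ** 0.25) + 1
    return sum(1 for x, y in product(range(-B, B + 1), repeat=2)
               if F(x, y) <= Z)

def fundamental_area(n=500):
    """Midpoint-rule estimate of the area of {(x, y): F(x, y) <= 1},
    a region contained in the square [-1, 1]^2."""
    h = 2.0 / n
    return sum(h * h
               for i in range(n) for j in range(n)
               if F(-1 + (i + 0.5) * h, -1 + (j + 0.5) * h) <= 1)

Z = 10_000
d = 4
# Compare the exact lattice count with A_F * Z^(2/d).
print(lattice_count(Z), round(fundamental_area() * Z ** (2 / d), 1))
```

Already at Z = 10,000 the two quantities agree to within about five percent, consistent with the error term above.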

The question then becomes: is there some way to remove the redundancies in Mahler’s theorem? For example, if F has even degree, then F(x, y) = F(−x, −y) for all (x, y), so the pairs (x, y) and (−x, −y) represent the same integer. Is it true that this is the only way that this can happen? Unfortunately, the answer is no. For example, consider the binary form F(x, y) = y^d + x(x − y)(x − 2y)⋯(x − (d − 1)y). Then clearly the points (0, 1), (1, 1), …, (d − 1, 1) all represent 1, and this construction works for any positive integer d. Therefore, there does not appear to be a simple way to count the multiplicities of points representing the same integer in Mahler’s theorem.
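A quick computation illustrates a family of forms with many representations of 1. (The post's original example was rendered as an image and is lost; the form F_d below is a standard construction of this type, chosen by me as a stand-in.)

```python
def F(x, y, d):
    """F_d(x, y) = y^d + x(x - y)(x - 2y)...(x - (d-1)y)."""
    prod = x
    for m in range(1, d):
        prod *= x - m * y
    return y**d + prod

# For every degree d, each of the d points (0, 1), (1, 1), ..., (d-1, 1)
# kills one factor of the product, so every one of them represents 1.
d = 5
print([F(k, 1, d) for k in range(d)])
```

Since d is arbitrary, no bound on the number of such coincident representations can hold uniformly over all forms.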

While examples like the above exist, perhaps this happens sufficiently rarely as to be negligible. For instance, suppose that only a negligible proportion of the points counted by N_F(Z) admit `essentially different’ (precise definition to come) points representing the same integer, and that even in the worst case there can be at most boundedly many essentially different pairs. Then the total contribution from these bad points to N_F(Z) is negligible, which is fine.

We shall now make some definitions. We say that an integer t is *essentially represented* by F if, whenever two integer pairs (x_1, y_1) and (x_2, y_2) satisfy F(x_1, y_1) = F(x_2, y_2) = t, there exists an element

A = (a b; c d) in GL_2(Q)

such that

(x_2, y_2) = (a x_1 + b y_1, c x_1 + d y_1),

and such that

F(ax + by, cx + dy) = F(x, y)

for all (x, y). Otherwise, we say that t is not essentially represented.

Now put R_F(Z) to be the number of integers up to Z which are representable by F, and let R_1(Z) be the number of essentially represented integers and R_2(Z) be the number of non-essentially represented integers. If we can show that R_F(Z) = R_1(Z) + o(Z^(2/d)), then we are basically done. This amounts to showing that R_2(Z) is small compared to Z^(2/d).

Christopher Hooley proved this for both the ‘easy cubic case’ and the ‘hard cubic case’. However, it was D.R. Heath-Brown who showed that R_2(Z) is always small compared to Z^(2/d). This paved the way to our eventual success on this problem.

It remains to account for the interaction between those elements of GL_2(Q) which fix F and the representations themselves. These elements are called the *rational automorphisms* of F, and we denote them by Aut F. The most novel contribution we made to this topic is that we accounted for the exact interaction between Aut F and N_F(Z) with the so-called ‘redundancy lemmas’. This will be discussed at a future time.

I just watched the latest episode of The Blacklist. MAJOR SPOILERS ARE AHEAD, THIS IS YOUR WARNING.

So in the latest episode, “Mr. Solomon – Conclusion”, there is a cataclysmic plot twist in which Elizabeth Keen, whose mysterious familial background is the driving force of the show, seemingly perishes after a medical complication. The shock value of this development cannot be overstated… however, I claim that she is (obviously) not dead. Here are the clues:

1) When Keen was assessed by the first doctor at the hospital, her diagnosis was fine. There was no indication at that time that she had suffered any serious trauma.

2) When Mr. Kaplan went to pick her up from the hospital, Keen was talking angrily about how she was in this situation because of Reddington, and Mr. Kaplan looked sympathetic. Most likely they planned what to do right after.

3) Notice that Keen started developing medical complications only after being treated at the nightclub. During this time Mr. Kaplan, Keen, and Nick had time to converse in private; Reddington (and therefore the audience) is not aware of what happened. After this conversation, all of a sudden Keen’s medical condition started deteriorating, and Nick asked for additional medical equipment to be brought. Who took care of this? Mr. Kaplan.

4) Mr. Kaplan very squarely blamed Red for Keen’s predicament. It would not be at all surprising if she secretly planned to fake Keen’s death just so that Reddington could not be in Keen’s life any longer.

5) After Keen was pronounced dead, Mr. Kaplan, who earlier in the episode showed that she obviously cares for Keen, displayed no distress. Reddington almost fell over, Samar bawled her eyes out, and even Nick was extremely distraught. So why didn’t Mr. Kaplan react at all? Probably because she knows Liz is not really dead. Moreover, she again murmured something to Nick while Dembe was helping Red into the car. Clearly there is something amiss here.

The obvious reason to ‘kill’ Liz is that Megan Boone is pregnant in real life and will probably have to go give birth for real, and take some mat leave on top of that. This is a pragmatic solution to a very gnarly logistical issue. Do not be surprised if Keen pops up all of a sudden in episode 20 or earlier.

My wife showed me a homework problem given to a friend’s son in China, which neither he nor his mother could solve. I thought it was quite an interesting problem, so I thought I would share it.

I do not have an acceptable diagram (which admittedly would make the problem much easier to digest), so I will be as precise as possible in describing the problem. Suppose that you have a triangle with a right angle at vertex . The length of the side is equal to 14. Construct squares (read counter-clockwise) on side and on side . Draw the line from vertex to the vertex of square , and let be the intersection of the line segment and . Suppose further that the line segment is parallel to . What is the area of the quadrilateral ?

We let denote the side length of , denote the side length of , so that . Put for the length of . Since and are parallel, it follows that . Therefore the desired area is given by (the square brackets denote the area of the polygon with the given vertices), or

This problem is nice because it requires adroit use of many different geometric facts, and once the proper principles are applied, the solution is beautiful and elegant. Hard to imagine an 11-year-old being able to do this regularly, though!

Edit: in the original post there was an error where I forgot to divide by 2 in the penultimate step. Silly mistake!

After months of silence, I am finally able to share the research I have been doing over the last few months. I have dropped hints before in this post and this other one. These are all part of a big project on the representation of integers by binary forms that I have been working on jointly with my advisor. More recently, I have been working on a project with Cindy Tsang on counting binary quartic forms with small Galois groups. These are all connected by an insight into binary forms essentially due to Hooley.

Let F be a binary form with integer coefficients, degree d ≥ 2, and non-zero discriminant. Put R_F(Z) for the number of integers t in the interval [0, Z] for which the equation F(x, y) = t has a solution in integers x, y, and put N_F(Z) for the number of such solutions, counted with multiplicity. When d = 2 and F is positive definite, Gauss proved that N_F(Z) ~ A_F Z. He conjectured, and Landau later proved, that R_F(Z) is of order Z/√(log Z) in this case. Thus most integers cannot be represented by F, and each integer that can be represented has many representations on average.

However, this is very atypical behaviour. Indeed, quadratic forms are complete norm forms of degree 2. For *incomplete* norm forms, i.e., binary forms of degree at least 3, one should expect totally different behaviour, in that N_F(Z) and R_F(Z) are not that different. This was confirmed by Hooley in a significant paper in 1967 [Hoo1]. Indeed, he showed that when F is an irreducible binary cubic form whose discriminant D_F is not a perfect integer square, then R_F(Z) ~ A_F Z^(2/3). We shall refer to this as the `easy cubic case’. In 1986 [Hoo2], he went on to obtain the asymptotic formula for R_F(Z) for binary quartic forms of the shape F(x, y) = a x^4 + b x^2 y^2 + c y^4. In this case, one has an asymptotic formula R_F(Z) ~ C_F Z^(1/2), where the explicit constant C_F takes one value

when c/a is not a perfect 4-th power of a rational number, and a smaller value

otherwise. We refer to this as the `easy quartic case’. Finally, he went on to deal with the case when F is a binary cubic form such that D_F is a square in Z. In this case, he showed that there is a positive integer m_F, which can be determined explicitly in terms of the coefficients of F, such that

R_F(Z) ~ (A_F / m_F) Z^(2/3).

We will refer to this as the `hard cubic case’.
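The contrast with the quadratic case can be seen in a small experiment. Using the positive definite quartic x⁴ + y⁴ as a stand-in example (my choice, not one of Hooley's forms): the count with multiplicity N and the count of distinct represented integers R have the same order of magnitude, and their ratio simply reflects the obvious symmetries (x, y) → (±x, ±y), (y, x).

```python
def quartic_stats(Z):
    """For F(x, y) = x^4 + y^4, return (N, R): N counts pairs with
    0 < F(x, y) <= Z (with multiplicity), R counts distinct values."""
    B = int(Z ** 0.25) + 1
    seen = set()
    N = 0
    for x in range(-B, B + 1):
        for y in range(-B, B + 1):
            v = x**4 + y**4
            if 0 < v <= Z:
                N += 1
                seen.add(v)
    return N, len(seen)

N, R = quartic_stats(1_000_000)
# N / R stays bounded (a bit under 8, the size of the symmetry group),
# so R has the same order of growth as N, unlike the quadratic case.
print(N, R, round(N / R, 2))
```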

There is a general theory which applies to all binary forms of any degree at least three, and which allows one to recover both the extra term in the easy quartic case and the integer factor in the hard cubic case. This will be fully explained in a joint paper by my advisor Professor Cameron Stewart and me. However, in the cubic and quartic cases specifically, one can resolve the problem more finely: not only can one describe the relationship between N_F(Z) and R_F(Z), but one can show that it is governed by a positive rational number given explicitly in terms of the coefficients of F whenever F is a binary cubic or quartic form. My contribution to the cubic case is that we can find this rational constant even when F is not assumed to be irreducible. Indeed, we find that the most interesting behaviour actually occurs when F is completely reducible (but still with non-zero discriminant)! The reducibility of F seems to have no effect in the case of quartic forms.

The key observation is that the behaviour of the constant is determined completely by the *automorphism group* of the binary form in GL_2(Q). This appears to be an extraordinary insight made by Hooley in his investigation of the hard cubic case. Almost all other authors either explicitly or implicitly assumed that it suffices to look only at the smaller group GL_2(Z).

More details will be posted later.

Part New Year’s resolution and part birthday present to myself (and to those audience members who are interested), I’ve decided to write up some problems I’ve been thinking about but either don’t have the time or the techniques/knowledge to tackle at present. Hopefully they will keep me motivated into 2016, as well as anyone else who’s interested in them. In no particular order:

**1) Stewart’s Conjecture:** I have already discussed this problem in two earlier posts (here and here). The conjecture is due to my advisor, Professor Cameron Stewart, in a paper from 1991. The conjecture asserts that there exists a positive number c such that for all binary forms F of degree d ≥ 3, with integer coefficients and non-zero discriminant, there exists a positive number C_F, depending on F, such that for all integers t with |t| ≥ C_F, the equation F(x, y) = t has at most c solutions in integers x, y. In particular, the value of c depends on neither F nor t. A weaker version of this conjecture asserts, for every degree d ≥ 3, the existence of a positive number c_d for which the above holds.
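For intuition, the quantity being bounded can be tabulated by brute force in small cases. The cubic below is my own sample (not from the post), and the search box is finite, so this only exhibits solutions rather than certifying their total number:

```python
def count_solutions(F, t, B):
    """Count integer solutions (x, y) to F(x, y) = t with |x|, |y| <= B.
    Thue's theorem guarantees the total number of solutions is finite."""
    return sum(1 for x in range(-B, B + 1)
                 for y in range(-B, B + 1) if F(x, y) == t)

# Sample cubic form of non-zero discriminant.
F = lambda x, y: x**3 - 2 * y**3

for t in (1, 2, 3, 5):
    print(t, count_solutions(F, t, 100))
```

For F(x, y) = x³ − 2y³ and t = 1, the only solutions are (1, 0) and (−1, −1); Stewart's conjecture asks for a bound on such counts that is uniform in both F and t.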

I suspect that Chabauty’s method, applied to the estimation of integer points on hyperelliptic curves, is close to being able to solve this problem; see this paper by Balakrishnan, Besser, and Müller. However, there may be other tools that can be used without involving a corresponding curve. That said, since a positive answer to Stewart’s conjecture would have a significant impact on the theory of rational points on hyperelliptic curves, it seems that the two problems are intrinsically intertwined.

**2) Asymptotic Chen’s Theorem:** This is related to a problem I’ve been thinking about lately. Chen’s theorem asserts that every sufficiently large even integer can be written as the sum of a prime and a number which is the product of at most two primes. However, this simple statement hides the nature of the proof. The proof essentially depends on two parts, and (as far as I know) has not been improved on a fundamental level since Chen. The first part is the very general Jurkat-Richert theorem, which can handle quite general sequences. Its input is some type of Bombieri-Vinogradov theorem, i.e., some positive level of distribution, and it essentially churns out semi-primes of some order given a particular level of distribution. We will phrase the result slightly differently, in terms of the twin prime conjecture; Goldbach’s conjecture is closely related, and Chen actually proved the analogous statement for both the twin prime problem and Goldbach’s conjecture. The Bombieri-Vinogradov theorem provides the level 1/2, and with this level, the Jurkat-Richert theorem immediately yields that there exist infinitely many primes p such that p + 2 is the product of at most three primes. Using this basic sieve mechanism and the Bombieri-Vinogradov theorem, it is impossible to breach the ‘three prime’ barrier. A higher level of distribution would do the trick, but so far, Bombieri-Vinogradov has not been improved in general (although Yitang Zhang’s seminal work on bounded gaps between primes does provide an improvement in a special case). Thus, we require the second piece of the proof of Chen’s theorem, the most novel part. He was able to show that there aren’t too many primes p such that p + 2 has *exactly* three prime factors — so few that the difference between the number of primes p for which p + 2 has at most three prime factors and the number for which it has exactly three can be detected.
However, the estimation of these two quantities using sieves (Chen’s proof does not introduce any technology that is not directly related to sieves) produces terms of the same order of magnitude, so Chen’s approach destroys any hope of establishing an asymptotic formula for the number of primes p for which p + 2 is the product of at most two primes. It would be a significant achievement to prove such an asymptotic formula, because it would mean that either the underlying sieve mechanism has been significantly improved, or some non-sieve technology has been brought in successfully. In either case, it would be quite the thing to behold.
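The objects in question are easy to tabulate for small ranges. A minimal sketch (my own illustration, not Chen's method): sieve smallest prime factors, then count primes p ≤ N for which p + 2 is a product of at most two primes.

```python
def spf_sieve(N):
    """spf[n] = smallest prime factor of n, for 2 <= n <= N."""
    spf = list(range(N + 1))
    for i in range(2, int(N**0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def num_prime_factors(n, spf):
    """Number of prime factors of n, counted with multiplicity."""
    k = 0
    while n > 1:
        k += 1
        n //= spf[n]
    return k

N = 100_000
spf = spf_sieve(N + 2)
primes = [p for p in range(2, N + 1) if spf[p] == p]
# Primes p with p + 2 a product of at most two primes (this includes
# the twin primes, where p + 2 is itself prime).
chen = [p for p in primes if num_prime_factors(p + 2, spf) <= 2]
print(len(primes), len(chen))
```

The open problem above asks for an asymptotic formula for the size of the second list, not merely a lower bound of the right order.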

**3) An interpolation between geometrically irreducible forms and decomposable forms:** A celebrated theorem of Axel Thue states that for any binary form F with integer coefficients, degree d ≥ 3, and non-zero discriminant, and for any non-zero integer t, the equation F(x, y) = t has only finitely many solutions in integers x, y. Thue’s theorem is ineffective, meaning that one cannot actually bound the size of the solutions, only know that there are finitely many. Thue’s theorem has been refined by many authors over the past century, with some of the sharpest results known today due to my advisor Cam Stewart and to Shabnam Akhtari.

If one wishes to generalize Thue’s theorem to higher dimensions, then there are two obvious candidates. The more obvious one is to consider general homogeneous polynomials in many variables. However, in this case Thue’s techniques do not generalize in an obvious way. Thue’s original argument reduced the problem to one of diophantine approximation, i.e., to showing that there are only finitely many rational numbers which are `very close’ to a given root of F(x, 1). This exploits the fact that all binary forms factor into linear forms, a feature which is absent for general homogeneous polynomials in three or more variables. Thus, one needs to narrow the scope and instead consider *decomposable forms*, meaning homogeneous polynomials which can be factored into linear forms over C, say. To this end, significant progress has been made; most notably, Schmidt’s subspace theorem was motivated by this precise question. Schmidt, Evertse, and several others have worked over the years to establish results which are quite close to the case of Thue equations. Significant gaps remain, but that is a separate issue, and we omit further discussion.

The question I have is whether there is a way to close the gap between what can be proved for decomposable forms and for general forms. The forms which are the most different from decomposable forms (which are essentially as degenerate as possible geometrically) are the ones that are the least degenerate: the geometrically irreducible forms. These are the forms that cannot be factored at all. Their lack of factorization is not because their factorability is hidden by some arithmetic or algebraic obstruction, but because they are geometrically not reducible. Precisely, geometrically irreducible forms are those which do not have factors of positive degree even over an algebraically closed field. For decomposable forms, a necessary condition for finiteness is that the degree d exceed the number of variables n, much like the condition d ≥ 3 in the case of Thue’s theorem. However, absent from the binary case is the possibility that there are forms of degree exceeding one which behave `almost’ like linear forms, in a concrete sense: as long as basic local conditions are satisfied, the form represents all integers. This has been shown to be the case for forms whose degree is very small compared to the number of variables; the first such result is due to Birch, and it has been improved steadily since then. Thus the interpolation I am wondering about is the following: let F be a homogeneous polynomial with integer coefficients and degree d, with no repeated factors of positive degree. Suppose that F factors, over an algebraically closed field, into forms of very small degree relative to the number of variables. Can we hope to establish finiteness results like we can for decomposable forms? This seems like a very interesting question.

If you are interested in any of these problems or if you have an idea as to how to approach any of them, please let me know!