Why Elizabeth Keen is (obviously) not dead

I just watched the latest episode of The Blacklist. MAJOR SPOILERS ARE AHEAD; THIS IS YOUR WARNING.

So in the latest episode, “Mr. Solomon – Conclusion”, there is a cataclysmic plot twist in which Elizabeth Keen, whose mysterious familial background is the driving force of the show, seemingly perishes after a medical complication. The shock value of this development cannot be overstated… however, I claim that she is (obviously) not dead. Here are the clues:

1) When Keen was assessed by the first doctor at the hospital, her prognosis was fine. There were no indications at that time that she had suffered any serious trauma.

2) When Mr. Kaplan went to pick her up from the hospital, Keen was talking angrily about how she was in that situation because of Reddington. Mr. Kaplan looked sympathetic. Most likely the two of them planned what to do right after that.

3) Notice that Keen started developing medical complications only after being treated at the nightclub. During this time Mr. Kaplan, Keen, and Nick had a chance to converse in private; Reddington (and therefore the audience) was not aware of what was said. After this conversation, Keen’s medical condition suddenly started deteriorating, and Nick asked for additional medical equipment to be brought in. Who took care of this? Mr. Kaplan.

4) Mr. Kaplan very squarely blamed Red for Keen’s predicament. It would not be at all surprising if she secretly planned to fake Keen’s death just so that Reddington could no longer be a part of Keen’s life.

5) After Keen was pronounced dead, Mr. Kaplan, who earlier in the episode showed that she obviously cares for Keen, displayed no distress. Reddington almost fell over, Samar bawled her eyes out, and even Nick was extremely distraught. So why didn’t Mr. Kaplan react at all? Probably because she knows Liz is not really dead. Moreover, she again murmured something to Nick while Dembe was helping Red into the car. Clearly something is amiss here.

The obvious reason to ‘kill’ Liz is that Megan Boone is pregnant in real life and will have to take time off to give birth, plus some maternity leave on top of that. This is a pragmatic solution to a very gnarly logistical issue. Do not be surprised if Keen pops up all of a sudden in episode 20 or even earlier.

 

A grade school question from China

My wife showed me a homework problem given to a friend’s son in China, which neither he nor his mother could solve. I thought it was quite an interesting problem, so I decided to share it.

I do not have a decent diagram (which admittedly would make the problem much easier to digest), so I will be as precise as possible in describing the problem. Suppose that you have a triangle ABC with a right angle at vertex B, and suppose the length of the hypotenuse AC is equal to 14. Construct squares AGFB (read counter-clockwise) on side AB and BEDC (likewise read counter-clockwise) on side BC. Draw the line from vertex A to the vertex D of square BEDC, and let H be the intersection of the line segment AD and the side BC. Suppose further that the line segment GH is parallel to AC. What is the area of the quadrilateral BHDE?

We let x denote the length of AB and y the length of BC, so that x^2 + y^2 = 14^2 = 196. Put z for the length of BH. Since AC and GH are parallel, it follows that z = y - x: the segment GH descends a height of x over a horizontal run of x + z (as G sits at height x above the line BC, directly above F), while AC descends the same height x over a run of y, so parallelism forces x + z = y. Therefore the desired area is given by [AED] - [ABH] (the square brackets denote the area of the polygon with the given vertices), or

\frac{y(x+y)}{2} - \frac{x(y-x)}{2} = \frac{x^2 + y^2}{2} = 98.
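As a quick numerical sanity check (my own script; the problem itself needs no computer), note that the parallelism condition z = y - x, combined with z = xy/(x+y) (from the similar triangles ABH and AED, since BH is parallel to ED), forces y/x to be the golden ratio. The shoelace formula then recovers the area:

import math

# y^2 = x^2 + x*y (equivalent to z = y - x), so y/x is the golden ratio
phi = (1 + math.sqrt(5)) / 2
x = math.sqrt(196 / (1 + phi ** 2))  # length of AB, using x^2 + y^2 = 196
y = phi * x                          # length of BC

# coordinates: B = (0,0), A = (0,x), C = (y,0), E = (0,-y), D = (y,-y)
h = x * y / (x + y)                  # BH, from similar triangles ABH ~ AED
assert abs(h - (y - x)) < 1e-9       # consistent with z = y - x

# shoelace formula for the quadrilateral B, H, D, E
pts = [(0.0, 0.0), (h, 0.0), (y, -y), (0.0, -y)]
area = abs(sum(px * qy - qx * py
               for (px, py), (qx, qy) in zip(pts, pts[1:] + pts[:1]))) / 2
print(area)  # 98.0, up to floating-point error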

This problem is nice because it requires adroit use of many different geometric facts, and once the proper principles are applied, the solution is beautiful and elegant. Hard to imagine an 11-year-old being able to do this regularly, though!

Edit: in the original post there was an error where I forgot to divide by 2 in the penultimate step. Silly mistake!

On binary forms

After months of silence, I am finally able to share the research I’ve been doing over the last few months. I’ve dropped hints before in this post and this other one. These are all part of a big project on the representation of integers by binary forms, joint with my advisor. More recently, I have also been working on a project with Cindy Tsang on counting binary quartic forms with small Galois groups. These threads are all connected by an insight into binary forms essentially due to Hooley.

Let F be a binary form of degree d with integer coefficients and non-zero discriminant \Delta(F). Put R_F(Z) for the number of integers n in the interval [-Z,Z] for which the equation F(x,y) = n has a solution in integers x,y, and put N_F(Z) = \# \{(x,y) \in \mathbb{Z}^2 : |F(x,y)| \leq Z\}. When d = 2 and F is positive definite, Gauss proved that N_F(Z) \sim A_1 Z. He conjectured, and Landau later proved, that R_F(Z) \sim A_2 Z (\log Z)^{-1/2} in this case. Thus most integers cannot be represented by F, and each integer that can be represented has many representations on average.
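To see the discrepancy numerically, here is a quick brute-force check of my own with F(x,y) = x^2 + y^2 (the constants A_1, A_2 are not computed here):

# Brute-force comparison (my own sanity check) for F(x,y) = x^2 + y^2.
Z = 10**6

represented = set()
lattice_points = 0
x = 0
while x * x <= Z:
    y = 0
    while x * x + y * y <= Z:
        represented.add(x * x + y * y)
        # account for the four sign choices (+-x, +-y), fewer on the axes
        lattice_points += (1 if x == 0 else 2) * (1 if y == 0 else 2)
        y += 1
    x += 1

print("N_F(Z) =", lattice_points)    # ~ pi * Z, about 3.14 * 10^6
print("R_F(Z) =", len(represented))  # ~ K * Z / sqrt(log Z), noticeably smaller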

However, this is very atypical behaviour. Indeed, quadratic forms are complete norm forms of degree 2. For incomplete norm forms, i.e., binary forms of degree at least 3, one should expect totally different behaviour, namely that N_F(Z) and R_F(Z) are not that different. This was confirmed by Hooley in a significant paper in 1967 [Hoo1]: he showed that when F is an irreducible binary cubic form such that \Delta(F) is not a perfect integer square, then N_F(Z) \sim R_F(Z). We shall refer to this as the `easy cubic case’. In 1986 [Hoo2], he went on to obtain the asymptotic formula for R_F(Z) for binary quartic forms of the shape F(x,y) = Ax^4 + Bx^2 y^2 + Cy^4. In this case, one has

R_F(Z) \sim \frac{1}{4} N_F(Z)

when A/C is not the fourth power of a rational number, and

\displaystyle R_F(Z) \sim \frac{1}{4} \left(1 - \frac{1}{2|AC|}\right) N_F(Z)

otherwise. We refer to this as the `easy quartic case’. Finally, he went on to deal with the case when F is a binary cubic form such that \Delta(F) is a square in \mathbb{Z}. In this case, he showed that there is a positive integer m, which can be determined explicitly in terms of the coefficients of F, such that

\displaystyle R_F(Z) \sim \left(1 - \frac{2}{3m} \right) N_F(Z).

We will refer to this as the `hard cubic case’.
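Before moving on, here is a rough numerical illustration of my own for the easy cubic case, using F(x,y) = x^3 + 2y^3 (irreducible, with \Delta(F) = -108, not a square). One caveat: the region |F(x,y)| \leq Z is unbounded, so truncating to a box misses some points near the direction of the real root, but for a crude comparison this should not matter much.

# Rough numerical look (my own sketch) at the easy cubic case with
# F(x,y) = x^3 + 2y^3. We truncate to a box; the points missed this
# way should be relatively few for this crude comparison.
Z = 10**6
B = 4 * round(Z ** (1 / 3))

counts = {}
for x in range(-B, B + 1):
    for y in range(-B, B + 1):
        n = x**3 + 2 * y**3
        if abs(n) <= Z:
            counts[n] = counts.get(n, 0) + 1

N = sum(counts.values())  # approximates N_F(Z)
R = len(counts)           # approximates R_F(Z)
print(N, R, R / N)        # the ratio should drift towards 1 as Z grows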

There is a general theory, applying to all binary forms of any degree at least three, which allows one to recover both the |AC| term in the easy quartic case and the m in the hard cubic case. This will be fully explained in a joint paper by my advisor, Professor Cameron Stewart, and myself. However, in the cubic and quartic cases specifically one can resolve the problem more finely: not only can one describe the relationship between R_F(Z) and N_F(Z), but when F is a binary cubic or quartic form one can show that R_F(Z) \sim W_F N_F(Z) for a positive rational number W_F which can be given explicitly in terms of the coefficients of F. My contribution to the cubic case is that we can find this rational constant even when F is not assumed to be irreducible. Indeed, we find that the most interesting behaviour actually occurs when F is completely reducible (but still with non-zero discriminant)! The reducibility of F seems to have no effect in the case of quartic forms.

The key observation is that the behaviour of the constant W_F is determined completely by the automorphism group of the binary form F in \text{GL}_2(\mathbb{Q}). This appears to be an extraordinary insight originally due to Hooley, made in his investigation of the hard cubic case; almost all other authors either explicitly or implicitly assumed that it suffices to look only at the smaller group \text{GL}_2(\mathbb{Z}).
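To make this concrete, here is a toy illustration in the easy quartic case (my own example; the correction factors above are Hooley’s). Every biquadratic form F(x,y) = Ax^4 + Bx^2y^2 + Cy^4 is fixed by the four sign changes, and precisely when A/C is the fourth power of a rational number there is an extra rational automorphism:

\displaystyle F(\pm x, \pm y) = F(x,y), \qquad \text{and if } C/A = q^4, \ q \in \mathbb{Q}^{\times}, \text{ then } F(qy, x/q) = Aq^4y^4 + Bx^2y^2 + Cq^{-4}x^4 = F(x,y).

The four sign changes are consistent with the factor \frac{1}{4} relating R_F(Z) and N_F(Z) above, and the extra map lies in \text{GL}_2(\mathbb{Q}) but not in \text{GL}_2(\mathbb{Z}) unless q = \pm 1.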

More details will be posted later.

Some problems for the new year

Part new year’s resolution and part birthday present to myself (and to those audience members who are interested), I’ve decided to write up some problems I’ve been thinking about but either don’t have the time or the techniques/knowledge to tackle at the present time. Hopefully they will keep me motivated going into 2016, as well as anyone else who’s interested in them. In no particular order:

1) Stewart’s Conjecture: I have already discussed this problem in two earlier posts (here and here). The conjecture is due to my advisor, Professor Cameron Stewart, in a paper from 1991. It asserts that there exists a positive number c such that for every binary form F(x,y) of degree d \geq 3 with integer coefficients and non-zero discriminant, there exists a positive number r_F, depending on F, such that for all integers h with |h| \geq r_F, the equation F(x,y) = h has at most c solutions in integers x, y. In particular, the value of c depends on neither F nor d. A weaker version of this conjecture asserts, for each degree d \geq 3, the existence of a positive number c_d for which the above holds.
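In symbols (my paraphrase of the statement just given), the strong form asserts:

\displaystyle \exists\, c > 0 \ \forall F \ \left(\deg F = d \geq 3, \ \Delta(F) \neq 0\right) \ \exists\, r_F > 0 : \quad |h| \geq r_F \implies \#\{(x,y) \in \mathbb{Z}^2 : F(x,y) = h\} \leq c.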

I suspect that Chabauty’s method, applied to the estimation of integer points on hyperelliptic curves, is close to being able to solve this problem; see this paper by Balakrishnan, Besser, and Muller. However, there may be other tools that can be brought to bear without involving a corresponding curve. That said, since a positive answer to Stewart’s conjecture would have a significant impact on the theory of rational points on hyperelliptic curves, it seems that the two problems are intrinsically intertwined.

2) Asymptotic Chen’s Theorem: This is related to a problem I’ve been thinking about lately. Chen’s theorem asserts that every sufficiently large even integer N can be written as the sum of a prime and a number which is the product of at most two primes. However, this simple statement hides the nature of the proof, which essentially consists of two parts and (as far as I know) has not been improved on a fundamental level since Chen. The first part is the very general Jurkat-Richert theorem, which can handle quite general sequences. Its input is some type of Bombieri-Vinogradov theorem, i.e., a positive level of distribution, and it churns out semi-primes of some order depending on the level supplied. We will phrase the result in terms of the twin prime conjecture; Goldbach’s conjecture is closely related, and Chen actually proved the analogous statement for both the twin prime problem and Goldbach’s conjecture. Bombieri-Vinogradov provides the level 1/2, and with this level the Jurkat-Richert theorem immediately yields that there exist infinitely many primes p such that p+2 is the product of at most three primes. Using this basic sieve mechanism and the Bombieri-Vinogradov theorem alone, it is impossible to breach the ‘three prime’ barrier. A higher level of distribution would do the trick, but so far Bombieri-Vinogradov has not been improved in general (although Yitang Zhang’s seminal work on bounded gaps between primes does provide an improvement in a special case).

Thus we require the second, most novel part of the proof of Chen’s theorem. Chen was able to show that there aren’t too many primes p such that p+2 has exactly three prime factors: so few that the difference between the number of primes p for which p+2 has at most three prime factors and the number for which p+2 has exactly three can be detected. However, estimating these two quantities with sieves (Chen’s proof does not introduce any technology that is not directly related to sieves) produces terms of the same order of magnitude, so Chen’s approach destroys any hope of establishing an asymptotic formula for the number of primes p for which p+2 is the product of at most two primes. It would be a significant achievement to prove such an asymptotic formula, because it would mean either a significant improvement to the underlying sieve mechanism or the successful introduction of non-sieve technology into the problem. In either case, it would be quite the thing to behold.
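To make the statement concrete, here is a small script of my own (the helper names are mine); it illustrates the statement of Chen’s theorem in its twin-prime form, not anything about the proof:

def omega(n):
    # number of prime factors of n, counted with multiplicity
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

N = 10**5
chen = sum(1 for p in primes_up_to(N) if omega(p + 2) <= 2)
print(chen)  # primes p <= N for which p + 2 has at most two prime factors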

3) An interpolation between geometrically irreducible forms and decomposable forms: A celebrated theorem of Axel Thue states that for any binary form F(x,y) with integer coefficients, degree d \geq 3, and non-zero discriminant, and for any non-zero integer h, the equation F(x,y) = h has only finitely many solutions in integers x,y. Thue’s theorem is ineffective, meaning that it gives no bound on the size of the solutions, so one cannot use it to find them all; it guarantees only that there are finitely many. Thue’s theorem has been refined by many authors over the past century, with some of the sharpest results known today due to my advisor Cam Stewart and to Shabnam Akhtari.

If one wishes to generalize Thue’s theorem to higher dimensions, there are two natural candidates. The more obvious one is to consider general homogeneous polynomials F(x_1, \cdots, x_n) in many variables. However, in this case Thue’s techniques do not generalize in an obvious way. Thue’s original argument reduces the problem to a diophantine approximation problem, i.e., to showing that there are only finitely many rational numbers which are `very close’ to a given root of F. This exploits the fact that all binary forms factor into linear forms, a feature which is absent for general homogeneous polynomials in n \geq 3 variables. Thus, one needs to narrow the scope and instead consider decomposable forms, meaning homogeneous polynomials F(x_1, \cdots, x_n) which can be factored into linear forms over \mathbb{C}, say. To this end significant progress has been made; most notably, Schmidt’s subspace theorem was motivated by this precise question. Schmidt, Evertse, and several others have worked over the years to establish results which are quite close to what is known for Thue equations. Significant gaps remain, but that’s a separate issue and we omit further discussion.

The question I have is whether there is a way to close the gap between what can be proved for decomposable forms and for general forms. The forms most different from decomposable forms (which are essentially as degenerate as possible geometrically) are the least degenerate ones, namely the geometrically irreducible forms. These are the forms that cannot be factored at all: their lack of factorization is not due to some arithmetic or algebraic obstruction that disappears after a field extension, but is geometric in nature. Precisely, geometrically irreducible forms are those forms F(x_1, \cdots, x_n) which have no factor of positive degree even over an algebraically closed field, say \mathbb{C}. For decomposable forms, a necessary condition for finiteness is that the degree d exceed the number of variables n, much like the condition d \geq 3 in the case of Thue’s theorem. However, absent from the case n = 2 is the possibility of forms of degree exceeding one which behave `almost’ like linear forms, in a concrete sense: one can show that as long as basic local conditions are satisfied, the form represents all integers. This has been shown to be the case for forms whose degree is very small compared to the number of variables; the first such result is due to Birch, and it has been improved steadily since. Thus the interpolation I am wondering about is the following: let F(x_1, \cdots, x_n) be a homogeneous polynomial with integer coefficients and degree d \geq n+1, with no repeated factors of positive degree. Suppose that F factors, over \mathbb{C}, into forms of very small degree, say d' \ll \log n. Can we hope to establish finiteness results like we can for decomposable forms? This seems like a very interesting question.
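For concreteness, here are standard examples of the two extremes (my choice of examples, not tied to any particular paper). The norm form of \mathbb{Q}(\sqrt[3]{2}) is decomposable, since

\displaystyle N(x_1 + x_2\sqrt[3]{2} + x_3\sqrt[3]{4}) = x_1^3 + 2x_2^3 + 4x_3^3 - 6x_1x_2x_3 = \prod_{i=1}^{3}\left(x_1 + x_2\theta_i + x_3\theta_i^2\right),

where \theta_1, \theta_2, \theta_3 are the conjugates of \sqrt[3]{2}. On the other hand, the Fermat cubic x_1^3 + x_2^3 + x_3^3 defines a smooth plane curve and hence has no factor of positive degree even over \mathbb{C}, so it is geometrically irreducible.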

If you are interested in any of these problems or if you have an idea as to how to approach any of them, please let me know!

Large sieve inequality

I am currently reading the book Opera de Cribro by John Friedlander and Henryk Iwaniec, and in particular studying the large sieve. One important thing to remember is that the “large sieve” is not really a sieve in the conventional sense. A ‘sieve’ typically refers to a choice of sieve weights; for example, a combinatorial sieve is usually some way of defining sieve weights \lambda_d in such a way that \lambda_d = \mu(d) for some positive integers d, while \lambda_d = 0 for the others. The large sieve does not involve a choice of sieve weights; indeed, it is usually independent of such choices (at least in its distilled form, the Bombieri-Vinogradov theorem).

The large sieve is actually just an inequality, and it is not strictly number-theoretical: it applies equally well to any “well-spaced” points on the unit circle. The full force of this philosophy has recently been brought to bear on the Vinogradov Mean Value Theorem, in this paper. We write it in its most general form. We adopt the convention e(x) = e^{2 \pi i x}, and for a given sequence (a_n) of complex numbers we define S(\alpha) = \displaystyle \sum_{M < n \leq M+N} a_n e(\alpha n). Now suppose that \alpha_1, \cdots, \alpha_r are well-spaced real numbers with respect to some parameter \delta, meaning that for k \ne l the number \alpha_k - \alpha_l is at least \delta away from any integer. Writing \lVert \beta \rVert for the distance from a real number \beta to the nearest integer, we insist that \lVert \alpha_k - \alpha_l \rVert \geq \delta whenever k \ne l.

Moreover, it is clear that we can have at most \delta^{-1} many \alpha_j's. From the Cauchy-Schwarz inequality, we see that \lvert S(\alpha) \rvert^2 \leq N \sum_{M < n \leq M+N} |a_n|^2. Therefore, any upper bound for the term

\displaystyle \sum_r \lvert S(\alpha_r)\rvert^2

must involve both N and \delta^{-1}. The remarkable thing is that this is essentially all that is needed! Indeed, Selberg proved the following sharp form of the large sieve inequality:

\displaystyle \sum_r \lvert S(\alpha_r) \rvert^2 \leq (N + \delta^{-1} -1) \sum_n |a_n|^2.
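Selberg’s inequality is easy to test numerically. Here is a quick check of my own with random coefficients and equally spaced points r/R, which are well-spaced with \delta = 1/R (the parameters N = 200 and R = 97 are arbitrary choices):

import cmath, random

# Numerical check (my own) of the large sieve inequality
# sum_r |S(alpha_r)|^2 <= (N + 1/delta - 1) * sum_n |a_n|^2.
random.seed(1)
M, N = 0, 200
a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

R = 97
alphas = [r / R for r in range(R)]   # pairwise at least 1/R apart mod 1
delta = 1.0 / R

def S(alpha):
    return sum(a[n - M - 1] * cmath.exp(2j * cmath.pi * alpha * n)
               for n in range(M + 1, M + N + 1))

lhs = sum(abs(S(al)) ** 2 for al in alphas)
rhs = (N + 1 / delta - 1) * sum(abs(an) ** 2 for an in a)
print(lhs <= rhs, lhs / rhs)  # True, with the ratio comfortably below 1 here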

This has the following striking number-theoretic interpretation. Consider all the rational numbers a/q with \gcd(a,q) = 1 and 1 \leq q \leq Q. Any two such rationals differ by at least 1/Q^2, since \left\lvert \frac{a}{q} - \frac{a'}{q'} \right\rvert = \frac{\lvert aq' - a'q \rvert}{qq'} \geq \frac{1}{Q^2} whenever the two fractions are distinct; in other words, these rationals are well-spaced with \delta = Q^{-2}. Then the large sieve inequality gives the following:

\displaystyle \sum_{q \leq Q} \sum_{\substack{a \pmod{q} \\ \gcd(a,q) = 1}} \left \lvert S \left(\frac{a}{q}\right) \right \rvert^2 \leq \left(Q^2 + N - 1\right) \sum_n |a_n|^2.

There are striking consequences of this inequality, including the famous theorem of Linnik.

Notes on the Oxford IUT workshop by Brian Conrad

Reblogged from mathbabe:

Brian Conrad is a math professor at Stanford and was one of the participants at the Oxford workshop on Mochizuki’s work on the ABC Conjecture. He is an expert in arithmetic geometry, a subfield of number theory which provides geometric formulations of the ABC Conjecture (the viewpoint studied in Mochizuki’s work).

Since he was asked by a variety of people for his thoughts about the workshop, Brian wrote the following summary. He hopes that non-specialists may also learn something from these notes concerning the present situation. Forthcoming articles in Nature and Quanta on the workshop will be aimed at the general public. This writeup has the following structure:

  1. Background
  2. What has delayed wider understanding of the ideas?
  3. What is Inter-universal Teichmuller Theory (IUTT = IUT)?
  4. What happened at the conference?
  5. Audience frustration
  6. Concluding thoughts
  7. Technical appendix

1.  Background

The ABC Conjecture is one of the outstanding conjectures in number…
