Monday, April 30, 2018

The Birthday Problem


The Birthday problem is one of the classic problems in probability theory, concerning the probability that, in a set of $n$ randomly chosen people, some pair will have the same birthday. The idea in this post is to generalize this setup and find the expected value in the generalized version.

The generalization here is this: assume that there is a very long line of people ready to enter a very large room one by one. Each person is let in and declares her birthday upon entering the room. How many people, on average, must enter in order to find $k$ people whose birthdays are all within $d$ days of one another (given an $r$-day year)?

Non-Spoiler: Some readers may recognize that this is Problem 584 of Project Euler. But note that this post neither gives the answer nor the right way to solve that problem. All we are going to do here is find an approximation of the required value, one which is still far from the correct answer.

Well, when we say 'approximation', we are immediately reminded of the Poisson approximation of the Birthday problem in Wikipedia. But most of the approximations I found on the web concern only the probability, not the expected value. A special case of this problem is solved in Strategic Practice 5 of the (amazing) Stat 110 course by the (extraordinary) Prof. Joseph Blitzstein, and I got curious as to how good the approximation really is.

Let $\mathbb{E}(k,d,r)$ be the expected number of people that have to enter the room before we have, for the first time, $k$ people with birthdays within $d$ days of one another.

Let $X_n$ be the number of matching $k$-sets of birthdays when $n$ people have entered the room. By the Poisson paradigm, $X_n$ can be treated as a Poisson variable. The parameter $\lambda_n$ for this variable is given by,

$\displaystyle\lambda_n=\displaystyle\binom{n}{k}\cdot\frac{1}{(k-1)!}\left(\frac{2d+1}{r}\right)^{k-1}$  (why?)

Therefore we immediately have, $\mathbb{P}(X_n=0)=e^{-\lambda_n}$

Now, by a simple trick common in evaluating expected values (if $N$ is the required number of people, then $\mathbb{E}[N]=\sum_{n\geq0}\mathbb{P}(N>n)$, where $\mathbb{P}(N>n)=1$ for $n<k$ and $\{N>n\}=\{X_n=0\}$ for $n\geq k$), we have,

$\begin{align}
\displaystyle\mathbb{E}(k,d,r)&=k+\sum_{n=k}^\infty\mathbb{P}(X_n=0)=k+\displaystyle\sum_{n=k}^\infty e^{-\lambda_n}
\end{align}$

Though we have an infinite sum, the terms decrease rapidly because of the exponential factor. For the test values given in the said Project Euler problem, this approximation gives $\mathbb{E}_{\text{Poisson}}(3,1,10)\approx 6.15092$ and $\mathbb{E}_{\text{Poisson}}(3,7,100)\approx 8.79751$.
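For readers who want to check these numbers outside Mathematica, here is a quick Python sketch of the same truncated sum (the function name and the cut-off at $n=100$ are my own choices):

```python
from math import comb, exp, factorial

def poisson_approx(k, d, r, n_max=100):
    """Poisson approximation of E(k, d, r): the expected number of people
    that must enter before k of them have birthdays within d days of each
    other, in an r-day year.  E = k + sum_{n >= k} P(X_n = 0), where
    X_n ~ Poisson(lambda_n)."""
    def lam(n):
        # lambda_n = C(n, k) * ((2d + 1) / r)^(k - 1) / (k - 1)!
        return comb(n, k) * ((2 * d + 1) / r) ** (k - 1) / factorial(k - 1)
    # the terms die off exponentially, so truncating the sum is safe
    return k + sum(exp(-lam(n)) for n in range(k, n_max + 1))

print(poisson_approx(3, 1, 10))   # approx 6.1509
print(poisson_approx(3, 7, 100))  # approx 8.7975
```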

But we can take it a little further. When the parameter of a Poisson variable gets very large, the Normal variable with the same mean and variance as the Poisson variable gives better accuracy. Therefore,

$\begin{align}
\displaystyle\mathbb{E}(k,d,r)&=k+\sum_{n=k}^\infty\mathbb{P}(X_n=0)\\
&=k+\displaystyle\sum_{n=k}^\infty \mathbb{P}(-1/2 \leq X_n \leq 1/2)
\end{align}$

where $X_n \sim \mathcal{N}(\lambda_n,\lambda_n)$ along with the continuity correction.

Here again, the terms decrease fast and the sum converges in about fifty terms. Now we have $\mathbb{E}_{\text{Normal}}(3,1,10)\approx 5.68554$ and $\mathbb{E}_{\text{Normal}}(3,7,100)\approx 8.04791$.

In fact, for the required answer, the normal approximation gives $\mathbb{E}_{\text{Normal}}(4,7,365)\approx 33.6396$, with a relative error of just about $2.5\%$. Not bad.

These values were calculated with the following Mathematica code.

Clear["Global`*"];
lambda[n_, k_, d_, r_] :=
  Binomial[n, k] Power[(1 + 2 d)/r, k - 1]/Factorial[k - 1];
PoissonApprox[k_, d_, r_] :=
  k + Sum[N[Exp[-lambda[n, k, d, r]]], {n, k, 100}];
NormalApprox[k_, d_, r_] :=
  k + Sum[
    N[CDF[NormalDistribution[lambda[n, k, d, r],
         Sqrt[lambda[n, k, d, r]]], 1/2] -
      CDF[NormalDistribution[lambda[n, k, d, r],
         Sqrt[lambda[n, k, d, r]]], -1/2]], {n, k, 100}];
PoissonApprox[4, 7, 365]
NormalApprox[4, 7, 365]
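As a cross-check of the Mathematica values, here is a hedged Python sketch of the normal-approximation sum (names and the truncation at $n=100$ are mine):

```python
from math import comb, erf, factorial, sqrt

def normal_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_approx(k, d, r, n_max=100):
    """Normal approximation of E(k, d, r): replace P(X_n = 0) by
    P(-1/2 <= Y <= 1/2) with Y ~ N(lambda_n, lambda_n), i.e. the
    continuity-corrected normal with the Poisson mean and variance."""
    total = float(k)
    for n in range(k, n_max + 1):
        lam = comb(n, k) * ((2 * d + 1) / r) ** (k - 1) / factorial(k - 1)
        sd = sqrt(lam)
        total += normal_cdf((0.5 - lam) / sd) - normal_cdf((-0.5 - lam) / sd)
    return total

print(normal_approx(3, 1, 10))   # approx 5.6855
print(normal_approx(4, 7, 365))  # approx 33.64
```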

Hope you enjoyed this. See you again with a nice problem.


Until then
Yours Aye
Me

Tuesday, April 24, 2018

Probability on a Chess Match


Consider two friends $A$ and $B$ playing a series of bullet chess games. As the player who lost the most recent game always wants to continue, they come up with an idea. They'll play the first game. After the game, they'll toss a fair coin. If the coin comes up heads, they'll play another game; otherwise they end the match. They'll repeat this procedure until the coin turns up tails.

The question we are interested in here is the probability of either player winning the match, given the probability $\alpha$ of $A$ winning a game, the probability $\gamma$ of $B$ winning a game, and the remaining probability $\beta=1-\alpha-\gamma$ of a game ending in a draw. WLOG assume $\alpha<\gamma$.

All the equations referred to in this post from now on are from A list of Integrals and Identities.

Let $A_{n,k}$ denote the event of $A$ winning a match of $n$ games by $k$ more wins (ignoring the coin tosses). Then,

$\displaystyle\mathbb{P}(A_{n,k})=\sum_{\substack{a+b+c=n\\a-c=k}}\binom{n}{a,b,c}\alpha^a\beta^b\gamma^c$

where $a$ denotes the number of games won by $A$, $c$ the number of games won by $B$ and $b$ the number of games that ended in a draw.

First, let's tackle an easy question. That is finding the probability of the match ending in a draw.

$\begin{align}
\mathbb{P}(\text{Draw})&=\displaystyle\sum_{n=1}^\infty\mathbb{P}(A_{n,0}\text{ | }n\text{ games played})\cdot\mathbb{P}(n\text{ games being played})\\
\end{align}$

As the probability of the match lasting exactly $n$ games is $2^{-n}$, using (1) we have,

$\begin{align}
\mathbb{P}(\text{Draw})&=\displaystyle\frac{\cot\phi}{\sqrt{\alpha\gamma}}-1\\
\end{align}$,   where $\phi=\cos^{-1}\displaystyle\left(\frac{2\sqrt{\alpha\gamma}}{1+\alpha+\gamma}\right)$
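As a numerical sanity check (not part of the derivation), the closed form can be compared with the defining sum, computing $\mathbb{P}(A_{n,0})$ by direct trinomial enumeration. A Python sketch, with helper names of my own:

```python
from math import acos, comb, sqrt, tan

def p_tie(n, alpha, gamma):
    """P(A_{n,0}): A and B win equally many of the n games."""
    beta = 1 - alpha - gamma
    return sum(comb(n, a) * comb(n - a, a)       # multinomial(n; a, n-2a, a)
               * (alpha * gamma) ** a * beta ** (n - 2 * a)
               for a in range(n // 2 + 1))

def p_draw_closed(alpha, gamma):
    """cot(phi) / sqrt(alpha * gamma) - 1, with phi as defined above."""
    phi = acos(2 * sqrt(alpha * gamma) / (1 + alpha + gamma))
    return 1 / (tan(phi) * sqrt(alpha * gamma)) - 1

alpha, gamma = 0.3, 0.5
direct = sum(0.5 ** n * p_tie(n, alpha, gamma) for n in range(1, 60))
print(direct, p_draw_closed(alpha, gamma))  # the two agree
```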


Similarly, we now look at the probability of $A$ not losing the match. This time we have,

$\begin{align}
\mathbb{P}(A \text{ not losing})&=\displaystyle\sum_{n=1}^\infty\sum_{k=0}^\infty\mathbb{P}(A_{n,k}\text{ | }n\text{ games played})\cdot\mathbb{P}(n\text{ games being played})\\
\end{align}$

Using (6), this reduces to,

$\begin{align}
\mathbb{P}(A \text{ not losing})&=\displaystyle\frac{\cot\phi}{\sqrt{\alpha\gamma}-\alpha(\sec\phi-\tan\phi)}-1\\
\end{align}$
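This too can be checked numerically by summing $\mathbb{P}(A_{n,k})$ over all $k\geq0$ directly; a sketch along the same lines (helper names mine):

```python
from math import acos, comb, cos, sqrt, tan

def p_lead(n, k, alpha, gamma):
    """P(A_{n,k}): in n games, A has exactly k more wins than B."""
    beta = 1 - alpha - gamma
    return sum(comb(n, a) * comb(n - a, a - k)   # multinomial(n; a, b, a-k)
               * alpha ** a * beta ** (n - 2 * a + k) * gamma ** (a - k)
               for a in range(k, (n + k) // 2 + 1))

def p_not_losing_closed(alpha, gamma):
    """cot(phi) / (sqrt(alpha*gamma) - alpha*(sec(phi) - tan(phi))) - 1."""
    phi = acos(2 * sqrt(alpha * gamma) / (1 + alpha + gamma))
    sec = 1 / cos(phi)
    return (1 / tan(phi)) / (sqrt(alpha * gamma) - alpha * (sec - tan(phi))) - 1

alpha, gamma = 0.3, 0.5
direct = sum(0.5 ** n * sum(p_lead(n, k, alpha, gamma) for k in range(n + 1))
             for n in range(1, 60))
print(direct, p_not_losing_closed(alpha, gamma))  # the two agree
```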

In fact, if instead of a fair coin they decide to use a biased coin, so that they continue playing with probability $q$ and end the match with probability $p=1-q$, then

$\begin{align}
\displaystyle\frac{q}{1-q}\mathbb{P}(\text{Draw})&=\frac{\frac{1}{2}\cot\phi}{\sqrt{(q\alpha)(q\gamma)}}-1\\
\end{align}$

$\displaystyle\frac{q}{1-q}\mathbb{P}(A \text{ wins the match})=\displaystyle\frac{\frac{1}{2}\cot\phi}{(q\gamma)(\sec\phi+\tan\phi)-\sqrt{(q\alpha)(q\gamma)}}$

$\displaystyle\frac{q}{1-q}\mathbb{P}(B \text{ wins the match})=\displaystyle\frac{\frac{1}{2}\cot\phi}{(q\alpha)(\sec\phi+\tan\phi)-\sqrt{(q\alpha)(q\gamma)}}$

where $\displaystyle\phi=\cos^{-1}\left(\frac{2\sqrt{(q\alpha)(q\gamma)}}{(1-q)+q\alpha+q\gamma}\right)$
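All three formulae can be verified numerically by summing against the match-length distribution $q^{n-1}(1-q)$; a Python sketch with an arbitrarily chosen $q$ (and a bonus check that the three match probabilities add up to $1$):

```python
from math import acos, comb, cos, sqrt, tan

def p_lead(n, k, alpha, gamma):
    """P(A_{n,k}): in n games, A has exactly k more wins than B."""
    beta = 1 - alpha - gamma
    return sum(comb(n, a) * comb(n - a, a - k)
               * alpha ** a * beta ** (n - 2 * a + k) * gamma ** (a - k)
               for a in range(k, (n + k) // 2 + 1))

alpha, gamma, q = 0.3, 0.5, 0.6
phi = acos(2 * q * sqrt(alpha * gamma) / ((1 - q) + q * alpha + q * gamma))
sec, tn = 1 / cos(phi), tan(phi)
cot = 1 / tn

# LHS: (q / (1 - q)) * P(outcome) = sum_n q^n * P(outcome in n games)
N = 80  # q^80 is negligible
lhs_draw = sum(q ** n * p_lead(n, 0, alpha, gamma) for n in range(1, N))
lhs_a = sum(q ** n * sum(p_lead(n, k, alpha, gamma) for k in range(1, n + 1))
            for n in range(1, N))
lhs_b = sum(q ** n * sum(p_lead(n, k, gamma, alpha) for k in range(1, n + 1))
            for n in range(1, N))  # B's lead = A's lead with roles swapped

rhs_draw = 0.5 * cot / (q * sqrt(alpha * gamma)) - 1
rhs_a = 0.5 * cot / (q * gamma * (sec + tn) - q * sqrt(alpha * gamma))
rhs_b = 0.5 * cot / (q * alpha * (sec + tn) - q * sqrt(alpha * gamma))
print(lhs_draw, rhs_draw)  # the pairs agree
print(lhs_a, rhs_a)
print(lhs_b, rhs_b)
```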

These expressions can be written algebraically, without trigonometric functions, but they are clumsy and I prefer this form. Also, there are a couple of nice details hidden in these expressions. First, with the right choice of $q$, the weaker player can decrease his opponent's chance of winning the match, which was surprising to me.

Second, in the interesting limiting case $q\to1$, the LHSs of the above three expressions can be seen as the expected number of games after which the match is at a drawn stage, player $A$ is in the lead, and player $B$ is in the lead, respectively.

$\displaystyle\lim_{q\to1}\displaystyle\frac{q}{1-q}\mathbb{P}(\text{Draw})=\frac{1}{|\gamma-\alpha|}-1$,

$\displaystyle\lim_{q\to1}\displaystyle\frac{q}{1-q}\mathbb{P}(A \text{ wins the match})=\frac{\alpha}{\text{max}(0,\gamma-\alpha)^2}$ and

$\displaystyle\lim_{q\to1}\displaystyle\frac{q}{1-q}\mathbb{P}(B \text{ wins the match})=\frac{\gamma}{\text{max}(0,\alpha-\gamma)^2}$
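The first two limits are easy to confirm numerically (the third diverges for $\alpha<\gamma$, as $B$ eventually stays in the lead forever). A minimal sketch: track the distribution of the score difference with a simple dynamic program and accumulate the probabilities of a tied score and of $A$ leading (the values of $\alpha,\gamma$ are my arbitrary choice):

```python
alpha, gamma = 0.2, 0.6
beta = 1 - alpha - gamma

# dist[j] holds P(A's wins - B's wins = j - n) after n games
dist = [1.0]
e_tied = e_a_leads = 0.0
for n in range(1, 301):              # terms beyond n = 300 are negligible here
    new = [0.0] * (2 * n + 1)
    for j, p in enumerate(dist):
        new[j] += p * gamma          # B wins the game: difference drops by 1
        new[j + 1] += p * beta       # drawn game: difference unchanged
        new[j + 2] += p * alpha      # A wins the game: difference rises by 1
    dist = new
    e_tied += dist[n]                # score difference is 0
    e_a_leads += sum(dist[n + 1:])   # A is strictly ahead

print(e_tied)     # approx 1 / |gamma - alpha| - 1 = 1.5
print(e_a_leads)  # approx alpha / (gamma - alpha)^2 = 1.25
```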

ASIDE: An interesting discussion on a related problem is given in Joint probability of geometric random variables. However, there the distributions are independent. Our problem can be stated similarly using the result that if,

$\vec{\mathbf{X}}\sim \text{Mult}(N, \vec{\mathbf{p}})$ where $N \sim \text{NBin}(m,s)$, then $X_i \sim \displaystyle\text{NBin}\left(m,\frac{s}{s+(1-s)p_i}\right)$

But the $X_i$'s are not independent (they would be had $N$ been a Poisson variable).

Even though we started with the assumption $\alpha<\gamma$, fortunately the final results hold for all values of $\alpha$ and $\gamma$. The symmetry of the formulae is really enchanting to me, and I'm greatly satisfied that I've solved this. Hope you enjoyed it.

Until then
Yours Aye
Me

A List of Integrals and Identities


A list of the integrals and identities used in solving the problem in this post.

Throughout this post, we consider $0<\alpha, \gamma, r \leq 1$, and $a, b, c, k$ are non-negative integers.


(1) I'm not sure where I saw this. I'll update the reference if I find it. UPDATE: I found the reference. It's given in Advances in Combinatorial Methods and Applications to Probability and Statistics

$\displaystyle\sum_{\substack{a,b,c \geq 0\\a+b+c=n\\a-c=k}}\binom{n}{a,b,c}\alpha^a(1-\alpha-\gamma)^b\gamma^c=\frac{(\alpha/\gamma)^{k/2}}{\pi}\int_0^\pi  \cos k\theta\text{ } (1-\alpha-\gamma+2\sqrt{\alpha\gamma}\cos \theta)^n \, d\theta$
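Since I couldn't locate the reference at first, here is a numerical check of (1), comparing the trinomial sum with the integral evaluated by the midpoint rule (the integrand is a trigonometric polynomial, so a modest number of nodes is already exact up to rounding); parameter values are my own:

```python
from math import comb, cos, pi, sqrt

def lhs(n, k, alpha, gamma):
    """Trinomial sum over a + b + c = n with a - c = k."""
    beta = 1 - alpha - gamma
    return sum(comb(n, a) * comb(n - a, a - k)   # multinomial(n; a, b, c)
               * alpha ** a * beta ** (n - 2 * a + k) * gamma ** (a - k)
               for a in range(k, (n + k) // 2 + 1))

def rhs(n, k, alpha, gamma, steps=1000):
    """Midpoint-rule evaluation of the integral on the right of (1)."""
    beta = 1 - alpha - gamma
    h = pi / steps
    s = sum(cos(k * (i + 0.5) * h)
            * (beta + 2 * sqrt(alpha * gamma) * cos((i + 0.5) * h)) ** n
            for i in range(steps))
    return (alpha / gamma) ** (k / 2) * s * h / pi

print(lhs(6, 2, 0.3, 0.5), rhs(6, 2, 0.3, 0.5))  # both approx 0.064935
```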


(2) This is given in Generality of algebra. Anyway, this is easy to prove if we recognize that $r^k \cos kx=\text{Re}(r^k e^{ikx})$, so the sum is just a geometric series.

$1+r\cos x+r^2 \cos 2x+r^3 \cos 3x+\cdots=\displaystyle\frac{1-r\cos x}{1-2r\cos x+r^2}$


(3) This is just pattern recognition by using different values in Mathematica.

$\displaystyle\int \frac{1-r \cos x}{1-2r \cos x +r^2}\,dx=\frac{x}{2}+\tan^{-1}\left(\frac{1+r}{1-r}\tan\left(\frac{x}{2}\right)\right)+\text{constant}$

Interestingly,

$\displaystyle\frac{1}{\pi}\int_0^\pi \frac{1-r \cos x}{1-2r \cos x +r^2}\,dx=H[1-r]$

where $H[x]$ is the Heaviside Unit step function.


(4) I definitely saw this on Wikipedia. But again, I couldn't find it now. UPDATE: Found this too in List of definite integrals.

$\displaystyle\frac{1}{\pi}\int_0^\pi\frac{\cos kx}{\sec \phi-\cos x}\,dx=\frac{(\sec \phi - \tan \phi)^k}{\tan \phi}$
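This one is also easy to confirm numerically (the integrand is smooth since $\sec\phi>1\geq\cos x$); a small sketch with my own parameter choices, which covers (5) as well via $k=0$:

```python
from math import cos, pi, tan

def lhs4(k, phi, steps=4000):
    """(1/pi) * integral of cos(kx) / (sec(phi) - cos(x)) over [0, pi],
    by the midpoint rule."""
    h = pi / steps
    sec = 1 / cos(phi)
    return sum(cos(k * (i + 0.5) * h) / (sec - cos((i + 0.5) * h))
               for i in range(steps)) * h / pi

def rhs4(k, phi):
    """(sec(phi) - tan(phi))^k / tan(phi)."""
    sec = 1 / cos(phi)
    return (sec - tan(phi)) ** k / tan(phi)

print(lhs4(3, 0.7), rhs4(3, 0.7))  # the two agree
```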


(5) Using $k=0$ in (4),

$\displaystyle\frac{1}{\pi}\int_0^\pi\frac{1}{\sec \phi-\cos x}\,dx=\cot\phi$


(6) Multiplying (4) by $r^k$ and summing across all $k$,

$\displaystyle\frac{1}{\pi}\int_0^\pi\frac{1+r\cos x+r^2 \cos 2x+r^3 \cos 3x+\cdots}{\sec \phi-\cos x}\,dx=\frac{\cot \phi}{1-r(\sec \phi - \tan \phi)}$



Until then
Yours Aye
Me

Thursday, April 19, 2018

A mild generalization of the Ménage problem


We all know about the classic Ménage problem, which asks for the number of different ways in which it is possible to seat a set of male-female couples at a dining table so that men and women alternate and nobody sits next to his or her partner.

Here again we consider a similar setup, except that each family consists of four members: a father, a mother, a son and a daughter. To be clear, we seek the number of different ways in which it is possible to seat a set of families at a dining table so that men and women alternate and no family ends up sitting together. Note that we consider a family to be sitting together only when all members of the same family sit next to each other.

The idea here is inspired by the Non-sexist solution of the menage problem. It is easy to see that there are $2(2n)!^2$ seatings in total such that the dinner table has an alternating male-female arrangement: $2$ ways to decide which seats are for men and which for women, and $(2n)!$ ways each to arrange the men and the women.

Consider $n>1$. Let $L_n$ be the answer we are trying to find for the case of $n$ families. Then, using inclusion-exclusion, we have

$L_n=\displaystyle\sum_{k=0}^n(-1)^k\binom{n}{k}w_k$

where $w_k$ denotes the number of seatings such that a given set of $k$ families are seated together (there may be other families sitting together as well, but we don't care about them). To compute $w_k$, let's define $d_k$, the number of ways of placing $k$ non-overlapping curved dominoes, each spanning $4$ adjacent points, on a circle containing $4n$ equally spaced points.

Calculating the $d_k$'s is very easy: it is just the stars and bars problem in a circular setup. For the problem at hand, we have,

$d_k=\displaystyle\frac{4n}{4n-3k}\binom{4n-3k}{k}$

The idea behind the above formula is exactly like that of this Stack Exchange question.
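The formula is easy to test by brute force for small $n$: enumerate all ways to choose $k$ starting points on the circle so that the corresponding $4$-point blocks don't overlap. A Python sketch (naming is my own):

```python
from itertools import combinations
from math import comb

def d(n, k):
    """d_k = (4n / (4n - 3k)) * C(4n - 3k, k); the division is exact."""
    return 4 * n * comb(4 * n - 3 * k, k) // (4 * n - 3 * k)

def d_brute(n, k):
    """Count placements of k disjoint blocks of 4 consecutive points
    on a circle of 4n points, by direct enumeration of block starts."""
    m = 4 * n
    count = 0
    for starts in combinations(range(m), k):
        occupied = set()
        for s in starts:
            occupied |= {(s + i) % m for i in range(4)}
        if len(occupied) == 4 * k:  # no two blocks overlap
            count += 1
    return count

print([d(3, k) for k in range(4)])        # [1, 12, 30, 4]
print([d_brute(3, k) for k in range(4)])  # same values
```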

Now we compute $w_k$. Note that there are $8$ ways in which a family can be seated together with men and women alternating.

Once we have chosen the $k$ families, we have $2$ ways to decide which are the men's seats and which are the women's. We have $d_k$ ways to place the $k$ families' blocks around the circular table, and the families can be permuted among those blocks in $k!$ ways. As we have already fixed the men's (and women's) seats, each of the $k$ families that are supposed to sit together can be seated within its block in $4$ ways. The remaining $2n-2k$ men and $2n-2k$ women can be permuted in the remaining seats. In short, we have,

$w_k=2\cdot d_k \cdot 4 ^k \cdot k! \cdot (2n-2k)!^2$

Therefore,

$L_n=\displaystyle\sum_{k=0}^n(-1)^k\binom{n}{k}\cdot 2\cdot 4^k\cdot k!\cdot (2n-2k)!^2\cdot d_k$

Substituting $d_k$ and simplifying, we finally end up with,

$L_n=2\cdot n!\displaystyle\sum_{k=0}^n(-4)^k\frac{4n}{4n-3k}\binom{4n-3k}{k}\frac{(2n-2k)!^2}{(n-k)!}$

And there we have our final answer.
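As a sanity check, the formula can be compared against brute force for $n=2$ (the smallest case covered, since the derivation assumed $n>1$): enumerate all $8!$ seatings of the two families, keep those where genders alternate, and discard those where some family occupies four consecutive seats. A Python sketch of my own:

```python
from itertools import permutations
from math import comb, factorial

def L(n):
    """Closed form L_n = 2 * n! * sum_k (-4)^k d_k (2n-2k)!^2 / (n-k)!,
    valid for n > 1; all divisions are exact."""
    total = 0
    for k in range(n + 1):
        d_k = 4 * n * comb(4 * n - 3 * k, k) // (4 * n - 3 * k)
        total += (-4) ** k * d_k * factorial(2 * n - 2 * k) ** 2 // factorial(n - k)
    return 2 * factorial(n) * total

def L2_brute():
    # each family contributes two males (gender 0) and two females (gender 1),
    # all eight individuals distinct
    people = [(f, g) for f in range(2) for g in (0, 1, 0, 1)]
    count = 0
    for seating in permutations(range(8)):   # seating[s] = person in seat s
        genders = [people[p][1] for p in seating]
        if any(genders[s] == genders[s - 1] for s in range(8)):
            continue  # men and women must alternate around the table
        fams = [people[p][0] for p in seating]
        if any(len({fams[(s + i) % 8] for i in range(4)}) == 1 for s in range(8)):
            continue  # some family sits in four consecutive seats
        count += 1
    return count

print(L(2))  # 896
```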

Another interesting variation of our 'generalized menage problem' asks for the number of ways such that no two members of the same family sit together. This seems rather complicated, in that it doesn't seem to admit any closed form. Of course, this is just my opinion.

Trying to find an answer to this variation, I started thinking about arranging balls of $n$ different colors with $c[i]$ balls of color $i$. It seems DP is the best approach, as discussed in AMBALLS - Editorial. But let's postpone that for a later post.

Hope you enjoyed this post. I'll write again with a nice post.


UPDATE: In the spirit of ménage numbers, if we define

$A_n=\displaystyle\sum_{k=0}^n(-4)^k\frac{4n}{4n-3k}\binom{4n-3k}{k}\frac{(2n-2k)!^2}{(n-k)!}$

then we have this monstrous recurrence for $A_n$'s.

$\begin{align}
(2 n - 3) (2 n - 5) (2 n - 7) (n - 2) (n - 3) (n - 4) (n - 5)A_n&=4 (n - 5) (n - 4) (n - 3) n (2 n - 7) (2 n - 5) (8 n^4 - 36 n^3 + 54 n^2 - 17 n - 13) A_{n-1} \\&
+ 16 (n - 5) (n - 4) (n - 3) n (2 n - 7) (16 n^5 - 144 n^4 + 472 n^3 - 736 n^2 + 505 n - 95) A_{n-2} \\&
- 384 (n - 5) (n - 4) n (2 n - 1) (12 n^3 - 84 n^2 + 197 n - 145) A_{n-3} \\&
+ 2304 (n - 5) (n - 3) n (2 n - 3) (2 n - 1) (6 n^2 - 21 n + 10) A_{n-4} \\&
+ 27648 (n - 4) (n - 3) (n - 1) n (2 n - 5) (2 n - 3) (2 n - 1) A_{n-5} \\&
+ 96 \cdot (-4)^n \cdot (2 n - 7) (2 n - 5) (2 n - 3) (2 n - 1) (4 n - 5)
\end{align}$

Obviously, I didn't find this recurrence manually. I used the amazing HolonomicFunctions package from RISC (part of the RISCErgoSum bundle), which has a Mathematica implementation of Zeilberger's algorithm for finding holonomic recurrences. And it did the job in a matter of seconds. Amazing!!

Mathematica code:
Clear["Global`*"];
<< RISC`HolonomicFunctions`
ann = Annihilator[
   Power[-4, k] (4 n)/(4 n - 3 k)
     Binomial[4 n - 3 k, k] Factorial[2 n - 2 k]^2/
    Factorial[n - k], {S[k], S[n]}];
Takayama[ann, {k}]


Until then
Yours Aye
Me