Wednesday, August 18, 2021

Birds on a Wire - a Probability puzzle

This post is about birds on a wire - a nice probability puzzle that I recently saw on the cut-the-knot website. The site also gives five solutions, all of which are somewhat informal.

While all the solutions were pretty, I was particularly struck by the elegant solution by Stuart Anderson. Studying his solution made me realize that the condition that '$n$ is large' is not very significant. In fact, we'll use his idea to solve the question for the finite version of the problem.

Let the length of the wire be $1$ unit. Let $X_k$ ($1\leq k \leq n$) denote the position of the $k$th bird to sit on the wire. That is, all the $X_k$'s are independent standard uniform random variables.

Let $X_{(k)}$ be the random variable denoting the $k$th order statistic. Now let's define new random variables $Y_k$ for $0 < k \leq n+1$ such that

$Y_k=X_{(k)}-X_{(k-1)}$ with $X_{(0)}=0$ and $X_{(n+1)}=1$.

Note that the $n$ birds divide the wire into $n+1$ segments and the random variable $Y_k$ denotes the length of the $k$th segment.

Using the joint distribution of the order statistics of the Uniform distribution, we can see that the $Y_k$s are identically distributed (but not independent) $Beta(1,n)$ variables.

Therefore, the cumulative distribution for any $Y_k$ is $F(y)=\mathbb{P}(Y_k \leq y)=1-(1-y)^n$ and the expected value is $\mathbb{E}(Y_k)=1/(n+1)$.

Now if $n$ were large, we could use the fact that the above expression for the CDF is approximated by $1-e^{-ny}$ and the expected value by $1/n$. But as noted earlier, we don't have to assume $n$ is large; we can proceed directly.

Let's define new variables $C_k$ such that

$\displaystyle C_k=\begin{cases}0, & \text{if the }k \text{th segment is not coloured}\\  Y_k, & \text{otherwise} \end{cases}$

It is straightforward to see that

(i) $\mathbb{E}(C_1)=\mathbb{E}(C_{n+1})=0$ because those end segments never get coloured, and
(ii) $\mathbb{E}(C_2)=\mathbb{E}(Y_2)=\mathbb{E}(C_n)=\mathbb{E}(Y_n)=1/(n+1)$ because those segments always get coloured.

For the cases $2<k<n$, consider the segment $Y_k$ along with its neighbours $Y_{k-1}$ and $Y_{k+1}$. A minute's thought shows that the $k$th segment is not coloured if and only if it is the biggest among the three variables.

If we let $Z_k=\max(Y_{k-1},Y_k,Y_{k+1})$, we can rewrite $C_k$ as

$\displaystyle C_k=\begin{cases}0, & Y_k=Z_k\\  Y_k, & \text{otherwise} \end{cases}$

But $\displaystyle \mathbb{P}(Y_k=Z_k)=\frac{1}{3}$ by symmetry. Therefore,

$\displaystyle \mathbb{E}(C_k)=\mathbb{P}(Y_k=Z_k)\cdot 0+\mathbb{P}(Y_k \ne Z_k) \mathbb{E}(Y_k|Y_k \ne Z_k)=\frac{2}{3}\mathbb{E}(Y_k|Y_k \ne Z_k)$

To evaluate this conditional expectation, we condition $\mathbb{E}(Y_k)$ on whether $Y_k=Z_k$. This gives,

$\displaystyle \mathbb{E}(Y_k)=\frac{1}{3}\mathbb{E}(Y_k|Y_k=Z_k)+\frac{2}{3}\mathbb{E}(Y_k|Y_k \ne Z_k)$

Rearranging this, we have

$\displaystyle \mathbb{E}(C_k)=\frac{2}{3}\mathbb{E}(Y_k|Y_k \ne Z_k)=\mathbb{E}(Y_k)-\frac{1}{3}\mathbb{E}(Y_k|Y_k=Z_k)=\mathbb{E}(Y_k)-\frac{1}{3}\mathbb{E}(Z_k)$

To compute the expected value of $Z_k$, we proceed in two parts.

First, consider any distribution with CDF $F(v)$ and the order statistics $V_{(i)}$, $V_{(j)}$ and $V_{(k)}$ with $i<j<k$.

It can be seen intuitively that the conditional distribution of $V_{(j)}$ given $V_{(i)}$ and $V_{(k)}$ is the same as the $(j-i)$th order statistic obtained from a sample of size $k-i-1$ whose distribution is $F(v)$ truncated on the left at $V_{(i)}$ and on the right at $V_{(k)}$.

For example, if we know that $V_{(5)}=0.4$ and $V_{(10)}=0.9$ from the standard uniform distribution $U(0,1)$, then $V_{(7)}$ has the same distribution as $W_{(2)}$ from a sample of size $4(=10-5-1)$ from the distribution $W\sim U(0.4,0.9)$.

This result can be obtained by using Theorems 2 and 3 given here.

Second, it is a well known result in probability (see cut-the-knot, among others) that if a line of length $1$ unit is broken at $n-1$ points giving $n$ segments, then the expected length of the $k$th largest segment is $(1/k+\cdots+1/n)/n$. Specifically, the expected length of the largest segment in the $3$ segment case is $(1/1+1/2+1/3)/3=11/18$.
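As a quick sanity check (mine, not part of the original argument), here is a small Monte Carlo sketch of the 'expected length of the $k$th largest segment' result; the function name is my own.

```python
import random

def kth_largest_segment_mean(n, k, trials=100_000, seed=0):
    """Break a unit line at n-1 uniform points; return the Monte Carlo
    average of the k-th largest of the resulting n segments."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [0.0] + sorted(rng.random() for _ in range(n - 1)) + [1.0]
        segs = sorted((pts[i + 1] - pts[i] for i in range(n)), reverse=True)
        total += segs[k - 1]
    return total / trials

# expected length of the k-th largest of n segments: (1/k + ... + 1/n) / n
est3 = kth_largest_segment_mean(3, 1)
print(est3, 11 / 18)  # largest of 3 segments: both ≈ 0.611
```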

Therefore,

$\displaystyle \mathbb{E}(Z_k)=\mathbb{E}(\mathbb{E}(Z_k|X_{(k-2)},X_{(k+1)}))=\mathbb{E}\left(\frac{11}{18}(X_{(k+1)}-X_{(k-2)})\right)=\frac{11}{18}\frac{3}{n+1}$

Using the above and substituting the known values, we get

$\displaystyle \mathbb{E}(C_k)=\frac{1}{n+1}-\frac{1}{3}\frac{11}{18}\frac{3}{n+1}=\frac{7}{18n+18}$

If we now let $C=C_1+C_2+\cdots+C_n+C_{n+1}$, then using the linearity of expectations, we get

$\displaystyle \mathbb{E}(C)=\frac{2}{n+1}+(n-3)\frac{7}{18n+18} =\frac{7n+15}{18n+18}$

Note that this applies only when $n\geq 3$ (cases $n<3$ are trivial to calculate). More importantly, this shows that the expected value tends to $7/18$ for large $n$.
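To double-check the formula, here is a simulation sketch of my own (not from the original solutions): each bird paints the gap between itself and its nearest neighbouring bird, so an interior gap goes unpainted exactly when it is larger than both adjacent gaps, while the two segments touching the wire's ends are never painted.

```python
import random

def expected_painted(n, trials=20_000, seed=1):
    """n birds land uniformly on a unit wire; each bird paints the gap to its
    nearest neighbouring bird. Returns the average total painted length."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = sorted(rng.random() for _ in range(n))
        gaps = [xs[i + 1] - xs[i] for i in range(n - 1)]  # the Y_2, ..., Y_n above
        painted = 0.0
        for i, g in enumerate(gaps):
            # a gap is painted unless it is larger than both neighbouring gaps;
            # the outermost birds have only one neighbour, so the first and last
            # gaps are always painted (the segments to the wire's ends never are)
            left = gaps[i - 1] if i > 0 else float('inf')
            right = gaps[i + 1] if i < len(gaps) - 1 else float('inf')
            if g < left or g < right:
                painted += g
        total += painted
    return total / trials

n = 10
print(expected_painted(n), (7 * n + 15) / (18 * n + 18))  # both ≈ 0.429
```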

I would like to thank Abhilash Unnikrishan for pointing out the error in my original solution (I had assumed that the $Y_k$s are independent) and for suggesting the use of the 'expected value of the largest segment in a line' to simplify the calculations in arriving at the final result.

Hope you enjoyed the discussion.


Until then
Yours Aye
Me

Sunday, July 11, 2021

Doubling the Cube

It's always interesting when we get to visit one of the three classical problems in Geometry. As the title of this post says, we are going to talk about Doubling the cube.

It is well known that doubling the cube is impossible with only a compass and straightedge but is tractable with a neusis ruler. One of the simplest such constructions is given on Wikipedia. We use a slightly modified version in this post.


Construct an equilateral triangle $ABC$. Draw a line through $B$ perpendicular to $BC$. We now use the neusis to find a point $D$ on line $AB$ extended and a point $E$ on the perpendicular, with $C$, $E$ and $D$ collinear, such that $AB=ED$. Then, $CE=\sqrt[3]{2}AB$.

The wiki article quoted above does not prove this, so I decided to try it on my own. To begin with, because $\angle CBE$ is a right angle, we see that $BE=\sqrt{CE^2-CB^2}$.

Now we construct a point $E'$ on $BD$ such that $EE'$ is parallel to $CB$.



Because $\triangle DEE' \sim \triangle DCB$, their sides are in proportion. Therefore,

$\displaystyle \frac{EE'}{CB} = \frac{DE}{DC} \implies EE'=BC \cdot \frac{BC}{BC+CE}$.

As $\triangle BE'E$ is a $30-60-90$ triangle, $BE=\sqrt{3}EE'$. Therefore,

$\displaystyle \sqrt{3}\frac{BC^2}{BC+CE}=\sqrt{CE^2-BC^2} \implies (CE^2-BC^2)(CE+BC)^2=3BC^4$.

If we let $CE=xBC$, then the above expression reduces to $(x^2-1)(x+1)^2=3$.

By the rational root theorem, we can guess that $x=-2$ is a root of the equation $(x^2-1)(x+1)^2-3=0$. Factoring out $(x+2)$ leaves $x^3-2=0$, showing that $x=\sqrt[3]{2}$ and thus proving that $CE=\sqrt[3]{2}BC$.
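Though not a proof, the construction is easy to confirm numerically. Below is a small sketch of mine (the coordinates and names are my own choices): put $B$ at the origin, $C=(1,0)$, $A=(1/2,\sqrt{3}/2)$, and solve the neusis condition $ED=AB$ by bisection.

```python
import math

SQ3 = math.sqrt(3)

def ed(d):
    """|ED| when D = d*(1/2, sqrt(3)/2) lies on line AB beyond B (d < 0);
    E is where line CD meets the perpendicular to BC at B (the y-axis)."""
    dx, dy = d / 2, d * SQ3 / 2
    ey = dy / (1 - d / 2)          # from E = C + s*(D - C) with x-coordinate 0
    return math.hypot(dx, dy - ey)

# bisection for the neusis condition ED = AB = 1: ed(-3) > 1 > ed(-0.5)
lo, hi = -3.0, -0.5
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if ed(mid) > 1 else (lo, mid)
d = (lo + hi) / 2
ey = (d * SQ3 / 2) / (1 - d / 2)
ce = math.hypot(1.0, ey)           # |CE| with C = (1, 0), E = (0, ey)
print(ce, 2 ** (1 / 3))            # both ≈ 1.259921
```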

This last part of the proof is the one I find a little unsatisfactory. For one of the classical problems in Geometry, this proof is more algebraic than geometric.

I went in search of a more geometric proof myself but couldn't find one, so I turned to an internet search and, to my surprise, there aren't a lot. Finally, I found Nicomedes' proof in this page, which gave me what I was looking for. I also got to know about Crockett Johnson's painting on Doubling the cube.

Luckily, I was able to modify (and simplify) the argument to get something that looks elegant to my eyes. This proof starts by constructing a line through $D$ perpendicular to $CB$. Let this line meet (the extension of) $CB$ at $J$.



As $\triangle CBE \sim \triangle CJD$,

$\displaystyle \frac{CB}{BJ}=\frac{CE}{ED} \implies CE \cdot BJ=CB \cdot ED=CB \cdot AB$

Note that $\triangle DBJ$ is a $30-60-90$ triangle. Therefore, $BD=2BJ$. This finally shows us that

$CE \cdot BD=2CB \cdot AB \tag{1} \label{reqn}$

Now comes the second part of our construction. Extend $AC$ and mark a point $F$ on it such that $CF=CD$. Join $D$ and $F$. Draw a line through $B$ perpendicular to $AD$ and mark its intersection with $AF$ as $G$. Draw a line through $G$ parallel to $AB$ and let it meet $DF$ at $H$. Join $H$ and $B$.



$\triangle GAB$ is a $30-60-90$ triangle. Therefore, $GA=2AB=2CB$.

Because $CD=CF$ by construction, $GF=AF-GA=(AB+CF)-2AB=CD-AB=CD-ED=CE$. Using this, we recast $\eqref{reqn}$ to

$GF \cdot BD=GA \cdot AB \tag{2} \label{rreqn}$

Our plan now is to prove that $GH=AB$. As $\triangle FGH \sim \triangle FAD$, we have

$\displaystyle \frac{GH}{GF}=\frac{AD}{AF} \implies GH \cdot AF=GF \cdot ( AB+BD)=GF\cdot AB+GF\cdot BD$

Using $\eqref{rreqn}$ now, we have,

$\displaystyle GH \cdot AF=GF\cdot AB+GA\cdot AB=AB\cdot (GF+GA)=AB \cdot AF \implies GH = AB$

This also shows that $BH$ is parallel to $AF$. Therefore,

$\displaystyle\frac{GH}{GF}=\frac{AD}{AF}=\frac{BD}{BH}  \tag{3} \label{feqn}$

because $\triangle FGH \sim \triangle FAD \sim \triangle HBD$.

We now construct a point $C'$ on $AB$ such that $CC'$ is perpendicular to $AB$. Note that $C'$ will also be the midpoint of $AB$. Now,

$DB\cdot DA=(DC'-C'B)(DC'+C'B)=DC'^2-C'B^2$

Now $CC'^2=CD^2-C'D^2=CB^2-C'B^2$ both by Pythagoras theorem on respective $\triangle CC'D$ and $\triangle CC'B$. Using this,

$DB \cdot DA=CD^2-CB^2=CF^2-CA^2=(CF-CA)(CF+CA)=GF\cdot AF$

Therefore,

$\displaystyle \frac{GF}{DB}=\frac{DA}{AF}$

(Alternately, we can construct a circle centered at $C$ with $CA$ as radius. Because both $D$ and $F$ are equidistant from $C$, tangents from $D$ and $F$ to the circle will be of the same length. In other words, both points have the same power w.r.t the circle. But it is apparent from the configuration that $\mathcal{P}(C,D)=DB\cdot DA$ and $\mathcal{P}(C,F)=FG \cdot FA$)

Using $\eqref{feqn}$, we finally have $\displaystyle \frac{GH}{GF}=\frac{GF}{BD}=\frac{BD}{BH}$

Because $ABHG$ is a parallelogram, $BH=AG=2GH$. Therefore,

$\displaystyle \left(\frac{GF}{GH}\right)^3=\frac{GF}{GH}\frac{GF}{GH}\frac{GF}{GH}=\frac{GF}{GH}\frac{BD}{GF}\frac{BH}{BD}=\frac{BH}{GH}=2$

Using the fact that $GF=CE$ and $GH=AB$, we finally have

$CE^3=2AB^3$ (or) $CE=\sqrt[3]{2}AB$


Until then
Yours Aye
Me

Thursday, May 20, 2021

Birthday Problem without Replacement

In one of the previous posts, we saw the generalization of Coupon collector's problem without replacement. Not long after, I thought about the same generalization for the Birthday problem.

Because a deck of cards is a better platform for dealing with problems without replacement, we use the same setup here. That is, we seek the expected number of cards we need to draw (without replacement) from a well shuffled pack of cards to get two cards of the same rank.

This problem is a lot easier. We have $n=13$ ranks, each with $j=4$ instances (suits). Using the Hypergeometric distribution, the probability that we need more than $k$ cards is

$\displaystyle \mathbb{P}(X>k)=\frac{\binom{13}{k}\cdot 4^k}{\binom{52}{k}}$

That is, of the $k$ cards we select from the $52$ cards, we can have at most one instance of each rank. Therefore, the $k$ cards should come from $k$ distinct ranks out of the $13$, and for each of the cards, we have $4$ choices of suit.

Now the expected value is obtained easily.

$\displaystyle \mathbb{E}(X)=\sum_{k=0}^n \mathbb{P}(X>k)=\sum_{k=0}^n \frac{\binom{n}{k}\cdot j^k}{\binom{nj}{k}}=1+\sum_{k=1}^nj^k\binom{n}{k}\binom{nj}{k}^{-1}$

Needless to say, this agrees with the classical expression (that is, as $j \to \infty$) given in Pg 417 (or Pg 433) of 'Analytic Combinatorics'. We now attempt to find asymptotics as $n\to\infty$.
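Before the asymptotics, here is a quick cross-check of the exact sum against a direct simulation of the deck (a sketch of mine, not from the post; the function names are made up).

```python
import math
import random

def expected_exact(n, j):
    """E[X] = 1 + sum_{k=1}^n j^k C(n,k)/C(nj,k): expected draws without
    replacement to get two coupons of the same type (n types, j of each)."""
    return 1 + sum(j ** k * math.comb(n, k) / math.comb(n * j, k)
                   for k in range(1, n + 1))

def expected_mc(n, j, trials=50_000, seed=2):
    """Shuffle the nj 'cards' and count draws until some type repeats."""
    rng = random.Random(seed)
    deck = [t for t in range(n) for _ in range(j)]
    total = 0
    for _ in range(trials):
        rng.shuffle(deck)
        seen = set()
        for draws, card in enumerate(deck, start=1):
            if card in seen:
                break
            seen.add(card)
        total += draws
    return total / trials

e_exact, e_mc = expected_exact(13, 4), expected_mc(13, 4)
print(e_exact, e_mc)  # deck of cards: both ≈ 5.70
```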

Let $\displaystyle a_k=j^k\binom{n}{k}\binom{nj}{k}^{-1}$ for $k\geq 1$

Then, using the beautiful idea I learned from Hirschhorn's page (for example, papers 176 and 197),

$\begin{align}\displaystyle a_k &= j^k\binom{n}{k}\binom{nj}{k}^{-1}\\ &= \frac{n}{nj}\frac{n-1}{nj-1}\cdots\frac{n-k+1}{nj-k+1}j^k \\ &= \frac{1-\frac{0}{n}}{1-\frac{0}{nj}}\frac{1-\frac{1}{n}}{1-\frac{1}{nj}}\cdots \frac{1-\frac{k-1}{n}}{1-\frac{k-1}{nj}} \\ &\approx \text{exp}\left\{-\frac{0}{n}-\frac{1}{n}-\cdots -\frac{k-1}{n}+\frac{0}{nj}+\frac{1}{nj}+\cdots +\frac{k-1}{nj}\right\} \\ &\approx \text{exp}\left\{-\frac{1-j^{-1}}{n}\frac{k^2}{2}\right\} \\ \end{align}$

This is nothing but Laplace's method widely discussed in literature and especially in my favorite book 'Analytic Combinatorics' (page 755).

Note that we have used the idea that $(1+x)^m=e^{m\log(1+x)}\approx e^{mx}$ for small $x$. Now,

$\displaystyle \mathbb{E}(X)=1+\sum_{k=1}^n a_k \approx 1+\int\limits_0^\infty \text{exp}\left\{-\frac{1-j^{-1}}{n}\frac{k^2}{2}\right\}\,dk=1+\sqrt{\frac{\pi}{2}}\sqrt{\frac{n}{1-j^{-1}}}$

where we've used the standard Normal integral. For large $j$, this clearly reduces to the classical asymptotic value.

In fact, for large $n$, the asymptotic expansion of the binomial coefficient is

$\begin{align}\displaystyle \binom{n}{k} & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{S_1(k-1)}{n}-\frac{S_2(k-1)}{2n^2}-\frac{S_3(k-1)}{3n^3}\cdots\right\} \\ & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{k(k-1)}{2n}-\frac{k^3}{6n^2}-\frac{k^4}{12n^3}\cdots\right\}\\ & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{k^2}{2n}-\frac{k^3}{6n^2}-\frac{k^4}{12n^3}\cdots\right\}\\ \end{align}$

where $S_r(m)$ is the sum of the $r$-th powers of the first $m$ natural numbers.

Using the second expression and simplifying a bit more, we have

$\displaystyle a_k \approx \text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\text{exp}\left\{-\frac{(1-j^{-1})(k-\frac{1}{2})^2}{2n}\right\}\left(1-\frac{1-j^{-2}}{6n^2}k^3-\frac{1-j^{-3}}{12n^3}k^4+\frac{1}{2}\frac{(1-j^{-2})^2}{36n^4}k^6+\cdots\right)$

If we now use the Normal approximation, we get

$\begin{align}\displaystyle \mathbb{E}(X) &\approx 1+\text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\left(\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}-\frac{1}{4}\frac{1-j^{-3}}{(1-j^{-1})^{5/2}}\sqrt{\frac{\pi}{2n}}+\frac{5}{24}\frac{(1-j^{-2})^2}{(1-j^{-1})^{7/2}}\sqrt{\frac{\pi}{2n}}\right) \\ &= 1+\text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\left(\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}-\frac{1}{24}\frac{1-4j^{-1}+j^{-2}}{(1-j^{-1})^2}\sqrt{\frac{1-j^{-1}}{2n/\pi}} \right) \\ &\approx 1+\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}+\frac{1}{12}\frac{1-j^{-1}+j^{-2}}{(1-j^{-1})^2}\sqrt{\frac{1-j^{-1}}{2n/\pi}} \\ \end{align}$

which I think is an $O(n^{-1})$ approximation. Hope you liked the discussion.

Clear["Global`*"];
n = 10000; j = 7;
acc = 50;
res = N[1, acc];
k = 1; nume = N[n, acc]; deno = N[n, acc];
Monitor[Do[
     res += N[nume/deno, acc];
     nume *= (n - k); deno *= (n - k/j);
     , {k, n}];, {k, res}]; // AbsoluteTiming
res
N[1 + Exp[(1 - j^-1)/(
    8 n)] (Sqrt[(n \[Pi]/2)/(1 - j^-1)] - 1/3 (1 + j^-1)/(1 - j^-1) - 
     1/24 (1 - 4 j^-1 + j^-2)/(1 - j^-1)^2 Sqrt[(1 - j^-1)/(
      2 n/\[Pi])]), acc]


Until then
Yours Aye
Me

Friday, May 14, 2021

Coupon collector's Problem without Replacement

In this post, we seek the expected number of coupons needed to complete the set where the number of coupons in each type is finite and we sample without replacement.

I recently got interested in this question when I saw a post on Reddit asking for the expected number of cards that have to be drawn without replacement from a deck of cards to get all four suits. This is essentially the coupon collector's problem with $4$ different coupon types and $13$ coupons available in each type.

Let's first discuss a simpler question. What is the expected number of cards to be drawn without replacement from a well shuffled deck to collect the first Ace? This is exactly like that of the Geometric distribution but without replacement.

We can employ symmetry to simplify this problem. The four Aces divide the remaining 48 cards into five sets, each containing $48/5=9.6$ cards in expectation. Thus, including the first Ace we collect, we need $10.6$ draws without replacement on average.

What we've discussed here is the Negative Hypergeometric distribution which deals with the expected number of draws without replacement needed to get $r$-th success. Analogous to the Geometric distribution, let's also define the Negative Hypergeometric distribution with $r=1$ as the Negative Geometric distribution.

If $X$ is a Negative Geometric random variable denoting the number of draws without replacement needed to get the first success, then based on our discussion above, we have

$\displaystyle\mathbb{E}(X)=\frac{N-K}{K+1}+1=\frac{N+1}{K+1}$

where $N$ is the population size and $K$ is the number of "successes" in the population.
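Here is a quick simulation sketch of this formula (mine, with a made-up function name), using the first-Ace example above:

```python
import random

def mean_draws_to_first_success(N, K, trials=100_000, seed=3):
    """Average number of draws without replacement until the first of the
    K successes among N items appears; compare with (N + 1) / (K + 1)."""
    rng = random.Random(seed)
    pop = [1] * K + [0] * (N - K)
    total = 0
    for _ in range(trials):
        rng.shuffle(pop)
        total += pop.index(1) + 1   # position of the first success
    return total / trials

# first Ace in a deck: (52 + 1) / (4 + 1) = 10.6
first_ace = mean_draws_to_first_success(52, 4)
print(first_ace)
```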

That is all we need to solve our original problem. Even though finding the expected number of draws without replacement to get all the suits has been solved many times on Stack Exchange (for example, here, here and here by Marko Riedel with generating functions), we are going to use the amazing Maximums-Minimums Identity approach used here.

Let $X_1$, $X_2$, $X_3$ and $X_4$ be the random number of draws without replacement needed to get the first Spades, Clubs, Hearts and Diamonds respectively. Then the random number $X=\text{max}(X_1,X_2,X_3,X_4)$ denotes the number of draws to get all the four suits.

Note that each $X_i$ is a Negative geometric variable and the minimum of any bunch is again Negative Geometric with the number of successes pooled together. Using the Max-Min Identity and the linearity of expectations, we have

$\begin{align}\displaystyle\mathbb{E}[X]&=\mathbb{E}[\text{max }X_i]\\ &=\sum_i \mathbb{E}[X_i] - \sum_{i<j}\mathbb{E}[\text{min}(X_i,X_j)]+\sum_{i<j<k}\mathbb{E}[\text{min}(X_i,X_j,X_k)]-\cdots\\ &= \binom{4}{1}\frac{52+1}{13+1}-\binom{4}{2}\frac{52+1}{26+1}+\binom{4}{3}\frac{52+1}{39+1}-\binom{4}{4}\frac{52+1}{52+1}\\ &= \frac{4829}{630} \approx 7.66508\end{align}$
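The number $4829/630 \approx 7.665$ is easy to confirm by simulation; here is a sketch of mine (function name made up):

```python
import random

def mean_draws_for_all_suits(trials=100_000, seed=4):
    """Average number of cards drawn from a shuffled 52-card deck
    until all four suits have appeared."""
    rng = random.Random(seed)
    deck = [s for s in range(4) for _ in range(13)]
    total = 0
    for _ in range(trials):
        rng.shuffle(deck)
        seen = set()
        for draws, suit in enumerate(deck, start=1):
            seen.add(suit)
            if len(seen) == 4:
                break
        total += draws
    return total / trials

all_suits = mean_draws_for_all_suits()
print(all_suits, 4829 / 630)  # both ≈ 7.665
```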

Though we have solved the question in the case where each type has an equal number of coupons, it should be easy to see that this approach generalizes readily.

For the case of $n$ different coupon types with $j$ coupons in each type, we have,

$\displaystyle \mathbb{E}(X)=\sum_{k=1}^n(-1)^{k-1}\binom{n}{k}\frac{nj+1}{kj+1}=(nj+1)\left[1-\binom{n+1/j}{n}^{-1}\right]$

which is the closed form solution of the finite version of the coupon collector's problem. We know that as $j\to \infty$, this reduces to the classical coupon collector problem. For the case $n\to \infty$, using the asymptotic bounds of the Binomial coefficient, we can write,

$\displaystyle \mathbb{E}(X) \approx nj \left[1-\binom{n+1/j}{n}^{-1} \right] \approx nj - \Gamma(1/j)n^{1-1/j}$

Even though I knew how the Max-Min Identity is useful in terms of the coupon collector problem with replacement, it took me almost a day of on-and-off thinking before I could convince myself that we can make it work for the without replacement case as well. And it was a pleasant surprise that we could arrive at a nice asymptotic expression for the expectation.


Until Then
Yours Aye
Me

Friday, April 30, 2021

A Probability problem on an Election result

A friend of mine recently posed the following problem to me: Given two contestants in an election $X$ and $Y$ (with equal popularity among the voters), what is the probability that the contestant leading the election after 80% of polling eventually loses the election?

I misunderstood the question in more ways than one and, not wanting to use paper and pencil, solved this problem with an answer of $(2/\pi)\tan^{-1}4$, which was wrong.

I enquired with him about the source of the problem, which also has a solution. But the solution there seems very convoluted and needs Wolfram Alpha to get a closed form. Seeing the solution, I realized that I had misunderstood the question and that my method is a lot easier.

Let $X_1,Y_1$ be the votes received by the respective contestants after 80% of polling, and $X_2,Y_2$ their respective votes in the remaining 20% of polling. For the sake of simplicity, let the total number of votes be $20n$ for a large $n$.

We know that, if $X\sim \text{Bin}(m,p)$, then for large $m$, $X$ is approximately distributed as $\mathcal{N}(mp,mpq)$. Therefore, for $p=q=1/2$,

$\displaystyle X_1,Y_1\sim \mathcal{N}\left(\frac{16n}{2}, \frac{16n}{4}\right)$ and $\displaystyle X_2,Y_2 \sim \mathcal{N}\left(\frac{4n}{2}, \frac{4n}{4}\right)$

Let $E$ be the event that the contestant trailing after 80% of polling eventually wins the election. Since either contestant may be the one trailing, by symmetry,

$\displaystyle \mathbb{P}(E)=2\cdot \mathbb{P}(X_1+X_2 \leq Y_1+Y_2 \text{ and }X_1 \geq Y_1)$

We can rewrite the same to get

$\mathbb{P}(E)=2\cdot\mathbb{P}(X_1-Y_1 \leq Y_2 - X_2 \text{ and }X_1-Y_1 \geq 0)$

We also know that if $U\sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $V\sim \mathcal{N}(\mu_2,\sigma_2^2)$, then $aU+bV\sim \mathcal{N}(a\mu_1+b\mu_2,a^2\sigma_1^2+b^2\sigma_2^2)$

Therefore, $X_1-Y_1 \sim \mathcal{N}\left(0,8n\right)=\sqrt{8n}Z_1$ and $X_2-Y_2\sim \mathcal{N}\left(0,2n\right)=\sqrt{2n}Z_2$

where $Z_1$ and $Z_2$ are standard Normal variables.

Therefore, using the sign symmetry $(Z_1,Z_2)\to(-Z_1,-Z_2)$ for the penultimate step below,

$\begin{align}\displaystyle \mathbb{P}(E)&=2\cdot\mathbb{P}(2\sqrt{2n}Z_1\leq \sqrt{2n}Z_2 \text{ and } 2\sqrt{2n}Z_1 \geq 0)\\ &=2\cdot\mathbb{P}(2Z_1\leq Z_2 \text{ and } Z_1 \geq 0) \\ &= 2\cdot\mathbb{P}(0 \leq Z_1 \leq Z_2/2) \\ &= \mathbb{P}\left(0 \leq \frac{Z_1}{Z_2} \leq \frac{1}{2}\right)\\ &= \mathbb{P}\left(0 \leq W \leq \frac{1}{2}\right)\\ \end{align}$

where $W$ is the ratio of two standard Normal distributions and hence a Cauchy random variable. As the CDF of a Cauchy variable has a closed form in terms of arctangent function, we finally have

$\displaystyle \mathbb{P}(E)=\frac{1}{\pi}\tan^{-1}\left(\frac{1}{2}\right)\approx 0.1475$

More generally, if $E$ denotes the event that the contestant trailing when a fraction $a$ of the votes remains to be counted eventually wins the election, then

$\displaystyle \mathbb{P}(E)=\frac{1}{\pi}\tan^{-1}\left(\sqrt{\frac{a}{1-a}}\right)=\frac{1}{\pi}\sin^{-1}\sqrt{a}$
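These probabilities are easy to check by simulating the normal approximation directly (a sketch of mine; the margins $d_1, d_2$ have variances proportional to $1-a$ and $a$, and the trailing contestant wins exactly when the final margin flips sign):

```python
import math
import random

def trailing_wins(a, trials=400_000, seed=5):
    """P(the contestant trailing with a fraction a of votes uncounted wins),
    under the normal approximation to the vote margins."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        d1 = rng.gauss(0, math.sqrt(1 - a))  # margin in the counted votes
        d2 = rng.gauss(0, math.sqrt(a))      # margin in the remaining votes
        if (d1 > 0) != (d1 + d2 > 0):        # lead reversed at the end
            flips += 1
    return flips / trials

a = 0.2
p_sim = trailing_wins(a)
print(p_sim, math.asin(math.sqrt(a)) / math.pi)  # both ≈ 0.1476
```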

Hope you enjoyed the discussion. See ya in the next post.


Until Then
Yours Aye
Me

Saturday, April 10, 2021

Expected Value in terms of CDF

It is well known that the expectation of a non-negative random variable $X$ can be written as

$\displaystyle \mathbb{E}[X] \overset{\underset{\mathrm{d}}{}}{=} \sum_{k=0}^\infty \mathbb{P}(X>k)  \overset{\underset{\mathrm{c}}{}}{=} \int\limits_0^\infty \mathbb{P}(X>x)\,dx$

for the discrete and continuous cases.

It's quite easy to prove this, at least in the discrete case. Interestingly, in the same vein, this can be extended to arbitrary functions. That is,

$\begin{align} \displaystyle \mathbb{E}[g(X)]&=\sum_{k=0}^\infty g(k)\mathbb{P}(X=k)\\ &=\sum_{k=0}^\infty \Delta g(k)\mathbb{P}(X>k)\\ \end{align}$

where $\Delta g(k)=g(k+1)-g(k)$ is the forward difference operator. We have also assumed, WLOG, that $g(0)=0$.
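A tiny numerical check of the discrete identity (an example of my own, with $X\sim\text{Bin}(10,0.3)$ and $g(k)=k^2$, so $g(0)=0$ holds):

```python
import math

n, p = 10, 0.3
pmf = [math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
g = lambda k: k * k   # g(0) = 0 as assumed

# E[g(X)] directly, and via forward differences times tail probabilities
lhs = sum(g(k) * pmf[k] for k in range(n + 1))
rhs = sum((g(k + 1) - g(k)) * sum(pmf[k + 1:]) for k in range(n + 1))
print(lhs, rhs)  # both equal E[X^2] = np(1-p) + (np)^2 = 11.1
```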

Comparing this with continuous case, we can make an 'educated guess' that

$\displaystyle \mathbb{E}[g(X)]=\int\limits_0^\infty \mathbb{P}(X>x)\,dg(x)$

We made a post about estimating a sum with probability, where we showed that the expected error in the approximation is given by

$\displaystyle \mathbb{E}(\delta)=\sum_{k=1}^\infty f(k)\frac{\binom{n-k}{m}}{\binom{n}{m}}$

Note that the term involving the ratio of binomial coefficients can be interpreted as the probability of the minimum-of-the-$n$-tuple being greater than $k$. Therefore,

$\displaystyle \mathbb{E}(\delta)=\sum_{k=1}^\infty f(k)\mathbb{P}(Y>k)$

where $Y$ denotes the smallest order statistic.

Comparing this with our expression for expectation, we see that the expected value of the (probabilistic) Right Riemann sum is

$\displaystyle \mathbb{E}[\text{Right Riemann sum}]  \overset{\underset{\mathrm{d}}{}}{=} \mathbb{E}\left[\sum_{j=Y}^n f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \mathbb{E}\left[ \int\limits_Y^1 f(x)\,dx \right]$

Without going into further calculations, I'm guessing that

(i) $\displaystyle \mathbb{E}[\text{Left Riemann sum}]  \overset{\underset{\mathrm{d}}{}}{=} \mathbb{E}\left[\sum_{j=0}^Z f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \mathbb{E}\left[ \int\limits_0^Z f(x)\,dx \right]$

(ii) $\displaystyle \mathbb{E}[\text{error in Trapezoidal sum}]  \overset{\underset{\mathrm{d}}{}}{=} \frac{1}{2}\mathbb{E}\left[\sum_{j=Y}^Z f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \frac{1}{2}\mathbb{E}\left[ \int\limits_Y^Z f(x)\,dx \right]$

where $Z$ denotes the largest order statistic.

Hope you enjoyed the discussion. See ya in the next post.


Until then
Yours Aye
Me

Monday, January 18, 2021

Probability on a standard deck of cards

I'm a great fan of Probability and Combinatorics. The question that we are going to solve in this post remains one of my most favorite questions in this area. I've been meaning to solve this one for so long, and I'm glad I finally did recently.

Consider a standard deck of 52 cards that is randomly shuffled. What is the probability that we do not have any King adjacent to any Queen after the shuffle?

Like I said, there were multiple instances where I used to think about this problem for long and then get distracted elsewhere. This time Possibly Wrong brought it back to my attention. There is already a closed form solution there, but I couldn't understand it. So I decided to solve it with generating functions.

Stripping the problem down of all labels, the equivalent problem is to consider the number of words that we can form from the alphabet $\mathcal{A}=\{a,b,c\}$ with no occurrence of the sub-word $ab$ or $ba$. The $a$'s correspond to the Kings in the deck, $b$'s to the Queens and $c$'s to the rest of the cards.

Two important points here: first, we have to specifically find the number of words with 4 $a$'s, 4 $b$'s and 44 $c$'s. Second, convince yourself that the two problems are equivalent.

Pattern avoidance problems are easily tackled with the idea presented in Page 60 of the 'amazing' Analytic Combinatorics.

Following the same, let $\mathcal{S}$ be the language of words with no occurrence of $ab$ or $ba$, $\mathcal{T}_{ab}$ the language of words that end with $ab$ but have no other occurrence of $ab$, and likewise $\mathcal{T}_{ba}$ the language of words that end with $ba$ but have no other occurrence of $ba$.

Appending a letter from $\mathcal{A}$ to a word of $\mathcal{S}$, we get a non-empty word in either $\mathcal{S}$, $\mathcal{T}_{ab}$ or $\mathcal{T}_{ba}$. Therefore,

$\mathcal{S}+\mathcal{T}_{ab}+\mathcal{T}_{ba}=\{\epsilon\}+\mathcal{S}\times\mathcal{A}$

Appending $ab$ to a word of $\mathcal{S}$, we either get a word of $\mathcal{T}_{ab}$, or a word of $\mathcal{T}_{ba}$ with $b$ appended. Therefore,

$\mathcal{S}\times ab=\mathcal{T}_{ab}+\mathcal{T}_{ba}b$

Similarly, we have, $\mathcal{S}\times ba=\mathcal{T}_{ba}+\mathcal{T}_{ab}a$

The three equations in terms of OGFs become,

$S+T_{ab}+T_{ba}=1+S\cdot(a+b+c)$
$S\cdot ab=T_{ab}+T_{ba}\cdot b$
$S\cdot ba=T_{ba}+T_{ab}\cdot a$

We have three equations in three unknowns. Solving for $S$, we get,

$\displaystyle S=\frac{1-ab}{(1-a)(1-b)-(1-ab)c}$

which should be regarded as the generating function in terms of variables $a$, $b$ and $c$. So the coefficient of $a^ib^jc^k$ gives the number of words that avoid both $ab$ and $ba$ using $i$ $a$'s, $j$ $b$'s and $k$ $c$'s.

The coefficient of $c^k$ in this generating function is elementary. We get,

$\displaystyle [c^k]S=\frac{(1-ab)^{k+1}}{(1-a)^{k+1}(1-b)^{k+1}}=\left(\frac{1-ab}{(1-a)(1-b)}\right)^{k+1}$

Something interesting happens here. We note that

$\displaystyle \frac{1-ab}{(1-a)(1-b)}=1+a+b+a^2+b^2+a^3+b^3+a^4+b^4+\cdots$

Therefore the coefficient of $a^ib^j$ in the expansion $[c^k]S$ above has the following interpretation: It is the number of ways that a bipartite number $(i,j)$ can be written as sum of $k+1$ bipartite numbers of the form $(u,0)$ and $(0,v)$ with $u,v\geq0$.

This can be achieved with simple combinatorics. Of the $k+1$ numbers, choose $m$ of them for $i$. Distribute $i$ among those $m$ numbers such that each number gets at least $1$. Then distribute $j$ among the remaining $k+1-m$ numbers.

Therefore,

$\displaystyle [a^ib^jc^k]S=\sum_{m=1}^i \binom{k+1}{m}\binom{i-1}{m-1}\binom{j+k-m}{j}$

There we have it!

But, wait a minute... the final answer suggests a very simple combinatorial way of arriving at it. Just lay down all $k$ $c$-cards. Now there will be a total of $k+1$ gaps around those cards. Choose $m$ among them and distribute the $i$ $a$-cards in those $m$ gaps such that each gap gets at least one card. Now distribute the $j$ $b$-cards in the remaining $k+1-m$ gaps.

Knowing this, I feel stupid to have gone through the generating function route to get the solution. Grrr...

Anyway, to get the desired probability we use $i=j=4$ and $k=44$,

$\displaystyle \mathbb{P}(\text{No King and Queen adjacent})=\frac{4!4!44!\sum_{m=1}^4 \binom{44+1}{m}\binom{4-1}{m-1}\binom{4+44-m}{4}}{52!}$

which matches with the answer given in the quoted page.
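As one more sanity check (my own sketch, not from the quoted page), we can compare the formula against a direct shuffle simulation:

```python
import math
import random

def prob_no_adjacent_exact(i=4, j=4, k=44):
    """P(no King adjacent to any Queen) for i Kings, j Queens, k other cards."""
    ways = sum(math.comb(k + 1, m) * math.comb(i - 1, m - 1) * math.comb(j + k - m, j)
               for m in range(1, i + 1))
    return ways * math.factorial(i) * math.factorial(j) * math.factorial(k) \
        / math.factorial(i + j + k)

def prob_no_adjacent_mc(trials=50_000, seed=6):
    """Shuffle the 52-card deck and check that no King touches a Queen."""
    rng = random.Random(seed)
    deck = ['K'] * 4 + ['Q'] * 4 + ['x'] * 44
    good = 0
    for _ in range(trials):
        rng.shuffle(deck)
        if not any({deck[t], deck[t + 1]} == {'K', 'Q'} for t in range(51)):
            good += 1
    return good / trials

p_exact, p_mc = prob_no_adjacent_exact(), prob_no_adjacent_mc()
print(p_exact, p_mc)
```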

One more small note before we conclude. Take any two Aces out of the deck. The probability of no King adjacent to any Queen in this pack of 50 cards is then given by using $i=j=4$ and $k=42$, which is $\approx 0.499087$, surprisingly close to $1/2$. Very nearly a fair bet!

I'm convinced that the author of the page left the answer in a specific form as a hint for someone attempting to solve the problem, but it didn't help me. Still, I'm glad that I was able to derive a simpler form of the solution, with some intuition on why the solution looks the way it does. Hope you enjoyed this discussion.

Clear["Global`*"];
SeriesCoefficient[1/(
  1 - a - b - c + (a b (a - 1 + b - 1))/(a b - 1)), {a, 0, 4}, {b, 0, 
   4}, {c, 0, 44}]/Multinomial[4, 4, 44]
1 - N[%]
1 - NSum[Binomial[44 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     4 + 44 - m, 4], {m, 4}]/Multinomial[4, 4, 44]
SeriesCoefficient[(
  1 - a b)/((1 - a) (1 - b) - (1 - a b) c), {a, 0, 4}, {b, 0, 12}, {c,
    0, 36}]/Multinomial[4, 12, 36]
1 - N[%]
1 - NSum[Binomial[36 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     12 + 36 - m, 12], {m, 4}]/Multinomial[4, 12, 36]
SeriesCoefficient[(
  1 - a b)/((1 - a) (1 - b) - (1 - a b) c), {a, 0, 4}, {b, 0, 16}, {c,
    0, 32}]/Multinomial[4, 16, 32]
1 - N[%]
1 - NSum[Binomial[32 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     16 + 32 - m, 16], {m, 4}]/Multinomial[4, 16, 32]


Until then
Yours Aye
Me