Tuesday, April 24, 2018

Probability on a Chess Match


Consider two friends $A$ and $B$ playing a series of bullet chess games. Because the player who lost the most recent game always wants to continue, they come up with an idea: they'll play the first game and then toss a fair coin. If the coin comes up heads, they'll play another game; otherwise they end the match. They repeat this procedure until the coin comes up tails.

The question we are interested in here is the probability of either player winning the match, given the probability $\alpha$ of $A$ winning a game, the probability $\gamma$ of $B$ winning a game, and the remaining probability $\beta=1-\alpha-\gamma$ of a game ending in a draw. WLOG assume $\alpha<\gamma$.

All the equations referred to from here on are from A list of Integrals and Identities.

Let $A_{n,k}$ denote the event that, in a match of $n$ games, $A$ scores exactly $k$ more wins than $B$ (ignoring the coin tosses). Then,

$\displaystyle\mathbb{P}(A_{n,k})=\sum_{\substack{a+b+c=n\\a-c=k}}\binom{n}{a,b,c}\alpha^a\beta^b\gamma^c$

where $a$ denotes the number of games won by $A$, $c$ the number of games won by $B$ and $b$ the number of games that ended in a draw.
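For concreteness, here's a short Python sketch of this sum (the function name and looping strategy are my own); it enumerates $c$, the number of games won by $B$, since $a$ and $b$ are then forced.

```python
from math import comb

def p_match_score(n, k, alpha, gamma):
    """P(A_{n,k}): probability that after n games A has exactly
    k more wins than B."""
    beta = 1 - alpha - gamma
    total = 0.0
    for c in range(n + 1):  # games won by B
        a = c + k           # games won by A
        b = n - a - c       # drawn games
        if a < 0 or b < 0:
            continue
        # multinomial(n; a, b, c) = C(n, a) * C(n - a, b)
        total += comb(n, a) * comb(n - a, b) * alpha**a * beta**b * gamma**c
    return total
```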

First, let's tackle an easy question: finding the probability of the match ending in a draw.

$\begin{align}
\mathbb{P}(\text{Draw})&=\displaystyle\sum_{n=1}^\infty\mathbb{P}(A_{n,0}\mid n\text{ games played})\cdot\mathbb{P}(n\text{ games played})
\end{align}$

The match lasts exactly $n$ games precisely when the first $n-1$ tosses come up heads and the $n$th comes up tails, so the probability of the match lasting $n$ games is $2^{-n}$. Using (1), we have,

$\begin{align}
\mathbb{P}(\text{Draw})&=\displaystyle\frac{\cot\phi}{\sqrt{\alpha\gamma}}-1
\end{align}$,   where $\phi=\cos^{-1}\displaystyle\left(\frac{2\sqrt{\alpha\gamma}}{1+\alpha+\gamma}\right)$
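As a sanity check, we can compare this closed form with the truncated series directly, reusing p_match_score from above (the sample parameters here are my own choice).

```python
from math import acos, cos, sin, sqrt

alpha, gamma = 0.2, 0.3   # sample values for the check

phi = acos(2 * sqrt(alpha * gamma) / (1 + alpha + gamma))
closed = (cos(phi) / sin(phi)) / sqrt(alpha * gamma) - 1

# Direct series: sum of P(A_{n,0}) * 2^{-n}; the 2^{-n} factor makes
# truncation at n = 60 more than enough
series = sum(p_match_score(n, 0, alpha, gamma) * 0.5**n for n in range(1, 61))

print(closed, series)   # both ~0.41069 for these parameters
```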


Similarly, we now look at the probability of $A$ not losing the match. This time we have,

$\begin{align}
\mathbb{P}(A \text{ not losing})&=\displaystyle\sum_{n=1}^\infty\sum_{k=0}^\infty\mathbb{P}(A_{n,k}\mid n\text{ games played})\cdot\mathbb{P}(n\text{ games played})
\end{align}$

Using (6), this reduces to,

$\begin{align}
\mathbb{P}(A \text{ not losing})&=\displaystyle\frac{\cot\phi}{\sqrt{\alpha\gamma}-\alpha(\sec\phi-\tan\phi)}-1
\end{align}$
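The same numerical check works here, now also summing over the lead $k\ge0$ (again a sketch with my own sample parameters).

```python
from math import acos, cos, sin, sqrt, tan

alpha, gamma = 0.2, 0.3

phi = acos(2 * sqrt(alpha * gamma) / (1 + alpha + gamma))
closed = (cos(phi) / sin(phi)) / (sqrt(alpha * gamma)
                                  - alpha * (1 / cos(phi) - tan(phi))) - 1

# Direct series: sum of P(A_{n,k}) * 2^{-n} over n >= 1 and 0 <= k <= n
series = sum(p_match_score(n, k, alpha, gamma) * 0.5**n
             for n in range(1, 61) for k in range(n + 1))

print(closed, series)   # both ~0.6348 for these parameters
```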

In fact, instead of using a fair coin, they may decide to use a biased coin so that they continue playing with probability $q$ and end the match with probability $p=1-q$. The match then lasts $n$ games with probability $(1-q)q^{n-1}$, and the same approach gives,

$\begin{align}
\displaystyle\frac{q}{1-q}\mathbb{P}(\text{Draw})&=\frac{\frac{1}{2}\cot\phi}{\sqrt{(q\alpha)(q\gamma)}}-1\\
\end{align}$

$\displaystyle\frac{q}{1-q}\mathbb{P}(A \text{ wins the match})=\displaystyle\frac{\frac{1}{2}\cot\phi}{(q\gamma)(\sec\phi+\tan\phi)-\sqrt{(q\alpha)(q\gamma)}}$

$\displaystyle\frac{q}{1-q}\mathbb{P}(B \text{ wins the match})=\displaystyle\frac{\frac{1}{2}\cot\phi}{(q\alpha)(\sec\phi+\tan\phi)-\sqrt{(q\alpha)(q\gamma)}}$

where $\displaystyle\phi=\cos^{-1}\left(\frac{2\sqrt{(q\alpha)(q\gamma)}}{(1-q)+q\alpha+q\gamma}\right)$
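Here is a sketch that evaluates these closed forms (solved for the three match probabilities) and compares them against a straightforward Monte Carlo simulation of the match; the function names and sample parameters are my own.

```python
import random
from math import acos, cos, sin, sqrt, tan

def match_probabilities(alpha, gamma, q):
    """The closed forms above, solved for the three match probabilities."""
    phi = acos(2 * sqrt(q * alpha * q * gamma) / ((1 - q) + q * alpha + q * gamma))
    half_cot = 0.5 * cos(phi) / sin(phi)
    s_plus_t = 1 / cos(phi) + tan(phi)   # sec(phi) + tan(phi)
    root = sqrt(q * alpha * q * gamma)
    scale = (1 - q) / q
    p_a = scale * half_cot / (q * gamma * s_plus_t - root)
    p_b = scale * half_cot / (q * alpha * s_plus_t - root)
    p_draw = scale * (half_cot / root - 1)
    return p_a, p_b, p_draw

def simulate(alpha, gamma, q, trials=200_000):
    """Monte Carlo version of the match for comparison."""
    wins_a = wins_b = 0
    for _ in range(trials):
        score = 0
        while True:
            u = random.random()
            if u < alpha:
                score += 1          # A wins this game
            elif u < alpha + gamma:
                score -= 1          # B wins this game
            if random.random() >= q:
                break               # tails: the match ends
        wins_a += score > 0
        wins_b += score < 0
    return wins_a / trials, wins_b / trials

print(match_probabilities(0.2, 0.3, 0.7))   # ~(0.236, 0.417, 0.348)
print(simulate(0.2, 0.3, 0.7))              # the first two values should agree
```

For these parameters the three closed-form values sum to $1$, which is a quick consistency check in itself.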

These expressions can be written algebraically, without the trigonometric functions, but they are clumsy and I prefer this form. Also, there are a couple of nice details hidden in these expressions. First, with the right choice of $q$, the weaker player is able to decrease his opponent's chance of winning the match, which was surprising to me.
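To see this numerically, here is a small sweep over $q$ reusing match_probabilities from the sketch above; for these sample parameters, $B$'s chance of winning the match rises steadily with $q$, so the weaker player $A$ is better off with a small continuation probability.

```python
# Reusing match_probabilities from the sketch above
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    p_a, p_b, _ = match_probabilities(0.2, 0.3, q)
    print(f"q = {q:.1f}:  P(A wins) = {p_a:.3f},  P(B wins) = {p_b:.3f}")
```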

Second, in the interesting limiting case $q\to1$, the LHSs of the above three expressions can be seen as the expected number of games after which the match is at a drawn stage, player $A$ is in the lead, and player $B$ is in the lead, respectively.

$\displaystyle\lim_{q\to1}\frac{q}{1-q}\mathbb{P}(\text{Draw})=\frac{1}{|\gamma-\alpha|}-1$,

$\displaystyle\lim_{q\to1}\frac{q}{1-q}\mathbb{P}(A \text{ wins the match})=\frac{\alpha}{\max(0,\gamma-\alpha)^2}$ and

$\displaystyle\lim_{q\to1}\frac{q}{1-q}\mathbb{P}(B \text{ wins the match})=\frac{\gamma}{\max(0,\alpha-\gamma)^2}$
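To see why the first limit is an expected count, note that $\frac{q}{1-q}\mathbb{P}(\text{Draw})=\sum_{n\ge1}\mathbb{P}(A_{n,0})\,q^n$, which tends to $\sum_{n\ge1}\mathbb{P}(A_{n,0})$ as $q\to1$. Here is a partial-sum check of that first limit, reusing p_match_score from above (the truncation point is my own choice):

```python
alpha, gamma = 0.2, 0.3

# Expected number of games after which the score is level
expected_level = sum(p_match_score(n, 0, alpha, gamma) for n in range(1, 601))

print(expected_level, 1 / abs(gamma - alpha) - 1)   # both close to 9.0
```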

ASIDE: An interesting discussion of a related problem is given in Joint probability of geometric random variables. However, there the distributions are independent. Our problem can be stated similarly using the result that if,

$\vec{\mathbf{X}}\sim \text{Mult}(N, \vec{\mathbf{p}})$ where $N \sim \text{NBin}(m,s)$, then $X_i \sim \displaystyle\text{NBin}\left(m,\frac{s}{s+(1-s)p_i}\right)$

But the $X_i$'s are not independent (they would be had $N$ been a Poisson variable).
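Here's a quick NumPy check of that marginal claim (a sketch; NumPy's negative binomial counts failures before the $m$-th success, and the parameter values are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 3, 0.4
p1 = 0.2   # the first component of p

# N ~ NBin(m, s), then X ~ Mult(N, p); a single multinomial
# coordinate given N is Binomial(N, p1)
N = rng.negative_binomial(m, s, size=100_000)
X1 = rng.binomial(N, p1)

# Claimed marginal: X1 ~ NBin(m, s / (s + (1 - s) * p1))
Y1 = rng.negative_binomial(m, s / (s + (1 - s) * p1), size=100_000)

print(X1.mean(), Y1.mean())   # both ~0.90
print(X1.var(), Y1.var())     # both ~1.17
```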

Even though we started with the assumption $\alpha<\gamma$, fortunately the final results hold for all values of $\alpha$ and $\gamma$. The symmetry of the formulae is really enchanting to me, and I'm greatly satisfied that I've solved this. Hope you enjoyed it.

Until then
Yours Aye
Me
