Thursday, May 20, 2021

Birthday Problem without Replacement

In one of the previous posts, we saw a generalization of the Coupon collector's problem without replacement. Not long after, I thought about the same generalization for the Birthday problem.

Because a deck of cards is a better platform for dealing with problems without replacement, we use the same setup here. That is, we seek the expected number of cards we need to draw (without replacement) from a well-shuffled pack of cards to get two cards of the same rank.

This problem is a lot easier. We have $n=13$ ranks, each with $j=4$ instances (suits). Using the Hypergeometric distribution, the probability that we need more than $k$ cards is

$\displaystyle \mathbb{P}(X>k)=\frac{\binom{13}{k}\cdot 4^k}{\binom{52}{k}}$

That is, of the $k$ cards we select from the $52$ cards, we can have at most one instance of each rank. Therefore, the $k$ cards must come from $k$ distinct ranks out of the $13$, and for each such card we have $4$ choices of suit.

Now the expected value is obtained easily.

$\displaystyle \mathbb{E}(X)=\sum_{k=0}^n \mathbb{P}(X>k)=\sum_{k=0}^n \frac{\binom{n}{k}\cdot j^k}{\binom{nj}{k}}=1+\sum_{k=1}^nj^k\binom{n}{k}\binom{nj}{k}^{-1}$
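
Before moving on, here is a quick numeric check of the formula in Mathematica (a minimal sketch; the variable names are mine):

With[{n = 13, j = 4},
 expect = Sum[Binomial[n, k] j^k/Binomial[n j, k], {k, 0, n}];
 {expect, N[expect]}]
(* for the standard deck this comes out to roughly 5.7 cards *)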

Needless to say, the expectation formula agrees with the classical expression (that is, as $j \to \infty$) given on Pg 417 (or Pg 433) of 'Analytic Combinatorics'. We now attempt to find asymptotics when $n\to\infty$.

Let $\displaystyle a_k=j^k\binom{n}{k}\binom{nj}{k}^{-1}$ for $k\geq 1$

Then, using the beautiful idea I learned from Hirschhorn's page (for example, papers 176 and 197),

$\begin{align}\displaystyle a_k &= j^k\binom{n}{k}\binom{nj}{k}^{-1}\\ &= \frac{n}{nj}\frac{n-1}{nj-1}\cdots\frac{n-k+1}{nj-k+1}j^k \\ &= \frac{1-\frac{0}{n}}{1-\frac{0}{nj}}\frac{1-\frac{1}{n}}{1-\frac{1}{nj}}\cdots \frac{1-\frac{k-1}{n}}{1-\frac{k-1}{nj}} \\ &\approx \text{exp}\left\{-\frac{0}{n}-\frac{1}{n}-\cdots -\frac{k-1}{n}+\frac{0}{nj}+\frac{1}{nj}+\cdots +\frac{k-1}{nj}\right\} \\ &\approx \text{exp}\left\{-\frac{1-j^{-1}}{n}\frac{k^2}{2}\right\} \\ \end{align}$

This is nothing but Laplace's method, widely discussed in the literature and especially in my favorite book 'Analytic Combinatorics' (page 755).

Note that we have used the idea that $(1+x)^m=e^{m\log(1+x)}\approx e^{mx}$ for small $x$. Now,

$\displaystyle \mathbb{E}(X)=1+\sum_{k=1}^n a_k \approx 1+\int\limits_0^\infty \text{exp}\left\{-\frac{1-j^{-1}}{n}\frac{k^2}{2}\right\}\,dk=1+\sqrt{\frac{\pi}{2}}\sqrt{\frac{n}{1-j^{-1}}}$

where we've used the standard Normal integral. For large $j$, this clearly reduces to the classical asymptotic value.

In fact, for large $n$, the asymptotic expansion of the binomial coefficient is

$\begin{align}\displaystyle \binom{n}{k} & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{S_1(k-1)}{n}-\frac{S_2(k-1)}{2n^2}-\frac{S_3(k-1)}{3n^3}\cdots\right\} \\ & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{k(k-1)}{2n}-\frac{k^3}{6n^2}-\frac{k^4}{12n^3}\cdots\right\}\\ & \approx \frac{n^k}{k!}\text{exp}\left\{-\frac{k^2}{2n}-\frac{k^3}{6n^2}-\frac{k^4}{12n^3}\cdots\right\}\\ \end{align}$

where $S_r(m)$ is the sum of the $r$-th powers of the first $m$ natural numbers.

Using the second expression and simplifying a bit more, we have

$\displaystyle a_k \approx \text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\text{exp}\left\{-\frac{(1-j^{-1})(k-\frac{1}{2})^2}{2n}\right\}\left(1-\frac{1-j^{-2}}{6n^2}k^3-\frac{1-j^{-3}}{12n^3}k^4+\frac{1}{2}\frac{(1-j^{-2})^2}{36n^4}k^6+\cdots\right)$

If we now use the Normal approximation, we get

$\begin{align}\displaystyle \mathbb{E}(X) &\approx 1+\text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\left(\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}-\frac{1}{4}\frac{1-j^{-3}}{(1-j^{-1})^{5/2}}\sqrt{\frac{\pi}{2n}}+\frac{5}{24}\frac{(1-j^{-2})^2}{(1-j^{-1})^{7/2}}\sqrt{\frac{\pi}{2n}}\right) \\ &= 1+\text{exp}\left\{\frac{1-j^{-1}}{8n}\right\}\left(\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}-\frac{1}{24}\frac{1-4j^{-1}+j^{-2}}{(1-j^{-1})^2}\sqrt{\frac{1-j^{-1}}{2n/\pi}} \right) \\ &\approx 1+\sqrt{\frac{n\pi/2}{1-j^{-1}}}-\frac{1}{3}\frac{j+1}{j-1}+\frac{1}{12}\frac{1-j^{-1}+j^{-2}}{(1-j^{-1})^2}\sqrt{\frac{1-j^{-1}}{2n/\pi}} \\ \end{align}$

which I think is an $O(n^{-1})$ approximation. Hope you liked the discussion.

Clear["Global`*"];
n = 10000; j = 7;
acc = 50;
res = N[1, acc];
k = 1; nume = N[n, acc]; deno = N[n, acc];
Monitor[Do[
     res += N[nume/deno, acc];
     nume *= (n - k); deno *= (n - k/j);
     , {k, n}];, {k, res}]; // AbsoluteTiming
res
N[1 + Exp[(1 - j^-1)/(
    8 n)] (Sqrt[(n \[Pi]/2)/(1 - j^-1)] - 1/3 (1 + j^-1)/(1 - j^-1) - 
     1/24 (1 - 4 j^-1 + j^-2)/(1 - j^-1)^2 Sqrt[(1 - j^-1)/(
      2 n/\[Pi])]), acc]


Until then
Yours Aye
Me

Friday, May 14, 2021

Coupon collector's Problem without Replacement

In this post, we seek the expected number of coupons needed to complete the set where the number of coupons in each type is finite and we sample without replacement.

I recently got interested in this question when I saw a post on Reddit asking for the expected number of cards that have to be drawn without replacement from a deck of cards to get all four suits. This is essentially the coupon collector's problem with 4 different coupon types and 13 coupons available in each type.

Let's first discuss a simpler question. What is the expected number of cards to be drawn without replacement from a well shuffled deck to collect the first Ace? This is exactly like that of the Geometric distribution but without replacement.

We can employ symmetry to simplify this problem. The four Aces divide the remaining 48 cards into five sets, each containing 48/5 = 9.6 cards on average. Thus, including the first Ace we collect, we need 10.6 draws without replacement on average.

What we've discussed here is the Negative Hypergeometric distribution, which models the number of draws without replacement needed to get the $r$-th success. Analogous to the Geometric distribution, let's also define the Negative Hypergeometric distribution with $r=1$ as the Negative Geometric distribution.

If $X$ is a Negative Geometric random variable denoting the number of draws without replacement needed to get the first success, then based on our discussion above, we have

$\displaystyle\mathbb{E}(X)=\frac{N-K}{K+1}+1=\frac{N+1}{K+1}$

where $N$ is the population size and $K$ is the number of "successes" in the population.
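
As a quick check, here is a small simulation sketch of the first-Ace case, with cards $1$ through $4$ standing in for the Aces (the trial count is an arbitrary choice):

(* average position of the first Ace over 10^5 shuffles; should be close to (52 + 1)/(4 + 1) = 10.6 *)
Mean[Table[First@FirstPosition[RandomSample[Range[52]], _?(# <= 4 &)], {10^5}]] // N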

That is all we need to solve our original problem. Even though finding the expected number of draws without replacement to get all suits has been solved many times on Stack Exchange (for example, here, here and here by Marko Riedel with generating functions), we are going to use the amazing Maximums-Minimums Identity approach used here.

Let $X_1$, $X_2$, $X_3$ and $X_4$ be the random number of draws without replacement needed to get the first Spades, Clubs, Hearts and Diamonds respectively. Then the random number $X=\text{max}(X_1,X_2,X_3,X_4)$ denotes the number of draws to get all the four suits.

Note that each $X_i$ is a Negative Geometric variable, and the minimum of any subset of them is again Negative Geometric with the corresponding successes pooled together. Using the Max-Min Identity and the linearity of expectation, we have

$\begin{align}\displaystyle\mathbb{E}[X]&=\mathbb{E}[\text{max }X_i]\\ &=\sum_i \mathbb{E}[X_i] - \sum_{i<j}\mathbb{E}[\text{min}(X_i,X_j)]+\sum_{i<j<k}\mathbb{E}[\text{min}(X_i,X_j,X_k)]-\cdots\\ &= \binom{4}{1}\frac{52+1}{13+1}-\binom{4}{2}\frac{52+1}{26+1}+\binom{4}{3}\frac{52+1}{39+1}-\binom{4}{4}\frac{52+1}{52+1}\\ &= \frac{4829}{630} \approx 7.66508\end{align}$
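
The computation above is a one-liner to verify in Mathematica (a sketch of the sum, with $N=52$ and $13k$ pooled successes for each $k$-subset of suits):

Sum[(-1)^(k - 1) Binomial[4, k] (52 + 1)/(13 k + 1), {k, 1, 4}]
(* 4829/630 *)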

Though we have solved the question for the case where each type has an equal number of coupons, it should be easy to see that this approach generalizes.

For the case of $n$ different coupon types with $j$ coupons in each type, we have,

$\displaystyle \mathbb{E}(X)=\sum_{k=1}^n(-1)^{k-1}\binom{n}{k}\frac{nj+1}{kj+1}=(nj+1)\left[1-\binom{n+1/j}{n}^{-1}\right]$

which is the closed form solution of the finite version of the coupon collector's problem. We know that as $j\to \infty$, this reduces to the classical coupon collector's problem. For the case $n\to \infty$, using the asymptotic bounds of the binomial coefficient, we can write,

$\displaystyle \mathbb{E}(X) \approx nj \left[1-\binom{n+1/j}{n}^{-1} \right] \approx nj - \Gamma(1/j)n^{1-1/j}$
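
Both forms, along with the asymptotic, are easy to compare numerically (a sketch; $n$ and $j$ are arbitrary choices, and the first two values should agree exactly):

With[{n = 100, j = 4},
 {N[Sum[(-1)^(k - 1) Binomial[n, k] (n j + 1)/(k j + 1), {k, 1, n}]],
  N[(n j + 1) (1 - 1/Binomial[n + 1/j, n])],
  N[n j - Gamma[1/j] n^(1 - 1/j)]}]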

Even though I knew how the Max-Min Identity is useful in terms of the coupon collector problem with replacement, it took me almost a day of on-and-off thinking before I could convince myself that we can make it work for the without replacement case as well. And it was a pleasant surprise that we could arrive at a nice asymptotic expression for the expectation.


Until Then
Yours Aye
Me

Friday, April 30, 2021

A Probability problem on an Election result

A friend of mine recently posed the following problem to me: Given two contestants in an election $X$ and $Y$ (with equal popularity among the voters), what is the probability that the contestant leading the election after 80% of polling eventually loses the election?

I misunderstood the question in more ways than one and, not wanting to use paper and pencil, solved this problem with an answer of $(2/\pi)\tan^{-1}4$, which was wrong.

I asked him for the source of the problem, which also comes with a solution. But the solution there seems very convoluted and needs Wolfram Alpha to get a closed form. Seeing the solution, I realized that I had misunderstood the question and that my method is a lot easier.

Let $X_1,Y_1$ be the votes received by the contestants after 80% of polling, and $X_2,Y_2$ be the votes they receive in the remaining 20% of polling. For the sake of simplicity, let the total number of votes be $20n$ for a large $n$.

We know that if $X\sim \text{Bin}(m,p)$, then for large $m$, $X$ is approximately distributed as $\mathcal{N}(mp,mpq)$ with $q=1-p$. Therefore, for $p=q=1/2$,

$\displaystyle X_1,Y_1\sim \mathcal{N}\left(\frac{16n}{2}, \frac{16n}{4}\right)$ and $\displaystyle X_2,Y_2 \sim \mathcal{N}\left(\frac{4n}{2}, \frac{4n}{4}\right)$

Let $E$ be the event that the contestant trailing after 80% of polling eventually wins the election. By the symmetry between the two contestants,

$\displaystyle \mathbb{P}(E)=2\cdot \mathbb{P}(X_1+X_2 \leq Y_1+Y_2 \text{ and }X_1 \geq Y_1)$

We can rewrite the same to get

$\mathbb{P}(E)=2\cdot\mathbb{P}(X_1-Y_1 \leq Y_2 - X_2 \text{ and }X_1-Y_1 \geq 0)$

We also know that if $U\sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $V\sim \mathcal{N}(\mu_2,\sigma_2^2)$, then $aU+bV\sim \mathcal{N}(a\mu_1+b\mu_2,a^2\sigma_1^2+b^2\sigma_2^2)$

Therefore, $X_1-Y_1 \sim \mathcal{N}\left(0,8n\right)\overset{d}{=}\sqrt{8n}Z_1$ and $X_2-Y_2\sim \mathcal{N}\left(0,2n\right)\overset{d}{=}\sqrt{2n}Z_2$

where $Z_1$ and $Z_2$ are standard Normal variables.

Therefore,

$\begin{align}\displaystyle \mathbb{P}(E)&=2\cdot\mathbb{P}(2\sqrt{2n}Z_1\leq \sqrt{2n}Z_2 \text{ and } 2\sqrt{2n}Z_1 \geq 0)\\ &=2\cdot\mathbb{P}(2Z_1\leq Z_2 \text{ and } Z_1 \geq 0) \\ &= 2\cdot\mathbb{P}(Z_1 \leq Z_2/2 \text{ and }Z_1 \geq 0)\\ &=2\cdot\mathbb{P}(0 \leq Z_1 \leq Z_2/2) \\ &= \mathbb{P}\left(0 \leq \frac{Z_1}{Z_2} \leq \frac{1}{2}\right)\\ &= \mathbb{P}\left(0 \leq W \leq \frac{1}{2}\right)\\ \end{align}$

where $W$ is the ratio of two standard Normal variables and hence a Cauchy random variable. The factor of $2$ is absorbed in the second-to-last step: by the symmetry $(Z_1,Z_2)\mapsto(-Z_1,-Z_2)$, the event $0 \leq Z_1/Z_2 \leq 1/2$ splits into the two equally likely pieces $\{Z_2>0,\,0\leq Z_1\leq Z_2/2\}$ and its mirror image. As the CDF of a Cauchy variable has a closed form in terms of the arctangent function, we finally have

$\displaystyle \mathbb{P}(E)=\frac{1}{\pi}\tan^{-1}\left(\frac{1}{2}\right)\approx 0.1476$

More generally, if $E$ denotes the event that the contestant trailing when a fraction $a$ of the votes remains to be counted eventually wins the election, then

$\displaystyle \mathbb{P}(E)=\frac{1}{\pi}\tan^{-1}\left(\sqrt{\frac{a}{1-a}}\right)=\frac{1}{\pi}\sin^{-1}\sqrt{a}$
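
A quick Monte Carlo sketch agrees with this arcsine law (the vote total and the trial count are arbitrary choices; ties are counted as no flip):

a = 1/5; total = 100000; counted = Round[(1 - a) total]; trials = 10000;
flips = Table[
   votes = RandomChoice[{-1, 1}, total];
   Boole[Total[votes[[;; counted]]] Total[votes] < 0], {trials}];
{N[Mean[flips]], N[ArcSin[Sqrt[a]]/Pi]}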

Hope you enjoyed the discussion. See ya in the next post.


Until Then
Yours Aye
Me

Saturday, April 10, 2021

Expected Value in terms of CDF

It is well known that the expectation of a non-negative random variable $X$ can be written as

$\displaystyle \mathbb{E}[X] \overset{\underset{\mathrm{d}}{}}{=} \sum_{k=0}^\infty \mathbb{P}(X>k)  \overset{\underset{\mathrm{c}}{}}{=} \int\limits_0^\infty \mathbb{P}(X>x)\,dx$

for the discrete and continuous cases.

It's quite easy to prove this, at least in the discrete case. Interestingly, in the same vein, the identity can be extended to arbitrary functions. That is,

$\begin{align} \displaystyle \mathbb{E}[g(X)]&=\sum_{k=0}^\infty g(k)\mathbb{P}(X=k)\\ &=\sum_{k=0}^\infty \Delta g(k)\mathbb{P}(X>k)\\ \end{align}$

where $\Delta g(k)=g(k+1)-g(k)$ is the forward difference operator, and we have assumed, WLOG, that $g(0)=0$.
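
A tiny numeric check of the discrete identity, with the (hypothetical) choice $g(k)=k^2$ and $X$ uniform on $\{0,\dots,5\}$:

(* both sides should give E[X^2] = 55/6 *)
With[{vals = Range[0, 5]},
 {Mean[vals^2],
  Sum[((k + 1)^2 - k^2) Count[vals, v_ /; v > k]/Length[vals], {k, 0, 5}]}]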

Comparing the discrete identity with the continuous case, we can make an 'educated guess' that

$\displaystyle \mathbb{E}[g(X)]=\int\limits_0^\infty \mathbb{P}(X>x)\,dg(x)$

We made a post about estimating a sum with probability where we showed the expected error in the approximation is given by

$\displaystyle \mathbb{E}(\delta)=\sum_{k=1}^\infty f(k)\frac{\binom{n-k}{m}}{\binom{n}{m}}$

Note that the term involving the ratio of binomial coefficients can be interpreted as the probability that the minimum of the $m$ sampled points exceeds $k$. Therefore,

$\displaystyle \mathbb{E}(\delta)=\sum_{k=1}^\infty f(k)\mathbb{P}(Y>k)$

where $Y$ denotes the smallest order statistic.

Comparing this with our expression for expectation, we see that the expected value of the (probabilistic) Right Riemann sum is

$\displaystyle \mathbb{E}[\text{Right Riemann sum}]  \overset{\underset{\mathrm{d}}{}}{=} \mathbb{E}\left[\sum_{j=Y}^n f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \mathbb{E}\left[ \int\limits_Y^1 f(x)\,dx \right]$

Without going into further calculations, I'm guessing that

(i) $\displaystyle \mathbb{E}[\text{Left Riemann sum}]  \overset{\underset{\mathrm{d}}{}}{=} \mathbb{E}\left[\sum_{j=0}^Z f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \mathbb{E}\left[ \int\limits_0^Z f(x)\,dx \right]$

(ii) $\displaystyle \mathbb{E}[\text{error in Trapezoidal sum}]  \overset{\underset{\mathrm{d}}{}}{=} \frac{1}{2}\mathbb{E}\left[\sum_{j=Y}^Z f(j)\right]  \overset{\underset{\mathrm{c}}{}}{=} \frac{1}{2}\mathbb{E}\left[ \int\limits_Y^Z f(x)\,dx \right]$

where $Z$ denotes the largest order statistic.

Hope you enjoyed the discussion. See ya in the next post.


Until then
Yours Aye
Me

Monday, January 18, 2021

Probability on a standard deck of cards

I'm a great fan of Probability and Combinatorics. The question that we are going to solve in this post remains one of my most favorite questions in this area. I've been meaning to solve it for so long, and I'm glad I finally did recently.

Consider a standard deck of 52 cards that is randomly shuffled. What is the probability that we do not have any King adjacent to any Queen after the shuffle?

Like I said, there were multiple instances where I would think about this problem for long and then get distracted elsewhere. This time, Possibly Wrong brought it back to my attention. There is already a closed form solution there, but I couldn't understand it. So I decided to solve it with generating functions.

Stripping the problem of all labels, the equivalent problem is to count the words we can form from the alphabet $\mathcal{A}=\{a,b,c\}$ with no occurrence of the sub-word $ab$ or $ba$. The $a$'s correspond to the Kings in the deck, the $b$'s to the Queens, and the $c$'s to the rest of the cards.

Two important points here: First, we have to specifically find the number of words with 4 $a$'s, 4 $b$'s and 44 $c$'s. Second, convince yourself that the two problems are equivalent.

Pattern avoidance problems are easily tackled with the idea presented on Page 60 of the 'amazing' Analytic Combinatorics.

Following the same, let $\mathcal{S}$ be the language of words with no occurrence of $ab$ or $ba$, let $\mathcal{T}_{ab}$ be the words whose only occurrence of either pattern is the $ab$ they end with, and likewise let $\mathcal{T}_{ba}$ be the words whose only occurrence of either pattern is the $ba$ they end with.

Appending a letter from $\mathcal{A}$ to a word in $\mathcal{S}$, we get a non-empty word in exactly one of $\mathcal{S}$, $\mathcal{T}_{ab}$ or $\mathcal{T}_{ba}$, and every such word arises this way. Therefore,

$\mathcal{S}+\mathcal{T}_{ab}+\mathcal{T}_{ba}=\{\epsilon\}+\mathcal{S}\times\mathcal{A}$

Appending $ab$ to a word in $\mathcal{S}$, we either get a word in $\mathcal{T}_{ab}$, or a word in $\mathcal{T}_{ba}$ with a $b$ appended (the latter when the original word ends in $b$). Therefore,

$\mathcal{S}\times ab=\mathcal{T}_{ab}+\mathcal{T}_{ba}b$

Similarly, we have, $\mathcal{S}\times ba=\mathcal{T}_{ba}+\mathcal{T}_{ab}a$

The three equations in terms of OGFs become,

$S+T_{ab}+T_{ba}=1+S\cdot(a+b+c)$
$S\cdot ab=T_{ab}+T_{ba}\cdot b$
$S\cdot ba=T_{ba}+T_{ab}\cdot a$

We have three equations in three unknowns. Solving for $S$, we get,

$\displaystyle S=\frac{1-ab}{(1-a)(1-b)-(1-ab)c}$

which should be regarded as the generating function in terms of variables $a$, $b$ and $c$. So the coefficient of $a^ib^jc^k$ gives the number of words that avoid both $ab$ and $ba$ using $i$ $a$'s, $j$ $b$'s and $k$ $c$'s.
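
As a sanity check, the three OGF equations can be handed to Mathematica directly (a sketch; s, tab and tba are my names for the unknown generating functions):

sol = First@Solve[{s + tab + tba == 1 + s (a + b + c), 
     s a b == tab + tba b, s b a == tba + tab a}, {s, tab, tba}];
Simplify[(s /. sol) == (1 - a b)/((1 - a) (1 - b) - (1 - a b) c)]
(* True *)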

The coefficient of $c^k$ in this generating function is elementary. We get,

$\displaystyle [c^k]S=\frac{(1-ab)^{k+1}}{(1-a)^{k+1}(1-b)^{k+1}}=\left(\frac{1-ab}{(1-a)(1-b)}\right)^{k+1}$

Something interesting happens here. We note that

$\displaystyle \frac{1-ab}{(1-a)(1-b)}=1+a+b+a^2+b^2+a^3+b^3+a^4+b^4+\cdots$

Therefore the coefficient of $a^ib^j$ in the expansion $[c^k]S$ above has the following interpretation: It is the number of ways that a bipartite number $(i,j)$ can be written as sum of $k+1$ bipartite numbers of the form $(u,0)$ and $(0,v)$ with $u,v\geq0$.

This can be achieved with simple combinatorics. Of the $k+1$ numbers, choose $m$ numbers for $i$. Distribute $i$ among those $m$ numbers such that each number gets at least $1$. Distribute $j$ among the remaining $k+1-m$ numbers.

Therefore,

$\displaystyle [a^ib^jc^k]S=\sum_{m=1}^i \binom{k+1}{m}\binom{i-1}{m-1}\binom{j+k-m}{j}$

There we have it!

But wait a minute... the final answer points to a very simple combinatorial way of arriving at it directly. Just lay down all $k$ of the $c$-cards. There are now a total of $k+1$ gaps between (and around) those cards. Choose $m$ among them and distribute the $i$ $a$-cards in those $m$ gaps such that each gap gets at least one card. Now distribute the $j$ $b$-cards in the remaining $k+1-m$ gaps.

Knowing this, I feel stupid for having gone through the generating function route to get the solution. Grrr...

Anyway, to get the desired probability we use $i=j=4$ and $k=44$,

$\displaystyle \mathbb{P}(\text{No King and Queen adjacent})=\frac{4!4!44!\sum_{m=1}^4 \binom{44+1}{m}\binom{4-1}{m-1}\binom{4+44-m}{4}}{52!}$

which matches the answer given on the quoted page.

One more small note before we conclude. Take any two Aces out of the deck. The probability of no King adjacent to any Queen in this pack of 50 cards is given by using $i=j=4$ and $k=42$, which is $\approx 0.499087$, surprisingly close to $1/2$. Very nearly a fair bet!

I'm convinced that the author of the page left the answer in a specific form as a hint for someone attempting to solve the problem, but it didn't help me. Still, I'm glad that I was able to derive a simpler form of the solution with some intuition on why the solution looks the way it does. Hope you enjoyed this discussion.

Clear["Global`*"];
SeriesCoefficient[1/(
  1 - a - b - c + (a b (a - 1 + b - 1))/(a b - 1)), {a, 0, 4}, {b, 0, 
   4}, {c, 0, 44}]/Multinomial[4, 4, 44]
1 - N[%]
1 - NSum[Binomial[44 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     4 + 44 - m, 4], {m, 4}]/Multinomial[4, 4, 44]
SeriesCoefficient[(
  1 - a b)/((1 - a) (1 - b) - (1 - a b) c), {a, 0, 4}, {b, 0, 12}, {c,
    0, 36}]/Multinomial[4, 12, 36]
1 - N[%]
1 - NSum[Binomial[36 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     12 + 36 - m, 12], {m, 4}]/Multinomial[4, 12, 36]
SeriesCoefficient[(
  1 - a b)/((1 - a) (1 - b) - (1 - a b) c), {a, 0, 4}, {b, 0, 16}, {c,
    0, 32}]/Multinomial[4, 16, 32]
1 - N[%]
1 - NSum[Binomial[32 + 1, m] Binomial[4 - 1, m - 1] Binomial[
     16 + 32 - m, 16], {m, 4}]/Multinomial[4, 16, 32]


Until then
Yours Aye
Me

Saturday, January 16, 2021

Probability on a Contingency Table

Contingency tables are quite common in understanding classification problems, like those of an ML model or a new drug tested against a disease. Given that we are just recovering from a pandemic, let's stick to the case of a Machine Learning model. In the context of ML models, it is called the Confusion matrix, and we'll use both terms interchangeably in this post.

A 2x2 contingency table usually has two columns for the binary classification (Win vs. Lose, Apple vs. Orange, White vs. Black, etc.) and two rows for the model's prediction. Let's consider the classification as a 'Hypothesis' and the model's prediction as an 'Evidence' supporting it.

Here is how our contingency table would look.

Table 1

        H            ⌐H
E       n(H∩E)       n(⌐H∩E)
⌐E      n(H∩⌐E)      n(⌐H∩⌐E)

where $n(A)$ denotes the number of elements in set $A$.

We can normalize this table by dividing each of the four entries by the total thereby creating a new table.

Table 2

        H                              ⌐H
E       $\mathbb{P}(H\cap E)$          $\mathbb{P}(\neg H\cap E)$
⌐E      $\mathbb{P}(H \cap \neg E)$    $\mathbb{P}(\neg H \cap \neg E)$

where we can view each entry as the probability of a classification falling in that bracket and $\mathbb{P}(A)$ denotes the probability of event $A$. Note that the sum of all the entries of Table 2 is 1.

The Wiki page on Confusion matrix gives a huge list of metrics that can be derived out of this table. In this post, we visit a couple of probability problems created from them.

Three of the important metrics that I learned in the context of ML from these matrices are

Precision, $\displaystyle p=\frac{\mathbb{P}(H\cap E)}{\mathbb{P}(H\cap E)+\mathbb{P}(\neg H\cap E)}=\frac{\mathbb{P}(H\cap E)}{\mathbb{P}(E)}=\mathbb{P}(H|E)$

Recall, $\displaystyle r=\frac{\mathbb{P}(H\cap E)}{\mathbb{P}(H\cap E)+\mathbb{P}(H\cap \neg E)}=\frac{\mathbb{P}(H\cap E)}{\mathbb{P}(H)}=\mathbb{P}(E|H)$

and Accuracy, $a=\mathbb{P}(H \cap E)+\mathbb{P}(\neg H \cap \neg E)$

Suppose we want to create a random (normalized) confusion matrix. One way to do this would be to create four random variables, all between 0 and 1, that also sum to 1. We can use the Dirichlet distribution with four parameters to achieve this.

But there may be instances where we want to create a confusion matrix with a given precision, recall and accuracy. There are four entries in the table. Three given metrics, plus the fact that the entries should add up to 1, would seem to suggest that these completely define the table.

But not all such values can create a valid table. For example, it's impossible to create a valid table with a precision of 66.7%, recall of 90.0% and accuracy of 60%. So our first question is: given that precision, recall and accuracy are all uniformly distributed random variables, what is the probability that we end up with a valid table?

To produce a valid table, the three variables need to satisfy the condition

$\displaystyle a \geq \frac{pr}{1-(1-p)(1-r)}$

To see this, write $x=\mathbb{P}(H\cap E)$. Given $p$ and $r$, the off-diagonal entries are $y=x/p-x$ and $z=x/r-x$, so that $a=1-y-z=1-x(1/p+1/r-2)$; requiring the fourth entry $w=1-x-y-z$ to be non-negative bounds $x$ above by $pr/(p+r-pr)$, which gives the stated condition.

Let $T$ be the event of getting a valid table. Using the above, we have

$\displaystyle \mathbb{P}(T)=\int\limits_0^1\int\limits_0^1\int\limits_0^1\left[ a \geq \frac{pr}{1-(1-p)(1-r)} \right]\,dp\,dr\,da$

For a moment, assume that $a$ is given. Then we first solve

$\displaystyle F(a)=\int\limits_0^1\int\limits_0^1\left[a \geq \frac{pr}{1-(1-p)(1-r)}\right]\,dp\,dr$

We can interpret the condition inside the Iverson bracket as a region bounded by a curve in the $r$-$p$ plane. The equation of the curve is given by

$\displaystyle r=\frac{ap}{(1+a)p-a}$

Plotting the region for $a=1/4$, we get the following graph.

[Graph: the region in the $r$-$p$ plane between the curve and the axes, for $a=1/4$]

The region to be integrated lies between the curve and the two axes. We can divide this region along the $p=r$ line. This line intersects the graph at $\left(\frac{2a}{1+a},\frac{2a}{1+a}\right)$. Therefore,

$\displaystyle F(a)=\frac{4a^2}{(1+a)^2}+2\int\limits_{\frac{2a}{1+a}}^1\frac{a p}{(1+a)p-a}\,dp$

Solving the second integral with Wolfram Alpha, we get,

$\displaystyle F(a)=\frac{4a^2}{(1+a)^2}+2\frac{a(1-a)}{(1+a)^2}-2\frac{a^2\log{a}}{(1+a)^2}=\frac{2a}{1+a}-\frac{2a^2\log{a}}{(1+a)^2}$

Plugging this back in our original equation and integrating, we see that,

$\displaystyle \mathbb{P}(T)=\int\limits_0^1\left(\frac{2a}{1+a}-\frac{2a^2\log{a}}{(1+a)^2}\right)\,da=4-\frac{\pi^2}{3}\approx 0.710132$
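
Numerically, the integral checks out:

(* both should print 0.710132... *)
NIntegrate[2 a/(1 + a) - 2 a^2 Log[a]/(1 + a)^2, {a, 0, 1}]
N[4 - Pi^2/3]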

Thus we see that just about 29% of the tables will not be valid. Something that truly surprised me here is the fact that $\pi$ makes an appearance: there are no circles (not even anything close to one) in this problem!!

The second problem we consider is quite similar. Now assume that we create tables (like Table 2) such that the entries are uniformly distributed and sum to 1. If we want the precision and recall of our random tables to be greater than some thresholds, what would be the expected accuracy of the table?

For clarity, let $\mathbb{P}(H \cap E)=X$, $\mathbb{P}(\neg H \cap E)=Y$, $\mathbb{P}(H \cap \neg E)=Z$ and $\mathbb{P}(\neg H \cap \neg E)=W$, then $(X,Y,Z,W)\sim \text{Dir}(1,1,1,1)$

$\displaystyle \mathbb{E}(1-Y-Z|\mathbb{P}(E|H)\geq r,\mathbb{P}(H|E)\geq p)=\frac{1}{V}\int\limits_Q 1-y-z \,dx\,dy\,dz$

where $Q$ is the region such that

$Q=\{(x,y,z):(1-p)x\geq py \land (1-r)x\geq rz \land x+y+z \leq 1 \land x,y,z\geq 0\}$

and $V$ is the volume enclosed by $Q$.

Evaluating this integral is not so easy. The region of integration depends on the values of $p$ and $r$, and it kind of ends in a mess of equations. But with some luck and a lot of Mathematica, we can see

$\displaystyle \mathbb{E}(\mathbb{P}(H \cap E)+\mathbb{P}(\neg H \cap \neg E)\text{  }|\text{  }\mathbb{P}(E|H)\geq r,\mathbb{P}(H|E)\geq p)=\frac{2(p+r)^2+p^3+r^3-pr(p^2+r^2)}{4(p+r)(1-(1-p)(1-r))}$

I have no way of making sense of that expression but, hey, we have an expectation on probabilities!

Hope you enjoyed this discussion.

Clear["Global`*"];
p = 200/1000; r = 250/1000;
ImpReg[a_, p_, r_] := 
  ImplicitRegion[
   y + z <= 1 - a && (1 - p) x >= p y && (1 - r) x >= r z && 0 <= x &&
     0 <= y && 0 <= z && x + y + z <= 1, {x, y, z}];
ImpRegap[a_, p_] := 
  ImplicitRegion[
   y + z <= 1 - a && (1 - p) x >= p y && 0 <= x && 0 <= y && 0 <= z &&
     x + y + z <= 1, {x, y, z}];
ImpRegpr[p_, r_] := 
  ImplicitRegion[(1 - p) x >= p y && (1 - r) x >= r z && 0 <= x && 
    0 <= y && 0 <= z && x + y + z <= 1, {x, y, z}];
ImpRegra[r_, a_] := 
  ImplicitRegion[
   y + z <= 1 - a && (1 - r) x >= r z && 0 <= x && 0 <= y && 0 <= z &&
     x + y + z <= 1, {x, y, z}];
ExpecAcc[p_, r_] := (-2 p^2 - p^3 - 4 p r + p^3 r - 2 r^2 - r^3 + 
   p r^3)/(4 (p + r) (-p - r + p r));
ExpecAcc[p, r]
N[%]
lim = 1000000;
cnt = 0;
val = 0;
Do[
  x = -Log[RandomReal[]]; y = -Log[RandomReal[]];
  z = -Log[RandomReal[]]; w = -Log[RandomReal[]];
  t = w + x + y + z;
  x /= t; w /= t; y /= t; z /= t;
  If[And[x/(x + y) >= p, x/(x + z) >= r],
   cnt += 1; val += 1 - y - z;
   ];
  , {i, lim}];
N[val/cnt]


Until then
Yours Aye
Me

Thursday, January 7, 2021

Isochrone on a Spherical surface

The Cycloid in all its grandeur scooped all the glory for itself by way of being both the brachistochrone curve and the tautochrone curve.

But, seemingly out of some dark magic, the semi-cubical parabola makes its way as the (vertical) isochrone curve, a curve on which a bead sliding without friction covers equal vertical distances in equal intervals of time.

In my last post, we found the differential equation of a tautochrone curve on a spherical surface and used it to find the curve. In this post, we kind of continue the same discussion.

Our aim in this post is to find a curve on the surface of a sphere such that a bead sliding (without friction) under the influence of gravity covers equal polar angles at equal intervals of time. In other words, we are trying to find the (polar) Isochrone curve on the spherical surface.

For reasons that will be apparent later, we modify our problem setup a little. Let's assume that the bead starts at the north pole and slowly slides (without friction) along the $\phi=0$ plane from $\theta=0$ to $\theta=\theta_0$, where it seamlessly enters the curve.

Let $\omega=d\theta/dt$ be the constant polar velocity as the bead enters the isochrone curve. Also, note that, because of the way the bead travels before entering the curve, the azimuthal velocity will be zero at the entry point.


On a unit sphere, the squared speed of the bead is

$\displaystyle \left(\frac{ds}{dt}\right)^2 =\left(\frac{d\theta}{dt}\right)^2+\sin^2\theta\left(\frac{d\phi}{dt}\right)^2=\left(\frac{d\theta}{dt}\right)^2\left(1+\sin^2\theta\left(\frac{d\phi}{d\theta}\right)^2\right)$

Let the plane tangent to the sphere at the south pole be the 'base' from which we measure gravitational potential energy. Then, using the law of conservation of energy at $\theta=\theta_0$ (just after the bead enters the isochrone) and at any point beyond,

$\displaystyle \omega^2\left(1+\sin^2\theta\left(\frac{d\phi}{d\theta}\right)^2\right)+2g(1+\cos\theta)=\omega^2+2g(1+\cos\theta_0)$

Simplifying this,

$\displaystyle \frac{d\phi}{d\theta}=\frac{\sqrt{2g}}{\omega}\frac{\sqrt{\cos\theta_0-\cos\theta}}{\sin\theta}$

It is easy to express $\omega$ in terms of $\theta_0$. Using the law of conservation of energy at $\theta=0$ and $\theta=\theta_0$ (just before the bead enters the isochrone), we see that

$\omega=\sqrt{2g(1-\cos\theta_0)}$

Using this in the expression above, we finally have,

$\displaystyle \frac{d\phi}{d\theta}=\frac{1}{\sin\theta}\sqrt{\frac{\cos\theta_0-\cos\theta}{1-\cos\theta_0}}$

(WARNING: Now is the time where I give you a fair warning to keep your mind in a secured place because it is about to be blown.)

Looking at the above differential equation, it is clear that it is the same equation we found in our previous post for the tautochrone problem. The same curve solves both problems, just as the cycloid does in plane geometry!!!

This says that if you place the bead at any point on our curve, the time it takes to reach the south pole is the same, and hence the curve is a tautochrone. But if you place the bead at the north pole and let it slide itself into the curve, then it covers equal polar angles in equal intervals of time, and hence is a (polar) isochrone.

Truly, some dark magic stuff going on. I would never have expected such a weird coincidence in the spherical case. I truly enjoyed this. Hope you did too. See you in the next post.
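
For the curious, here is a minimal numeric sketch of the curve on the unit sphere (the entry angle $\theta_0$ and the cutoff near the south pole are arbitrary choices; $\phi$ winds without bound as $\theta \to \pi$):

Clear["Global`*"];
theta0 = Pi/6; eps = 10^-3;
sol = First@NDSolve[{phi'[t] == 
      Sqrt[(Cos[theta0] - Cos[t])/(1 - Cos[theta0])]/Sin[t], 
     phi[theta0] == 0}, phi, {t, theta0, Pi - eps}];
ParametricPlot3D[{Sin[t] Cos[phi[t]], Sin[t] Sin[phi[t]], Cos[t]} /. sol,
 {t, theta0, Pi - eps}]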


Until Then
Yours Aye
Me