This post is all about taking a simple problem in probability and complicating it.
Consider the problem of finding the expected number of uniform variables between $0$ and $1$ needed to get a sum greater than $1$. This problem is well studied and surprisingly the answer turns out to be $e$, Euler's number.
The derivation goes something like this. Starting with $0$, we keep generating uniform random variables $X \sim U(0,1)$ and adding them until the sum exceeds $1$. Let $\mathbb{E}(x)$ be the expected number of random variables needed when we start from $x$.
It's easy to see that $\mathbb{E}(x)=0$ for $x>1$ and $\mathbb{E}(1)=1$, both by the problem definition.
Now it's easy to write the following equation.
$\mathbb{E}(x)=1+\displaystyle\int\limits_0^1\mathbb{E}(x+t)\,dt$
Now let $y=x+t$. The only thing to take care of is that when $t=1$, $y=1+x$. But we already know the expected value is $0$ for any argument exceeding $1$. Hence, we need only consider the integral with $1$ as the upper limit.
$\mathbb{E}(x)=1+\displaystyle\int\limits_x^1\mathbb{E}(y)\,dy$
For reasons that'll become apparent later in this post, let's define a family of functions $f_k(z)$ such that $f_k^{(k)}(z)=D^k f_k(z)=\mathbb{E}(z)$ and $f_k(1)=0$ for $k > 0$. Here the bracketed exponent denotes repeated differentiation and $D$ denotes the differentiation operator.
Using this definition, the equation above becomes,
$\begin{align}
f_1'(x)&=1+\displaystyle\int\limits_x^1\,df_1\\
&=1+f_1(1)-f_1(x)\\
&=1-f_1(x)
\end{align}$
Therefore, we end up with the differential equation $f'_1(x)+f_1(x)=1$.
Solving this with Wolfram Alpha, we get $f_1(x)=1+c_1e^{-x}$.
But $\mathbb{E}(x)=f_1'(x)=-c_1 e^{-x}$. Using the boundary condition, $\mathbb{E}(1)=1$, we get $\mathbb{E}(x)=e^{1-x}$ and hence $\mathbb{E}(0)=e$. $\blacksquare$
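As a sanity check (not part of the original derivation), here's a minimal Monte Carlo sketch of the base problem in plain Python; the function name, seed, and trial count are arbitrary choices:

```python
import random

def count_to_exceed_one():
    """Draw U(0,1) variables until the running sum exceeds 1; return how many were needed."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += random.random()
        count += 1
    return count

random.seed(1)
trials = 200_000
estimate = sum(count_to_exceed_one() for _ in range(trials)) / trials
print(estimate)  # should land near e ≈ 2.71828
```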
Now we make a slight modification. Instead of generating just one random number each time, what if we generate a bunch of $n$ uniform random numbers and choose the minimum of those as the addend? We again seek the expected number of additions. We know that the pdf of the minimum of $n$ uniform numbers is,
$pdf(z)=n(1-z)^{n-1}$, $0 \leq z \leq 1$
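This pdf is easy to spot-check empirically: its mean is $\int_0^1 z\,n(1-z)^{n-1}\,dz = 1/(n+1)$. A quick hedged sketch (the choice of $n$ and the trial count are arbitrary):

```python
import random

random.seed(2)
n, trials = 5, 100_000
# The minimum of n uniforms has pdf n(1-z)^(n-1) on [0, 1], whose mean is 1/(n+1).
mins = (min(random.random() for _ in range(n)) for _ in range(trials))
mean_min = sum(mins) / trials
print(mean_min)  # should land near 1/(n+1) = 1/6 ≈ 0.1667 for n = 5
```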
Therefore the integral equation we start with becomes,
$\mathbb{E}_{n,1}(x)=1+\displaystyle\int\limits_0^1\mathbb{E}_{n,1}(x+t)n(1-t)^{n-1}\,dt$
Note that $\mathbb{E}_{n,1}(1)=1$. Using $y=x+t$,
$\displaystyle\mathbb{E}_{n,1}(x)=1+n\int\limits_x^1\mathbb{E}_{n,1}(y)(1+x-y)^{n-1}\,dy$
Using the $f$ functions, we can transform the equation to,
$f_1'(x)=1-n f_1(x)+n(n-1)\int\limits_x^1f_1(y)(1+x-y)^{n-2}\,dy$
Now this step can be applied repeatedly until the exponent inside the integral becomes $0$. We'll finally end up with,
$\displaystyle \frac{f_n^{(n)}(x)}{n!}+\frac{f_n^{(n-1)}(x)}{(n-1)!}+\frac{f_n^{(n-2)}(x)}{(n-2)!}+\cdots+f_n(x)=\frac{1}{n!}$
Or using the differential operator,
$\displaystyle \left(\frac{D^n}{n!}+\frac{D^{n-1}}{(n-1)!}+\frac{D^{n-2}}{(n-2)!}+\cdots+1\right)f_n(x)=\frac{1}{n!}$
The characteristic equation of the differential equation is the partial sum of the exponential series. Let $P_{n,1}(z)$ denote this characteristic equation and let $\eta_1,\eta_2,\cdots,\eta_n$ be its roots. That is,
$P_{n,1}(z)=\displaystyle 1+z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots+\frac{z^n}{n!}$
Note that $P'_{n,1}(z)=P_{n-1,1}(z)$. Also, as the leading coefficient of $P_{n,1}$ is $1/n!$,
$P_{n,1}(z)=\displaystyle\frac{1}{n!}(z-\eta_1)(z-\eta_2)(z-\eta_3)\cdots(z-\eta_n)$
we have, $P'_{n,1}(\eta_i)=\displaystyle\frac{1}{n!}\prod_{j \ne i,\,1 \leq j \leq n}(\eta_i-\eta_j)=P_{n-1,1}(\eta_i)$
Back to the differential equation, $f_n(x)=\frac{1}{n!} + c_1 e^{\eta_1x}+ c_2 e^{\eta_2x}+ c_3 e^{\eta_3x}+\cdots+ c_n e^{\eta_nx}$
$f_n^{(k)}(x)=c_1 \eta_1^ke^{\eta_1x}+ c_2 \eta_2^ke^{\eta_2x}+ c_3\eta_3^k e^{\eta_3x}+\cdots+ c_n \eta_n^ke^{\eta_nx}$, $1\leq k\leq n$ (the constant $\frac{1}{n!}$ drops out on differentiation)
Before solving for the constants, by this time we should realize that the constant on the RHS of the starting integral equation doesn't matter, as it'll be taken care of by the boundary conditions. The remaining constants can be solved for using the conditions we enforced on the $f$ functions:
$f_n^{(k)}(1) = 0$ for $k<n$ and $f_n^{(n)}(1)=\mathbb{E}(1)=1$.
The post is getting unusually long, so I'll cut the details short. Solving for the constants using determinants, we'll have,
$\mathbb{E}_{n,1}(x)=\displaystyle\frac{1}{n!}\sum_{i=1}^n \frac{e^{-\eta_i(1-x)}\eta_i^{n-1}}{P_{n-1,1}(\eta_i)}$ and hence $\mathbb{E}_{n,1}(0)=\displaystyle\frac{1}{n!}\sum_{i=1}^n \frac{e^{-\eta_i}\eta_i^{n-1}}{P_{n-1,1}(\eta_i)}$ $\blacksquare$
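To double-check this result, one can simulate the min-of-$n$ process directly; for $n=2$ the formula works out to $e(\cos 1 + \sin 1)\approx 3.756$. A rough Monte Carlo sketch, not from the original post (seed and trial count are arbitrary):

```python
import random

def steps_min_of_n(n):
    """Add the minimum of n uniforms until the running sum exceeds 1; count the additions."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += min(random.random() for _ in range(n))
        count += 1
    return count

random.seed(3)
trials = 200_000
est = sum(steps_min_of_n(2) for _ in range(trials)) / trials
print(est)  # should land near e(cos 1 + sin 1) ≈ 3.756
```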
Now for the final complication: what if we generate a bunch of $n$ uniform random numbers, sort them, and choose the $k$th smallest as the addend? Let $\mathbb{E}_{n,k}(x)$ be the expected value. The only new ingredient is the characteristic equation, denoted $P_{n,k}(z)$.
$P_{n,k}(z)=\displaystyle \frac{z^n}{n!}-(-1)^k \sum_{j=0}^{n-k}\binom{n-1-j}{k-1}\frac{z^j}{j!}$
That is, $P_{n,k}(D)f_n(x)=\frac{1}{n!}$
Note that we still have $P'_{n,k}(z)=P_{n-1,k}(z)$ and $P'_{n,k}(\eta_i)=\displaystyle\frac{1}{n!}\prod_{j \ne i,\,1 \leq j \leq n}(\eta_i-\eta_j)=P_{n-1,k}(\eta_i)$
And the same analysis shows,
$\mathbb{E}_{n,k}(x)=\displaystyle\frac{1}{n!}\sum_{i=1}^n \frac{e^{-\eta_i(1-x)}\eta_i^{n-1}}{P_{n-1,k}(\eta_i)}$ and hence $\mathbb{E}_{n,k}(0)=\displaystyle\frac{1}{n!}\sum_{i=1}^n \frac{e^{-\eta_i}\eta_i^{n-1}}{P_{n-1,k}(\eta_i)}$
One of the beautiful things to note here is that the form of the final formula gives an interpretation via a contour integral. That is,
$\mathbb{E}_{n,k}(x)=\displaystyle\frac{1}{n!}\sum_{i=1}^n \frac{e^{-\eta_i(1-x)}\eta_i^{n-1}}{P_{n-1,k}(\eta_i)}=\frac{1}{2\pi i n!}\oint_{\Gamma}\frac{e^{-z(1-x)}z^{n-1}}{P_{n,k}(z)}\,dz$
where the contour $\Gamma$ is large enough to encompass all the zeroes of $P_{n,k}(z)$.
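The contour representation can be verified numerically: the trapezoidal rule on a circle $|z|=R$ enclosing all the roots converges very rapidly for integrands like this. A plain-Python sketch (the radius $R=6$ and the point count are ad-hoc choices; `P` is coded straight from the definition of $P_{n,k}$ above):

```python
import cmath
import math

def P(n, k, z):
    """P_{n,k}(z), coded directly from the definition in this post."""
    s = sum(math.comb(n - 1 - j, k - 1) * z**j / math.factorial(j)
            for j in range(n - k + 1))
    return z**n / math.factorial(n) - (-1)**k * s

def contour_E(n, k, x=0.0, R=6.0, N=4000):
    """Trapezoidal approximation of (1/(2*pi*i*n!)) * the contour integral over |z| = R."""
    total = 0j
    for m in range(N):
        z = R * cmath.exp(2j * math.pi * m / N)
        # dz = i*z dtheta; the factor i cancels against the 1/(2*pi*i) prefactor
        total += cmath.exp(-z * (1 - x)) * z**(n - 1) / P(n, k, z) * z
    return (total / (N * math.factorial(n))).real

print(contour_E(1, 1))  # ≈ e ≈ 2.71828
print(contour_E(2, 2))  # ≈ cosh(√2) ≈ 2.1782
```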
Using these I was able to find,
$\mathbb{E}_{1,1}(0)=e$, $\mathbb{E}_{2,1}(0)=e(\cos 1 + \sin 1)$ and $\mathbb{E}_{2,2}(0)=\cosh \sqrt 2$.
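These closed forms drop out of the residue formula once the roots are written down explicitly: $P_{2,1}(z)=1+z+z^2/2$ has roots $-1\pm i$, and $P_{2,2}(z)=z^2/2-1$ has roots $\pm\sqrt2$. A quick check (helper name `E2` is just for illustration):

```python
import cmath
import math

def E2(roots, P1):
    """Residue formula for n = 2: (1/2!) * sum of e^{-eta} * eta / P_{1,k}(eta)."""
    return sum(cmath.exp(-r) * r / P1(r) for r in roots).real / math.factorial(2)

# P_{2,1}(z) = 1 + z + z^2/2, roots -1 ± i, with P_{1,1}(z) = 1 + z
e21 = E2([-1 + 1j, -1 - 1j], lambda z: 1 + z)
# P_{2,2}(z) = z^2/2 - 1, roots ±sqrt(2), with P_{1,2}(z) = z
e22 = E2([cmath.sqrt(2), -cmath.sqrt(2)], lambda z: z)
print(e21)  # e(cos 1 + sin 1) ≈ 3.7561
print(e22)  # cosh(sqrt 2) ≈ 2.1782
```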
The other characteristic equations do not appear to have closed-form solutions. But the next simplest cases are $\mathbb{E}_{n,n}(0)$.
$\mathbb{E}_{n,n}(0)={}_0F_{n-1}(;1/n,2/n,\cdots,(n-1)/n;n!/n^n)$
where ${}_pF_q$ is the generalized hypergeometric function. (For $n=2$ this reduces to ${}_0F_1(;1/2;1/2)=\cosh\sqrt2$, matching the value above.)
Solving the characteristic equations is the real trouble, and I tried without success to find simpler ways to do it. However, I was able to establish,
$\mathbb{E}_{n,k}(0)\approx \displaystyle \frac{n+1}{k}\left(1+\frac{1}{2}\frac{k+1}{n+2}\right)$
This approximation is very good even when $n$ and $k$ are far apart.
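Comparing the approximation against the exact values quoted earlier gives a feel for its accuracy (a small sketch; `approx_E` is just an illustrative name for the formula above):

```python
import math

def approx_E(n, k):
    """The closing approximation from the post: (n+1)/k * (1 + (k+1)/(2(n+2)))."""
    return (n + 1) / k * (1 + 0.5 * (k + 1) / (n + 2))

# Exact closed forms quoted earlier in the post, for comparison.
exact = {(1, 1): math.e,
         (2, 1): math.e * (math.cos(1) + math.sin(1)),
         (2, 2): math.cosh(math.sqrt(2))}
for (n, k), v in exact.items():
    print(n, k, approx_E(n, k), v)
```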
I tried to cover the derivations too, but the post is getting long. Maybe I'll post them separately and break this into shorter ones.
Until then,
Yours Aye
Me