Monday, July 1, 2024

Tautochronous tanks

Torricelli's law states that the efflux velocity $v$ of a liquid through a discharge hole at the bottom of a tank filled to a height $h$ is $v=\sqrt{2gh}$.

Assuming the shape of the tank to be a right prism (with its axis along the $y$-axis), we can use the conservation of mass (or volume in this context) to get

$A \,dy=-a \cdot \sqrt{2gy}\,dt$

where $A$ is the cross sectional area of the prism and $a$ is the area of discharge. Note that the negative sign indicates that $y$ (the height of the water column) decreases as time $t$ increases.

Rearranging and integrating, we have

$\displaystyle \int_0^T \,dt=-\frac{A}{a\sqrt{2g}}\int_h^0 \frac{1}{\sqrt{y}}\,dy \implies T=\frac{A}{a}\sqrt{\frac{2h}{g}}$

where $T$ is the total time taken to drain the tank, which is proportional to the square root of the height of the liquid column.
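Just to double-check this numerically, here's a quick Euler-step simulation of the draining ODE (my own sketch, not part of the derivation; the values of $A$, $a$, $g$ and $h$ are arbitrary):

```python
import math

# Simulate Torricelli draining of a prism tank with a crude forward Euler
# step and compare against the closed form T = (A/a) * sqrt(2h/g).
A, a, g, h = 1.0, 0.01, 9.81, 1.0

def drain_time(A, a, g, h, dt=1e-4):
    """Integrate dy/dt = -(a/A) * sqrt(2*g*y) until the tank is empty."""
    y, t = h, 0.0
    while y > 0:
        y -= (a / A) * math.sqrt(2 * g * y) * dt
        t += dt
    return t

T_formula = (A / a) * math.sqrt(2 * h / g)
T_sim = drain_time(A, a, g, h)
assert abs(T_sim - T_formula) / T_formula < 0.01
```

Even this crude time-stepping lands well within a percent of the closed form.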

But can we design a tank such that the time of discharge is directly proportional to height? If we assume that the shape of the tank is a solid of revolution such that the area of the cross section at $y$ is given by $A(y)$, then by reasoning similar to the above, we have

$\displaystyle T=\frac{1}{a\sqrt{2g}}\int_0^h \frac{A(y)}{\sqrt{y}}\,dy$

Therefore, for $A(y)=\pi x^2 \propto \sqrt{y}$ (or) $x \propto \sqrt[4]{y}$, we get that $T(h)$ is directly proportional to $h$.

This shows that the draining time for the tank formed by revolving the quartic curve $y=x^4$ about the $y$-axis is directly proportional to the height of the liquid column.
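As a quick check (again my own sketch, with arbitrary $a$ and $g$), simulating this tank directly shows the drain time doubling when the height doubles:

```python
import math

# Simulate draining of the y = x^4 tank, whose cross-section is
# A(y) = pi * sqrt(y); conservation of volume gives
# dy/dt = -a * sqrt(2*g*y) / A(y).
a, g = 0.01, 9.81

def drain_time(h, dt=1e-3):
    y, t = h, 0.0
    while y > 0:
        y -= a * math.sqrt(2 * g * y) / (math.pi * math.sqrt(y)) * dt
        t += dt
    return t

T1, T2 = drain_time(1.0), drain_time(2.0)
assert abs(T2 / T1 - 2) < 0.01   # T(h) is linear in h
```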

This naturally leads to the second question that we want to address in this post. Can we design a tank such that the draining time is independent of the height of the liquid column?

A direct attack (for example, using Abel's solution of the Tautochrone curve) quickly suggests that there is no such curve.

This might seem like a dead end, and it sort of is. But let's consider a closely related question: Can we design a tank, initially filled to the brim, such that the time of discharge is independent of the height at which the discharging hole is located?

For ease of what follows, we orient the coordinate system such that gravity points in the positive $y$ direction. Then for a tank whose shape is a solid of revolution, using the conservation of volume, we have,

$\displaystyle A(y)\,dy=a\cdot \sqrt{2g(y_0-y)}\,dt \implies T(y_0)=\frac{1}{a\sqrt{2g}}\int_0^{y_0}\frac{A(y)}{\sqrt{y_0-y}}\,dy$

where $y_0$ is the height at which the discharge is located.

Now, if we use Abel's idea, we can see that

$\displaystyle A(y)=T_0\frac{a\sqrt{2g}}{\pi}\frac{1}{\sqrt{y}}$

And there we have it! The tank formed by revolving the inverse quartic curve $y=x^{-4}$ is tautochronous: the time taken to empty the tank is independent of the 'depth' of the water column.
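A quick numerical sanity check (my own sketch): with $A(y) \propto 1/\sqrt{y}$, the integral for the draining time reduces to $\int_0^{y_0} dy/\sqrt{y(y_0-y)}$, which should come out to $\pi$ regardless of $y_0$:

```python
import math

# Midpoint-rule quadrature of ∫_0^{y0} dy / sqrt(y * (y0 - y)); the
# midpoints conveniently avoid the integrable singularities at both ends.
def T(y0, n=200_000):
    dy = y0 / n
    return sum(dy / math.sqrt((i + 0.5) * dy * (y0 - (i + 0.5) * dy))
               for i in range(n))

t1, t4 = T(1.0), T(4.0)
assert abs(t1 - math.pi) < 0.01   # the integral is exactly pi
assert abs(t4 - t1) < 0.01        # ...independent of the depth y0
```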

Because the $x$-axis is an asymptote of the inverse quartic curve, we should also ensure that the tank holds a finite volume of water for a given height of discharge. But it's an easy exercise to check that the volume is indeed finite.

Hope you enjoyed this. See ya soon.

Until then, Yours Aye

Me

Saturday, May 18, 2024

A nice Geometric Optimisation Problem - Part 2

This post picks up where we left earlier.

In addition to the centre of the circumcircle of $\triangle KDL$, the midpoint of the segment $KL$ also traces a circle. In fact, this can be proved more easily.

Let $A$ and $C$ be the centres of the respective circles. It can be easily seen that $2\angle CAD=\angle BAD=2\angle BQD=2\angle BKD$ which means $\angle CAD=\angle BKD$. Similarly, we can show that $\angle ACD=\angle BLD$. Hence, we see that $\triangle ACD \sim \triangle KLD$.

If we let $O$ be the midpoint of $AC$ and $P$ be the midpoint of $KL$, then by the similarity of the triangles we established before, we have $\angle DOC=\angle DPL=\angle DPB$. Because $\angle DOC$ is constant, we see that $P$ subtends a constant angle with segment $BD$ and hence lies on a circle containing $B$ and $D$.

Also, note that $\angle BOD=2\angle DOC=2\angle DPB=2\angle DSB$ where $S$ is as shown below. Because the angle subtended by $O$ with segment $BD$ is twice that of $P$, we know that $O$ is the centre of the circle containing $P$.

As both $J$ and $O$ lie on the perpendicular bisector of $AC$ at the product-maximising position, segment $DT$ (where $T$ is shown below) is parallel to $AC$. Also, $\angle BDT$ is a right angle, which makes $BT$ the diameter of $P$'s locus.

Because $\angle NDT$ is a right angle, we see that $NT$, which is the perpendicular bisector of $KL$ at the product-maximising position, is a diameter of $\triangle KLD$'s circumcircle.

Since $\angle KTN=\angle KDN$ and $\angle LTN=\angle LDN$ (angles subtended by the same chord in a circle), and $\angle KTN=\angle LTN$ by the symmetry of $K$ and $L$ about the diameter $NT$, we see that $\angle KDN=\angle LDN$ and hence $\angle KDB=\angle LDB$.

Therefore, we finally see that, at the product-maximising position, $ND$ (or) $BD$ should be the angle bisector of $\angle KDL$, which is the relation we were looking for in the previous post.
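As a sanity check (my own sketch, not part of the proof; the circle centres and radii are arbitrary test values), we can confirm the bisector property numerically by scanning over all lines through $B$ and maximising the product:

```python
import numpy as np

# Two circles intersecting at B and D; a line through B in direction d
# meets them again at K and L.  At the product-maximising position,
# BD should bisect angle KDL.
A, r1 = np.array([0.0, 0.0]), 2.0
C, r2 = np.array([3.0, 0.0]), 3.0
x = (r1**2 - r2**2 + 9.0) / 6.0                 # x-coordinate of B and D
B = np.array([x, np.sqrt(r1**2 - x**2)])
D = np.array([x, -B[1]])

theta = np.linspace(0.0, np.pi, 200_000, endpoint=False)
d = np.stack([np.cos(theta), np.sin(theta)], axis=1)
tK = -2 * d @ (B - A)      # signed parameter of K on the line B + t*d
tL = -2 * d @ (B - C)      # signed parameter of L
opposite = tK * tL < 0     # keep lines with K and L on opposite sides of B
prod = np.where(opposite, np.abs(tK * tL), -np.inf)
i = np.argmax(prod)
K, L = B + tK[i] * d[i], B + tL[i] * d[i]

def angle(P):  # angle at D between rays DP and DB
    u, v = P - D, B - D
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

assert abs(angle(K) - angle(L)) < 1e-3
```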

Hope you enjoyed this. See ya later.

Until then
Yours Aye
Me

A nice Geometric Optimisation Problem - Part I

Consider two circles of different radii intersecting at $B$ and $D$. If we now draw a line $KL$ through $B$ as shown in the following figure, the question of determining the position at which the sum of $BK$ and $BL$ is maximised is one of the famous non-trivial problems in Geometric Optimisation.

We can solve this by noting that $\triangle KDL$ always remains similar to itself. Because both $\angle BKD$ and $\angle BLD$ are angles subtended by the same chord $BD$ in the respective circles, the angles of the triangle remain constant irrespective of the position of the line $KL$.

Therefore $KL$ will be maximised when $DK$ (or $DL$) is as large as possible, which happens when they are the respective diameters of the given circles. We can also see that, at this position, $KL$ and $BD$ will be perpendicular to each other.
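Incidentally, this perpendicularity is easy to confirm numerically. The sketch below (mine, with arbitrary circle centres and radii) parametrises the line through $B$ by its direction and maximises $BK+BL$ over a fine grid:

```python
import numpy as np

# Two circles intersecting at B and D; the sum BK + BL should be
# maximised when the line KL is perpendicular to BD.
A, r1 = np.array([0.0, 0.0]), 2.0
C, r2 = np.array([3.0, 0.0]), 3.0
x = (r1**2 - r2**2 + 9.0) / 6.0                 # x-coordinate of B and D
B = np.array([x, np.sqrt(r1**2 - x**2)])
D = np.array([x, -B[1]])

theta = np.linspace(0.0, np.pi, 200_000, endpoint=False)
d = np.stack([np.cos(theta), np.sin(theta)], axis=1)
tK = -2 * d @ (B - A)            # signed parameter of K on the line B + t*d
tL = -2 * d @ (B - C)            # signed parameter of L
valid = tK * tL < 0              # K and L on opposite sides of B
total = np.where(valid, np.abs(tK) + np.abs(tL), -np.inf)
best = d[np.argmax(total)]
# At the optimum, the direction of KL is orthogonal to BD.
assert abs(best @ (B - D)) < 1e-3
```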

A natural follow-up question would be to find the position of $KL$ such that the product of $BK$ and $BL$ is maximised. Finding a geometric construction of this position, along with a geometric proof of it, proved more challenging than I had anticipated. That proof will be the subject of this post.

Before I start, I also want to point out that the proof here may seem unnecessarily complicated. If anyone can share a shorter, more elegant geometric proof for this problem, I would be really grateful.

To begin with, if we construct the circle passing through $K$, $D$ and $L$, and extend the segment $BD$ so that it intersects the circumcircle at $N$, then by the power-of-the-point theorem, we know that

$BK \cdot BL = BD \cdot BN$

Therefore, maximising the product of $BK$ and $BL$ is the same as maximising $BD \cdot BN$. But $BD$ is a constant independent of the position of the segment $KL$. Hence, the problem we are after reduces to that of maximising the segment $BN$ or $DN$.

To maximise $DN$, we turn our attention to the locus of the centre of the circumcircle, say $J$ (Take a wild guess as to what the locus might be before continuing).

Let $A$ and $C$ be the centres of the respective circles and let the line $AC$ meet the circles at $Q$ and $R$ as shown below.

Then, for any general position of $KL$, $\angle CAD=2\angle CQD=\angle BQD=\angle BKD$

Also, because $LD$ is a chord of both the circumcircle and (one of) the given circles, its perpendicular bisector passes through both $C$ and $J$. Therefore, $\angle BKD=\angle LKD=\angle DJC$.

We've seen that the segment $CD$ subtends the same angle at both $A$ and $J$ in any general position. Therefore, $J$ lies on the circle containing the points $A$, $D$ and $C$ i.e. the circumcircle of $\triangle ADC$.

It should be easy to see that $DN$ (which is a chord of the circumcircle of $\triangle KDL$) is maximised when the projection of $DJ$ along the line $BD$ is as large as possible. This happens when $J$ is at the 'top' of the circle, i.e. at the intersection of $\triangle ACD$'s circumcircle and the perpendicular bisector of $AC$.

Now that we know the position of $J$, we can draw a circle with $J$ as centre and $JD$ as radius. Then, the intersection of this circle with the given circles gives us points $K$ and $L$ such that the product $BK \cdot BL$ is maximised.

Even though we have now shown the geometric construction of $KL$ to maximize the product, it's not as clean as the statement '$KL$ must be perpendicular to $BD$' we had in the maximising-the-sum case.

Do we even have such a relation in the product case? We'll see about that in the next post.

Hope you enjoyed this. See ya later.

Until then
Yours Aye
Me

Saturday, March 2, 2024

Summing Uniform random variables to reach a target sum

The problem of finding the expected number of standard Uniform variables to get a sum greater than 1 is quite famous. In this post, we deal with the same problem but for a given target $t$.

This problem is not too difficult. Knowing that the sum of uniform random variables follows the Irwin-Hall distribution, and that its CDF has a closed form, solves the problem easily.

For example, if we let $N$ be the random variable denoting the number of draws needed for the sum to exceed $t$ for the first time, then

$\displaystyle \mathbb{P}(N>n)=\mathbb{P}(U_1+U_2+\cdots+U_n \leq t)=\frac{1}{n!}\sum_{k=0}^{\lfloor t \rfloor}(-1)^k\binom{n}{k}(t-k)^n$

Then $\mathbb{P}(N=n)=\mathbb{P}(N>n-1)-\mathbb{P}(N>n)$

But we can get this more directly using the idea in Exercise 6.65 (more importantly, its solution) of Concrete Mathematics.

For standard uniform random variables $U_1,U_2,\cdots,U_n$, let $X_1=\{U_1\}$, $X_2=\{U_1+U_2\}$ and so on, where $\{x\}$ denotes the fractional part of $x$.

It can be easily seen that $X_k$'s are all independently distributed uniform random variables. More importantly, the number of descents in the sequence of $X_k$'s is $\lfloor U_1+U_2+\cdots+U_n\rfloor$.

Therefore, for integer $t$, the sum of $n$ uniform random variables exceeding $t$ for the first time corresponds to the sequences $X_1,X_2,\cdots,X_n$ that have exactly $t$ descents and end with a descent.

The probability of such a sequence is related to the number of permutations of $1,2,\cdots,n$ that have exactly $t$ descents and end with a descent. We have already dealt with this exact problem in this blog. Using that result, we finally have,

$\displaystyle\mathbb{P}(N=n)=\frac{(n-t)A(n-1,t-1)}{n!}$
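For integer $t$, this can be cross-checked against the Irwin-Hall CDF expression above. A small sketch (mine), computing the Eulerian numbers by their standard recurrence:

```python
from math import comb, factorial

# Cross-check: for integer t, P(N = n) from successive Irwin-Hall CDF
# values should match (n - t) * A(n-1, t-1) / n!.
def eulerian(n, k):
    """Eulerian number A(n, k) via the standard recurrence."""
    if k < 0 or (n == 0 and k > 0) or (n > 0 and k > n - 1):
        return 0
    if n == 0:
        return 1
    return (n - k) * eulerian(n - 1, k - 1) + (k + 1) * eulerian(n - 1, k)

def p_gt(n, t):
    """P(N > n) = P(U_1 + ... + U_n <= t) from the Irwin-Hall CDF."""
    return sum((-1)**k * comb(n, k) * (t - k)**n for k in range(t + 1)) / factorial(n)

for t in (1, 2, 3):
    for n in range(t + 1, t + 10):
        lhs = p_gt(n - 1, t) - p_gt(n, t)                 # P(N = n)
        rhs = (n - t) * eulerian(n - 1, t - 1) / factorial(n)
        assert abs(lhs - rhs) < 1e-12
```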

Though we have solved this for integer $t$, with a leap of faith guided by some experimentation, we can also see that this works for any real $t$.

With this, we get,

$\mathbb{E}(\text{No. of Uniform variables needed to get a sum greater than }\pi)\approx 6.949504$

$\mathbb{E}(\text{No. of Uniform variables needed to get a sum greater than }e)\approx 6.104002$
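A quick Monte Carlo check (my own sketch) of these two values:

```python
import math, random

# Draw standard uniforms until the running sum exceeds t,
# and average the number of draws over many trials.
def avg_draws(t, trials=200_000, rng=random.Random(1)):
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s <= t:
            s += rng.random()
            n += 1
        total += n
    return total / trials

est_pi, est_e = avg_draws(math.pi), avg_draws(math.e)
assert abs(est_pi - 6.949504) < 0.05
assert abs(est_e - 6.104002) < 0.05
```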

Hope you enjoyed this. See ya soon.

Until then, Yours Aye

Me

Saturday, December 30, 2023

Is $e$ less than $(1+\sqrt{21})/2$?

One of my friends recently posed the problem of finding the expected number of uniform random variables needed to get a sum greater than $1$ without using calculus.

Even though I've seen this problem before, I was not able to recollect it, and the calculus proof was all I could come up with until he eventually showed me the solution. But this time, something different struck me.

The solution to this problem uses the fact that

$\displaystyle \mathbb{P}(N>n)=\frac{1}{n!}$

where $N$ is the random variable indicating the number of Uniform random variables needed. But given this, we can do more than compute the expected value. We know that,

$\displaystyle \mathbb{E}(N)=\sum_{n=0}^\infty\mathbb{P}(N>n)=\sum_{n=0}^\infty \frac{1}{n!}=e$

which follows from the definition of Euler's number.

But suppose we don't know the exact value of $e$. With elementary methods, we can show that $2<e<3$.

In the problem that we were discussing, we can use the expression for probability to find $\mathbb{V}(N)$ as well. We know that

$\displaystyle \mathbb{E}(N^2)=\sum_{n=0}^\infty(2n+1)\mathbb{P}(N>n)=\sum_{n=0}^\infty \frac{2n+1}{n!}=3e$

Therefore, $\mathbb{V}(N)=\mathbb{E}(N^2)-\mathbb{E}(N)^2=3e-e^2=e(3-e)$

As the variance of any random variable is $\geq 0$ and we know that $e>0$, this shows us that $e<3$, which is something we already know. But this gives us a nice idea.

Let $N_{(2)}$ be the random variable denoting the number of Uniform random variables needed to get a sum greater than $2$. The Uniform Sum distribution shows us that,

$\displaystyle \mathbb{P}(N_{(2)}=n)=\frac{(n-2)(2^{n-1}-n)}{n!}$

With this, we can see that

$\displaystyle \mathbb{E}(N_{(2)})=e(e-1)$ and $\displaystyle \mathbb{E}(N_{(2)}^2)=5e(e-1)$

Therefore, $\displaystyle \mathbb{V}(N_{(2)})=e(e-1)(5-e^2+e)$
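Both moments (and hence the variance) can be cross-checked by direct summation of the series (my own sketch):

```python
import math

# Direct summation of the series for E(N_(2)) and E(N_(2)^2),
# using the distribution P(N_(2) = n) quoted above.
def p(n):
    return (n - 2) * (2 ** (n - 1) - n) / math.factorial(n)

E1 = sum(n * p(n) for n in range(3, 60))
E2 = sum(n * n * p(n) for n in range(3, 60))
e = math.e
assert abs(E1 - e * (e - 1)) < 1e-9
assert abs(E2 - 5 * e * (e - 1)) < 1e-9
```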

Using the same idea as before, we see that $5-e^2+e \geq 0$, which shows us

$\displaystyle \frac{1}{2}(1-\sqrt{21}) \leq e \leq \frac{1}{2}(1+\sqrt{21})$

Numerically, this shows that $e \leq 2.791288$!!

As you may have already guessed, this could be extended to higher moments and/or sum-greater-than-3 and so on. But the resulting expressions give higher-degree polynomials which, even though they give better estimates for $e$, cannot be easily solved in closed form. For example, using

$\displaystyle \mathbb{E}(N_{(3)})=\frac{1}{2}(2e^3-4e^2+e)$ and $\displaystyle \mathbb{E}(N_{(3)}^2)=\frac{7}{2}(2e^3-4e^2+e)$

Working out the resulting expression for the variance, we see that $2e^3-4e^2+e-14 \leq 0$. Solving this with WolframAlpha, we see that

$\displaystyle e < \frac{1}{6}(4+\sqrt[3]{784+18\sqrt{1894}}+\sqrt[3]{784-18\sqrt{1894}}) \approx 2.74615$
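All the bounds in this post are easy to verify numerically (my own sketch):

```python
import math

# Numeric check of the quadratic and cubic bounds on e derived above.
e = math.e
assert e < (1 + math.sqrt(21)) / 2            # the quadratic bound, ~2.791288
assert 2 * e**3 - 4 * e**2 + e - 14 < 0       # the cubic inequality
root = (4 + (784 + 18 * math.sqrt(1894)) ** (1 / 3)
          + (784 - 18 * math.sqrt(1894)) ** (1 / 3)) / 6
assert abs(root - 2.74615) < 1e-3             # the closed-form root quoted above
assert e < root
```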

Hope you enjoyed this. See ya soon.

Until then, Yours Aye

Me

A nice property on the recurrence of Eulerian Numbers

I was recently looking at Eulerian numbers for a problem and I noticed a nice, and probably lesser-known, property of their recursive definition. In summary, it seemed like an exercise problem out of a Combinatorics textbook, except that I couldn't find one.

Eulerian numbers $A(n,k)$ count the number of permutations of the numbers from $1$ to $n$ in which exactly $k$ elements are smaller than the previous element. In other words, they count the number of permutations with exactly $k$ descents.

With this interpretation, we can set up the following recurrence, as shown in Proposition 3 of this lecture.

$A(n,k)=(n-k)A(n-1,k-1)+(k+1)A(n-1,k)$

The first term here corresponds to the number of permutations in which the element $n$ contributes to the descent count and the second term to those where it doesn't.

The problem that I was interested in was that of counting the number of permutations with $k$ descents that end with a descent.

After some thought, it seemed easy to setup a recurrence for this count as well.

Let $A_d(n,k)$ be the number of permutations of $n$ elements with exactly $k$ descents and end with a descent. Let $A_a(n,k)$ be the same count but those that end with an ascent. Then,

$A_d(n,k)=k A_d(n-1,k)+A_a(n-1,k-1)+(n-k)A_d(n-1,k-1)$

The first term here corresponds to inserting $n$ into an existing descent. The second term is where we insert $n$ in the second-to-last position, thereby simultaneously making the permutation end with a descent and increasing the descent count.

The last one is where we take a permutation of $n-1$ elements that ends with a descent. There are $n-1$ gaps where we can insert $n$ (we don't want $n$ in the last position, as that would make the permutation end with an ascent). Of these $n-1$ gaps, $k-1$ are already descents, which means there are $n-k$ gaps that correspond to an ascent. Inserting $n$ in one of these increases the descent count by one, giving us the requisite permutation.

Similarly, we can write the following recurrence for $A_a(n,k)$

$A_a(n,k)=A_d(n-1,k)+(n-k-1)A_a(n-1,k-1)+(k+1)A_a(n-1,k)$

Using the fact that $A(n,k)=A_a(n,k)+A_d(n,k)$ and some simple manipulations, we get

$A_d(n,k)=(n-k)A(n-1,k-1)$ and $A_a(n,k)=(k+1)A(n-1,k)$
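Both identities are easy to brute-force over small $n$ (my own sketch):

```python
from itertools import permutations

# Brute-force verification of A_d(n,k) = (n-k) A(n-1,k-1) and
# A_a(n,k) = (k+1) A(n-1,k) by direct counting.
def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def A(n, k):      # Eulerian number A(n, k) by direct count
    return sum(1 for p in permutations(range(1, n + 1)) if descents(p) == k)

def A_d(n, k):    # ...with the extra condition of ending in a descent
    return sum(1 for p in permutations(range(1, n + 1))
               if descents(p) == k and p[-2] > p[-1])

def A_a(n, k):    # ...or of ending in an ascent
    return sum(1 for p in permutations(range(1, n + 1))
               if descents(p) == k and p[-2] < p[-1])

for n in range(2, 7):
    for k in range(n):
        assert A_d(n, k) == (n - k) * A(n - 1, k - 1)
        assert A_a(n, k) == (k + 1) * A(n - 1, k)
```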

I hope you can see now why this surprised me. The recurrence usually written for Eulerian numbers also has another simpler combinatorial representation without any change!!

That is, the number of permutations with $k$ descents in which $n$ contributes to the descent count is equinumerous with the permutations with $k$ descents that end with a descent.

Hope you enjoyed this. See ya soon.

Until then, Yours Aye

Me

Sunday, December 24, 2023

The Rounding Error Conundrum

Like all managers, I too, from time to time, will be in a position to present a bunch of numbers in an Excel Sheet. To bring clarity to the sheet, I follow the etiquette of rounding the numbers to (MS-Excel's default) two decimal places.

The problem this brings is that when you sum a list of numbers and round everything to, say, two decimal places, the numbers don't always add up, and someone or other points out the mismatch in the last decimal place. I got tired of explaining that it is a result of rounding.

At first, I thought rounding with more decimal places might bring some resolution to this situation. But then, it became apparent (and intuitively obvious) that no matter the rounding, the problem persists. Being an amateur mathematician at heart, I wanted to solve this problem mathematically.

Let $X_1,X_2,\cdots,X_n$ be standard uniform random variables $U(0,1)$ and $Y=X_1+X_2+\cdots+X_n$. We are then interested in the probability that $[Y]=[X_1]+[X_2]+\cdots+[X_n]$ where $[x]$ denotes the value of $x$ rounded to $r$ places after the decimal point.

As I started thinking about this problem, the only progress I could make was that $r$ doesn't affect the probability, so we might as well take it to be $0$. Frustrated, I posted it in my puzzle group, and Atul Gupta from the group quickly solved it.

The key idea is to use the periodicity of the 'round' function. That is, $[n+x]=n+[x]$ for integer $n$ and real $x$.

The way to solve this problem, then, is to write $X_k=B_k+Z_k$, where each $B_k$ is either $0$ or $1$ and $Z_k \sim U(-0.5,0.5)$.

Because we take $r=0$, we can see that $[Z_k]=0$ for all $k$. Therefore, $[X_k]=[B_k+Z_k]=B_k+[Z_k]=B_k$ which means $[X_1]+[X_2]+\cdots+[X_n]=B_1+B_2+\cdots+B_n=\text{(say)}B$.

On the other hand,

$\begin{align} Y&=X_1+X_2+\cdots+X_n\\ &=B_1+Z_1+B_2+Z_2+\cdots+B_n+Z_n\\ &=B+Z_1+Z_2+\cdots+Z_n \end{align}$

Using the periodicity of 'round' function, $[Y]=B+[Z_1+Z_2+\cdots+Z_n]$.

Hence, the condition we are interested in simplifies to $[Z_1+Z_2+\cdots+Z_n]=0$ which is equivalent to saying $-0.5 \leq Z_1+Z_2+\cdots+Z_n \leq 0.5$

Transforming the $Z_k$'s to standard Uniform variables $U_k=Z_k+1/2$, this becomes $(n-1)/2 \leq U_1+U_2+\cdots+U_n \leq (n+1)/2$

It is well known that the sum of $n$ standard uniform random variables defines the Irwin-Hall distribution (also known as Uniform Sum distribution) $IH_n$.

Irwin-Hall distribution and Eulerian numbers are closely related. For example, this should already be apparent from the PDF of $IH_n$ in the Wikipedia page and the integral identity of Eulerian Numbers.

While the CDF of $IH_n$ also nearly resembles Eulerian numbers, it is not quite there. To make Eulerian numbers more useful to our cause, we slightly modify the formula given in the Wiki page.

$\displaystyle A(n,x)=\sum_{i=0}^{\lfloor x+1 \rfloor}(-1)^i\binom{n+1}{i}(x+1-i)^n$

where $x$ can be a real number. With this definition, we have the following useful result.

$\displaystyle\mathbb{P}(t \leq IH_n \leq t+1)=\frac{1}{n!}A(n,t)$

Therefore, we finally have what we were looking for.

$\displaystyle \mathbb{P}([Y]=[X_1]+[X_2]+\cdots+[X_n])=\mathbb{P}\left(\frac{n-1}{2} \leq \sum_{k=1}^nU_k \leq \frac{n+1}{2}\right)=\frac{1}{n!}A(n,(n-1)/2)$
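A Monte Carlo cross-check (my own sketch) of this probability for a few small $n$:

```python
import random
from math import comb, factorial, floor

# Compare the exact probability A(n, (n-1)/2) / n!, using the generalised
# Eulerian formula above, against a direct rounding simulation (r = 0).
def A(n, x):
    return sum((-1)**i * comb(n + 1, i) * (x + 1 - i)**n
               for i in range(floor(x + 1) + 1))

rng = random.Random(7)
errors = []
for n in (2, 3, 5):
    exact = A(n, (n - 1) / 2) / factorial(n)
    trials = 200_000
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        hits += round(sum(xs)) == sum(round(x) for x in xs)
    errors.append(abs(hits / trials - exact))

assert max(errors) < 0.01
```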

For large $n$, we can use the normal approximation to show that 

$\displaystyle \mathbb{P}([Y]=[X_1]+[X_2]+\cdots+[X_n]) \approx \sqrt{\frac{2}{\pi}}\sin\left(\sqrt{\frac{3}{n}}\right)$

which shows the probability of the sum matching only worsens as $n$ increases.

On an unrelated note, I also noticed that

$\displaystyle \mathbb{E}(IH_n|t \leq IH_n \leq t+1)=(t+1)\frac{n}{n+1}$
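A quick simulation (my own sketch) backs this up:

```python
import random

# Monte Carlo check of E(IH_n | t <= IH_n <= t+1) = (t+1) * n / (n+1):
# sample sums of n uniforms, keep those in [t, t+1], compare means.
rng = random.Random(11)
errs = []
for n, t in [(2, 0), (3, 1), (4, 2)]:
    kept = []
    for _ in range(300_000):
        s = sum(rng.random() for _ in range(n))
        if t <= s <= t + 1:
            kept.append(s)
    errs.append(abs(sum(kept) / len(kept) - (t + 1) * n / (n + 1)))

assert max(errs) < 0.01
```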

Given the simple expression for the expected value, I suspect there is a very nice reason for this but am yet to see it. In case you can, please do share it with us in the comments.

Hope you enjoyed this. See ya soon.

Until then, Yours Aye

Me