Bitwise operators have always fascinated me. There are a number of cool tricks one can use in programming to shorten code, and even make it elegant, I would say. To start with, we have Bit Twiddling Hacks, which contains a huge list. I've used some of them in my Nim Multiplication code.


But this post is not about using those tricks. Bitwise operations are interesting from a mathematical viewpoint as well. For example, the one that has always stumped me is the Hadamard Transform, which is intimately connected with the bitwise XOR. Among their wide variety of applications, Hadamard matrices play a great role in XOR transforms.

Diving straight into the point: say we have two arrays $\mathbf{a}=(a_0,a_1,a_2,\cdots)$ and $\mathbf{b}=(b_0,b_1,b_2,\cdots)$, and we define their convolution as an array $\mathbf{c}=(c_0,c_1,c_2,\cdots)$ such that

$c_n=\displaystyle\sum_{i*j=n}a_ib_j$

where '*' is any binary operation. For example, when the binary operation is 'addition', we get the most commonly used *Discrete convolution*. If the operation is 'multiplication', we have what we call the *Dirichlet convolution*.

We can also use the bitwise operations as the binary operation. The most famous of them would be the 'Bitwise XOR', which results in the *Dyadic convolution*. Similarly, we can define the *OR convolution* and the *AND convolution* using Bitwise-OR and Bitwise-AND as the binary operations respectively.

If evaluated directly, all these convolutions would lead to $O(n^2)$ time complexity. A common technique for making this task easier is defining an easily invertible matrix $\mathbf{M}$ such that

$\mathbf{M}\mathbf{c}=(\mathbf{M}\mathbf{a})\odot(\mathbf{M}\mathbf{b})$

where $\odot$ is the element-wise multiplication. For example, $\mathbf{M}$ is the Vandermonde matrix in case of the Discrete convolution.
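Before reaching for any transform matrix, the definition of $c_n$ can be checked by brute force: a direct $O(n^2)$ evaluation of the sum, parameterised by the binary operation. A minimal sketch (the function name is mine):

```python
from operator import xor, or_, and_

def convolve(a, b, op):
    """Directly evaluate c_n = sum of a_i * b_j over all i, j with op(i, j) == n."""
    size = max(op(i, j) for i in range(len(a)) for j in range(len(b))) + 1
    c = [0] * size
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[op(i, j)] += ai * bj
    return c

# XOR convolution of two length-4 arrays
print(convolve([1, 2, 3, 4], [5, 6, 7, 8], xor))  # → [70, 68, 62, 60]
```

Swapping in `or_` or `and_` gives the OR and AND convolutions with no other change, which is exactly why a single framework covers all three.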

For the *Dyadic convolution* (or the *XOR convolution*), the matrices we are looking for are the Hadamard matrices, $H$. We can start with $H=(1)$ and iteratively construct the higher-order matrices with either of the following two recursions

$H \to \begin{pmatrix} H & H \\ H & -H \end{pmatrix}$ (or) $H \to \begin{pmatrix} H & -H \\ H & H \end{pmatrix}$

though the first seems to be the most popular choice. Hadamard matrices are very simple in that they are their own inverse up to a scalar factor, which makes the inverse step work just like the transform itself. In my opinion, this is very analogous to how XOR can act as both addition and subtraction.
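Thanks to the recursive block structure, $H\mathbf{a}$ can be computed without ever building the matrix, processing one bit at a time in $O(n \log n)$. A minimal sketch using the first (more popular) recursion; the function name is mine:

```python
def walsh_hadamard(a, inverse=False):
    """XOR (Walsh-Hadamard) transform; len(a) must be a power of two."""
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # one (H, H; H, -H) butterfly
        h *= 2
    if inverse:
        a = [v // n for v in a]                # H·H = n·I, so divide by n to invert
    return a

# XOR convolution: transform, multiply pointwise, transform back
fa = walsh_hadamard([1, 2, 3, 4])
fb = walsh_hadamard([5, 6, 7, 8])
print(walsh_hadamard([x * y for x, y in zip(fa, fb)], inverse=True))  # → [70, 68, 62, 60]
```

The `inverse` branch is the self-inverse property in action: the same butterflies run again, followed only by a division by $n$.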

The idea in this post is to find the analogous matrices for the OR and AND convolutions. Fiddling with the first few terms of the matrix equation we saw before, with $\mathbf{c}$ defined as the OR/AND convolution accordingly, it is not so hard to come up with the following.

$\mathbf{M}_{\text{OR}} \to \begin{pmatrix} \mathbf{M}_{\text{OR}} & \mathbf{0} \\ \mathbf{M}_{\text{OR}} & \mathbf{M}_{\text{OR}} \end{pmatrix}$ (or) $\mathbf{M}_{\text{OR}} \to \begin{pmatrix} \mathbf{M}_{\text{OR}} & \mathbf{M}_{\text{OR}} \\ \mathbf{M}_{\text{OR}} & \mathbf{0} \end{pmatrix}$

and

$\mathbf{M}_{\text{AND}} \to \begin{pmatrix} \mathbf{M}_{\text{AND}} & \mathbf{M}_{\text{AND}} \\ \mathbf{0} & \mathbf{M}_{\text{AND}} \end{pmatrix}$ (or) $\mathbf{M}_{\text{AND}} \to \begin{pmatrix} \mathbf{0} & \mathbf{M}_{\text{AND}} \\ \mathbf{M}_{\text{AND}} & \mathbf{M}_{\text{AND}} \end{pmatrix}$

We can choose either of the matrices for OR/AND. I prefer the first one in both cases, mostly because the first versions have determinant $1$.
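As with the Hadamard case, the recursions let us apply $\mathbf{M}_{\text{OR}}$ and $\mathbf{M}_{\text{AND}}$ one bit at a time in $O(n \log n)$ without building the matrices. Unrolling the first choice in each case, $(\mathbf{M}_{\text{OR}}\mathbf{a})_s$ works out to the sum of $a_t$ over subsets $t \subseteq s$, and $(\mathbf{M}_{\text{AND}}\mathbf{a})_s$ to the sum over supersets. A sketch (function names are mine):

```python
def or_transform(a):
    """Apply M_OR: per bit, the (M, 0; M, M) block adds the 'bit off' half into the 'bit on' half."""
    a = list(a)
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j + h] += a[j]
        h *= 2
    return a

def and_transform(a):
    """Apply M_AND: per bit, the (M, M; 0, M) block adds the 'bit on' half into the 'bit off' half."""
    a = list(a)
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j] += a[j + h]
        h *= 2
    return a

print(or_transform([1, 2, 3, 4]))   # subset sums  → [1, 3, 4, 10]
print(and_transform([1, 2, 3, 4]))  # superset sums → [10, 6, 7, 4]
```

Note how the only difference from the XOR butterfly is that one half is left untouched, mirroring the $\mathbf{0}$ block in each recursion.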

Having done that, we follow up with the inverse matrices, $\mathbf{N}_{\text{OR}}$ and $\mathbf{N}_{\text{AND}}$, which can also be iteratively built.

$\mathbf{N}_{\text{OR}} \to \begin{pmatrix} \mathbf{N}_{\text{OR}} & \mathbf{0} \\ -\mathbf{N}_{\text{OR}} & \mathbf{N}_{\text{OR}} \end{pmatrix}$ and $\mathbf{N}_{\text{AND}} \to \begin{pmatrix} \mathbf{N}_{\text{AND}} & -\mathbf{N}_{\text{AND}} \\ \mathbf{0} & \mathbf{N}_{\text{AND}} \end{pmatrix}$

Now these can be used to answer questions like finding the number of sequences containing numbers from $0$ to $k$ with a given OR-sum (or AND-sum).
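For instance, counting those sequences amounts to transforming the indicator array of the allowed numbers, raising it to a pointwise power, and transforming back. A self-contained sketch (all names are mine; I write $m$ for the largest allowed number and $k$ for the sequence length to keep the two roles apart):

```python
def or_zeta(a, sign=1):
    """sign=+1 applies M_OR (subset sums); sign=-1 applies its inverse N_OR."""
    a = list(a)
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j + h] += sign * a[j]
        h *= 2
    return a

def count_sequences_with_or(m, k):
    """For each s, the number of length-k sequences over {0, ..., m} with OR-sum s."""
    n = 1
    while n <= m:
        n *= 2                          # pad the index range to a power of two
    ones = [1 if x <= m else 0 for x in range(n)]
    f = or_zeta(ones)                   # transform the indicator array
    f = [v ** k for v in f]             # pointwise k-th power = k-fold OR convolution
    return or_zeta(f, sign=-1)          # transform back

# length-2 sequences over {0, 1, 2, 3}, tallied by OR-sum
print(count_sequences_with_or(3, 2))  # → [1, 3, 3, 9]
```

The pointwise power is the whole payoff of the transform: $k$-fold convolution collapses to exponentiation in the transformed domain.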

I think one of the reasons that the *Dyadic (XOR) convolution* is more famous is its application in nim games. Unfortunately, the *OR* and *AND convolutions* do not have any direct applications that I'm aware of. Still, I found these beautiful and worth sharing.

Just to conclude the story: to compute the *LCM-convolution* of two functions, we *Dirichlet-convolve* each of them with the constant function. Then we simply do a pointwise multiplication and *Dirichlet-convolve* the result with the Möbius function. This is well explained in this StackExchange question and in "Multiplicative Arithmetic Functions of Several Variables: A Survey". These also discuss the *GCD-convolution*, but I don't understand that yet.

UPDATE: I always knew that I may not be the first to come up with the corresponding matrices for the OR and AND convolutions, but I was still a little disappointed when I found Serbanology while reading through all the good tutorials for competitive programming.
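That recipe translates almost word for word into code: a divisor-sum pass (Dirichlet convolution with the constant function), a pointwise product, then Möbius inversion of the divisor sums. A sketch under my own naming, with arrays indexed from $1$ to $n$ and index $0$ unused:

```python
def divisor_sums(a):
    """Dirichlet-convolve with the constant function 1: A[n] = sum of a[d] over d | n."""
    n = len(a) - 1
    out = [0] * (n + 1)
    for d in range(1, n + 1):
        for m in range(d, n + 1, d):
            out[m] += a[d]
    return out

def inverse_divisor_sums(A):
    """Undo divisor_sums, i.e. Dirichlet-convolve with the Mobius function."""
    n = len(A) - 1
    a = list(A)
    for d in range(1, n + 1):
        for m in range(2 * d, n + 1, d):
            a[m] -= a[d]
    return a

def lcm_convolve(a, b):
    """c[n] = sum of a[i] * b[j] over lcm(i, j) == n, for n up to len(a) - 1."""
    A, B = divisor_sums(a), divisor_sums(b)
    return inverse_divisor_sums([x * y for x, y in zip(A, B)])
```

This mirrors the OR case exactly, with the subset lattice of bitmasks replaced by the divisibility lattice of integers.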

Until then

Yours Aye

Me