2015/09/07

Matrix Factorisation with Linear Filters (and discussion)

I submitted a preprint on matrix factorisations and linear filters. I managed to derive some factorisation algorithms as linear filtering algorithms.

In the paper, I deferred one discussion to this post: estimating the parameters via maximising the marginal likelihood. So here it is.

In the paper, I consider the following probabilistic model,
\begin{align}
p(c) &= \NPDF(c; c_0, V_0 \otimes I_m), \label{priorC} \\
p(y_k|c,x_k) &= \NPDF(y_k; (x_k^\top \otimes I_m) c, \lambda \otimes I_m), \label{LikelihoodCform1}
\end{align}
where each $x_k$ is a static model parameter vector and $C$ is the latent matrix. Note that $(x_k^\top \otimes I_m) c = C x_k$ (see the paper). We would like to solve the following problem to estimate $x_k$ for each $k$,
\begin{align*}
x_k^* = \argmax_{x_k} \log p(y_k | y_{1:k-1}, x_k).
\end{align*}
So we need the incremental marginal likelihood $p(y_k | y_{1:k-1}, x_k)$, which we define as
\begin{align*}
p(y_k | y_{1:k-1}, x_k) = \int p(y_k | c, x_k) \, p(c | y_{1:k-1}) \, \mbox{d}c,
\end{align*}
where $c = \vect(C)$. The important point here is that $p(c | y_{1:k-1})$ is not the exact posterior but an approximation to it, since one has to plug in $x_{k-1}^*$ to obtain it (if $X$ were known, it would be exact). From the paper we know
\begin{align*}
p(c | y_{1:k-1}) = \NPDF(c; c_{k-1}, V_{k-1} \otimes I_m),
\end{align*}
and
\begin{align*}
p(y_k | c, x_k) = \NPDF(y_k; (x_k^\top \otimes I_m) c, \lambda \otimes I_m).
\end{align*}
Using the formulas in Bishop's PRML (2006, page 93), we arrive at the marginal
\begin{align*}
p(y_k | y_{1:k-1}, x_k) = \NPDF\!\big(y_k; (x_k^\top \otimes I_m) c_{k-1}, (\lambda \otimes I_m) + (x_k^\top \otimes I_m) (V_{k-1} \otimes I_m) (x_k \otimes I_m)\big).
\end{align*}
Using the properties given in Section I.A of the paper, we obtain the compact form
\begin{align}\label{MarginalLikelihoodOfMFRLF}
p(y_k | y_{1:k-1}, x_k) = \NPDF(y_k; C_{k-1} x_k, (\lambda + x_k^\top V_{k-1} x_k) \otimes I_m).
\end{align}
Although the integration is analytic, maximising the logarithm of \eqref{MarginalLikelihoodOfMFRLF} with respect to $x_k$ is not possible in closed form. However, the likelihood is differentiable, so one can use nonlinear optimisation methods. In terms of performance, though, this brings no advantage: one gains a bit in the overall error but loses too much in computation time. I have tried quasi-Newton methods (BFGS) and conjugate gradients, but neither was a cure.
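Just to make the numerical route concrete, here is a minimal Python/NumPy sketch of maximising the log of \eqref{MarginalLikelihoodOfMFRLF} over $x_k$ with BFGS. The function name and the synthetic $C_{k-1}$, $V_{k-1}$, $\lambda$ and data below are illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal(x, y, C_prev, V_prev, lam):
    """Negative log of N(y; C_{k-1} x, (lam + x' V_{k-1} x) I_m)."""
    m = y.shape[0]
    s = lam + x @ V_prev @ x           # scalar factor of the predictive covariance
    resid = y - C_prev @ x             # predictive residual
    return 0.5 * (m * np.log(2.0 * np.pi * s) + resid @ resid / s)

# hypothetical quantities, only to illustrate the call
rng = np.random.default_rng(0)
m, r = 20, 3
C_prev = rng.standard_normal((m, r))   # plays the role of C_{k-1}
V_prev = np.eye(r)                     # plays the role of V_{k-1}
lam = 0.1
y = C_prev @ rng.standard_normal(r) + 0.3 * rng.standard_normal(m)

res = minimize(neg_log_marginal, rng.standard_normal(r),
               args=(y, C_prev, V_prev, lam), method="BFGS")
x_star = res.x                         # the x_k^* estimate
```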

2015/06/19

Online Matrix Factorization via Broyden Updates

I arXived a new preprint titled Online Matrix Factorization via Broyden Updates.

Around this April, I was reading about quasi-Newton methods (from this very nice paper of Philipp Hennig), and when I saw the derivation of the Broyden update, I immediately realized that this idea could be used for computing factorizations. Furthermore, it leads to an online scheme, which is even more preferable!

The idea is to solve the following optimization problem at each iteration $k$:
\begin{align*}
\min_{x_k, C_k} \big\| y_k - C_k x_k \big\|_2^2 + \lambda \big\| C_k - C_{k-1} \big\|_F^2.
\end{align*}
The motivation behind this cost is in the manuscript.
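To give a feel for the update, here is a minimal Python/NumPy sketch of one plausible alternation: for a fixed $x_k$, the minimiser of the cost above over $C_k$ is the Broyden-style rank-one correction $C_k = C_{k-1} + (y_k - C_{k-1} x_k) x_k^\top / (\lambda + x_k^\top x_k)$. Obtaining $x_k$ by least squares against $C_{k-1}$ is my assumption here; the manuscript has the actual rules (including mini-batches and missing data).

```python
import numpy as np

def broyden_step(y, C_prev, lam):
    """One online factorization step (a sketch, not the manuscript's exact rule):
    solve x against C_{k-1} by least squares, then apply the rank-one update
    that minimises ||y - C x||^2 + lam * ||C - C_{k-1}||_F^2 for that fixed x."""
    x, *_ = np.linalg.lstsq(C_prev, y, rcond=None)
    resid = y - C_prev @ x
    C = C_prev + np.outer(resid, x) / (lam + x @ x)
    return C, x

# illustrative usage on a synthetic low-rank data matrix
rng = np.random.default_rng(1)
m, r, n = 30, 5, 200
Y = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
C = rng.standard_normal((m, r))
for k in rng.permutation(n):            # columns can be visited in any order
    C, x = broyden_step(Y[:, k], C, lam=1.0)
```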

Although the basic idea was clear, I set a few goals. First, I wanted a method where one can sample any column of the dataset and use it immediately, so I modified the notation a bit, as you can see from Eq. (2) in the manuscript. Secondly, one should also be able to use mini-batches, i.e. a group of columns at each time. Thirdly, it was obvious that a modern matrix factorization method must handle missing data, so I had to extend the algorithm accordingly. I sorted out all of these except an update rule for missing data with mini-batches, which turned out to be harder, so I left it out of this work.

2015/03/08

Tinkering around logistic map

I was tinkering around with the logistic map today and accidentally discovered a property of it: if you look at the histogram of the sequence it generates, it is the same regardless of the initial value. It immediately struck me that there might be an underlying probability density for this map. In fact, after some Googling, I found out that Ulam and von Neumann studied this! So here we go.
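For the chaotic case $r = 4$, i.e. $x_{n+1} = 4 x_n (1 - x_n)$ (the map Ulam and von Neumann studied), the invariant density is $p(x) = 1/(\pi \sqrt{x(1-x)})$. Here is a small Python sketch, under that assumption on the parameter, that iterates the map from an arbitrary starting point and compares the histogram with this density.

```python
import numpy as np
import matplotlib.pyplot as plt

# iterate the logistic map x_{n+1} = 4 x_n (1 - x_n)
n_iter, burn_in = 200_000, 1_000
x = 0.123                          # any initial value in (0, 1) away from fixed points
xs = np.empty(n_iter)
for n in range(n_iter):
    x = 4.0 * x * (1.0 - x)
    xs[n] = x
xs = xs[burn_in:]

# compare the empirical histogram with the invariant (arcsine) density
grid = np.linspace(1e-3, 1 - 1e-3, 500)
density = 1.0 / (np.pi * np.sqrt(grid * (1.0 - grid)))

plt.hist(xs, bins=100, density=True, alpha=0.5, label="histogram of iterates")
plt.plot(grid, density, "r", label=r"$1/(\pi\sqrt{x(1-x)})$")
plt.legend()
plt.show()
```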

2015/03/04

Monte Carlo as Intuition

I am a teaching assistant for the Monte Carlo methods course. I explained the following exercise as an application of the Monte Carlo principle; it is nice because the result coincides with a very intuitive guess.