Bayesian probabilistic matrix factorization (BPMF) is a classical model in the recommender systems field. In the following, we discuss:

• What BPMF is;

• How to implement BPMF efficiently, mainly using NumPy in Python;

• How to perform data imputation on real-world spatiotemporal datasets.

Ruslan Salakhutdinov and Andriy Mnih, 2008. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning (ICML 2008), Helsinki, Finland. [Matlab code (official)]

## Quick Run

This notebook is publicly available for any usage as part of our data imputation project. Please check out transdim.

In [1]:
import numpy as np
from numpy.random import multivariate_normal as mvnrnd
from scipy.stats import wishart
from numpy.linalg import inv as inv

# Part 1: Matrix Computation Concepts

## 1) Kronecker product

• Definition:

Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, the Kronecker product of these two matrices is defined as

$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$

where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).

• Example:

If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have

$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$
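The worked example above can be reproduced directly with NumPy's built-in `np.kron`:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])

# np.kron builds the block matrix [a_ij * B], matching the definition above.
C = np.kron(A, B)
print(C.shape)  # (4, 6): (m1*m2) x (n1*n2)
print(C)
```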

## 2) Khatri-Rao product (kr_prod)

• Definition:

Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the Khatri-Rao product (or column-wise Kronecker product) of $A$ and $B$ is given as follows,

$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r}$$

where the symbol $\odot$ denotes the Khatri-Rao product, and $\otimes$ denotes the Kronecker product.

• Example:

If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right)$ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right)$, then, we have

$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right)$$$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
In [2]:
def kr_prod(a, b):
    # Column-wise Kronecker product: einsum forms the outer product of each
    # pair of matching columns, and the reshape stacks them into (mn) x r.
    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
In [3]:
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
[[ 5 12]
 [ 7 16]
 [ 9 20]
 [15 24]
 [21 32]
 [27 40]]

## 3) Computing Covariance Matrix (cov_mat)

For any matrix $X\in\mathbb{R}^{m\times n}$, cov_mat returns the $n\times n$ scatter matrix (i.e., the unnormalized sample covariance over the rows), which is needed by the Gibbs sampler below.

In [4]:
def cov_mat(mat):
    # Scatter matrix of the rows of mat: sum of outer products of centered rows.
    dim1, dim2 = mat.shape
    new_mat = np.zeros((dim2, dim2))
    mat_bar = np.mean(mat, axis = 0)
    for i in range(dim1):
        new_mat += np.einsum('i, j -> ij', mat[i, :] - mat_bar, mat[i, :] - mat_bar)
    return new_mat
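For larger matrices the Python loop is slow; the same scatter matrix can be computed with a single matrix product. The vectorized variant below is a sketch (the name `cov_mat_vectorized` is ours, not part of the original code) and is equivalent up to floating-point rounding:

```python
import numpy as np

def cov_mat_vectorized(mat):
    # Center the rows, then form the (dim2 x dim2) scatter matrix
    # mat_c.T @ mat_c in one matrix product instead of a Python loop.
    mat_c = mat - np.mean(mat, axis=0)
    return mat_c.T @ mat_c

X = np.arange(12.0).reshape(4, 3)
print(cov_mat_vectorized(X).shape)  # (3, 3)
```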

# Part 2: Bayesian Probabilistic Matrix Factorization (BPMF)

In [5]:
def BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2):
    """Bayesian Probabilistic Matrix Factorization, BPMF."""
    W = init["W"]
    X = init["X"]

    dim1, dim2 = sparse_mat.shape
    dim = np.array([dim1, dim2])
    pos = np.where((dense_mat != 0) & (sparse_mat == 0))  # held-out (test) entries
    position = np.where(sparse_mat != 0)                  # observed (training) entries
    binary_mat = np.zeros((dim1, dim2))
    binary_mat[position] = 1

    # Hyperparameters of the Normal-Wishart priors and the Gamma prior on tau.
    beta0 = 1
    nu0 = rank
    mu0 = np.zeros((rank))
    W0 = np.eye(rank)
    tau = 1
    alpha = 1e-6
    beta = 1e-6

    W_plus = np.zeros((dim1, rank))
    X_plus = np.zeros((dim2, rank))
    mat_hat_plus = np.zeros((dim1, dim2))
    for iters in range(maxiter1):
        for order in range(2):  # order 0 updates W, order 1 updates X
            if order == 0:
                mat = W.copy()
            elif order == 1:
                mat = X.copy()
            mat_bar = np.mean(mat, axis = 0)
            # Sample the hyperparameters (mu, Lambda) from their Normal-Wishart posterior.
            var_mu_hyper = (dim[order] * mat_bar + beta0 * mu0)/(dim[order] + beta0)
            var_W_hyper = inv(inv(W0) + cov_mat(mat) + dim[order] * beta0/(dim[order] + beta0)
                              * np.outer(mat_bar - mu0, mat_bar - mu0))
            var_Lambda_hyper = wishart(df = dim[order] + nu0, scale = var_W_hyper, seed = None).rvs()
            var_mu_hyper = mvnrnd(var_mu_hyper, inv((dim[order] + beta0) * var_Lambda_hyper))

            if order == 0:
                var1 = X.T
                mat0 = np.matmul(var1, sparse_mat.T)
            elif order == 1:
                var1 = W.T
                mat0 = np.matmul(var1, sparse_mat)
            var2 = kr_prod(var1, var1)
            if order == 0:
                mat1 = np.matmul(var2, binary_mat.T)
            elif order == 1:
                mat1 = np.matmul(var2, binary_mat)
            # Posterior precision (var3) and unnormalized mean (var4) of each factor row.
            var3 = tau * mat1.reshape(rank, rank, dim[order]) + np.dstack([var_Lambda_hyper] * dim[order])
            var4 = tau * mat0 + np.dstack([np.matmul(var_Lambda_hyper, var_mu_hyper)] * dim[order])[0, :, :]
            for i in range(dim[order]):
                var_Lambda = var3[:, :, i]
                inv_var_Lambda = inv((var_Lambda + var_Lambda.T)/2)  # symmetrize for numerical stability
                vec = mvnrnd(np.matmul(inv_var_Lambda, var4[:, i]), inv_var_Lambda)
                if order == 0:
                    W[i, :] = vec.copy()
                elif order == 1:
                    X[i, :] = vec.copy()

        # After burn-in, accumulate draws of the last maxiter2 iterations for averaging.
        if iters + 1 > maxiter1 - maxiter2:
            W_plus += W
            X_plus += X

        mat_hat = np.matmul(W, X.T)
        if iters + 1 > maxiter1 - maxiter2:
            mat_hat_plus += mat_hat
        rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2)/dense_mat[pos].shape[0])

        # Sample the noise precision tau from its Gamma posterior.
        var_alpha = alpha + 0.5 * sparse_mat[position].shape[0]
        error = sparse_mat - mat_hat
        var_beta = beta + 0.5 * np.sum(error[position] ** 2)
        tau = np.random.gamma(var_alpha, 1/var_beta)

        if (iters + 1) % 200 == 0 and iters < maxiter1 - maxiter2:
            print('Iter: {}'.format(iters + 1))
            print('RMSE: {:.6}'.format(rmse))
            print()

    W = W_plus/maxiter2
    X = X_plus/maxiter2
    mat_hat = mat_hat_plus/maxiter2
    if maxiter1 >= 100:
        final_mape = np.sum(np.abs(dense_mat[pos] - mat_hat[pos])/dense_mat[pos])/dense_mat[pos].shape[0]
        final_rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos]) ** 2)/dense_mat[pos].shape[0])
        print('Imputation MAPE: {:.6}'.format(final_mape))
        print('Imputation RMSE: {:.6}'.format(final_rmse))
        print()

    return mat_hat, W, X
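Inside each Gibbs iteration, the hyperparameters $(\boldsymbol{\mu}, \Lambda)$ of a factor matrix are drawn from their Normal-Wishart posterior. The standalone sketch below isolates that single step with made-up data; the hyperparameter choices (`beta0 = 1`, `nu0 = rank`, `mu0 = 0`, `W0 = I`) mirror those hard-coded in the function above:

```python
import numpy as np
from numpy.random import multivariate_normal as mvnrnd
from scipy.stats import wishart
from numpy.linalg import inv

# Made-up stand-in for a factor matrix (W or X).
rank = 3
mat = np.random.rand(100, rank)
dim = mat.shape[0]
beta0, nu0 = 1, rank
mu0, W0 = np.zeros(rank), np.eye(rank)

mat_bar = mat.mean(axis=0)
mat_c = mat - mat_bar
S = mat_c.T @ mat_c  # scatter matrix, the quantity cov_mat computes

# Posterior parameters of the Normal-Wishart distribution.
mu_post = (dim * mat_bar + beta0 * mu0) / (dim + beta0)
W_post = inv(inv(W0) + S + dim * beta0 / (dim + beta0)
             * np.outer(mat_bar - mu0, mat_bar - mu0))
W_post = (W_post + W_post.T) / 2  # symmetrize against rounding error

# Draw Lambda ~ Wishart(df, W_post), then mu | Lambda ~ N(mu_post, ((dim+beta0) Lambda)^-1).
Lambda = wishart(df=dim + nu0, scale=W_post).rvs()
mu = mvnrnd(mu_post, inv((dim + beta0) * Lambda))
print(mu.shape, Lambda.shape)  # (3,) (3, 3)
```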

# Part 3: Data Organization

## 1) Matrix Structure

We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatiotemporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),

$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$

## 2) Tensor Structure

We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),

$$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$

therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.
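In NumPy, moving between the matrix form $Y\in\mathbb{R}^{m\times (nf)}$ that BPMF operates on and the tensor form $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$ is a single reshape. A small sketch with made-up toy dimensions:

```python
import numpy as np

m, n, f = 5, 4, 3  # toy sizes: locations, days, intervals per day
tensor = np.arange(m * n * f).reshape(m, n, f)

# Flatten days and intervals into a single time axis of length n * f ...
mat = tensor.reshape(m, n * f)
# ... and fold it back; no entries are lost or reordered.
assert (mat.reshape(m, n, f) == tensor).all()
print(mat.shape)  # (5, 12)
```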

# Part 4: Experiments on Guangzhou Data Set

In [6]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
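The masking trick `np.round(u + 0.5 - missing_rate)` maps a uniform random number $u\in[0,1)$ to 1 when $u\geq$ `missing_rate` and to 0 otherwise, so each entry survives with probability $1-$`missing_rate`. A quick check on a toy array (hypothetical sizes):

```python
import numpy as np

missing_rate = 0.2
u = np.random.rand(1000, 100)  # toy stand-in for random_tensor entries

# round(u + 0.5 - rate) equals 1 when u >= rate and 0 when u < rate,
# so each entry is kept with probability 1 - missing_rate.
binary = np.round(u + 0.5 - missing_rate)
print(binary.mean())  # close to 0.8
```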
In [7]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.45974

Iter: 400
RMSE: 4.48453

Iter: 600
RMSE: 4.50418

Iter: 800
RMSE: 4.49705

Iter: 1000
RMSE: 4.53354

Imputation MAPE: 0.0954385
Imputation RMSE: 4.05508

Running time: 15621 seconds
In [8]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [9]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 80
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.59603

Iter: 400
RMSE: 4.64364

Iter: 600
RMSE: 4.66908

Iter: 800
RMSE: 4.6682

Iter: 1000
RMSE: 4.67601

Imputation MAPE: 0.0980906
Imputation RMSE: 4.1659

Running time: 15537 seconds
In [10]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
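In the NM scenario, each (location, day) fiber is dropped as a whole: one Bernoulli draw per day decides all of that day's intervals. The double loop above can also be written with broadcasting; a sketch on a toy tensor (hypothetical sizes, equivalent output):

```python
import numpy as np

missing_rate = 0.2
m, n, f = 6, 4, 8  # toy dimensions
random_matrix = np.random.rand(m, n)

# One draw per (location, day), broadcast along the within-day axis, so that
# all f intervals of a day go missing together (the NM pattern).
binary_tensor = np.broadcast_to(
    np.round(random_matrix + 0.5 - missing_rate)[:, :, None], (m, n, f)).copy()
binary_mat = binary_tensor.reshape(m, n * f)
print(binary_mat.shape)  # (6, 32)
```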
In [11]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.40543

Iter: 400
RMSE: 4.40299

Iter: 600
RMSE: 4.40416

Iter: 800
RMSE: 4.4028

Iter: 1000
RMSE: 4.40422

Imputation MAPE: 0.102771
Imputation RMSE: 4.29007

Running time: 1869 seconds
In [12]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [13]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.55077

Iter: 400
RMSE: 4.55088

Iter: 600
RMSE: 4.55242

Iter: 800
RMSE: 4.54653

Iter: 1000
RMSE: 4.54695

Imputation MAPE: 0.103967
Imputation RMSE: 4.39942

Running time: 1862 seconds

Experiment results of missing data imputation using BPMF:

| scenario | rank | maxiter1 | maxiter2 | mape | rmse |
|----------|------|----------|----------|--------|--------|
| 0.2, RM | 80 | 1100 | 100 | 0.0954 | 4.0551 |
| 0.4, RM | 80 | 1100 | 100 | 0.0981 | 4.1659 |
| 0.2, NM | 10 | 1100 | 100 | 0.1028 | 4.2901 |
| 0.4, NM | 10 | 1100 | 100 | 0.1040 | 4.3994 |

# Part 5: Experiments on Birmingham Data Set

In [14]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [15]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 85.3746

Iter: 400
RMSE: 82.764

Iter: 600
RMSE: 80.5529

Iter: 800
RMSE: 82.5179

Iter: 1000
RMSE: 85.4004

Imputation MAPE: 0.0787418
Imputation RMSE: 81.593

Running time: 576 seconds
In [16]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [17]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 30
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:35: RuntimeWarning: covariance is not symmetric positive-semidefinite.
Iter: 200
RMSE: 86.5213

Iter: 400
RMSE: 92.6908

Iter: 600
RMSE: 94.3497

Iter: 800
RMSE: 88.1774

Iter: 1000
RMSE: 97.1066

Imputation MAPE: 0.099453
Imputation RMSE: 83.8159

Running time: 571 seconds
In [18]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.1

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [19]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 33.9211

Iter: 400
RMSE: 33.8615

Iter: 600
RMSE: 34.1373

Iter: 800
RMSE: 34.0786

Iter: 1000
RMSE: 33.4816

Imputation MAPE: 0.131775
Imputation RMSE: 29.2774

Running time: 280 seconds
In [20]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [21]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 72.387

Iter: 400
RMSE: 68.8299

Iter: 600
RMSE: 66.6435

Iter: 800
RMSE: 67.5589

Iter: 1000
RMSE: 70.8601

Imputation MAPE: 0.147508
Imputation RMSE: 60.2924

Running time: 278 seconds

Experiment results of missing data imputation using BPMF:

| scenario | rank | maxiter1 | maxiter2 | mape | rmse |
|----------|------|----------|----------|--------|---------|
| 10%, RM | 30 | 1100 | 100 | 0.0787 | 81.593 |
| 30%, RM | 30 | 1100 | 100 | 0.0995 | 83.8159 |
| 10%, NM | 10 | 1100 | 100 | 0.1318 | 29.2774 |
| 30%, NM | 10 | 1100 | 100 | 0.1475 | 60.2924 |

# Part 6: Experiments on Hangzhou Data Set

In [22]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [23]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 52.8515

Iter: 400
RMSE: 52.2388

Iter: 600
RMSE: 52.4415

Iter: 800
RMSE: 53.973

Iter: 1000
RMSE: 51.7854

Imputation MAPE: 0.296279
Imputation RMSE: 41.8653

Running time: 2397 seconds
In [24]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = (np.round(random_tensor + 0.5 - missing_rate)
              .reshape([random_tensor.shape[0], random_tensor.shape[1] * random_tensor.shape[2]]))
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [25]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 57.7243

Iter: 400
RMSE: 55.2085

Iter: 600
RMSE: 56.3372

Iter: 800
RMSE: 56.3237

Iter: 1000
RMSE: 55.2162

Imputation MAPE: 0.328331
Imputation RMSE: 44.4621

Running time: 2394 seconds
In [26]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.2

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [27]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 72.8171

Iter: 400
RMSE: 70.9143

Iter: 600
RMSE: 71.9877

Iter: 800
RMSE: 73.1187

Iter: 1000
RMSE: 72.6543

Imputation MAPE: 0.36313
Imputation RMSE: 64.2751

Running time: 555 seconds
In [28]:
import scipy.io

# assumed dataset paths, following the transdim repository layout
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.4

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(tensor.shape)
for i1 in range(tensor.shape[0]):
    for i2 in range(tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1] * binary_tensor.shape[2]])
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [29]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 62.2081

Iter: 400
RMSE: 61.1341

Iter: 600
RMSE: 64.1261

Iter: 800
RMSE: 65.101

Iter: 1000
RMSE: 64.6031

Imputation MAPE: 0.364337
Imputation RMSE: 59.0373

Running time: 552 seconds

Experiment results of missing data imputation using BPMF:

| scenario | rank | maxiter1 | maxiter2 | mape | rmse |
|----------|------|----------|----------|--------|---------|
| 20%, RM | 50 | 1100 | 100 | 0.2963 | 41.8653 |
| 40%, RM | 50 | 1100 | 100 | 0.3283 | 44.4621 |
| 20%, NM | 10 | 1100 | 100 | 0.3631 | 64.2751 |
| 40%, NM | 10 | 1100 | 100 | 0.3643 | 59.0373 |

# Part 7: Experiments on Seattle Data Set

In [9]:
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values

missing_rate = 0.2

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [10]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.46412

Iter: 400
RMSE: 4.38932

Iter: 600
RMSE: 4.37362

Iter: 800
RMSE: 4.37091

Iter: 1000
RMSE: 4.36332

Imputation MAPE: 0.0651451
Imputation RMSE: 4.04333

Running time: 9192 seconds
In [11]:
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values

missing_rate = 0.4

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_mat = np.round(RM_mat + 0.5 - missing_rate)
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_mat)
In [12]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 50
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 4.76083

Iter: 400
RMSE: 4.70584

Iter: 600
RMSE: 4.67984

Iter: 800
RMSE: 4.66926

Iter: 1000
RMSE: 4.66515

Imputation MAPE: 0.0703029
Imputation RMSE: 4.28836

Running time: 8654 seconds
In [13]:
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values

missing_rate = 0.2

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
    for i2 in range(binary_tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
In [14]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 5.35812

Iter: 400
RMSE: 5.35851

Iter: 600
RMSE: 5.35796

Iter: 800
RMSE: 5.35949

Iter: 1000
RMSE: 5.36241

Imputation MAPE: 0.0911995
Imputation RMSE: 5.26532

Running time: 1740 seconds
In [15]:
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values

missing_rate = 0.4

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
    for i2 in range(binary_tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_mat = np.multiply(dense_mat, binary_tensor.reshape([dense_mat.shape[0], dense_mat.shape[1]]))
In [16]:
import time
start = time.time()
dim1, dim2 = sparse_mat.shape
rank = 10
init = {"W": 0.1 * np.random.rand(dim1, rank),
        "X": 0.1 * np.random.rand(dim2, rank)}
maxiter1 = 1100
maxiter2 = 100
BPMF(dense_mat, sparse_mat, init, rank, maxiter1, maxiter2)
end = time.time()
print('Running time: %d seconds'%(end - start))
Iter: 200
RMSE: 5.42698

Iter: 400
RMSE: 5.42502

Iter: 600
RMSE: 5.43372

Iter: 800
RMSE: 5.42601

Iter: 1000
RMSE: 5.43167

Imputation MAPE: 0.0918858
Imputation RMSE: 5.30468

Running time: 1733 seconds

Experiment results of missing data imputation using BPMF:

| scenario | rank | maxiter1 | maxiter2 | mape | rmse |
|----------|------|----------|----------|--------|--------|
| 20%, RM | 50 | 1100 | 100 | 0.0651 | 4.0433 |
| 40%, RM | 50 | 1100 | 100 | 0.0703 | 4.2884 |
| 20%, NM | 10 | 1100 | 100 | 0.0912 | 5.2653 |
| 40%, NM | 10 | 1100 | 100 | 0.0919 | 5.3047 |