BG/NBD Model#
In this notebook we show how to fit a BG/NBD model in PyMC-Marketing and compare the results with the lifetimes package (which is no longer maintained). The model was introduced in: Fader, P. S., Hardie, B. G. S., & Lee, K. L. (2005). "Counting your customers" the easy way: An alternative to the Pareto/NBD model. Marketing Science, 24(2), 275-284.
Prepare Notebook#
import arviz as az
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr
from fastprogress.fastprogress import progress_bar
from lifetimes import BetaGeoFitter
from pymc_marketing import clv
# Plotting configuration
az.style.use("arviz-darkgrid")
plt.rcParams["figure.figsize"] = [12, 7]
plt.rcParams["figure.dpi"] = 100
plt.rcParams["figure.facecolor"] = "white"
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = "retina"
Read Data#
We use the CDNOW dataset (see the lifetimes quick-start).
data_path = "https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/data/clv_quickstart.csv"
df = pd.read_csv(data_path)
df.head()
| | frequency | recency | T | monetary_value |
|---|---|---|---|---|
| 0 | 2 | 30.43 | 38.86 | 22.35 |
| 1 | 1 | 1.71 | 38.86 | 11.77 |
| 2 | 0 | 0.00 | 38.86 | 0.00 |
| 3 | 0 | 0.00 | 38.86 | 0.00 |
| 4 | 0 | 0.00 | 38.86 | 0.00 |
Recall the following definitions from the lifetimes documentation:
frequency
represents the number of repeat purchases the customer has made, i.e. roughly one less than the total number of purchases. More precisely, it is the count of time periods in which the customer made a purchase: with daily units, it is the number of distinct days on which the customer purchased.
T
represents the age of the customer in whatever time units were chosen (weekly, in the above dataset). This is equal to the duration between a customer's first purchase and the end of the period under study.
recency
represents the age of the customer when they made their most recent purchase. This is equal to the duration between a customer's first purchase and their latest purchase. (Thus if they have made only one purchase, the recency is 0.)
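To make these definitions concrete, here is a minimal sketch that derives frequency, recency, and T from a hypothetical toy transaction log using lifetimes' summary_data_from_transaction_data utility (the customer IDs and dates below are made up for illustration):
from lifetimes.utils import summary_data_from_transaction_data

# Hypothetical toy transaction log: customer 1 buys three times,
# customer 2 buys once.
transactions = pd.DataFrame(
    {
        "customer_id": [1, 1, 1, 2],
        "date": pd.to_datetime(
            ["2020-01-01", "2020-01-15", "2020-02-01", "2020-01-20"]
        ),
    }
)
rfm = summary_data_from_transaction_data(
    transactions,
    customer_id_col="customer_id",
    datetime_col="date",
    observation_period_end="2020-03-01",
    freq="W",  # weekly time units, as in the CDNOW dataset
)
# Customer 1 purchased in three distinct weekly periods, so frequency = 2
# (repeat periods only); customer 2 has frequency = 0 and recency = 0.
rfm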
Tip
We rename the index column to customer_id, as this is required by the model.
data = (
df.reset_index()
.rename(columns={"index": "customer_id"})
.drop(columns="monetary_value")
)
Model Specification#
The BG/NBD model is a probabilistic model that describes the buying behavior of customers in a non-contractual setting. It is based on the following assumptions for each customer:
Frequency Process#
While active, the time between transactions is exponentially distributed with transaction rate \(\lambda\), i.e.,
\[f(t_{j}|t_{j-1}; \lambda) = \lambda \exp(-\lambda (t_{j} - t_{j - 1})), \quad t_{j} \geq t_{j - 1} \geq 0\]

Heterogeneity in \(\lambda\) follows a gamma distribution with pdf
\[f(\lambda|r, \alpha) = \frac{\alpha^{r}\lambda^{r - 1}\exp(-\lambda \alpha)}{\Gamma(r)}, \quad \lambda > 0\]
Dropout Process#
After any transaction, a customer becomes inactive with probability \(p\).
Heterogeneity in \(p\) follows a beta distribution with pdf
\[f(p|a, b) = \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} p^{a - 1}(1 - p)^{b - 1}, \quad 0 \leq p \leq 1\]

The transaction rate \(\lambda\) and the dropout probability \(p\) vary independently across customers.
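To build intuition for these assumptions, here is a minimal simulation sketch of the generative story for a single customer. The parameter values are hypothetical, chosen only to be in the ballpark of the estimates obtained below:
# A minimal simulation sketch of the BG/NBD generative story for one
# customer. The parameter values are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(42)
r, alpha, a, b = 0.25, 4.5, 0.8, 2.4
T = 39.0  # length of the observation period (weeks)

lam = rng.gamma(shape=r, scale=1 / alpha)  # customer-level transaction rate
p = rng.beta(a, b)  # customer-level dropout probability

t, repeat_times = 0.0, []
alive = rng.random() >= p  # the customer may drop out after the first purchase
while alive:
    t += rng.exponential(scale=1 / lam)  # exponential inter-purchase time
    if t > T:
        break  # no further purchases observed within the window
    repeat_times.append(t)
    alive = rng.random() >= p  # dropout check after each transaction

frequency = len(repeat_times)
recency = repeat_times[-1] if repeat_times else 0.0
print(frequency, recency)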
Instead of estimating \(\lambda\) and \(p\) for each specific customer, we model them for a randomly chosen customer, i.e. we work with the expected values of the parameters. Hence, we are interested in finding the posterior distribution of the parameters \(r\), \(\alpha\), \(a\), and \(b\).
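In particular, the gamma and beta priors above imply the following population means:

\[\mathbb{E}[\lambda] = \frac{r}{\alpha} \qquad \text{and} \qquad \mathbb{E}[p] = \frac{a}{a + b}\]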
Model Fitting#
Estimating these parameters is straightforward in PyMC-Marketing. We instantiate the model as follows:
model = clv.BetaGeoModel(data=data)
And build the model to see the model configuration:
model.build_model()
model
BG/NBD
alpha ~ HalfFlat()
r ~ HalfFlat()
phi_dropout ~ Uniform(0, 1)
kappa_dropout ~ Pareto(1, 1)
a ~ Deterministic(f(kappa_dropout, phi_dropout))
b ~ Deterministic(f(kappa_dropout, phi_dropout))
likelihood ~ Potential(f(r, alpha, kappa_dropout, phi_dropout))
Notice the additional phi_dropout and kappa_dropout priors. These were added to the default configuration to improve sampling performance, but they can be omitted when specifying a custom model_config with a_prior and b_prior.
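For example, a custom configuration might look like the following sketch. The HalfNormal priors and their sigma value are purely hypothetical choices for illustration, not recommendations:
from pymc_marketing.prior import Prior

# Hypothetical custom priors for a and b (illustrative values only); with
# these set, phi_dropout and kappa_dropout are no longer added to the model.
custom_model_config = {
    "a_prior": Prior("HalfNormal", sigma=10),
    "b_prior": Prior("HalfNormal", sigma=10),
}
model_custom = clv.BetaGeoModel(data=data, model_config=custom_model_config)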
The specified model structure can also be visualized:
model.graphviz()
We can now fit the model. The default sampler in PyMC-Marketing is the No-U-Turn Sampler (NUTS). We use \(4\) chains and \(2000\) draws per chain, with a target acceptance rate of \(0.9\).
Note
It is not necessary to build the model before fitting it. We can fit the model directly.
sample_kwargs = {
"draws": 2_000,
"chains": 4,
"target_accept": 0.9,
"random_seed": 42,
}
idata_mcmc = model.fit(**sample_kwargs)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [alpha, r, phi_dropout, kappa_dropout]
Sampling 4 chains for 1_000 tune and 2_000 draw iterations (4_000 + 8_000 draws total) took 8 seconds.
idata_mcmc
posterior
<xarray.Dataset> Size: 400kB Dimensions: (chain: 4, draw: 2000) Coordinates: * chain (chain) int64 32B 0 1 2 3 * draw (draw) int64 16kB 0 1 2 3 4 5 ... 1995 1996 1997 1998 1999 Data variables: a (chain, draw) float64 64kB 0.9821 0.6442 ... 0.6165 0.5989 alpha (chain, draw) float64 64kB 5.145 4.934 4.44 ... 5.28 4.579 b (chain, draw) float64 64kB 3.448 2.011 2.359 ... 1.655 1.622 kappa_dropout (chain, draw) float64 64kB 4.43 2.655 3.206 ... 2.271 2.221 phi_dropout (chain, draw) float64 64kB 0.2217 0.2426 ... 0.2714 0.2697 r (chain, draw) float64 64kB 0.2587 0.2577 ... 0.2686 0.2526 Attributes: created_at: 2024-12-16T03:12:42.421912+00:00 arviz_version: 0.18.0 inference_library: pymc inference_library_version: 5.15.1 sampling_time: 8.389899015426636 tuning_steps: 1000
sample_stats
<xarray.Dataset> Size: 992kB Dimensions: (chain: 4, draw: 2000) Coordinates: * chain (chain) int64 32B 0 1 2 3 * draw (draw) int64 16kB 0 1 2 3 4 ... 1996 1997 1998 1999 Data variables: (12/17) acceptance_rate (chain, draw) float64 64kB 0.9927 0.837 ... 1.0 diverging (chain, draw) bool 8kB False False ... False False energy (chain, draw) float64 64kB 9.594e+03 ... 9.59e+03 energy_error (chain, draw) float64 64kB -0.1389 ... -0.09885 index_in_trajectory (chain, draw) int64 64kB 3 -3 -5 4 -3 ... 10 2 11 3 largest_eigval (chain, draw) float64 64kB nan nan nan ... nan nan ... ... process_time_diff (chain, draw) float64 64kB 0.002323 ... 0.002338 reached_max_treedepth (chain, draw) bool 8kB False False ... False False smallest_eigval (chain, draw) float64 64kB nan nan nan ... nan nan step_size (chain, draw) float64 64kB 0.4512 0.4512 ... 0.41 step_size_bar (chain, draw) float64 64kB 0.4845 0.4845 ... 0.4782 tree_depth (chain, draw) int64 64kB 3 3 3 3 3 2 ... 3 4 4 3 4 3 Attributes: created_at: 2024-12-16T03:12:42.432573+00:00 arviz_version: 0.18.0 inference_library: pymc inference_library_version: 5.15.1 sampling_time: 8.389899015426636 tuning_steps: 1000
fit_data
<xarray.Dataset> Size: 94kB Dimensions: (index: 2357) Coordinates: * index (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356 Data variables: customer_id (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356 frequency (index) int64 19kB 2 1 0 0 0 7 1 0 2 0 ... 7 1 2 0 0 0 5 0 4 0 recency (index) float64 19kB 30.43 1.71 0.0 0.0 ... 24.29 0.0 26.57 0.0 T (index) float64 19kB 38.86 38.86 38.86 38.86 ... 27.0 27.0 27.0
We can look into the summary table:
model.fit_summary()
| | mean | sd | hdi_3% | hdi_97% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
|---|---|---|---|---|---|---|---|---|---|
| a | 0.783 | 0.194 | 0.453 | 1.124 | 0.003 | 0.002 | 5932.0 | 5764.0 | 1.0 |
| alpha | 4.480 | 0.384 | 3.756 | 5.184 | 0.006 | 0.004 | 4225.0 | 4186.0 | 1.0 |
| b | 2.408 | 0.749 | 1.271 | 3.786 | 0.010 | 0.007 | 5240.0 | 4860.0 | 1.0 |
| kappa_dropout | 3.191 | 0.934 | 1.718 | 4.862 | 0.013 | 0.009 | 5343.0 | 4971.0 | 1.0 |
| phi_dropout | 0.249 | 0.020 | 0.212 | 0.285 | 0.000 | 0.000 | 5428.0 | 5177.0 | 1.0 |
| r | 0.244 | 0.012 | 0.220 | 0.267 | 0.000 | 0.000 | 4262.0 | 3689.0 | 1.0 |
We see that the r_hat values are close to \(1\), which indicates convergence.
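The diagnostics can also be queried programmatically. A small sketch of worst-case checks across the main parameters:
# Worst-case convergence diagnostics across the main parameters.
rhat = az.rhat(model.idata, var_names=["r", "alpha", "a", "b"])
ess = az.ess(model.idata, var_names=["r", "alpha", "a", "b"])
print(f"max r_hat: {rhat.to_array().max().item():.3f}")
print(f"min bulk ESS: {ess.to_array().min().item():.0f}")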
We can also plot posterior distributions of the parameters and the rank plots:
axes = az.plot_trace(
data=model.idata,
compact=True,
kind="rank_bars",
backend_kwargs={"figsize": (12, 9), "layout": "constrained"},
)
plt.gcf().suptitle("BG/NBD Model Trace", fontsize=18, fontweight="bold");
Using MAP fit#
CLV models such as BetaGeoModel can also provide maximum a posteriori (MAP) estimates, using a numerical optimizer (L-BFGS-B from scipy.optimize) under the hood.
model_map = clv.BetaGeoModel(data=data)
idata_map = model_map.fit(fit_method="map")
idata_map
posterior
<xarray.Dataset> Size: 64B Dimensions: (chain: 1, draw: 1) Coordinates: * chain (chain) int64 8B 0 * draw (draw) int64 8B 0 Data variables: a (chain, draw) float64 8B 0.7065 alpha (chain, draw) float64 8B 4.41 b (chain, draw) float64 8B 2.091 kappa_dropout (chain, draw) float64 8B 2.798 phi_dropout (chain, draw) float64 8B 0.2525 r (chain, draw) float64 8B 0.2426 Attributes: created_at: 2024-12-16T03:15:24.358631+00:00 arviz_version: 0.18.0 inference_library: pymc inference_library_version: 5.15.1
fit_data
<xarray.Dataset> Size: 94kB Dimensions: (index: 2357) Coordinates: * index (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356 Data variables: customer_id (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356 frequency (index) int64 19kB 2 1 0 0 0 7 1 0 2 0 ... 7 1 2 0 0 0 5 0 4 0 recency (index) float64 19kB 30.43 1.71 0.0 0.0 ... 24.29 0.0 26.57 0.0 T (index) float64 19kB 38.86 38.86 38.86 38.86 ... 27.0 27.0 27.0
This time we get point estimates for the parameters.
map_summary = model_map.fit_summary()
map_summary
a 0.706
alpha 4.410
b 2.091
kappa_dropout 2.798
phi_dropout 0.253
r 0.243
Name: value, dtype: float64
Comparing with the lifetimes package#
For the sake of comparison, let's fit the model using the lifetimes package.
bgf = BetaGeoFitter()
bgf.fit(
frequency=data["frequency"].values,
recency=data["recency"].values,
T=data["T"].values,
)
bgf.summary
| | coef | se(coef) | lower 95% bound | upper 95% bound |
|---|---|---|---|---|
| r | 0.242593 | 0.012557 | 0.217981 | 0.267205 |
| alpha | 4.413532 | 0.378221 | 3.672218 | 5.154846 |
| a | 0.792886 | 0.185719 | 0.428877 | 1.156895 |
| b | 2.425752 | 0.705345 | 1.043276 | 3.808229 |
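Before plotting, we can also put the three sets of estimates side by side (a small sketch; the exact values vary slightly from run to run):
# Align the MCMC posterior means, MAP estimates, and lifetimes coefficients
# on the four shared parameters.
comparison = pd.DataFrame(
    {
        "mcmc_mean": model.fit_summary()["mean"],
        "map": map_summary,
        "lifetimes": bgf.summary["coef"],
    }
).loc[["r", "alpha", "a", "b"]]
comparison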
fig, axes = plt.subplots(
nrows=2, ncols=2, figsize=(12, 9), sharex=False, sharey=False, layout="constrained"
)
axes = axes.flatten()
for i, var_name in enumerate(["r", "alpha", "a", "b"]):
ax = axes[i]
az.plot_posterior(
model.idata.posterior[var_name].values.flatten(),
color="C0",
point_estimate="mean",
ax=ax,
label="MCMC",
)
ax.axvline(x=map_summary[var_name], color="C1", linestyle="--", label="MAP")
ax.axvline(
x=bgf.summary["coef"][var_name], color="C2", linestyle="--", label="lifetimes"
)
ax.legend(loc="upper right")
ax.set_title(var_name)
plt.gcf().suptitle("BG/NBD Model Parameters", fontsize=18, fontweight="bold");
The r and alpha purchase-rate parameters are quite similar across all three fits, but the a and b dropout parameters are better recovered by the MCMC fit with the default priors.
Some Applications#
Now that we have fitted the model, we can use it to make predictions. For example, we can predict the expected number of future purchases and the probability of a customer being alive as a function of time (steps).
Expected Number of Purchases#
Let us take a sample of users:
example_customer_ids = [1, 6, 10, 18, 45, 1412]
data_small = data.query("customer_id.isin(@example_customer_ids)")
data_small.head(6)
| | customer_id | frequency | recency | T |
|---|---|---|---|---|
| 1 | 1 | 1 | 1.71 | 38.86 |
| 6 | 6 | 1 | 5.00 | 38.86 |
| 10 | 10 | 5 | 24.43 | 38.86 |
| 18 | 18 | 3 | 28.29 | 38.71 |
| 45 | 45 | 12 | 34.43 | 38.57 |
| 1412 | 1412 | 14 | 30.29 | 31.57 |
Observe that the last two customers are frequent buyers as compared to the others.
steps = 90
expected_num_purchases_steps = xr.concat(
objs=[
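        # expected number of purchases over the next t periods,
        # per posterior draw and customer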
model.expected_purchases(
data=data_small,
future_t=t,
)
for t in progress_bar(range(steps))
],
dim="t",
).transpose(..., "t")
We can plot the expected number of purchases for the next \(90\) periods:
fig, axes = plt.subplots(
nrows=len(example_customer_ids),
ncols=1,
figsize=(12, 15),
sharex=True,
sharey=True,
layout="constrained",
)
axes = axes.flatten()
for i, customer_id in enumerate(example_customer_ids):
ax = axes[i]
customer_expected_num_purchases_steps = expected_num_purchases_steps.sel(
customer_id=customer_id
)
az.plot_hdi(
range(steps),
customer_expected_num_purchases_steps,
hdi_prob=0.94,
color="C0",
fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
ax=ax,
)
az.plot_hdi(
range(steps),
customer_expected_num_purchases_steps,
hdi_prob=0.5,
color="C0",
fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
ax=ax,
)
ax.plot(
range(steps),
customer_expected_num_purchases_steps.mean(dim=("chain", "draw")),
color="C0",
label="posterior mean",
)
ax.legend(loc="upper left")
ax.set(title=f"Customer {customer_id}", xlabel="t", ylabel="purchases")
axes[-1].set(xlabel="steps")
plt.gcf().suptitle("Expected Number of Purchases", fontsize=18, fontweight="bold");
Note that the frequent buyers are expected to make more purchases in the future.
Probability of a Customer Being Alive#
We now look into the probability of a customer being alive for the next \(90\) periods:
steps = 90
future_alive_all = []
for t in progress_bar(range(steps)):
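    # assume no purchases in (T, T + t]: age each customer forward by t
    # periods while keeping frequency and recency fixed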
future_data = data_small.copy()
future_data["T"] = future_data["T"] + t
future_alive = model.expected_probability_alive(data=future_data)
future_alive_all.append(future_alive)
expected_probability_alive_steps = xr.concat(
objs=future_alive_all,
dim="t",
).transpose(..., "t")
fig, axes = plt.subplots(
nrows=len(example_customer_ids),
ncols=1,
figsize=(12, 15),
sharex=True,
sharey=True,
layout="constrained",
)
axes = axes.flatten()
for i, customer_id in enumerate(example_customer_ids):
ax = axes[i]
customer_expected_probability_alive_steps = expected_probability_alive_steps.sel(
customer_id=customer_id
)
az.plot_hdi(
range(steps),
customer_expected_probability_alive_steps,
hdi_prob=0.94,
color="C1",
fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
ax=ax,
)
az.plot_hdi(
range(steps),
customer_expected_probability_alive_steps,
hdi_prob=0.5,
color="C1",
fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
ax=ax,
)
ax.plot(
range(steps),
customer_expected_probability_alive_steps.mean(dim=("chain", "draw")),
color="C1",
label="posterior mean",
)
ax.legend(loc="upper right")
ax.set(title=f"Customer {customer_id}", ylabel="probability alive", ylim=(0, 1))
axes[-1].set(xlabel="steps")
plt.gcf().suptitle(
"Expected Probability Alive over Time", fontsize=18, fontweight="bold"
);
Tip
Here are some general remarks:
These plots assume no future purchases.
The rate of decay is not the same for every customer, as it depends on the customer's purchase history.
The probability of being alive is always decreasing, since we assume no change in the other parameters.
These probabilities are always non-negative, as expected.
Warning
For the frequent buyers, the probability of being alive drops very fast as we are assuming no future purchases. It is very important to keep this in mind when interpreting the results.
Probability of a Customer Making Zero Purchases in a Time Range#
We now look into the probability of a customer making zero purchases over the next \(t\) periods, for \(t\) between \(0\) and \(30\).
steps = 30
expected_probability_zero_purchases = xr.concat(
objs=[
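        # probability of zero purchases in (T, T + t], per posterior draw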
model.expected_probability_no_purchase(
data=data_small,
t=t,
)
for t in progress_bar(range(steps))
],
dim="t",
).transpose(..., "t")
fig, axes = plt.subplots(
nrows=len(example_customer_ids),
ncols=1,
figsize=(12, 15),
sharex=True,
sharey=True,
layout="constrained",
)
axes = axes.flatten()
for i, customer_id in enumerate(example_customer_ids):
ax = axes[i]
customer_expected_probability_zero_purchases = (
expected_probability_zero_purchases.sel(customer_id=customer_id)
)
az.plot_hdi(
range(steps),
customer_expected_probability_zero_purchases,
hdi_prob=0.94,
color="C1",
fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
ax=ax,
)
az.plot_hdi(
range(steps),
customer_expected_probability_zero_purchases,
hdi_prob=0.5,
color="C1",
fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
ax=ax,
)
ax.plot(
range(steps),
customer_expected_probability_zero_purchases.mean(dim=("chain", "draw")),
color="C1",
label="posterior mean",
)
ax.legend(loc="upper right")
ax.set(title=f"Customer {customer_id}", ylabel="Probability", ylim=(0, 1))
axes[-1].set(xlabel="steps")
plt.gcf().suptitle(
"Expected Probability Zero Purchases between $(T, T+t]$.",
fontsize=18,
fontweight="bold",
);
%reload_ext watermark
%watermark -n -u -v -iv -w -p pymc,pytensor
Last updated: Mon Dec 16 2024
Python implementation: CPython
Python version : 3.10.14
IPython version : 8.22.2
pymc : 5.15.1
pytensor: 2.22.1
arviz : 0.18.0
pandas : 2.2.2
pymc_marketing: 0.10.0
matplotlib : 3.8.4
xarray : 2024.10.0
Watermark: 2.4.3