BG/NBD Model#

In this notebook we show how to fit a BG/NBD model in PyMC-Marketing and compare the results with the lifetimes package (which is no longer maintained). The model is presented in the paper: Fader, P. S., Hardie, B. G., & Lee, K. L. (2005). “Counting your customers” the easy way: An alternative to the Pareto/NBD model. Marketing Science, 24(2), 275-284.

Prepare Notebook#

import arviz as az
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr
from fastprogress.fastprogress import progress_bar
from lifetimes import BetaGeoFitter

from pymc_marketing import clv

# Plotting configuration
az.style.use("arviz-darkgrid")
plt.rcParams["figure.figsize"] = [12, 7]
plt.rcParams["figure.dpi"] = 100
plt.rcParams["figure.facecolor"] = "white"

%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = "retina"

Read Data#

We use the CDNOW dataset (see lifetimes quick-start).

data_path = "https://raw.githubusercontent.com/pymc-labs/pymc-marketing/main/data/clv_quickstart.csv"

df = pd.read_csv(data_path)

df.head()
   frequency  recency      T  monetary_value
0          2    30.43  38.86           22.35
1          1     1.71  38.86           11.77
2          0     0.00  38.86            0.00
3          0     0.00  38.86            0.00
4          0     0.00  38.86            0.00

Recall the following definitions from the lifetimes documentation (a sketch showing how to compute these quantities from a raw transaction log follows the list):

  • frequency represents the number of repeat purchases the customer has made, i.e. one less than the total number of purchases. More precisely, it is the number of distinct time periods, after the period of the first purchase, in which the customer made a purchase. So if using days as units, it is the number of days on which the customer purchased, excluding the day of the first purchase.

  • T represents the age of the customer in whatever time units were chosen (weeks, in the above dataset). This is the duration between a customer’s first purchase and the end of the period under study.

  • recency represents the age of the customer at the time of their most recent purchase. This is the duration between a customer’s first purchase and their latest purchase. (Thus if they have made only one purchase, the recency is 0.)
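If you start from a raw transaction log rather than a pre-computed summary, these quantities can be derived with a short groupby. Below is a minimal sketch; the transactions DataFrame, its columns, and the observation_end date are hypothetical, and the time unit is weeks to match the dataset above. In practice, the lifetimes.utils.summary_data_from_transaction_data helper performs this computation for you.

import pandas as pd

# Hypothetical raw transaction log: one row per purchase day
transactions = pd.DataFrame(
    {
        "customer_id": [1, 1, 1, 2],
        "date": pd.to_datetime(
            ["1997-01-01", "1997-01-15", "1997-03-01", "1997-02-01"]
        ),
    }
)
observation_end = pd.Timestamp("1997-09-30")  # end of the period under study

grouped = transactions.groupby("customer_id")["date"]
summary = pd.DataFrame(
    {
        # repeat purchases: distinct purchase days minus the first one
        "frequency": grouped.nunique() - 1,
        # weeks from first to last purchase
        "recency": (grouped.max() - grouped.min()).dt.days / 7,
        # weeks from first purchase to the end of the observation period
        "T": (observation_end - grouped.min()).dt.days / 7,
    }
).reset_index()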

Tip

We rename the index column to customer_id, as this is required by the model.

data = (
    df.reset_index()
    .rename(columns={"index": "customer_id"})
    .drop(columns="monetary_value")
)

Model Specification#

The BG/NBD model is a probabilistic model that describes the buying behavior of a customer in the non-contractual setting. It is based on the following assumptions for each customer:

Frequency Process#

  1. While active, the time between transactions is exponentially distributed with transaction rate \(\lambda\), i.e.,

    \[f(t_{j}|t_{j-1}; \lambda) = \lambda \exp(-\lambda (t_{j} - t_{j - 1})), \quad t_{j} \geq t_{j - 1} \geq 0\]
  2. Heterogeneity in \(\lambda\) follows a gamma distribution with pdf

    \[f(\lambda|r, \alpha) = \frac{\alpha^{r}\lambda^{r - 1}\exp(-\lambda \alpha)}{\Gamma(r)}, \quad \lambda > 0\]

Dropout Process#

  1. After any transaction, a customer becomes inactive with probability \(p\).

  2. Heterogeneity in \(p\) follows a beta distribution with pdf

    \[f(p|a, b) = \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} p^{a - 1}(1 - p)^{b - 1}, \quad 0 \leq p \leq 1\]
  3. The transaction rate \(\lambda\) and the dropout probability \(p\) vary independently across customers.
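Combining these assumptions yields the individual-level likelihood derived in the paper: a customer with \(x\) repeat purchases, last purchase at \(t_x\), and observation length \(T\) either survived all \(x\) transactions and remained active through \(T\) (first term), or became inactive immediately after the purchase at \(t_x\) (second term):

\[\mathcal{L}(\lambda, p | x, t_{x}, T) = (1 - p)^{x} \lambda^{x} e^{-\lambda T} + \delta_{x > 0} \, p \, (1 - p)^{x - 1} \lambda^{x} e^{-\lambda t_{x}}\]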

Instead of estimating \(\lambda\) and \(p\) for each individual customer, we model them for a randomly chosen customer, i.e. we work with the population-level distributions of these parameters. Hence, we are interested in finding the posterior distribution of the parameters \(r\), \(\alpha\), \(a\), and \(b\).
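To make the generative story concrete, here is a minimal NumPy sketch that simulates a single customer from the model. The parameter values are illustrative, not fitted ones:

import numpy as np

rng = np.random.default_rng(42)

# Illustrative population-level parameters (not the fitted values)
r, alpha, a, b = 0.25, 4.5, 0.8, 2.4
T = 39.0  # length of the observation window (weeks)

# Draw this customer's latent traits
lam = rng.gamma(shape=r, scale=1 / alpha)  # transaction rate lambda
p = rng.beta(a, b)  # dropout probability after each purchase

# Simulate repeat purchases until dropout or the end of the window
t, purchase_times = 0.0, []
while True:
    t += rng.exponential(scale=1 / lam)  # exponential inter-purchase time
    if t > T:
        break
    purchase_times.append(t)
    if rng.uniform() < p:  # customer becomes inactive after this purchase
        break

frequency = len(purchase_times)
recency = purchase_times[-1] if purchase_times else 0.0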

Model Fitting#

Estimating these parameters is straightforward in PyMC-Marketing. We instantiate the model by passing the data:

model = clv.BetaGeoModel(data=data)

And build the model to see the model configuration:

model.build_model()
model
BG/NBD
            alpha ~ Weibull(2, 10)
                r ~ Weibull(2, 1)
      phi_dropout ~ Uniform(0, 1)
    kappa_dropout ~ Pareto(1, 1)
                a ~ Deterministic(f(kappa_dropout, phi_dropout))
                b ~ Deterministic(f(kappa_dropout, phi_dropout))
recency_frequency ~ BetaGeoNBD(a, b, r, alpha, <constant>)

Notice the additional phi_dropout and kappa_dropout priors. These were added to the default configuration to improve sampling performance, but they can be omitted by specifying a custom model_config that places priors on a and b directly.
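Here is a sketch of such a custom configuration. It assumes the Prior class from pymc_marketing.prior and the a_prior / b_prior key names; check the BetaGeoModel API documentation for the exact keys in your version:

from pymc_marketing.prior import Prior

# Hypothetical custom configuration with direct priors on a and b
# (key names assumed; consult the BetaGeoModel docs for your version)
custom_model_config = {
    "a_prior": Prior("HalfNormal", sigma=10),
    "b_prior": Prior("HalfNormal", sigma=10),
}

custom_model = clv.BetaGeoModel(data=data, model_config=custom_model_config)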

The specified model structure can also be visualized:

model.graphviz()
[Figure: graphviz diagram of the BG/NBD model structure]

We can now fit the model. The default sampler in PyMC-Marketing is the No-U-Turn Sampler (NUTS). We use \(4\) chains and \(2000\) draws per chain, with a target acceptance rate of \(0.9\).

Note

It is not necessary to build the model before fitting it. We can fit the model directly.

sample_kwargs = {
    "draws": 2_000,
    "chains": 4,
    "target_accept": 0.9,
    "random_seed": 42,
}

idata_mcmc = model.fit(**sample_kwargs)
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [alpha, r, phi_dropout, kappa_dropout]

Sampling 4 chains for 1_000 tune and 2_000 draw iterations (4_000 + 8_000 draws total) took 7 seconds.
idata_mcmc
arviz.InferenceData
    • <xarray.Dataset> Size: 400kB
      Dimensions:        (chain: 4, draw: 2000)
      Coordinates:
        * chain          (chain) int64 32B 0 1 2 3
        * draw           (draw) int64 16kB 0 1 2 3 4 5 ... 1995 1996 1997 1998 1999
      Data variables:
          alpha          (chain, draw) float64 64kB 4.058 4.701 4.601 ... 4.295 4.038
          r              (chain, draw) float64 64kB 0.234 0.2538 ... 0.2478 0.2458
          phi_dropout    (chain, draw) float64 64kB 0.2443 0.2045 ... 0.2781 0.2799
          kappa_dropout  (chain, draw) float64 64kB 4.042 5.491 5.06 ... 1.721 2.71
          a              (chain, draw) float64 64kB 0.9874 1.123 ... 0.4784 0.7584
          b              (chain, draw) float64 64kB 3.055 4.368 3.903 ... 1.242 1.952
      Attributes:
          created_at:                 2025-02-05T16:41:01.867859+00:00
          arviz_version:              0.20.0
          inference_library:          pymc
          inference_library_version:  5.19.1
          sampling_time:              6.9851062297821045
          tuning_steps:               1000

    • <xarray.Dataset> Size: 992kB
      Dimensions:                (chain: 4, draw: 2000)
      Coordinates:
        * chain                  (chain) int64 32B 0 1 2 3
        * draw                   (draw) int64 16kB 0 1 2 3 4 ... 1996 1997 1998 1999
      Data variables: (12/17)
          perf_counter_diff      (chain, draw) float64 64kB 0.001831 ... 0.00184
          step_size_bar          (chain, draw) float64 64kB 0.4822 0.4822 ... 0.4736
          reached_max_treedepth  (chain, draw) bool 8kB False False ... False False
          perf_counter_start     (chain, draw) float64 64kB 7.468e+05 ... 7.468e+05
          lp                     (chain, draw) float64 64kB -9.59e+03 ... -9.592e+03
          tree_depth             (chain, draw) int64 64kB 3 4 2 4 3 4 ... 3 3 3 3 2 3
          ...                     ...
          acceptance_rate        (chain, draw) float64 64kB 0.9899 0.8692 ... 0.6503
          max_energy_error       (chain, draw) float64 64kB -0.4589 0.2942 ... 1.132
          energy_error           (chain, draw) float64 64kB -0.4005 0.1005 ... 0.05883
          energy                 (chain, draw) float64 64kB 9.593e+03 ... 9.597e+03
          process_time_diff      (chain, draw) float64 64kB 0.001831 ... 0.00184
          step_size              (chain, draw) float64 64kB 0.4655 0.4655 ... 0.4531
      Attributes:
          created_at:                 2025-02-05T16:41:01.879643+00:00
          arviz_version:              0.20.0
          inference_library:          pymc
          inference_library_version:  5.19.1
          sampling_time:              6.9851062297821045
          tuning_steps:               1000

    • <xarray.Dataset> Size: 57kB
      Dimensions:            (customer_id: 2357, obs_var: 2)
      Coordinates:
        * customer_id        (customer_id) int64 19kB 0 1 2 3 ... 2353 2354 2355 2356
        * obs_var            (obs_var) <U9 72B 'recency' 'frequency'
      Data variables:
          recency_frequency  (customer_id, obs_var) float64 38kB 30.43 2.0 ... 0.0 0.0
      Attributes:
          created_at:                 2025-02-05T16:41:01.886772+00:00
          arviz_version:              0.20.0
          inference_library:          pymc
          inference_library_version:  5.19.1

    • <xarray.Dataset> Size: 94kB
      Dimensions:      (index: 2357)
      Coordinates:
        * index        (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356
      Data variables:
          customer_id  (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356
          frequency    (index) int64 19kB 2 1 0 0 0 7 1 0 2 0 ... 7 1 2 0 0 0 5 0 4 0
          recency      (index) float64 19kB 30.43 1.71 0.0 0.0 ... 24.29 0.0 26.57 0.0
          T            (index) float64 19kB 38.86 38.86 38.86 38.86 ... 27.0 27.0 27.0

We can look into the summary table:

model.fit_summary()
                mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
alpha          4.504  0.374   3.810    5.201      0.006    0.004    4154.0    5019.0    1.0
r              0.245  0.012   0.222    0.268      0.000    0.000    4217.0    4680.0    1.0
phi_dropout    0.248  0.020   0.212    0.287      0.000    0.000    5004.0    4984.0    1.0
kappa_dropout  3.198  0.932   1.734    4.918      0.013    0.010    5262.0    4991.0    1.0
a              0.782  0.192   0.470    1.148      0.002    0.002    6141.0    5313.0    1.0
b              2.416  0.750   1.237    3.775      0.011    0.008    5096.0    4821.0    1.0

We see that the r_hat values are close to \(1\), which indicates convergence.
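If you only need the convergence diagnostics, ArviZ also exposes them directly:

# Convergence diagnostics computed directly with ArviZ
az.rhat(model.idata.posterior)
az.ess(model.idata.posterior, method="bulk")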

We can also plot posterior distributions of the parameters and the rank plots:

axes = az.plot_trace(
    data=model.idata,
    compact=True,
    kind="rank_bars",
    backend_kwargs={"figsize": (12, 9), "layout": "constrained"},
)
plt.gcf().suptitle("BG/NBD Model Trace", fontsize=18, fontweight="bold");
[Figure: BG/NBD model trace with rank plots]

Using MAP fit#

CLV models such as BetaGeoModel can also provide maximum a posteriori (MAP) estimates, using a numerical optimizer (L-BFGS-B from scipy.optimize) under the hood.

model_map = clv.BetaGeoModel(data=data)
idata_map = model_map.fit(method="map")

idata_map
arviz.InferenceData
    • <xarray.Dataset> Size: 64B
      Dimensions:        (chain: 1, draw: 1)
      Coordinates:
        * chain          (chain) int64 8B 0
        * draw           (draw) int64 8B 0
      Data variables:
          alpha          (chain, draw) float64 8B 4.444
          r              (chain, draw) float64 8B 0.2437
          phi_dropout    (chain, draw) float64 8B 0.2522
          kappa_dropout  (chain, draw) float64 8B 2.797
          a              (chain, draw) float64 8B 0.7057
          b              (chain, draw) float64 8B 2.092
      Attributes:
          created_at:                 2025-02-05T16:41:07.861972+00:00
          arviz_version:              0.20.0
          inference_library:          pymc
          inference_library_version:  5.19.1

    • <xarray.Dataset> Size: 57kB
      Dimensions:            (customer_id: 2357, obs_var: 2)
      Coordinates:
        * customer_id        (customer_id) int64 19kB 0 1 2 3 ... 2353 2354 2355 2356
        * obs_var            (obs_var) <U9 72B 'recency' 'frequency'
      Data variables:
          recency_frequency  (customer_id, obs_var) float64 38kB 30.43 2.0 ... 0.0 0.0
      Attributes:
          created_at:                 2025-02-05T16:41:07.865199+00:00
          arviz_version:              0.20.0
          inference_library:          pymc
          inference_library_version:  5.19.1

    • <xarray.Dataset> Size: 94kB
      Dimensions:      (index: 2357)
      Coordinates:
        * index        (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356
      Data variables:
          customer_id  (index) int64 19kB 0 1 2 3 4 5 ... 2352 2353 2354 2355 2356
          frequency    (index) int64 19kB 2 1 0 0 0 7 1 0 2 0 ... 7 1 2 0 0 0 5 0 4 0
          recency      (index) float64 19kB 30.43 1.71 0.0 0.0 ... 24.29 0.0 26.57 0.0
          T            (index) float64 19kB 38.86 38.86 38.86 38.86 ... 27.0 27.0 27.0

This time we get point estimates for the parameters.

map_summary = model_map.fit_summary()

map_summary
alpha            4.444
r                0.244
phi_dropout      0.252
kappa_dropout    2.797
a                0.706
b                2.092
Name: value, dtype: float64

Comparing with the lifetimes package#

For the sake of comparison, let’s fit the model using the lifetimes package.

bgf = BetaGeoFitter()
bgf.fit(
    frequency=data["frequency"].values,
    recency=data["recency"].values,
    T=data["T"].values,
)

bgf.summary
           coef  se(coef)  lower 95% bound  upper 95% bound
r      0.242593  0.012557         0.217981         0.267205
alpha  4.413532  0.378221         3.672218         5.154846
a      0.792886  0.185719         0.428877         1.156895
b      2.425752  0.705345         1.043276         3.808229
fig, axes = plt.subplots(
    nrows=2, ncols=2, figsize=(12, 9), sharex=False, sharey=False, layout="constrained"
)

axes = axes.flatten()

for i, var_name in enumerate(["r", "alpha", "a", "b"]):
    ax = axes[i]
    az.plot_posterior(
        model.idata.posterior[var_name].values.flatten(),
        color="C0",
        point_estimate="mean",
        ax=ax,
        label="MCMC",
    )
    ax.axvline(x=map_summary[var_name], color="C1", linestyle="--", label="MAP")
    ax.axvline(
        x=bgf.summary["coef"][var_name], color="C2", linestyle="--", label="lifetimes"
    )
    ax.legend(loc="upper right")
    ax.set_title(var_name)

plt.gcf().suptitle("BG/NBD Model Parameters", fontsize=18, fontweight="bold");
[Figure: posterior distributions of r, alpha, a, and b, with MAP and lifetimes estimates overlaid]

The r and alpha purchase-rate parameters are quite similar for all three fits, whereas the a and b dropout parameters are better approximated by the MCMC fit with the default priors.
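To make the comparison explicit, we can collect the three sets of estimates into a single DataFrame (posterior means from MCMC, the MAP point estimates, and the lifetimes coefficients):

# Side-by-side comparison of the shared parameters across the three fits
params = ["r", "alpha", "a", "b"]

comparison = pd.DataFrame(
    {
        "mcmc_mean": model.fit_summary()["mean"].loc[params],
        "map": map_summary.loc[params],
        "lifetimes": bgf.summary["coef"].loc[params],
    }
)
comparison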

Prior and Posterior Predictive Checks#

PPCs allow us to check the efficacy of our priors and the performance of the fitted posteriors.

Let’s see how the model performs in a prior predictive check, where we sample from the default priors before fitting the model:

# PPC histogram plot
clv.plot_expected_purchases_ppc(model, ppc="prior");
Sampling: [alpha, kappa_dropout, phi_dropout, r, recency_frequency]
[Figure: prior predictive check of expected purchases]
clv.plot_expected_purchases_ppc(model, ppc="posterior");
Sampling: [recency_frequency]

[Figure: posterior predictive check of expected purchases]

Some Applications#

Now that we have fitted the model, we can use it to make predictions. For example, we can compute the expected number of future purchases and the probability of a customer being alive as a function of time (steps).

Expected Number of Purchases#

Let us take a sample of users:

example_customer_ids = [1, 6, 10, 18, 45, 1412]

data_small = data.query("customer_id.isin(@example_customer_ids)")

data_small.head(6)
      customer_id  frequency  recency      T
1               1          1     1.71  38.86
6               6          1     5.00  38.86
10             10          5    24.43  38.86
18             18          3    28.29  38.71
45             45         12    34.43  38.57
1412         1412         14    30.29  31.57

Observe that the last two customers are frequent buyers compared to the others.

steps = 90

expected_num_purchases_steps = xr.concat(
    objs=[
        model.expected_purchases(
            data=data_small,
            future_t=t,
        )
        for t in progress_bar(range(steps))
    ],
    dim="t",
).transpose(..., "t")
100.00% [90/90 00:01<00:00]

We can plot the expected number of purchases for the next \(90\) periods:

fig, axes = plt.subplots(
    nrows=len(example_customer_ids),
    ncols=1,
    figsize=(12, 15),
    sharex=True,
    sharey=True,
    layout="constrained",
)

axes = axes.flatten()

for i, customer_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_expected_num_purchases_steps = expected_num_purchases_steps.sel(
        customer_id=customer_id
    )
    az.plot_hdi(
        range(steps),
        customer_expected_num_purchases_steps,
        hdi_prob=0.94,
        color="C0",
        fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
        ax=ax,
    )
    az.plot_hdi(
        range(steps),
        customer_expected_num_purchases_steps,
        hdi_prob=0.5,
        color="C0",
        fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
        ax=ax,
    )
    ax.plot(
        range(steps),
        customer_expected_num_purchases_steps.mean(dim=("chain", "draw")),
        color="C0",
        label="posterior mean",
    )
    ax.legend(loc="upper left")
    ax.set(title=f"Customer {customer_id}", xlabel="t", ylabel="purchases")

axes[-1].set(xlabel="steps")
plt.gcf().suptitle("Expected Number of Purchases", fontsize=18, fontweight="bold");
[Figure: expected number of purchases over the next 90 steps for the sample customers]

Note that the frequent buyers are expected to make more purchases in the future.
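To quantify this, we can extract the posterior-mean expected purchases at the final step for each customer:

# Posterior-mean expected purchases at the last step, per customer
final_expected = (
    expected_num_purchases_steps.isel(t=-1)
    .mean(dim=("chain", "draw"))
    .to_pandas()
)
final_expected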

Probability of a Customer Being Alive#

We now look into the probability of a customer being alive for the next \(90\) periods:

steps = 90

future_alive_all = []

for t in progress_bar(range(steps)):
    future_data = data_small.copy()
    future_data["T"] = future_data["T"] + t
    future_alive = model.expected_probability_alive(data=future_data)
    future_alive_all.append(future_alive)

expected_probability_alive_steps = xr.concat(
    objs=future_alive_all,
    dim="t",
).transpose(..., "t")
100.00% [90/90 00:00<00:00]
fig, axes = plt.subplots(
    nrows=len(example_customer_ids),
    ncols=1,
    figsize=(12, 15),
    sharex=True,
    sharey=True,
    layout="constrained",
)

axes = axes.flatten()

for i, customer_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_expected_probability_alive_steps = expected_probability_alive_steps.sel(
        customer_id=customer_id
    )
    az.plot_hdi(
        range(steps),
        customer_expected_probability_alive_steps,
        hdi_prob=0.94,
        color="C1",
        fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
        ax=ax,
    )
    az.plot_hdi(
        range(steps),
        customer_expected_probability_alive_steps,
        hdi_prob=0.5,
        color="C1",
        fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
        ax=ax,
    )
    ax.plot(
        range(steps),
        customer_expected_probability_alive_steps.mean(dim=("chain", "draw")),
        color="C1",
        label="posterior mean",
    )
    ax.legend(loc="upper right")
    ax.set(title=f"Customer {customer_id}", ylabel="probability alive", ylim=(0, 1))

axes[-1].set(xlabel="steps")
plt.gcf().suptitle(
    "Expected Probability Alive over Time", fontsize=18, fontweight="bold"
);
[Figure: expected probability alive over time for the sample customers]

Tip

Here are some general remarks:

  • These plots assume no future purchases.

  • The decay rate is not the same for every customer, as it depends on each customer's purchase history.

  • The probability of being alive is always decreasing, since we assume no future purchases and no change in the other parameters (see the quick check after this list).

  • These probabilities are always non-negative, as expected.
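As a quick sanity check on the last two points, we can verify both properties directly on the computed array:

# The posterior-mean curves should be non-increasing and non-negative
mean_alive = expected_probability_alive_steps.mean(dim=("chain", "draw"))
assert bool((mean_alive.diff(dim="t") <= 1e-12).all())
assert bool((mean_alive >= 0).all())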

Warning

For the frequent buyers, the probability of being alive drops very fast as we are assuming no future purchases. It is very important to keep this in mind when interpreting the results.

Probability of a Customer Making Zero Purchases in a Time Range#

We now look into the probability of a customer making zero purchases in the interval \((T, T + t]\), for \(t\) between 0 and 30.

steps = 30
expected_probability_zero_purchases = xr.concat(
    objs=[
        model.expected_probability_no_purchase(
            data=data_small,
            t=t,
        )
        for t in progress_bar(range(steps))
    ],
    dim="t",
).transpose(..., "t")
100.00% [30/30 00:00<00:00]
fig, axes = plt.subplots(
    nrows=len(example_customer_ids),
    ncols=1,
    figsize=(12, 15),
    sharex=True,
    sharey=True,
    layout="constrained",
)

axes = axes.flatten()

for i, customer_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_expected_probability_zero_purchases = (
        expected_probability_zero_purchases.sel(customer_id=customer_id)
    )
    az.plot_hdi(
        range(steps),
        customer_expected_probability_zero_purchases,
        hdi_prob=0.94,
        color="C1",
        fill_kwargs={"alpha": 0.3, "label": "$94 \\%$ HDI"},
        ax=ax,
    )
    az.plot_hdi(
        range(steps),
        customer_expected_probability_zero_purchases,
        hdi_prob=0.5,
        color="C1",
        fill_kwargs={"alpha": 0.5, "label": "$50 \\%$ HDI"},
        ax=ax,
    )
    ax.plot(
        range(steps),
        customer_expected_probability_zero_purchases.mean(dim=("chain", "draw")),
        color="C1",
        label="posterior mean",
    )
    ax.legend(loc="upper right")
    ax.set(title=f"Customer {customer_id}", ylabel="Probability", ylim=(0, 1))

axes[-1].set(xlabel="steps")
plt.gcf().suptitle(
    "Expected Probability Zero Purchases between $(T, T+t]$.",
    fontsize=18,
    fontweight="bold",
);
[Figure: expected probability of zero purchases between $(T, T+t]$ for the sample customers]
%reload_ext watermark
%watermark -n -u -v -iv -w -p pymc,pytensor
Last updated: Wed Feb 05 2025

Python implementation: CPython
Python version       : 3.10.16
IPython version      : 8.31.0

pymc    : 5.19.1
pytensor: 2.26.4

arviz         : 0.20.0
pandas        : 2.2.3
pymc_marketing: 0.11.0
lifetimes     : 0.11.3
xarray        : 2025.1.1
matplotlib    : 3.10.0
fastprogress  : 1.0.3

Watermark: 2.5.0