
@s3alfisc commented on Dec 20, 2025

Hi @drbenvincent ,

Attached is a very rough draft of a PR (written with lots of support from Claude) that enables maketables support for CausalPy via the new plug-in solution @dsliwka implemented.
An example notebook is attached as well.

For the RDD class, for example, you get something like this:
[screenshot: rendered coefficient table for an RDD model]

Would you generally be interested in merging this PR?

Where I'd need your help: which coefficients should be reported by default? Which statistics might users be interested in beyond the defaults? Is my choice of 94% probability intervals a good one? Etc.

Note that this PR is more of a proof of concept and might need some cleaning up and unit tests. If this makes it into CausalPy, we should likely also implement tests over there to ensure we never break CausalPy flows.

Best, Alex


📚 Documentation preview 📚: https://causalpy--600.org.readthedocs.build/en/600/



cursor bot commented Dec 20, 2025

PR Summary

Introduce maketables plug-in dunder methods on BaseExperiment (coef table, stats, depvar, vcov) with Bayesian/OLS handling, plus a demo notebook.

  • Core (experiments):
    • Add zero-coupling maketables integration on BaseExperiment via dunder methods:
      • __maketables_coef_table__: builds coefficient tables for PyMC (posterior mean, std, two-tailed posterior p, 94% HDI) and OLS (point estimates).
      • __maketables_stat__(key): returns N, model_type, experiment_type, and r2 (if available).
      • __maketables_depvar__: exposes dependent variable name.
      • __maketables_vcov_info__: returns vcov metadata.
  • Examples:
    • Add notebooks/maketables_demo.ipynb demonstrating DiD and RD models (OLS vs PyMC) and maketables.ETable usage.

Written by Cursor Bugbot for commit c5bb622.
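To make the protocol above concrete, here is a minimal sketch of what these dunder methods might look like on BaseExperiment. The method names come from the summary; the attribute names (data, model, score, outcome_variable_name) and the dispatch details are illustrative assumptions, not the PR's actual code:

```python
# Illustrative sketch only -- attribute names and dispatch details
# are assumptions; the PR's actual implementation may differ.
from typing import Any

import pandas as pd


class BaseExperiment:
    def __maketables_coef_table__(self) -> pd.DataFrame:
        """Coefficient table; per the summary, the PR builds posterior
        mean/std/p/HDI columns for PyMC and point estimates for OLS."""
        raise NotImplementedError  # dispatches on model type in the PR

    def __maketables_stat__(self, key: str) -> Any:
        """Look up a single fit statistic by key."""
        stat_map = {
            "N": len(self.data),
            "model_type": type(self.model).__name__,
            "experiment_type": type(self).__name__,
        }
        if getattr(self, "score", None) is not None:
            stat_map["r2"] = self.score
        return stat_map.get(key)

    def __maketables_depvar__(self) -> str:
        """Name of the dependent variable."""
        return self.outcome_variable_name

    def __maketables_vcov_info__(self) -> dict:
        """Metadata describing how uncertainty was estimated."""
        return {"vcov_type": "posterior"}  # e.g. "iid" for OLS
```

The appeal of the dunder approach is the zero coupling the summary mentions: maketables can presumably probe an experiment object for these methods at runtime, so CausalPy never has to import maketables.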

s3alfisc marked this pull request as draft December 20, 2025 15:06
```python
widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
min_idx = np.argmin(widths)
ci_lower = float(sorted_samples[min_idx])
ci_upper = float(sorted_samples[min_idx + interval_size])
```

HDI calculation has off-by-one error in interval bounds

The HDI (Highest Density Interval) calculation in _maketables_coef_table_bayesian has an off-by-one error. For a 94% interval containing ceil(0.94 * n) samples starting at index i, the interval spans indices i to i + interval_size - 1. However, the code computes widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals], which calculates sorted_samples[i + interval_size] - sorted_samples[i] instead of sorted_samples[i + interval_size - 1] - sorted_samples[i]. Similarly, ci_upper uses sorted_samples[min_idx + interval_size] when it needs sorted_samples[min_idx + interval_size - 1]. This results in intervals containing one extra sample and potentially selecting a suboptimal HDI.
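A corrected, standalone sketch follows (the function name `hdi_94` is invented for illustration; the PR computes this inline in `_maketables_coef_table_bayesian`). Counting `n - interval_size + 1` candidate windows and differencing at offset `interval_size - 1` yields windows of exactly `interval_size` samples:

```python
import numpy as np


def hdi_94(samples: np.ndarray) -> tuple[float, float]:
    """Shortest interval containing ceil(0.94 * n) of the samples."""
    sorted_samples = np.sort(np.asarray(samples))
    n = len(sorted_samples)
    interval_size = int(np.ceil(0.94 * n))
    # A window starting at index i spans [i, i + interval_size - 1],
    # so there are n - interval_size + 1 candidate windows.
    n_intervals = n - interval_size + 1
    widths = sorted_samples[interval_size - 1:] - sorted_samples[:n_intervals]
    min_idx = int(np.argmin(widths))
    return (
        float(sorted_samples[min_idx]),
        float(sorted_samples[min_idx + interval_size - 1]),
    )
```

Note that this correction also sidesteps the empty-array crash flagged in the comment below, since `n_intervals` is then at least 1 for any non-empty sample array.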


```python
}
# Add R² if available
if hasattr(self, "score") and self.score is not None:
    stat_map["r2"] = self.score
```

R² statistic returns Series instead of scalar value

The __maketables_stat__ method assigns self.score directly to stat_map["r2"] without handling the case where self.score is a pandas Series (which occurs for PyMC models). As shown in the notebook output, this causes the R² row in maketables to display the entire Series representation (`unit_0_r2 0.836121 unit_0_r2_std 0.012656 dtype: float64`) instead of a clean scalar value. Other code in the codebase (e.g., _bayesian_plot) properly extracts self.score["unit_0_r2"] when self.score is a Series.
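One possible fix, mirroring the extraction the reviewer points to in _bayesian_plot (the helper name `_extract_r2` is invented for illustration):

```python
import pandas as pd


def _extract_r2(score) -> float:
    """Return a scalar R²; PyMC experiments store score as a pandas Series."""
    if isinstance(score, pd.Series):
        # Key name taken from the reviewer's reference to _bayesian_plot.
        return float(score["unit_0_r2"])
    return float(score)


# Inside __maketables_stat__ one could then write:
# stat_map["r2"] = _extract_r2(self.score)
```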


```python
interval_size = int(np.ceil(0.94 * n))
n_intervals = n - interval_size
widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
min_idx = np.argmin(widths)
```

HDI calculation crashes with small MCMC sample sizes

The HDI calculation in _maketables_coef_table_bayesian crashes with ValueError: attempt to get argmin of an empty sequence when the number of MCMC samples is 16 or fewer. When n <= 16, interval_size = ceil(0.94 * n) equals n, making n_intervals = 0, which results in an empty widths array. Calling np.argmin on an empty array raises a ValueError. While unlikely in normal usage where MCMC runs with hundreds of samples, this could occur in quick testing scenarios with minimal sampling parameters.
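A defensive sketch that keeps the PR's original window count but falls back to the full sample range when no proper sub-interval exists (the function name `hdi_bounds` is invented for illustration; fixing the off-by-one above with `n_intervals = n - interval_size + 1` would remove this failure mode entirely):

```python
import numpy as np


def hdi_bounds(samples: np.ndarray, prob: float = 0.94) -> tuple[float, float]:
    """Shortest interval containing ~prob of the samples, guarded for small n."""
    sorted_samples = np.sort(np.asarray(samples))
    n = len(sorted_samples)
    interval_size = int(np.ceil(prob * n))
    n_intervals = n - interval_size
    if n_intervals <= 0:
        # interval_size == n: the only candidate is the full sample range,
        # so return it directly instead of argmin over an empty array.
        return float(sorted_samples[0]), float(sorted_samples[-1])
    widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
    min_idx = int(np.argmin(widths))
    return (
        float(sorted_samples[min_idx]),
        float(sorted_samples[min_idx + interval_size]),
    )
```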

