maketables plug in
#600
base: main
Conversation
PR Summary: Introduce maketables plug-in dunder methods on BaseExperiment (coef table, stats, depvar, vcov) with Bayesian/OLS handling, plus a demo notebook.
Written by Cursor Bugbot for commit c5bb622.
```python
widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
min_idx = np.argmin(widths)
ci_lower = float(sorted_samples[min_idx])
ci_upper = float(sorted_samples[min_idx + interval_size])
```
**HDI calculation has off-by-one error in interval bounds**

The HDI (Highest Density Interval) calculation in `_maketables_coef_table_bayesian` has an off-by-one error. For a 94% interval containing `ceil(0.94 * n)` samples starting at index `i`, the interval spans indices `i` to `i + interval_size - 1`. However, the code computes `widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]`, which calculates `sorted_samples[i + interval_size] - sorted_samples[i]` instead of `sorted_samples[i + interval_size - 1] - sorted_samples[i]`. Similarly, `ci_upper` uses `sorted_samples[min_idx + interval_size]` when it needs `sorted_samples[min_idx + interval_size - 1]`. This results in intervals containing one extra sample and potentially selecting a suboptimal HDI.
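A minimal sketch of the corrected indexing, written as a standalone helper rather than CausalPy's actual `_maketables_coef_table_bayesian` method (the function name and signature here are illustrative assumptions):

```python
import numpy as np

def hdi(samples, prob=0.94):
    """Narrowest interval containing ceil(prob * n) sorted samples.

    An interval starting at index i spans i .. i + interval_size - 1,
    so widths must compare sorted_samples[i + interval_size - 1] with
    sorted_samples[i], and there are n - interval_size + 1 candidates.
    """
    sorted_samples = np.sort(np.asarray(samples))
    n = len(sorted_samples)
    interval_size = int(np.ceil(prob * n))
    n_intervals = n - interval_size + 1  # candidate start positions
    widths = sorted_samples[interval_size - 1:] - sorted_samples[:n_intervals]
    min_idx = int(np.argmin(widths))
    ci_lower = float(sorted_samples[min_idx])
    ci_upper = float(sorted_samples[min_idx + interval_size - 1])
    return ci_lower, ci_upper
```

With this indexing, the returned interval contains exactly `interval_size` samples instead of one extra.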
```python
}
# Add R² if available
if hasattr(self, "score") and self.score is not None:
    stat_map["r2"] = self.score
```
**R² statistic returns Series instead of scalar value**

The `__maketables_stat__` method assigns `self.score` directly to `stat_map["r2"]` without handling the case where `self.score` is a pandas Series (which occurs for PyMC models). As shown in the notebook output, this causes the R² row in maketables to display the entire Series representation (`unit_0_r2 0.836121 unit_0_r2_std 0.012656 dtype: float64`) instead of a clean scalar value. Other code in the codebase (e.g., `_bayesian_plot`) properly extracts `self.score["unit_0_r2"]` when `self.score` is a Series.
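One possible fix is a small helper that normalises the score before it reaches `stat_map`; this is a hypothetical sketch (the helper name `extract_r2` is not part of CausalPy), mirroring the `"unit_0_r2"` key that the existing `_bayesian_plot` code extracts:

```python
import pandas as pd

def extract_r2(score):
    """Return R² as a plain float.

    For PyMC models, `score` is a pandas Series with keys like
    'unit_0_r2' and 'unit_0_r2_std'; for OLS it is already a scalar.
    """
    if isinstance(score, pd.Series):
        return float(score["unit_0_r2"])
    return float(score)
```

`stat_map["r2"] = extract_r2(self.score)` would then render a single number in the table for both model types.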
```python
interval_size = int(np.ceil(0.94 * n))
n_intervals = n - interval_size
widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
min_idx = np.argmin(widths)
```
**HDI calculation crashes with small MCMC sample sizes**

The HDI calculation in `_maketables_coef_table_bayesian` crashes with `ValueError: attempt to get argmin of an empty sequence` when the number of MCMC samples is 16 or fewer. When `n <= 16`, `interval_size = ceil(0.94 * n)` equals `n`, making `n_intervals = 0`, which results in an empty `widths` array. Calling `np.argmin` on an empty array raises a `ValueError`. While unlikely in normal usage where MCMC runs with hundreds of samples, this could occur in quick testing scenarios with minimal sampling parameters.
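A sketch of a guard for this degenerate case, keeping the same indexing as the diff above (the standalone function `hdi_safe` is illustrative, not CausalPy's actual code):

```python
import numpy as np

def hdi_safe(samples, prob=0.94):
    """HDI with a guard for tiny sample counts.

    When ceil(prob * n) == n there are zero candidate intervals, so
    np.argmin would receive an empty array; fall back to the full
    sample range instead of crashing.
    """
    sorted_samples = np.sort(np.asarray(samples))
    n = len(sorted_samples)
    interval_size = int(np.ceil(prob * n))
    if interval_size >= n:
        # Too few samples to exclude any: return the full range.
        return float(sorted_samples[0]), float(sorted_samples[-1])
    n_intervals = n - interval_size
    widths = sorted_samples[interval_size:] - sorted_samples[:n_intervals]
    min_idx = int(np.argmin(widths))
    return float(sorted_samples[min_idx]), float(sorted_samples[min_idx + interval_size])
```

With `n = 10`, `interval_size = ceil(9.4) = 10`, so the guard path returns the sample range rather than raising.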
Hi @drbenvincent,

Attached is a very drafty PR (with lots of support from Claude) that enables maketables support for CausalPy via the new plug-in solution @dsliwka implemented. An example notebook is also attached.

For the RDD class, for example, you get something like this:

Would you generally be interested in merging this PR?
Where I'd need your help: what coefficients should be reported by default? Which statistics might users be interested in beyond the defaults? Is my choice of 94% probability intervals a good one? Etc.

Note that this PR is more of a proof of concept and may need some cleanup and unit tests. If this makes it into CausalPy, we should probably also implement tests there to ensure we never break CausalPy flows.
Best, Alex
📚 Documentation preview 📚: https://causalpy--600.org.readthedocs.build/en/600/