Making the custom density estimator NPE/NLE/NRE procedure sequential #1365
Replies: 4 comments 2 replies
-
Can someone provide an answer for this? I am approaching a deadline for my project soon.
-
Dear @paarth-dudani, please be aware that many of us are doing this work on the side.

Regarding your question: for multi-round SNPE, it is less straightforward to implement a custom training loop, because SNPE requires different loss functions or a post-hoc correction to obtain an unbiased posterior estimate (the proposal differs from the prior). For NRE and NLE, you could simply wrap your training loop over epochs into a loop over rounds and, in every new round, run the training with the newly sampled data (the proposal correction is not needed there). Overall, however, I would recommend just using our implementations, to be on the safe side.

What is the reason you need low-level access to the training loop?

Cheers,
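To make the "wrap the epoch loop in a loop over rounds" idea concrete, here is a minimal, library-free toy sketch. All names here (the linear-Gaussian "density estimator", the grid-based posterior, the simulator) are illustrative stand-ins, not sbi's API; the point is only the structure: the inner epoch loop is identical in every round, and each round merely swaps the proposal that generates the new training data.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_std = 0.5

def simulator(theta):
    # toy simulator: x = theta + Gaussian noise
    return theta + noise_std * rng.standard_normal(theta.shape)

x_o = 1.0  # the observation we condition on

# toy "density estimator": x | theta ~ N(a * theta + b, noise_std^2)
params = {"a": 0.0, "b": 0.0}

def grads(params, theta, x):
    # gradient of the mean squared error (Gaussian NLL up to constants)
    resid = params["a"] * theta + params["b"] - x
    return {"a": 2 * np.mean(resid * theta), "b": 2 * np.mean(resid)}

def sample_prior(n):
    return rng.uniform(-3.0, 3.0, size=n)

proposal = sample_prior
for round_idx in range(3):
    theta = proposal(500)          # round 1: prior; later rounds: proposal
    x = simulator(theta)
    for epoch in range(200):       # inner training loop, unchanged across rounds
        g = grads(params, theta, x)
        params = {k: params[k] - 0.05 * g[k] for k in params}
    # crude posterior p(theta | x_o) on a grid (the uniform prior cancels),
    # used as the proposal for the next round
    grid = np.linspace(-3.0, 3.0, 601)
    logp = -((x_o - (params["a"] * grid + params["b"])) ** 2) / (2 * noise_std**2)
    w = np.exp(logp - logp.max())
    w /= w.sum()
    proposal = lambda n, grid=grid, w=w: rng.choice(grid, size=n, p=w)

print(params)  # the fitted mean should recover a close to 1, b close to 0
```

Because no proposal correction is applied, this structure is only valid for likelihood- or ratio-based targets (NLE/NRE); for SNPE the loss would additionally have to account for the non-prior proposal, as noted above.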
-
Dear Jan,

Thanks for your reply! Here is the link to the earlier discussion, which inspired my current approach:

Could you elaborate a little on "SNPE requires different loss functions or post-hoc correction to obtain an unbiased posterior estimate"? I believe this may be relevant to the loss curves I get from multi-round SNPE.

My main reason is that I find independent estimates from multi-round SNPE (the original implementation) somewhat unreliable in terms of the variability of the posterior compared with the ground truth. A custom implementation lets me see more patterns in the convergence issues and has enabled me to get fairly consistent independent estimates.

One idea I had in this regard was to first train a network with the custom implementation (perhaps with fewer simulations), then wrap it as a direct posterior and use it as the proposal for multi-round inference. I'd appreciate any suggestions!

Regards,
-
I agree with Jan, implementing a custom training loop for SNPE-C is tricky (because of the different loss functions). However, it is quite easy to implement a custom training loop for TSNPE. You will have to do something like this:

```python
import torch
from torch.optim import AdamW

from sbi.inference.posteriors import DirectPosterior
from sbi.neural_nets.net_builders import build_maf
from sbi.utils import RestrictedPrior, get_density_thresholder

proposal = prior
maf_estimator = build_maf(dummy_theta, dummy_x, hidden_features=50, num_transforms=5, device="cpu")
optw = AdamW(list(maf_estimator.parameters()), lr=1e-4)

for _ in range(rounds):
    # In each round, sample theta from the current proposal (the prior in round 1).
    theta = proposal.sample((num_sims_per_round,))
    x = run_your_simulator(theta)

    # ...iterate over minibatches (theta_batch, x_batch) of (theta, x)
    # and retrain the `maf_estimator`:
    for epoch in range(epochs):
        optw.zero_grad()
        losses = maf_estimator.loss(theta_batch, condition=x_batch)
        loss = torch.mean(losses)
        loss.backward()
        optw.step()

    # After training, build the truncated proposal for the next round,
    # conditioned on the observation x_o:
    posterior = DirectPosterior(maf_estimator, prior).set_default_x(x_o)
    accept_reject_fn = get_density_thresholder(posterior, quantile=1e-4)
    proposal = RestrictedPrior(prior, accept_reject_fn, sample_with="rejection")
```

With TSNPE, there is no need to modify the loss function or the rest of the training loop at each round. Hope that helps
-
Following is my code for implementing a custom density estimator for NPE using a non-sequential approach:
Of course, I would like to make this inference specific to an observed dataset, x_o, since the amortized approach uses a lot of simulations. However, the primary sequential implementation documented on the website does not allow as much control in terms of getting the per-epoch loss, using custom data loaders, specifying the number of epochs, etc.:
Can someone assist me in implementing the first algorithm, but in a format that is sequential and conditioned on the observed dataset x_o?