VAE enable annealing teacher forcing probability during training #7
Comments
IBM has a pretty good example
@lilleswing, the code from MOSES you've sent implements teacher forcing. Did you mean that we should add free running for training?
Yes, I misread the code.
Yes, we'll add free running soon. It will probably be listed as a separate model in the metrics table.
Have you ever tested the reconstruction accuracy of the VAE model? I tested it and the performance is very bad. Here is my testing code; is there any problem? Thanks!

`if __name__ == '__main__':`
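(The rest of the snippet was truncated. For context, a generic reconstruction-accuracy check might look roughly like the sketch below; the `encode` and `decode` helpers are hypothetical stand-ins for whatever the trained model exposes, not the actual MOSES API.)

```python
import torch

def reconstruction_accuracy(model, smiles_list, encode, decode):
    """Fraction of molecules whose greedy reconstruction matches the input.

    `encode` and `decode` are hypothetical helpers: SMILES -> latent code
    and latent code -> SMILES, respectively.
    """
    model.eval()
    n_correct = 0
    with torch.no_grad():
        for smiles in smiles_list:
            z = encode(model, smiles)          # SMILES -> latent vector
            reconstructed = decode(model, z)   # latent vector -> SMILES
            n_correct += int(reconstructed == smiles)
    return n_correct / len(smiles_list)
```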
Hi, @liujunhongznn! Low reconstruction quality is due to posterior collapse, which frequently happens in VAEs. Since the goal of MOSES is to model the data distribution as well as possible, posterior collapse is acceptable for this task. If you want to obtain meaningful latent codes, try reducing the KL divergence weight.
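A common way to do this is to anneal the KL weight from (near) zero up to a small final value over the first part of training. A minimal sketch with made-up hyperparameter names (not the actual MOSES trainer options):

```python
def kl_weight(step, kl_w_start=0.0, kl_w_end=0.05, anneal_steps=10000):
    """Linearly anneal the KL weight from kl_w_start to kl_w_end."""
    if step >= anneal_steps:
        return kl_w_end
    return kl_w_start + (kl_w_end - kl_w_start) * step / anneal_steps

# Inside the training loop the total loss would then be:
# loss = recon_loss + kl_weight(step) * kl_loss
```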
@danpol Hello! Can you help me with the VAE? I'm mixed up. As you mentioned above, this VAE implementation uses the teacher forcing approach, but I don't see any loops over the decoder (except in validation mode, for generating SMILES). Am I right that it's literally training with teacher forcing = 1, since we don't pass previously predicted tokens (like in seq2seq models)?
Hi, @bokertof! The VAE in MOSES uses teacher forcing: we pass the correct token, not the sampled one.
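To illustrate (a simplified sketch, not the exact MOSES code): with full teacher forcing, the decoder input at every step is the ground-truth previous token, so the whole sequence can be pushed through the RNN in a single call. That is why there is no per-step decoding loop in the training code. The sketch assumes a batch-first GRU decoder.

```python
import torch
import torch.nn.functional as F

def decode_teacher_forced(decoder_rnn, output_layer, embeddings, x, z_hidden):
    """Teacher-forced decoding: feed the ground-truth previous token at every step.

    x        : LongTensor [batch, seq_len] of ground-truth token ids
    z_hidden : initial hidden state derived from the latent code z
    """
    inputs = embeddings(x[:, :-1])                 # ground-truth tokens, shifted right
    outputs, _ = decoder_rnn(inputs, z_hidden)     # single pass, no sampling loop needed
    logits = output_layer(outputs)                 # [batch, seq_len - 1, vocab]
    targets = x[:, 1:]                             # predict the next ground-truth token
    return F.cross_entropy(logits.transpose(1, 2), targets)
```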
@danpol Ok, I got it. Can you tell me the reason for not using the sampled tokens as input? I'm trying to implement a similar net and ran into an issue where a model fed its own previously predicted tokens doesn't learn at all.
If you feed sampled tokens, you have to propagate the gradient through the sampling step (e.g., with REINFORCE), which has notoriously high variance. You could use variance reduction techniques, but that goes well beyond the scope of a "baseline" model.
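For illustration only, here is roughly what a score-function (REINFORCE) surrogate loss for sampled discrete tokens looks like; the names are hypothetical and this is not part of MOSES. The sampled token ids themselves carry no gradient, so the signal has to flow through the log-probabilities weighted by a sequence-level reward.

```python
import torch

def reinforce_loss(logits, sampled_tokens, reward):
    """Score-function (REINFORCE) surrogate loss for sampled tokens.

    logits         : [batch, seq_len, vocab]
    sampled_tokens : [batch, seq_len] ids drawn from softmax(logits)
    reward         : [batch] sequence-level reward (e.g. a reconstruction score)
    """
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1)
    # grad of E[R] is approximated by R * grad(log p(sequence)); this estimator
    # has high variance, which is why a baseline is usually subtracted from R.
    return -(reward.unsqueeze(1) * token_log_probs).sum(dim=1).mean()
```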
Thank you so much!
The VAE doesn't have teacher forcing. Teacher forcing is really needed for larger molecules.
Original Code
https://github.com/aspuru-guzik-group/chemical_vae/blob/master/chemvae/tgru_k2_gpu.py
Moses Code
https://github.com/molecularsets/moses/blob/master/moses/vae/model.py#L114-L147
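For reference, annealing the teacher forcing probability (scheduled sampling) could look roughly like the sketch below: at each step, feed the ground-truth token with probability `teacher_forcing_p` and the model's own previous prediction otherwise, decaying the probability over training. This is a hypothetical sketch, not the MOSES or chemical_vae implementation.

```python
import random
import torch

def decode_scheduled_sampling(decoder_rnn, output_layer, embeddings, x, hidden,
                              teacher_forcing_p):
    """Step-by-step decoding that mixes teacher forcing with free running.

    teacher_forcing_p is annealed from 1.0 toward 0.0 over training, e.g.
    teacher_forcing_p = max(0.0, 1.0 - epoch / n_anneal_epochs).
    """
    batch_size, seq_len = x.shape
    logits_per_step = []
    prev_tokens = x[:, 0]                              # start token
    for t in range(1, seq_len):
        inp = embeddings(prev_tokens).unsqueeze(1)     # [batch, 1, emb]
        out, hidden = decoder_rnn(inp, hidden)
        step_logits = output_layer(out.squeeze(1))     # [batch, vocab]
        logits_per_step.append(step_logits)
        if random.random() < teacher_forcing_p:
            prev_tokens = x[:, t]                      # teacher forcing
        else:
            prev_tokens = step_logits.argmax(dim=-1)   # free running
    return torch.stack(logits_per_step, dim=1)         # [batch, seq_len - 1, vocab]
```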