
train_latent error #8

Open
oscarwooberry opened this issue Mar 8, 2024 · 4 comments

Comments

@oscarwooberry
[screenshot of the error traceback]

I tried to run train_latent on my custom dataset, but I ran into `NameError: free variable 'patch_num_x' referenced before assignment in enclosing scope`.
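For context, this kind of NameError occurs when a nested function closes over a variable that the enclosing function only assigns on some code paths. A minimal standalone reproduction (the names here are illustrative, not the repo's actual code):

```python
def process(use_patches):
    # patch_num_x is only assigned on one branch, but the nested
    # function closes over it unconditionally.
    if use_patches:
        patch_num_x = 4

    def area():
        # free variable from process(); unbound when use_patches is False
        return patch_num_x * patch_num_x

    return area()

print(process(True))   # -> 16
try:
    process(False)
except NameError as exc:
    print(exc)  # the message mentions the free variable 'patch_num_x'
```

So the error usually means some configuration path (e.g. a custom dataset) skipped the branch where `patch_num_x` gets assigned before an inner function reads it.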

@oscarwooberry
Author

I guess my question is about this statement: "To train latent model, we directly derive semantic embedding layers from model without additional inference." I am a bit confused about how these semantic codes are loaded as input.

@naturalDNA

I have the same question. Did you solve it?

@WeiyunJiang
Copy link

I have the same question. Did you solve it?

On line 432 of experiment.py, I just skip the log_sample call when imgs is None, which suppresses the error:


if self.conf.train_mode.require_dataset_infer():
    # Latent training derives inputs from the inferred dataset,
    # so the batch carries no raw images to log.
    imgs = None
    idx = None
else:
    imgs = batch['img']
    idx = batch['index']

    # log_sample only runs here, where imgs is guaranteed non-None
    self.log_sample(x_start=imgs, step=self.global_step, idx=idx)
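The guard pattern itself can be exercised on its own. This is a minimal standalone sketch, with hypothetical names (`log_sample_guarded`, the `sink` list), not the repo's actual API:

```python
def log_sample_guarded(x_start, step, idx, sink):
    """Append a log record only when images are present.

    Mirrors the workaround above: in latent-training mode the batch
    carries no raw images, so logging is skipped instead of crashing.
    """
    if x_start is None:
        return False  # latent mode: nothing to log
    sink.append({'x_start': x_start, 'step': step, 'idx': idx})
    return True

records = []
log_sample_guarded(None, step=0, idx=None, sink=records)    # skipped
log_sample_guarded([[0.1]], step=1, idx=[0], sink=records)  # logged
```

Note this only suppresses the symptom: sample logging is silently skipped during latent training, which is acceptable here because there are no images to log in that mode.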

@naturalDNA

I have the same question. Did you solve it?

On line 432 of experiment.py, I just skip the log_sample call when imgs is None, which suppresses the error:


if self.conf.train_mode.require_dataset_infer():
    imgs = None
    idx = None
else:
    imgs = batch['img']
    idx = batch['index']

    self.log_sample(x_start=imgs, step=self.global_step, idx=idx)

Thank you for your help!
