I have some confusion about the 'label' (rewards) setting: the horizon of the state and action sequences is fixed, but the rewards array runs from 'start' to the end of the entire episode. Why?
Hi! I think conditioning on the total trajectory reward is done so that sampled actions not only maximize the reward over the current sampling horizon but also aim to maximize the total return-to-go of the trajectory.
Note that the states are sampled by the diffusion model, so the horizon is only used to slice a fixed-length batch for the model. Since the conditioning signal is the return, it does not need to use the same horizon: it can be computed over the remainder of the episode.
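To make the distinction concrete, here is a minimal sketch of how one training example might be built: the state/action window has a fixed horizon, while the return label is summed from 'start' to the end of the episode. The function name `make_training_example` and the episode dict keys are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def make_training_example(episode, start, horizon, discount=1.0):
    """Hypothetical example builder (not the repo's code).

    States/actions are a fixed-length window, but the return label
    is computed from `start` to the END of the episode, so the model
    is conditioned on the full return-to-go, not just the reward
    accumulated inside the window.
    """
    states = episode["observations"][start : start + horizon]  # fixed horizon
    actions = episode["actions"][start : start + horizon]      # fixed horizon
    rewards = episode["rewards"][start:]                       # to episode end

    # Discounted return-to-go used as the conditioning label.
    discounts = discount ** np.arange(len(rewards))
    return_to_go = (discounts * rewards).sum()

    return states, actions, return_to_go
```

In practice the return label is typically normalized before being fed to the model, but the key point stands: its span is the rest of the episode, independent of the sampling horizon.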