I’m interested in fine-tuning this model with an additional variable and would like to understand the feasibility of adapting the existing code for this purpose.
I noticed that the current fine-tuning process in trained_models.ipynb updates approximately 1.4 million parameters for the 2.8 resolution model, which appears to cover the full set of model parameters. However, the fine-tuning currently relies solely on an L1 loss, unlike the full training code, which may incorporate other loss functions.
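For context, here is a minimal sketch of what I mean by extending the fine-tuning objective beyond a plain L1 loss. The `alpha` weighting and the choice of MSE as the second term are my own assumptions for illustration, not anything taken from the repository:

```python
import numpy as np

def combined_loss(pred: np.ndarray, target: np.ndarray, alpha: float = 0.5) -> float:
    """Hypothetical fine-tuning loss: L1 term plus a weighted MSE term.

    The repository's fine-tuning path appears to use only the L1 term;
    the alpha-weighted MSE term here just illustrates how an additional
    loss component could be mixed in.
    """
    l1 = np.mean(np.abs(pred - target))       # current fine-tuning objective
    mse = np.mean((pred - target) ** 2)       # illustrative extra term
    return float(l1 + alpha * mse)
```

With `alpha=0` this reduces to the existing L1-only behavior, so such a change could be introduced without altering current results by default.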
Integrating the new variable will likely require adjustments to the dynamic core.
Would the current fine-tuning implementation support this modification, or would it require significant code changes? Any guidance on how to approach this would be appreciated!
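To make the question concrete, the kind of adaptation I have in mind is widening the model's input layer to accept the extra variable while reusing the pretrained weights. This is only a sketch of the general pattern (new input columns initialized to zero so pretrained behavior is preserved at the start of fine-tuning); the function name and shapes are hypothetical, not from the repository:

```python
import numpy as np

def widen_input_weights(w_old: np.ndarray, n_new: int = 1) -> np.ndarray:
    """Extend a (out_dim, in_dim) weight matrix to accept n_new extra inputs.

    Pretrained weights are copied unchanged; the new columns start at zero,
    so the widened layer initially ignores the added variable and the model's
    outputs are identical to the pretrained model's.
    """
    out_dim, in_dim = w_old.shape
    w_new = np.zeros((out_dim, in_dim + n_new), dtype=w_old.dtype)
    w_new[:, :in_dim] = w_old  # preserve pretrained weights
    return w_new
```

If the fine-tuning loop already updates all parameters (as the ~1.4M figure suggests), the zero-initialized columns would then be learned during fine-tuning; my uncertainty is mostly about whether the dynamic core makes additional structural assumptions that this pattern would violate.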