
[Question] Does the Graphormer encoder need to be trained to get a new representation of a graph? #203

Open
feyhong1112 opened this issue Oct 31, 2024 · 0 comments

feyhong1112 commented Oct 31, 2024

Sorry to post this; I have a big gap in my deep-learning knowledge, so please bear with me.

Let me get to my point:

  • My goal is to obtain graph representations from the Graphormer model for use in a downstream task.

  • Every time I run the input graph through the model to get its last hidden state, the numbers in the last hidden state are different.

  • I have also encoded the Euclidean distances along the shortest paths.

    Is it normal for the last hidden state to be different on every run, or do I have to train the model first? (A minimal sketch of what I mean is below.)
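For reference, here is a minimal sketch of what I am doing. The `ToyGraphEncoder` class below is a hypothetical stand-in of my own, not the real Graphormer API; it only illustrates where the run-to-run variation in the last hidden state can come from (random weight initialization, and dropout when the model is in training mode):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a Graphormer-style encoder; NOT the actual
# Graphormer implementation. It demonstrates why an untrained encoder
# produces different last hidden states across runs.
class ToyGraphEncoder(nn.Module):
    def __init__(self, feat_dim=16, hidden_dim=32, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, node_feats):
        # Returns the last hidden state: one vector per node.
        return self.encoder(self.embed(node_feats))

x = torch.randn(1, 10, 16)  # one graph, 10 nodes, 16 features each

# Two freshly constructed models have different random weights, so the
# last hidden states differ even for the same input graph.
h1 = ToyGraphEncoder()(x)
h2 = ToyGraphEncoder()(x)
print(torch.allclose(h1, h2))  # False

# A single model instance in eval mode (dropout disabled) is deterministic.
model = ToyGraphEncoder().eval()
with torch.no_grad():
    print(torch.allclose(model(x), model(x)))  # True
```

In this sketch, the variation comes from the random initialization (and from dropout in training mode), which is why I suspect the model may need trained weights before its representations are meaningful.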

Thank you for reading my post and clarifying my problem; if I have misunderstood something, please forgive me.

feyhong1112 changed the title from "[Question] Does the Transformer encoder need to be trained to get a new representation of a graph?" to "[Question] Does the Graphormer encoder need to be trained to get a new representation of a graph?" on Oct 31, 2024