I was looking at EGES_model.py. With `def make_skipgram_loss()` written the way the code author wrote it, my understanding is that it maximizes the co-occurrence probability between node v's embedding H_v and the id of node v's context node u?? But the loss in Eq. (8) of the original paper is L(v, u, y) = −[y·log(σ(H_v^T Z_u)) + (1 − y)·log(1 − σ(H_v^T Z_u))].
Am I misunderstanding this, or did the code author get it wrong?
Also, what is the `_dataset` version of the .py file for?
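(For reference: a sampled-softmax skip-gram loss of the kind the question describes usually looks roughly like the TF2-style sketch below. All names here, such as `softmax_w`, `softmax_b`, and `num_sampled`, are illustrative assumptions, not code copied from EGES_model.py.)

```python
import tensorflow as tf

# Illustrative sketch of a sampled-softmax skip-gram loss (TF2-style).
# Names are assumptions for illustration, not taken from EGES_model.py.
def make_skipgram_loss_sketch(merged_emb, context_ids, vocab_size, emb_dim,
                              num_sampled=100):
    # Extra output weight matrix and bias: this is the kind of auxiliary
    # parameter that does not appear in Eq. (8) of the paper.
    softmax_w = tf.Variable(
        tf.random.truncated_normal([vocab_size, emb_dim], stddev=0.1))
    softmax_b = tf.Variable(tf.zeros([vocab_size]))
    # Negative sampling: drives up P(context id u | aggregated embedding H_v).
    return tf.reduce_mean(
        tf.nn.sampled_softmax_loss(
            weights=softmax_w,
            biases=softmax_b,
            labels=tf.cast(tf.reshape(context_ids, [-1, 1]), tf.int64),
            inputs=merged_emb,          # H_v, shape [batch, emb_dim]
            num_sampled=num_sampled,    # number of negative samples
            num_classes=vocab_size))
```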
The `_dataset` version uses tf.data in the data-loading step, which makes it easier to train and experiment on large datasets.
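A tf.data pipeline streams batches of skip-gram pairs instead of materializing everything in memory at once. A minimal sketch, assuming the (node v, context node u) id pairs from the random walks are already available as arrays; the function name `make_dataset` and its arguments are illustrative, not the repo's code:

```python
import tensorflow as tf

# Illustrative sketch of a tf.data input pipeline for skip-gram pairs.
# Function and argument names are assumptions, not the repo's code.
def make_dataset(targets, contexts, batch_size=512):
    # targets/contexts: int arrays of (node v, context node u) id pairs
    # produced by the random walks.
    ds = tf.data.Dataset.from_tensor_slices((targets, contexts))
    return (ds.shuffle(10_000)               # decorrelate nearby walk pairs
              .batch(batch_size)
              .prefetch(tf.data.AUTOTUNE))   # overlap input with training

# Usage:
# for target_batch, context_batch in make_dataset(targets, contexts):
#     ...one training step...
```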
Yes, it is written incorrectly. The author's loss function uses an extra weight vector to assist the computation, whereas the original paper computes the loss with the parameter matrix aggregated via attention across the nodes.
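In other words, Eq. (8) calls for a plain sigmoid cross-entropy on the inner product H_v^T Z_u, where Z_u is a learned context embedding of node u rather than an extra output weight vector. A minimal sketch of that formulation (function and variable names are assumptions, not the repo's code):

```python
import tensorflow as tf

# Illustrative sketch of the loss exactly as Eq. (8) writes it:
#   L(v, u, y) = -[y*log(sigmoid(H_v^T Z_u)) + (1-y)*log(1 - sigmoid(H_v^T Z_u))]
# H_v is the attention-aggregated embedding of node v and Z_u is a separate
# learned context embedding of node u. Names are assumptions, not repo code.
def eq8_loss(H_v, Z_u, y):
    # H_v, Z_u: [batch, dim] float tensors; y: [batch], 1 = true pair, 0 = negative.
    logits = tf.reduce_sum(H_v * Z_u, axis=1)   # inner product H_v^T Z_u per pair
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.cast(y, tf.float32), logits=logits))
```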