
http://syntaxnet.askplatyp.us - not working #8

Open
ndvbd opened this issue Apr 7, 2019 · 7 comments

Comments

@ndvbd

ndvbd commented Apr 7, 2019

  1. On the readme it refers to "http://syntaxnet.askplatyp.us" - this link doesn't work.
  2. In the .py code you refer to "http://syntaxnet.askplatyp.us" - are you calling an external server?
@Tpt
Collaborator

Tpt commented Apr 7, 2019

Sorry, I have shut down "http://syntaxnet.askplatyp.us" because we do not use it anymore. spaCy and StanfordNLP have similar performance to SyntaxNet. I am going to tag this repository as "obsolete" to make clear that I am not maintaining it anymore. Feel free to create a fork if you are interested in maintaining it; I could link to it from the README.

The Python file is not calling this server; the URL is only added to the Swagger documentation so that it can build absolute URIs.
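To illustrate the point above, here is a minimal sketch of how a host name can appear in a Swagger 2.0 spec purely for documentation purposes. The field names follow the Swagger 2.0 specification, but the endpoint path and helper functions are hypothetical, not the repository's actual code:

```python
# Sketch: a Swagger 2.0 spec advertises a "host" so that documentation
# clients can render absolute URIs. The server never contacts that host.
# The path "/v1/parsey-universal-full" is illustrative only.

def build_swagger_spec(host):
    """Return a minimal Swagger 2.0 spec that advertises `host`."""
    return {
        "swagger": "2.0",
        "info": {"title": "SyntaxNet API", "version": "1.0"},
        "host": host,          # used only to build absolute URIs in docs
        "schemes": ["http"],
        "paths": {"/v1/parsey-universal-full": {}},
    }

def absolute_uri(spec, path):
    """Build the absolute URI a documentation viewer would display."""
    return "{}://{}{}".format(spec["schemes"][0], spec["host"], path)

spec = build_swagger_spec("syntaxnet.askplatyp.us")
print(absolute_uri(spec, "/v1/parsey-universal-full"))
# http://syntaxnet.askplatyp.us/v1/parsey-universal-full
```

Nothing here performs a network request: the host string is data inside the spec, which is exactly why shutting the server down does not break the Python code itself.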

@ndvbd
Author

ndvbd commented Apr 7, 2019

I thought SyntaxNet had better results than CoreNLP and spaCy. I also did some manual checks and indeed got better results from SyntaxNet. Do you have a different impression?

@Tpt
Collaborator

Tpt commented Apr 7, 2019

Indeed, to my knowledge SyntaxNet has better results than CoreNLP. But the new StanfordNLP seems to beat SyntaxNet and is far easier to run.

The benchmarks for SyntaxNet are here: https://github.com/tensorflow/models/blob/master/research/syntaxnet/g3doc/conll2017/README.md
And the comparable benchmarks for StanfordNLP (the "Stanford" participant in the CoNLL shared task): https://universaldependencies.org/conll18/results-uas.html#per-treebank-uas-f1

As for spaCy, I have not done a full comparison. For French on the Sequoia dataset, spaCy gives UAS: 89.08, LAS: 86.41 vs. UAS: 87.90, LAS: 85.74 for SyntaxNet.
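To make the scores above concrete, here is a minimal sketch of how UAS and LAS are computed in dependency parsing: UAS counts tokens whose predicted head is correct, while LAS additionally requires the correct dependency label. The toy sentence and labels below are illustrative, not taken from any benchmark:

```python
# Sketch of UAS/LAS (unlabeled/labeled attachment score) computation.
# Each token is represented as (head_index, dependency_label).

def attachment_scores(gold, predicted):
    """gold/predicted: parallel lists of (head_index, dep_label) per token.
    Returns (UAS, LAS) as percentages."""
    assert len(gold) == len(predicted)
    uas = sum(g[0] == p[0] for g, p in zip(gold, predicted))  # head only
    las = sum(g == p for g, p in zip(gold, predicted))        # head + label
    n = len(gold)
    return 100.0 * uas / n, 100.0 * las / n

# "She eats apples": heads are 1-based indices of each token's governor,
# with 0 marking the root.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]  # wrong label on "apples"
uas, las = attachment_scores(gold, pred)
print(uas, las)  # UAS: 100.0 (all heads right), LAS: ~66.7 (one label wrong)
```

This is also why LAS is always less than or equal to UAS, as in the Sequoia numbers above.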

@ndvbd
Author

ndvbd commented Apr 7, 2019

Oh, you say there is a difference between CoreNLP and StanfordNLP? I thought they were part of the same project/team.

Looking at the first two links you posted, can you point to something directly comparable? I mean evaluations of StanfordNLP and SyntaxNet run on the same test data.

@Tpt
Collaborator

Tpt commented Apr 8, 2019

Oh, you say there is a difference between CoreNLP and StanfordNLP?

Yes, they are from the same team. StanfordNLP is a from-scratch reimplementation of part of CoreNLP in PyTorch.

I see the first two links you've put - can you see something that is comparable?

Yes, I believe the CoNLL shared tasks 2017 and 2018 use the same datasets, i.e. the Universal Dependencies v2 treebanks. For example, StanfordNLP gets UAS: 90.59 and LAS: 88.78 on fr_sequoia, while SyntaxNet gets UAS: 87.90 and LAS: 85.74 on French-Sequoia.

@ndvbd
Author

ndvbd commented Apr 8, 2019

Okay, so if I am reading this right:
StanfordNLP - UAS for en_lines: 82.99
SyntaxNet - UAS for English-LinES: 82.43

I now understand. Thank you.

@Tpt
Collaborator

Tpt commented Apr 8, 2019

Yes!
