
Add blocked sparse input affine transform approach to docs. #249

Merged Jun 23, 2023 (1 commit)

Conversation

Sopel97 (Member) commented Jun 12, 2023

vondele (Member) commented Jun 23, 2023

I have read through this, looks OK.

Independent of this PR, should we integrate the permutation code (as mentioned in official-stockfish/Stockfish#4620 (comment)) into the trainer somehow?

Sopel97 (Member, Author) commented Jun 23, 2023

> Independent of this PR, should we integrate the permutation code (as mentioned in official-stockfish/Stockfish#4620 (comment)) into the trainer somehow?

We could maybe apply it during serialization once we have a good enough solution; for now it should live in a separate tool. I made some initial scaffolding here: https://github.com/Sopel97/nnue-pytorch/blob/ftperm/ftperm.py

vondele (Member) commented Jun 23, 2023

I think the greedy permutation used in the SF PR mentioned above is likely a near-optimal solution for this.

Computing the permutation needs some 50M positions, which in principle we can get by running inference on that many positions from a binpack. Right now a modified version of SF collects that data, which isn't ideal.
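The greedy idea being discussed can be sketched as follows. This is an illustrative toy implementation, not the actual code from the SF PR or ftperm.py: given a boolean matrix of observed activations (positions × neurons), it greedily packs neurons into blocks so that as many positions as possible leave a whole block zero, which is what lets the blocked sparse affine transform skip that block entirely. The function name and block size are assumptions for the example.

```python
import numpy as np

def greedy_block_permutation(acts: np.ndarray, block: int = 4) -> list:
    """Greedily permute neurons into blocks that are simultaneously zero often.

    acts: (positions, neurons) boolean array, True where the activation
    is nonzero. Returns a permutation of neuron indices, grouped so that
    consecutive `block`-sized chunks tend to be all-zero on many positions.
    """
    n = acts.shape[1]
    remaining = set(range(n))
    perm = []
    while remaining:
        # Seed a new block with the remaining neuron that fires least often.
        seed = min(remaining, key=lambda j: acts[:, j].sum())
        group = [seed]
        remaining.remove(seed)
        zero_rows = ~acts[:, seed]  # positions where the block is all-zero so far
        while len(group) < block and remaining:
            # Add the neuron that preserves the most all-zero positions.
            best = max(remaining, key=lambda j: (zero_rows & ~acts[:, j]).sum())
            group.append(best)
            remaining.remove(best)
            zero_rows &= ~acts[:, best]
        perm.extend(group)
    return perm
```

The data collection question above is exactly the input to such a routine: the `acts` matrix would come from running inference on the ~50M binpack positions and recording which feature-transformer outputs are nonzero.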

vondele (Member) commented Jun 23, 2023

Anyway, the docs seem complete. Ready for a merge?

Sopel97 (Member, Author) commented Jun 23, 2023

yes

vondele merged commit 85a0979 into official-stockfish:master on Jun 23, 2023