Last note per pubkey algo feed #621
Draft: jb55 wants to merge 6 commits into master from algo
Conversation
looks like we'll have to change our approach, since NIP-01 is losing multi-filters in one request.
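For context, a NIP-01 `REQ` message has the shape `["REQ", <subscription_id>, <filter>, <filter>, ...]`; if multi-filter requests go away, each per-author filter needs its own subscription. A minimal sketch of what that could look like, assuming the serde_json crate (the `single_filter_reqs` helper and the subscription-id scheme are made up for illustration):

```rust
use serde_json::json;

/// Sketch: emit one single-filter REQ per author instead of packing
/// all filters into one REQ. Subscription ids here are illustrative.
fn single_filter_reqs(authors: &[&str]) -> Vec<String> {
    authors
        .iter()
        .enumerate()
        .map(|(i, author)| {
            // One REQ per filter: ["REQ", <sub_id>, <filter>]
            json!(["REQ", format!("latest-{i}"), {
                "authors": [author],
                "kinds": [1],
                "limit": 1
            }])
            .to_string()
        })
        .collect()
}
```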
This function creates filters for the base of our first algo in Damus, called "last N note per pubkey". I don't have a better name for it. This function generates a query of the form:

```json
[
  {"authors": ["author_a"], "limit": 1, "kinds": [1]},
  {"authors": ["author_b"], "limit": 1, "kinds": [1]},
  {"authors": ["author_c"], "limit": 1, "kinds": [1]},
  ...
]
```

Due to an unfortunate restriction currently in nostrdb and strfry, we can only do about 16 to 20 of these at any given time. I have made this limit configurable in strfry [1]. I just need to do the same in nostrdb now.

[1] hoytech/strfry#133

Changelog-Added: Add last_n_per_pubkey_from_tags algo function
Signed-off-by: William Casarin <[email protected]>
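To make the shape of the query concrete, here is a rough sketch of how such a filter list could be built, capped to respect the restriction mentioned above. The `Filter` struct and the `last_n_per_pubkey_filters` name are illustrative stand-ins, not nostrdb's actual filter API:

```rust
/// Hypothetical stand-in for a nostr filter; nostrdb's real filter
/// builder has a different API.
struct Filter {
    authors: Vec<String>,
    kinds: Vec<u64>,
    limit: u64,
}

/// Build one {"authors": [pk], "kinds": [1], "limit": 1} filter per
/// pubkey, truncating at `max_filters` (roughly 16-20 today) to stay
/// under the nostrdb/strfry filter-count restriction.
fn last_n_per_pubkey_filters(pubkeys: &[String], max_filters: usize) -> Vec<Filter> {
    pubkeys
        .iter()
        .take(max_filters)
        .map(|pk| Filter {
            authors: vec![pk.clone()],
            kinds: vec![1],
            limit: 1,
        })
        .collect()
}
```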
In an attempt to make our deck serializer more localized, comprehensible, and less error-prone, we introduce a new parser combinator based around string tokens. This replaces the Selection-based intermediary types so that we have a more direct serialization style.
This reduces the number of things we have to update in our token parser and serializer. Payloads have to be handled differently, but we now have a structure that deals with that efficiently.

Signed-off-by: William Casarin <[email protected]>
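As a sketch of the kind of string-token parser combinator the commit describes (names and API here are assumptions, not notedeck's actual code): a cursor over a token slice, with a backtracking helper so column variants can be tried in turn.

```rust
/// Illustrative token parser: a cursor over pre-split string tokens.
pub struct TokenParser<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> TokenParser<'a> {
    pub fn new(tokens: &'a [&'a str]) -> Self {
        TokenParser { tokens, pos: 0 }
    }

    /// Consume the next token if it equals `expected`.
    pub fn parse_token(&mut self, expected: &str) -> Result<(), String> {
        match self.tokens.get(self.pos) {
            Some(tok) if *tok == expected => {
                self.pos += 1;
                Ok(())
            }
            other => Err(format!("expected {expected:?}, found {other:?}")),
        }
    }

    /// Run `f`, rewinding the cursor on failure so alternatives can
    /// be tried; this is what makes the parser composable.
    pub fn try_parse<T>(
        &mut self,
        f: impl FnOnce(&mut Self) -> Result<T, String>,
    ) -> Result<T, String> {
        let start = self.pos;
        let result = f(self);
        if result.is_err() {
            self.pos = start;
        }
        result
    }
}
```

Parsing a serialized column then becomes a sequence of `parse_token` calls, with `try_parse` providing the backtracking between column variants that the Selection-based intermediary types used to handle indirectly.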
This simplifies token serialization.

Signed-off-by: William Casarin <[email protected]>
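On the serialization side, a writer can mirror the parser so that each column variant writes exactly the tokens its parser expects, keeping parse and serialize in lockstep. A minimal sketch, with assumed names and an assumed delimiter:

```rust
/// Illustrative serializer counterpart: collect tokens, then join.
pub struct TokenWriter {
    tokens: Vec<String>,
}

impl TokenWriter {
    pub fn new() -> Self {
        TokenWriter { tokens: Vec::new() }
    }

    /// Append one token to the output.
    pub fn write_token(&mut self, token: &str) {
        self.tokens.push(token.to_owned());
    }

    /// Produce the final serialized string, e.g. "a:b:c" with ":".
    pub fn serialize(&self, delim: &str) -> String {
        self.tokens.join(delim)
    }
}
```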
Signed-off-by: William Casarin <[email protected]>
This is damus' first algo feed.
What's left to do
Optional
I noticed I was missing many parts of the codebase related to column serialization. In an effort to have a more direct parsing style, I created a new token parser combinator.
The algo