expose map item iterator, to load all items in one pass #65

Open · wants to merge 2 commits into master

Conversation

@davidqge

RMK uses this module to store each key when the user customizes their keymap from the Vial web tool.

Fetching hundreds of items takes 8 s on an ESP32-C6.

After switching to this iterator, the same load takes only 0.1 s on the same MCU.

@diondokter (Collaborator)

Hi there! Thanks for the PR.
That speedup is very nice!

However, I'm inclined to not merge this PR.
There are several reasons:

  1. You included some host tools that are not relevant to s-s
  2. You added a storage struct that does nothing but provide the iterator. Such a struct might be good to add, but then it needs to be used throughout. Currently all code is presented as global functions.
  3. I'm not a fan of this iterator. It allows the user to see items that are 'logically' deleted (i.e. items in the map that are overridden) and it requires the user to know about the internals of how the map works.
  4. The page iterator uses the queue method to find the last page. But map has its own ideas as to how the page layout functions. What you've made may work now, but it might not in the future.

So, how to continue?

Well, for speedup there is the cache. Are you using it?
When are you fetching everything? Just after boot?

My guess would be that if you used a warmed up KeyPointerCache, you'd get similar speedups. After boot it would still be cold of course.
I'd be open to a PR that has a function that forcibly warms up the caches, or a PR that improves the performance of the caches.
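
Conceptually, the cache being suggested here trades one up-front scan for O(1) lookups afterwards. A minimal sketch in plain std Rust (the struct below is a hypothetical stand-in, not the crate's real KeyPointerCache; the "flash" is just a slice of records):

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a key-pointer cache: after one full scan
// ("warming up") it remembers where each key's newest record lives, so
// later fetches jump straight to the data instead of re-scanning the log.
struct KeyPointerCache {
    offsets: HashMap<u16, usize>, // key -> index of its newest record
}

impl KeyPointerCache {
    // Warm the cache with a single pass over the log. Later records for
    // the same key overwrite earlier offsets, so the newest one wins.
    fn warm(log: &[(u16, &'static str)]) -> Self {
        let mut offsets = HashMap::new();
        for (offset, &(key, _)) in log.iter().enumerate() {
            offsets.insert(key, offset);
        }
        Self { offsets }
    }

    // A cached fetch: O(1) lookup instead of an O(n) scan.
    fn fetch(&self, log: &[(u16, &'static str)], key: u16) -> Option<&'static str> {
        self.offsets.get(&key).map(|&off| log[off].1)
    }
}

fn main() {
    let log = [(1u16, "old"), (2, "b"), (1, "new")];
    let cache = KeyPointerCache::warm(&log);
    assert_eq!(cache.fetch(&log, 1), Some("new")); // newest value for key 1
    assert_eq!(cache.fetch(&log, 3), None);        // never written
}
```

This also shows why a cold cache doesn't help a load-everything-at-boot workload: the warm-up scan is itself the full pass the iterator would do.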

@HaoboGu commented Sep 20, 2024

Ahhh, this is what I mentioned in #33 (comment).

Somehow providing a method to get/iterate all saved items is valuable. I hope we can achieve this one day :D

@davidqge (Author)

  1. You included some host tools that are not relevant to s-s

OK, this is not needed, though it was nice to be able to examine the data easily.

  2. You added a storage struct that does nothing but provide the iterator. Such a struct might be good to add, but then it needs to be used throughout. Currently all code is presented as global functions.

It's very helpful to group them together.

This is especially true with multiple storages: for example, separating frequently changed keyboard macros from a large number of rarely changed key bindings. Grouping makes sure we don't accidentally pass the wrong arguments.

I can change this if it is your only objection.

  3. I'm not a fan of this iterator. It allows the user to see items that are 'logically' deleted (i.e. items in the map that are overridden) and it requires the user to know about the internals of how the map works.

It's kind of like history replay: it gets the earlier value and then overwrites it with the newer value. Perfect for initialization.
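
The "history replay" behaviour can be sketched in plain std Rust (all names here are illustrative, not the crate's API): iterating records in write order and letting newer values overwrite older ones reconstructs the final map state in one pass, even though the iterator briefly exposes the overridden values.

```rust
use std::collections::HashMap;

// The on-flash map is effectively an append-only log of (key, value)
// records, where a later record for the same key logically deletes the
// earlier one. Replaying the log in write order and letting newer values
// overwrite older ones yields the live state in a single pass.
fn replay(log: &[(u8, &'static str)]) -> HashMap<u8, &'static str> {
    let mut state = HashMap::new();
    for &(key, value) in log {
        // Later entries win: this overwrites any earlier value for `key`.
        state.insert(key, value);
    }
    state
}

fn main() {
    // Key 1 is written twice; only the newest value survives the replay.
    let log = [(1u8, "old"), (2, "b"), (1, "new")];
    let state = replay(&log);
    assert_eq!(state.get(&1), Some(&"new"));
    assert_eq!(state.get(&2), Some(&"b"));
    println!("live keys: {}", state.len()); // prints "live keys: 2"
}
```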

  4. The page iterator uses the queue method to find the last page. But map has its own ideas as to how the page layout functions. What you've made may work now, but it might not in the future.

No, it uses the queue method to find the first page. This code is common to both the queue and the map. Maybe it should move to lib.rs?

Well for speedup there is the cache. Are you using it? When are you fetching everything? Just after boot?

Yes, during boot it needs to load every item in storage. I tried KeyPointerCache; it didn't improve the speed in that situation.

My guess would be that if you used a warmed up KeyPointerCache, you'd get similar speedups. After boot it would still be cold of course. I'd be open for a PR that has a function that forcibly warms up the caches or a PR that improves the performance of the caches.

The problem with pre-warming the cache is that the app has to try every possible key combination to load everything, and the cache has to be large. We are talking about embedded CPUs here. The iterator is more efficient, and people are familiar with it from other map implementations too.

@diondokter (Collaborator)

How many keys and pages are we actually talking about?

@davidqge (Author)

My current config uses about 3 pages. That will increase as I try more things. There could be several thousand potential keys, most of them unused.

The RMK project uses the Vial web GUI to change the multi-layer keyboard codes one key at a time.

Writing many small items separately helps minimize flash wear, and we found your project perfect for this situation once the iterator is exposed.

@whitelynx

@diondokter Would removing the host tools and the new storage struct (points 1 and 2) and factoring out the common function from point 4 be sufficient to get this merged? I'd like to see this implemented if possible, since we think the slow startup issue we're seeing in RMK should be fixable by using this iterator.

@diondokter (Collaborator)

Hi, OK, so my major pain point here is that some things that are considered internal are now going to be exposed.
But if this is done well, I'd be amenable to it, since it does solve a real problem.

I can see this merged if:

  • A new feature raw_access is added to the crate, with a comment that the things it enables are not bound by semver.
    • Maybe in the future this can be relaxed, but this is what I want to start with.
  • It only applies to the map. We don't need this in general or for the queue. So put it in the map module and use the map page functions.
  • Everything is exposed as one iterator that iterates over all non-erased & non-corrupted items. The user shouldn't have to know pages exist.
  • Items themselves remain private
    • Iterator should return two things: The key and a u8 slice with the value data
  • No host tools
  • The iterator is well documented so people can know what to expect
  • The iterator is the only new addition to the public api and it's behind that new feature flag
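
Under those constraints, the requested public surface might look roughly like the sketch below. Everything here is hypothetical (the names, the Record type, and the in-memory records standing in for flash pages); it only illustrates an iterator that skips logically erased items and yields (key, value-bytes) pairs while keeping item internals private:

```rust
// Hypothetical internal record type: stays private in the real crate.
struct Record {
    key: u16,
    value: Vec<u8>,
    erased: bool, // logically deleted (overridden) items are skipped
}

// The single new public addition: an iterator over live map items.
// In the real crate this would sit in the map module and be gated:
// #[cfg(feature = "raw_access")]
struct MapItemIter<'a> {
    records: std::slice::Iter<'a, Record>,
}

impl<'a> Iterator for MapItemIter<'a> {
    // The caller sees only the key and a byte slice of the value;
    // the on-flash item layout stays private.
    type Item = (u16, &'a [u8]);

    fn next(&mut self) -> Option<Self::Item> {
        // Skip logically erased items so the user never observes them.
        self.records
            .find(|r| !r.erased)
            .map(|r| (r.key, r.value.as_slice()))
    }
}

fn main() {
    let records = vec![
        Record { key: 1, value: vec![0xAA], erased: true },  // overridden
        Record { key: 1, value: vec![0xBB], erased: false }, // live
        Record { key: 2, value: vec![0xCC], erased: false }, // live
    ];
    let live: Vec<(u16, &[u8])> =
        MapItemIter { records: records.iter() }.collect();
    assert_eq!(live, vec![(1, &[0xBB][..]), (2, &[0xCC][..])]);
}
```

Gating the real version behind the raw_access feature flag would keep it out of the semver-stable surface, as requested above.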
