First, thank you for contributing to Meilisearch! The goal of this document is to provide everything you need to start contributing to the Meilisearch tokenizer.
Remember that there are many ways to contribute other than writing code: writing tutorials or blog posts, improving the documentation, submitting bug reports and feature requests...
- Hacktoberfest
- Assumptions
- How to Contribute
- Development Workflow
- Git Guidelines
- Release Process (for internal team only)
It's Hacktoberfest month! 🥳
Thanks so much for participating with Meilisearch this year!
- We will follow the quality standards set by the organizers of Hacktoberfest (see detail on their website). Our reviewers will not consider any PR that doesn’t match that standard.
- PRs reviews will take place from Monday to Thursday, during usual working hours, CEST time. If you submit outside of these hours, there’s no need to panic; we will get around to your contribution.
- There will be no issue assignment: we don’t want people to ask to be assigned specific issues and never return, which discourages other volunteer contributors from opening a PR to fix the issue. We take the liberty of choosing the PR that best fixes the issue, so we encourage you to get to it as soon as possible and do your best!
You can check out the longer, more complete guideline documentation here.
- You're familiar with GitHub and the Pull Requests (PR) workflow.
- You know about the Meilisearch community; please reach out there if you need help.
- Ensure your change has an issue! Find an existing issue or open a new issue.
- This is where you can get a feel if the change will be accepted or not.
- Once approved, fork the Tokenizer repository in your own GitHub account.
- Create a new Git branch
- Review the Development Workflow section that describes the steps to maintain the repository.
- Make your changes on your branch.
- Submit the branch as a Pull Request pointing to the `main` branch of the origin repository. A maintainer should comment and/or review your Pull Request within a few days, although, depending on the circumstances, it may take longer.
You can run the test suite and the benchmarks from the root of the repository:

```bash
cargo test
cargo bench
```
A `Segmenter` is a Script- or Language-specialized struct that segments a text into several lemmas, which will be classified as separators or words later in the tokenization pipeline.
A `Segmenter` will never change, add, or skip a lemma, which means that concatenating all lemmas must be equal to the original text.
All `Segmenter` implementations are stored in `src/segmenter`.
We highly recommend starting the implementation by copy-pasting the dummy example (`src/segmenter/dummy_example.rs`) and following the instructions in the comments.
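To make the "never change, add, or skip a lemma" rule concrete, here is a minimal, self-contained sketch. The `Segmenter` trait and the `WhitespaceSegmenter` below are illustrative assumptions for this example, not the crate's real API (the actual trait lives in `src/segmenter`); the point is only that a segmenter cuts the text into word and separator lemmas, so concatenating the lemmas gives back the original input.

```rust
/// Illustrative trait only: the real trait in `src/segmenter` may differ.
trait Segmenter {
    /// Cuts `text` into lemmas without changing, adding, or skipping anything.
    fn segment<'a>(&self, text: &'a str) -> Vec<&'a str>;
}

/// Hypothetical Latin-like segmenter that splits on whitespace but keeps the
/// whitespace itself as separator lemmas.
struct WhitespaceSegmenter;

impl Segmenter for WhitespaceSegmenter {
    fn segment<'a>(&self, text: &'a str) -> Vec<&'a str> {
        let mut lemmas = Vec::new();
        let mut start = 0;
        for (i, c) in text.char_indices() {
            if c.is_whitespace() {
                if start < i {
                    lemmas.push(&text[start..i]); // word lemma
                }
                lemmas.push(&text[i..i + c.len_utf8()]); // separator lemma
                start = i + c.len_utf8();
            }
        }
        if start < text.len() {
            lemmas.push(&text[start..]); // trailing word lemma
        }
        lemmas
    }
}

fn main() {
    let text = "The quick brown fox";
    let lemmas = WhitespaceSegmenter.segment(text);
    // The segmenter never alters the text: concatenating all lemmas
    // must give back the original input.
    assert_eq!(lemmas.concat(), text);
    println!("{lemmas:?}");
}
```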
The only thing needed is 2 texts detected as the `Segmenter`'s Script or Language by the tokenizer: one of around 130 bytes and another of around 365 bytes.
These 2 texts must be added to the static `DATA_SET` global located in `benches/bench.rs`:
```rust
static DATA_SET: &[((usize, Script, Language), &str)] = &[
    // short texts (~130 bytes)
    [...]
    ((<size in bytes>, Script, Language), "<Text of around 130 bytes>"),
    // long texts (~365 bytes)
    [...]
    ((<size in bytes>, Script, Language), "<Text of around 365 bytes>"),
];
```
A `Normalizer` is a struct used to alter the lemma contained in a Token in order to remove features that don't significantly impact the meaning, such as lowercasing, removing accents, or converting Traditional Chinese characters into Simplified Chinese characters.
We highly recommend starting the implementation by copy-pasting the dummy example (`src/normalizer/dummy_example.rs`) and following the instructions in the comments.
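As a rough illustration of what a normalizer does, here is a minimal, self-contained sketch. The `Normalizer` trait and the `LowercaseNormalizer` below are assumptions for this example, not the crate's real API (the actual trait lives in `src/normalizer`):

```rust
/// Illustrative trait only: the real trait in `src/normalizer` may differ.
trait Normalizer {
    /// Alters a lemma to remove features that don't significantly impact its
    /// meaning (e.g. case, accents, Traditional vs. Simplified Chinese).
    fn normalize(&self, lemma: &str) -> String;
}

/// Hypothetical normalizer that lowercases the lemma.
struct LowercaseNormalizer;

impl Normalizer for LowercaseNormalizer {
    fn normalize(&self, lemma: &str) -> String {
        lemma.to_lowercase()
    }
}

fn main() {
    let normalizer = LowercaseNormalizer;
    // "Meilisearch" and "meilisearch" should match after normalization.
    assert_eq!(normalizer.normalize("Meilisearch"), "meilisearch");
}
```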
All changes must be made in a branch and submitted as a PR.
We do not enforce any branch naming style, but please use something descriptive of your changes.
As minimal requirements, your commit message should:
- be capitalized
- not end with a dot or any other punctuation character (!, ?)
- start with a verb so that we can read your commit message this way: "This commit will ...", where "..." is the commit message. e.g.: "Fix the home page button" or "Add more tests for create_index method"
We don't follow any other convention, but if you want to use one, we recommend the Chris Beams one.
Some notes on GitHub PRs:
- All PRs must be reviewed and approved by at least one maintainer.
- The PR title should be accurate and descriptive of the changes. Indeed, the PR title will be automatically added to the next release changelog.
- Convert your PR to a draft if your changes are a work in progress: no one will review it until you mark your PR as ready for review. Draft PRs are recommended when you want to show that you are working on something and make your work visible.
- The branch related to the PR must be up-to-date with `main` before merging. Fortunately, this project uses Bors to automatically enforce this requirement without the PR author having to rebase manually.
Meilisearch tools follow the Semantic Versioning Convention.
This project integrates a bot that helps us manage pull request merging.
Read more about this.
This project integrates a tool to create automated changelogs: the release-drafter.
Make a PR modifying the file `Cargo.toml` with the right version:

```toml
version = "X.X.X"
```
Once the changes are merged on `main`, you can publish the current draft release via the GitHub interface: on this page, click on Edit (related to the draft release) > update the description if needed > when you are ready, click on Publish release.
Thank you again for reading this through. We cannot wait to begin working with you if you made your way through this contributing guide ❤️