
Add int16 support to MINIMUM and MAXIMUM #2678

Merged
merged 4 commits into tensorflow:main on Sep 9, 2024

Conversation

andresovela
Copy link
Contributor

@andresovela andresovela commented Sep 3, 2024

This PR adds support for int16 to the MINIMUM and MAXIMUM kernels. This is necessary to work with some audio models using int16x8 quantization.

This is my first contribution to TFLM, let me know if there's anything else that needs to be done :)

bug=#2679

@andresovela andresovela requested a review from a team as a code owner September 3, 2024 12:31
@suleshahid
Copy link
Collaborator

Hi Andres, thanks for the PR.

Seems like TfLite already has this, so it would be great to support it in TFLM as well. Could you add a unit test in maximum_minimum_test.cc for this? You can see the TfLite one for reference, but we will need to use the quantized versions as in our unit tests.

Thanks!

@andresovela
Copy link
Contributor Author

Sure, will do. Can you point me to some documentation about how to run the unit tests in this project?

@ddavis-2015
Copy link
Member

> Sure, will do. Can you point me to some documentation about how to run the unit tests in this project?

To test with x86 use:

```
make -f tensorflow/lite/micro/tools/make/Makefile BUILD_TYPE=default test_kernel_maximum_minimum_test -j$(nproc)
```

@andresovela
Copy link
Contributor Author

@ddavis-2015 is this documented somewhere? I took a look at CONTRIBUTING.md but I didn't see anything. I'd open a PR with testing instructions if there isn't anything yet.

@ddavis-2015
Copy link
Member

> @ddavis-2015 is this documented somewhere? I took a look at CONTRIBUTING.md but I didn't see anything. I'd open a PR with testing instructions if there isn't anything yet.

@andresovela

https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/docs/porting_reference_ops.md#5-port-the-op-from-lite-to-micro-pr3

Step 6.

@andresovela
Copy link
Contributor Author

```
make -f tensorflow/lite/micro/tools/make/Makefile  BUILD_TYPE=default test_kernel_maximum_minimum_test -j4
tensorflow/lite/micro/tools/make/downloads/flatbuffers already exists, skipping the download.
tensorflow/lite/micro/tools/make/downloads/kissfft already exists, skipping the download.
tensorflow/lite/micro/tools/make/downloads/pigweed already exists, skipping the download.
tensorflow/lite/micro/tools/make/test_latency_log.sh kernel_maximum_minimum_test  gen/linux_x86_64_default_gcc/bin/kernel_maximum_minimum_test '~~~ALL TESTS PASSED~~~' linux
Testing FloatTest
Testing Int8Test
Testing Int16Test
Testing FloatWithBroadcastTest
Testing Int32WithBroadcastTest
5/5 tests passed
~~~ALL TESTS PASSED~~~

Running kernel_maximum_minimum_test took 0.001 seconds
```

@ddavis-2015
Copy link
Member

@andresovela ok looks good. Now for the procedural aspects of this process (we ALL have to do this): The code must pass Google code style checks, and there must be an issue # in your initial description. For the code style issues, find the code style issue test in the checks section and click on Details.

For the issue # please create an issue in this repo (it can just be a very short description of the problem). Then at the end of your initial description of this PR add:
bug=#xxxx
where xxxx is the issue number.

Finally, you can check that the Continuous Integration tests pass by going to labels on the right hand side of the PR page, and selecting ci:run. CI runs take approx. 20 minutes. If everything passes, you can add @suleshahid to the reviewers and congrats on your addition to the TFLM (or LiteRT for Microcontrollers) code base.

@andresovela
Copy link
Contributor Author

@ddavis-2015 I fixed the code style issue, but it looks like I don't have permissions to either add labels or add reviewers to PRs.

@ddavis-2015
Copy link
Member

> @ddavis-2015 I fixed the code style issue, but it looks like I don't have permissions to either add labels or add reviewers to PRs.

@andresovela Sorry, didn't know about the permissions. All tests pass, reviewer added.

@suleshahid
Copy link
Collaborator

Thanks again Andres, and thank you David for the help!

@mergify mergify bot merged commit 19aaea8 into tensorflow:main Sep 9, 2024
80 of 81 checks passed
@andresovela andresovela deleted the min-max-int16 branch September 26, 2024 08:15