We are consolidating OPs in TTNN #9825
---
🚀 Update!
We are excited to announce the successful merge of a significant change (#9870). While the …

Why the Symlink?
We are currently undertaking a massive effort to integrate operations into ttnn … (see the illustrative sketch below).

How Does This Affect You?
Please refrain from adding new operations to tt_dnn …

Next Steps
In the coming weeks, we will focus on: …
A big thank you to @tt-rkim, @eyonland, and @arakhmati for their help!
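For readers unfamiliar with the mechanism, here is a minimal, self-contained Python sketch of why a symlink at the old location keeps existing references working after a directory moves. All paths below are invented for illustration and are not the actual tt-metal layout:

```python
import pathlib
import tempfile

# Hypothetical illustration only: the directory names are made up and do not
# reflect the real tt-metal tree. The point is that a symlink left at the old
# location lets existing references keep resolving after the contents move.
root = pathlib.Path(tempfile.mkdtemp())

# The "new" home of an op after the move.
new_home = root / "ttnn" / "operations"
new_home.mkdir(parents=True)
(new_home / "op.hpp").write_text("// moved op header\n")

# A symlink at the "old" location points at the new one.
old_home = root / "tt_dnn"
old_home.symlink_to(new_home)

# Reads through the old path still work, so nothing downstream breaks.
print((old_home / "op.hpp").read_text())
```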
---
🚀 Update!
Contents of … Going forward, new NN operations should be added to ttnn.
---
👋 Hi everyone! Here is an update on one of our ongoing TTNN efforts.
Background
Today, operations are split between tt_dnn and ttnn, and many ops in ttnn call their tt_dnn implementation. Some ops, such as unary and binary, duplicate code, leading to two different pathways in C++ / Python. This not only confuses new contributors but also increases the time required to make changes. For example, as we add support for queue_id and output_tensor, we have to add them to the ops in both libraries.
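To make the two-pathway problem concrete, here is a minimal, self-contained Python sketch. The function names and signatures are simplified stand-ins, not the real tt_dnn / ttnn APIs:

```python
# Simplified stand-ins for the duplicated pathways; these are not the real
# tt_dnn / ttnn signatures, just an illustration of the maintenance cost.

def tt_dnn_add(a: float, b: float, queue_id: int = 0) -> float:
    """Pathway 1: the legacy tt_dnn implementation."""
    # A cross-cutting change (e.g. threading queue_id through) lands here...
    return a + b

def ttnn_add(a: float, b: float, queue_id: int = 0) -> float:
    """Pathway 2: the ttnn wrapper that forwards to tt_dnn."""
    # ...and must be mirrored here, so every such change is made twice.
    return tt_dnn_add(a, b, queue_id=queue_id)

print(ttnn_add(1.0, 2.0))  # 3.0, routed through two layers instead of one
```

Consolidating the ops removes the wrapper layer, so a change like this only has to be made once.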
So we are now merging the two: moving code from tt_dnn to ttnn, removing tt_lib, and consolidating OPs in a single place.
See details in #9322.
We are tracking the work here: https://github.com/orgs/tenstorrent/projects/30/views/7 (it currently accounts for ~60% of the scope).
We are now in Phase 3: mass-migrating ops. Very excited to see more and more operations consolidated in a single place! Lots of folks are involved in making this happen! 🙌
I believe this work lays a strong foundation for the remaining preparation ahead of the TTNN v1.0 release!
How does this affect you?
We expect minimal-to-no impact.
Let us know if you have any questions!