Generalize code in tutorials: 2, 3, 4 #5286

Open
wants to merge 1 commit into base: main
Conversation

anmyachev
Contributor

No description provided.

@anmyachev changed the title from "Generalize code in tutorials: #2,#3,#4" to "Generalize code in tutorials: 2, 3, 4" on Nov 29, 2024
@Jokeren
Contributor

Jokeren commented Nov 29, 2024

IMO, it's still not generalized because DEVICE is hardcoded as cuda. Can you describe how users are supposed to use it on Intel devices?

@anmyachev
Contributor Author

> IMO, it's still not generalized because DEVICE is hardcoded as cuda. Can you describe how users are supposed to use it on Intel devices?

I agree; from the user's point of view this only allows the device to be changed in one place. From our point of view (as plugin developers) it is also more convenient, since our code differs less from upstream, which makes merging easier.

Perhaps it would be suitable to override the device via an environment variable, e.g. DEVICE=xpu python tutorials/02-tutorial. In that case, the user would not need to change the source code at all. What do you think?
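
For illustration, a minimal sketch of what this could look like, assuming the tutorials expose a single module-level DEVICE constant (the DEVICE variable name comes from this comment; the "cuda" default and the torch usage are assumptions):

```python
# Hypothetical sketch of the environment-variable approach discussed above.
# The DEVICE name is taken from this thread; the default value is an assumption.
import os

import torch

DEVICE = os.environ.get("DEVICE", "cuda")  # e.g. DEVICE=xpu python tutorials/02-tutorial

# The rest of a tutorial would then only ever reference DEVICE:
x = torch.rand(1024, device=DEVICE)
```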

@Jokeren
Contributor

Jokeren commented Nov 30, 2024

One potential solution is to use

```python
triton.runtime.driver.active.get_current_target().backend
```

It works well for amd and cuda, but I haven't tried it for xpu (I don't have a testbed).
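
As a sketch of how a tutorial could use that call, assuming the mapping from Triton backend names to torch device strings below (ROCm reports "hip" but still uses torch's "cuda" device; the "xpu" entry is an untested assumption):

```python
# Minimal sketch of the backend-detection idea from this comment.
# The backend-to-torch-device mapping is an assumption, not part of the Triton API.
import torch
import triton

backend = triton.runtime.driver.active.get_current_target().backend
DEVICE = {"cuda": "cuda", "hip": "cuda", "xpu": "xpu"}.get(backend, "cpu")

x = torch.rand(1024, device=DEVICE)
```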

@ThomasRaoux
Collaborator

+1 to what Keren said. I don't think moving the place where we set the device is an improvement; it only makes sense if we can automatically detect the Triton backend, as Keren suggested.
