
Import reduce naive from burn #314

Merged
merged 10 commits into from
Nov 28, 2024

Conversation

maxtremblay
Contributor

@maxtremblay maxtremblay commented Nov 27, 2024

This is the first PR to import the reduction algorithms implemented by @wingertge in Burn.

Currently, this only imports the naive implementation; the others will follow soon. I also added a bunch of tests and support for lines, and refactored the code a little. However, the logic of the implementation follows the one from Burn.

Note that the tests currently ignore ArgMax and ArgMin since I have some issues with Line that must be solved first.

@maxtremblay maxtremblay changed the title Import reduce naive burn Import reduce naive from burn Nov 27, 2024
Member

@louisfd louisfd left a comment


I have a few comments, but it's already good

let mut offset_input = 0;
for axis in 0..input.rank() {
let coordinate = (ABSOLUTE_POS / output.stride(axis)) % output.shape(axis);
offset_input += coordinate * input.stride(axis);
Member


How come the if axis != dim was removed? Was it useless?

Contributor Author


The shape of the dim axis in output is always 1, so for that particular axis, the coordinate is always 0. Thus, it has a null effect on offset_input.
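This can be checked in plain Rust. The sketch below is a hypothetical standalone version of the offset computation, not the CubeCL kernel itself; the shapes and strides are made up for illustration (reducing a row-major `[2, 3]` tensor along axis 1, so the output shape is `[2, 1]`):

```rust
// Hypothetical standalone sketch of the kernel's offset computation.
// Along the reduced axis, the output shape is 1, so `coordinate % 1 == 0`
// and the term contributes nothing; no `if axis != dim` guard is needed.
fn input_offset(
    absolute_pos: usize,
    out_strides: &[usize],
    out_shape: &[usize],
    in_strides: &[usize],
) -> usize {
    let mut offset = 0;
    for axis in 0..out_shape.len() {
        let coordinate = (absolute_pos / out_strides[axis]) % out_shape[axis];
        offset += coordinate * in_strides[axis];
    }
    offset
}
```

For example, with input strides `[3, 1]`, output shape `[2, 1]`, and output strides `[1, 1]`, output positions 0 and 1 map to input offsets 0 and 3, the starts of the two rows being reduced.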

Contributor Author


Should I add a comment?

Member


No, I was just making sure nothing was lost from the original

}

fn accumulate(accumulator: &mut Self::Accumulator, current_value: Line<EI>, i: u32) {
// FIX: There is an issue when line_size is 1 on wgpu.
Member


You could do if comptime!(current_value.size() == 1) and have a different logic without indexes
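A plain-Rust analogy of this suggestion (not CubeCL code): a const generic plays the role of `comptime!`, so the `size == 1` branch is resolved at compile time. The `accumulate_max` name and max-reduction logic are illustrative assumptions, not the PR's actual accumulator:

```rust
// Plain-Rust analogy of `if comptime!(current_value.size() == 1)`:
// LINE_SIZE is known at compile time, so the branch folds away statically.
// In CubeCL a size-1 Line could take a scalar path with no per-element
// indexing; here `[0]` stands in for that.
fn accumulate_max<const LINE_SIZE: usize>(acc: &mut [f32; LINE_SIZE], value: [f32; LINE_SIZE]) {
    if LINE_SIZE == 1 {
        // Scalar path: a single comparison, no loop over indices.
        if value[0] > acc[0] {
            acc[0] = value[0];
        }
    } else {
        // Vectorized path: elementwise max over the whole line.
        for i in 0..LINE_SIZE {
            if value[i] > acc[i] {
                acc[i] = value[i];
            }
        }
    }
}
```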

.collect()
}

fn random_input_values<F: Float>(&self) -> Vec<F> {
Member


Quite a duplication from matmul test_utils. This would be a good case for cubecl-std, under a test cfg. Also, the matmul one takes a seed as an argument, which can prevent bugs when we have several inputs. I think we should take an Option and use 123456789 when None. Anyway, not a blocker for this PR.
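One way the suggested helper could look, as a dependency-free sketch: an `Option<u64>` seed that falls back to 123456789, driving a simple LCG. The LCG is an assumption to keep the sketch self-contained; the real helper may well use the `rand` crate instead:

```rust
// Sketch of the proposed shared test helper: explicit seed so several
// inputs in one test do not accidentally share values.
// Falls back to 123456789 when no seed is given, as suggested above.
fn random_input_values(len: usize, seed: Option<u64>) -> Vec<f32> {
    let mut state = seed.unwrap_or(123456789);
    (0..len)
        .map(|_| {
            // LCG step (Knuth/Numerical Recipes 64-bit constants).
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            // Map the high 24 bits to [0, 1).
            (state >> 40) as f32 / (1u64 << 24) as f32
        })
        .collect()
}
```

Calls with the same seed (or both with `None`) are reproducible, while distinct seeds give distinct inputs for multi-tensor tests.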

Contributor Author


Should I create an issue?

Member


Yeah, maybe an issue about creating cubecl-std with common stuff between reduce and matmul.

@maxtremblay maxtremblay merged commit 450de67 into main Nov 28, 2024
5 checks passed
@maxtremblay maxtremblay deleted the import-reduce-burn branch November 28, 2024 01:11
@maxtremblay maxtremblay restored the import-reduce-burn branch November 28, 2024 01:11
@maxtremblay maxtremblay deleted the import-reduce-burn branch November 28, 2024 01:11