[FEATURE] General 'simultaneous static&dynamic' ops on axes #107

Open
psiha opened this issue Mar 18, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

psiha (Contributor) commented Mar 18, 2023

I often need to perform some transformation on a shape, or on a pair of shapes (implicitly, when performing a transformation on an ndarray that affects its shape). Now that we can have both static and dynamic axes, I usually have to duplicate the logic for the two cases. Some motivating examples:

  • merge two shapes (e.g. for an add operation): verify that the two shapes are mathematically equal and produce a new one that maximizes compile-time information from both input shapes (e.g. one can have a fixed 2nd axis while the other has a fixed 3rd axis, so the result should have two fixed axes)
  • concatenation of two shapes along an axis: the addition has to be performed in both compile-time and run-time space (again maximizing compile-time information)
  • reshaping with a placeholder/free axis (an operation worth adding to the library in its own right)
  • what convolutions (in ML) do with their inputs: for in[N, H, W, C], N is forwarded unchanged, H and W are forwarded if padding is on (otherwise they are slightly reduced), and the input C is fully replaced with a different value
  • etc.

So: provide a generic way (one that compiles today ;P) to specify and perform these operations without duplication ;D

@psiha psiha added the enhancement New feature or request label Mar 18, 2023
jfalcou (Owner) commented Mar 19, 2023

Note, while I'm looking at this:
the static size can be specified via kwk::fixed<N>, and this type supports static-compatible +-*/, so fixed + fixed is still a fixed.
When used as parameters to of_size, it will do the correct thing.

So:

auto add_shape(auto s1, auto s2)
{
  return of_size( kumi::map( [](auto a, auto b) { return a + b; }, s1, s2) );
}

should work with mixed sizes with no extra work.

Maybe this can help.

psiha (Contributor, Author) commented Mar 20, 2023

Cool, thanks, will try it... once <...> works and I can compile ;D

psiha (Contributor, Author) commented Jul 26, 2023

Tried it with the current code on a simple example:

kwk::shape<_> shape_d;
kwk::shape<kwk::width[ 5 ]> shape_s;
auto z = kumi::map( []( auto a, auto b ) { return a + b; }, shape_d, shape_s );

and z is a plain kumi::tuple<int>, so both pieces of compile-time information are lost (the width type and the size 5), and the result is no longer a shape but a plain tuple.
