Hi,

I have a pretrained Haiku network that I would like to convert into a PyTorch network for testing. By checking the shapes, I found the linear layers easy to convert: a simple np.transpose(p, (1, 0)) gets each weight into the right layout. For the convolution layers, however, things turned out to be more complicated.

For a Conv2d with kernel size 3 and padding 1, the Haiku param has shape (kernel_h, kernel_w, in_channels, out_channels). I tried both np.transpose(p, (3, 2, 0, 1)) and np.transpose(p, (3, 2, 1, 0)), and both give random-guess accuracy. These are the only two permutations I could find that make the shapes consistent; the exact code is sketched below.
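For concreteness, here is a minimal sketch of the conversion I am applying (the helper names are just ones I made up for this post):

```python
import numpy as np
import torch

def linear_haiku_to_torch(w: np.ndarray) -> torch.Tensor:
    # Haiku Linear stores weights as (in_features, out_features);
    # torch.nn.Linear expects (out_features, in_features).
    return torch.from_numpy(np.transpose(w, (1, 0)).copy())

def conv2d_haiku_to_torch(w: np.ndarray) -> torch.Tensor:
    # Haiku Conv2D (default data_format) stores weights as HWIO:
    # (kernel_h, kernel_w, in_channels, out_channels).
    # torch.nn.Conv2d expects OIHW:
    # (out_channels, in_channels, kernel_h, kernel_w).
    return torch.from_numpy(np.transpose(w, (3, 2, 0, 1)).copy())
```

One thing I am unsure about: Haiku's Conv2D defaults to NHWC inputs while torch.nn.Conv2d expects NCHW, so if the network flattens the conv output before a linear layer, the feature order after the flatten differs between the two layouts, and the first linear layer after it would need a matching permutation. I do not know whether that is relevant here.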
Any thoughts on what I might be missing would be greatly appreciated.
Thanks!