
Request for output_padding Parameter Support in Conv1dTranspose and Conv2dTranspose Layers #292

Open
PhyllisJi opened this issue Jun 5, 2024 · 0 comments

Software Environment:

  • MindSpore version (source or binary): binary
  • Python version (e.g., Python 3.7.5): 3.9
  • OS platform and distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
  • GCC/Compiler version (if compiled from source):

Describe the expected behavior

I am writing to request the addition of the output_padding parameter to the Conv1dTranspose and Conv2dTranspose layers in MindSpore. This feature is crucial for the following reasons:

Precise Output Shape Control:
The output_padding parameter allows precise control over the output shape of transposed convolution layers. Because a strided convolution maps several input sizes to the same output size, the transposed operation cannot recover the intended size from stride, padding, and kernel size alone; output_padding resolves this ambiguity, which matters whenever the desired output size cannot be achieved directly through the existing parameters.
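
As a minimal illustration of the ambiguity (shown here with PyTorch's ConvTranspose1d, since PyTorch already exposes this parameter, as noted below):

```python
import torch
import torch.nn as nn

# With stride=2, a forward Conv1d maps both length-7 and length-8 inputs
# to length-4 outputs, so the inverse shape is ambiguous:
conv = nn.Conv1d(1, 1, kernel_size=3, stride=2, padding=1)
print(conv(torch.zeros(1, 1, 7)).shape)  # torch.Size([1, 1, 4])
print(conv(torch.zeros(1, 1, 8)).shape)  # torch.Size([1, 1, 4])

# output_padding selects which of the two lengths the transpose produces,
# following (with dilation=1):
#   L_out = (L_in - 1)*stride - 2*padding + kernel_size + output_padding
up7 = nn.ConvTranspose1d(1, 1, kernel_size=3, stride=2, padding=1)
up8 = nn.ConvTranspose1d(1, 1, kernel_size=3, stride=2, padding=1,
                         output_padding=1)
print(up7(torch.zeros(1, 1, 4)).shape)  # torch.Size([1, 1, 7])
print(up8(torch.zeros(1, 1, 4)).shape)  # torch.Size([1, 1, 8])
```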

Consistency with Other Frameworks:
Many deep learning frameworks, such as TensorFlow and PyTorch, include output_padding in their transposed convolution layers. Adding this feature to MindSpore would enhance compatibility and ease the transition for users migrating from those platforms.

Flexibility in Model Design:
The ability to fine-tune the output shape using output_padding offers greater flexibility in model architecture design. It simplifies aligning tensor dimensions, which is essential in complex architectures such as encoder-decoder networks, where upsampled decoder feature maps must exactly match the corresponding encoder feature maps.

Reduced Post-Processing Overhead:
Without output_padding, users have to apply additional operations (such as explicit zero-padding, sketched below) to reach the desired output shape, which adds unnecessary complexity and computational overhead. Integrating output_padding directly into the layer would streamline model implementation and improve performance.
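
For reference, a minimal sketch of that workaround, assuming the MindSpore 2.x functional API (mindspore.ops.pad and mindspore.ops.zeros). Note that zero-padding reproduces only the shape that output_padding would give, not the values a true output_padding would compute at the border:

```python
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

# Transposed conv without output_padding: (1, 16, 14, 14) -> (1, 8, 27, 27),
# since H_out = (14 - 1)*2 - 2*1 + 3 = 27, one short of stride * input size.
net = nn.Conv2dTranspose(16, 8, kernel_size=3, stride=2,
                         pad_mode='pad', padding=1)
x = ops.zeros((1, 16, 14, 14), ms.float32)
y = net(x)  # (1, 8, 27, 27)

# Emulate output_padding=1 by zero-padding one extra column/row on the
# right/bottom, giving (1, 8, 28, 28), i.e. stride * input size:
y = ops.pad(y, (0, 1, 0, 1))
```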
