
Rename the argument of dense encoder from layers to fc_layers #3930

Open · wants to merge 1 commit into base: master

Conversation

@metamath1 commented Feb 12, 2024

Code Pull Requests

Please provide the following:

  • a clear explanation of what your code does: The fc_layers setting is not applied by the dense encoder for the vector input feature. To fix this, the layers argument of the DenseEncoder class has been renamed to fc_layers.
  • if applicable, a reference to an issue: I posted a question on Slack, but haven't received an answer yet. https://ludwig-ai.slack.com/archives/C01PN6M2RSM/p1707631595275339
  • a reproducible test for your PR (code, config and data sample): If you run the code below without the suggested fix, it will not create Linear layers with output sizes of 64 and 32, but only Linear layers with the default size of 256.
    https://gist.github.com/metamath1/0dc3893e11a887cdd2544372d9e46852

@@ -65,7 +65,7 @@ class DenseEncoder(Encoder):
     def __init__(
         self,
         input_size,
-        layers=None,
+        fc_layers=None,
Collaborator commented:

@metamath1 Thank you for proposing this change. Do you think it would potentially cause backward compatibility issues if other users already passed layers to the constructor? Thank you. /cc @arnavgarg1

Contributor commented:

@alexsherstinsky Yes, that is right.

We have a file to add support for backward compatibility here: https://github.com/ludwig-ai/ludwig/blob/ea890d99e72fbc32fad8aae18070da9969ddb422/ludwig/utils/backward_compatibility.py

And an associated test file https://github.com/ludwig-ai/ludwig/blob/ea890d99e72fbc32fad8aae18070da9969ddb422/tests/ludwig/utils/test_backward_compatibility.py to add a test to make sure it works.

@metamath1 Are you able to add backward compatibility support and an associated test for it?
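
For illustration, a minimal sketch of what such an upgrade could look like, in the spirit of the renaming helpers in backward_compatibility.py; the function name and traversal are assumptions, not the module's actual API (the real module wires upgrades into a versioned registry rather than a bare function):

import warnings

def _rename_dense_encoder_layers(config: dict) -> dict:
    # Hypothetical upgrade helper in the style of
    # ludwig/utils/backward_compatibility.py.
    for feature in config.get("input_features", []):
        encoder = feature.get("encoder", {})
        if encoder.get("type") == "dense" and "layers" in encoder:
            warnings.warn(
                "The 'layers' parameter of the dense encoder is deprecated; "
                "use 'fc_layers' instead.",
                DeprecationWarning,
            )
            encoder["fc_layers"] = encoder.pop("layers")
    return config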

Collaborator commented:

🙇

Contributor commented:

@metamath1 Actually, I am a bit confused about this PR - what's the intended scope and reason for the change?

I misunderstood the change; this may not require any backward compatibility handling, since you're renaming the argument in the encoder class as opposed to the schema directly.

@metamath1 (Author) commented Feb 13, 2024:


Here's the setup for creating multiple Linear layers in a dense encoder:

config = {
    'preprocessing': {
        'split': {
            'type': 'fixed',
            'column': 'split'
        }
    },
    'input_features': [
        {
            'name': 'image',
            'type': 'vector',
            'encoder': {
                'type': 'dense',
                # This setting is ignored: the encoder always creates
                # layers with the default output size of 256.
                'fc_layers': [
                    {'output_size': 64},
                    {'output_size': 32}
                ]
            }
        }
    ],
    'output_features': [
        {'name': 'label', 'type': 'category'}
    ],
    'trainer': {'epochs': 1, 'batch_size': 64, 'eval_batch_size': 128}
}
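
For reference, a minimal way to exercise this config (the pandas DataFrame df with 'image', 'label', and 'split' columns is an assumption, not part of the original report):

from ludwig.api import LudwigModel

model = LudwigModel(config)
model.train(dataset=df)  # df: pandas DataFrame with 'image', 'label', 'split' columns (assumed)
print(model.model)       # inspect the built encoder: the fc stack shows 256, not 64/32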

However, the fc_layers setting does not work. This is because the constructor argument of DenseEncoder is named layers, while the config dictionary stores the setting under fc_layers, so the user's fc_layers values are never passed to the DenseEncoder.

Looking at the documentation, it appears that starting from version 0.7, layers was renamed to fc_layers. Since a layers key is no longer created in the config dictionary, specifying layers in the config (as in older versions) instead of fc_layers does not work either.
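
A simplified sketch of the failure mode (illustrative only; Ludwig's actual DenseEncoder has more parameters): because the constructor's keyword is layers while the config supplies fc_layers, the user's value is swallowed by **kwargs and the default wins.

class DenseEncoder:
    def __init__(self, input_size, layers=None, **kwargs):
        # 'fc_layers' from the config lands in **kwargs and is never read,
        # so the encoder silently falls back to its default layer sizes.
        self.layers = layers or [{"output_size": 256}]

enc = DenseEncoder(input_size=16, fc_layers=[{"output_size": 64}, {"output_size": 32}])
print(enc.layers)  # [{'output_size': 256}] -- the requested 64/32 never arrive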

Collaborator commented:

This really needs to be fixed.

For retrocompatibility we can have a couple of lines of code that internally rename layers to fc_layers and warn the user that the name has changed and that layers is deprecated.

Also, if it was not working anyway, there's not a lot of damage in backward incompatibility.

Finally, I'm surprised that the wrong keyword was not caught by either the config validator or the dereferencing of the kwargs; there should have been a check somewhere.
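
As a sketch of that suggestion (hypothetical, not the change made in this PR): accept both keywords during a deprecation window and warn when the old one is used.

import warnings

class DenseEncoder:
    # Hypothetical deprecation shim; a sketch, not the actual fix.
    def __init__(self, input_size, fc_layers=None, layers=None, **kwargs):
        if layers is not None:
            warnings.warn(
                "'layers' is deprecated for the dense encoder; use 'fc_layers' instead.",
                DeprecationWarning,
            )
            fc_layers = fc_layers if fc_layers is not None else layers
        self.fc_layers = fc_layers or [{"output_size": 256}]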


Unit Test Results

  4 files ±0     4 suites ±0     9m 6s ⏱️ -29s
 12 tests ±0     7 ✔️ -2      5 💤 +2     0 ±0
 40 runs  ±0    20 ✔️ -8     20 💤 +8     0 ±0

(✔️ = passed, 💤 = skipped)

Results for commit 274a647. ± Comparison against base commit ea890d9.

This pull request skips 2 tests.
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[ames_housing.gbm.yaml]
tests.regression_tests.benchmark.test_model_performance ‑ test_performance[mercedes_benz_greener.gbm.yaml]

@mhabedank (Collaborator) commented:

Seems to be important, needs merge.
