An extension to the encoder that generalizes the metadata feature representation away from hand-crafted features (e.g. mean of means of numerical features) and instead formulates the encoder as an ANN function-approximator module, where the input is some raw, numericalized form of the data and the output is a lower-dimensional encoding of the data.
Possible architectures for the differentiable module:
Transformer
A hierarchical transformer that encodes each feature vector as a "feature embedding" and each row vector as an "instance embedding" (see the sketch after this list).
- feature embeddings are computed on random samples of the feature set {f_0, f_1, ... f_n}
- instance embeddings are computed on random samples of the dataset (X, y)
- the embeddings are concatenated and passed through a fully-connected layer with the same output dimensionality as the controller's input dimensionality.
- this metafeature embedding is then used as input to the controller (decoder) to produce an ML framework.
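A minimal PyTorch sketch of this hierarchical setup. The class and parameter names (`MetafeatureTransformer`, `controller_dim`, sample sizes) are hypothetical, and fixed sample sizes per run are assumed; only the overall shape of the computation follows the description above.

```python
import torch
import torch.nn as nn


class MetafeatureTransformer(nn.Module):
    """Encodes a sampled dataset (X, y) into a single metafeature embedding."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2, controller_dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # One transformer over feature vectors (columns), one over row vectors.
        self.feature_encoder = nn.TransformerEncoder(layer, n_layers)
        self.instance_encoder = nn.TransformerEncoder(layer, n_layers)
        # Project raw column/row vectors onto the model dimension.
        self.col_proj = nn.LazyLinear(d_model)
        self.row_proj = nn.LazyLinear(d_model)
        # Fuse the two pooled embeddings to match the controller's input size.
        self.fc = nn.Linear(2 * d_model, controller_dim)

    def forward(self, X, y, n_feature_samples=8, n_instance_samples=32):
        n_rows, n_cols = X.shape
        # Feature embeddings: random sample of columns {f_0, ..., f_n},
        # each column treated as one token.
        cols = torch.randperm(n_cols)[:n_feature_samples]
        col_tokens = self.col_proj(X[:, cols].T.unsqueeze(0))    # (1, n_feat, d_model)
        feat_emb = self.feature_encoder(col_tokens).mean(dim=1)  # (1, d_model)
        # Instance embeddings: random sample of rows of (X, y),
        # each row (plus its target value) treated as one token.
        rows = torch.randperm(n_rows)[:n_instance_samples]
        row_tokens = self.row_proj(
            torch.cat([X[rows], y[rows].unsqueeze(1)], dim=1).unsqueeze(0)
        )                                                         # (1, n_inst, d_model)
        inst_emb = self.instance_encoder(row_tokens).mean(dim=1)  # (1, d_model)
        # Concatenate and map to the controller's input dimensionality.
        return self.fc(torch.cat([feat_emb, inst_emb], dim=1))   # (1, controller_dim)


# Usage: a toy dataset with 100 rows and 10 numerical features.
X, y = torch.randn(100, 10), torch.randint(0, 2, (100,)).float()
metafeatures = MetafeatureTransformer()(X, y)  # shape (1, controller_dim)
```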
CNN
A convnet could probably be used with the same data setup as the Transformer.
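A rough sketch of that CNN variant under the same sampling setup, again with illustrative names and layer sizes rather than a settled design:

```python
import torch
import torch.nn as nn


class MetafeatureCNN(nn.Module):
    def __init__(self, n_instance_samples=32, controller_dim=128):
        super().__init__()
        self.n_instance_samples = n_instance_samples
        # Treat the sampled (X, y) block as a 1-channel "image":
        # rows are instances, columns are features plus the target.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool away dataset-dependent sizes
        )
        self.fc = nn.Linear(32, controller_dim)

    def forward(self, X, y):
        rows = torch.randperm(X.shape[0])[:self.n_instance_samples]
        block = torch.cat([X[rows], y[rows].unsqueeze(1)], dim=1)  # (n_inst, n_feat+1)
        h = self.conv(block.unsqueeze(0).unsqueeze(0))             # (1, 32, 1, 1)
        return self.fc(h.flatten(1))                               # (1, controller_dim)
```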