To implement pruning in my model, I need to use a regularizer (typically the L1 norm).
I created my own layer (named LineGain) and I build its kernel like this:
// called in the layer's build function
kernel = add_weight("kernel", kernelshape,
                    initializer: kernel_initializer,
                    regularizer: kernel_regularizer,
                    dtype: base.DType,
                    trainable: true);
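For reference, the way I understand a kernel regularizer is supposed to take effect (based on the Python Keras behaviour, which I assume the TensorFlow.NET port mirrors): add_weight registers regularizer(kernel) as an extra loss, and fit() adds that penalty to the data loss. Written out by hand, the term my L1 regularizer should contribute is the following (a sketch of the expected math only, not of TensorFlow.NET internals; kernelTensor stands for the kernel variable read back as a Tensor):

using Tensorflow;
using static Tensorflow.Binding;

// Expected extra term in the training objective for an L1 kernel regularizer:
//   totalLoss = dataLoss + lambda * sum(|kernel|)
// It is this penalty that pushes individual weights to exactly zero and
// makes pruning possible.
static Tensor L1Penalty(Tensor kernelTensor, float lambda)
    => tf.reduce_sum(tf.abs(kernelTensor)) * lambda;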
During the CNN construction it looks like this:
// add it to the CNN
output = new LineGain(
    new LineGainArgs
    {
        DataFormat = linegainLayer.DataFormat,
        KernelRegularizer = new L1(lambdaLineGain),
        Activation = KerasApi.keras.activations.GetActivationFromName(linegainLayer.activation)
    }
).Apply(input);
My model compiles and learns! (That part works, at least.)
Problem: the regularization does not seem to be applied during training (the weights are not driven to zero, and changing lambdaLineGain does not seem to change anything, even with absurd values).
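One way to check whether the penalty reaches the training objective at all (accessor names below are illustrative; adapt to however the kernel variable is exposed on the layer): track the L1 norm of the kernel across epochs. With a large lambdaLineGain that norm should collapse toward zero.

using System;
using Tensorflow;
using static Tensorflow.Binding;

// Rough diagnostic: print the L1 norm of the kernel between epochs.
// If the regularizer really is part of the training loss, a large
// lambdaLineGain should drive this value toward zero; if it stays flat,
// the penalty is never being added to the objective.
static void ReportKernelL1(Tensor kernelTensor, int epoch)
{
    var l1 = tf.reduce_sum(tf.abs(kernelTensor));
    Console.WriteLine($"epoch {epoch}: kernel L1 norm = {l1.numpy()}");
}

Called before training and after each round of fit(), this makes it easy to see whether the weights are shrinking at all.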
Has anyone already used regularization and could provide a working example?
Additionally, it does not seem to work in the Dense layer either.
While checking, I found that the regularizer is never passed to the kernel (see Tensorflow.Keras.Layers.Dense, build function):
inputSpec = new InputSpec(TF_DataType.DtInvalid, null, min_ndim, null, axes);
kernel = add_weight("kernel", new Shape(num, args.Units), initializer: args.KernelInitializer, dtype: base.DType);
if (args.UseBias)
{
    bias = add_weight("bias", new Shape(args.Units), initializer: args.BiasInitializer, dtype: base.DType);
}
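For comparison, this is what I would expect the build to look like if the regularizers were wired through, mirroring the add_weight call from my own layer above (whether DenseArgs actually exposes KernelRegularizer / BiasRegularizer properties is an assumption on my part, I have not checked):

// Sketch of the expected wiring, not the actual TensorFlow.NET source:
kernel = add_weight("kernel", new Shape(num, args.Units),
                    initializer: args.KernelInitializer,
                    regularizer: args.KernelRegularizer,   // currently not passed
                    dtype: base.DType);
if (args.UseBias)
{
    bias = add_weight("bias", new Shape(args.Units),
                      initializer: args.BiasInitializer,
                      regularizer: args.BiasRegularizer,   // assumed to exist on DenseArgs
                      dtype: base.DType);
}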
Alternatives
No response