Model not converging: sparse categorical accuracy stays the same across all epochs #1813
Comments
Same experience here; training stops early because of the EarlyStopping callback. Also reproducible in Colab.
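For context, the early stop is expected behavior rather than a separate bug. Below is a minimal sketch of the training setup, assuming the callback and fit arguments roughly as in the keras.io example under discussion (exact values may differ):

```python
from tensorflow import keras

# EarlyStopping monitors "val_loss" by default. A model that never
# learns plateaus from the first epoch, so training halts after
# `patience` epochs with no improvement instead of running the
# full schedule.
callbacks = [
    keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
]

# model.fit(x_train, y_train, validation_split=0.2,
#           epochs=150, batch_size=64, callbacks=callbacks)
```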
I think I might have found the solution. The GlobalAveragePooling1D just before the dense layer returns the wrong shape; changing its data_format to "channels_first" makes the model converge. DISCLAIMER: I am new to transformers and have only worked with LSTMs so far. If 500 is the number of steps (the sequence length) and there is only one feature in the dataset, channels_last should be the correct reading, so I don't understand why channels_first works and channels_last does not.
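A quick way to see the shape problem (a minimal sketch, assuming the transformer blocks preserve the example's (batch, 500, 1) shape):

```python
import numpy as np
from tensorflow.keras import layers

# Stand-in for the transformer output: (batch, 500 timesteps, 1 feature).
x = np.zeros((8, 500, 1), dtype="float32")

# channels_last (the semantically correct reading for this data) pools
# over the 500 timesteps, leaving one value per sample -- far too little
# signal for the dense head to classify on.
print(layers.GlobalAveragePooling1D(data_format="channels_last")(x).shape)   # (8, 1)

# channels_first treats the 500 timesteps as channels and pools over the
# lone feature axis, so the head receives all 500 values per sample.
print(layers.GlobalAveragePooling1D(data_format="channels_first")(x).shape)  # (8, 500)
```

So channels_first "works" not because the data is channels-first, but because it keeps all 500 per-step values for the classifier instead of collapsing the sequence to a single scalar.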
Without " name="TooSmallOutput" it is also working. |
Ah, I forgot to remove that.
Thanks for the reply.
For multiclass classification on time-series data, my accuracy is only about 0.009. What factors can improve accuracy?
While comparing with the results provided in https://keras.io/examples/timeseries/timeseries_classification_transformer/, I noticed the dense layer is different.
How can this be fixed?
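For reference, a sketch of the classification head as it appears in the linked example (parameter names such as mlp_units and mlp_dropout follow that example; exact values may differ):

```python
from tensorflow import keras
from tensorflow.keras import layers

def classification_head(x, n_classes, mlp_units=(128,), mlp_dropout=0.4):
    # Pool over the feature axis (channels_first), keeping one value per
    # timestep -- the variant reported above as the one that converges.
    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    # Small MLP between the pooled features and the output layer.
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    return layers.Dense(n_classes, activation="softmax")(x)
```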