About anomaly detection #27
For the anomaly detection task on a single dataset, we follow existing works and do parameter sweeps to get the best setting. Here are the parameters and the training log. It's possible that the results are not identical to the paper, since results on these time series datasets are not very stable.
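The sweep described above can be sketched roughly as follows. This is an illustrative skeleton only: `run_experiment`, the parameter names, and the value grids are hypothetical placeholders, not the actual UniTS training script or its settings.

```python
import itertools

def run_experiment(lr, window, anomaly_ratio):
    # Placeholder: in practice this would launch a training run with the
    # given hyperparameters and return the validation F1 score.
    return 0.0

# Hypothetical sweep grid; the real swept parameters are in the training log.
grid = {
    "lr": [1e-4, 5e-4, 1e-3],
    "window": [50, 100],
    "anomaly_ratio": [0.5, 1.0, 2.0],
}

best = None
for lr, window, anomaly_ratio in itertools.product(*grid.values()):
    f1 = run_experiment(lr, window, anomaly_ratio)
    if best is None or f1 > best[0]:
        best = (f1, lr, window, anomaly_ratio)

print("best setting:", best)
```

Keeping the best setting per dataset (rather than one global setting) is what "parameter sweep on a single dataset" refers to here.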
I have a question about time series anomaly detection. Anomaly detection is a downstream task, since it depends on forecasting and imputation. Why not include the imputation task in the pretraining stage? @gasvn
When I use my own dataset, the anomaly detection score is too low. I'm not sure this model has good generalization performance.
In our paper, we wanted to use the imputation task as a use case of few-shot learning on new tasks, so we didn't include it in pretraining. You can always include any task you want during pretraining. When using your own dataset, did you pretrain the model on the large amount of private data you have? Did you do prompt tuning on the tokens? It's possible that the dataset we use for pretraining has a large domain gap with your data.
Thanks for the reply.
I will modify the anomaly detection task. Instead of comparing the generated sequences with the original sequences, I will directly regress the positions and shapes of the anomaly intervals. This way, the anomaly detection task can be placed in the pretraining stage of the UniTS model.
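A minimal sketch of what that interval-regression objective could look like, assuming intervals are parameterized as normalized `(start, length)` pairs. All names here are illustrative; this is not part of the UniTS codebase, just one way to phrase the idea above as a loss.

```python
import numpy as np

def interval_loss(pred, target):
    """L1 loss between predicted and true (start, length) pairs,
    both normalized to [0, 1] by the window length."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    return np.abs(pred - target).mean()

# Example: one window of length 100 with a true anomaly at steps 40..60.
seq_len = 100
target = [40 / seq_len, 20 / seq_len]  # ground-truth (start, length)
pred = [0.35, 0.25]                    # hypothetical model output
print(interval_loss(pred, target))     # 0.05 -> near-correct interval
```

Because the target is a direct regression label rather than a reconstruction comparison, this objective can be optimized during pretraining alongside the other tasks.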
Hello, thank you very much for providing such an excellent idea and implementation. However, my anomaly detection run has not reached the performance reported in your paper. Could you please offer some suggestions? I'm currently using the SMD dataset. My F1 score is 81.81, which is quite a bit lower than the 88.09 reported in your paper. If I want to achieve results similar to yours, which parts of the code should I adjust? I would greatly appreciate any advice you can give.
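One frequent source of F1 gaps of this size on SMD is the evaluation convention rather than the model: many time series anomaly detection papers report F1 after "point adjustment", where detecting any point inside a true anomaly segment counts the whole segment as detected. It is worth checking whether both runs apply it. A minimal sketch of the convention (not code from the UniTS repository):

```python
import numpy as np

def point_adjust(pred, gt):
    """Point-adjust convention: if any predicted point falls inside a
    ground-truth anomaly segment, mark the entire segment as detected."""
    pred = np.array(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    i, n = 0, len(gt)
    while i < n:
        if gt[i]:
            j = i
            while j < n and gt[j]:          # find the end of this segment
                j += 1
            if pred[i:j].any():             # any hit inside the segment...
                pred[i:j] = True            # ...marks the whole segment
            i = j
        else:
            i += 1
    return pred

gt   = [0, 1, 1, 1, 0, 0, 1, 1, 0]
pred = [0, 0, 1, 0, 0, 0, 0, 0, 0]
adj = point_adjust(pred, gt)
print(adj.astype(int).tolist())  # [0, 1, 1, 1, 0, 0, 0, 0, 0]
```

F1 computed on the adjusted predictions is typically much higher than unadjusted point-wise F1, so comparing one to the other can easily account for several points of difference.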
The experimental environment is Ubuntu 22.04.3 LTS operating system, with an Intel® Xeon® CPU E5-2609 v4 @ 1.70GHz, two NVIDIA GeForce RTX 4090 GPUs, and 173GB of RAM.