
Distribution: 0 /Maniflod #6

Closed
crazyblueer opened this issue Jul 26, 2024 · 9 comments
Labels
question Further information is requested

Comments

crazyblueer commented Jul 26, 2024

When I run the model, the console printed the following lines and then the program finished quickly. What does this mean, and how can I fix the error? Thank you so much for answering!

"INFO:main:Dataset tomatoes : train=404 test=67
INFO:main:Selecting dataset [mvtec_tomatoes] (1/1) 2024-07-26 15:47:22

Distribution: 0 / Maniflod."

cqylunlun added the question label Jul 26, 2024
cqylunlun (Owner) commented Jul 26, 2024

The above step involves the 'Image-level Spectrogram Analysis' described in the paper. It determines whether the distribution of a category is 'Manifold' or 'Hypersphere', which corresponds to the choice between the 'GLASS-m' and 'GLASS-h' variants:

  1. For the datasets mentioned in the paper, we have already completed the 'Image-level Spectrogram Analysis' and stored the results in an Excel file located at ./datasets/excel/*_distribution.xlsx.
  2. For datasets not mentioned in the paper, GLASS first needs to perform the 'Image-level Spectrogram Analysis'. Based on your console output, the first run of run-dataset.sh determined that the "tomatoes" category follows a 'Manifold' distribution, and a new Excel file was generated at ./datasets/excel/mvtec_distribution.xlsx. Running run-dataset.sh again will then start training with 'GLASS-m'.

Additionally, you can directly select the GLASS variant by modifying the argument --distribution in run-dataset.sh:

  1. When distribution=0, it reads the distribution type stored in the local Excel file. If the file is not found, it performs a new 'Image-level Spectrogram Analysis' (as in your case).
  2. When distribution=1, it ignores the local Excel file and performs a new 'Image-level Spectrogram Analysis'.
  3. When distribution=2, it directly selects the 'GLASS-m' based on the 'Manifold' distribution.
  4. When distribution=3, it directly selects the 'GLASS-h' based on the 'Hypersphere' distribution.

For more details, please see lines L205~L227 in glass.py.
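The four modes can be sketched as a small selection function. This is a simplified illustration of the dispatch described above, not the actual code in glass.py; the function name and the callable-based cache are assumptions for the sketch:

```python
def select_variant(distribution, cached_dist, analyze):
    """Sketch of the --distribution dispatch.

    distribution : int flag passed via run-dataset.sh
    cached_dist  : 'Manifold' / 'Hypersphere' read from the local Excel
                   file, or None if the file was not found
    analyze      : callable running a new 'Image-level Spectrogram
                   Analysis', returning 'Manifold' or 'Hypersphere'
    """
    if distribution == 0:
        # Use the cached result if present, otherwise analyze anew.
        dist = cached_dist if cached_dist is not None else analyze()
    elif distribution == 1:
        dist = analyze()           # ignore the cache, always re-analyze
    elif distribution == 2:
        dist = 'Manifold'          # force GLASS-m
    else:
        dist = 'Hypersphere'       # force GLASS-h
    return 'GLASS-m' if dist == 'Manifold' else 'GLASS-h'
```

In the situation from the console output above, this corresponds to `distribution=0` with no cached Excel file, so the analysis runs once and the next invocation picks up the cached 'Manifold' result.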

cqylunlun pinned this issue Jul 26, 2024
crazyblueer (Author) commented

@cqylunlun Thank you very much for the detailed explanation! I have one more question about running with a custom dataset. I organized my dataset to match the structure of the MVTec dataset, then made a copy of "run-mvtec.sh" for my dataset as "run-dataset.sh". My question is: in main.py and datasets/mvtec.py, do I also have to change the MVTec dataset name to the name of my dataset? Sorry to ask such a mundane question; I am quite inexperienced and trying to figure out how to run the model.

cqylunlun (Owner) commented Jul 27, 2024

For running with a custom dataset that matches the structure of MVTec AD, you don't need to modify any code to start training normally. However, you can make the following modifications so that all results (Excel files, models, and visualizations) have their prefix changed from 'mvtec' to 'your_dataset_name':

  1. Add the key-value pair "your_dataset_name": ["datasets.mvtec", "MVTecDataset"] to _DATASETS at L167~L168 in main.py.
  2. Replace 'mvtec' with 'your_dataset_name' in the last line of run-dataset.sh.

If you have any questions, feel free to ask!

crazyblueer (Author) commented Jul 30, 2024

@cqylunlun Thank you so much for answering my question. My training is still ongoing at epoch 530 and has been running for more than 2 days. Is that a normal training time for this model? My second question: I notice that within one epoch, the loss fluctuates between samples. Sometimes it goes from 8.28e-02 to 9.28e-02, then down through 3.e, 2.e, and 1.e; it does not decrease monotonically. How can I evaluate the model's learning given this fluctuation in loss values? I am also afraid my loss is not computed well, since I handcrafted my ground truth images and the masks are mostly a bit bigger than the actual defects. Would that seriously affect the model's learning? Do the ground truth images need to be perfect? Thank you so much once again!

cqylunlun (Owner) commented

  1. When training a category with 280 images, GLASS takes 4 hours to complete 640 epochs on an NVIDIA Tesla A800.
  2. Due to the use of adversarial learning in GLASS, there may occasionally be fluctuations. However, the loss will converge again based on experience.
  3. GLASS is an unsupervised anomaly detection method in which the defect images and labels in the test set are not exposed to the model during training.

crazyblueer (Author) commented Jul 31, 2024

@cqylunlun Thank you!! I just got the test results and they came out very good. Thank you for the great model! I have one more question about improving the accuracy of the model. Some defects on the heatmap are less bright; they look just pale blue right now. How can I adjust the threshold so that the result also captures them? If I adjust the threshold, will I have to retrain the model? My second question: if I have more "good" images in the future, should I keep training the model with the new images to improve its accuracy?

cqylunlun (Owner) commented

  1. Instead of retraining, you can modify the visualization code in glass.py, as detailed in issue #3.
  2. Although we have not verified the online learning capability of GLASS, you can try using the saved trained model and optimizer to continuously learn from new normal samples.

crazyblueer (Author) commented Aug 2, 2024

@cqylunlun Sorry to spam you with more questions!!

My first question: I see that during training the images are resized to 288x288, and the result images after inference are also 288x288. However, my original images are around ~980x487, so the output cuts off parts of the images. Is there any way to adjust the visualization so that the result shows the entire image? Or do I have to set the image size in the .sh file?

Second question: my dataset was originally organized like the MVTec dataset, with folders for train, test, and ground truth. However, I no longer have ground truth images, and I only want to run inference in real time. How can I disable the requirement for ground truth images?

Thank you for answering so many of my questions. I really appreciate your help.

cqylunlun (Owner) commented Aug 5, 2024

No problem at all! I'm happy to help with your questions.

  1. There is no need to modify the .sh file. First, change L113/L129/L150 in mvtec.py to 'transforms.Resize([self.resize, self.resize])'. Second, comment out L122/L130/L154 in mvtec.py. Third, modify L540 in glass.py to the target size '(980 * 3, 487)'. For better results, please retrain after making these changes.
  2. In issue #8 (About Custom dataset), we provided a training strategy that does not require ground truth images. For the testing strategy, please follow steps 1 and 2 there to modify the maskpath in mvtec.py and the tester in glass.py.

Note that these steps cover only the key changes and may introduce some bugs, so further debugging on your part will be necessary.
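The reason for step 1 above is that torchvision's transforms.Resize(n) scales the shorter image side to n while preserving aspect ratio (a follow-up crop then discards the overflow), whereas Resize([n, n]) forces an exact n×n output. The size arithmetic can be sketched in plain Python (an illustration of the two behaviors, not the actual mvtec.py code):

```python
def resize_shorter_side(w, h, target):
    # Like transforms.Resize(target): scale so the shorter side becomes
    # `target`, keeping the aspect ratio; a subsequent center crop then
    # cuts away whatever overflows the target square.
    if w <= h:
        return target, round(h * target / w)
    return round(w * target / h), target

def resize_exact(w, h, target):
    # Like transforms.Resize([target, target]): force an exact square
    # output, so no crop is needed and nothing is cut away.
    return target, target
```

For a ~980x487 image resized toward 288, the first form yields 580x288 and the crop removes the left and right margins, which matches the cut-off output described in the question; the second form keeps the whole image at the cost of some distortion.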

cqylunlun reopened this issue Aug 6, 2024