Run predictions using a Python script. Parameters documentation? #924
Replies: 2 comments 2 replies
-
For the last part, re: GPU utilization, I would check out upping your batch size if you haven't tried that yet, and also see this answer from the legend Talmo himself. Edit: I see your batch size is already pretty big. I've noticed that performing tracking after inference can yield low GPU use, since the tracking step can take some time; running just the predictions first and then tracking afterwards could help get predictions out faster.
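A rough sketch of that two-step workflow using the `sleap-track` CLI (the flag names here are from memory, so double-check them against `sleap-track --help` on your version; the model and video paths are placeholders):

```python
"""Sketch: run inference first with tracking disabled, then track the
saved predictions in a second pass, so the GPU isn't sitting idle
while the CPU-bound tracker runs."""
import shutil
import subprocess


def inference_cmd(video, centroid_model, instance_model, out="predictions.slp"):
    # Step 1: predictions only -- tracking disabled.
    return [
        "sleap-track", video,
        "-m", centroid_model,
        "-m", instance_model,
        "--tracking.tracker", "none",
        "-o", out,
    ]


def tracking_cmd(predictions="predictions.slp", out="tracked.slp"):
    # Step 2: track the existing predictions (no model needed here).
    return [
        "sleap-track", predictions,
        "--tracking.tracker", "simple",
        "-o", out,
    ]


# Only attempt to run if sleap-track is actually on PATH.
if __name__ == "__main__" and shutil.which("sleap-track"):
    subprocess.run(inference_cmd("myVideo.mp4", "centroid_model", "centered_instance_model"), check=True)
    subprocess.run(tracking_cmd(), check=True)
```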
-
Hi @anas-masood, @jmdelahanty's answer has lots of great pointers. To add to it: with regards to tracker customization, check out our notebook on standalone tracking, which shows how to customize some of the tracker parameters that might not be exposed by the CLI. Let me know if you have any questions! Cheers, Talmo
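As a rough sketch of what that customization can look like in Python (the keyword names below are assumptions based on `sleap.nn.tracking.Tracker.make_tracker_by_name` and may differ across SLEAP versions; the import is guarded so the sketch degrades gracefully where SLEAP isn't installed):

```python
"""Sketch of customizing the tracker directly in Python, following the
standalone-tracking notebook rather than the CLI."""

# Tracker settings kept in one place so they are easy to tweak.
# Keyword names are assumptions -- verify against your SLEAP version.
TRACKER_KWARGS = dict(
    tracker="simple",          # "simple", "flow", or "none"
    track_window=5,            # frames to look back when matching
    target_instance_count=1,   # single-animal assumption
)

try:
    from sleap.nn.tracking import Tracker

    tracker = Tracker.make_tracker_by_name(**TRACKER_KWARGS)
    # predictions = predictor.predict(myVideo)  # untracked predictions
    # ...then feed frames to tracker.track() -- see the notebook for the loop.
except Exception:
    # SLEAP not installed here, or kwarg names differ in this version.
    tracker = None
```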
-
Hi. Thanks for implementing SLEAP; it's been working great so far. I'm using the data structures documentation to create a Python script to predict videos sequentially from our data server. We have trained a top-down model, and the first minor tests seem to be working fine.
However, I had trouble finding documentation on how to pass some parameters to the predictor. The sequence of commands I follow is:
predictor = sleap.load_model([centroidPath, centered_InstancePath])
predictions = predictor.predict(myVideo)
I would like to increase the predictor's batch size and see whether I can change any other parameters for faster processing (perhaps GPU allocation, and post-hoc tracking). Specifically, I'd like to set the batch size to 64 and constrain the tracker to a single instance.
Update:
I tried (lazily, I'll admit) checking the code and found this at sleap.nn.inference,
and changed the call to sleap.load_model([centroidPath, centered_InstancePath], batch_size=64, tracker_max_instances=1), but it doesn't seem to make my inference any faster. GPU usage also doesn't go above 10%, and only in rare cases reaches 40%. Any ideas?
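For reference, this is the shape of the call I'm attempting (keyword names are my best guess from the sleap.load_model signature; the actual call is commented out since it needs SLEAP installed and real model paths):

```python
"""Sketch of the load_model call with batching and tracking options."""

# Placeholders standing in for the real checkpoint paths on our server.
MODEL_PATHS = ["centroidPath", "centered_InstancePath"]

# Keyword names are assumptions -- check them against your installed
# version's sleap.load_model signature.
LOAD_KWARGS = dict(
    batch_size=64,             # larger batches can improve GPU utilization
    tracker="simple",          # enable post-hoc tracking
    tracker_max_instances=1,   # single-animal assumption
)

# With SLEAP installed and real paths, the call would be:
# import sleap
# predictor = sleap.load_model(MODEL_PATHS, **LOAD_KWARGS)
# predictions = predictor.predict(myVideo)
```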
Any help would be appreciated.
Thanks,
Anas