Update cloud-gpus.csv - Oblivus Cloud #59
Conversation
Hello, can you please include Oblivus Cloud in the list?

We offer complete customization of virtual machines: customers have the flexibility to select each component according to their requirements. As we do not provide pre-set configurations, I have filled in the minimum and maximum values for each row. The on-demand price is calculated based on the smallest virtual machine that can be deployed with the specified GPU.

For more details, you can refer to the following links:

https://oblivus.com/pricing/
https://oblivus.com/restrictions/

Thank you!
Thanks for the PR! We'd be happy to include Oblivus. Rather than ranges, we choose roughly comparable configurations based on what providers of pre-set machines offer; we typically compare to AWS's offerings. You can use the table to select configurations with the same GPU type and count and see their vCPU/RAM values. Could you please update your PR with prices for specific configurations?
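To make that lookup concrete, here is a minimal sketch using pandas. The column names (`GPU Model`, `Num GPUs`, `vCPUs`, `RAM (GB)`) are assumptions for illustration, not necessarily the actual cloud-gpus.csv headers:

```python
# Sketch: find comparable pre-set configurations in cloud-gpus.csv.
# Column names below are assumed for illustration; adjust them to
# match the actual CSV headers.
import pandas as pd

gpus = pd.read_csv("cloud-gpus.csv")

# All providers' pre-set machines with four V100s, to see the
# typical vCPU/RAM pairings for that GPU type and count.
comparable = gpus[(gpus["GPU Model"] == "V100") & (gpus["Num GPUs"] == 4)]
print(comparable[["vCPUs", "RAM (GB)"]])
```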
Added the pre-set configurations.
Hello, and thank you for the information. I have attempted to create and add some example pre-set configurations. Please let me know if it looks okay now. Thank you!
Thanks! That looks much better. I made one more commit before merging, with some smaller changes.
Hey there! First off, I want to thank you for including Oblivus in the list. I understand that the table has a certain structure, and I don't want to disrupt that. However, I do have a few things I'd like to mention:
I hope this clarifies our perspective. I look forward to your response and appreciate your consideration. Thanks again!
Thanks for the detailed followup, and sorry for the delayed reply.
That's a helpful data point, thanks for sharing! I'd love to know if you have a sense of what kinds of workflows those users are running -- ML training/inference, rendering, mining, or something else. We are only trying to serve an audience that's running ML workflows, and really only neural networks. Our bias is also towards instances that can support neural network training workloads, which look different from inference workloads (which need, e.g., less RAM). I will monitor this issue and revisit it if we see more interest in GPU servers with different configurations as workflows move away from training and towards inference, following the rise of promptable foundation/pre-trained models.
You're right, we currently only state that GCP has configurable instances! That's our bad, and I will fix it in #64.
While it would be a noble goal, the purpose of our table isn't to show the price implications of every configuration option from every provider. There's just too much heterogeneity in offerings for a table to make sense. Instead, we provide a high-level overview of what's available, with a focus on standardization. Since not every provider -- including many major providers -- discretely prices GPUs, we don't pull that out into a separate column. The per-GPU column is only for easing price comparison across setups with varying numbers of GPUs.
Thanks for the context! An 8x configuration is standard -- cf. Cudo and RunPod for the A4000; AWS, Datacrunch, GCP, and Lambda for the V100. Given our goal of standardization, we're sticking with card counts of 1, 2, 4, 8, and 16. See also #65.
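As a side note on how that per-GPU column works: it is just the instance's on-demand price divided by its GPU count, so machines with different card counts can be compared directly. A minimal sketch, with a hypothetical function name and an illustrative price:

```python
# Minimal sketch: the per-GPU column normalizes an instance's on-demand
# price by its GPU count so that, e.g., 1x and 8x machines compare directly.
# The function name and the example price are illustrative, not from the table.
def per_gpu_price(on_demand_price_per_hr: float, num_gpus: int) -> float:
    """Hourly price per GPU for an instance."""
    return on_demand_price_per_hr / num_gpus

print(per_gpu_price(24.48, 8))  # an 8-GPU instance at $24.48/hr -> $3.06/GPU/hr
```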
Hey, I appreciate the thorough information you provided! I now have a clear understanding. The table you made is incredibly helpful, and I want to express my gratitude for creating such a valuable resource for the community. Also, thanks for including us.